
Alexa now understands sign language; "Tap to Alexa" comes to the Echo Show


Voice interfaces were presumed to be the future of computing, but "camera-and-screen based voice assistant is the ultimate use-case of the Amazon Echo prototype", says Indian software developer Abhishek Singh (shek.it). He has created a mod that lets Amazon's Alexa assistant understand some simple sign-language commands.

In a demonstration video, a laptop's webcam records his gestures and back-end machine-learning software decodes them; the decoded instructions are then passed to an Amazon Echo connected to the laptop, which acts on them. He taught the program to recognize the visual signs by feeding it training data.

The mod, a "thought experiment" as he calls it, uses Google's TensorFlow.js library, which lets developers build machine-learning applications in JavaScript.
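The pipeline described above can be sketched in plain JavaScript. This is a conceptual illustration only, not Singh's actual code: a simple nearest-centroid classifier stands in for his TensorFlow.js model, the keypoint vectors and sign labels ("weather", "time") are hypothetical, and real hand keypoints would have far more dimensions.

```javascript
// Mean vector of a set of training examples for one sign label.
function centroid(vectors) {
  const sum = new Array(vectors[0].length).fill(0);
  for (const v of vectors) v.forEach((x, i) => (sum[i] += x));
  return sum.map((x) => x / vectors.length);
}

// Euclidean distance between two keypoint vectors.
function distance(a, b) {
  return Math.sqrt(a.reduce((acc, x, i) => acc + (x - b[i]) ** 2, 0));
}

// Stand-in for the trained model: stores labeled keypoint vectors and
// classifies a new gesture frame by the nearest label centroid.
class GestureClassifier {
  constructor() {
    this.examples = new Map(); // label -> array of keypoint vectors
  }
  train(label, keypoints) {
    if (!this.examples.has(label)) this.examples.set(label, []);
    this.examples.get(label).push(keypoints);
  }
  classify(keypoints) {
    let best = null;
    let bestDist = Infinity;
    for (const [label, vecs] of this.examples) {
      const d = distance(centroid(vecs), keypoints);
      if (d < bestDist) {
        bestDist = d;
        best = label;
      }
    }
    return best;
  }
}

// Hypothetical training data: tiny 2-D "keypoint" vectors per sign.
const clf = new GestureClassifier();
clf.train("weather", [0.1, 0.9]);
clf.train("weather", [0.2, 0.8]);
clf.train("time", [0.9, 0.1]);
clf.train("time", [0.8, 0.2]);

// Decode a sequence of webcam gesture frames into the text command
// that would then be forwarded to the Echo.
const frames = [[0.15, 0.85]];
const command = frames.map((f) => clf.classify(f)).join(" ");
```

The real mod replaces the classifier with a TensorFlow.js model trained on webcam frames, but the decode-then-forward flow is the same.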

Coincidentally, Amazon has released its own update that lets users interact with the virtual assistant without any voice commands. The screen-equipped Amazon Echo Show now includes a feature called "Tap to Alexa", which lets users with hearing and speech impairments tap the device's screen to access the digital assistant. The feature can also be set up with routines or personalized commands.

Abhishek's project, however, could be the next step toward a more convenient way to interact with voice assistants. He plans to open-source the code and says "… people will be able to download it and build on it further or just be inspired to explore this problem space."
