Advancing Tech Innovation: Improving Interaction with Sign Language for Smart Assistants

Advancing technology and improving accessibility for sign language users have been key focuses of the AI for Accessibility program. In 2019, the program hosted a workshop to identify researchers who could contribute to these goals. Abraham Glasser, a Ph.D. student in Computing and Information Sciences and a native ASL signer, was awarded a three-year grant to work on improving how sign language users interact with home-based smart assistants.

RIT’s Golisano College of Computing and Information Sciences, specifically the Center for Accessibility and Inclusion Research (CAIR), has been conducting the research since then. CAIR publishes studies on computing accessibility and includes many Deaf and Hard of Hearing students who use ASL and English.

The research began by investigating how DHH (Deaf and Hard of Hearing) users prefer to interact with personal assistant devices such as smart speakers. These devices traditionally relied on voice-based interaction, but newer models now incorporate cameras and display screens. No device currently on the market understands commands in sign language, so developing this capability is crucial for inclusivity. Abraham simulated scenarios in which the device's camera would watch the user sign, process the request, and display the result on the screen.

Previous research had not included DHH users, so the team focused on collecting more data to address this gap. They studied various aspects of interacting with personal assistant devices, including device activation and output modalities like videos, ASL avatars, and English captions.

To gather insights on DHH users’ preferences, Abraham and the team set up a Wizard-of-Oz videoconferencing setup. An ASL interpreter acted as the “wizard” with a personal assistant device, while participants signed to the device without knowing the interpreter was voicing the commands in English. Annotators transcribed each command into English and ASL gloss for analysis.
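The annotation step described above pairs each signed command with an English transcription and an ASL gloss. A minimal sketch of how such a record might be structured for analysis is shown below; the field names and example values are illustrative assumptions, not the study's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotatedCommand:
    """One signed command from a Wizard-of-Oz session (fields are hypothetical)."""
    participant_id: str
    english_transcription: str      # annotator's English rendering of the command
    asl_gloss: list                 # ASL gloss tokens, e.g. ["WEATHER", "TODAY", "WHAT"]
    wake_up_sign: Optional[str] = None  # activation sign used before the command, if any

# Example record, with made-up content for illustration
cmd = AnnotatedCommand(
    participant_id="P01",
    english_transcription="What's the weather today?",
    asl_gloss=["WEATHER", "TODAY", "WHAT"],
    wake_up_sign="HEY-DEVICE",
)

# Analyses like the one described could then group commands by wake-up sign
# or compare gloss patterns across participants.
print(cmd.wake_up_sign)
```

Keeping the gloss as a token list rather than a single string makes it straightforward to count sign frequencies or compare activation strategies across participants.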

Through this research, Abraham identified new ways users would interact with the device, including “wake-up” commands not captured in previous studies.

In the study sessions, ASL signers interacted with personal assistant devices remotely, using a variety of ASL signs to activate the device before giving each command.

This research is an important step in driving inclusivity in technology for sign language users, and it highlights the potential for future advancements in this field.
