COVID-19 Pandemic Moves Research on Assistive Technologies From the Lab to People’s Homes

Liz Zogby - April 26, 2022

Dr. Hernisa Kacorri’s research on object recognizers with blind participants required a pivot to remote usability testing, leading her team to reflect on the many lessons learned from the shift from the lab to people’s homes.

Dual video conferencing in the proposed remote study design. While interacting with a testbed on a smartphone, a blind participant wearing smart glasses communicates with the experimenter through dual video conferencing. Two video streams are sent to the experimenter on the same Zoom call: one from the participant’s smart glasses (a pair of Vuzix Blade glasses with the camera located in the top right corner) and the other from a laptop camera facing the participant.

When the COVID-19 pandemic disrupted everyday life around the world, Dr. Hernisa Kacorri, assistant professor in the UMD iSchool and core faculty at the Trace R&D Center, was in the midst of work funded by grants from the National Science Foundation (NSF) and the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR). One thread of Dr. Kacorri’s work involves putting control of artificial intelligence (AI) back into the hands of users with disabilities through “teachable” interfaces. In this research, blind users personalize their mobile phone or smart glasses by first providing a few photos of objects or people of interest in their surroundings, so that the AI can later recognize them through the camera. Such technology holds potential for increasing blind users’ independence in tasks like identifying subtle differences between objects, as well as navigating public spaces and social interactions.
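To give a concrete flavor of the “teachable” approach described above, the sketch below illustrates one common way such few-shot personalization can work: a handful of user-taken photos per object are embedded with a pretrained network and averaged into prototypes, and a new camera frame is matched to the nearest prototype. This is a minimal, hypothetical illustration of the general technique only; the model choice (MobileNetV3), the TeachableRecognizer class, and the file names are assumptions for the example, not the team’s actual prototype.

# Hypothetical sketch of a "teachable" object recognizer:
# the user teaches each object with a few photos, then asks the
# recognizer to label a new photo taken through the camera.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

# Pretrained backbone used purely as a feature extractor (illustrative choice).
weights = models.MobileNet_V3_Small_Weights.DEFAULT
backbone = models.mobilenet_v3_small(weights=weights)
backbone.classifier = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()
preprocess = weights.transforms()

def embed(image_path: str) -> torch.Tensor:
    """Return an L2-normalized embedding for one photo."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feat = backbone(preprocess(img).unsqueeze(0)).squeeze(0)
    return F.normalize(feat, dim=0)

class TeachableRecognizer:
    def __init__(self):
        self.prototypes: dict[str, torch.Tensor] = {}

    def teach(self, label: str, photo_paths: list[str]) -> None:
        """Average the embeddings of a few user-taken photos into one prototype."""
        feats = torch.stack([embed(p) for p in photo_paths])
        self.prototypes[label] = F.normalize(feats.mean(dim=0), dim=0)

    def recognize(self, photo_path: str) -> str:
        """Return the taught label whose prototype is closest (cosine similarity)."""
        query = embed(photo_path)
        return max(self.prototypes, key=lambda lbl: float(query @ self.prototypes[lbl]))

# Hypothetical usage: telling apart two visually similar cans.
# recognizer = TeachableRecognizer()
# recognizer.teach("black beans", ["beans_1.jpg", "beans_2.jpg", "beans_3.jpg"])
# recognizer.teach("chickpeas", ["chickpeas_1.jpg", "chickpeas_2.jpg"])
# print(recognizer.recognize("mystery_can.jpg"))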

Central to the research is usability testing of the study prototypes by blind users, a process complicated significantly by the pandemic. Remote usability testing, which often relies on online surveys and video conferencing, was necessary for the safety of study participants and experimenters alike, but it presented novel challenges for this particular group of users and the interactions the study team hoped to capture. Dr. Kacorri’s team, including PhD students Jonggi Hong and Kyungjun Lee, HCIM student Ebrima H. Jarjue, and undergraduate student Ernest E. Mensah, quickly pivoted and discovered that the new protocols they designed to move their work from the lab into people’s homes opened up interesting questions with ramifications beyond simply the “how” of the research process. Their new paper, “From the Lab to People’s Home: Lessons from Accessing Blind Participants’ Interactions via Smart Glasses in Remote Studies,” to be published this week in the proceedings of the 19th International Web for All Conference (Web4All 2022), describes their findings with respect to the study design and the implications of their work for the field.

After delivering the study materials to the participants (including the “object recognizer” prototypes for use with a smartphone, Internet hotspots to ensure network connectivity, and objects to capture with the “recognizers”), the team conducted their remote case study in the homes of 12 participants. The users interacted with the materials, taking photos of the various objects with the smartphone-based testbeds (i.e., two different working applications), while communicating with the researchers via videoconference on the laptop and the smart glasses.

To increase remote access to the interactions, the study design combined two perspectives: a third-person view via a laptop camera and a first-person view via smart glasses (eyeglasses that incorporate a camera, display, and computing). This setup clearly showed the potential of dual perspectives to improve the capture of interactions between users and the study prototype (58.7% of interactions captured with the smart glasses vs. just 3.7% with the laptop camera), but it also pointed to important challenges of this technology. Because the smart glasses are susceptible to head movements, and some blind users tend to orient their ear rather than their face toward a sound source, the extent of the interaction capture varied across participants. This finding points to the need for more exploration of the smart glasses camera’s field of view; a wider field of view would capture more of the interactions, but that benefit must be balanced against the greater image distortion of a wider camera angle. More importantly, the team highlights current limitations in the design of wearable devices that do not support screen reader functionality, which is critical for blind users to operate them.

Other findings included:

  • The dual perspectives supported real-time communication, requiring fewer interruptions and less guidance from the experimenters (e.g., asking the participant to reposition the camera).
  • Challenges revealed by the study design included limited battery life, the network bandwidth constraints of using a single hotspot both to stream video and to upload the photos captured by the study prototype, and the need for local troubleshooting support.
  • A question for future exploration involves the privacy concerns inherent in using smart glasses in participants’ homes, including the concerns of other individuals who might be inadvertently captured by video during the study.

The team suggests that a similar study design could be used for other testbeds (e.g., smartwatch applications), other settings (e.g., users could be moving rather than sitting in front of a laptop), and other populations where capturing head or gaze information or a first-person perspective would add to the context of the data being collected.

The unexpected transition from the lab to people’s homes afforded Dr. Kacorri and her team the opportunity to explore a new remote experimental setup and protocol for their case study employing smart glasses as part of the study design. The lessons learned will inform their future work, both in the lab and in other environments, as they continue to work on placing control of AI technology in the hands of users with disabilities through teachable interfaces.



The original article was published by the Trace Center on April 25, 2022.