Maryland-based startup and INFO researchers are helping to solve the problem of physical therapy compliance with a human-like AI coach at home
We have learned to talk to machines. We ask a cheerful, disembodied voice for the weather, a chatbot for a complex recipe, and a friendly, conversational agent to be our sounding board. In their responses, we hear a reflection of our own needs. The voices we give to our technology, and the personalities they embody, reveal a profound truth: even in our most logical—sometimes sterile—interactions, we crave a human touch.
It is at this intersection of technology and fundamental human need that a project from INFO is taking shape. The mission is both technically complex and deeply personal: to build an AI physical therapy coach that doesn’t just instruct, but connects, motivates, and understands.
“You begin physical therapy with the best of intentions, attend a handful of in-clinic sessions, and leave with a list of at-home exercises. But in reality, once you leave, those routines rarely make it into your daily life. Competing priorities and limited guidance get in the way of your recovery,” says Galina Reitz, INFO principal lecturer and director of the HCIM program, who is leading the research.
This gap in care is where recovery often stalls. For older adults, individuals with disabilities, or anyone recovering from surgery, the consequences are more than just prolonged discomfort; they can mean slower recoveries, increased rehospitalization, and a higher burden on the healthcare system.
This intervention is particularly vital for people with Parkinson’s disease, for whom daily movement is not just rehabilitation but a core part of managing the condition. The American Parkinson Disease Association (APDA) has recognized the potential of the project, partnering directly to bring the AI coach to their community.
“Exercise and movement are vital to keep people with PD as mobile as possible,” says Dr. Rebecca Gilbert, APDA chief mission officer. “In addition to keeping PD symptoms at bay, regular exercise has emotional, mental, and social benefits, so this new app has the potential to make a big difference.”
A Startup’s Intervention

Galina Reitz, Principal Lecturer; Director of the Master’s in Human-Computer Interaction (HCIM) program at the University of Maryland
The tool designed to make that difference comes from Maryland-based startup BeneKinetic, which is tackling this problem with an AI coach that uses a phone’s camera to track a user’s movements during prescribed exercises and offer feedback. The true innovation lies not in the tracking, but in the communication. The aim is to recreate the benefits of having a physical therapist present by emulating their knowledge, judgment, and, perhaps most challengingly, their bedside manner.
This is where Reitz’s work begins. Working with BeneKinetic through a Maryland Industrial Partnerships (MIPS) funded project, her team seeks to answer a central question: What makes an AI feel human, and why does it matter?
“We focused on cultivating humanlike qualities because many older adults experience social isolation and limited social interaction,” Reitz explains. “We are not trying to replace a person; this is simply a support tool, and we are intentional that the coach experience feels warm, natural, and far from robotic.”
The research delves into the nuances of interaction that we often take for granted in human conversation. Is the voice male, female, or neutral? Is it stern and direct or supportive and empathetic? Does it use medical jargon or simple, encouraging words?
To answer these questions, the team is employing a clever and flexible research method known as “Wizard of Oz” testing. In this phase, a human operator (the “wizard”) observes participants performing exercises and, based on real-time data, provides feedback using pre-recorded audio prompts that simulate the AI. This allows researchers to experiment on the fly without building and rebuilding complex algorithms for every new hypothesis.
“This setup gives us the freedom to iterate right there in the lab,” Reitz explains. “If something’s working, we can build on it instantly, and if something isn’t, we can scrap it on the spot rather than sinking time into a full build-out.”
The Importance of Feedback
This iterative, human-centered approach is critical. “Success to me is getting as many research participants as possible to try it out and give us feedback because there’s a huge gap between development and research,” Reitz emphasizes. “Especially with those in the disability community. We don’t reach those populations often. And so, we make assumptions; we have stereotypes.”

Galina Reitz working with graduate students on the BeneKinetic project
The ultimate goal is to refine an algorithm that can provide not just technical corrections but also social-emotional benefits. The project outlines a vision where the AI uses supportive language and a tone of voice that motivates the user. It will have a memory of prior interactions, connecting current performance to past sessions to create a sense of continuous, personalized care.
Of course, giving technology a human-like presence raises ethical questions. The line between a helpful tool and a surrogate relationship can be thin. Reitz is mindful of this, recalling past research with voice assistants. “Some of my participants built relationships with it and would call it their friend,” she says. This can be especially dangerous when the AI is feeding the user misinformation.
However, she sees the BeneKinetic project as fundamentally different from, and lower risk than, general-purpose conversational AI. “The coach isn’t designed to offer medical advice or make decisions for the user. Its role is simply to provide supportive feedback, serving as a guided companion throughout the recovery process and not a replacement for clinical expertise,” she notes. Unlike a general-purpose AI that might pull unvetted advice from the internet, this coach operates within a controlled, clinically informed framework. Its purpose is not to replace human connection, but to bridge a critical gap in care.
The potential impact is significant. For a healthcare system that excels at acute care but often fails patients after discharge, this technology represents a promising path forward.
As the project unfolds over the next year, Reitz’s findings will directly shape the AI’s evolving personality. The team will learn whether a 70-year-old with arthritis wants a cheerleader or a straightforward coach, and whether a student athlete prefers technical feedback or simple encouragement.
In the end, the voice of this AI coach will be more than just a set of audio files. It will be the product of deep listening, a synthesis of human need and technological capability. It is an attempt to ensure that when patients are alone at home, tasked with the difficult work of healing, they hear a voice that doesn’t just tell them what to do, but helps them find the strength to do it.