Computer Weekly: How digital twins are helping people with motor neurone disease speak

For people living with motor neurone disease (MND/ALS) and other speech-limiting conditions, the hardest part of conversation is often the delay: by the time a sentence is composed, the moment has passed. This Computer Weekly article introduces VoxAI, an AI-powered “digital twin” platform from the Scott-Morgan Foundation designed to help people speak more naturally by speeding up responses and restoring facial expression and presence.

The story explains how the system works in practice: the platform listens to the ongoing conversation and, drawing on the AI’s understanding of the person, presents three possible replies for the user to select, often via eye-tracking. It includes real examples, like Leah Stavenhagen, whose AI was trained on her writing and interviews, and it highlights why this matters medically and emotionally: reducing the strain of laborious spelling, avoiding awkward pauses, and keeping people active in the flow of dialogue.
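
Concretely, the selection loop the article describes might look something like the sketch below. This is not the Foundation’s code: `PersonaModel`, `generate_candidates`, and `conversation_step` are hypothetical names, and the canned candidate text stands in for whatever persona-conditioned language model VoxAI actually calls.

```python
from dataclasses import dataclass


@dataclass
class PersonaModel:
    """Stand-in for a model conditioned on a person's own writing and interviews."""
    persona_context: str

    def generate_candidates(self, transcript: list[str], n: int = 3) -> list[str]:
        # A real system would call an LLM conditioned on self.persona_context
        # plus the live transcript; placeholder strings keep this sketch runnable.
        last_turn = transcript[-1] if transcript else ""
        return [f"[candidate reply {i + 1} to: {last_turn!r}]" for i in range(n)]


def conversation_step(model: PersonaModel, transcript: list[str], gaze_index: int) -> str:
    """One turn: propose three replies, let the user pick one by gaze (0-2)."""
    candidates = model.generate_candidates(transcript, n=3)
    reply = candidates[gaze_index]   # the eye-tracker resolves gaze to an index
    transcript.append(reply)         # the chosen reply joins the dialogue
    return reply


if __name__ == "__main__":
    model = PersonaModel(persona_context="excerpts of the user's own writing")
    transcript = ["Friend: How are you feeling today?"]
    print(conversation_step(model, transcript, gaze_index=1))
```

The point of pre-generating a handful of options is latency: the user makes a single gaze selection instead of spelling a sentence letter by letter, which is exactly the strain the article says the system is built to remove.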

Computer Weekly also covers the rollout and the ecosystem behind it: the foundation plans to make the software free to use (with a subscription option for advanced features), and a two-year trial led by Tecnológico de Monterrey will evaluate its impact. The article explicitly credits the technology stack: D-ID for the animated digital avatar layer, ElevenLabs for voice cloning, Irisbond for eye-tracking, and Nvidia GPUs. It closes on a simple idea: when an avatar can bring back someone’s smile, they’re not just “heard,” they’re present again.
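
The article names those vendors but not their APIs, so the sketch below only illustrates how the credited layers might compose into one pipeline. The `EyeTracker`, `VoiceClone`, and `AvatarRenderer` interfaces are invented for illustration and are not the real Irisbond, ElevenLabs, or D-ID SDK signatures.

```python
from typing import Protocol


class EyeTracker(Protocol):      # Irisbond layer: gaze -> candidate index
    def read_selection(self) -> int: ...


class VoiceClone(Protocol):      # ElevenLabs layer: text -> cloned-voice audio
    def synthesize(self, text: str) -> bytes: ...


class AvatarRenderer(Protocol):  # D-ID layer: audio -> animated, expressive face
    def animate(self, audio: bytes) -> None: ...


def speak_selected_reply(candidates: list[str],
                         tracker: EyeTracker,
                         voice: VoiceClone,
                         avatar: AvatarRenderer) -> None:
    """Pick a reply by gaze, voice it in the user's cloned voice,
    and render it with facial expression on the digital twin."""
    reply = candidates[tracker.read_selection()]
    audio = voice.synthesize(reply)   # the article credits Nvidia GPUs with running the stack
    avatar.animate(audio)
```

Separating the layers behind narrow interfaces like these is a common design choice for assistive stacks: any one vendor component can be swapped without disturbing the selection loop that keeps the user in the conversation.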