‘Is This Doomsday Concern, Or Is It Reality?’ Verily CMO On AI’s Future, Lightpath Metabolic, More
Executive Summary
Andrew Trister, Verily’s chief medical and scientific officer, spoke with Medtech Insight at the HLTH Europe conference about Verily’s newly launched Lightpath Metabolic solution, featuring GLP-1 prescription, AI and strengthened clinical support. Trister also talked about plans for the Study Watch and offered views on the Alzheimer’s research landscape and AI development and regulation in a new era of uncertainty.
Verily, Google’s health tech spinout, continues to reinvent itself. On 11 June, the company announced it will refocus its strategic efforts from the chronic disease management app, Onduo, to a new solution called Lightpath, starting with Lightpath Metabolic. Launched in 2018, Onduo targets people with type 1 and type 2 diabetes and hypertension with a personalized, holistic approach, “connecting on diet, lifestyle and stress management with each unique member on their terms to develop healthy habits,” according to Verily’s website.
Lightpath Metabolic builds on Onduo but is a “much greater offering,” Verily’s chief medical and scientific officer Andrew Trister told Medtech Insight.
“It moves more into cardiometabolic disease and has a greater breadth of experiences for people,” Trister said. “We had already used a lot of remote monitoring tools like continuous glucose monitoring (CGM). Now the idea would be that we incorporate additional technologies and close the loop for people to have different cases around these larger diseases.”
Projected to launch in early 2026, Lightpath will offer tiered programs ranging from intensive management of type 1 diabetes, type 2 diabetes, hypertension and comorbid obesity and/or hyperlipidemia, to less intensive programs aimed at members intent on preventing diabetes or losing weight, including through support for the use of anti-obesity medications, such as GLP-1 agonists.
Lightpath “will be fueled by continuous data integration and AI,” Verily says, adding, “But Lightpath won’t be a technology alone. It will be paired with health coaches and an affiliated advanced licensed clinical team – endocrinologists, pharmacists, primary care physicians, nurses and registered dietitians. This will allow Lightpath to serve a variety of acuity levels within a single solution based on member need.”
The system’s AI capabilities will serve members – monitoring behavior and helping to create personalized pathways – as well as clinical staff as an “at-the-elbow AI assistant.”
Onduo will be phased out by the end of 2025.
Verily has undergone significant change in recent years. The company started in 2015 as a “moonshot at Google X to tackle health’s biggest challenges.”
Its experimentation-centric approach led to multiple partnerships, including with CGM maker DexCom, Inc., and the development of the Verily Study Watch, a clinical-grade biosensing device with customizable features to collect data in clinical trials, such as physical activity in real-world settings. (Also see "In Five Years, People Will Navigate Their Health Care With An AI Advisor – Verily’s Andrew Trister" - Medtech Insight, 11 Mar, 2024.)
Last year, Verily underwent a reorganization to narrow its focus, which included some layoffs and the departure of former chief medical officer Amy Abernethy.
Trister was named chief medical officer in December, adding to his role as chief scientific officer, which he assumed in August 2023. Before that, he was deputy director of digital health and artificial intelligence at the Gates Foundation, and was a founding member of Apple’s health team. (Also see "Verily’s Andrew Trister On Uniting The Pieces To Create Personalized Health, Equity" - Medtech Insight, 11 Mar, 2024.)
Medtech Insight sat down with Trister during the HLTH Europe conference in Amsterdam, held 17-20 June, to talk about new directions at Verily, plans for the Study Watch, research in neurological disorders such as Alzheimer’s, challenges to AI development and regulation, and what the AI-enabled health care future could look like.
The interview that follows has been slightly edited for content and length.
With respect to what some OpenAI researchers said – and others before them – I would raise the question: ‘Is this doomsday concern, or is it reality?’ It’s very hard, because things are moving exponentially at this stage, and humans are notoriously bad at predicting when things move exponentially. We have questions about what the researchers have seen, which may not be public, so I would lend some credibility to [some of the concerns they raised]. Still, it’s hard for me to understand how these technologies could cause so much harm. They do amazing things – and some are superhuman – but it doesn’t rise to the point of artificial human intelligence.
If we synthesize novel data, we may synthesize something slightly off from what we normally experience. As humans we would recognize this immediately, because we are trained to do pattern recognition – that’s how we learn. If the training sets aren’t representative of real life, the AI may not actually learn. The way things work now is with the ‘human in the loop,’ or reinforcement learning from human feedback (RLHF), where the algorithm makes a prediction and the human is the observer, saying, ‘That makes sense,’ or, ‘That doesn’t make sense at all.’ If you want the AI to do that – which is what researchers are working on – you may spin it into a place that doesn’t make sense.
The researchers at Google DeepMind, OpenAI and Anthropic are incredibly good. My sense is they would build systems that match the human ability, but at some point there’s something where humans will still be better. And my sense is three years is not long enough to see what that exponential will look like. [Trister commented on a prediction made by Daniel Kokotajlo, a former researcher in OpenAI’s governance division, who reportedly raised concerns that there is a 50% chance powerful AGI models may arrive by 2027. Kokotajlo was part of the group that called for more transparency at OpenAI.]
The analogy right now is that the existing state-of-the-art models act more like “interns.” They aren’t qualified yet; they haven’t gone through their training. When I was training interns in the hospital, standing at the door, looking at a patient and knowing that patient is really sick – that becomes a human intuition when you spend a lot of time in the hospital. These technologies don’t have that kind of experience yet. An AGI may have that kind of experience, if trained properly. It’s not hard to imagine that we can get there, but it’s hard to imagine what this will mean.