A woman with late-stage breast cancer came to a city hospital, fluid already filling her lungs. She saw two doctors and got a radiology scan. The hospital’s computers read her vital signs and estimated a 9.3 percent chance she would die during her stay.

Then came Google’s turn. A new type of algorithm created by the company read up on the woman -- 175,639 data points -- and rendered its assessment of her death risk: 19.9 percent. She passed away in a matter of days.

The harrowing account of the unidentified woman’s death was published by Google in May in research highlighting the health-care potential of neural networks, a form of artificial intelligence software that’s particularly good at using data to automatically learn and improve. Google had created a tool that could forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of re-admission and chances they will soon die.
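At a high level, the design the paper describes is a single model that reads a patient’s record and produces several outcome estimates at once. The sketch below is a minimal illustration of that multi-task idea, not Google’s implementation; the class name, vocabulary size, layer sizes and recurrent encoder are all assumptions made for the example.

```python
# A minimal sketch (not Google's system) of one model producing several
# patient-outcome estimates from a single encoding of the record.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class PatientOutcomeModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128):
        super().__init__()
        # Each event in the record (a diagnosis code, a bucketed lab result,
        # a token from a clinical note) maps to a learned embedding.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # A recurrent layer summarizes the events in time order.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Separate heads share the same summary of the record.
        self.mortality_head = nn.Linear(hidden_dim, 1)       # in-hospital death risk
        self.readmission_head = nn.Linear(hidden_dim, 1)     # readmission risk
        self.length_of_stay_head = nn.Linear(hidden_dim, 1)  # expected days in hospital

    def forward(self, event_ids):
        embedded = self.embed(event_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.encoder(embedded)   # final hidden state summarizes the stay
        summary = hidden[-1]                      # (batch, hidden_dim)
        return {
            "mortality_risk": torch.sigmoid(self.mortality_head(summary)),
            "readmission_risk": torch.sigmoid(self.readmission_head(summary)),
            "length_of_stay_days": torch.relu(self.length_of_stay_head(summary)),
        }

# Toy usage: two fake patients, each a sequence of 50 random event IDs.
model = PatientOutcomeModel()
fake_records = torch.randint(1, 10000, (2, 50))
predictions = model(fake_records)
print({name: value.squeeze(-1).tolist() for name, value in predictions.items()})
```

In practice the input would be a patient’s actual coded events and note text rather than random IDs; the point is simply that one shared encoding of the record can feed several prediction heads.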

What impressed medical experts most was Google’s ability to sift through data previously out of reach: notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this unruly information, then spat out predictions. And it did so far faster and more accurately than existing techniques. Google’s system even showed which records led it to its conclusions.
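A common way for a neural network to show which records drove a prediction is to assign each input event an attention weight and report the highest-weighted events alongside the output. The sketch below illustrates that general technique; it is not necessarily the attribution mechanism Google used, and every name and dimension in it is invented for the example.

```python
# A sketch of attention-based attribution: score every input event, pool the
# record by those scores, and report the top-weighted events with the prediction.
# This mirrors the "show which records mattered" idea, not Google's exact method.
import torch
import torch.nn as nn

class AttentiveRiskModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.attention_scorer = nn.Linear(embed_dim, 1)  # one relevance score per event
        self.risk_head = nn.Linear(embed_dim, 1)

    def forward(self, event_ids):
        embedded = self.embed(event_ids)                      # (batch, seq_len, embed_dim)
        scores = self.attention_scorer(embedded).squeeze(-1)  # (batch, seq_len)
        weights = torch.softmax(scores, dim=1)                # sums to 1 over the record
        pooled = torch.bmm(weights.unsqueeze(1), embedded).squeeze(1)  # weighted summary
        risk = torch.sigmoid(self.risk_head(pooled)).squeeze(-1)
        return risk, weights  # the weights say which events drove the estimate

# Toy usage: print the three most influential event positions for one patient.
model = AttentiveRiskModel()
record = torch.randint(1, 10000, (1, 40))
risk, weights = model(record)
top_weights, top_positions = weights[0].topk(3)
print(f"estimated risk: {risk.item():.3f}")
print("most influential event positions:", top_positions.tolist())
```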

Hospitals, doctors and other health-care providers have been trying for years to better use stockpiles of electronic health records and other patient data. More information shared and highlighted at the right time could save lives -- and at the very least help medical workers spend less time on paperwork and more time on patient care. But current methods of mining health data are costly, cumbersome and time-consuming.

As much as 80 percent of the time spent on today’s predictive models goes to the “scut work” of making the data presentable, said Nigam Shah, an associate professor at Stanford University, who co-authored Google’s research paper, published in the Nature partner journal npj Digital Medicine. Google’s approach avoids this. “You can throw in the kitchen sink and not have to worry about it,” Shah said.
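The “kitchen sink” remark refers to feeding the model raw, heterogeneous record entries (diagnosis codes, lab values, free-text notes) as one chronological event sequence instead of hand-building features from them. The toy sketch below illustrates that idea; the record format and token scheme are made up for the example.

```python
# A toy illustration of "throwing in the kitchen sink": flatten mixed record
# entries (codes, labs, note text) into one time-ordered token stream rather
# than hand-engineering features. The record format here is invented.
from datetime import datetime

raw_record = [
    {"time": "2018-03-01T08:15", "type": "diagnosis", "value": "C50.9"},
    {"time": "2018-03-01T09:40", "type": "lab", "name": "creatinine", "value": 2.1},
    {"time": "2018-03-01T11:05", "type": "note", "text": "pleural effusion noted on exam"},
]

def flatten_record(entries):
    """Turn heterogeneous entries into one chronological token sequence."""
    tokens = []
    for entry in sorted(entries, key=lambda e: datetime.fromisoformat(e["time"])):
        if entry["type"] == "diagnosis":
            tokens.append(f"dx:{entry['value']}")
        elif entry["type"] == "lab":
            # Coarsely bucket the numeric value instead of engineering a feature.
            bucket = "high" if entry["value"] > 1.3 else "normal"
            tokens.append(f"lab:{entry['name']}:{bucket}")
        elif entry["type"] == "note":
            tokens.extend(f"note:{word.lower()}" for word in entry["text"].split())
    return tokens

print(flatten_record(raw_record))
```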

Google’s next step is moving this predictive system into clinics, AI chief Jeff Dean told Bloomberg News in May. Dean’s health research unit -- sometimes referred to as Medical Brain -- is working on a slew of AI tools that can predict symptoms and disease with a level of accuracy that is being met with hope as well as alarm.

Inside the company, there’s a lot of excitement about the initiative. “They’ve finally found a new application for AI that has commercial promise,” one Googler said. Since Alphabet Inc.’s Google declared itself an “AI-first” company in 2016, much of its work in this area has gone toward improving existing internet services. The advances coming from the Medical Brain team give Google the chance to break into a brand-new market -- something co-founders Larry Page and Sergey Brin have sought over and over again.

Software in health care is largely coded by hand these days. In contrast, Google’s approach, where machines learn to parse data on their own, “can just leapfrog everything else,” said Vik Bajaj, a former executive at Verily, an Alphabet health-care arm, and managing director of investment firm Foresite Capital. “They understand what problems are worth solving,” he said. “They’ve now done enough small experiments to know exactly what the fruitful directions are.”

Dean envisions the AI system steering doctors toward certain medications and diagnoses. Another Google researcher, who asked not to be identified discussing work in progress, said existing models miss obvious medical events, including whether a patient had prior surgery, and described existing hand-coded models as “an obvious, gigantic roadblock” in health care.
