Artificial intelligence (AI) holds a particularly exciting value proposition for healthcare.
Modern, AI-assisted processes across biomedical research, cancer screening, product development, treatment recommendation modeling and back-office administrative optimization all promise to transform how physicians make decisions and diagnoses across both critical and chronic care delivery pathways.
Their applications may be nothing short of groundbreaking.
But AI models run on data, and data is different in healthcare. That makes next-generation AI tools, particularly those that inform patient care decisions, a challenge to integrate at scale while ensuring clinical safety.
“Training data is the major bottleneck holding back AI across the [healthcare] industry,” Erik Duhaime, co-founder and CEO of data annotation provider Centaur Labs, told PYMNTS.
To make data-hungry healthcare models work effectively, the underlying data needs to be annotated to a high standard, he said.
“The algorithm is only as good as the data that it’s trained on,” he explained.
That’s because, as Duhaime emphasized, “within healthcare, skill matters.”
“In healthcare, the importance of high-quality data annotation is much higher than in a consumer-facing application of AI,” he said. “If you want to write a song in the style of Bob Dylan, or virtually try on a T-shirt, it’s one thing if the specs are wrong. But it’s another thing entirely if you’re told you have cancer when you don’t, or an AI model tells you that you don’t have cancer and you do.”
Duhaime stressed that the average American will, at some point in their life, experience a medical misdiagnosis, making the stakes even higher for healthcare AI models to be trained on the highest possible quality of data.
“Quality matters when lives are on the line,” he said.
As for how to ensure the highest possible standard of data quality?
Duhaime said it comes down to the human touch: specifically, high-quality, expert-level data annotation.
“AI is here to augment people … developing an AI algorithm itself is not that hard,” he said. “A company with a great AI solution needs to have great data and a great data annotation process, and that boils down to having great people.”
Scalable success within AI requires getting “great work out of the best people,” he added.
“Humans have the most important role to play in developing AI,” Duhaime said. “…Where you’ll see the future heading is humans being brought more into the loop.”
He emphasized that AI’s future, within healthcare in particular, will be driven by a hybrid, “human-plus-computer” approach.
Underscoring that prediction, he said, is the fact that the need for quality control isn't going anywhere and can't be outsourced to AI.
“There needs to be a step in between the algorithm and the thousand-dollar-an-hour cardiologist who might not be available on demand,” he explained. “The future will be a little less about annotating training data to build a model then deploying it, and more of a continual dance where there is active learning and reinforcement learning where multiple, highly-trained experts are part of the workflow to continually improve a model.”
The healthcare domain requires more reinforcement learning from triangulated expert human feedback than AI applications where lives aren't at stake.
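To make that "continual dance" concrete, here is a minimal sketch of one expert-in-the-loop active learning round in Python. The uncertainty scoring, the `expert_labels` routing step and the model interface are illustrative assumptions, not a description of Centaur Labs' actual pipeline:

```python
import numpy as np

def expert_labels(cases):
    """Placeholder for the human step: route each case to several trained
    annotators and return aggregated (e.g., majority-vote) labels."""
    raise NotImplementedError("collect and aggregate expert annotations")

def active_learning_round(model, X_labeled, y_labeled, X_pool, budget=100):
    """One round of uncertainty sampling with expert review.
    `model` is any classifier exposing fit() and predict_proba()."""
    # Score the unlabeled pool by model uncertainty (probability nearest 0.5).
    probs = model.predict_proba(X_pool)[:, 1]
    query_idx = np.argsort(np.abs(probs - 0.5))[:budget]

    # Send only the most uncertain cases to the expert annotators.
    y_new = expert_labels(X_pool[query_idx])

    # Fold the expert-verified labels back in and retrain.
    X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
    y_labeled = np.concatenate([y_labeled, y_new])
    model.fit(X_labeled, y_labeled)

    # Shrink the unlabeled pool and return everything for the next round.
    X_pool = np.delete(X_pool, query_idx, axis=0)
    return model, X_labeled, y_labeled, X_pool
```

The point of a loop like this is that expensive expert time is spent only on the cases where the model is least certain, which is one way the human and the algorithm make each other better.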
The ongoing challenge is that healthcare data is both historically fragmented and inherently biased according to its source (think patient data from a community hospital versus one in a wealthy ZIP code).
“Eventually we will have super high accuracy, autonomous AI solutions doing things like diagnostics when they are low risk, but there will be a long period for a lot of these things where there will be an interplay between humans and computers, so the algorithm is making the person better, and vice-versa,” Duhaime said.
“You can’t have that algorithm failing,” he added. “Going back to writing that song in the style of Bob Dylan, who cares if it is 1% better? But making your healthcare application 1% better, that provides a tangible increase that can save lives, so maybe it is worth trying to 10x your data set to move from 98% to 99%.”
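The arithmetic behind that last point is worth spelling out: moving from 98% to 99% accuracy halves the error rate, from 2% to 1%. A back-of-the-envelope sketch, assuming the commonly observed power-law relationship between error and dataset size (the exponent and constants below are purely illustrative, not Duhaime's figures), shows why that final percentage point can cost an order of magnitude more data:

```python
# Back-of-the-envelope only: assume error falls as a power law in dataset size,
# error(n) = c * n**(-alpha). Constants are illustrative, chosen so that
# 10,000 examples give 2% error and 10x the data halves it.
alpha = 0.301                  # hypothetical scaling exponent (10**-0.301 ~ 0.5)
c = 0.02 * 10_000 ** alpha     # calibrate so error(10_000) = 2%

def error_rate(n):
    return c * n ** (-alpha)

for n in (10_000, 100_000):
    print(f"{n:>7,} examples -> {error_rate(n):.2%} error, {1 - error_rate(n):.1%} accuracy")
# ~2.00% error (98% accuracy) at 10,000; ~1.00% error (99% accuracy) at 100,000
```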
Duhaime emphasized that AI for healthcare has never been about replacing doctors, but doctors who use AI might end up replacing those who don't.
“AI doesn’t replace work; it changes how work is organized,” he said.
As for what the Centaur Labs CEO is looking forward to most?
Duhaime said it is continuing to work with the growing list of organizations pushing the bounds of innovation within healthcare, helping build a future based on expert data annotation that ensures scalable clinical safety and moves high-quality AI into production faster, growing the technology's impact from "a 95% to a 99%, or a 99% to a 100%."