If Google’s AI suite could have its way, much of healthcare could be freed from human intervention, at least for tasks such as creating and managing health records, reading medical imaging data for diagnosis, and delivering reports to patients faster and more accurately.
Towards this end, the company recently announced an expansion of its open-source medical AI collection, launching two AI models on which developers can build future healthcare applications. The two models, MedGemma 27B Multimodal and MedSigLIP, stand out because hospitals can download and modify them, and the lighter of the two is small enough to run on a smartphone.
If one has a sharper than normal eye for data privacy matters, Google’s recent announcement couldn’t have been better timed. The company was recently in the eye of a storm for harvesting data from customers’ cellphones, for which a US court ordered it to pay $314 million. It also barely escaped another class action suit filed in a California court in June. Not to mention the embarrassing report that Google and Microsoft put healthcare data at risk through email-related data breaches.
Healthcare tech experts we spoke to in Bengaluru believe that Google’s latest move to open up such powerful tools directly to developers, instead of locking them behind expensive APIs, is a step in the right direction. They hope the models were trained in compliance with regulations and are adaptable to a wide range of development needs.
So, what do they do?
The MedGemma 27B model reads medical text as its predecessors did, but it can also analyse medical images and understand what it is being shown. From chest X-rays to pathology slides to patient records going back months or even years, it can process all the data and arrive at a diagnosis, possibly much faster than any doctor could.
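For developers, getting started looks much like working with any other open model. Here is a minimal sketch of querying the model about a chest X-ray through the Hugging Face transformers library; the checkpoint name, image path and prompt are illustrative assumptions, so check Google’s official model card for the exact identifiers:

```python
# Minimal sketch: asking MedGemma about a chest X-ray via Hugging Face
# transformers. The checkpoint id and image path are assumptions for
# illustration; consult the official model card before relying on them.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "google/medgemma-27b-it"  # assumed id for the 27B multimodal variant
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

xray = Image.open("chest_xray.png")  # hypothetical local image
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": xray},
        {"type": "text", "text": "Describe the key findings in this chest X-ray."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```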
The company revealed that the model scored an impressive 87.7% on MedQA, a standard medical knowledge benchmark, barely inches behind far larger and much more expensive models in use today. It also costs roughly a tenth of what the big models do to run; small wonder that healthcare startups are smiling.
Per the Google blogpost, MedSigLIP is a lightweight image encoder with 400 million parameters that can capture diverse medical imaging data, ranging from X-rays to histopathology patches and dermatology images. This allows “the model to learn nuanced features specific to these modalities” and bridges the gap between medical images and medical text by encoding both into a common embedding space.
Simply put, the model can not only capture data from images, it can also make sense of them; paired with a generator such as MedGemma, the suite can even write out a report for the doctor to see. Hospitals and nursing homes seeking cheaper AI solutions could find this useful, as it can spot patterns and features crucial in medical contexts while still handling everyday, non-medical images.
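To make the idea of a common embedding space concrete, here is a minimal zero-shot labelling sketch following the standard SigLIP usage pattern in transformers. The checkpoint name, image path and candidate labels are assumptions for illustration:

```python
# Minimal sketch: scoring an X-ray against text labels with MedSigLIP in the
# shared image-text embedding space. Checkpoint id, image path and labels are
# assumptions; see the official model card for specifics.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

model_id = "google/medsiglip-448"  # assumed id for the 400M-parameter encoder
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

image = Image.open("chest_xray.png")        # hypothetical local image
labels = [
    "a chest X-ray showing pneumonia",      # candidate findings to score
    "a normal chest X-ray",
]

inputs = processor(text=labels, images=image,
                   padding="max_length", return_tensors="pt")

with torch.inference_mode():
    outputs = model(**inputs)

# SigLIP-style models score image-text pairs with a sigmoid, not a softmax.
probs = torch.sigmoid(outputs.logits_per_image)
for label, p in zip(labels, probs[0]):
    print(f"{p:.3f}  {label}")
```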
Google cited a few examples of how the two new AI models are making a difference. A US-based company is currently testing MedSigLIP for chest X-ray analysis, while a group of researchers in Taiwan is using the models alongside traditional Chinese medical texts to answer staff queries with a higher degree of accuracy.
Gurugram-based Tap.Health, a startup seeking to provide affordable healthcare and working on diabetes control, reported that unlike general-purpose AI, which can hallucinate medical facts, MedGemma appears to understand clinical context; that could be the key difference between a chatbot that merely sounds medical and one that actually is.
And, what’s in it for Google?
We all know that there are no free lunches in the corporate world. So, what’s behind Google’s recent generosity of spirit? By open-sourcing the models, the company has addressed long-standing concerns around healthcare deployments: hospitals worry about data privacy, researchers want models that don’t change behaviour suddenly, and developers want the freedom to fine-tune.
Now hospitals can run MedGemma on their own servers, modify it to suit specific needs and trust that its behaviour will remain largely constant. Of course, Google is quite clear that these models will not replace doctors and that they require human supervision and clinical oversight: their outputs require validation, and their recommendations require verification.
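In practice, modifying the model to suit specific needs usually means parameter-efficient fine-tuning on local data. A minimal sketch using the peft library, assuming the same checkpoint name as above and purely illustrative hyperparameters, might look like this:

```python
# Minimal sketch: preparing MedGemma for LoRA fine-tuning on a hospital's own
# data with the peft library. The checkpoint id and hyperparameters are
# illustrative assumptions, not a recommended recipe.
import torch
from transformers import AutoModelForImageTextToText
from peft import LoraConfig, get_peft_model

model = AutoModelForImageTextToText.from_pretrained(
    "google/medgemma-27b-it", torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trained
# ...followed by a standard transformers Trainer loop over local, de-identified data.
```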
And while all of this will happen at hospitals and research centres, we wonder how Google would nudge them to give up their learnings so that these AI models can learn as they work and become smarter as more human intellect shapes and fine-tunes them. Google has just given away technology to hospitals, researchers and medical schools so that the entire universe can collaborate to make its medical suite smarter and sharper. As long as Google doesn’t charge for it, neither the medical community nor the patients have cause to complain.