GigaTIME: Scaling tumor microenvironment modeling using virtual population generated by multimodal AI - Microsoft Research Blog

ME/CFS Science Blog

'The convergence of digital transformation and the GenAI revolution creates an unprecedented opportunity for accelerating progress in precision health. Precision immunotherapy is a poster child for this transformation. Emerging technologies such as multiplex immunofluorescence (mIF) can assess internal states of individual cells along with their spatial locations, which is critical for deciphering how tumors interact with the immune system. The resulting insights, often referred to as the “grammar” of the tumor microenvironment, can help predict whether a tumor will respond to immunotherapy. If it is unlikely to respond, these insights can also inform strategies to reprogram the tumor from “cold” to “hot,” increasing its susceptibility to treatment.

This is exciting, but progress is hindered by the high cost and limited scalability of current technology. For example, obtaining mIF data of a couple dozen protein channels for a tissue sample can cost thousands of dollars, and even the most advanced labs can barely scale it to a tiny fraction of their available tissue samples.

In our paper published in Cell on December 9, “Multimodal AI generates virtual population for tumor microenvironment modeling(opens in new tab),” we present GigaTIME(opens in new tab), a multimodal AI model for translating routinely available hematoxylin and eosin (H&E) pathology slides to virtual mIF images.'
.....

Full text here:
GigaTIME: Scaling tumor microenvironment modeling using virtual population generated by multimodal AI - Microsoft Research
 
Looks like an application of AI in medicine that could actually be useful.

The paper was published in Cell:

Summary

The tumor immune microenvironment (TIME) critically impacts cancer progression and immunotherapy response. Multiplex immunofluorescence (mIF) is a powerful imaging modality for deciphering TIME, but its applicability is limited by high cost and low throughput. We propose GigaTIME, a multimodal AI framework for population-scale TIME modeling by bridging cell morphology and states. GigaTIME learns a cross-modal translator to generate virtual mIF images from hematoxylin and eosin (H&E) slides by training on 40 million cells with paired H&E and mIF data across 21 proteins. We applied GigaTIME to 14,256 patients from 51 hospitals and over 1,000 clinics across seven US states in Providence Health, generating 299,376 virtual mIF slides spanning 24 cancer types and 306 subtypes. This virtual population uncovered 1,234 statistically significant associations linking proteins, biomarkers, staging, and survival. Such analyses were previously infeasible due to the scarcity of mIF data. Independent validation on 10,200 TCGA patients further corroborated our findings.
Multimodal AI generates virtual population for tumor microenvironment modeling: Cell
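To make the "cross-modal translator" idea concrete: the model learns a mapping from H&E appearance to per-protein mIF intensities using paired training data. Below is a deliberately tiny, self-contained sketch of that idea using a per-cell linear least-squares fit in NumPy. Everything here (feature counts, the linear form, the synthetic data) is illustrative only; the actual GigaTIME system is a deep multimodal network trained on 40 million real paired cells, not a linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "cell" has 3 H&E-derived color/morphology features and
# 21 protein-channel intensities (mirroring the paper's 21 mIF proteins).
# Shapes and the linear relationship are assumptions for illustration.
n_cells, n_he_feats, n_proteins = 5000, 3, 21
X = rng.uniform(0, 1, size=(n_cells, n_he_feats))      # paired H&E features
W_true = rng.normal(size=(n_he_feats, n_proteins))     # hidden generating map
Y = X @ W_true + 0.05 * rng.normal(size=(n_cells, n_proteins))  # paired mIF

# Fit the toy "translator" by ordinary least squares on the paired data.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Apply it to unseen H&E features to produce "virtual mIF" intensities.
X_new = rng.uniform(0, 1, size=(10, n_he_feats))
virtual_mif = X_new @ W_hat
print(virtual_mif.shape)  # one 21-protein intensity vector per new cell
```

The point of the sketch is only the workflow shape: train on paired (H&E, mIF) examples, then run inference on H&E alone, so the expensive assay is replaced by a prediction at population scale.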
 