
Bringing AI Toward Personalized Cancer Care

  • Machine learning technology brings new power to study each individual's risks and best treatment options.

    From Paths of Progress 2020
    By Eric Bender

    Ask your cellphone a question today, and it probably understands you well enough to provide a fairly reasonable answer. That's new. Ten years ago, such ubiquitous and even expected speech recognition didn't exist anywhere on the planet. Now it's on billions of mobile phones.

    Much of your phone's smarts are based on a rapidly advancing form of artificial intelligence called machine learning, in which the computer basically trains itself to find patterns within massive amounts of data. Machine learning technologies are being applied in many fields, and we're just starting to see the impact they may have in medical research.

    Researchers at Dana-Farber are bringing machine learning technology to many tough puzzles in cancer. One pioneering project is gathering information on treatment outcomes in lung cancer, and a second aims to better identify people who are most at risk of developing pancreatic cancer.

    Understanding Outcomes in Lung Cancer

    Dana-Farber conducts a genomic analysis on tumors from patients who give their consent. That genomic information can be essential to best understand an individual cancer and, in many instances, can help guide how to treat it. This genomic treasure trove can be combined with a wealth of other patient information, such as data from imaging, to offer an enormously valuable resource for cancer scientists.

    But there's a chokepoint for research that taps into this resource: It's surprisingly difficult to capture how well patients are responding to treatment by analyzing routine clinical records.

    "As straightforward as one might expect that process to be, it can be a substantial challenge," says Dana-Farber oncologist Kenneth Kehl, MD, MPH. He explains that information about the patient outcomes shown in a CT scan, for example, may be routinely recorded only in the unstructured text of a radiologist's report. Tracking these outcomes requires trained staff to read and annotate such records, which can be slow and expensive.

    Kehl and his colleagues turned to machine-learning tools to automate this painstaking process, in a project funded by the National Cancer Institute, the Claude Dauphin Philanthropic Research Fund, and the Simeon J. Fortin Foundation. The researchers reported successful results for the effort in a JAMA Oncology paper in 2019. Their "natural language processing" software performed very strongly at interpreting unstructured textual records, matching up well against human experts and delivering results far faster.

    The project focused on lung cancer, which is both very common and very difficult to treat, Kehl points out. Treatment also has become more complex in recent years with a flood of approved therapies to either target specific genetic mutations or to unleash the immune system against the cancer.

    The Dana-Farber team began by gathering more than 14,000 unstructured text reports on radiology images for 1,112 patients with lung cancer. Their next task entailed a large amount of human drudgery: building a database of outcomes to help train the machine learning software, by manually labeling these unstructured text reports.

    "Labeling is a fundamental prerequisite for machine learning — we have to teach machines from a high-quality curriculum if we want to get output we can trust to inform clinical decisions," says medical oncologist Deborah Schrag, MD, MPH, chief of Dana-Farber's Division of Population Sciences and senior author on the paper. "And labeling data is unbelievably tedious."




    Kenneth Kehl, MD, MPH, (left) and Deborah Schrag, MD, MPH, (right) chief of Dana-Farber's Division of Population Sciences


  • Although machine learning projects often perform this initial human labeling of data in ways unique to each project, the Dana-Farber team was careful to employ a standard framework for labeling clinical data developed by Schrag and her associates.

    The framework aims to ensure the creation of gold-standard data on which all experts agree. In this case, it was applied to annotating the outcomes described in the CT scan reports.

    The researchers then applied a machine-learning-based natural language processing approach that is designed to classify documents. "Just like one can train AI algorithms to identify cats and dogs in pictures, one can train algorithms to look at text and identify, for example, is the writer expressing positive or negative emotions about the subject?" Kehl says.

    "We used one of these document classification techniques to do the same thing. In our case, the model can read the text of, say, a CT scan of the lungs, and report out to us: Is cancer being described on this scan? If it is, is the cancer getting better or worse? And what areas of the body are involved?"

    The researchers demonstrated that their software can generate outcome descriptions similar to those produced by human reviewers, and can do so dramatically faster. Human curators took about 20 minutes to annotate a single imaging report; in about half that time, Kehl and his colleagues estimate, the computer models could annotate 30,000 imaging reports in the study.

    Next, the text-processing software examined reports for 1,294 patients whose records had not been manually reviewed. Investigators confirmed that the software's outcome measurements in this group predicted survival about as well as the human assessments of outcomes among the patients whose CT records were manually reviewed.

    The team is now working to demonstrate that the patient outcomes gathered by software do indeed reflect known associations between tumor genomic features and outcomes in lung cancer, says Kehl. Once that's done, he, Schrag, and their coworkers will start to ask new research questions about connections between treatments and outcomes. The researchers also will consider ways to incorporate such models into clinical care delivery, such as finding best ways to manage symptoms. And they plan to check out how well the models perform with other types of cancer.

    "Moving some of this AI work into the clinic is important and complex and challenging," Kehl says. "But it's an important direction to take. It's where the field will have to go."

    Who Gets Pancreatic Cancer?

    When pancreatic cancer is detected before it spreads, 37% of patients survive five years. Unfortunately, that's the best-case scenario. The survival rate plummets to 10% if the tumor is found at a later stage, which happens in about 90% of cases. That's one main reason that pancreatic cancer is expected to become the second-most-deadly type of cancer in the United States by 2030.

    The disease often can be detected at an early stage by imaging with CT, MRI, or endoscopic ultrasound. Currently, such screening is performed only for a very small group of people thought to be at high risk of the disease, either through family history or the presence of certain worrisome genetic mutations. But pancreatic cancer is far too rare for such procedures to be practical for the population at large.

    Researchers know that pancreatic cancer also is linked to other factors, including obesity, smoking history, alcohol use, age, and late-onset diabetes accompanied by weight loss. But there are no guidelines for integrating all these factors to provide sufficiently reliable predictions of disease risk to allow effective screening to catch the cancer as early as possible.

    Producing such guidelines through machine learning is the goal of a project co-headed by computational biologist Chris Sander, PhD, director of the cBio Center in Dana-Farber's department of Data Sciences, and MIT computer scientist Regina Barzilay, an expert on machine learning in medicine. Medical oncologist Brian Wolpin, MD, MPH, and radiology physician-scientist Michael Rosenthal, MD, PhD, of Dana-Farber; epidemiologist Peter Kraft of the Harvard T. H. Chan School of Public Health; and disease system biologist Søren Brunak, PhD, of the University of Copenhagen are collaborating on the effort, which is funded by Stand Up to Cancer.




    Chris Sander, PhD, director of the cBio Center in Dana-Farber's department of Data Sciences


  • Their work has begun with a highly informative dataset that includes complete clinical records for about 2.5 million patients in the Danish National Patient Registry, about 20,000 of whom have had pancreatic cancer, Sander says. To meet stringent Danish privacy requirements, the researchers must make repeated trips to Copenhagen for access to the Danish computing resources. The project also will extract risk profiles from the Henry Ford Health System in Detroit and Partners HealthCare in Boston.

    In building machine-learning algorithms, "we're not going in with preconceived notions of what are the deciding factors for risk," Sander says. "We take a snapshot of a person's clinical records, and starting with this raw data, we challenge the artificial intelligence engine to learn, 'What's the probability that this person will get pancreatic cancer in the next few years?'"

    Within their stockpiles of clinical data, the team will look at unstructured doctor notes (extracted with natural-language processing techniques similar to those in the lung cancer project above), disease codes and treatment codes, blood tests, and images. "So far, we have the first results from the disease codes, and we're getting pretty good results already," Sander says.
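    As a rough illustration of how disease codes can feed a risk model, a patient's record can be reduced to the set of diagnosis codes it contains, and a logistic model can turn a weighted sum of those indicators into a probability. Everything in the sketch below is invented: the codes, weights, and baseline are hypothetical stand-ins for what the project actually learns from millions of registry records.

```python
# Toy risk scoring from diagnosis codes. All weights are hypothetical;
# a real model would learn them from millions of patient records.
import math

# Hypothetical per-code log-odds weights (positive = raises risk)
WEIGHTS = {
    "E11": 1.2,   # type 2 diabetes (late onset)
    "K86": 0.9,   # chronic pancreatitis
    "F10": 0.5,   # alcohol-related disorder
    "R63": 0.7,   # abnormal weight loss
}
BIAS = -6.0  # low baseline log-odds: pancreatic cancer is rare

def risk_score(codes):
    """Probability-like score from a patient's set of diagnosis codes."""
    z = BIAS + sum(WEIGHTS.get(code, 0.0) for code in codes)
    return 1 / (1 + math.exp(-z))  # logistic function

low = risk_score({"J06"})                 # only an unrelated code
high = risk_score({"E11", "K86", "R63"})  # several risk-linked codes
print(f"{low:.4f} {high:.4f}")
```

    The learned version of such a model can weigh thousands of codes at once, and the point of the machine-learning approach is precisely that the weights come from the data rather than from preconceived notions.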

    The two-year project aims to produce sufficiently accurate risk guidelines for a clinical trial screening for early signs of cancer. Scientists also hope their algorithms will shed light on the roles various risk factors have in predicting (and maybe even causing) the disease and how these could be used in prevention programs.

    To enter standard clinical care, the guidelines will have to deliver very high accuracy. "You can have less than 100% accuracy for a clinical trial," Sander notes. "But if you want to apply the risk assessment AI tool in clinical practice, then your accuracy has to be very high to minimize the chance of unnecessary treatment."
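    A quick Bayes' rule calculation shows why accuracy matters so much for a rare disease. With round, illustrative numbers (not figures from the project), screening an entire population yields mostly false positives, while screening only a model-selected high-risk group makes a positive result far more meaningful.

```python
# Positive predictive value (PPV): of those who test positive, what
# fraction truly have the disease? Numbers are illustrative only.
def ppv(prevalence, sensitivity, specificity):
    """PPV via Bayes' rule for a binary screening test."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Screening everyone (rare disease) vs. a model-selected high-risk group
general = ppv(prevalence=0.0001, sensitivity=0.9, specificity=0.99)
high_risk = ppv(prevalence=0.01, sensitivity=0.9, specificity=0.99)
print(f"{general:.3f} {high_risk:.3f}")
```

    With the same test, under 1% of positives are real in the general population, versus nearly half in the hypothetical high-risk group, which is why a reliable risk model must come before any broad screening program.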

    Truly effective screening programs also will demand more effective screening techniques. Hundreds of labs around the world are studying new methods of imaging or blood analyses for early detection of solid tumors such as pancreatic cancers.

    "There's huge interest in that research," Sander says. If better guidelines can be combined with better screening, he and his colleagues hope for real progress against this fearsome form of cancer.

    "Our clinicians at Dana-Farber do a lot of work on curing cancer when it's very advanced, which is where the suffering is and where we really need to beat the cancer," he adds. "But it's also important to try to catch cancer at the early stages, avoiding all the suffering along the way and bringing down the total cost of treatment, and that's one of our top priorities."

Posted on June 18, 2020

  • Lung Cancers
  • Pancreatic Cancer
  • Cancer Genetics