Private and secure generative AI tool supports operations and research at Dana-Farber

Study Title: GPT-4 in a Cancer Center: Institute-Wide Deployment Challenges and Lessons Learned

Publication: NEJM AI

Dana-Farber Cancer Institute authors: Renato Umeton, PhD; Anne Kwok; Rahul Maurya; Domenic Leco, JD, MBA; Naomi Lenane; Jennifer Willcox, JD; Gregory A. Abel, MD, MPH; Mary Tolikas, PhD, MBA; Dana-Farber Generative AI Governance Committee; Jason M. Johnson, PhD


Dana-Farber Cancer Institute has implemented an artificial intelligence (AI) application intended for general use in a medical center or hospital. The system, called GPT4DFCI, is permitted for operational, administrative, and research uses but prohibited in direct clinical care. It is deployed within the Dana-Farber digital premises, so all operations, prompts, and responses occur inside a private network. The application is private, secure, HIPAA-compliant, and auditable.
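The private-deployment pattern described above can be sketched as follows. This is a minimal illustration only: the internal endpoint name, deployment name, and secret-store placeholder are hypothetical, not Dana-Farber's actual configuration; the request shape follows the standard Azure OpenAI Service chat-completions REST format.

```python
import json

# Hypothetical values -- placeholders, not Dana-Farber's actual configuration.
ENDPOINT = "https://gpt4dfci.internal.example"  # resolvable only inside the private network
DEPLOYMENT = "gpt-4"                            # assumed Azure OpenAI deployment name
API_VERSION = "2024-02-01"


def build_chat_request(prompt: str):
    """Build the URL, headers, and JSON body for an Azure OpenAI
    chat-completions call routed through a private network."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    headers = {
        "api-key": "<retrieved-from-internal-secret-store>",  # placeholder
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body


url, headers, body = build_chat_request("Summarize this meeting note.")
```

Because the endpoint resolves only inside the institutional network, prompts and responses never leave the digital premises, and auditability can be layered on by logging requests at that same internal gateway.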

The application was rolled out in phases over the past year to a growing number of users, accompanied by detailed guidance; for example, users were reminded that they are directly responsible for the completeness, veracity, and fairness of any final work product and must verify GPT-generated content, which may be incomplete, biased, or factually false. The rollout of the tool and its associated policy has been guided by the Dana-Farber Generative AI Governance Committee, which broadly represents DFCI constituencies, including research, operations, legal, privacy, information security, bioethics, compliance, intellectual property, and patients.

With clinical care use ruled out, a survey of initial users found that the most commonly reported primary uses were extracting or searching for information in notes, reports, or other files, and answering general knowledge questions. Other reported uses included summarizing documents or research papers and drafting or editing letters, meeting minutes, or presentations. The most frequently reported concerns were inaccurate or false output and ethics and/or compliance with policies.


Generative AI holds significant potential to aid healthcare, coupled with significant risks of bias, inaccuracy, incompleteness, and misuse. Despite these risks, the Dana-Farber team concluded that a broad prohibition of generative AI tools would inhibit learning and innovation, which are central to the Dana-Farber mission. To manage risk and advance discovery, a broadly representative, multidisciplinary governance body guided the technical, ethical, and policy decisions behind this implementation. The experience and technical material have been shared to inform other healthcare institutions considering similar efforts.


The Microsoft Azure teams supported Dana-Farber in managing Azure OpenAI Service quotas and shared expertise to help ensure a resilient application.

News Category
Artificial Intelligence

Media Contacts

If you are a journalist and have a question about this story, please call 617-632-4090 and ask to speak to a member of the media team, or email

The Media Team cannot respond to patient inquiries. For more information, please see Contact Us.


Renato Umeton, PhD


Jason M. Johnson, PhD