AI being used across the MGH Institute to increase efficiency and improve research, administrative work, curriculum design, and the student learning experience

Like many third-year Doctor of Occupational Therapy students, Jillian DeAlmeida was balancing a demanding schedule. As a full-time student doing fieldwork in a pediatric setting, she managed a large caseload and planned interventions while adjusting to the responsibilities of clinical practice. She needed support, not just to keep up, but to develop the clinical reasoning skills essential for success in the profession. 

“I didn’t know how I was going to get all of this done because I didn’t have time at the clinic and I didn’t have enough time at home,” DeAlmeida recalls. “I needed to find something that could make it easier for me to do everything.” 

Seeking guidance, she reached out to Tara Mansour, OTD, MS, OTR/L, an assistant professor of Occupational Therapy and the department’s academic fieldwork coordinator, who had just the answer: 

Generative artificial intelligence.

Generative AI is a technology that allows users to ask questions or provide prompts to generate text and other forms of data within seconds. Unlike a traditional search engine that retrieves existing information, generative AI creates new responses by drawing from large amounts of data and producing outputs based on patterns and probabilities. The quality of the output depends on the clarity of the prompt, and the information generated is not always accurate, so it requires critical evaluation by the user. 

Because it can be applied to many different tasks and projects, the use of artificial intelligence has begun to permeate all areas of the MGH Institute. 

In DeAlmeida’s case, she used generative AI to create an initial list of therapeutic play-based activities for a young client. But that was just the starting point. AI reduced the time and mental effort required to brainstorm ideas from scratch, allowing her to focus on refining and personalizing interventions rather than generating them. As a result, she was able to concentrate on applying her skills by carefully evaluating each suggestion, cross-referencing it with current literature, and ensuring that every intervention aligned with her client’s needs and goals. 

“AI has been a game-changer for me,” says DeAlmeida. 

That’s exactly how Dr. Mansour envisioned it when she introduced AI to students last spring. 

“I recognized that AI was something this generation of students would use, whether we wanted them to or not,” Dr. Mansour says. “So, I considered how we could teach them to use it ethically, safely, and effectively to facilitate learning and support their performance during fieldwork.” 

Mansour designed a case-based learning activity where students used ChatGPT to generate three potential treatment ideas for a hypothetical patient, learning to omit identifying details from the prompt to maintain confidentiality. To ensure clinical reasoning and evidence-based decision-making, students then researched peer-reviewed studies, textbooks, and other reputable sources to evaluate each AI-generated intervention. Finally, they presented their findings, providing evidence to support or refute each treatment and explaining their clinical rationale. 

“I’m actually challenging them to engage in a more complex process by teaching them to critically evaluate content,” Mansour explains. “They’re not just learning how AI can be a useful tool—they’re also recognizing its limitations and the necessity of ensuring interventions are evidence-based, client-specific, and safe. This approach allows students to focus more on clinical reasoning, skill development, and direct patient care.” 

Mansour and Associate Professor John Wong, PhD, co-authored “Enhancing Fieldwork Readiness in Occupational Therapy Students with Generative AI” last fall. Published in Frontiers in Medicine, the article showed statistically significant changes in students’ comfort using AI, knowledge of ethical and safety considerations, and perceived contributions of ChatGPT in health care. Additionally, 96% of students in the study said they would use AI after graduating. 

The paper’s publication led to numerous speaking engagements both within and beyond occupational therapy. Despite her findings, not all faculty share Mansour’s enthusiasm.

“I ask audiences, ‘How many of you think I’m crazy for sending students to ChatGPT for answers?’ and inevitably, a few hands go up,” Mansour says. “But that’s the point. I’m not just having students find answers—I’m teaching them to critically evaluate AI-generated responses. Instead of simply identifying interventions, they must assess their validity, relevance, and safety, aligning with higher-order thinking skills in Bloom’s taxonomy. This has reinforced that we must educate both students and faculty on appropriate AI use, as teaching methods must evolve, knowing students have access to these tools.”

Assistant Professor of Nursing Dr. Rachel Cox Simms says AI is a tool that allows nurse educators to create useful content for students.

AI’s Use in Curriculum Design and National Exams

Instructional Technology Manager Greg Moore and his team are looking at how to leverage AI to improve the learning experience of students across the IHP’s academic programs. Their work includes implementing a question-creation tool that will allow faculty to generate ungraded practice questions from their course content, which they can review, edit, or regenerate as needed. 

“It’s a great way for instructors to check that they are meeting their learning objectives,” says Moore, whose group currently is running a pilot program to collect feedback and hear more about faculty needs around AI. “If the tool isn’t creating meaningful questions, it might be an indicator that content is missing or that too much emphasis was placed on material that’s not core to the learning objectives. It’s not just about creating assessment questions quickly; it’s about helping instructors create more engaging learning experiences and giving students more chances for meaningful practice.” 

In the School of Nursing, assistant professors Rachel Cox Simms, DNP, FNP-BC, and Karen Hunt, MSN, RN, RD, CNE, along with Associate Dean for Academic Affairs and Associate Professor Rebecca Hill, PhD, DNP, FNP-C, CNE, FAAN (now senior associate dean of academic affairs at the University of North Carolina at Chapel Hill), wondered how AI-generated practice questions for the profession’s National Council Licensure Examination, better known as the NCLEX, would compare to those crafted by nurses. So the trio put the question to the test, sending both human- and ChatGPT-generated NCLEX questions to nursing faculty at 58 accredited New England programs and asking their peers to rate them on five criteria: grammar, clarity, clinical relevance, terminology, and difficulty. In some cases, the feedback showed, the AI-generated questions were better. 

Based on their data, the faculty wrote and submitted to the Journal of Nursing Education “Comparative Analysis of NCLEX-RN Questions: A Duel Between ChatGPT and Human Expertise.” The paper received the 2024 Christine A. Tanner Scholarly Writing Award as the first study to compare AI-generated and human-generated questions in nursing. In announcing the recognition, the award committee wrote, “[I]t is clear, informative, relevant, and has significant potential to contribute to changes in nursing education—whether or not we are ready for them. The perspective taken by these authors might help us to actually get ready.” 

“What’s really exciting about this is that it can be a tool that nurse educators can use to create content that is very useful for their students. It’s going to save time and be more productive for them,” says Dr. Cox Simms, who now gives classroom assignments in which students are tasked to use AI. 

Department of Health Professions Education (HPEd) Assistant Professor Anshul Kumar, PhD, partnered with HPEd faculty and colleagues in the Physician Assistant Studies program to create a machine-learning model aimed at predicting which students would pass or fail the Physician Assistant National Certifying Examination, or PANCE, one year before they take the test. 

Dr. Kumar, who found the results to be too dependent on alumni data and prone to making erroneous predictions about current students, is now spearheading the development of the Adaptive Learner App, a kind of smart flash card, with a team of website and app developers from the IHP, Harvard University, and the University of Southern California. The app provides students with practice assessment questions; their answers are stored in the app and then used to identify topics in which they need additional study. In addition to seeing those results, faculty also get class-wide data that can pinpoint any areas in which an entire class may be struggling. 

“It’s not using the AI capabilities that have taken the world by storm over the past two years,” says Kumar, who is piloting the app in his classes, as are PA instructors Brittany Palaski, MPAS, PA-C, and Jessica Spissinger, LICSW, PA-C, CAQ-Psychiatry. “I’m most interested in common-sense fixes and basic tools that can help students and faculty.” 

However, AI shouldn’t be seen as a replacement for all traditional ways of studying, notes Director of the Library Jessica Bell, MLS. 

“You don’t want it to take over something students should be doing, to lean on it excessively for writing or analyzing information they’re taking in, because that’s where the thinking, ideation, and growth happens,” Bell says. “It’s a question of how much is too much. And we’re all looking at that question right now—what’s appropriate for AI to do and what’s appropriate for a human to do, especially a learner.”


Dr. Marziye Eshghi, director of the Speech Physiology and Neurobiology of Aging and Dementia (SPaN-AD) lab, and her team leverage AI technologies to identify individuals at high risk for Alzheimer’s disease.

Limitless Possibilities in Research

“Our researchers are leveraging AI to transform communication sciences, Alzheimer’s disease detection, language acquisition, and rehabilitation sciences,” says Nara Gavini, PhD, MPhil, associate provost for research at the MGH Institute. 

He highlights the power of interprofessional collaboration, where faculty experts in communication sciences, physical therapy, occupational therapy, neuropsychology, nursing, and biomedical engineering work together to tackle complex health challenges. “As AI continues to evolve,” notes Dr. Gavini, “its integration into research and clinical practice at the MGH Institute is driving scientific discovery, enhancing patient outcomes, and expanding access to health care, ultimately reshaping how we understand and improve human health.” 

Jordan Green, PhD, CCC-SLP, FASHA, the school’s chief scientific officer and director of the Speech and Feeding Disorders Lab, has used AI to develop voice recognition tools that support early detection, track progression, and monitor patients with neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS). He worked with Google’s Euphonia team to develop the technology behind Project Relate, an AI-powered Android phone app that works as a personalized speech recognition tool for adults with impaired or unintelligible speech due to hearing impairment, cerebral palsy, stroke, traumatic brain injury, or progressive neurologic diseases. 

Dr. Green says that while AI is a tool with great scientific potential, it’s still very much a work in progress. 

“We’re leveraging AI tools to enhance speech-language pathology services, and that’s our ultimate goal,” he says, pointing to efforts to research the accuracy and limitations of AI algorithms for assessing speech as well as applications for monitoring health and identifying neurologic disease. “We are working with a number of groups that are interested in deploying these metrics as biomarkers in clinical trials for new experimental drugs. We hope that the outcomes of this research will help drive down the cost of services, improve the precision of our assessments, and provide enhanced communication tools. In 10 years, we’re going to have a different world because of all this groundwork that’s being done now.”

Marziye Eshghi, PhD, focuses on state-of-the-art multimodal techniques to identify early signs of Alzheimer’s disease (AD) years before cognitive decline becomes observable. As the director of the Speech Physiology and Neurobiology of Aging and Dementia (SPaN-AD) lab, she recently received funding from the Massachusetts AI and Technology Center for Connected Care in Aging and Alzheimer’s Disease. In this project, her research team leverages AI technologies to identify individuals at high risk for AD by integrating remotely collected speech acoustic and kinematic features with AD molecular pathologies. This approach offers a more comprehensive understanding of the early mechanisms underlying AD. 

“The successful development of this AI-driven model has the potential to transform the landscape of AD research and clinical practice,” says Dr. Eshghi. “By offering an innovative and scalable method for early AD detection, this work paves the way for new therapeutic advancements and preventive strategies, ultimately contributing to improved management and quality of life for individuals at risk of AD.”

Dr. Joshua Hartshorne’s research explores why children, despite their cognitive limitations, excel at language learning compared to adults, and why some children struggle with language acquisition. Using AI-powered computational modeling and crowdsourcing, he simulates language-learning processes and collects large-scale data to address core questions in the language sciences. 

One topic he focuses on is the challenge of assessing the language abilities of bilingual children, an area he says is vastly understudied. According to census data, 20% of U.S. children are now bilingual, and if historical trends continue, that percentage is expected to increase dramatically in the coming years, making the need to develop assessments for this group urgent. To do that, he and his team at the Advanced Analytics & AI for Communication Science Group are using a combination of neuroimaging studies, computational modeling, and new behavioral studies with different types of learners. 

“Right now, there is no known way of determining whether a bilingual child has a language disorder,” says Hartshorne, a research associate professor. “They don’t learn each of the two languages quite as fast as a monolingual child learns one language, so when you test them, they come up as language delayed. Since most of them are going to be fine, we need a different set of standards.” 

For Janice Palaganas, PhD, RN, NP, ANEF, FNAP, FAAN, FSSH, implementing AI in simulation-based education is paramount to the future of preparing health care practitioners. A professor in the Health Professions Education department, she co-founded the AI Simulation Healthcare Collaborative, which held its inaugural summit last November, bringing together engineers, business professionals, and educators to identify problems, investigate potential public-private partnerships, and develop infrastructure to enhance AI’s role in healthcare education and simulation. 

“Without a collective of different expertise, including AI engineers, industry, and educators, findings are superficial,” says Dr. Palaganas. “We learned a lot from each other.” 

And it’s not just faculty who are working on AI. Students in the Master of Science in Genetic Counseling program are learning in fieldwork rotations throughout Mass General Brigham with teams implementing AI into clinical care as well as genetics and genomics test interpretation, notes program chair Maureen Flynn, MS, CGC, MPH. 

“They’re getting a chance to explore and conduct research on topics that are becoming increasingly relevant and important to the roles of genetic counselors,” says Flynn.

Dr. Joshua Hartshorne and his team at the Advanced Analytics & AI for Communication Science Group are exploring the challenges of assessing the language abilities of bilingual children, among other research topics.

Exploring Possibilities in the Clinical Setting and Beyond

As the director of the Institute’s Healthcare Data Analytics program, Shuhan He, MD, has a clear vision of how AI’s ability to crunch very large data sets is beneficial to health care. 

“Data analytics and AI machine learning are really fundamentally the same thing,” says Dr. He, who also is a practicing emergency room physician at Massachusetts General Hospital. “When students take a data analytics program, they are thinking of how to use these new technology tools to solve really fundamental problems.” 

For example, data historically could only measure how long hospital patients waited to be seen. Not anymore. 

“Now we can look at the entire data set and ask, ‘Where is the bottleneck? Why does the bottleneck happen? And when does the bottleneck occur?’” notes He. “You can apply the technology to answer it in a better, more fundamental way. It’s blurring the line between how we traditionally think about healthcare data and problems.” 

Dr. He, who has co-authored papers showing how AI can help physicians reduce diagnostic uncertainty and remove human bias from research grant funding decisions, is currently working with the IHP’s instructional design team on an edX MicroMasters program that will cover topics such as privacy and ethics, machine-learning algorithms, and real-world use cases of AI in healthcare settings.

Director of Academic Operations Heather Easter says the Provost’s Office is embracing AI to enhance productivity, event planning, and faculty support; she calls it a “game-changer.”