Dr. Maria Bajwa earned her PhD in simulation-based education from the MGH Institute and is now an associate professor at the Institute and the artificial intelligence (AI) lead in the Research on Experiential-Based Education and Learning (REBEL) lab. She specializes in healthcare simulation and the ethical integration of AI in health professions education. She spoke to OSC’s Lisa McEvoy about her focus on advancing the capabilities of educators through technology-enhanced learning, AI in health professions education, and the responsible adoption of AI. 

How did you become interested in AI and what are some of your current projects?
I did my PhD here and focused on integrating technology into teaching, learning, and simulation from the perspective of faculty professional development. One of the required courses for the PhD at the MGH Institute is a predictive analytics course, and it ignited a thirst for knowledge in me. Predictive analytics stands adjacent to artificial intelligence, and it prompted me to start learning more about AI on my own.

When I finished my PhD, I was invited to teach a technology course here. I revamped the course and added a module about AI, before the AI boom even started. Since then, I have gone deep, deep, deep into the subject. I have conducted AI-training professional development sessions at several universities, I am part of a team that was awarded a grant to implement AI-infused curricula at six colleges in the State University of New York (SUNY) system, and I am part of a group that founded and launched the AI Healthcare Simulation Collaborative.
Both my master’s training and my PhD are in simulation education. Simulation is a big part of healthcare education, and AI in simulation is not just coming, it’s here already. With all the work I’m doing on AI in simulation, I’m at the crossroads of the two fields. Given my own interest and training in technology and faculty development, it feels very natural and organic for me to be working in this area.

What are the advantages of using AI in simulation?
There are so many things we can do with AI. Simulation is basically giving a person the opportunity to practice in a controlled, safe environment, getting them ready to face or work with a patient. If there is a need to practice a hands-on skill like suturing, an assessment, or a soft skill like delivering bad news to a patient, we can have AI design a case. Then we verify that case to see if it is appropriate for the learners. It is efficient because it cuts down my work. And this is only one use case among many.

When it comes to other important skills like taking a history, which needs to be practiced over and over and over, efficiency is important. If a student doesn’t understand a case, we can tell AI to act as a patient with a specific set of symptoms and then have the student act as the provider to practice their questioning technique. If I know a student is stuck, I can create a case and have them practice with it as an adaptive learning technique, using ChatGPT. In this case, I’m using a simulated-participant case scenario, but I’m also working with AI in my overall lesson plan, and that is how we are doing precision learning with AI.

Why is it important that healthcare educators use AI in their teaching?
We want people who are very capable, very competent and confident to take care of us. It’s our responsibility as faculty and educators to start teaching and start assessing in a way that is compatible with AI. Students are using AI tools routinely, and if AI can solve the problem we give to the student, then that is not a good assessment. 

Educators need to change their assignments, change the way they teach, and change how they assess competency and knowledge. Assessment is an integral part of teaching and learning with AI. As an example, English is not my native language. I check everything and then translate it, and AI makes doing that easier. So why am I judging my students on how they construct a sentence? Is that assessing their critical thinking, how they process a case, or a patient, or a history, or lab results? Can they put two and two together?

AI will teach the student what to do and give them how-to guides, and if we put the correct assessment on the back end, it can at the same time assess the student the way I want them to learn and be assessed.

How do you ensure that AI is teaching the correct things?
When we are integrating a technology like artificial intelligence into our teaching and learning practices, we have to have a human in the loop. We have to keep a cycle that periodically assesses how the AI is doing: a program evaluation as well as a technology evaluation, to see if it is behaving the way we want it to behave and assessing the things we want it to assess.

If I’m building a course, I need to go back and keep checking my prompts. Whether there is a big update to the AI app or platform, or even in the absence of an update, we have to set a timeline for a periodic review, every three, four, or six months. Somebody needs to go back and ask: is the assignment still working?

For example, as ChatGPT was evolving, it became sycophantic. The company found that one of the behaviors it had learned was to please the user, and they had to put in guardrails for that. They kept shipping adjustments, and now it’s behaving better again. That only happens with human oversight.

Remember, AI is learning on its own, at its own pace, and that pace is faster than many people combined. We have to see if the output is still coming out the way we want, or if the system has learned something and changed. That’s why it’s called generative: it is generating new output every time.

Because of the importance of the human in the loop, we have established an initiative at the REBEL lab called the AI Assurance Lab. Its purpose is to assess the claims of AI-infused teaching, training, and simulation, to see whether we can trust AI, or trust the claims being made about it, so that we can hopefully guide healthcare educators and simulationists toward evidence-based practices for using AI. We need to engineer trust, both in our own abilities to use AI and in the AI-infused applications that help us teach and train the next generation of healthcare providers.