In an era of rapid innovation, big data, and shifting norms, trust is no longer just a social virtue; it must be designed intentionally as we enter a new frontier of artificial intelligence (AI) integration. In AI, and especially in healthcare, “trust engineering” offers both a conceptual lens and a practical imperative: how do we create and embed trustworthiness in the AI used across healthcare? How trust is engineered shapes whether healthcare providers and educators can rely on AI technology.
Trust is relational, adaptive, and context-sensitive. In science more broadly, trust sits uneasily between skepticism and acceptance. In human-AI teams, where AI systems function as collaborators rather than tools, this tension becomes acute, often leading to either overreliance or cautious disengagement. A key barrier to human-AI trust is the opacity of AI systems, which operate as “black boxes” and offer little insight into how they are trained or what biases may be embedded in their design.1 Without transparent and equitable disclosure from developers, clinicians and healthcare educators are left to trust systems they cannot independently assess. Transparency, validation, and user training are crucial for the responsible deployment of such systems.1
Here, we introduce the concept of trust engineering: how trust is built, monitored, and maintained when working with AI. In AI trust engineering, we must ask: Does the AI do what it claims to do (i.e., external and construct validity)?2 What changes in research processes are required to ensure AI trustworthiness (i.e., methodological rigor)? Faculty must be equipped to critically engage with AI tools and model that engagement for future health professionals.
Is contemporary AI-augmented research truly trustworthy? While AI has demonstrated improvements in learning, assessment, and certain clinical outcomes, broader trust-building requires deliberate efforts to address ethical and implementation challenges, thereby preventing misuse and harm.3 Models trained on non-diverse, siloed datasets amplify bias. The rapid evolution of AI outpaces the availability of diverse training datasets, algorithmic transparency, standardized evaluation protocols, and effective regulatory governance. In other words, cultivating trustworthy AI depends on engineered pathways with clear guidelines for developer transparency and ethical oversight for both industry and users.
In healthcare AI, safety centers on preventing patient harm and maintaining vigilance to detect and address adverse events that can arise from model errors, data drift, or hidden algorithmic bias.1 Safer AI is not error-free AI but AI designed for safe use. Users must exercise human judgment, recognize when the AI does not know, and guard against harm from outputs on which they may unknowingly rely.
Trust in human-AI collaboration demands processes to rigorously test and verify whether AI performs as claimed. As AI transforms how we think, teach, write, and publish, our ethics and methodology must evolve in parallel; AI now functions as a threshold concept in health professions education.4 Establishing governance for oversight, designing safeguards for fair use, and creating standards that clarify the credibility of AI involvement represent a “trust tax”: an upfront investment in transparency, educator readiness, and critical engagement.5 When implemented deliberately, these investments yield a “trust dividend”: increased confidence, improved educational outcomes, and more meaningful integration of AI in learning environments.5
Trust remains fundamentally relational. AI designers, clinicians, researchers, patients, and leaders must view themselves as partners, not passive recipients, because we engineer trust relationships as much as we engineer artifacts. But trust is fragile without transparency, uniform access, and accountability. Beyond detailed disclosure, trust engineering requires participatory design and ongoing oversight, enabling all stakeholders to engage critically, safely, and equitably in human-AI practice.
Acknowledgements
We thank Dr. Roger Edwards for his encouragement and insights on this work. We also thank MGH IHP for its early buy-in to our research priorities in AI trust engineering.
References
1. Labkoff S, Oladimeji B, Kannry J, Solomonides A, Leftwich R, Koski E, Joseph AL, Lopez-Gonzalez M, Fleisher LA, Nolen K, Dutta S. Toward a responsible future: recommendations for AI-enabled clinical decision support. Journal of the American Medical Informatics Association. 2024 Nov;31(11):2730-9.
2. Feigerlova E, Hani H, Hothersall-Davies E. A systematic review of the impact of artificial intelligence on educational outcomes in health professions education. BMC Medical Education. 2025 Jan 27;25(1):129.
3. Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, Hanson J, Haas M, Spadafore M, Grafton-Clarke C, Gasiea RY. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Medical Teacher. 2024 Apr 2;46(4):446-70.
4. Bajwa M, Morton A, Patel AP, Palaganas JC, Gross IT. Artificial Intelligence: Crossing a Threshold in Healthcare Education and Simulation. Cureus Journals. 2025 Apr 29;2(1).
5. Bettison J. Trust Tax or Trust Dividend? Bettison Strategy & Communications. Published July 12, 2023. Accessed November 12, 2025. https://www.bettison.com/trust-tax-or-trust-dividend/