
Editorial

Glob J Med Stud. 2024;4(2):25-27
doi: 10.25259/GJMS_37_2025

Artificial Intelligence in Biomedicine and Health Care: Hope or Hype?

Physician Researcher, Squad Medicine and Research (SMR), Amadalavalasa, Andhra Pradesh, India,
Department of Internal Medicine, Quinnipiac University Frank H. Netter School of Medicine, St. Vincent’s Medical Center, Bridgeport, Connecticut, United States.

*Corresponding author: Sushrut M. Ingawale, Department of Internal Medicine, Quinnipiac University Frank H. Netter School of Medicine, St. Vincent’s Medical Center, Bridgeport, Connecticut, United States. drsushrutingawale@gmail.com

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Suvvari TK, Ingawale SM. Artificial Intelligence in Biomedicine and Health Care: Hope or Hype? Glob J Med Stud. 2024;4:25-7. doi: 10.25259/GJMS_37_2025

Dear Readers,

Artificial intelligence (AI) has moved rapidly from theoretical promise to practical application in biomedicine and health care. The question now is whether this momentum represents genuine transformation, a well-earned hope, or merely a bubble, an overstated hype. By mid-2025, real-world deployments are evidencing both breakthrough gains and persistent regulatory, ethical and technical roadblocks. This editorial explores recent developments, analyses realistic benefits versus inflated expectations and outlines the critical steps needed to turn AI’s promise into reliable progress.

TANGIBLE PROGRESS WITH AI: THE HOPE

Improved efficiency in care delivery and administrative relief

AI-powered scribes automate clinical documentation, significantly reducing clinician workload. In Ontario, the use of AI scribes led to a 69.5% reduction in laboratory administrators’ documentation time and three fewer clinician hours per week.1 These systems generate high-quality notes, improve provider satisfaction and enhance patient engagement, although human review remains essential.2 AI is also reducing administrative burdens in hospitals and clinics. For instance, Australian startup NexusMD secured US $6.3 million in seed funding after its AI agents achieved a 30% improvement in emergency department performance through better documentation and compliance management.3

Enhanced clinical decision support

Artificial Intelligence-augmented Clinical Decision Support Systems (AI-CDSS) are demonstrating measurable value in improving clinical care. A systematic review of 26 studies conducted between 2018 and 2023 found that AI-CDSS consistently enhanced patient outcomes, increased clinician satisfaction and improved workflow efficiency.4 These systems support diagnostic accuracy; reduce medication errors; and provide patient-specific, evidence-based recommendations at the point of care. Particularly in high-stakes environments such as intensive care units and cardiovascular settings, AI-CDSS tools have improved early detection, streamlined decision-making and optimised treatment pathways. Clinicians reported greater confidence in clinical judgements when aided by AI, and many systems demonstrated potential to reduce cognitive burden and burnout. However, successful implementation relies heavily on user-centred design, seamless workflow integration and clinician trust. As evidence continues to accumulate, AI-CDSS is emerging not merely as a supportive tool but as a critical enabler of precision, efficiency and safety in modern healthcare delivery.5,6

Advancements in diagnostics

AI diagnostic tools are revolutionising pathology and imaging. Convolutional neural networks outperform traditional methods in radiology and pathology, facilitating early disease detection.7 Multimodal AI systems integrating imaging, text and clinical records outperform unimodal models by nearly 6.2 percentage points in the diagnostic area under the curve.8 In oncology, AI tools are being employed for early tumour detection, histopathological classification and predicting treatment response.7,9 Similarly, in ophthalmology and cardiology, Food and Drug Administration (FDA)-approved AI systems now assist in diagnosing diabetic retinopathy and atrial fibrillation, respectively, with real-time analysis capabilities.8,9 In addition, AI is proving valuable in infectious disease diagnostics, where models trained on chest X-rays and clinical markers aid in triaging pneumonia and COVID-19 cases.7-9

Strengthening medical education

AI is playing a transformative role in medical education by enhancing learning personalisation, simulation training and assessment efficiency. AI-driven platforms are being used to generate adaptive quizzes; simulate complex clinical scenarios; and provide immediate, tailored feedback to students and trainees.10 These tools support competency-based education and help bridge gaps in diagnostic reasoning and clinical decision-making. A recent review emphasised the value of AI in augmenting traditional learning modalities, particularly in resource-limited settings, where it can provide scalable and consistent educational support.11 Several studies have advocated for integrating AI literacy into undergraduate and postgraduate medical curricula to prepare future clinicians for working alongside intelligent systems.10,11 Furthermore, natural language processing tools have been applied to analyse learner performance and identify knowledge gaps in real time.11 As AI becomes more embedded in health care, equipping medical professionals with the skills to understand and critically evaluate AI outputs will be essential for safe and effective practice.

PERSISTENT CHALLENGES WITH AI: BEHIND THE HYPE

Regulatory and evaluation barriers

Despite advances, many AI tools lack thorough clinical validation. Medical AI systems such as diagnostic tools, image scanners and brain-computer interface (BCI) platforms require rigorous evaluation and regulatory clearance from the FDA and equivalent agencies in other countries.12 A recent review highlights gaps in evaluation frameworks and emphasises the need for public dialogue amongst developers, clinicians and regulators.13 Further recommendations describe the necessity of pre- and post-deployment oversight, transparent reporting and adverse event monitoring. Reporting frameworks such as Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis-Artificial Intelligence (TRIPOD-AI) and Consolidated Standards of Reporting Trials-Artificial Intelligence (CONSORT-AI) mandate transparent reporting, yet their adoption remains inconsistent and underregulated globally.12,13

Bias and fairness concerns

AI models in health care often inherit and amplify biases present in their training data, leading to unequal performance across demographic groups. These biases can affect diagnostic accuracy, treatment recommendations and clinical risk predictions, particularly in underrepresented populations. For instance, studies have shown that commercial AI algorithms may underdiagnose cardiovascular conditions in women and racial minorities due to limited representation in source datasets.14 Some studies have reported that commonly used AI tools in dermatology performed significantly worse on darker skin tones, raising concerns about algorithmic equity.15 Moreover, mental health applications, especially natural language processing (NLP)-based chatbots, often misinterpret non-Western idioms, slang or expressions of distress, potentially leading to misclassification or inadequate support.16 The lack of standardised frameworks for bias auditing further complicates efforts to identify and rectify these disparities.14,17 Addressing fairness in AI requires intentional inclusion of diverse data sources, routine performance evaluation across subgroups and transparent reporting of demographic performance metrics. Without these safeguards, AI risks reinforcing structural inequalities rather than mitigating them.
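The routine subgroup evaluation called for above can be illustrated in a few lines of Python. This is a minimal sketch, not a validated auditing tool; the groups, labels and predictions are entirely hypothetical.

```python
# Sketch: per-subgroup sensitivity audit. All data below is hypothetical.

def sensitivity(y_true, y_pred):
    """True-positive rate: TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else float("nan")

def audit_by_subgroup(records):
    """records: list of (group, y_true, y_pred) tuples.
    Returns {group: sensitivity} so disparities are visible at a glance."""
    groups = {}
    for g, t, p in records:
        groups.setdefault(g, ([], []))
        groups[g][0].append(t)
        groups[g][1].append(p)
    return {g: sensitivity(t, p) for g, (t, p) in groups.items()}

# Hypothetical audit: the model misses more positives in group B.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0),
]
print(audit_by_subgroup(records))  # sensitivity gap: A = 1.0, B ≈ 0.33
```

Reporting such per-group metrics alongside aggregate accuracy is precisely the kind of transparent demographic reporting the fairness literature recommends.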

Privacy and data integration

The integration of electronic health records and patient-generated data using AI has the potential to enable more comprehensive, personalised care. However, it also introduces significant privacy and security concerns, particularly regarding data ownership, consent and cross-institutional sharing.18 Approaches such as federated learning, differential privacy and homomorphic encryption have been proposed to address these issues by allowing model training without centralised data access. Despite their theoretical strengths, these methods face real-world implementation challenges, including non-uniform data formats, computational overhead and limited regulatory clarity.18,19 Furthermore, ensuring transparency and accountability in AI data processing remains a critical priority to maintain public trust and compliance with global privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR).18
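The core idea of federated learning, training a shared model while raw records never leave each institution, can be sketched with a toy federated-averaging loop. The two "hospitals", the one-parameter model and the learning rate are all hypothetical; real deployments add secure aggregation and far richer models.

```python
# Sketch of federated averaging (FedAvg): each site takes a local gradient
# step and only model weights, never patient records, leave the institution.

def local_step(w, data, lr=0.1):
    """One gradient step of least squares y ~ w*x on a site's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w_global, sites, lr=0.1):
    """Each site updates the global weight on its own data; the server
    averages the local updates, weighted by site size."""
    n_total = sum(len(d) for d in sites)
    return sum(len(d) * local_step(w_global, d, lr) for d in sites) / n_total

# Two hypothetical hospitals whose private data both follow y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]
w = 0.0
for _ in range(50):
    w = federated_round(w, [site_a, site_b])
# w converges toward 2.0 without either site sharing raw data
```

The server only ever sees weights, which illustrates why federated approaches ease, but do not by themselves resolve, the consent and data-ownership concerns discussed above.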

Trust and human oversight

The opaque nature of many “black-box” AI models poses a significant barrier to clinician trust, particularly in high-stakes medical decision-making. Without clear insight into how AI systems arrive at conclusions, healthcare professionals may hesitate to rely on them, especially when outcomes conflict with clinical judgement.20 To address this, explainable AI (XAI) frameworks are being developed to provide transparent, interpretable outputs that clinicians can understand and evaluate.21 Human-in-the-loop designs, which allow clinicians to oversee, validate and even override AI recommendations, are essential for safe and ethical deployment.20,21 In addition, maintaining audit trails and integrating clinician feedback into model updates can further enhance accountability and trust in AI-enabled systems.
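A human-in-the-loop gate with an audit trail can be sketched as follows. The confidence threshold, labels and record fields are illustrative assumptions, not a description of any deployed system.

```python
# Sketch: route low-confidence AI outputs to a clinician, who may confirm or
# override; every case is appended to an audit trail. Purely illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    patient_id: str
    ai_label: str
    ai_confidence: float
    final_label: Optional[str] = None
    reviewed_by_clinician: bool = False

audit_trail: list = []

def route(patient_id, ai_label, confidence, clinician_label=None, threshold=0.9):
    """Accept high-confidence AI output automatically; otherwise defer to the
    clinician's label. Every decision is logged for later accountability."""
    d = Decision(patient_id, ai_label, confidence)
    if confidence >= threshold and clinician_label is None:
        d.final_label = ai_label
    else:
        d.final_label = clinician_label or ai_label
        d.reviewed_by_clinician = True
    audit_trail.append(d)
    return d

route("p1", "atrial fibrillation", 0.97)          # auto-accepted, still logged
route("p2", "normal sinus rhythm", 0.62, "afib")  # clinician overrides
```

The audit trail records both automated and overridden decisions, which is what makes post hoc review and clinician feedback loops possible.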

Recommendations

To transition from hype to sustainable hope in AI-driven health care, it is essential to strengthen evidence and regulatory oversight by mandating standardised reporting frameworks such as TRIPOD-AI, CONSORT-AI and developmental and exploratory clinical investigations of decision-support systems driven by artificial intelligence (DECIDE-AI), alongside multicentre, prospective validation. Ensuring fairness requires the adoption of standardised debiasing protocols and the inclusion of diverse populations in model training and evaluation. Robust privacy must be maintained through federated learning frameworks that align with global standards while balancing transparency and security. XAI should be integrated to enhance interpretability and supported by audit trails and human oversight mechanisms. Finally, educating clinicians through AI-integrated curricula will empower them to critically assess AI tools and promote safe, effective and ethical implementation across healthcare systems.

CONCLUSION

AI is no longer a distant vision – it is actively shaping how we diagnose, treat and manage health. Yet, its future will not be defined by technology alone, but by how thoughtfully we choose to apply it. The challenge now is not just to build smarter systems, but to ensure that they serve real clinical needs, respect human values and earn the trust of those who use them. As AI continues to evolve, its true success in biomedicine and health care will depend on our ability to combine innovation with responsibility.

References

  1. Health Technologies. Ottawa, ON: Canadian Agency for Drugs and Technologies in Health. Available from: https://www.ncbi.nlm.nih.gov/books/nbk613808 [Last accessed on 2025 Jun 29]
  2. Artificial Intelligence (AI)-Powered Documentation Systems in Healthcare: A Systematic Review. J Med Syst. 2025;49:28.
  3. NexusMD Lands $6.3m to Save Aussie Hospitals with AI. The Australian. Available from: https://www.theaustralian.com.au/business/technology/ai-health-startup-nexusmdlands63mtosaveaussiehospitals/newsstory/44371e3b26a8eb35c64a273c871a662 [Last accessed on 2025 Jun 29]
  4. Artificial-Intelligence-Based Clinical Decision Support Systems in Primary Care: A Scoping Review of Current Clinical Implementations. Eur J Investig Health Psychol Educ. 2024;14:685-98.
  5. Effectiveness of Artificial Intelligence (AI) in Clinical Decision Support Systems and Care Delivery. J Med Syst. 2024;48:74.
  6. AI-Driven Clinical Decision Support Systems: An Ongoing Pursuit of Potential. Cureus. 2024;16:e57728.
  7. The Integration of Artificial Intelligence into Clinical Medicine: Trends, Challenges, and Future Directions. Dis Mon. 2025;71:101882.
  8. Navigating the Landscape of Multimodal AI in Medicine: A Scoping Review on Technical Challenges and Clinical Applications. Med Image Anal. 2025;105:103621.
  9. How Artificial Intelligence is Shaping Medical Imaging Technology: A Survey of Innovations and Applications. Bioengineering (Basel). 2023;10:1435.
  10. Application of Artificial Intelligence in Medical Education: Current Scenario and Future Perspectives. J Adv Med Educ Prof. 2023;11:133-40.
  11. A Scoping Review of Artificial Intelligence in Medical Education: BEME Guide No. 84. Med Teach. 2024;46:446-70.
  12. Evaluation and Regulation of Artificial Intelligence Medical Devices for Clinical Decision Support. Annu Rev Biomed Data Sci. 2025. doi: 10.1146/annurev-biodatasci-103123-095824. Epub ahead of print. PMID: 39971383.
  13. Guidelines and Standard Frameworks for Artificial Intelligence in Medicine: A Systematic Review. JAMIA Open. 2025;8:ooae155.
  14. Fairness of Artificial Intelligence in Healthcare: Review and Recommendations. Jpn J Radiol. 2024;42:3-15.
  15. Disparities in Dermatology AI Performance on a Diverse, Curated Clinical Image Set. Sci Adv. 2022;8:eabq6147.
  16. Your Robot Therapist is not Your Therapist: Understanding the Role of AI-Powered Mental Health Chatbots. Front Digit Health. 2023;5:1278186.
  17. Addressing Bias in Big Data and AI for Health Care: A Call for Open Science. Patterns (N Y). 2021;2:100347.
  18. Data Privacy in Healthcare: Global Challenges and Solutions. Digit Health. 2025;11.
  19. The Role of Artificial Intelligence for the Application of Integrating Electronic Health Records and Patient-Generated Data in Clinical Decision Support. AMIA Jt Summits Transl Sci Proc. 2024;2024:459-67.
  20. The Compelling Need for Shared Responsibility of AI Oversight: Lessons From Health IT Certification. JAMA. 2024;332:787-8.
  21. Toward a Responsible Future: Recommendations for AI-Enabled Clinical Decision Support. J Am Med Inform Assoc. 2024;31:2730-9.