Will AI End Medical Hegemony?
There is a sign in the waiting room of my local general practice: “Please be considerate of the GP’s time – one issue only per appointment.”
As a middle-aged woman, I’m not entirely sure which issue to present. Is it the persistent, painful and lumpy breasts, anxiety, night sweats, palpitations, abdominal pain and bloating, headaches, declining eyesight, or joint pain?
The most urgent issue seems to be the palpitations. I raise this, quickly mumbling the context of all the other symptoms, and find myself referred to a cardiologist for a stress test. The cardiologist is almost dismissive as I walk in: outwardly healthy female, wasting my time again. The results of the stress test appear to confirm his assumption, and without eliciting any lifestyle, relationship, mental health or other broader context – but after a $250 out-of-pocket bill – I’m sent away, apparently healthy, yet with all the same symptoms.
A New First Opinion: AI as the Patient’s Ally
What happens if I put the same group of symptoms into ChatGPT-4.0?
Unlike the cardiologist and GP, it allows me to include all of my symptoms while asking further questions about my medical and health history. It provides a list of possible explanations, ranks them by prevalence, and gives a well-articulated breakdown of potential causes. It suggests perimenopause as a probable factor, considering my age, symptoms, and overall health status. It also lists anxiety, hormonal imbalance, and nutritional deficiencies as possibilities, alongside reassuring explanations of why something more sinister is unlikely. It even offers a structured approach to managing symptoms—something neither my GP nor my cardiologist did (and something my GP could not do in a 6–10 minute appointment).
I am an educated woman with a health background, so I can make sense of this information. But what about those who are not?
With long delays to access a GP and a fee of around AUD $130 per visit ($40–50 out of pocket), am I expected to book eight separate appointments to discuss each symptom individually? This isn’t a viable model of care.
My example is not unique. Social media, newspaper editorials, and Reddit threads are full of people whose first port of call for symptom checks has become AI (as illustrated in this article). A local herbalist told me of a customer who had their genome mapped, a full blood test, and a hair follicle analysis, then uploaded the results to ChatGPT. They were now seeking non-pharmaceutical treatments based on the AI’s interpretation. The herbalist (a trained naturopath) noted that while the AI’s insights weren’t perfect, they were an excellent guide, useful for supporting the management and prevention of some of the issues the customer had identified.
Health professionals believe that AI will be a tool to support their clinical decision-making. However, these examples show that AI is already challenging medical hegemony by shifting consumer behaviours well beyond what the health system is equipped to respond to—and we are only at the beginning.
The Limits of Fragmented Care
The Western health model has long been based on medical hegemony, where health professionals (doctors – primarily – as well as nurses and allied health practitioners) serve as gatekeepers of expert knowledge. This knowledge is exclusive, taught at universities, embedded through practice, and protected by regulation. This structure assumes that expertise should remain within a controlled professional domain, ensuring that only qualified individuals can diagnose and treat illness.
But what happens when machines (AI) have access to more knowledge than any individual practitioner?
AI algorithms can already collate, cross-reference, and synthesise vast amounts of data, including medical literature, patient case studies, and real-world health data in seconds. What they currently lack is access to a patient’s full medical history, personal context, and real-world lived experience (unless the patient has provided that information – which is possible). However, as my own case demonstrates, our current medical model isn’t excelling at this either. We lack truly integrated medical records that offer a complete picture of a patient’s health over time. Even when GPs have access to partial histories (as in my case, with 15 years of visits on record), they often don’t interrogate them holistically.
Consider a real-world case: a 53-year-old woman had a lung X-ray for work, which revealed a shadow on her lung. Due to medical error and systemic inertia, it took over a year for her GP to refer her to a respiratory clinic. The respiratory physician identified pulmonary fibrosis—rare in a woman of her age—and, unable to explain it, sent her home with a vague and potentially incorrect diagnosis. Had they checked her full medical history, they would have seen that she had a history of anorexia nervosa, a condition that can lead to pulmonary fibrosis. A quick AI-powered review of her records – assuming they were comprehensive and up to date – might have flagged this connection immediately, guiding more precise diagnosis and management.
The Rise of Algorithmic Closure
What we are witnessing is the transition from medical hegemony to algorithmic closure—where AI-driven systems, rather than human professionals, determine who has access to knowledge and expertise.
Algorithmic closure refers to AI’s growing role in filtering, prioritising, and distributing knowledge – in this case, medical knowledge. The implications for health work are profound:
- AI Challenges the Knowledge Monopoly – If AI systems consistently outperform doctors in diagnostic accuracy and treatment recommendations, the rationale for a rigid, hierarchical health workforce weakens.
- Patients Gain New Routes to Expertise – Rather than waiting for a GP appointment, patients are increasingly turning to AI for initial assessments, second opinions, and even guidance on managing chronic conditions.
- Regulatory Structures Are Being Outpaced – AI is evolving faster than professional boards and medical institutions can regulate. Who holds liability if AI makes an incorrect diagnosis? The EU’s AI Act and debates over AI in healthcare liability underscore the urgent need for governance frameworks.
- Healthcare Shifts from ‘Single-Symptom’ Medicine to Holistic Care – AI has the potential to consider entire patient histories, genetic predispositions, and social determinants of health in ways that current health models do not. Studies already show AI outperforming radiologists in image-based diagnostics and integrating social factors into treatment planning.
The Role of AI: Support, Not Replacement
AI is not a replacement for human healthcare providers. It has limitations:
- Dependence on patient-reported data, which can be incomplete or inaccurate.
- Lack of lived experience and intuition, which doctors develop over years of practice.
- Algorithmic bias, where AI relies on historical data that may reflect systemic inequalities.
- Absence of human empathy, which remains vital for compassionate care.
The Future of AI in Healthcare: Three Possible Scenarios
The future of AI’s integration into healthcare will depend on governance and societal choices. Three possible scenarios emerge:
- Corporate Capture: Professions could become subcontractors to tech firms, with AI dominating decision-making and professionals reduced to verifiers of algorithmic outputs. In this model, AI dictates medical protocols, legal rulings, and financial decisions, shifting expertise away from public institutions and into corporate-controlled proprietary systems. Algorithmic closure would fully replace traditional professional closure, leaving health professionals with diminished autonomy.
- Hybrid Coexistence: A balanced approach would see AI handling instrumental tasks—such as diagnostic pattern recognition and research synthesis—while human professionals retain moral, interpretive, and discretionary authority. This could lead to the emergence of new hybrid roles, including AI auditors and algorithm interpreters, ensuring that human oversight remains central in high-stakes decision-making. Regulatory mechanisms like the EU AI Act and the U.S. AI Bill of Rights are already being developed to prevent full AI autonomy in professional governance. In healthcare, clinicians are being trained in AI literacy to critically evaluate AI-generated recommendations before applying them in patient care.
- Public Reclamation: The most radical response would involve regulating AI as a public good, ensuring transparency, accountability, and democratic control over AI-driven knowledge. In this scenario, professions would not be displaced but instead take on the role of AI governance stewards, ensuring that AI enhances human expertise rather than replacing it. Governments and professional bodies would enforce strict oversight and ethical safeguards, pushing back against corporate AI monopolies.
The Future: Empowering Patients Through AI and Integrated Data
For the health system, AI presents both a challenge and an opportunity. With workforce shortages and long waiting lists, AI can empower patients to take control of their own health, potentially reducing pressure on overburdened systems while integrating more than just a narrow, medical perspective on what constitutes ‘health data’. The ideal outcome for my own case? A coordinated response—offering education on perimenopause, reassurance, dietary guidance, and perhaps a referral to yoga or meditation classes rather than an expensive and ultimately unnecessary cardiology visit.
Conclusion: Navigating AI’s Role in Healthcare
AI in healthcare presents both immense opportunities and significant risks. The path we take will depend on how AI governance evolves and whether professionals, policymakers, and the public advocate for balanced, ethical integration. While AI can revolutionise diagnostics, patient management, and healthcare access—particularly in underserved areas—its use must be carefully regulated to prevent corporate overreach and ensure it serves public, not just private, interests.
The future of healthcare is not a binary choice between AI and human clinicians. Instead, it is about defining a collaborative framework where AI augments, rather than supplants, professional expertise. If integrated responsibly, AI could unlock a more holistic, efficient, and patient-centered healthcare system. However, failing to establish robust governance mechanisms could see AI reshape healthcare in ways that diminish professional autonomy and public accountability.
The time to shape that future is now.
For a sociological analysis of the way that AI will impact on traditional professional power and authority, see my article: From Bureaucratic Closure to Algorithmic Governance: AI and the Neo-Weberian Crisis of the Professions