The World Health Organization (WHO) recently released new guidelines on the ethics and governance of artificial intelligence as it applies to large language models (LLMs) in the health sphere. These comprehensive recommendations aim to maximize the benefits of this emerging technology while minimizing potential harms.
As the capabilities of models like ChatGPT and Gato continue to advance rapidly, both excitement and apprehension surround their potential health applications. That is why the WHO saw fit to establish guidance on the responsible development and deployment of such systems. This article provides an overview of the key facets of this forward-thinking WHO report.
Broad Applications and Benefits
The WHO guidelines highlight the tremendous versatility of LLMs to augment nearly every facet of healthcare and medical research. These models can analyze massive datasets, generate prognostic hypotheses, synthesize findings across papers, suggest novel treatment plans, and more. Specifically, LLMs hold promise to enhance diagnosis and treatment of illness, drug discovery and development, personalized medicine, public health surveillance, health education for professionals and consumers, and medical administration and scheduling.

For example, an LLM could ingest hundreds of thousands of patient cases and help identify risk factors for conditions, or subgroups that respond better to particular therapies. A model could also screen libraries of molecules to accelerate the identification of promising new drug compounds. LLMs likewise have the potential to expand access to health expertise: an AI assistant could provide customized medical advice to underserved populations, helping address health disparities, and such systems could aid overburdened doctors and nurses by automating routine administrative tasks.
Concerns and Considerations
However, the WHO rightly identifies pitfalls alongside the upsides of deploying extremely capable, scalable LLMs in the health arena. If inappropriately designed or applied, these models could directly or indirectly cause harm. For instance, erroneous medical advice or flawed drug development guidance from an unreliable LLM could negatively impact treatment. Over-automation could also reduce human empathy in healthcare administration.
Broader societal risks include the potential to spread misinformation, violate privacy, perpetuate bias, and undermine informed consent. For example, training on data scraped from the web can teach a model problematic stereotypes, and the ability to generate highly convincing text could enable deception and manipulation of patients or other stakeholders. That is why the WHO calls for measures to ensure LLMs exemplify trustworthiness, equity, accountability, safety, explainability, and good governance. Responsible development demands assessing benefits and risks across these dimensions prior to any health application.
Key WHO Recommendations
The WHO guidelines put forward detailed recommendations across five foundational principles for the ethical governance of health-related LLMs:
- Protecting Human Autonomy
  - Ensure informed consent regarding LLM use, considering consent may be infeasible for training data sourcing
  - Allow human overriding of LLM guidance
- Promoting Human Well-Being
  - Validate safety and effectiveness for intended contexts
  - Monitor ongoing impacts to enable appropriate risk mitigation
- Ensuring Fairness & Equity
  - Proactively assess disparate impacts on vulnerable populations
  - Address high financial costs limiting access to benefits
- Fostering Explainability & Transparency
  - Disclose key design choices and limitations
  - Provide explanatory information to accompany LLM outputs
- Enabling Responsible Stewardship
  - Undertake extensive pilot testing prior to deployment
  - Implement ongoing review processes over the LLM lifecycle
Furthermore, given the complexity and novelty of LLMs, the WHO advises adopting an adaptive approach as experience in this sphere accrues. We must actively listen to diverse voices to identify emergent issues and insights that can refine how these principles are put into practice.
Ongoing Dialogue Necessary
These WHO guidelines represent seminal guidance for the health field as we stand on the brink of an era of massively capable models with expanding autonomy. Responsible innovation demands that we establish governance to align these emerging technologies with ethical values and domain priorities.
Yet many crucial conversations remain regarding exactly how to translate laudable principles into effective, contextualized practices. Determining proper protocols demands deliberation among ethicists, technologists, domain experts, policymakers, and civil society.
By facilitating nuanced, inclusive discussions around the responsible implementation of LLMs for health, we can help this transformative innovation flourish and uplift populations worldwide. But we must remain vigilant, flexible, and humble given the uncertainties that remain. If governance grows as ambitious as the technology itself, society can steer toward optimistic horizons.
Contact us at info@cosmhq.com if you need help with your clinical evaluation efforts.
Disclaimer - This post is intended for informational purposes only and does not constitute legal advice. The materials are provided in consultation with US federal law and may not encompass state or local law.