Navigating Risk Management for AI/ML Medical Devices

Applying ISO 14971 to AI/ML medical devices: Considerations for risk management


Summary of Guidance on Applying ISO 14971 to Medical Devices Incorporating AI and ML Technology

If you’re a medical device manufacturer, chances are you’re familiar with the risk management process detailed in ISO 14971:2019. This well-known standard provides the basis for risk management activities across the product life cycle of regulated medical devices. The process, depicted in the figure below, is generally accepted, well understood, and applies to all regulated medical devices.
As medical device technology advances, particularly in the AI/ML realm, manufacturers face the task of adapting this process to novel technologies. Certain considerations must be taken into account that have not typically applied to traditional medical devices. The AAMI CR34971:2022 consensus report [2] exists to bring those considerations to light and to provide guidance to those applying ISO 14971 to regulated medical technologies that incorporate AI/ML.

Figure 1 - Risk Management Process [1]

AI and ML Risks

Risks from AI-based technology are different from those of traditional software: they can extend beyond a single enterprise, span organizations, and lead to societal impacts. AI systems also bring a set of risks that are not comprehensively addressed by current risk frameworks and approaches. Some AI system features that present risks can also be beneficial. Identifying contextual factors helps determine the level of risk and the potential management effort.
Compared to traditional software, AI-specific risks that are new or increased include the following:
  • The data used for building an AI system may not be a true or appropriate representation of the context or intended use of the AI system, and the ground truth may either not exist or not be available. Additionally, harmful bias and other data quality issues can affect AI system trustworthiness, which could lead to negative impacts.
  • AI systems depend and rely heavily on data for training, and that data typically carries greater volume and complexity than the data used by traditional software.
  • Intentional or unintentional changes during training may fundamentally alter AI system performance.
  • Datasets used to train AI systems may become detached from their original and intended context, or may become stale or outdated relative to the deployment context (a minimal drift-check sketch follows this list).
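
To make the last risk in the list above concrete, here is a minimal sketch (not drawn from the consensus report) of one way a manufacturer might flag drift between training data and deployment data, using a per-feature two-sample Kolmogorov-Smirnov test. The feature names, significance threshold, and synthetic data are illustrative assumptions, not requirements of ISO 14971 or AAMI CR34971.

```python
# Illustrative sketch only: flag features whose deployment distribution has
# drifted away from the training distribution. Names, threshold, and data are
# hypothetical; they are not prescribed by ISO 14971 or AAMI CR34971.
import numpy as np
from scipy.stats import ks_2samp


def flag_feature_drift(train, deployed, feature_names, alpha=0.01):
    """Return the names of features whose deployment distribution differs
    significantly from the training distribution (two-sample KS test)."""
    drifted = []
    for i, name in enumerate(feature_names):
        statistic, p_value = ks_2samp(train[:, i], deployed[:, i])
        if p_value < alpha:  # distribution shift detected for this feature
            drifted.append(name)
    return drifted


# Example usage with synthetic data standing in for real device inputs
rng = np.random.default_rng(0)
train_data = rng.normal(0.0, 1.0, size=(5000, 2))
field_data = np.column_stack([
    rng.normal(0.0, 1.0, 2000),   # stable feature
    rng.normal(0.8, 1.0, 2000),   # shifted feature -> candidate "data drift"
])
print(flag_feature_drift(train_data, field_data, ["heart_rate", "spo2"]))
```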

What’s New In AAMI CR34971?

The essence of the risk management process remains unchanged for AI/ML devices. You still need to create the usual documentation within a risk management file, such as a risk management plan and a risk management report. Annex A of the consensus report provides a useful overview of the risk management process.

It is during the implementation of risk management activities that manufacturers need to carefully factor in risks specific to AI/ML, such as those related to the algorithm, data management, bias, data storage/security/privacy, overtrust, and adaptive systems, along with other safety considerations; all are detailed within the AAMI consensus report (see Annex B for several examples of ML-related hazards). Identifying hazards and hazardous situations for an AI/ML device requires special considerations, such as how (or whether) incorrect software configurations could affect device performance, or whether the level of autonomy built into the device can present unwanted risks (see Annex C for additional considerations for autonomous systems).

When applying risk control measures, manufacturers should consider both data quality risk controls, which concern data integrity, and operational risk controls, which involve human interaction with the software features.

For production and post-production, monitoring device performance over time is especially important because of the adaptive nature of AI/ML systems and their tendency to learn over time. That adaptivity can also lead to data drift or use drift, properties that don’t typically apply to traditional medical devices but are very relevant for AI/ML-driven technologies; a simple monitoring sketch follows this paragraph. Finally, personnel involved in AI/ML development tend to have specific training, qualification, and professional development considerations compared to traditional software developers. A good discussion of personnel qualifications is presented in Annex D of the consensus report.
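
As a companion to the post-production monitoring point above, the following is a minimal sketch, assuming the manufacturer can pair each model prediction with a later-confirmed clinical outcome, of a rolling performance monitor that triggers a risk review when accuracy falls below a predefined acceptance criterion. The window size and threshold are illustrative assumptions; in practice they would come from the risk management plan.

```python
# Minimal sketch of post-production performance monitoring for an AI/ML device.
# Assumption: each prediction can be paired with a later-confirmed clinical
# outcome. Window size and acceptance criterion are illustrative only.
from collections import deque


class PerformanceMonitor:
    def __init__(self, window=500, min_accuracy=0.90):
        self.results = deque(maxlen=window)  # rolling record of correct/incorrect calls
        self.min_accuracy = min_accuracy

    def record(self, prediction, confirmed_outcome):
        """Log whether the device's prediction matched the confirmed outcome."""
        self.results.append(prediction == confirmed_outcome)

    def rolling_accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_risk_review(self):
        # Flag for review only once the window is full and rolling accuracy
        # has dropped below the predefined acceptance criterion.
        return (len(self.results) == self.results.maxlen
                and self.rolling_accuracy() < self.min_accuracy)


# Example: feed in post-market cases and check whether a review is warranted
monitor = PerformanceMonitor(window=100, min_accuracy=0.95)
for prediction, outcome in [("positive", "positive")] * 90 + [("positive", "negative")] * 10:
    monitor.record(prediction, outcome)
print(monitor.rolling_accuracy(), monitor.needs_risk_review())
```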
It’s important to note that there is also a TIR34971:2023 version of this guidance. That version is a joint effort between AAMI and BSI, and the content of the TIR is substantively the same as the CR, with only minor spelling and formatting differences between the U.S. and British editions.

Conclusion

This AAMI consensus report is not a hard standard that manufacturers are expected to follow. Rather, it’s a useful guide that highlights the unique risks associated with AI/ML devices. It serves as a supplement to ISO 14971 and provides strong examples of risks that should be considered when applying the risk management process to medical devices that incorporate AI/ML. The annexes included within the consensus report offer practical examples that manufacturers can leverage or use as a starting point when performing risk management activities, specifically Annex B - Risk Management Examples and Annex C - Considerations for Autonomous Systems.

Ready to Unlock Risk Management for AI/ML?

Don't wait! Schedule a Call with Cosm today and discover how we can help you with your AI/ML product and risk management efforts.

REFERENCES

Image Source: Created with assistance from ChatGPT, powered by OpenAI

Disclaimer - https://www.cosmhq.com/disclaimer