Putting Regulation Into Place

In determining what regulations are needed to support the vision of safe and effective AI, healthcare has a potential advantage: it can draw on the lessons other industries have learned in integrating and deploying AI, including their challenges, successes, and mistakes.

With any new technology, unintended consequences are bound to occur, and stewardship of that technology requires vigilance and active surveillance to identify signals of a potentially undesirable impact. This is one reason why diversity and representation are so critical within AI. Unawareness of the perspectives, needs, and values of different cultures, genders, races, ethnicities, or religions can lead to the development of AI systems that lack fairness and worsen disparities. Combating this requires regulation that protects effectively against bias and minimizes the risk of direct or indirect harm from operationalizing AI systems.

Two questions must then be answered with regard to regulation: what should be regulated, and who will oversee that regulation? In the U.S., regulatory oversight of artificial intelligence in health has fallen primarily to the FDA; however, the questions of what the FDA is to regulate, and in turn how it is to regulate it, are far more complex and still a work in progress.

FDA regulation

Under FDA guidance, artificial intelligence and machine learning have been categorized as Software as a Medical Device (SaMD), which the International Medical Device Regulators Forum defines as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device."¹ Medical devices, including SaMD, follow three primary pathways of regulation through the FDA: 510(k), De Novo, and premarket approval (PMA). A fourth pathway, the Pre-Cert program, is still in development. At this time, the only machine learning algorithms that have been approved as SaMD are "locked" algorithms. A locked algorithm is one that does not continuously learn from its environment: after it has been trained, it is deployed as is. If any major update could impact the safety or efficacy of how the algorithm performs, the change must be submitted to the FDA for approval, analogous to requesting FDA approval of a new indication for an existing medication.
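To make the idea of a locked algorithm concrete, here is a minimal sketch, assuming a scikit-learn model and SHA-256 checksum pinning; the function names and workflow are illustrative, not any vendor's or the FDA's actual process. The approved artifact is fingerprinted at approval time, so a retrained or otherwise modified model fails verification and cannot be served without a new submission.

```python
import hashlib
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train once on synthetic data; from this point on, the model is "locked".
X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# At approval time: serialize the artifact and record its checksum.
artifact = pickle.dumps(model)
approved_sha256 = hashlib.sha256(artifact).hexdigest()

def load_locked_model(blob: bytes, expected_sha256: str):
    """Refuse to load any artifact that differs from the approved bytes."""
    if hashlib.sha256(blob).hexdigest() != expected_sha256:
        raise ValueError("model artifact differs from the approved version")
    return pickle.loads(blob)

# At deployment time: only the byte-identical approved model will load.
served_model = load_locked_model(artifact, approved_sha256)
print(served_model.predict(X[:5]))
```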

Unfortunately, this process is ill-equipped to handle the regulation of the many types of AI/ML-based software that do continuously learn and therefore fall outside the category of locked algorithms. As such, the FDA published a discussion paper, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback," in April 2019.² Currently, when software modifications are made to a medical device, a premarket submission may be required to assess the risk introduced by the change. The same approach has been proposed for AI/ML-based SaMD, whereby a software modification affecting device performance, safety, effectiveness, original intended use, or the algorithm itself would warrant a supplemental application. The FDA recognizes that the premise of AI/ML-based SaMDs is that they are continuously learning: over time, these algorithms may make recommendations and/or predictions that differ from those evaluated at the time of FDA approval. (Recall from Part 2 that when a model over-fits, it does not generalize well to new data, which compromises the accuracy of its output. Here the issue goes beyond over-fitting: the actual weights in the model itself can change, with potential consequences for both the safety and efficacy of the model even after it is approved.)
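To illustrate why such systems escape the locked-algorithm framework, the sketch below uses scikit-learn's SGDClassifier on synthetic data as a hypothetical stand-in for a deployed model: each new batch of production cases updates the weights, so the model in use quietly diverges from the version that was reviewed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# The "approved" model: trained once on the original development data.
X, y = make_classification(n_samples=500, random_state=0)
model = SGDClassifier(random_state=0)
model.partial_fit(X, y, classes=np.array([0, 1]))
approved_weights = model.coef_.copy()

# In production, a continuously learning system updates on new cases.
X_new, y_new = make_classification(n_samples=100, random_state=1)
model.partial_fit(X_new, y_new)

# The weights now differ from the version the regulator reviewed.
max_drift = np.abs(model.coef_ - approved_weights).max()
print(f"max weight change since approval: {max_drift:.4f}")
```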

In creating this proposal, the FDA seeks to formulate a new, total product lifecycle (TPLC) regulatory approach to cater to the unique features of AI/ML technologies. The salient points covered in this proposal include the following:

  • The introduction of the concept of Good Machine Learning Practices (e.g. model training, model validation, real-world performance monitoring, risk management protocols)

  • A total product lifecycle regulatory approach to ensure continual evaluation and monitoring of the SaMD from premarket submission through post-market performance

  • The types of AI/ML-based SaMD modifications that require premarket review, including modifications relating to clinical and analytical performance, the inputs used by the algorithm and their clinical association to the SaMD output, and/or the intended use of the SaMD

  • An expectation that manufacturers commit to transparency (e.g. updates to the FDA, device companies, clinicians, patients, and general users) and real-world performance monitoring (e.g. analytics, annual reports); a sketch of such monitoring follows this list
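As referenced in the last point, real-world performance monitoring can be as simple as tracking a discrimination metric over a sliding window of adjudicated production cases and alerting when it falls below the level reported at submission. The sketch below assumes a binary classifier; the baseline AUC, window size, and alert margin are illustrative values, not figures from FDA guidance.

```python
import random
from collections import deque

from sklearn.metrics import roc_auc_score

PREMARKET_AUC = 0.85  # hypothetical figure from the premarket submission
ALERT_MARGIN = 0.05

# Sliding window of (predicted probability, observed outcome) pairs,
# filled in as ground truth becomes available in production.
window = deque(maxlen=1000)

def record_case(predicted_prob: float, outcome: int) -> None:
    """Log one adjudicated case; check discrimination once the window fills."""
    window.append((predicted_prob, outcome))
    if len(window) < window.maxlen:
        return
    probs, outcomes = zip(*window)
    if len(set(outcomes)) < 2:  # AUC is undefined for a single class
        return
    auc = roc_auc_score(outcomes, probs)
    if auc < PREMARKET_AUC - ALERT_MARGIN:
        print(f"ALERT: rolling AUC {auc:.3f} is below the approved baseline")

# Simulated degraded model whose predictions are uncorrelated with outcomes.
random.seed(0)
for _ in range(1000):
    record_case(predicted_prob=random.random(), outcome=random.randint(0, 1))
```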

Lastly, one other regulatory option exists: the pathway for non-regulated devices. It is still possible to deploy complex AI/ML systems within healthcare, provided these systems operate in such a way that they are not considered devices requiring regulation. Examples include systems intended only for general health and wellness purposes. Additionally, some software used in clinical practice does not require regulation. For example, the FDA has exempted certain clinical decision support software: software that serves to augment clinical decision making, providing information that is always interpreted through the clinical judgment of a practitioner, does not require FDA clearance. If an AI/ML system were to make decisions autonomously, however, it would require FDA approval for use.

Outside the U.S., other countries have begun creating their own regulatory guidance around SaMD. Health Canada's guidance is aligned with the FDA's and follows a similar risk-based approach to evaluating medical devices.

*Visit our Resources page for a list of 510(k) clearances and other relevant links

Privacy laws and their implication on AI

In addition to the direct regulation of AI as a medical device, other laws have a more indirect impact on the development of AI. One such law is the EU's General Data Protection Regulation (GDPR), one of the toughest privacy laws in the world.³ It imposes significant fines on any organization that violates it and sets rules for the collection and use of data about people in the EU. Moreover, the law applies not only to organizations operating in the EU but to any organization that collects or processes the data of EU citizens or provides them with products or services. The law is important to be aware of because some argue that the GDPR may hinder innovation in artificial intelligence, given the difficulty of developing AI in compliance with the data-subject privacy rights it outlines.⁴ Specifically, the right against automated decision-making, the right to erasure, and the right to explanation all pose challenges.

As mentioned in previous sections, deep learning models are "black box" algorithms, which by definition are not truly explainable and could therefore run afoul of the "right to explanation" outlined in the GDPR. While various techniques have been developed to provide some insight into how black box algorithms make decisions, they offer only educated hypotheses about how a decision might have been reached; they cannot reconstruct how a particular decision actually was made. Depending on how the GDPR is interpreted, the ability to develop and deploy AI-based systems that rely on deep learning models, including image recognition systems, could face obstacles.
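One widely used probe of this kind is permutation importance, sketched below with scikit-learn on synthetic data. Note what it does and does not provide: a global estimate of which inputs the model depends on, not an account of how any individual decision was reached, which is precisely the gap at issue under a "right to explanation".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on synthetic data.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure how much the test score drops;
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(black_box, X_te, y_te,
                                n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```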

  1. Center for Devices and Radiological Health. Software as a Medical Device (SaMD). Food and Drug Administration (FDA). Published December 2018. https://www.fda.gov/medical-devices/digital-health/software-medical-device-samd

  2. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. Food and Drug Administration (FDA). Published April 2019. https://www.regulations.gov/document?D=FDA-2019-N-1185-0001

  3. What is GDPR, the EU's new data protection law? GDPR.eu. Published November 7, 2018. https://gdpr.eu/what-is-gdpr/

  4. Alford J, Emamian M, Galik E, Gottlieb A, Sevener L. GDPR and Artificial Intelligence. The Regulatory Review. Published May 9, 2020. https://www.theregreview.org/2020/05/09/saturday-seminar-gdpr-artificial-intelligence/