Biden-Harris administration's late 2023 efforts to enhance AI oversight in health care and life sciences industries preview what lies ahead in 2024
2024 PRINDBRF 0025
By Christine Moundas, Esq., Minal Caron, Esq., and Elana Bengualid, Esq., Ropes & Gray
Practitioner Insights Commentaries
January 12, 2024
(January 12, 2024) - Ropes & Gray attorneys Christine Moundas, Minal Caron and Elana Bengualid discuss initiatives in President Joe Biden's October executive order on artificial intelligence and a Department of Health and Human Services fact sheet relevant to health care and life sciences companies.

Introduction

2023 brought an explosion in the availability of, and public interest in, artificial intelligence ("AI") tools. This "boom" reinforced the Biden-Harris Administration's (the "Administration") ongoing interest in developing dedicated oversight frameworks governing the creation, deployment and use of AI tools, culminating in the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (the "AI Executive Order"), issued on October 30, 2023.1
The AI Executive Order provides a roadmap for the federal government's development of guidance and standards governing the safe, equitable, and secure creation, use, and deployment of AI.
The AI Executive Order is particularly important to the health care and life sciences industries.
Specifically, a primary focus of the AI Executive Order is the directive to federal agencies to develop, within the next year or so, guidance, guidelines, regulations, and other tools to govern and oversee the development and deployment of AI that will affect stakeholders across the health care landscape — including providers, drug and device manufacturers, companies developing digital health tools, institutions that conduct federally-funded research, and private equity and other investors in the health care field.
While the AI Executive Order does not create any immediate guidance or regulations that must be implemented today, it provides critical insights and over-arching policy frameworks that institutions and companies involved in the creation, deployment, and use of AI tools should evaluate now.
Building upon the AI Executive Order, on December 14, 2023, the Administration issued a fact sheet (the "HHS Fact Sheet") announcing voluntary commitments by 28 prominent health care organizations "to help move toward safe, secure, and trustworthy purchasing and use of AI technology."2
The HHS Fact Sheet also emphasizes eleven examples of high-impact activities undertaken by the U.S. Department of Health and Human Services ("HHS") relating to responsible oversight of AI technologies in the health care and life sciences industries. The private sector's increased emphasis on AI oversight initiatives, in conjunction with directives set forth in the AI Executive Order, suggests that 2024 will be a busy year for the promulgation of, and compliance with, AI regulatory frameworks.
This article summarizes the initiatives set forth in the AI Executive Order and the HHS Fact Sheet, as applicable to the health care and life sciences industries, and flags key takeaways from these Administration developments for health care and life sciences companies and institutions that are developing and using AI tools to support their business activities.

HHS AI initiatives to date

HHS has already been very active in furthering initiatives that seek to ensure safe and responsible advancement of AI applicable to the health care and life sciences industries — albeit not necessarily in a comprehensive federally-coordinated manner.
The Administration highlights many of these activities that have been undertaken to date in the HHS Fact Sheet. For example:
•Office of the National Coordinator for Health Information Technology (ONC) — On December 13, 2023, ONC finalized a rule that increases algorithmic transparency for the use of predictive AI in electronic health records (EHR).3
•Food and Drug Administration (FDA) — To date, the FDA has cleared, authorized or approved upwards of 690 AI-enabled medical devices to improve clinical diagnosis and treatment as well as expand access to patient care.4 Additionally, in May 2023, the FDA released two exploratory discussion papers on the use of AI for drug manufacturing and drug development, titled "Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products" and "Artificial Intelligence in Drug Manufacturing."5
•Office for Civil Rights (OCR) — In August 2022, OCR proposed a rule to clarify that Section 1557 of the Affordable Care Act prohibits covered entities from using clinical algorithms to discriminate against patients.6
•Centers for Medicare and Medicaid Services (CMS) — CMS is currently assessing whether the use of algorithms by health care providers and health plans to identify high-risk patients and manage costs introduces bias, affecting the delivery of care.
•Agency for Healthcare Research and Quality (AHRQ) — In December 2023, the AHRQ published a framework that addresses structural racism and discrimination across an algorithm's lifecycle so as to promote health equity.7
•Secretary's Advisory Committee on Human Research Protections (SACHRP) — In October 2022, the SACHRP issued certain recommendations to be considered by institutional review boards (IRBs) when AI tools are used in human subjects research, including with respect to privacy protections, noting that formal federal guidance is required.8
•Administration for Children and Families (ACF) — In September 2022, ACF, in coordination with the Office of the Assistant Secretary for Planning and Evaluation (ASPE), published a report on their study of issues and needs associated with the use of AI in the health and human services sectors.9
•National Institutes of Health (NIH) — NIH is using and investing in the use of AI tools to research cancer, Alzheimer's disease, mental illness and autism-spectrum disorders.
•Centers for Disease Control and Prevention (CDC) — CDC is using AI tools to respond to the opioid epidemic, disease outbreaks and to address other public health measures.
•Administration for Strategic Preparedness and Response (ASPR) — ASPR currently leverages AI tools for COVID-19 data collection, analysis and forecasting efforts.

Health care and life sciences focus of AI Executive Order

The AI Executive Order includes a significant focus on issues applicable to the heavily-regulated health care and life sciences industries.
The use of AI to support various facets of health care and life sciences applications — including, for example, drug and device development, digital health tools that rely on personal health information, and clinical trial recruitment — already implicates several laws and regulations, such as the Federal Food, Drug, and Cosmetic Act ("FDCA"), the Health Insurance Portability and Accountability Act of 1996 and its implementing regulations, as amended ("HIPAA"), state privacy laws, and federal and state laws governing fraud, waste, and abuse (e.g., False Claims Act and Anti-Kickback Statute).
None of these regulatory frameworks, however, specifically address the use of AI in the provision of health care or scientific research and development, including drug development.
The AI Executive Order acknowledges that a multi-pronged regulatory strategy must be undertaken by the federal government to ensure effective oversight of AI in health care and other fields, providing that agencies "are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from ... threats to privacy and to address other risks that may arise from the use of AI," and that agencies should "consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI."

Key initiatives set forth in AI Executive Order

In furtherance of this strategy, the AI Executive Order requires many federal agencies to rapidly develop a series of regulatory road maps and specific guidances regarding oversight of activities in the health care and life sciences fields involving applications of AI.
Below, we outline key initiatives set forth in the AI Executive Order and emphasized in the HHS Fact Sheet that will be highly impactful to the health care and life sciences industries, including: (1) creation of an HHS AI Task Force, (2) development of a strategy for the regulation of AI in drug development, (3) establishment of a voluntary safety program for AI used in clinical settings and (4) creation of federal funding opportunities to support research involving AI.

1. Creation of HHS AI Task Force

The AI Executive Order seeks to coordinate many of its health care-related objectives through the creation of an HHS AI Task Force (the "Task Force"), which must be established by January 28, 2024, by the Secretary of HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs.
The Task Force must, by January 28, 2025, develop a plan on the responsible use and deployment of AI tools in health care. The plan should include, as appropriate, policies, frameworks, and regulatory action and should address the use of AI in research and discovery, drug and device safety, health care delivery and financing, and public health.
Additionally, the AI Executive Order directs the Task Force to identify guidance and resources for the following areas within the health and human services sector: (i) development, maintenance, and use — with appropriate human oversight — of predictive and generative AI in health care and health care financing (e.g., quality measurement, performance improvement, program integrity, benefits administration, patient experience); (ii) long-term safety and real-world performance monitoring of AI tools; (iii) incorporation of equity principles in AI tools, specifically by using disaggregated data when developing new models, and identifying, monitoring, and mitigating algorithmic bias in existing models; (iv) incorporation of privacy and security safeguards to protect the confidentiality and integrity of personally identifiable information; (v) development, maintenance, and availability of documentation to help users assess whether uses of AI are appropriate and safe; (vi) collaboration with state, local, tribal, and territorial health and human services agencies to advance best practice AI use cases; and (vii) identification of ways in which AI can be used to promote workplace efficiency and satisfaction (e.g., reducing administrative burdens).
With respect to the Task Force's responsibility to address privacy and security safeguards for the use or disclosure of health information through AI tools, we note that there are currently no distinct provisions within the HIPAA Privacy Rule that establish clear principles as to how protected health information ("PHI") may be used by vendors that offer AI-powered solutions for covered entities or that seek to collaborate with covered entities in the development of AI tools.
Guidance from the HHS Office for Civil Rights, or even revisions to the HIPAA Privacy Rule, may be needed to ensure stakeholders understand, for example, the ground rules under the HIPAA Privacy Rule for covered entities and companies seeking to utilize a company to provide "data aggregation services" powered by AI, what other types of AI-powered activities can be construed as "health care operations" activities that may be performed by companies as a business associate, and how companies that are reliant on acquiring large amounts of data as business associates of covered entities to support AI offerings may lawfully retain PHI (if at all) for their own "proper management and administration."10
Note that the HHS Fact Sheet does not provide any additional insight into how HIPAA applies to the use of AI in health care.
In short, the Task Force will be responsible for many important developments relating to the use and deployment of AI tools in health care. Stakeholders will need to monitor Task Force developments and be ready to provide feedback to the Task Force on short notice, for example in response to any request for public comment by the Task Force on any of the developments outlined above.

2. Development of a strategy for the regulation of AI in drug development

The AI Executive Order directs the Secretary of HHS to develop, by October 30, 2024, a strategy for the regulation of AI in drug development. The strategy must: (i) define the objectives and principles necessary to regulate each phase of drug development; (ii) identify where additional formal (e.g., statutory or regulatory) or informal guidance is required; (iii) identify relevant financial and administrative considerations for new public/private partnerships; and (iv) consider any identified risks.
The AI Executive Order also requires the Secretary of HHS, by April 27, 2024, to direct HHS components to develop a strategy to determine whether AI tools are of appropriate quality. This includes developing an AI assurance policy and requisite infrastructure to enable pre-market assessment and post-market oversight of AI tools in the health care sector.
The development and rollout of these strategies will be significant to drug and device manufacturers, other translational researchers in industry and academia, and companies that are developing AI-powered tools that support the drug development process.
As noted above, FDA already employs strategies to regulate, for example, the development of AI-powered digital health tools that support clinical decision-making. The AI Executive Order seeks to accelerate these efforts by requiring the FDA to rapidly develop a strategy for achieving effective oversight of AI-powered drug development activities (e.g., through FDA guidance or revisions to regulations under the FDCA).

3. Establishment of a voluntary safety program for AI used in clinical settings

The AI Executive Order directs the Secretary of HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish, by October 29, 2024, an AI safety program in coordination with voluntary federally listed Patient Safety Organizations.
The program should: (i) establish a framework for approaching the identification and capturing of AI-produced clinical errors in the health care setting, as well as specifications for a system to track patient bias or discrimination-related incidents stemming from the use of AI; (ii) analyze input and output data to develop best practices to avoid said harms; and (iii) distribute those best practices to relevant stakeholders (e.g., health care providers).
Hospitals and other health care providers will need to track developments relating to this program to evaluate what additional obligations may be forthcoming, as these types of tracking efforts may require coordination amongst many different functions within a hospital, health system, or other provider environment.

4. Creation of federal funding opportunities to support research involving AI

The AI Executive Order directs the Secretary of HHS to identify and prioritize grantmaking and other federal awards to support the responsible and innovative development and use of AI in the biomedical sciences and other research disciplines.
The AI Executive Order empowers the National Science Foundation ("NSF") to serve as the leading governmental funding agency for AI-related work, charging NSF with funding additional National AI Research Institutes (i.e., providing significant funding awards to certain universities and other research institutions that will establish centers or institutes focused on AI-related work) and establishing, in partnership with the U.S. Department of Energy, a pilot program to train 500 new scientists by 2025 in "high-performance and data-intensive computing" to meet "the rising demand for AI talent."
The AI Executive Order also directs the Secretary of HHS to (i) collaborate with private sector actors that support the advancement of AI tools that profile immune responses for specific patients and (ii) accelerate grants awarded through the NIH Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Research Diversity (AIM-AHEAD) program and demonstrate current program activities in underserved communities.
As noted in the HHS Fact Sheet, the AIM-AHEAD program was recently announced by NIH and seeks to establish partnerships to increase participation of underrepresented researchers and communities in AI model development and to enhance the abilities of AI technology, especially with respect to EHR data.
In short, the AI Executive Order previews that substantial funding opportunities will be available both to traditional recipients of federal research funding (e.g., academic medical centers, universities, and other nonprofit organizations) and to for-profit, private sector companies. This apparent flood of new federal funding dedicated to AI will pose challenges to the grants-related compliance functions at research institutions and other funding recipient organizations.
It is likely that many of the researchers and departments that will receive such funding will not be intimately familiar with NSF, NIH, and other conflict of interest management processes.
There may be need, for example, to train many scientists and departmental leaders on the need to ensure that researchers who will be involved in government-funded AI research disclose all of their significant financial interests, including consulting and other work that is done outside of the scope of the researchers' institutional employment, and for institutions to ensure that proper processes are followed to evaluate whether their researchers have any financial conflicts of interest that require mitigation (e.g., implementation of conflict management plans).
Institutions seeking federal funding to carry out AI research will also have to account for funding agency requirements regarding data management and sharing, such as the new NIH Policy for Data Management and Sharing11 effective January 25, 2023, which requires NIH grant applicants to prospectively plan for how scientific data will be preserved and shared, and then to carry out such plans for funded projects.

Considerations for AI initiatives and compliance programs

The AI Executive Order puts in motion a process for significant regulatory developments that will have the potential to transform how health care and life sciences organizations need to conduct their research, product development, and other activities involving AI, and the HHS Fact Sheet demonstrates that the Administration will continue to focus on AI-related developments in the health care and life sciences industries.
Given the flurry of anticipated 2024 developments, industry stakeholders will be well-served by proactive planning for the likelihood of developments that will affect day-to-day activities and processes within an organization.
Strategies to be pursued now include conducting an inventory of existing uses and applications of AI within an organization, assessing what control frameworks already exist that cover AI-related activities, and benchmarking those frameworks against the policy principles highlighted in the AI Executive Order.
These types of activities will allow organizations to begin developing and refining AI governance and compliance frameworks now, so as to be best-positioned to respond and adapt to the wave of federal agency developments that emerge in 2024 and beyond and may necessitate significant changes to existing frameworks.
More specifically, consistent with the principles set forth in the AI Executive Order as well as the voluntary commitments detailed in the HHS Fact Sheet, health care and life sciences entities should consider the following:
•Implement policies and mechanisms to evaluate and monitor the use of AI, identify and mitigate security risks, and assess the validity, integrity, and reliability of AI.
•Ensure appropriate identification of potential regulatory requirements, such as the FDA's software as a medical device (SaMD) clearance requirements.
•Deploy trust mechanisms that inform users of AI-powered tools if the output has not been reviewed or edited by a human.
•Invest in AI-related education, training, development, and research.
•Ensure that AI is not used to perpetuate discrimination and bias in health care delivery.
•Ensure that the use of AI complies with applicable consumer protection and privacy laws so as to prevent fraud, unintended bias, discrimination, infringements on privacy, and other harms.
•Ensure that the collection, use and retention of data utilized in the creation, deployment and use of AI is lawful, secure and mitigates privacy and confidentiality risks.
•Negotiate appropriate terms with vendors that utilize or provide AI services.
•Collaborate with peers and partners to ensure alignment with fair, appropriate, valid, effective and safe AI principles.
•Implement a risk management framework that tracks AI-powered applications to identify and mitigate potential harms.
The application of AI in the health care and life sciences industries continues to be an exciting prospect. Nevertheless, there remains work to be done in both the private and public sectors to ensure the safe and reliable creation, deployment and use of AI tools.
While the AI Executive Order does not provide immediate, actionable guidance that will permit stakeholders to understand exactly how the creation and deployment of AI tools will fit into existing and new regulatory frameworks governing research and discovery, drug and device safety, health care delivery and financing, and public health, it does preview where the federal government is headed.
2024 will be a busy year for companies and institutions that are subject to the evolving regulatory frameworks contemplated by the AI Executive Order.
Notes
1 President Biden, Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), https://bit.ly/49rDjN2.
2 U.S. Dep't of Health & Human Servs., Fact Sheet: Biden Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI (Dec. 14, 2023), https://bit.ly/47rWyUy.
3 Final Rule, Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (Dec. 13, 2023), https://bit.ly/48ELIvp.
4 See U.S. Food & Drug Admin., Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices, https://bit.ly/3tGt9rW (last updated Dec. 6, 2023).
5 These papers provided only limited insight into FDA's strategy for regulating the use of AI in these areas, and instead served primarily as a request for feedback from interested stakeholders on issues such as the reliability of data and the development, monitoring, and validation of AI models.
6 87 Fed. Reg. 47824 (Aug. 4, 2022), https://bit.ly/3TOTXkA.
7 Kelley Tipton, Brian Leas, et al., Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare, Comparative Effectiveness Review No. 268 (Prepared by the ECRI-Penn Medicine Evidence-based Practice Center under Contract No. 75Q80120D00002), AHRQ Publication No. 24-EHC004, Rockville, MD: Agency for Healthcare Research and Quality (December 2023), https://bit.ly/3RFtrau.
8 Off. for Hum. Research Protections, SACHRP Recommendations: IRB Considerations on the Use of Artificial Intelligence in Human Subjects Research (Oct. 19, 2022), https://bit.ly/3vhgSdQ.
9 Brian Zuckerman et al., Options and Opportunities to Address and Mitigate the Existing and Potential Risks, as well as Promote Benefits, Associated with AI and Other Advanced Analytic Methods, OPRE Report #2022-253, Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services (2022), available at https://bit.ly/4aGiU7H.
11 NOT-OD-21-013, Final NIH Policy for Data Management and Sharing (Eff. Jan. 25, 2023), https://bit.ly/47lWZjg.
By Christine Moundas, Esq., Minal Caron, Esq., and Elana Bengualid, Esq., Ropes & Gray
Christine Moundas is a partner in Ropes & Gray's health care group and co-head of the firm's digital health initiative. Moundas, based in the firm's New York office, provides strategic, regulatory, compliance and transactional advice to health care technology companies, health systems, pharmaceutical companies and investors. She counsels clients on cutting-edge issues in the digital health space, including artificial intelligence, interoperability and big data initiatives. She can be reached at [email protected]. Minal Caron is counsel in the firm's health care practice group in Boston. He has significant experience advising clients across the health care and life sciences industries with respect to legal and regulatory issues relating to data privacy and scientific research and development issues, including internal investigations, complex transactions and regulatory matters. He can be reached at [email protected]. Elana Bengualid is an associate in the firm's health care practice in New York. She advises a broad range of health care clients on complex regulatory, compliance, enforcement and transactional matters. She can be reached at [email protected].
End of Document © 2024 Thomson Reuters. No claim to original U.S. Government Works.