AI Policy

RSNA ensures your voice as a radiologist is heard in Washington. We bring medical imaging expertise to the forefront of federal AI policy discussions and deliver trusted guidance to policymakers to protect patients.

RSNA’s Key AI Pillars and Policy Recommendations

Because radiology is the most AI-active field in medicine, with over 76% of FDA-cleared algorithms targeting medical imaging, RSNA supports policies that elevate the expertise of radiologists and promote AI uptake in medicine with flexible, efficient regulation.

Read More  

Accelerate Innovation with Targeted R&D

AI’s promise in healthcare hinges on real-world usability. Federal investment in research and infrastructure can close the gap between technical capability and clinical integration.

Build Trust Through Validation and Transparency

Widespread AI adoption in medicine depends on trust—among patients, providers, and tool developers. Standardized post-deployment monitoring and clear performance benchmarks are essential.

Enable Adoption Through Smart Regulation

AI can enhance care quality and efficiency, but only if federal regulatory frameworks protect patients without stifling innovation.

Strengthen AI Education and Workforce Readiness

As AI becomes embedded in care delivery, healthcare professionals need the skills to evaluate and use these tools effectively.

Recent Activities

RSNA Meets with HHS Leaders on AI and Health Care Technology

Members of RSNA’s Radiology Informatics Council met with Thomas Keane, MD, MBA, the Assistant Secretary for Technology Policy (ASTP) within the U.S. Department of Health and Human Services (HHS). The meeting explored policy topics of mutual interest to RSNA and ASTP/HHS, namely AI and interoperability in health care.

NIH Seeks Comments on the Agency’s AI Strategic Plan; RSNA Responds

RSNA commented on the NIH’s AI strategic plan, emphasizing the need for greater collaboration with FDA and HHS, and stronger public-private partnerships in AI development and deployment.

Read the Full Letter

FDA Seeks Comments on Its AI-Enabled Device Software Functions Recommendations; RSNA Responds

RSNA’s response to the FDA’s draft guidance calls for greater transparency and standardized outputs in AI medical devices, along with post-deployment monitoring and workflow integration for safe, effective use in radiology.

Read the Full Letter

RSNA Responds to a Trump Administration Request for Information on the Development of an AI Action Plan

RSNA’s comments identified five key considerations for the development of an AI Action Plan, including the need to foster trust through robust validation and transparency, encourage innovation through strategic R&D, and reduce barriers to AI adoption through effective and efficient regulatory pathways.

Read the Full Letter

FAQs for Radiologists: Section 1557 of the ACA

This FAQ addresses the use of clinical algorithms and artificial intelligence (AI) in radiology in light of the ACA Section 1557 final rule on “Nondiscrimination in Health Programs and Activities.”

Disclaimer: The information provided here is intended as general guidance and should not be considered legal advice. Radiologists should consult with legal counsel for specific questions regarding their obligations under the rule.

  • ACA Section 1557 mandates that all covered entities, including radiologists and radiology practices, be accountable for preventing discrimination, including any bias arising from the use of algorithms or AI.

  • Who enforces Section 1557, and what does it prohibit? The U.S. Department of Health and Human Services (HHS) conducts rulemaking and enforcement of Section 1557 of the 2010 Affordable Care Act (ACA), which “prohibits discrimination on the basis of race, color, national origin, age, disability, or sex (including pregnancy, sexual orientation, gender identity, and sex characteristics) in covered health programs or activities” [1].

    On April 26, 2024, HHS issued a final rule implementing Section 1557, which was published in the Federal Register on May 6, 2024 [2].

  • Does the final rule address AI and clinical algorithms? Yes. The rule directly addresses the use of “patient care decision support tools,” a category that covers clinical algorithms in electronic health records as well as AI technologies used in healthcare. It emphasizes that use of these tools must not result in discrimination based on race, color, national origin, sex, age or disability.

  • Which tools count as “patient care decision support tools” in radiology? The rule defines these tools to encompass any automated or non-automated tool, mechanism, method, technology or combination thereof used for clinical decision making. In radiology, this could include:

    • AI algorithms used for image analysis and interpretation that might produce varying results based on a patient’s race or ethnicity.
    • AI-based tools for prioritizing patients for imaging studies or interpreting results that might disadvantage certain demographic groups.
    • Risk assessment tools that use race, age or sex as factors in ways that could lead to biased outcomes.
  • Am I responsible for bias in AI tools I did not develop? Yes. Even if you are not the developer of the AI tool or algorithm, you are still obligated to make reasonable efforts to ensure its use doesn’t result in discriminatory practices. The rule places responsibility for nondiscriminatory use on the covered entity, which includes healthcare providers, such as radiologists, who receive federal funding (e.g., CMS reimbursement).

  • What might “reasonable efforts” entail? While the rule doesn’t specify exact steps, here is what “reasonable efforts” might include:

    Governance process: Establish and document a governance process for AI deployment and monitoring in your practice, including policies and procedures. Create a process for end users to report their concerns about potential discrimination in clinically deployed AI algorithms.
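
    As illustration only, and not a format the rule or HHS prescribes, a practice could capture end-user concerns as structured records. The sketch below is in Python, and every field name in it is hypothetical:

      # Hypothetical concern-reporting record for an AI governance process;
      # fields and names are illustrative, not prescribed by Section 1557.
      from dataclasses import dataclass, field
      from datetime import datetime, timezone

      @dataclass
      class AIBiasConcern:
          tool_name: str       # deployed algorithm the concern relates to
          reporter_role: str   # e.g., "radiologist", "technologist"
          description: str     # free-text account of the suspected bias
          affected_group: str  # group affected, if known (no patient identifiers)
          reported_at: datetime = field(
              default_factory=lambda: datetime.now(timezone.utc))

      # Example: record a concern for review by the practice's AI committee.
      concern = AIBiasConcern(
          tool_name="chest-xray-triage",
          reporter_role="radiologist",
          description="Triage scores appear systematically lower on portable exams.",
          affected_group="ICU inpatients",
      )
      print(concern)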

    Identifying potential bias: Stay informed about known biases in algorithms used in your field; for example, be aware of research highlighting racial discrepancies in devices such as pulse oximeters. When considering new software or tools, ask about the data used for training, whether bias was assessed, and whether bias mitigation strategies were employed. Review publicly available resources or consult with developers to understand whether your tools use race, color, national origin, sex (including sexual orientation and gender identity), age or disability as input variables. Lack of information will likely not be considered “reasonable.”
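
    As one hedged example of what such an inquiry could look like, the Python sketch below computes sensitivity separately for each subgroup. It assumes you already have de-identified per-case predictions and ground-truth labels grouped by a protected attribute; all names and data in it are invented:

      # Minimal subgroup-audit sketch: compare sensitivity across groups.
      # Assumes de-identified (group, prediction, label) triples with 0/1
      # predictions and labels; names and data are illustrative only.
      from collections import defaultdict

      def sensitivity_by_group(cases):
          tp = defaultdict(int)  # true positives per group
          fn = defaultdict(int)  # false negatives per group
          for group, pred, label in cases:
              if label == 1:  # only positive cases count toward sensitivity
                  if pred == 1:
                      tp[group] += 1
                  else:
                      fn[group] += 1
          return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

      cases = [("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 1),
               ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1)]
      print(sensitivity_by_group(cases))  # group_a ≈ 0.67, group_b ≈ 0.33

    A large gap between groups does not by itself prove discrimination, but it flags a disparity worth documenting and raising with the developer.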

    Mitigating discrimination: Ensure all radiologists in your practice know they cannot rely solely on algorithmic outputs; they must combine them with professional judgment and consider patient-specific factors. Educate all users during training about the potential bias inherent in AI tools. Develop convenient processes to document potential bias in practice. If you or any radiologist in your practice becomes aware of potential bias, ensure it is recorded and addressed. This oversight may include adjusting AI model thresholds, exploring alternative tools or reporting the issue to the developer or relevant authorities.
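
    One way “adjusting AI model thresholds” might play out, sketched in Python under assumed inputs (continuous model scores with ground-truth labels per group; the 0.90 target and the data are invented): pick the single operating threshold at which even the worst-performing subgroup still meets a target sensitivity.

      # Hedged sketch: choose one operating threshold so the worst-performing
      # subgroup still meets a target sensitivity. The inputs and the 0.90
      # target are assumptions for illustration.
      def worst_group_sensitivity(scored_cases, threshold):
          stats = {}  # group -> (true positives, total positives)
          for group, score, label in scored_cases:
              if label == 1:
                  tp, pos = stats.get(group, (0, 0))
                  stats[group] = (tp + (score >= threshold), pos + 1)
          return min(tp / pos for tp, pos in stats.values())

      def pick_threshold(scored_cases, target=0.90):
          # Scan candidate thresholds from strict to lenient; keep the
          # strictest one at which every group meets the target.
          for t in sorted({s for _, s, _ in scored_cases}, reverse=True):
              if worst_group_sensitivity(scored_cases, t) >= target:
                  return t
          return None  # nothing meets the target; escalate to the vendor

      scored = [("group_a", 0.9, 1), ("group_a", 0.7, 1),
                ("group_b", 0.6, 1), ("group_b", 0.5, 1)]
      print(pick_threshold(scored))  # 0.5: the strictest cutoff that works here

    Whether a shared threshold, an alternative tool, or escalation to the vendor is the right response depends on the clinical context; this sketch only illustrates the mechanics.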

  • Are smaller practices exempt or held to a different standard? The rule doesn’t include a specific “hardship clause” exempting smaller practices, but there is an understanding that larger practices will face closer scrutiny. When assessing a covered entity’s compliance with the “reasonable efforts to identify and mitigate bias” requirement, HHS may consider the entity’s size and available resources, meaning that larger, well-resourced entities will be expected to demonstrate greater diligence in meeting this provision.

  • Are there resources to help with compliance? Yes. The rule encourages the HHS Office for Civil Rights (OCR) to provide technical assistance to covered entities, and the OCR website provides up-to-date resources and guidance [3]. Additionally:

    • The FDA provides information on its role in regulating medical devices, including AI-based tools [4].
    • Professional organizations such as the American Medical Association and the American College of Radiology provide resources and best-practice guidelines for the ethical and equitable use of AI in radiology. These include tools like AI Central, a resource for evaluating imaging AI solutions, and the ARCH-AI certification, which recognizes sites that use AI safely and effectively [5].
    • Stay informed about research in medical journals highlighting potential biases in AI and algorithms.
  • When do I need to comply? The effective date of the final regulation is July 5, 2024. Note that there are multiple compliance dates by which covered entities must meet different Section 1557 requirements and provisions; the final rule includes a table summarizing them [6]. Several of the compliance requirements are presented below.

    Compliance Requirement                                                         | Compliance Date
    Assign a Section 1557 Coordinator                                              | Nov. 2, 2024
    Provide a Notice of Nondiscrimination to patients and the public               | Nov. 2, 2024
    Train staff on new policies and procedures                                     | May 1, 2025
    Ensure decision support tools are non-discriminatory                           | May 1, 2025
    Develop and implement non-discrimination policies and procedures               | May 1, 2025
    Provide notice about available language assistance services and auxiliary aids | May 1, 2025
  • What should I ask vendors about their AI software? Questions worth asking include:

    • Does the AI software fall under Section 1557 or any state nondiscrimination laws?
    • Does the software consider any input variables protected under Section 1557 or state non-discrimination laws, such as race, color, national origin, sex, age or disability? If yes, please state which variables and how they are used in the tool’s decision-making process.
    • What steps does the vendor take to mitigate potential harm to patients in protected groups?
    • Does the vendor audit software performance to ensure it does not inadvertently discriminate against protected groups? If yes, what are the frequency and criteria of such audits?
    • How does the vendor ensure transparency around non-discrimination compliance?
    • Does the vendor provide training to its staff and clients on non-discrimination and best practices in healthcare software?
  • What happens if we don’t comply? While the rule itself doesn’t specify penalties, non-compliance with Section 1557 can have serious consequences, including:

    • OCR investigations and potential enforcement actions
    • Legal action from patients who believe they have been discriminated against
    • Reputational damage to your practice

References

  1. HHS website: Section 1557
  2. Federal Register (5/6/24): Nondiscrimination in Health Programs and Activities
  3. HHS Office for Civil Rights: OCR
  4. FDA Medical Devices: Overview
  5. ARCH-AI: Certification; Transparent AI: AI Central
  6. Compliance date table (Federal Register summary): Summary of regulatory changes

Subscribe to the Washington Update Newsletter

Get monthly updates about current federal policy developments and RSNA’s initiatives to advance radiology on Capitol Hill.

View Current Issue