Section 1557 ACA - FAQs for radiologists

This FAQ page offers information regarding the use of clinical algorithms and AI in radiology, in light of the Affordable Care Act (ACA) Section 1557 final rule on “Nondiscrimination in Health Programs and Activities.”

Disclaimer: The information provided here is intended as general guidance and should not be considered legal advice. Radiologists should consult with legal counsel for specific questions regarding their obligations under the rule.

ACA Section 1557 mandates that all covered entities, including radiologists and radiology practices, be accountable for preventing discrimination, including any bias arising from the use of algorithms or AI.

  • The U.S. Department of Health and Human Services (HHS) conducts rulemaking and enforcement of Section 1557 of the 2010 Affordable Care Act (ACA), which “prohibits discrimination on the basis of race, color, national origin, age, disability, or sex (including pregnancy, sexual orientation, gender identity, and sex characteristics) in covered health programs or activities.”1
  • On April 26, 2024, HHS issued a final rule regarding Section 1557 of the ACA, which was published in the Federal Register on May 6, 2024.2

Does the rule apply to clinical algorithms and AI used in radiology?

Yes. The rule directly addresses the use of “patient care decision support tools,” a category that encompasses clinical algorithms in electronic health records as well as AI technologies used in healthcare. It emphasizes that the use of these tools must not result in discrimination based on race, color, national origin, sex, age or disability.

The rule defines these tools to encompass any automated or non-automated tool, mechanism, method, technology or combination thereof used for clinical decision making. In radiology, this could include:

  • AI algorithms used for image analysis and interpretation that might produce varying results based on a patient’s race or ethnicity.
  • AI-based tools for prioritizing patients for imaging studies or interpreting results that might disadvantage certain demographic groups.
  • Risk assessment tools that use race, age or sex as factors in ways that could lead to biased outcomes.

Am I responsible for discrimination if I did not develop the AI tool or algorithm?

Yes. Even if you are not the developer of the AI tool or algorithm, you are still obligated to make reasonable efforts to ensure that its use does not result in discriminatory practices. The rule places the responsibility for nondiscriminatory use on the covered entity, which includes healthcare providers like radiologists who receive federal funding (e.g., Centers for Medicare & Medicaid Services reimbursement for provided services).

What constitutes “reasonable efforts” to identify and mitigate discrimination?

While the rule doesn’t specify exact steps, here’s what “reasonable efforts” might entail:

  • Governance process:
    Establish and document a governance process for AI deployment and monitoring in your practice, incorporating consistent policies and procedures. Create a process for end users to report their concerns about potential discrimination in clinically deployed AI algorithms.

  • Identifying potential bias:
    Stay informed about known biases in the AI tools and algorithms used in your field. For example, be aware of research highlighting racial discrepancies in the algorithms of devices such as pulse oximeters. When considering new software or tools, ask about the data used for training, whether bias was assessed and whether bias mitigation strategies were employed. Review publicly available resources or consult with developers to determine whether your tools use race, color, national origin, sex (including sexual orientation and gender identity), age or disability as input variables. Simply lacking this information is unlikely to be considered a “reasonable” effort.

  • Mitigating discrimination:
    Ensure all radiologists in your practice know they cannot rely solely on algorithmic outputs; they must combine them with their professional judgment and consider patient-specific factors. During user training, educate all users about the potential for bias inherent in AI tools. Develop convenient processes for documenting potential bias in practice. If you or any radiologist in your practice becomes aware of potential bias, ensure it is recorded and addressed. This oversight may include adjusting AI model thresholds, exploring alternative tools or reporting the issue to the developer or relevant authorities. A minimal sketch of one way to audit a deployed tool for subgroup performance gaps follows this list.
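
For practices that want to operationalize this kind of monitoring, the sketch below shows one possible way to audit a deployed tool’s logged outputs for performance gaps across demographic groups. It is an illustration only, not a method prescribed by the rule: the column names (group, ai_positive, finding_present), the 5% disparity threshold and the choice of sensitivity and specificity as metrics are assumptions made for the example.

```python
# Illustrative sketch only. Column names and the disparity threshold are
# hypothetical; a real audit would use the practice's own logged AI outputs
# and confirmed findings, with metrics chosen by its governance process.
import pandas as pd


def subgroup_audit(df: pd.DataFrame, disparity_threshold: float = 0.05) -> pd.DataFrame:
    """Compute sensitivity and specificity per demographic group and flag
    groups whose sensitivity trails the best-performing group by more than
    the chosen threshold."""
    rows = []
    for group, sub in df.groupby("group"):
        tp = ((sub["ai_positive"] == 1) & (sub["finding_present"] == 1)).sum()
        fn = ((sub["ai_positive"] == 0) & (sub["finding_present"] == 1)).sum()
        tn = ((sub["ai_positive"] == 0) & (sub["finding_present"] == 0)).sum()
        fp = ((sub["ai_positive"] == 1) & (sub["finding_present"] == 0)).sum()
        rows.append({
            "group": group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    report = pd.DataFrame(rows)
    best_sensitivity = report["sensitivity"].max()
    report["flag_for_review"] = (best_sensitivity - report["sensitivity"]) > disparity_threshold
    return report


if __name__ == "__main__":
    # Tiny synthetic example standing in for logged model outputs.
    data = pd.DataFrame({
        "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
        "ai_positive":     [1,   0,   1,   0,   0,   0,   1,   0],
        "finding_present": [1,   0,   1,   0,   1,   0,   1,   0],
    })
    print(subgroup_audit(data))
```

Groups flagged by such an audit would then feed the documentation and remediation steps described above, such as adjusting thresholds, contacting the developer or switching tools.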

Are smaller practices held to the same standard as larger ones?

Although the legislation doesn’t include a specific “hardship clause” to exempt smaller practices, there is an understanding that larger practices will face closer scrutiny. When assessing a covered entity’s compliance with the “reasonable efforts to identify and mitigate bias” requirement, HHS may consider the entity’s size and available resources. This means that larger, well-resourced entities will be expected to demonstrate greater diligence in meeting this provision.

Are resources available to help radiologists comply with the rule?

Yes. The rule encourages the HHS Office for Civil Rights (OCR) to provide technical assistance to covered entities, and the OCR website provides up-to-date resources and guidance.3 Additionally:

  • The FDA provides information on its role in regulating medical devices, including AI-based tools.4
  • Professional organizations like the American Medical Association and the American College of Radiology often provide resources and best practice guidelines for the ethical and equitable use of AI in radiology. These include tools like AI Central, which serves as a resource for evaluating imaging AI solutions, and the ARCH-AI certification, which recognizes sites that use AI safely and effectively.5
  • Stay informed about research published in medical journals that highlights potential biases in AI and algorithms.

When do covered entities need to comply with the rule?

The effective date of the final regulation is July 5, 2024. Note that there are multiple compliance dates by which covered entities must comply with different Section 1557 requirements and provisions. The final rule includes a table that summarizes these compliance dates.6 Several of the compliance requirements are presented below.

Compliance requirements and compliance dates:

  • Assign a Section 1557 Coordinator: Nov. 2, 2024
  • Provide a Notice of Nondiscrimination to patients and the public: Nov. 2, 2024
  • Train staff on new policies and procedures: May 1, 2025
  • Ensure decision support tools are non-discriminatory: May 1, 2025
  • Develop and implement non-discrimination policies and procedures: May 1, 2025
  • Provide notice about available language assistance services and auxiliary aids: May 1, 2025

What questions should I ask AI software vendors about non-discrimination compliance?

  • Does the AI software fall under the jurisdiction of Section 1557 or any state-based non-discrimination laws?
  • Does the software consider any input variables protected under Section 1557 or state non-discrimination laws, such as race, color, national origin, sex, age or disability? If yes, please state which variables and how they are used in the tool’s decision-making process.
  • What steps does the vendor take to mitigate potential harm to patients in protected groups?
  • Does the vendor audit software performance to ensure it does not inadvertently discriminate against protected groups? If yes, what are the frequency and criteria of such audits?
  • How does the vendor ensure transparency around non-discrimination compliance?
  • Does the vendor provide training to its staff and clients on non-discrimination and best practices in healthcare software?

What are the consequences of non-compliance?

While the rule itself doesn’t specify penalties, non-compliance with Section 1557 can have serious consequences, including:

  • OCR investigations and potential enforcement actions
  • Legal action from patients who believe they have been discriminated against
  • Reputational damage to your practice