
United States AI and Data Regulations

Comprehensive guidance on federal and state artificial intelligence regulations in the United States, including executive orders, sector-specific requirements, and emerging legislative frameworks.


Federal AI Regulatory Landscape

The United States approaches artificial intelligence regulation through a combination of executive action, agency rulemaking, and sector-specific legislation rather than comprehensive omnibus AI legislation. This sectoral approach reflects the federal structure of government and prioritizes innovation while addressing specific risks in domains such as healthcare, financial services, employment, and civil rights.

Federal agencies with jurisdiction over specific industries or activities have begun issuing guidance and regulations addressing AI systems within their respective domains. The Federal Trade Commission addresses unfair and deceptive practices related to AI. The Equal Employment Opportunity Commission enforces anti-discrimination laws as they apply to AI-driven employment decisions. The Department of Health and Human Services oversees AI in healthcare settings. The Consumer Financial Protection Bureau addresses algorithmic lending and credit decisions.

The absence of comprehensive federal AI legislation creates complexity for organizations operating across multiple sectors or jurisdictions. Organizations must monitor guidance and enforcement actions from multiple federal agencies, comply with varying state laws, and anticipate future legislative developments. This regulatory fragmentation requires sophisticated compliance programs capable of tracking diverse requirements and adapting to rapid changes.

Executive Orders on AI

Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, establishes comprehensive federal policy on AI governance. The Order directs federal agencies to take specific actions addressing AI safety, security, privacy, civil rights, consumer protection, and worker rights. While executive orders do not directly regulate private sector entities, they influence regulatory approaches, establish expectations, and often lead to agency rulemaking that does impose binding obligations.

The Order requires developers of foundation models that pose risks to national security, economic security, or public health and safety to share safety test results and other critical information with the federal government before public release. This requirement applies to models trained using computational power exceeding specified thresholds. Organizations developing large-scale AI models must establish processes for identifying models subject to reporting requirements and ensuring timely compliance.
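
The Order's interim reporting criterion is expressed in training compute. As a rough illustration only, the sketch below applies the common 6 × parameters × tokens heuristic for dense transformer training compute and compares the estimate against a 10^26-operation threshold; the heuristic, the example figures, and the function names are assumptions for illustration, not an official measurement method.

```python
# Minimal sketch: flag models whose estimated training compute approaches the
# interim reporting threshold set under the Order (1e26 integer or floating-point
# operations). The 6 * parameters * tokens estimate is a common heuristic for
# dense transformer training compute, not an official measurement method, and
# the threshold is subject to change by the implementing agencies.

EO_REPORTING_THRESHOLD_OPS = 1e26  # interim technical threshold under the Order


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer (~6 * N * D)."""
    return 6.0 * parameters * training_tokens


def needs_reporting_review(parameters: float, training_tokens: float) -> bool:
    """True when the estimate meets or exceeds the threshold and counsel should review."""
    return estimated_training_ops(parameters, training_tokens) >= EO_REPORTING_THRESHOLD_OPS


if __name__ == "__main__":
    # Hypothetical example: a 1-trillion-parameter model trained on 20 trillion
    # tokens yields roughly 1.2e26 operations, which meets the interim threshold.
    print(needs_reporting_review(1e12, 2e13))  # True
```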

The Order directs the National Institute of Standards and Technology to develop guidance and standards for AI safety, security, and testing. NIST is instructed to establish guidelines for red-team testing, capability evaluations, and risk assessments for AI systems. Organizations should monitor NIST publications and consider adopting emerging standards to demonstrate responsible AI practices and align with anticipated regulatory requirements.

Federal procurement requirements established under the Order affect organizations selling AI systems or AI-enabled products to government agencies. The Order directs development of procurement guidelines that will require AI systems purchased by the federal government to meet safety, security, and performance standards. Organizations serving government customers should prepare for enhanced scrutiny and documentation requirements.

Sector-Specific Regulations

Healthcare and HIPAA

Healthcare providers, health plans, and healthcare clearinghouses using AI systems must comply with the Health Insurance Portability and Accountability Act and its implementing regulations. HIPAA requires protection of protected health information used to train, validate, or operate AI systems. Organizations must implement administrative, physical, and technical safeguards to ensure confidentiality, integrity, and availability of electronic protected health information processed by AI systems.
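
As one narrow illustration of technical safeguards, the hedged sketch below encrypts a record containing protected health information before storage and logs each access. The library usage, field names, and logging setup are illustrative assumptions; real deployments would layer in key management, access controls, and organizational policies well beyond this snippet.

```python
# Minimal sketch of two technical safeguards contemplated by the HIPAA Security
# Rule: encryption of electronic PHI at rest and an audit trail of access events.
# All names and the logging setup are illustrative placeholders.
import json
import logging
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # pip install cryptography

audit_log = logging.getLogger("phi_access")
logging.basicConfig(level=logging.INFO)

key = Fernet.generate_key()  # in practice, store and rotate keys in a managed KMS/HSM
cipher = Fernet(key)


def store_phi_record(record: dict) -> bytes:
    """Encrypt a PHI record before it is persisted for model training or inference."""
    return cipher.encrypt(json.dumps(record).encode("utf-8"))


def read_phi_record(token: bytes, user_id: str, purpose: str) -> dict:
    """Decrypt a PHI record and log who accessed it, when, and for what purpose."""
    audit_log.info("phi_access user=%s purpose=%s at=%s",
                   user_id, purpose, datetime.now(timezone.utc).isoformat())
    return json.loads(cipher.decrypt(token).decode("utf-8"))
```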

The Food and Drug Administration regulates AI systems that constitute medical devices under the Federal Food, Drug, and Cosmetic Act. AI systems intended for diagnosis, treatment, cure, mitigation, or prevention of disease are subject to FDA requirements for safety and effectiveness. The FDA has established specific frameworks for regulating software as a medical device and continuously learning AI systems. Organizations developing clinical AI must navigate premarket review processes, quality system requirements, and postmarket surveillance obligations.

Financial Services

Financial institutions using AI for credit decisions must comply with the Equal Credit Opportunity Act, which prohibits discrimination on the basis of protected characteristics. The Fair Credit Reporting Act requires accuracy of information used in credit decisions and provides consumers rights to access and dispute credit information. AI systems used for credit scoring, underwriting, or lending decisions must be designed and validated to ensure compliance with these requirements.

The Consumer Financial Protection Bureau has indicated that financial institutions remain responsible for legal compliance regardless of whether decisions are made by AI systems or humans. Institutions cannot delegate legal accountability to technology vendors or blame algorithmic errors for discriminatory outcomes. Organizations must implement governance frameworks that ensure AI systems comply with applicable consumer protection laws and that human oversight enables detection and correction of compliance issues.

Employment

Employers using AI systems for hiring, promotion, termination, or other employment decisions must comply with Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, and other federal employment laws. The Equal Employment Opportunity Commission has issued guidance clarifying that employers may be liable for discriminatory outcomes produced by AI systems, even if discrimination was not intentional.

Organizations must validate that AI systems used in employment contexts do not have disparate impact on protected groups. Validation should include statistical analysis of outcomes across demographic groups, examination of training data for bias, and testing of system performance. Where disparate impact is identified, organizations must assess whether the employment practice serves a legitimate business necessity and whether less discriminatory alternatives exist.
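
A common starting point for this kind of validation is a selection-rate comparison in the spirit of the four-fifths rule of thumb used in employment discrimination analysis. The sketch below is a minimal illustration under that assumption; the group labels, the 0.8 cutoff, and the function names are placeholders, and a real validation would add statistical significance testing and review of the underlying data.

```python
# Minimal sketch of a selection-rate comparison along the lines of the four-fifths
# rule of thumb: a group's selection rate below 80% of the most favored group's
# rate is flagged as potential evidence of disparate impact.
from collections import defaultdict


def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs -> selection rate per group."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}


def impact_ratios(decisions: list[tuple[str, bool]], cutoff: float = 0.8) -> dict[str, dict]:
    """Compare each group's rate to the most favored group and flag low ratios."""
    rates = selection_rates(decisions)
    best = max(rates.values()) or 1.0  # avoid division by zero if nothing was selected
    return {
        group: {"rate": rate, "ratio": rate / best, "flagged": (rate / best) < cutoff}
        for group, rate in rates.items()
    }
```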

State-Level AI Laws

States have enacted diverse laws addressing specific AI applications and risks. California has enacted legislation requiring businesses to disclose use of bots in customer interactions and prohibiting use of deepfakes in election contexts. Illinois requires consent for collection of biometric information, affecting AI systems that process biometric data. New York City requires employers using automated employment decision tools to conduct bias audits and provide notice to affected individuals.

Colorado has enacted comprehensive legislation addressing AI systems that make consequential decisions in areas such as education, employment, financial services, government services, healthcare, housing, insurance, and legal services. The law requires developers and deployers of high-risk AI systems to implement risk management programs, conduct impact assessments, and provide transparency regarding AI use. Enforcement rests with the Colorado attorney general; the statute does not create a private right of action for individuals harmed by algorithmic discrimination.

Organizations operating in multiple states face complex compliance challenges arising from varying state requirements. Compliance programs must track applicable state laws based on where AI systems are deployed, where affected individuals are located, and where organizational operations occur. Organizations should consider implementing practices that meet the most stringent state requirements to simplify compliance across jurisdictions.

Algorithmic Accountability

Algorithmic accountability frameworks establish organizational responsibility for outcomes produced by AI systems. Federal agencies and state legislatures increasingly reject arguments that algorithmic complexity absolves organizations of responsibility for discriminatory or harmful outcomes. Organizations deploying AI systems must demonstrate that they understand how systems operate, that systems are validated for intended uses, and that ongoing monitoring detects performance issues.

Accountability requires clear governance structures with assigned responsibilities for AI system oversight. Organizations should designate individuals or teams responsible for reviewing AI system designs, approving deployments, monitoring performance, and investigating issues. Documentation of decision-making processes, risk assessments, and mitigation measures demonstrates accountability and supports defense against regulatory challenges.

Engaging third-party vendors to provide AI systems or services does not eliminate organizational accountability. Organizations using vendor-provided AI remain responsible for compliance with applicable laws and for outcomes affecting individuals. Contracts with vendors should include provisions requiring vendors to provide information necessary for compliance assessment, to cooperate with audits and investigations, and to implement corrective measures when issues are identified.

Transparency Requirements

Various federal and state laws establish transparency obligations for organizations using AI systems. Notice requirements mandate disclosure to individuals when AI systems are used to make decisions affecting them. Organizations must inform consumers, employees, or other affected parties that automated systems contribute to decisions in contexts such as employment, credit, housing, or service provision.

Explanation requirements in certain contexts mandate that organizations provide meaningful information about factors influencing automated decisions. The Equal Credit Opportunity Act and Regulation B require creditors to provide specific reasons for adverse credit actions, and the Fair Credit Reporting Act requires disclosures when adverse actions rely on consumer report information. Similar explanation obligations arise under employment discrimination laws when AI systems contribute to adverse employment actions. Organizations must design AI systems to enable generation of explanations that are accurate, understandable, and useful to affected individuals.
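
One way organizations approach this in credit contexts is to map the model features that most lowered an applicant's outcome to standardized reason statements. The sketch below is a hedged illustration of that idea; the feature names, reason text, and contribution values are invented placeholders, not a prescribed methodology.

```python
# Minimal sketch of adverse action reason generation: map the features that most
# hurt an applicant's score to plain-language reasons. Contribution scores could
# come from a scorecard or an attribution method; all values here are illustrative.

REASON_TEXT = {
    "utilization": "Proportion of revolving balances to credit limits is too high",
    "delinquencies": "Number of recent delinquent payments",
    "history_length": "Length of credit history is too short",
    "inquiries": "Too many recent credit inquiries",
}


def adverse_action_reasons(contributions: dict[str, float], top_n: int = 4) -> list[str]:
    """contributions: feature -> signed contribution to the score (negative = hurt).
    Returns the top reasons behind an adverse decision, most impactful first."""
    negative = [(feature, value) for feature, value in contributions.items() if value < 0]
    ranked = sorted(negative, key=lambda item: item[1])  # most negative first
    return [REASON_TEXT.get(feature, feature) for feature, _ in ranked[:top_n]]


# Example with hypothetical contribution values:
print(adverse_action_reasons(
    {"utilization": -0.31, "delinquencies": -0.12, "history_length": 0.05, "inquiries": -0.02}
))
```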

Documentation requirements for regulatory compliance necessitate maintaining detailed records of AI system design, development, validation, and operation. Regulators conducting investigations expect organizations to produce documentation demonstrating compliance efforts, risk assessments, testing results, and monitoring activities. Organizations should establish document retention policies that preserve required records while managing storage costs and privacy risks.
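
A retention schedule can be expressed as simple structured data that downstream systems enforce. The sketch below is a minimal illustration; the record categories, retention periods, and legal-hold handling are placeholders to be set with counsel, not recommended values.

```python
# Minimal sketch of a document retention schedule for AI compliance records.
# Categories and retention periods are illustrative placeholders.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RetentionRule:
    category: str             # e.g., "impact_assessment", "bias_audit", "validation_report"
    retain_years: int         # how long to keep the record after creation
    legal_hold: bool = False  # a hold suspends deletion regardless of the schedule


RETENTION_SCHEDULE = [
    RetentionRule("impact_assessment", retain_years=5),
    RetentionRule("bias_audit", retain_years=5),
    RetentionRule("validation_report", retain_years=7),
    RetentionRule("incident_record", retain_years=7),
]


def is_eligible_for_deletion(rule: RetentionRule, created: date, today: date) -> bool:
    """A record may be deleted once its retention period lapses and no hold applies."""
    expiry = created + timedelta(days=365 * rule.retain_years)
    return today >= expiry and not rule.legal_hold
```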

Enforcement Mechanisms

Federal agencies enforce AI-related requirements through investigations, consent orders, and civil penalties. The Federal Trade Commission has brought enforcement actions against companies making deceptive claims about AI capabilities, failing to implement reasonable security for AI systems, or using AI in ways that cause consumer harm. Consent orders often require organizations to implement comprehensive AI governance programs, conduct regular assessments, and submit to ongoing monitoring.

State attorneys general enforce state AI laws and consumer protection statutes as applied to AI systems. State enforcement actions have addressed biometric privacy violations, discriminatory algorithmic practices, and deceptive AI marketing. Multi-state investigations coordinate enforcement across jurisdictions, increasing potential exposure for organizations with national operations.

Private rights of action enable individuals to sue organizations for violations of certain AI-related requirements. State laws addressing biometric privacy, employment discrimination, and consumer protection often authorize private lawsuits with statutory damages, attorney fee provisions, and class action mechanisms. Organizations face significant exposure from class actions alleging systematic violations affecting large numbers of individuals.

Compliance Strategies

Effective compliance with US AI regulations requires comprehensive strategies that address the fragmented regulatory landscape and anticipate future developments. Key elements include:

  • Regulatory monitoring systems that track federal agency guidance, state legislation, and enforcement actions relevant to organizational AI activities. Compliance teams should subscribe to agency updates, participate in industry associations, and engage with legal counsel to maintain awareness of evolving requirements.
  • AI system inventories that identify all AI systems in development or deployment, classify systems by risk level and applicable regulatory requirements, and enable compliance assessment and resource allocation. Inventories should be updated as new systems are developed and as regulatory requirements change. A minimal inventory record sketch appears after this list.
  • Risk assessment frameworks tailored to US regulatory requirements, including evaluation of discrimination risks, privacy impacts, consumer protection issues, and sector-specific obligations. Assessments should inform design decisions, deployment approvals, and ongoing monitoring.
  • Governance structures establishing clear accountability for AI compliance, including designated compliance officers, cross-functional review committees, and escalation procedures for significant issues. Governance should integrate with existing compliance, risk, and ethics functions.
  • Vendor management processes that assess third-party AI providers for compliance with applicable requirements, establish contractual obligations for compliance support, and maintain oversight of vendor practices. Organizations should conduct due diligence before engaging vendors and monitor vendor performance throughout relationships.
  • Training programs educating personnel about AI compliance requirements, organizational policies and procedures, and individual responsibilities. Training should address technical staff involved in AI development, business users deploying AI systems, and compliance personnel overseeing AI activities.
  • Incident response procedures establishing processes for identifying, investigating, and remediating AI compliance issues. Procedures should address regulatory reporting obligations, customer notification requirements, and corrective action implementation.
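
As referenced in the inventory item above, the sketch below shows what a single inventory record might capture; the field names, risk tiers, and regulatory tags are illustrative assumptions rather than a standard schema.

```python
# Minimal sketch of an AI system inventory entry. Field names and example values
# are placeholders; organizations should tailor the schema to their own
# classification scheme and the laws that apply to them.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    system_id: str
    name: str
    owner: str                          # accountable team or individual
    lifecycle_stage: str                # "development", "deployed", "retired"
    risk_tier: str                      # e.g., "high", "medium", "low"
    use_cases: list[str] = field(default_factory=list)
    jurisdictions: list[str] = field(default_factory=list)    # e.g., ["CO", "NYC"]
    regulatory_tags: list[str] = field(default_factory=list)  # e.g., ["ECOA", "Title VII"]
    last_reviewed: str = ""             # ISO date of the last compliance review


# Hypothetical example entry:
inventory = [
    AISystemRecord(
        system_id="hr-screen-001",
        name="Resume screening model",
        owner="Talent Acquisition",
        lifecycle_stage="deployed",
        risk_tier="high",
        use_cases=["hiring"],
        jurisdictions=["NYC"],
        regulatory_tags=["Title VII", "NYC Local Law 144"],
        last_reviewed="2025-01-15",
    ),
]
```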

Organizations should view compliance as an ongoing commitment requiring continuous attention and adaptation. Regular compliance reviews assess whether practices remain current with regulatory developments and whether implemented controls operate effectively. Proactive compliance programs position organizations to respond efficiently to new requirements and to demonstrate good faith efforts in enforcement contexts.

Need Help with US AI Regulations?

Verdict AI automates compliance documentation and helps you navigate complex federal and state AI regulations.

Get Started