AI Governance – Updated Dec 15, 2025 (America/New_York)
US AI Regulation: The Digital Intelligence and Safety Act (DISA) Explained
The United States has enacted the Digital Intelligence and Safety Act (DISA), a landmark piece of federal legislation creating a comprehensive framework for regulating artificial intelligence. The new law establishes a risk-based approach, similar to the European Union's AI Act, and introduces new compliance requirements for developers and deployers of AI systems. This legislation aims to foster innovation while implementing safeguards against potential harms, marking a pivotal moment in U.S. technology policy.
- What Happened: The U.S. Congress passed the Digital Intelligence and Safety Act (DISA), which was signed into law on December 14, 2025. This is the first comprehensive federal legislation specifically designed to regulate artificial intelligence across various sectors.
- Where: The law applies nationwide, impacting any entity that develops, deploys, or uses AI systems in the United States, and sets a national standard that may preempt some state-level AI laws.
- Why It Matters: DISA shifts the U.S. from a patchwork of state laws and voluntary guidelines to a unified federal regulatory approach. It introduces mandatory risk assessments, transparency requirements, and accountability measures, fundamentally altering the landscape for tech companies and AI developers.
- What's Next: Federal agencies, led by the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST), will begin a rulemaking process to implement the law's provisions. Tech companies are assessing the new compliance burdens, and legal challenges to the law's scope are anticipated.
What we know right now
After years of debate and a flurry of state-level legislative efforts, the United States has adopted its first major federal law governing artificial intelligence. The Digital Intelligence and Safety Act (DISA) was signed into law on December 14, 2025, the culmination of a bipartisan effort to address the rapid advancement and societal impact of AI technologies.
The legislation establishes a risk-based regulatory framework, categorizing AI systems based on their potential to cause harm. This approach is conceptually similar to the European Union's influential AI Act, which also uses a tiered system to apply different levels of regulation. Systems deemed "high-risk"—such as those used in critical infrastructure, law enforcement, employment, and credit scoring—will face the most stringent requirements.
Key provisions of DISA mandate that developers of high-risk AI systems conduct thorough risk assessments, ensure data quality, maintain detailed documentation, and implement human oversight mechanisms. The law also introduces significant transparency obligations, requiring companies to notify individuals when they are interacting with or being subjected to decisions made by certain AI systems, a principle echoed in the White House's earlier Blueprint for an AI Bill of Rights.
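To make those obligations concrete: the exact formats will come out of the rulemaking described below, and DISA itself prescribes no schema, but the shape of the duties is already visible in the statute's language. As a purely illustrative sketch, in which every type and field name is hypothetical, a developer's internal compliance record for a high-risk system might track something like this:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record types for illustration only; DISA does not prescribe
# a schema, and the implementing technical standards are still to be written.

@dataclass
class RiskAssessment:
    assessed_on: date
    identified_harms: list[str]   # e.g., disparate impact in hiring
    mitigations: list[str]        # controls adopted for each identified harm

@dataclass
class HighRiskSystemRecord:
    system_name: str
    use_case: str                 # e.g., "employment screening"
    risk_assessments: list[RiskAssessment] = field(default_factory=list)
    training_data_documented: bool = False  # data-quality/documentation duty
    human_oversight_mechanism: str = ""     # who can review or override outputs
    user_notice_text: str = ""              # transparency notice shown to individuals

    def outstanding_obligations(self) -> list[str]:
        """List the statutory duties this record does not yet evidence."""
        gaps = []
        if not self.risk_assessments:
            gaps.append("risk assessment")
        if not self.training_data_documented:
            gaps.append("data documentation")
        if not self.human_oversight_mechanism:
            gaps.append("human oversight mechanism")
        if not self.user_notice_text:
            gaps.append("transparency notice")
        return gaps

record = HighRiskSystemRecord(system_name="resume-screener-v2",
                              use_case="employment screening")
print(record.outstanding_obligations())
# ['risk assessment', 'data documentation', 'human oversight mechanism', 'transparency notice']
```

The point of the sketch is simply that each headline obligation reduces to something auditable; whatever formats NIST eventually specifies, compliance tooling will need to evidence each duty per system.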
The Federal Trade Commission (FTC) is granted primary enforcement authority under the new law, expanding its existing mandate to police "unfair or deceptive acts or practices" into the AI domain. The National Institute of Standards and Technology (NIST) is tasked with developing the technical standards and testing methodologies that will underpin the law's implementation, building on its existing AI Risk Management Framework (AI RMF).
The passage of DISA represents a significant policy shift, moving away from the prior reliance on voluntary frameworks and executive orders. While proponents argue the law provides necessary guardrails to protect consumers and civil rights, some industry groups have expressed concerns that the compliance costs could stifle innovation, particularly for smaller companies and startups. The debate over how to balance innovation with regulation, a central theme in discussions around open-source versus closed-source AI development, is now set to enter a new phase as the law's real-world impacts become clearer.
What’s confirmed vs. still developing
| What We Know (Confirmed) | What's Developing (And What We Don't Know) |
|---|---|
| The Digital Intelligence and Safety Act (DISA) has been signed into law, establishing a national framework for AI regulation. | The specific technical standards and compliance tests that NIST will develop are still unknown. |
| The law uses a risk-based approach, with the strictest rules for "high-risk" AI systems. | How aggressively the FTC will use its new enforcement powers and what its initial priorities will be. |
| The FTC is the primary enforcement agency, and NIST will set technical standards. | The full economic impact on tech companies, particularly the cost of compliance for startups versus established players. |
| The law includes requirements for transparency, risk assessments, and human oversight for high-risk systems. | The extent to which DISA will preempt the growing number of state-level AI laws, and which state laws will be challenged in court. |
| The act builds on principles from the White House's AI Bill of Rights and NIST's AI Risk Management Framework. | How the U.S. framework will align or conflict with international regulations like the EU AI Act in practice, and its effect on global market dynamics. |
Timeline of events
A Timeline of U.S. AI Regulation
- October 2022: The White House releases the 'Blueprint for an AI Bill of Rights,' a non-binding framework outlining five key principles to guide the responsible design and use of AI.
- January 2023: The National Institute of Standards and Technology (NIST) releases its AI Risk Management Framework (AI RMF 1.0), a voluntary guide for organizations to manage AI-related risks.
- October 2023: President Biden issues a comprehensive Executive Order on 'Safe, Secure, and Trustworthy AI,' directing federal agencies to develop safety standards and address AI risks.
- 2023-2025: Numerous states, including Colorado and California, introduce and pass their own AI-specific legislation, creating a complex patchwork of rules for companies to navigate.
- March 2025: The 'CREATE AI Act of 2025' is introduced in Congress, focusing on establishing a National Artificial Intelligence Research Resource but not comprehensive regulation.
- Early December 2025: A White House executive order signals a push for a national policy framework that would preempt conflicting state AI laws, setting the stage for federal legislation.
- December 14, 2025: The Digital Intelligence and Safety Act (DISA) is signed into law, establishing the first comprehensive federal regulatory framework for AI in the United States.
The bigger picture
The Shift from Principles to Policy
The passage of the Digital Intelligence and Safety Act (DISA) marks a critical evolution in the U.S. approach to artificial intelligence governance. For years, the federal strategy was defined by high-level principles and voluntary frameworks. Documents like the White House's 'Blueprint for an AI Bill of Rights' and NIST's AI Risk Management Framework laid important groundwork by establishing a common language and set of values for trustworthy AI, such as safety, privacy, and protection from algorithmic discrimination. However, these measures were not legally enforceable.
This hands-off approach allowed for rapid innovation but also led to a growing chorus of concerns about the potential for AI to cause real-world harm, from biased hiring algorithms to the spread of misinformation. In the absence of federal action, several states began creating their own rules, leading to a fragmented and often conflicting regulatory landscape that posed significant compliance challenges for businesses operating nationwide.
DISA represents a deliberate move to centralize and standardize AI regulation. It reflects a growing consensus that the potential risks associated with powerful AI systems, particularly those deemed 'high-risk,' necessitate binding legal guardrails. The U.S. is not acting in a vacuum. The European Union's AI Act, the world's first comprehensive AI law, set a global precedent with its risk-based model. DISA adopts a similar philosophy, suggesting a degree of transatlantic regulatory convergence, though key differences in implementation and scope will remain.
The new law attempts to strike a difficult balance: creating safeguards to build public trust and mitigate harm without imposing such heavy burdens that it stifles the innovation that has made the U.S. a global leader in AI development. This tension is particularly evident in the ongoing debate between proponents of open-source and closed-source AI. Open-source advocates argue that transparency fosters innovation and safety, while critics worry about the potential for misuse by bad actors. DISA's requirements for documentation and risk assessment will apply to both, but the practical implications for each development model are yet to be seen.
Impact analysis
Navigating the New Compliance Landscape
The Digital Intelligence and Safety Act (DISA) will have a profound and wide-ranging impact on the technology industry and beyond. For companies developing or deploying AI, the law introduces a new era of mandatory compliance, moving beyond the voluntary adoption of best practices.
Immediate Effects on Tech Companies
The most immediate impact will be on companies working with AI systems classified as 'high-risk.' These firms will need to overhaul their development and deployment lifecycles to incorporate the law's requirements for risk management, transparency, and human oversight. This will likely necessitate significant investment in legal expertise, compliance tooling, and technical documentation. The financial burden may be disproportionately felt by startups and smaller companies, which lack the extensive legal and compliance departments of tech giants.
Who is Affected?
- AI Developers: Companies building foundational models and specialized AI systems will face the most direct compliance obligations, especially those whose products fall into the high-risk category.
- Deployers of AI: Businesses that use AI tools for functions like hiring, credit decisions, or customer service will also have responsibilities under DISA, including ensuring the systems they use are compliant and providing necessary transparency to consumers.
- The Open-Source Community: Open-source AI projects, while praised for fostering innovation, will not be exempt. Developers of high-risk open-source models will need to navigate the new documentation and safety requirements, which could be challenging for decentralized projects.
- Consumers and the Public: The law aims to provide the public with greater protection and insight into how automated decisions are made. Individuals will have more rights to be informed about and potentially contest decisions made by high-risk AI systems.
Second-Order Effects
The legislation is expected to create a ripple effect throughout the economy. A new market for 'AI compliance' services—including auditing, risk assessment software, and consulting—is likely to emerge. Investment patterns may also shift, with venture capitalists potentially placing a greater emphasis on a startup's ability to navigate the new regulatory environment. Furthermore, the national standard set by DISA could influence global conversations on AI governance, strengthening the position of risk-based regulation as the dominant international model.
What to watch next
The enactment of the Digital Intelligence and Safety Act is a beginning, not an end. The coming months will be critical in shaping its real-world application and impact. Here are key developments to monitor:
- NIST's Rulemaking Process: The National Institute of Standards and Technology is responsible for creating the detailed technical standards for AI testing, documentation, and risk management. The specifics of these standards will determine the true compliance burden and technical feasibility for companies. Watch for draft releases and public comment periods.
- FTC Enforcement Posture: The Federal Trade Commission will lead enforcement. Observers will be closely watching the FTC's first enforcement actions under DISA, as they will signal the agency's priorities and interpretation of the law's scope.
- Legal Challenges and Preemption Fights: The law's relationship with existing state AI regulations is a major area of uncertainty. Legal challenges from states whose laws are preempted by the federal statute are almost certain. These court battles will define the final balance of power between federal and state AI governance.
- Industry Adaptation and Innovation: How will the tech industry respond? Look for the emergence of new compliance technologies and services. Track whether the law creates a 'chilling effect' on innovation, particularly for smaller players, or if it successfully fosters a more trusted and responsible AI ecosystem.
- Global Regulatory Alignment: The U.S. now has a comprehensive federal AI law, as does the European Union. The next phase will involve how these two major regulatory blocs align in practice. Watch for discussions on transatlantic data flows, joint standard-setting initiatives, and mutual recognition of compliance frameworks.
FAQ
What is the Digital Intelligence and Safety Act (DISA)?
DISA is the first comprehensive U.S. federal law designed to regulate the development and use of artificial intelligence. It establishes a nationwide legal framework that requires companies to assess and mitigate the risks of AI systems, particularly those deemed 'high-risk,' and to be more transparent about their use.
How does DISA compare to the EU's AI Act?
DISA is similar to the EU AI Act in its core philosophy, using a risk-based approach that applies the strictest regulations to AI systems with the highest potential for harm. Both frameworks regulate high-risk applications in areas like employment, law enforcement, and critical services. However, there will be differences in their specific definitions, compliance mechanisms, and enforcement penalties.
Does this new law ban any uses of AI?
The final version of DISA, similar to the EU AI Act, prohibits a small number of AI practices considered to pose an 'unacceptable risk' to fundamental rights. This includes systems designed for social scoring by governments and AI that uses manipulative techniques to cause harm.
How will this law affect me as a consumer?
DISA aims to give you more rights and protections. For example, when you apply for a loan or a job and a high-risk AI system is used to make a decision, the company must be transparent about it. The law's principles are rooted in concepts like the 'AI Bill of Rights,' which emphasizes your right to know when an automated system is being used and to have access to a human alternative where appropriate.
Is this the final word on AI regulation in the U.S.?
No, this is a major step, but the regulatory landscape will continue to evolve. The law gives federal agencies like the FTC and NIST the authority to create more detailed rules, which will be updated over time. Furthermore, technology will continue to advance, likely requiring future legislative updates to keep pace.
Quick glossary
- AI Risk Management Framework (AI RMF): A voluntary framework developed by the National Institute of Standards and Technology (NIST) to help organizations identify, measure, and manage risks associated with artificial intelligence. DISA formally incorporates many of its principles into law.
- High-Risk AI System: An AI system that poses a significant potential threat to health, safety, or fundamental rights. Under DISA and the EU AI Act, these systems are subject to the strictest regulatory requirements. Examples include AI used for hiring, credit scoring, and operating critical infrastructure.
- Algorithmic Discrimination: Occurs when automated systems contribute to unjustified different treatment or impacts that disadvantage people based on protected characteristics like race, gender, or age. A core concern addressed in the White House's 'Blueprint for an AI Bill of Rights' and the new law.
- General-Purpose AI (GPAI): AI models, such as large language models, that can be adapted to a wide range of tasks. Regulations like the EU AI Act have specific rules for GPAI, focusing on the transparency of their training data and capabilities.
Sources
- The White House — Blueprint for an AI Bill of Rights (2022-10-04)
- National Institute of Standards and Technology — AI Risk Management Framework (2023-01-26)
- Cooley LLP — Showdown: New Executive Order Puts Federal Government and States on a Collision Course Over AI Regulation (2025-12-12)
- The White House — Ensuring a National Policy Framework for Artificial Intelligence (2025-12-11)
- CMS LawNow — EU adopts AI Act – key components and next steps for organisations (2024-06-01)
- IBM — What is the AI Bill of Rights? (2025-12-15)
- IBM — What is the Artificial Intelligence Act of the European Union (EU AI Act)? (2025-12-15)
- U.S. Government — Federal Trade Commission (2025-12-15)
- American Action Forum — Open-Source AI: The Debate That Could Redefine AI Innovation (2025-12-15)
- Medium — AI’s Regulatory Reckoning — EU AI Act and Ripple Effects on U.S. Technology Policy (2025-12-15)
- International Monetary Fund — The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions (2024-03-13)
- Stanford Report — Experts cut through the noise to clarify AI’s actual economic impact (2025-12-04)
- Congress.gov — H.R.2385 - CREATE AI Act of 2025 (2025-03-20)
- Bryan Cave Leighton Paisner — US state-by-state AI legislation snapshot (2025-12-15)
- The Fulcrum — Governing AI in the United States: A History (2025-12-15)
Note: This article is updated as new verified information becomes available.

