
The U.S. has 1,200 AI bills and no good test for any of them

May 15, 2026

In an interview this week on Fox Business, IBM Chairman and CEO Arvind Krishna pressed Washington on the central question facing AI policy: “The balance between too many regulations, it’s terrible; too few, we may not love the outcome, so we got to find the Goldilocks middle.” Krishna extended his warning to the international landscape: “If it turns into a bloated bureaucracy, that would not be so good for us to win the AI race.”

The balance Krishna identifies extends well beyond federal policy. It runs downward into a state-by-state patchwork of legislation now reshaping how American companies build and deploy AI, and upward into a global contest where technological competitiveness underwrites both economic prominence and national security. No clear path forward has emerged at any level. In our conversations with CEOs and political leaders, that lack of clarity is the common refrain.

In the past nine months, the United States has produced more AI legislation than in the prior decade, built on three different theories of what AI policy is supposed to do. California’s SB 53 focuses on transparency from frontier developers. New York’s Responsible AI Safety and Education (RAISE) Act mandates stricter incident reporting and a new oversight office inside the Department of Financial Services. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) prohibits specific intentional misuses and establishes a 36-month regulatory sandbox. Connecticut joined two weeks ago, when both chambers passed Senate Bill 5 (SB 5) by lopsided margins after years of failed attempts.

Meanwhile, federal policy has lurched in opposite directions. President Trump’s December 11 executive order directed the Department of Justice to challenge state AI laws and conditioned broadband funding on alignment with a “minimally burdensome” national standard. The 2026 National Defense Authorization Act (NDAA), signed the day before, excluded preemption language entirely. In April, Anthropic’s disclosure of Mythos Preview, a model withheld from public release due to its autonomous cyber capabilities, introduced a new category of risk into a federal conversation unprepared to absorb it. The scare has reportedly prompted the White House to consider an executive order establishing an FDA-like pre-release vetting system for advanced AI models—an idea proposed by the second author to the U.S. Senate in 2023.

All this unfolds against a sharper international backdrop. The EU is implementing the AI Act, and China is deploying frontier capability under state direction, while the line between commercial AI and national-security capability is collapsing—raising the cost of incoherent U.S. policy.

By one count, state legislatures introduced over 1,200 AI-related bills in 2025 and enacted just under 150, with the pace accelerating since. Beneath the volume lies a more fundamental problem. Policymakers at every level are working without a shared test to determine whether their legislative efforts constitute good policy.

Why the Current Debate Is Stuck

Too often, the debate has been framed as a binary choice between sweeping regulation and unrestricted operation, as though there were no middle ground, and with too little attention to how proposals might conflict with existing law. Both sides talk past each other because neither has a clear test for determining which specific regulation, aimed at which actor and addressing which gap, at what cost to whom, is actually necessary.

At the state level, most bills attempt to regulate “AI” as a category even though many uses sit cleanly within existing consumer protection, civil rights, intellectual property, and data privacy law. Colorado and Utah passed omnibus statutes “with reservations” in 2024, attaching sunset clauses and delayed effective dates that signaled their drafters’ uncertainty, and both states are now visibly retreating.

Colorado executed a “repeal and reenact” maneuver in the final weeks of its session to roll back onerous audit mandates in favor of targeted transparency. Utah narrowed its disclosure rules, extended the sunset to 2027, and swapped additional omnibus attempts for nine surgical bills targeting chatbot medical advice, AI-generated defamation, and child protection. In Connecticut, a broad 2025 bill died in the House amid a gubernatorial veto threat, while the narrower Connecticut Artificial Intelligence Responsibility and Transparency Act (SB 5) passed in its place two weeks ago, replacing mandatory developer audits with consumer transparency measures.

Yet these narrower successors still impose new compliance burdens beyond those of existing civil rights and consumer protection law. Across statehouses, the same pattern recurs: well-intentioned legislation that, read carefully, replicates existing protections at the cost of substantial new compliance burdens.

At the federal level, three live propositions each fail on different grounds. Broad state preemption, in the form of presidential executive authority and the failed congressional moratorium, trades real protection against demonstrable harms, such as deepfake-generated child sexual abuse material (CSAM), AI-driven election fraud, and automated hiring discrimination, for the illusion of federal uniformity. Mandatory frontier-model approval, as currently floated, is poorly targeted and creates an incumbent moat that locks in the largest developers, though a better-designed version could be formulated. Capability-specific oversight of frontier models that can autonomously generate cyber exploits or Chemical, Biological, Radiological, and Nuclear (CBRN)-relevant content—the one area where federal action is genuinely needed—is precisely where the federal conversation is not focused.

International approaches sharpen the contrast. The EU AI Act applies a tiered, risk-based regime with prescriptive compliance requirements scaled to system risk. China pairs state-directed deployment with detailed sectoral rules—algorithmic recommendation, generative AI, and deep synthesis—under national security review. Singapore and the UK have positioned themselves as governance hubs through voluntary frameworks, model sandboxes, and active industry partnerships. Each is a different bet on the same underlying tradeoff between innovation pace, harm reduction, and national security. The U.S. is currently betting without clearly identifying which bet it has placed.

The common failure is the lack of a structured method for determining whether a proposed rule addresses a genuine gap, and at what cost. A three-stage test offers a way forward.

The Framework: A Three-Stage Test

Stage 1: The Target Specificity Question

Before evaluating any tradeoffs, a single test should be applied: if “AI” were replaced with “technology” or “software” in the bill text, would existing law already address the harm?

The specificity test is not hypothetical. Connecticut Attorney General William Tong issued an advisory memorandum on February 25, 2026, outlining how Connecticut’s existing civil rights, privacy, data security, competition, and consumer protection laws already apply to a substantial share of AI-related conduct. Massachusetts Attorney General Andrea Joy Campbell issued a similar advisory earlier. Both demonstrate that an attorney general can act on AI deployments without new legislation. Auditability of automated decisions, due process protections, and transparency in government use are already addressed by existing anti-discrimination and consumer protection laws. State bills creating new accountability rights for automated hiring often duplicate protections already enforceable under Title VII and the Americans with Disabilities Act.

The rule, then, is that when existing law adequately addresses the harm, the appropriate instrument is interpretive guidance from the relevant agency. New legislation imposes compliance costs, whereas interpretive guidance simply provides clarity. Many state AI bills do not survive this stage, and this first test is the most efficient single discipline a statehouse can adopt.

Stage 2: Four Dimensions of Cost-Benefit Analysis

When existing law does not adequately address the harm, the question becomes whether the proposed rule’s benefits exceed its costs. Every AI policy choice sits along a single axis: a higher degree of regulation generally delivers stronger protections but reduces economic competitiveness, while a lower degree of regulation, beyond basic protections, preserves competitiveness but accepts greater downside risk. The framework’s purpose is not to resolve this tradeoff in the abstract but to make it explicit for each specific proposal.

Four dimensions warrant consideration: harm reduction, national security and critical-infrastructure resilience, innovation environment, and competitive concentration. The first two yield clear benefits when well targeted, though the cost caveats must still be weighed. The second two entail genuine tradeoffs.

Harm reduction is the strongest test case. The question is whether the harm is demonstrable, measurable, and unaddressed by existing law. AI-generated child sexual abuse material, election deepfakes, and discriminatory automated hiring decisions pass cleanly. Algorithmic harm framed in the abstract does not. A targeted state law addressing a specific harm produces measurable protection at a reasonable cost. A 50-state patchwork addressing the same harm multiplies compliance costs without proportional improvement.

National security and critical-infrastructure resilience addresses the category Anthropic’s Mythos brought into sharp focus, where risks are too systemic for any state law to address alone. The federal Center for AI Standards and Innovation (CAISI) framework provides a voluntary pre-deployment evaluation of frontier models in classified environments and was recently expanded to include Google DeepMind, Microsoft, and xAI, alongside the original agreements with Anthropic and OpenAI. But the cost caveat is significant. National-security framings can impose capability ceilings on legitimate research, crowd out commercial deployment, and place the U.S. at a technological disadvantage to international competitors. The challenge is calibrating oversight narrow enough to preserve commercial activity but broad enough to address the systemic risks Mythos illustrated.

Innovation environment carries a genuine tradeoff. Higher regulation can anchor durable adoption of AI. Rules compelling basic disclosure or human-in-the-loop oversight in high-stakes contexts can reinforce the trust that sustains adoption over time. Poorly designed governance has the opposite effect. For example, Consumer Financial Protection Bureau (CFPB) complaint volumes nearly doubled between the launch of ChatGPT and 2024, with complaints concentrated among high-adoption firms that scaled deployment without adequate guardrails.

Higher regulation can also push innovation out. Palantir relocated its principal executive office to more business-friendly Miami in February 2026, Elon Musk explicitly cited California law in moving SpaceX and X to Texas, and OpenAI signaled it would exit California amid state attorney general investigations into its proposed for-profit transition. When deployment slows in regulated jurisdictions but accelerates elsewhere, the work migrates, and workers who are meant to be protected lose access to both the productivity gains and the career pathways. Rules that anchor federal and state activity reinforce both adoption and competitiveness. Those that push it out concede both.

Competitive concentration entails the other genuine tradeoff. The question is whether the rule widens or narrows the gap between data-mature incumbents and everyone else. Higher regulation tends to entrench incumbents. Only 7% of firms describe their data as fully ready for AI, and 95% of pilots fail to reach production, meaning disclosure, audit, and reporting requirements fall hardest on firms least equipped to absorb them. Mandatory frontier-model approval widens the moat for the four or five firms that can absorb the overhead. While lower regulation preserves a more open competitive field, the existing data and capability gaps mean that smaller rivals already face a steep climb. Standardized frameworks like the NIST AI Risk Management Framework and shared infrastructure programs like California’s CalCompute can reduce per-firm compliance costs and attract smaller firms.

Laying out all four dimensions along the regulation-competitiveness axis forces the debate to consider tradeoffs that current legislative drafting frequently ignores. A bill that scores well on harm reduction can still fail on innovation environment or competitive concentration.

Stage 3: Four Design Tests

Finally, any policy that survives the threshold and tradeoff stages should be evaluated against four design tests: targeting, counterfactual durability, adaptation, and enforceability.

Targeting measures whether the rule is aimed at the actor with the actual capability to mitigate the harm. A rule holding a deployer responsible for harm that only a developer can prevent, or the reverse, is regulatory theater. The EU AI Act’s tiered targeting at the system level is one model, classifying by risk category and assigning specific obligations across the entire value chain from developer to deployer. California SB 53’s developer-focused obligations sit at the other end, placing almost all responsibility on those who built the system. Texas’s TRAIGA imposes liability on whichever actor demonstrates harmful intent.

Counterfactual durability tests whether harm would occur anyway through unregulated substitutes. Banning frontier-model deployment within a state may not stop the underlying capability but merely shift it to jurisdictions with looser rules or to open-source alternatives. A national rule that does not contemplate the open-source alternative has a built-in evasion route. The 2026 NDAA’s “Covered AI” provisions targeting DeepSeek and High Flyer explicitly recognize this dynamic by prohibiting the two systems from operating within U.S. defense networks, rather than attempting to regulate adversary jurisdictions that federal rulemaking cannot reach.

Adaptation considers whether the rule includes sunset clauses, sandbox carve-outs, or mandatory revision cycles. Colorado’s automatic-repeal provisions and Utah’s delayed sunsets were both actively used to retreat from omnibus regulation. Texas’s 36-month TRAIGA sandbox offers a more developed approach, and Connecticut’s SB 5 modeled its own sandbox on TRAIGA.

Enforceability assesses whether the agency charged with enforcing the rule can actually administer it. Three subfactors matter: the technical capacity to evaluate compliance, predictable and clear standards, and clear outcomes when the rule is applied. Current AI legislation frequently fails on all three. Colorado’s AI Act was stayed by a federal court in April 2026, and the state attorney general delayed enforcement of the replacement statute until rulemaking could be completed. Rules designed for the enforcement capacity in place, such as the CAISI voluntary framework or attorney-general guidance, deliver protection in proportion to administrative capacity rather than legislative ambition.

Cutting across all four tests is the jurisdictional overlay. Frontier-model oversight, critical-infrastructure cybersecurity standards, and much of workforce policy require federal action or multistate compacts. Deepfakes, child sexual abuse material, election fraud, automated hiring discrimination, and procurement transparency more cleanly belong to the states.
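
To make the ordering of these questions concrete, the sketch below encodes the three stages as a simple checklist in Python. It is purely illustrative: the class, field names, and sample inputs are invented for this example, and it does not represent any statute or the authors' scoring method. It shows only that the threshold question comes first, the cost-benefit dimensions second, and the design tests last.

```python
# Purely illustrative sketch of the three-stage test as a checklist.
# All names and inputs are hypothetical; this is not a statutory standard
# or the authors' scoring method.
from dataclasses import dataclass, field


@dataclass
class ProposalAssessment:
    name: str
    # Stage 1: if "AI" were replaced with "software," would existing law already cover the harm?
    covered_by_existing_law: bool
    # Stage 2: verdict per dimension, e.g. {"harm reduction": "benefits exceed costs", ...}
    dimensions: dict = field(default_factory=dict)
    # Stage 3: pass/fail per design test, e.g. {"targeting": True, "enforceability": False}
    design_tests: dict = field(default_factory=dict)


def evaluate(p: ProposalAssessment) -> str:
    # Stage 1: threshold. If existing law suffices, the instrument is guidance, not legislation.
    if p.covered_by_existing_law:
        return f"{p.name}: issue interpretive guidance; new legislation would duplicate existing law."
    # Stage 2: make the regulation-vs-competitiveness tradeoff explicit for each dimension.
    failing_dims = [d for d, verdict in p.dimensions.items() if verdict == "costs exceed benefits"]
    if failing_dims:
        return f"{p.name}: fails cost-benefit analysis on {', '.join(failing_dims)}."
    # Stage 3: each design test is pass/fail; a single failure sends the bill back for redesign.
    failing_tests = [t for t, passed in p.design_tests.items() if not passed]
    if failing_tests:
        return f"{p.name}: needs redesign; fails {', '.join(failing_tests)}."
    return f"{p.name}: survives all three stages."


# Hypothetical usage: an automated-hiring audit bill already covered by Title VII and the ADA.
print(evaluate(ProposalAssessment(
    name="Hypothetical automated-hiring audit bill",
    covered_by_existing_law=True,
)))
```

The point of the sketch is the short-circuit structure: a proposal that fails an earlier stage never reaches the later ones, which is the discipline the framework asks legislators to adopt.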

How the Framework Cuts Through Live Proposals

Applied honestly, the framework produces sharper verdicts than the current debate allows.

California’s SB 53 partially clears the threshold test. Catastrophic-risk reporting from large frontier developers addresses a gap that California authorities do not fully reach, though several adjacent provisions duplicate existing authority. Gains in transparency and adoption durability are offset by the regulatory cliff at the $500 million revenue and 10²⁶ FLOP thresholds, which can shift compute decisions strategically rather than safely. The bill’s most consequential weakness is that it places its obligations on developers even though the catastrophic harms it contemplates arise primarily during deployment. The CalCompute consortium is its strongest provision, a positive-sum intervention that addresses competitive concentration head-on.

New York’s RAISE Act operates on a similar theory, with stricter provisions, including 72-hour incident reporting (versus California’s 15 days) and a new state oversight office with rulemaking authority. Chapter amendments narrowed the scope considerably, giving the RAISE Act a cleaner threshold case than SB 53, but the cost analysis turns almost entirely on how the oversight body exercises rulemaking authority, a structural risk the bill does not constrain. The same targeting problem as SB 53 remains.

Federal preemption fails on different grounds depending on its scope. Broad preemption fails the threshold test outright. State law on AI-generated CSAM is necessary, and preempting it leaves a real gap that the order’s carve-outs only partially close. Narrow preemption of conflicting compliance regimes might pass, but only if paired with a federal floor doing the work the preempted state laws did. The Senate’s 99-1 vote in mid-2025 to strip the moratorium from the budget reconciliation bill suggests that the political system has already reached a similar conclusion.

Mandatory frontier-model approval simultaneously fails the targeting, counterfactual durability, and enforceability tests. Most AI harms originate in deployment, not the model-release decision. Open-source alternatives shift capability outside any regulated perimeter, and no federal agency yet possesses the evaluation capacity the statute would require. A narrowed version focused on CBRN and offensive-cyber capability evaluation, modeled on the NDAA’s AI Futures Steering Committee and CAISI’s expanded pre-deployment evaluation agreements, would pass. The Mythos/Glasswing precedent illustrates the operative model of voluntary disclosure to the Cybersecurity and Infrastructure Security Agency (CISA) and a private-sector coalition before public release, producing a coordinated defensive response without requiring new statutory authority or hampering global competitiveness.

The affirmative model that emerges from applying the framework is defined by a pattern rather than by a single bill. Interpretive guidance from attorneys general and relevant agencies comes first, as Attorney General Tong’s Connecticut advisory and Attorney General Campbell’s earlier Massachusetts advisory demonstrate, doing the threshold work that a substantial share of state AI legislation otherwise duplicates. Narrow legislation follows only where the advisory leaves real gaps and where the gap is genuinely state-level in character—deepfake CSAM, AI-generated election content, automated decision disclosure in benefits administration, and companion-chatbot protections for minors. Sandboxes carry the higher-risk uses on the TRAIGA model. The pattern is replicable across states without locking any one of them into a regime whose enforcement and interpretation will not be testable for years.

Beyond the procedural pattern of guidance-then-legislation, the framework points toward an affirmative substantive agenda. The next twelve months will set the pattern for the decade. The Department of Justice intervened in federal court against Colorado’s 2024 AI Act. California and New York laws are in force. Texas is operating under TRAIGA. Connecticut just enacted a comprehensive framework. And mandatory frontier-model approval is being seriously discussed in Congress for the first time.

The stakes extend beyond domestic compliance. The same decisions position the United States against EU regulators applying the AI Act, Chinese capability development unfolding under state direction, and frontier models whose safety and security implications are now national-security questions in their own right.

The legislative volume is high, but a shared test for distinguishing good policy from bad has been absent from the debate. The framework offered here will not, on its own, resolve any specific dispute. Its purpose is to ensure that the questions before state legislators, members of Congress, and federal agencies are the right questions, asked in the right order, before another five hundred bills are introduced and a patchwork is hardened in place that no one designed and few defend.

The renowned Federalist Papers of the late 1780s, 85 essays by Alexander Hamilton, James Madison, and John Jay, wrestled with this exact debate. Their authors concluded that a stronger federal government was necessary to manage national and international issues while preserving state powers. The balance of power between federal and state governments, they argued, was the best way to prevent tyranny, manage national affairs such as foreign policy and commerce, and preserve state autonomy over local affairs. As Madison warned in Federalist No. 51, “Ambition must be made to counteract ambition.”

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

