Washington debates AI safety as cybersecurity risks rise with advanced models

Could AI soon outpace humans in hacking? Officials clash over regulation as cybersecurity threats grow.

Calls for government review of advanced AI models are growing in Washington. Concerns focus on systems like Claude Mythos and the cybersecurity risks they may pose: some officials argue these models could soon surpass humans in exploiting software vulnerabilities. Kevin Hassett, director of the White House National Economic Council, first proposed studying an executive order that would require advanced AI systems to be 'proven safe' before release. The idea drew criticism for potentially slowing innovation, since federal reviews move more slowly than model development cycles.

The White House later adjusted its position. Chief of Staff Susie Wiles stressed support for innovation over heavy bureaucracy. Critics also pointed out that a pre-approval regime could politicise AI development, shifting focus from technical merit to regulatory manoeuvring.

Comparisons to the FDA model were dismissed as misleading: AI systems are information products, closer to software or speech than to physical interventions such as drugs. Another concern was that restricting US providers would not halt global AI progress, as other nations would continue advancing frontier models accessible online.

Experts argue that the current cybersecurity environment is already fragile: insecure systems are frequently exploited by malicious actors even without advanced AI. They suggest a better approach is to strengthen collaborative safety institutions, harden vulnerable systems, and develop targeted safeguards for specific risks.

The debate highlights the tension between safety and innovation in AI regulation. A pre-approval system could delay progress and introduce political influences, so alternative measures, such as improving existing cybersecurity frameworks, are being proposed as more effective solutions.
