U.S. National Security Risks with Artificial Intelligence
On December 11th, President Trump signed an executive order aimed at limiting state regulation of artificial intelligence and establishing a task force within the Department of Justice to sue states that pass their own AI laws. In recent years, states have introduced AI bills to protect their residents from emerging risks associated with the technology. This executive order, which prioritizes innovation over Americans’ safety, is misguided because it would weaken oversight and accountability of the technology.
U.S. national security is challenged both internally and by foreign adversaries who employ AI in ways that threaten U.S. infrastructure and citizens. In 2023, former President Joe Biden signed Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” in an effort to guide responsible innovation. Specifically, the order encourages the evaluation of AI systems, policies aligned with the administration’s equity goals, and the responsible development of AI technology.
In January 2025, the Trump administration reversed the Biden administration’s risk-mitigation approach with Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order marked a significant shift toward prioritizing innovation and American competitiveness by reducing regulation and accelerating infrastructure development. In May, OpenAI CEO Sam Altman testified before the Senate at a major hearing on AI competitiveness, urging a hands-off approach to AI regulation. He made clear that he supports “sensible regulation” that would not stifle the innovation or growth of the AI industry. Nevertheless, the growth of manipulated online content, AI-powered cyberattacks, and the potential development of autonomous AI weapons are serious concerns for U.S. national security and should be addressed at the federal level.
Political figures have used AI-generated content to push specific agendas and manipulate the public. Ahead of Indonesia’s 2024 presidential election, the deputy chairman of Golkar, Indonesia’s oldest and largest political party, posted a deepfake video of the late Indonesian dictator Suharto declaring his support for Golkar’s candidates. This is concerning because fabricated content designed to bolster support for a party or individual can mislead voters. Twenty-four U.S. states have enacted legislation requiring AI-generated campaign ads to disclose that what is said or depicted did not actually occur. To achieve transparency and promote an informed electorate, similar legislation should be passed at the federal level.
AI also threatens national security when malicious actors use it to design cyberattacks, which can give them access to sensitive information and the ability to steal currency, disrupt or destroy systems, and more. Anthropic, an AI firm, reported in August 2025 that a cybercriminal had used the company’s Claude Code tool to hack and extort 17 companies, including a health care institution and a defense contractor. Personal data was stolen, including Social Security numbers, bank details, and health information. To strengthen cybersecurity defenses, Sen. Mike Rounds (R-SD) and Sen. Kirsten Gillibrand (D-NY) introduced the Cyber Conspiracy Modernization Act. As Senator Rounds stated, “We need more people working to secure cyberspace as well as harsher penalties for those perpetrating these crimes.” If passed, the legislation would extend maximum penalties from ten years to life in prison, depending on the severity of the crime. This legislation is necessary to safeguard cyberspace.
Another potential risk is lethal autonomous weapons systems (LAWS), which can identify a target independently and destroy it without human control. As AI military technology continues to advance, concerns have grown about the development of these weapons. They could threaten security because they can misidentify targets. In warfare, the distinction between civilians and combatants is vital because deliberately harming civilians is a war crime. AI weapons that kill without human decision-making raise the question of who is held accountable for the outcomes: those who design a system, those who sell it, or the end user who deploys it? Although the U.S. does not currently deploy such weapons, advances in AI military technology have sparked debate over whether LAWS would be beneficial or detrimental to modern warfare.
The Trump administration’s latest executive order confirms a stronger shift toward deregulation and the prioritization of U.S. leadership in AI. But as this technology continues to accelerate, we must adopt strong regulations to address emerging threats. While leading in the AI race is important, sensible safety measures are essential to protect U.S. national security.