Lawmakers across 21 states proposed more than 50 bills during the 2025 legislative session that addressed the use of artificial intelligence in education, according to an analysis from the Center for Democracy and Technology, a nonprofit focused on technology policy and consumer rights.
Maddy Dwyer, who wrote the analysis and is a policy analyst for the organization, described state lawmakers’ attention to the topic as “unprecedented.”
It shows just how much the fast-evolving technology has “captured” the public’s attention, Dwyer said. People are seeing AI’s potential to reshape the way we work and live, but they’re also concerned about the “catastrophic” risks of the technology, she said.
Indeed, in the three years since OpenAI’s generative AI tool ChatGPT publicly launched, AI has become one of the most talked-about topics in K-12 schools. Advocates for the technology have talked up its potential to transform education, while skeptics have raised concerns about the downsides of relying on it.
Districts have been trying to find a balance between the potential benefits and risks of the technology. They’ve established policies or guidelines for using it responsibly and offered professional development on its application for teaching and school management.
More than half of teachers now incorporate AI into instruction in some way, according to data from the EdWeek Research Center, a trend driven in part by the technology's integration into a wide variety of products and services that schools use.
As more districts and educators incorporate AI into instruction, “it’s critical that states create appropriate guardrails and guidance,” Dwyer said.
The 53 bills each focused on at least one of five policy categories, according to the Center for Democracy and Technology analysis:
- Advancing AI literacy for students and professional development for teachers (15 bills);
- Requiring the creation of guidance or guidelines on the responsible use of AI in classrooms (13 bills);
- Creating task forces to assess the state of AI in education and its effects (12 bills);
- Prohibiting specific AI uses in school, such as those related to mental health support (8 bills); and
- Addressing AI-generated nonconsensual intimate imagery (5 bills).
The four bills that passed, according to the CDT analysis, include:
- Two from Illinois, one focused on creating guidance and a task force and the other on updating the definition of cyberbullying to include nonconsensual intimate imagery.
- One from Louisiana, promoting AI literacy.
- One from Nevada, prohibiting school-based health workers and counselors from using AI in providing services to students.
The State Education Technology Directors Association, a nonprofit membership organization representing state ed-tech leaders across the country, has been tracking state education agencies’ actions when it comes to AI in education and found similar trends, said Ji Soo Song, the association’s project and initiatives director.
The organization has also found that state education agencies are building their own capacity to use and evaluate artificial intelligence, leading professional learning initiatives about the technology, and implementing new grants to support its appropriate use, Song said.
Legislation lacks emphasis on requiring transparency from tech companies
One policy area Dwyer believes is missing from the conversation is transparency requirements for ed-tech vendors. Without such requirements, individual schools and districts are left to ask vendors for that information on their own, which is challenging, she said.
Vetting AI tools and providers is something SETDA members continue to ask questions about, Song said. While SETDA has developed a procurement guide, “we recognize that there may be additional AI-specific questions that states/districts can be asking of vendors,” he said.
The 2026 legislative session is just beginning, but Dwyer said she expects more bills proposing to tackle AI in education.
“States are rising to the occasion,” she said, especially in the absence of federal action.
The federal government has given a lot of attention to AI in education, but that attention has mostly focused on expanding the technology's use rather than establishing guardrails or regulations around its appropriate use.
President Donald Trump issued an executive order on AI literacy and integration within the first few months of his second term. As part of that order, the U.S. Department of Education announced advancing the use of AI in education as a grantmaking priority, and first lady Melania Trump launched the Presidential AI Challenge.
There hasn’t been as much movement in Congress on legislation addressing the use and impact of AI in education.
But there is interest. A recent U.S. House of Representatives hearing explored federal policy gaps around AI in education, and some Democrats in a Senate committee hearing about kids’ screen time used their allotted time to ask witnesses about the harms of AI.
Trump in December also signed an executive order aimed at blocking states from crafting their own regulations for AI, but some experts have said it lacks legal authority.
Still, states are going “full-steam ahead,” Dwyer said. “There’s a lot of competing priorities in the education space. In some ways, it’s nice to know state legislatures are trying to tackle the issue.”
