AI hallucinations gone wrong as Alaska uses fake stats in policy

The combination of artificial intelligence and policymaking can occasionally have unforeseen repercussions, as seen recently in Alaska.

In an unusual turn of events, Alaska legislators reportedly used inaccurate, AI-generated citations to justify a proposed policy banning cellphones in schools. As reported by The Alaska Beacon, Alaska’s Department of Education and Early Development (DEED) presented a policy draft containing references to academic studies that simply did not exist.

The situation arose when Alaska’s Education Commissioner, Deena Bishop, used generative AI to draft the cellphone policy. The document produced by the AI included supposed scholarly references that were neither verified nor accurate, and it did not disclose that AI had been used in its preparation. Some of the AI-generated content reached the Alaska State Board of Education and Early Development before it could be reviewed, potentially influencing board discussions.

Commissioner Bishop later claimed that AI was used only to “create citations” for an initial draft and asserted that she corrected the errors before the meeting by sending updated citations to board members. However, AI “hallucinations”, the plausible-sounding but fabricated information that generative models can produce, were still present in the final document the board voted on.

The final resolution, published on DEED’s website, directs the department to establish a model policy for cellphone restrictions in schools. The document included six citations, four of which appeared to come from respected scientific journals; in reality, the references were entirely fabricated, with URLs that led to unrelated content. The incident highlights the risks of relying on AI-generated material without proper human verification, especially in policy decisions.

Alaska’s case is not an isolated one. AI hallucinations are increasingly common across professional sectors. Some legal professionals, for example, have faced consequences for citing fictitious, AI-generated cases in court. Similarly, academic papers drafted with AI have included distorted data and fake sources, raising serious credibility concerns. Left unchecked, generative AI models, which are designed to produce content based on statistical patterns rather than verified facts, can easily generate misleading citations.

The reliance on AI-generated data in policymaking, particularly in education, carries significant risks. When policies are developed based on fabricated information, they may misallocate resources and potentially harm students. For instance, a policy restricting cellphone use based on fabricated data may divert attention from more effective, evidence-based interventions that could genuinely benefit students.

Furthermore, using unverified AI data can erode public trust in both the policymaking process and AI technology itself. Such incidents underscore the importance of fact-checking, transparency, and caution when using AI in sensitive decision-making areas, especially in education, where the impact on students can be profound.
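As a purely illustrative sketch (not something described in the reporting), even a simple automated first pass could have flagged citation links that do not resolve. The check_citation_urls helper below is hypothetical, uses only Python's standard library, and a URL that does resolve still proves nothing about whether the cited study exists or supports the claim; human review remains essential.

    # Illustrative sketch only: flag citation URLs that fail to resolve.
    # This is a cheap first pass before human review, not a verification of the citation itself.
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError

    def check_citation_urls(urls, timeout=10):
        """Return (url, status) pairs; status is an HTTP code or an error message."""
        results = []
        for url in urls:
            req = Request(url, headers={"User-Agent": "citation-checker/0.1"})
            try:
                with urlopen(req, timeout=timeout) as resp:
                    results.append((url, resp.status))
            except HTTPError as err:      # e.g. 404 for a fabricated DOI or dead link
                results.append((url, err.code))
            except URLError as err:       # unreachable host, malformed URL, etc.
                results.append((url, f"error: {err.reason}"))
        return results

    if __name__ == "__main__":
        # Hypothetical input; in practice the URLs would be extracted from the draft document.
        sample = ["https://doi.org/10.0000/example-fabricated-citation"]
        for url, status in check_citation_urls(sample):
            print(f"{status}\t{url}")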

Alaska officials attempted to downplay the situation, referring to the fabricated citations as “placeholders” intended for later correction. However, the document with the “placeholders” was still presented to the board and used as the basis for a vote, underscoring the need for rigorous oversight when using AI.

(Photo by Hartono Creative Studio)
