AI Hallucinations and Other Erratic Behaviors

4 min read

By K. Lee Lerner

Despite the rapid integration of artificial intelligence (AI) into apps and other software, many of the recently released generative AI programs are still considered experimental.

Understanding the limitations of generative AI programs is an increasingly urgent concern. Users, especially teachers and students seeking to integrate AI responsibly into lesson plans and studies, need to be aware of both the potential and the perils of AI. Employers integrating AI into training programs must also understand the risks of using generative AI without checking outputs against trusted sources.

The time and investment required to train large language model (LLM) programs mean they are often not current with regard to recent developments or changes.

Some applications carry explicit warnings that information may not be current, while others give approximate cutoff dates for information in their training sets. AI applications are improving with regard to incorporating more recent or real-time data, but, as of August 2024, genuine search models with access to the most current information (not all of which may be vetted) remain in various stages of testing.

Because they are experimental and under constant development, some AI programs seem, as users have reported, to lose capability over time. This phenomenon greatly diminishes the reliability teachers need when building assignments that incorporate AI. Applications may, for example, suddenly refuse to perform tasks due to internal programming changes or changes in neural network interfaces. Subtle variations in student input prompts can also generate wildly divergent answers.

Most generative AI applications also come with disclaimers and warnings that they may return false information. Within the AI community, generation of false information is called a “hallucination.” It has been widely reported, for example, that some AI programs simply make up books when asked to provide a bibliography, or give details about books and authors that may be false. In New York, lawyers were sanctioned after a legal brief partially generated by an AI program cited cases that did not exist.

In general, users should be aware that the more complex a prompt or question is, the greater the chance AI will hallucinate, especially at the boundaries of its knowledge.

The algorithms that drive AI are based on statistical word associations. The programs are thus oblivious to the greater meaning of their output and lack awareness of false information as well as of their own errors. Despite these known flaws, derivative apps and other programs that rely on underlying LLMs are plunging ahead, ignoring warnings from some AI developers and ethicists that students are especially vulnerable to AI hallucinations because they lack the knowledge base to discern false information.

Moreover, students’ vulnerability to AI hallucinations is heightened because AI programs are designed to answer authoritatively and to fashion output with an emphasis on clear writing rather than on accuracy of content.
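For readers who want to see the word-association point in concrete terms, the sketch below is a toy illustration only: a miniature, invented corpus and a generator that picks each next word purely from how often words follow one another. It is vastly simpler than any real LLM, and the corpus, the generate function, and its parameters are all made up for illustration. The point it demonstrates is the one above: nothing in this kind of process checks whether the resulting text is true.

```python
# Toy "language model" built from bigram counts over a tiny invented corpus.
# Illustrative only: real LLMs are far more sophisticated, but generation is
# still driven by learned word statistics, not by any notion of truth.
import random
from collections import defaultdict, Counter

corpus = (
    "the library catalog lists the book . "
    "the book was written by a famous author . "
    "the author wrote the book in the library ."
).split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def generate(start: str, length: int = 12, seed: int = 0) -> str:
    """Sample a continuation word by word from the bigram statistics."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = bigram_counts.get(word)
        if not followers:
            break
        words, counts = zip(*followers.items())
        word = rng.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# The output reads as fluent text stitched together from word statistics;
# nothing in the process verifies whether the resulting claims are accurate.
```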

Users can help protect themselves from false information by requesting sources along with answers. Unfortunately, some AI programs do not list sources, and even when they do, the sources listed are not necessarily the ones actually used to compose the answer.

When an AI program cannot cite a source, users should be suspicious of its answers. Rewording prompts and comparing the resulting answers is another way to detect trouble: if the answers to similar prompts differ significantly, the AI program may be hallucinating.
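For readers comfortable with a bit of scripting, the sketch below shows one way the “reword and compare” check could be automated. The ask_model function is a hypothetical placeholder rather than any particular product’s interface, and the similarity threshold is an arbitrary choice meant only to flag answers that diverge sharply; this is a rough aid, not a substitute for fact-checking.

```python
# Rough sketch of the "reword and compare" check described above.
# ask_model() is a hypothetical placeholder for whatever chat interface or
# API a reader actually uses; only the comparison logic is shown here.
from difflib import SequenceMatcher

def ask_model(prompt: str) -> str:
    """Placeholder: return the AI program's answer to the prompt."""
    raise NotImplementedError("Wire this to your own AI tool or API.")

def consistency_check(prompts: list[str], threshold: float = 0.6) -> None:
    """Ask several rewordings of the same question and flag big divergences."""
    answers = [ask_model(p) for p in prompts]
    baseline = answers[0]
    for prompt, answer in zip(prompts[1:], answers[1:]):
        # Crude text similarity; low scores suggest the answers disagree.
        similarity = SequenceMatcher(None, baseline, answer).ratio()
        if similarity < threshold:
            print(f"Possible hallucination: answer to {prompt!r} differs "
                  f"sharply from the first answer (similarity {similarity:.2f}).")

# Example rewordings of one underlying question:
prompts = [
    "Who wrote the novel Middlemarch?",
    "Which author is credited with writing Middlemarch?",
    "Name the writer of the book Middlemarch.",
]
# consistency_check(prompts)  # uncomment once ask_model() is wired up
```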

AI best practices always include fact-checking, so users are advised to check generated material against traditional vetted reference materials.



About the Author


Recognized for his use of language, accuracy, and balanced presentation, K. Lee Lerner’s portfolio covering science and global issues for Cengage includes two ALA RUSA Book and Media Awards and two works named Outstanding Academic Titles. Holding degrees in science, education, and journalism, including a master’s degree with academic honors from Harvard, Lerner has served on the board of advisors for the venerable American Men and Women of Science since 2003 and, along with Brenda Wilmoth Lerner, as coeditor for three editions of the Gale Encyclopedia of Science. He was the contributing editor-in-chief for Gale’s Encyclopedia of Espionage, Intelligence, and Security. A member of the National Press Club in Washington, DC, Lerner is an experienced aviator and sailor who has completed two global circumnavigations. His Academia site consistently ranks among those most frequently accessed by students, scholars, and decision-makers from around the world. Additional information may be found at https://scholar.harvard.edu/kleelerner and https://harvard.academia.edu/KLeeLerner/
