Artificial Intelligence: What You Need to Know


By K. Lee Lerner, M.Ed.

Educators and librarians are now on the front lines of the generative artificial intelligence (AI) revolution.

Advances in AI came sporadically after the concept was first put forward in 1956 by research scientists who organized what was termed the Dartmouth Conference to discuss the creation of machines that could perform tasks thought to require human intelligence. Progress in programming languages and machine-learning algorithms in the 1980s set the stage for explosive industrial integration of AI, especially with the subsequent development of faster, higher-capacity computers.

The creation of neural networks integrating stacks of algorithms and computing devices made possible modern deep learning and natural language processing, which were foundational to emerging AI.

The most recent revolution in generative AI, programs that can create content based on user prompts, is another advance that promises sweeping change and new challenges for many professionals, including educators and librarians.

The first challenge is to explain to students what AI is and what it isn’t. Then the challenge becomes how to critically use AI while simultaneously recognizing its limitations.

Generative AI uses machine-learning models to create text, images, and sounds. There are many types of generative models, such as generative adversarial networks (GANs) and autoregressive (AR) models, composed of neural networks that create new data patterned on the data set used to train and then tweak the program.

The exact design of ChatGPT, for example, remains confidential, but it’s a large language model (LLM) program capable of generating text based on human interaction. Using components of both supervised learning and reinforcement learning from human feedback (RLHF), ChatGPT and similar AI models emulate cognitive tasks, like thinking and writing.

It’s important for students to learn that AI networks do not think―at least not in the way humans think. AI networks and algorithms emulate thought (e.g., words, sentences, pictures) by rapidly using, and then continuously refining, statistics to link words and pixels into coherent sentences and pictures.

ChatGPT and similar models can perform valuable work. ChatGPT’s answers are based on a statistical distribution of words and word sequences (often called tokens), while humans, depending on their development level, background, and a range of cognitive abilities, selectively choose words and construct sentences.
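The statistical idea behind that difference can be shown with a toy sketch. The following Python example is only an illustration, not ChatGPT's actual design (which remains confidential): it counts which words follow which in a tiny training text, then generates new text by sampling each next word in proportion to those counts. Real LLMs use vastly larger data sets and neural networks rather than simple word counts, but the principle of choosing the next token from a learned statistical distribution is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up "training set" for illustration only.
training_text = "the cat sat on the mat the cat ate the food"
words = training_text.split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # no word ever followed this one in training
        return None
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short "sentence" starting from "the".
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Note that the program can only ever emit words (and word orders) it has seen before, which is a miniature version of why an AI model's output is only as good, and only as current, as its training data.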

This difference means that while ChatGPT generates predictable text (a fact key to the development of programs to detect AI use), it doesn’t necessarily generate reliable text or the text the user desires.

Regardless, when ChatGPT was released by its creator, OpenAI, it marked a significant improvement over its AI predecessors in the quality of its answers and its use of language. Just as the World Wide Web made the internet broadly usable, the release of ChatGPT marked a turning point in the utility of AI.

While some educational professionals fear AI developments will replace the need for teachers and weaken the need to develop critical-thinking skills, I suggest the opposite is true. Recent advances point to an even greater need for teachers and students to prepare for a world where continual development of critical- and contextual-thinking skills is foundational and essential.

Unlike most industrial AI programs that automatically perform tasks, with generative AI, the human interactions―the questions asked and directions given by users―are integral to the AI output. Human interactions are also essential to minimize harmful, untruthful, and/or biased content. That means students need critical-thinking skills more than ever to know the right questions to ask and when and how to spot errant information.

Trained on large amounts of available online text using machine-learning techniques, the output of ChatGPT and similar AI programs is only as good as its data training set. Moreover, these programs still rely on human input in the RLHF stage. Such input is not always accurate and can suffer from language and cultural differences in those offering feedback.

Many educators fear that easier-to-use AI programs will spur plagiarism and diminish student writing skills, but this need not be the case. While programs to detect AI use by students are increasingly available, they are not yet widely tested, and all such programs can generate false reports. Teachers have, however, an old-technology arrow in their quiver: the in-class writing assignment or sample that offers a comparison of writing styles, vocabulary use, and depth of knowledge. Another favorite check is to ask students to write a summary of a report being submitted or explain how they conducted their research. Both are constructive checks on student performance that offer teachers a chance to improve student outcomes rather than simply checking for plagiarism.

Most importantly, educators should help students understand that AI’s capacity to produce correct answers should be thought of as an inverted pyramid. For events or topics about which much has been written, AI programs have a lot to draw on in formulating answers. The top of the inverted pyramid is wide for topics on Ancient Rome or Shakespeare, but narrow to nonexistent for more-recent advances (or corrections) in science, medicine, or on contemporary topics in the news that often interest and motivate students the most.

At least with current AI versions, educators can think of them as more sophisticated, Wikipedia-like resources, because they carry many of the same perils. Such open-source information is usually not vetted by experts, nor is it rendered in age-appropriate language. Teachers and librarians are still needed to help students find and discern trusted information (especially when the composition of AI training sets remains confidential) and to supply vitally important context that helps students and other users more deeply appreciate the information they find.

ChatGPT is but the first wave of a tsunami that the AI revolution promises to bring to many sectors of society, including the educational community. As with many revolutions, educators and students will be challenged to both master the technology and integrate it effectively into teaching and research.

About the Author

K. Lee Lerner is a writer, editor, and aviator who, along with Brenda Wilmoth Lerner, is the editor of the Gale Encyclopedia of Espionage, Intelligence, and Security; Climate Change: In Context; and many other award-winning books and articles on science, technology, and a range of global issues. A full bio and list of his work may be found online.
