Inside Gale’s Measured Integration of Generative AI

By Darren Person

As most students and many educators have embraced various academic applications of generative AI, some education companies have rushed to integrate the promising new technology into their products and services—whether it makes sense or not. As we’ve seen in the two years since the launch of ChatGPT, generative AI is a powerful tool, but it’s not a one-size-fits-all solution. Gale’s experience using machine learning to enable text and data mining in Gale Digital Scholar Lab and to bring curriculum-aligned content into classroom instruction with Gale In Context: For Educators has taught us to take a thoughtful approach to incorporating generative AI. Our AI policy is guided by the principle that our application of artificial intelligence will be intentional, conscientious, and progressive. Here’s what that has looked like in practice so far.

When I was at the ALA Annual Conference last summer, Gale conducted a survey among librarians to discover what challenges they felt AI could help them address. Respondents showed widespread interest in applying AI to improve learner outcomes and to lighten the burden on teachers, but they made clear they did not want answer-oriented AI tools that replace actual research or bypass students' opportunities to develop critical-thinking skills.

Armed with this knowledge of what educators and librarians wanted and didn’t want, we convened a workshop that brought together teams from different areas of the company, including product and technology, to brainstorm how our current or future products could potentially use AI to solve our customers’ most pressing problems.

Since that workshop, we’ve been developing several AI applications. Gale’s conscientious, design-thinking approach is based not on offering tools that appear cutting-edge but on careful development supported by extensive testing in limited environments. For example, when we were recently faced with the task of translating an enormous amount of online content into a number of foreign languages, we investigated using a large language model (LLM). However, our testing revealed that the “legacy” approach of machine translation worked better than AI in its current state, so we chose the effective technology over the flashier one.

When we do release new generative AI tools, they will be in beta so that the researchers, educators, and students who use them every day can have a voice in how they work. Our goal is not to rush a product to market, but rather to continue the collaborative relationship we have built with our users.

Data breaches like the recent one at PowerSchool have the potential to affect millions of students and instructors. In 2024, 20 states passed 29 cybersecurity laws that affect how schools must safeguard learner data. Gale has always been protective of data security and our users' privacy, and one of the reasons we've taken a measured approach to integrating AI is to be certain that any new tools are as secure as they can possibly be.

As we work to protect the data of everyone who uses Gale’s products and services, we prioritize the intelligence of scholars over artificial intelligence. LLM chatbots are certainly a vast improvement over a web search engine that spits out millions of links, but alongside helpful information, they can also deliver “hallucinations” in the form of made-up facts. By contrast, an article published by a recognized scholar in a peer-reviewed journal with clear journalistic and academic standards is valuable intellectual property that can be cited, reproduced, and corrected, if need be. We have heard loud and clear that our third-party partners don’t want their content ingested into a large language model, and, as part of our mission to elevate educators and education, we take protecting their IP extremely seriously.

To safeguard academic integrity, we're exploring generative AI techniques such as retrieval-augmented generation (RAG), which harnesses the general conversational power of an LLM while grounding the chatbot's responses in specific, academically relevant sources retrieved at query time, such as peer-reviewed articles. Because RAG draws only from that curated pool, it can deliver information exclusively from authoritative, bylined articles published to clear journalistic and academic standards, rather than from the open web.
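To make the idea concrete, here is a minimal sketch of the retrieval-and-grounding step behind RAG. The corpus, the term-overlap scoring, and the prompt template are illustrative stand-ins, not Gale's implementation; a production system would use a vector index and pass the assembled prompt to an LLM for generation.

```python
# Illustrative RAG retrieval sketch. CORPUS, retrieve(), and
# build_prompt() are hypothetical stand-ins for a curated pool of
# peer-reviewed articles; a real system would use embeddings and a
# vector store instead of simple term overlap.
from collections import Counter

# Stand-in corpus: each entry represents a vetted, citable article.
CORPUS = {
    "doi:10.1000/a1": "photosynthesis converts light energy into chemical energy in plants",
    "doi:10.1000/b2": "the french revolution reshaped european political institutions",
    "doi:10.1000/c3": "mitochondria produce chemical energy for the cell through respiration",
}

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank article IDs by term overlap with the query; return top k."""
    q = Counter(tokenize(query))
    scores = {
        doc_id: sum(min(q[t], c) for t, c in Counter(tokenize(text)).items())
        for doc_id, text in CORPUS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model: it may answer only from the retrieved sources."""
    sources = "\n".join(f"[{d}] {CORPUS[d]}" for d in retrieve(query))
    return (
        "Answer using ONLY the sources below, citing their IDs. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{sources}\nQuestion: {query}"
    )

print(retrieve("how do mitochondria produce chemical energy"))
# → ['doi:10.1000/c3', 'doi:10.1000/a1']
```

The key design point is that the prompt instructs the model to answer only from retrieved, citable sources, which is what constrains a RAG chatbot to authoritative content and lets it decline when the corpus lacks an answer.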

This is a snapshot of Gale’s AI journey so far. The technology is constantly evolving, and Gale’s policy will evolve with it. With our principles as our North Star, we will continually review our use of AI to ensure that it provides rich academic opportunities for researchers, educators, and students.


Darren Person
Darren Person is Executive Vice President & Chief Digital Officer at Cengage Group.
