AI, ChatGPT, and the Library: ChatGPT and Information Literacy

This guide provides a starting place for MCC faculty, staff, and students to learn how generative AI intersects with information literacy and how to use it responsibly.

Welcome

Welcome to the MCC Library guide on AI and Information Literacy!

Generative artificial intelligence (AI) is a relatively new technology that is developing quickly. Like the internet in general, AI tools such as ChatGPT or Google Bard are neither good nor bad when it comes to finding and using information. Instead, they represent a new way in which we can interact with information.

This guide is intended to help you critically engage with generative AI tools, with a focus on how they intersect with information literacy.

Students, please first confirm with your professor that using ChatGPT or other content produced by generative artificial intelligence (AI) is acceptable before using it for any course assignments.

ChatGPT and Information

Where does the information come from?

ChatGPT was trained on a large body of text, which allows it to generate text in response to a prompt. Partial lists of the training data exist, and ChatGPT will provide a partial list when queried. However, the full body of text that trained ChatGPT is not publicly known.

When ChatGPT answers a question, it does not automatically provide a reference for where the information came from. This is because it generates text by predicting likely words based on patterns across its entire training data, so a response usually doesn't come from a single source. As a result, you typically cannot trace a response back to a parent source or know where the information originated.

Can ChatGPT provide references?

Yes, when prompted, ChatGPT can provide references. However, these references may not be where the information actually came from and, more importantly, may not refer to real sources. Despite sounding plausible, ChatGPT can easily hallucinate citations. This can also be an issue with other generative AI tools, like Bard.

For example, ChatGPT provided this citation and summary in response to a prompt about needing an academic journal article on the topic of amphetamine use and social determinants of health:

"Here's an example of a peer-reviewed journal article on amphetamine abuse and its intersections with social determinants of health:
Stoové, M. A., Dietze, P. M., Jolley, D., Fry, C. L., Brewer, N., & Hellard, M. E. (2008). Increasing access to sterile injecting equipment and syringe exchange in a rural area with a high prevalence of bloodborne viruses. Australian Journal of Rural Health, 16(2), 99-106. 
This article describes a study conducted in a rural area of Australia with a high prevalence of bloodborne viruses, including those associated with amphetamine use..." OpenAI. (2023, February 17). [ChatGPT response to a prompt about locating an academic journal article]. https://chat.openai.com/

Although the summary sounds plausible and the citation looks realistic, this article does not exist. The journal exists, as does the lead author. However, Stoové has not published in this journal.

[Image: A tabby cat sitting on an open book on top of a desk. Created with Bing AI Image Creator.]

Checking ChatGPT for Credibility

Evaluating all information for credibility is highly recommended, regardless of where you find it. This is especially true for generative AI responses, given the issues described above. There are many tools, checklists, and strategies to help you evaluate your sources, but none of them are black-and-white tests for determining whether a source is credible and whether you should use it.

Here are two strategies for evaluating information provided by generative AI tools:

1. Lateral Reading

Don't take what ChatGPT tells you at face value. Check whether other reliable sources contain the same information and can confirm what ChatGPT says. This could be as simple as searching for a Wikipedia entry on the topic or doing a Google search to see whether a person ChatGPT mentions exists. Consulting multiple sources in this way is the core of lateral reading and helps you avoid the bias of relying on a single source.

Watch Crash Course's "Check Yourself with Lateral Reading" video (14 min) to learn more.

2. Verify Citations

If a generative AI tool provides a reference, first confirm that the source exists. Try copying the citation into a search tool like Google Scholar or the Library's OneSearch, do a Google search for the lead author, or check for the publication in the Library's publications finder.

Second, if the source is real, check that it actually contains what ChatGPT says it does. Read the source or its abstract.

Have Questions?

Let's learn more about ChatGPT and generative AI together! The MCC librarians are not experts on AI, but we are happy to help you explore how it intersects with information literacy. Please contact us to continue the conversation and find ways to collaborate.

Creative Commons License

Attribution: Adapted from AI, ChatGPT, and the Library Libguide by Amy Scheelke for Salt Lake Community College, licensed CC BY-NC 4.0, except where otherwise noted.

Copyright © Manchester Community College | 1066 Front Street, Manchester, NH
Phone: (603) 206-8150