We can evaluate AI-generated content in much the same ways as we evaluate other sources.
However, some of the questions we typically ask about sources may be more difficult to answer when working with generative AI, because AI tools do not disclose the processes they use or the information they consult to arrive at the answers they provide.
Tests such as the SIFT method of source evaluation (or the Four Moves) can be helpful in determining whether the generated information you've found is reliable.
Before sharing or using information, stop to check your emotional response to it. Also be sure to consider whether you know and trust the source before reading further or sharing it.
According to Mike Caulfield, digital literacy expert and creator of the SIFT method, "you want to know what you’re reading before you read it" so that you make the most of your time in the research process. Knowing about the source can help you judge how trustworthy the information is. Take a moment to consider the expertise and agenda of the person or organization behind the information.
You can use lateral reading to investigate what other trusted sources say about the source you're interested in. Google and Wikipedia can be helpful for this step!
Lateral reading can be helpful for checking the claim itself, too. Look for other trusted sources that report on the same information to see whether there is consensus, and rely on the best coverage you can find.
It is good practice to try to locate the original source and context of quotes and information because much of what exists on the internet has been intentionally stripped of context.
Check out the video below for a more in-depth explanation of evaluating AI-generated content: