Have a CCOW: A CRAAP alternative for the internet age

The CRAAP Test is a popular tool for teaching students to evaluate information. Its simplicity and ease of understanding make it suitable for teaching in the limited time of a typical one-shot library instruction session. However, it has recently come under criticism for being unequal to the internet age. Critics hold that students treat the CRAAP criteria as a checklist, rarely leaving the source under evaluation to gather more information to aid their assessment, an activity crucial for internet fact-checking. This paper details a new set of evaluation criteria that seeks to retain the convenient conceptual packaging of CRAAP while encouraging an investigative mindset. Students are asked to actively investigate the Credentials, Claims, and Objectives behind the information they are evaluating. A fourth criterion, Worldview, prompts metacognition and builds the self-awareness critical to making good judgements about information. This paper explores the CCOW criteria and details a flipped, online guide and exercise based on CCOW which has been successfully used to teach information literacy (IL) to college students in their first year of study.


Introduction
The CRAAP Test has been a staple of information literacy (IL) instruction since its introduction in 2004. Created by Sarah Blakeslee (2004) at California State University, Chico, the CRAAP Test offers an easy-to-remember acronym for five criteria for evaluating information: currency, relevance, authority, accuracy, and purpose. Not only is this acronym memorable for students, it also provides a convenient package for IL concepts which makes it suitable for teaching in diverse contexts and to learners at many different levels. The CRAAP Test is a common feature of library research guides, and has been taught to student groups ranging from middle school to first year of college to postgraduate studies. It has been packaged into one-shots, expanded into multiple sessions, and used as a foundation for full credit information literacy courses.

Criticisms of CRAAP
Today's students are no more adept at reliably navigating the online world than previous generations for whom the internet was new, perhaps even less so. A 2016 study by the Stanford History Education Group examined the evaluation habits of middle school, high school, and college students. The results were not inspiring. Whether the information under consideration was a social media post, a photograph, or an article, students rarely left the source to investigate it by, for instance, looking up the organization behind the information and evaluating its purpose or bias. The authors noted, 'Many assume that because young people are fluent in social media they are equally savvy about what they find there. Our work shows the opposite' (Wineburg et al., 2016, p. 7).
In a second report in 2017, the Stanford History Education Group compared the information evaluation skills of three groups: PhD historians, professional fact checkers, and Stanford University undergraduate students. They reported, 'Historians and students often fell victim to easily manipulated features of websites, such as official-looking logos and domain names. They read vertically, staying within a website to evaluate its reliability. In contrast, fact checkers read laterally, leaving a site after a quick scan and opening up new browser tabs in order to judge the credibility of the original site. Compared to the other groups, fact checkers arrived at more warranted conclusions in a fraction of the time' (Wineburg & McGrew, 2017, p. 1). The authors compared the fact-checkers' process of gathering outside information and context to help make their decision to "taking bearings" in real-world navigation (p. 13).
The authors note that several states have passed legislation promoting the teaching of IL, but ask, '[W]hat if the problem is not that we're failing to teach media literacy, but that we're teaching the wrong kind?' (Wineburg & McGrew, 2017, p. 44). The CRAAP Test is singled out as an example of the 'wrong kind' of IL instruction: the checklist. The authors hold that 'checklists focus students on a website's most easily manipulated features' (p. 44) and also lock the student into looking only at the site they are evaluating. By contrast:

[Fact] checkers never consulted a list of questions before initiating a search. The elements emphasized by the checklists (what an organization claims on its "About" page, an .org URL, a physical address and contact information) were taken with a grain of salt. That's because the checklist approach cuts searchers off from the most efficient route to learning more about a site: finding out what the rest of the web has to say. This was the biggest lesson we learned from watching these experts: They evaluated unfamiliar websites by leaving them. For fact checkers, the direct route to credibility was indirect. (p. 45)
The CRAAP Test is an adaptation of older checklist methods into a more memorable acronym.
The same year that the CRAAP Test was introduced, Marc Meola critiqued the checklist approach, identifying the line of reasoning behind it as 'Librarians evaluate information using a checklist of criteria, and, therefore, should teach undergraduates how to evaluate information by using a checklist' (2004, p. 332). The set of criteria Meola critiques (authority, accuracy, objectivity, currency, and coverage) was originally used by librarians to evaluate print materials for inclusion in the library collection. Why not, the thinking went, simply adapt that tried-and-true approach to the internet? Meola points out that it is easier to apply a checklist to print materials than to web sources because the location of needed information such as author and publisher has been standardised in print sources (p. 335). Further, the list of questions which tend to be attached to the evaluation criteria and which make up the checklist itself are often less than helpful. As examples, Meola lists a question attached to the accuracy criterion which asks if the information is error free but does not explain to the students how to determine this, and also a question about whether the author provided contact information, as if the inclusion of an email address makes the information reliable (p. 336). More broadly, Meola claims that the checklist approach 'can serve to promote a mechanical and algorithmic way of evaluation that is at odds with the higher-level judgment and intuition that we presumably seek to cultivate as part of critical thinking' (p. 337).
It is this last point which is important to one of today's prominent critics of the CRAAP Test, Mike Caulfield. In an interview with Inside Higher Ed, Caulfield states, 'There's an overwhelming number of things that CRAAP asks you to think about. When you push students to apply it in a real world situation they get overloaded and apply it mechanically in these reductive ways' (Warner, 2019). Caulfield speaks of the CRAAP Test as encouraging 'recognition heuristics' (Fister, 2019), the skill of recognizing bad information based on tells like whether there are advertisements on the page, or whether contact information is given. Besides overwhelming students with things to think about, many of these tells can be easily faked or are not actually helpful. For instance, students are often taught that the possession of a .org domain is an indication of a quality site, when in reality it means nothing more than that the site chose to pay a little extra for a more official-looking domain.
Even the more useful items on a checklist, such as "What are the author's qualifications?", rely on broad contextual knowledge to answer. When skilled researchers make a judgement about an author's credentials, they do much more than check to see if they have "PhD" after their name. For example, suppose an author claimed a PhD from Calamus International University. For those with appropriate background knowledge, this name will raise a red flag. First, they are unlikely to have heard of this university, which alone may prompt them to investigate further. Second, they may know that there are relatively few international universities, and they tend to be formed by international organisations and carry names suggestive of this fact, such as the United Nations University or the World Maritime University. "Calamus" is an unfamiliar word and an unusual one to appear in the name of an international university, since it does not immediately bring to mind associations of either an international organisation or the geographic location where one might be based. 'Most literacies,' Caulfield (2016) states, 'are based not on skills, but on a body of knowledge that comes from mindful immersion in a context'. Most students lack the contextual understanding to wonder about the provenance of the author's PhD in this example.
Caulfield argues that students should instead be taught 'reputation heuristics' (Fister, 2019). 'The truth is in the network,' Caulfield claims (Caulfield, 2017). Students may not possess the requisite knowledge themselves, but they can borrow it by reading laterally to discover what the rest of the web has to say about the source. To return to our earlier example, the top result of a quick Google search for "Calamus International University" is a Wikipedia list of unaccredited universities. This will often be enough to raise for the student the red flag which the knowledgeable researcher's contextual understanding has already raised, and prompt them to investigate further. 'The point is to quickly see if what this source is surprises you,' Caulfield says; '[T]hat surprise can be a powerful key as to what to pay attention to' (Fister, 2019).
Caulfield packages his instruction concepts into the acronym SIFT: Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original source (Caulfield, 2019). As Caulfield states, 'the problem with CRAAP has never been the acronym. … The difference has always been the difference between a narrow list of things to do (SIFT) and a broad list of things to consider and rate (CRAAP)' (Caulfield, 2019). The acronym is a reminder and framing device for the deeper contextual knowledge and actions necessary to accurately evaluate information.
Alaina Bull, Margy MacMillan, and Alison Head, however, believe that SIFT misses a key problem by treating students as the agents in the information ecosystem. They argue that the information itself has agency:

SIFT, like CRAAP, is based on a reactive approach: the individual is an agent, acting upon information objects they find. In today's information landscape, we think it is more useful to invert this relationship and consider the information object as the agent that is acting on the individual it finds. (Bull et al., 2021)
Information seeks and finds the students via algorithmic personalization and the efforts of 'trolls, governments, corporations, and other interest groups' (Bull et al., 2021). Bull et al. (2021) continue, 'Information enters, flows through, and ricochets around the systems [the students] inhabit, fueled, funded, and filtered by data gathered from every interaction'. They urge a shift away from binary questions, such as whether a source is written by an expert, towards open-ended conversation.
Olaf Sundin underlines the idea that information is inseparable from the infrastructure through which it reaches us: 'Some of the most central components of this infrastructure are the algorithms that orchestrate the flow of data in search engines and social media. These algorithms are fed with the data various actors (not least by, but not only, humans) provide them with in order to prioritise, rank, share and filter information on individual, geographical, political, or cultural bases' (Sundin, 2017).
Sundin argues that rather than treating this infrastructure as neutral, we should see it as part of 'how things are made into facts,' for instance 'understanding how a factual statement at the top of the search results ended up there in the first place rather than merely questioning the fact as such based on some kind of general critical judgement' (2017).
Another limitation of the CRAAP Test is that it treats the evaluation of information as a one-way process, in which the student sits above the information and judges it dispassionately, the only requirement for accuracy being the careful application of the CRAAP checklist. Missing is an understanding of information as a two-way interaction, an interplay of ideas in which the evaluator brings their own preconceptions, knowledge, and beliefs to the table, and in which their opinion of the information they are evaluating can be influenced by how well it fits or does not fit with their pre-existing worldview. The ACRL Framework recognizes that metacognition and self-awareness are crucial to IL:

Metaliteracy demands behavioral, affective, cognitive, and metacognitive engagement with the information ecosystem. This Framework depends on these core ideas of metaliteracy, with special focus on metacognition, or critical self-reflection, as crucial to becoming more self-directed in that rapidly changing ecosystem. (ACRL, 2015)

While metacognitive concepts can be inserted into CRAAP by the instructor, there is no criterion in CRAAP that explicitly reminds students to consider their own response to information as part of the evaluative process.
Not all criticisms of the CRAAP Test or the checklist approach may be valid. Using a checklist need not entail focusing only on the website under evaluation, for instance. While students could make a guess at the credentials of a source by looking at the site's "About Us" page and trying to spot tells like "PhD," they can also be taught to investigate by leaving the source and looking up the author or organization. If, as the Stanford History Education Group studies suggest, students tend to evaluate a site based only on the information the site itself provides, this may be a limitation not of the CRAAP criteria but of the way they are taught, and of the limited duration and scope of most IL instruction.
In fact, it is likely that one of the reasons the CRAAP Test has been so widely adopted is that it can be easily packaged within common library instruction limitations of time and scope. In his open access book Web Literacy for Student Fact Checkers, Mike Caulfield details his original method of teaching students to be active, investigative, lateral-reading evaluators, 'Four Moves and a Habit.' The habit is 'check your emotions,' and the moves are 'check for previous work, go upstream of the source, circle back, and read laterally' (Caulfield, 2017). These moves were taught in a curriculum rolled out in a variety of disciplinary courses at ten institutions as part of the American Association of State Colleges and Universities' Digital Polarization Initiative, with positive results (Cole, 2019). However, each class which featured the module needed to commit at least two weeks of class time to it. For many librarians this kind of involvement represents a rare scenario in which institutional or faculty buy-in facilitates deeper-than-usual engagement. More typical at many institutions is the one-shot, in which a professor asks a librarian to be a guest instructor for a single class session to talk about library resources. It is hard to teach good habits in 50 minutes, but that is often the extent of the direct IL instruction which the students receive.
Is there, then, a way to keep the convenient packaging of the CRAAP Test, but focus more deeply on active investigation, lateral reading, and metacognition?

Changes from CRAAP
One attempt to answer this question paired a pre-class online guide and exercise with an in-class discussion to create a flipped instruction module on source evaluation. The first version of the module was structured around the CRAAP Test, with an added emphasis on the active investigation of each of the CRAAP criteria (Tardiff, 2021). After the module was run successfully for several semesters and favourably assessed, it was revised to focus more directly on investigation and to set up and support the insights which tend to occur during the in-class discussion. To accomplish this, the CRAAP Test was pared down to remove inessential criteria and to refocus or rephrase the remaining criteria to encourage investigation. A criterion for metacognition was then added. The result is a new set of criteria to investigate: Credentials, Claims, Objectives, and Worldview, or CCOW, presented in a revised online guide and exercise titled "Have a CCOW," viewable at https://researchguides.gonzaga.edu/CCOW/start.
There are several differences between CRAAP and CCOW. The first lies in presentation. In CRAAP, criteria are usually presented as overarching topics with appropriate questions to ask listed beneath each. Under the topic "Currency," one might find the question, "When was the information published or posted?". Under the topic "Accuracy," one might find, "Is the information supported by evidence?". It is up to the instructor to unpack these questions and help the students learn the techniques they will need to find the answers. By contrast, the CCOW flipped instruction module presents each criterion as an object of active investigation, and lists not merely questions to ask, but things to do in the investigative process. The first CCOW criterion, Credentials, maps directly to CRAAP's Authority criterion, and asks students to investigate whether the creator of the information is qualified to speak knowledgeably about the topic. Students are encouraged to Google the author or organization to verify their credentials and background, and also to check if anyone else online is pointing out problems with their reliability, which is often the case for vocal spreaders of misinformation.
The second CCOW criterion, Claims, maps to CRAAP's Accuracy criterion, but explicitly tasks the students with finding multiple sources and putting them in conversation:

But here we run into a problem: how do you evaluate whether the claims are good, when you're not an expert in the subject yourself?

Easy: find experts. And not just one. In this glorious internet age, you can summon many experts in mere moments. Use your Google-fu to assemble other sites and authors talking about the same subject. Then use your credentials evaluating skills (see the previous section!) to dismiss the self-claimed "experts" who don't have real expertise in the subject. Finally, put the remaining, true experts into conversation with each other. Look for a consensus, if one exists, and seek to understand the reason behind that consensus. (Tardiff, 2020)

The third CCOW criterion, Objectives, maps to CRAAP's Purpose criterion, and asks the students to consider why the information was created: to inform, to convince, to sell? CRAAP's Relevance criterion was removed. It asked students if the information fit their current information need; a useful skill in IL, but not directly related to evaluating the source. This criterion is likely a holdover from library acquisitions checklists: "Does this book fit our collection need?".
CRAAP's Currency criterion was also removed, and some of its considerations, such as the information timeline, were rolled into Claims. Currency's impact on a source's reliability is contextual, and making currency a top-level consideration can distract students from more consistently useful evaluation criteria.

Worldview
In the place of the two removed criteria is a new criterion: Worldview, giving metacognition an explicit place in the acronym and reminding students to consider their internal response to the information as they evaluate it.
Worldview as a concept appears in multiple disciplines, each with its own shading. Richard DeWitt (2004) applies the term to the history and philosophy of science, tracing the changes from an Aristotelian outlook to a Newtonian one and beyond. James Sire (2009) uses the concept to build an understanding of one's own and others' religious or non-religious beliefs. The term appears in philosophy to indicate systems of thought, and in sociology to denote beliefs shared across a culture. What is common to all these uses is that a worldview encompasses both a system of interconnected beliefs and an interpretive stance from which the individual evaluates ideas and information. The American Heritage Dictionary's succinct definition expresses it well: a worldview is 'the overall perspective from which one sees and interprets the world' (2018).
Worldview is a more neutral and nuanced way of exploring the idea of bias. Students are familiar with bias; however, bias has a negative connotation, and this can lead students to take a defeatist approach to information when they experience disagreement around certain questions or topics. Students usually understand that it is bad to be biased, but also that everyone is biased, including themselves. Rather than prompting them to a careful consideration of their bias, this understanding can lead them to shrug and express the thought that since everyone is biased, knowing what is true is probably impossible, if there is even a truth to be found in the first place. Worldview has no such negative connotation. Everyone has a worldview, an interpretive stance, a lens through which they view reality. It is not bad; it is natural and expected. And it is valuable to explore and gain understanding both of our own worldview and of the worldviews of others. As the online CCOW guide explains:

Behind every piece of information is a person, and deep within every person is their worldview. Remembering this can be helpful when we evaluate information, because it can help us to understand where the source is coming from. Instead of dismissing sources that disagree with us out of hand, we can ask ourselves, "Why do they see things that way?" It doesn't necessarily mean they are bad, or dishonest, or deluded. It may just mean they are looking at the information through a different lens.
Understanding this can help us to have a conversation, a discussion, rather than a personal argument. It can help us to treat people we disagree with as people, rather than as automatically evil representatives of the wrong point of view […] It is important to not only consider the worldview of the source of information, but also to be conscious of our own worldview. At its core, our worldview consists of what we believe to be real and what we believe to be important. This influences how we interact with information. Why are some ideas pleasing to us, and others frightening? When we feel that an idea we were just exposed to must be right, or just has to be wrong, is that because we've looked at the question carefully, or because it appeals to, or threatens, the picture we already have of how things work? (Tardiff, 2020)

Pre-class exercise and in-class discussion
The online guide detailed above makes up the pre-class portion of a flipped instruction module aimed at first year students. Professors assign the guide to their students before class, and the following in-class session with a librarian is spent discussing the module and its concluding exercise.
The guide was created using the LibGuides platform, which allows for an attractive page layout with 'chunked' information distributed in boxes across each page, an approach which has been shown to aid comprehension (Fritch & Pitts, 2016). Each criterion is given its own page, with a final page for the concluding exercise. Conversational language is used throughout, and the text is broken up with images and memes illustrating the concepts. Students report that reading the guide and completing the exercise takes about half an hour.
The concluding exercise asks students to use the CCOW criteria to investigate three articles from different websites about the use of colloidal silver to treat a cold. The first site puts forward colloidal silver as a treatment for a variety of health conditions, the second site provides user reviews of a bottle of colloidal silver, and the final site is an informational page from the National Institutes of Health about colloidal silver. The use of three websites about a single topic gives students practice with, and demonstrates the utility of, the lateral reading urged in the guide, modelling for students the process of putting sites in conversation with each other to build the contextual understanding which enables informed evaluation. Student responses are captured via an embedded Google Form, allowing students to receive credit for completing the pre-class portion of the module, and ensuring that they will arrive at class prepared to take part in discussion.
During the class period students are asked to volunteer observations about each site. Credentials, Claims, and Objectives are always well-represented in the discussion, but the more abstract Worldview is rarely mentioned by the students. The discussion can be guided towards consideration of worldview via questions which help students to reflect on their response to information: "Why do you think you feel that way?" When prompted in this way, students often report that their prior beliefs, family background, or experiences influenced their inclination to trust or distrust the information on the websites.
In the final part of the in-class session, students view a video of a man who turned a vivid blue due to consuming large amounts of colloidal silver, a condition called argyria. The purpose of viewing the video is not merely to underline the dangers of trusting poor information (the first two sites in the exercise downplay or make no mention of argyria), but also to provide a vivid example of worldview in action. In the video, the man states that he is still drinking colloidal silver regularly (Inside Edition, 2019). The students are invited to reflect on why, given the obvious problem colloidal silver caused for him, he is unable to let go of his belief that drinking it is good for him. Far from looking down on the man for his attachment to a harmful idea, the students are encouraged to understand that we all have this human tendency to hold onto ideas in which we are invested and to distrust information which contradicts them. This is because a tightly held idea often is or has become a part of our worldview, of our picture of how reality works. Information which contradicts this idea then threatens not only the single idea, but our entire conception of reality. This causes cognitive dissonance, a feeling of intense discomfort that arises from simultaneously holding two contradictory ideas which cannot both be true. The usual method of dealing with this dissonance is to dismiss the new and threatening information. The alternative would be a change of worldview, which is a far more disruptive, often traumatic, shift to an entirely new interpretive stance, tantamount to a conversion experience (Calhoun & gonzagasocraticclub, 2015).
Students are then asked to consider reasons that a distrust of mainstream medicine might become a part of someone's worldview. Answers include genuine abuses in the medical and pharmaceutical industries, such as Mylan's EpiPen price hike, or Purdue Pharma's profit-driven promotion of OxyContin while downplaying its addictiveness. Given these and other systemic problems, it is understandable that someone might distrust the medical industry and retreat to other sources. But if they are self-aware, they can recognize their internal reaction and find a middle path that both acknowledges the problems in the medical industry and recognizes that its treatments are based on science, which is not usually the case in alternative medicine.
As the online guide explains:

[I]nformation is not a one-way street. It's a two-way street. We don't receive it in a vacuum, judging its merits with emotionless objectivity. We interact with every single piece of information we receive, whether we are conscious of doing so or not. So: In order to be effective investigators of information, we must also investigate ourselves.
When I feel attracted to a piece of information, or repulsed by it, I ask myself: why am I feeling that way? Is it because the information itself is good or bad, reliable or not? Or is it because I feel that it confirms something important to me, or that it threatens something I value?
Being aware of my reactions to information doesn't mean I need to change my mind! Maybe my response to this piece of information is justified. But maybe it's a little too extreme. Or maybe it's just wrong. I can't know unless I am willing to take the uncomfortable risk of examining myself. (Tardiff, 2020)

Assessment
After completing the CCOW instruction module, 72 students in multiple sections of a first year physics lab were asked to find three articles on a scientific topic, of which one article had to come from a popular science magazine and two could come from newspapers, websites, or journals. Students were tasked with evaluating each source using the CCOW criteria. A rubric was used to award up to five points for each criterion. Students received more points for demonstrating active investigation and detailing solid reasoning behind their evaluation. If one or two points were awarded, the evaluation was considered unsatisfactory; if three points were awarded, the evaluation met expectations; if four or five points were awarded, the evaluation exceeded expectations. The assessment results (see Table 1) demonstrated that the students were able to successfully evaluate sources using the CCOW criteria. Several other observations were made from this assessment of student work.
While the majority of students met or exceeded expectations in every criterion, the criterion in which the most active investigation occurred was Credentials. Students demonstrated a willingness to leave the source to research an author, and an acuity in identifying elements such as the author's degrees and experience and whether these elements qualified them to speak knowledgeably about the topic. By contrast, the criterion with the largest percentage of failures to meet expectations was Claims. Once students had successfully evaluated an author's credentials, they were more inclined to trust the claims of that author without extra investigation or triangulation. The majority of unsatisfactory scores in this criterion were awarded two points for successfully identifying claims but failing to provide a judgement about whether they were credible. These students took it for granted that if the author was good, the claims were good. This is not ideal; qualified authors can and do disagree. However, it is heartening that students are making their judgement about whether or not to trust the source's claims based on the author's expertise and not their own preconceived ideas. Only two students incorrectly evaluated the credentials of the author or authors, and one of them correctly identified that their reason for trusting a poor source was due to their prior worldview.
It is also heartening that 94% of students demonstrated metacognition by discussing both the source's worldview and their own, with 44% of students providing extra detail and thoughtful consideration about how their worldview interacted with the source's and how this disposed them to trust or distrust the source's information. Of the students who did not receive a satisfactory score for this criterion, the majority confused "worldview" with "opinion," and simply stated their opinion instead of analysing why they had that opinion, that is, what in their personal background and beliefs leads them to be attracted to or repelled by a piece of information. It will be emphasised to future classes that worldview and opinion are related but not equivalent.

Discussion and conclusion
At the time that the CCOW criteria were created, Mike Caulfield was using his original teaching method, "Four Moves and a Habit," which was designed to be taught across multiple instruction sessions. Since then, Caulfield has created the acronym SIFT, which may be a more convenient packaging of these concepts for shorter class sessions. Regardless of the acronym chosen, it is not enough to simply hand it to the students. Students must be encouraged to be active in the evaluative process: to leave the source, perform lateral reading, and gather contextual information. The SIFT acronym reminds students to take action, but it may be difficult for students to remember what the actions apply to (i.e. "What was I supposed to 'Find' again?"). By contrast, CCOW's acronym reminds the students of key elements to be investigated, but does not on its own encourage that investigation; it is therefore important to frame the CCOW criteria as four elements to actively investigate, to model for the students what that process of investigation looks like, and to give them the opportunity to practise that process. An acronym can be a convenient reminder of key concepts, but should not be allowed to become a crutch for the instructor, taking the place of deeper engagement and instruction.
Bull, MacMillan, and Head criticise both CRAAP and SIFT for treating the students, rather than the piece of information in question, as having agency. They also suggest an approach which does not seek 'a set of desired answers that we are hoping to coax out of the students: Which of these sources is valid? Who authored this source, and are they an expert?' but instead 'encourages students to engage and interact with their ideas and previous experiences around information agency, the socialness of the information, and how they evaluate non-academic sources' (Bull et al., 2021). While it is true that students should, and often do, recognise the information object as existing within a greater infrastructure which privileges the discoverability of certain information, this understanding must be part of a greater whole. Both CCOW and SIFT encourage students to take agency, to not accept only the information that comes to them but to actively seek out more and different information to build a more complete picture of the topic. For example, the CCOW exercise asks students to look at a page of product reviews, and students are quick to identify that there are many ways in which product reviews can be faked or gamed; they recognise how reviews work within the information infrastructure. However, students are far more trusting of a website created by a self-proclaimed doctor who does not hold a medical degree. The librarian instructor, on the other hand, possesses the contextual understanding to recognise the problematic nature of this author's expertise and how it impacts the information on the site. By asking leading questions surrounding the criteria "Credentials" and "Objectives," and modelling the investigative process by finding other information to put into conversation with the original source, the librarian can use their expertise to guide the students to a greater understanding than they entered the class session with.
Further, the consideration of worldview can prompt students to consider how information acts upon them. The ACRL Framework points out that metacognition is crucial to IL (ACRL, 2015), and worldview is an effective and non-threatening conceptual foundation for building this metacognition; it is the most crucial part of CCOW. The current politicization and distrust of COVID-19 mitigation measures such as masks and vaccines provides a striking example of why self-awareness is important to deciding which sources of information to trust. The question, "Why do I feel what I feel about this piece of information?" must be explicitly asked if the information is to be evaluated accurately and common pitfalls such as in-group bias and confirmation bias are to be avoided. It is also in considering worldview that students will consider their own place in the information ecosystem, understanding themselves not only as passive consumers of information, but as active participants in the structures through which information flows, able to act for good or for ill.
Since worldview is a relatively abstract idea, it can be challenging to help students consider it. Students are unlikely to bring worldview up in the in-class discussion without prompting. They can be encouraged to examine worldview via open questions ("Why do you think you feel that way?") and specific examples ("Some people distrust information coming from the National Institutes of Health. What might be part of their worldview to explain that distrust?"). Students may feel threatened by the implication that they will be expected to change their minds if they consider their own worldview; it helps to stress that being self-aware does not mean they have to change their minds, but merely ensures they have a deeper understanding to draw on when making decisions about information.
The biggest challenge in teaching CCOW originates from its intended use case of being taught via a single 50-minute session. While suitability to the one-shot ensures wide reach, as CCOW can be easily integrated into classes in which the content professor requests a single "library day", 50 minutes is always going to be too short a time to do justice to the threshold concepts under consideration. Only rarely is there any follow-up allowing the librarian to underline or assess the instruction. The assessment results detailed in this article nonetheless indicate that the session is an effective introduction to these important concepts.
Whether the method chosen is CCOW, SIFT, a more active and metacognitive framing of CRAAP, or another method entirely, it is crucial that librarians and other educators adjust IL instruction to meet the challenge of the internet and the common behaviours of students online. This paper provides one possible approach, and it is hoped that it will spark further ideas, refinements, and approaches in response. The "Have a CCOW" pre-class guide and exercise are available under a Creative Commons licence which allows sharing, remixing, and adaptation.