LATERAL READING:

READING LESS AND LEARNING MORE WHEN EVALUATING DIGITAL INFORMATION

Sam Wineburg & Sarah McGrew
Working Paper No. 2017.A1/Stanford History Education Group

sheg.stanford.edu
September 2017

©Sam Wineburg & Sarah McGrew 2017. All rights reserved. Short sections of text, not to exceed 200 words, may be quoted without explicit permission provided that full credit is given to the source.
 
 


Abstract1

The Internet has democratized access to information but in so doing has opened the floodgates to misinformation, fake news, and rank propaganda masquerading as dispassionate analysis. To investigate how people determine the credibility of digital information, we sampled 45 individuals: 10 Ph.D. historians, 10 professional fact checkers, and 25 Stanford University undergraduates. We observed them as they evaluated live websites and searched for information on social and political issues. Historians and students often fell victim to easily manipulated features of websites, such as official-looking logos and domain names. They read vertically, staying within a website to evaluate its reliability. In contrast, fact checkers read laterally, leaving a site after a quick scan and opening up new browser tabs in order to judge the credibility of the original site. Compared to the other groups, fact checkers arrived at more warranted conclusions in a fraction of the time. We contrast insights gleaned from the fact checkers’ practices with common approaches to teaching web credibility.

Keywords: digital literacy, media literacy, expertise, web credibility

1 The authors thank Joel Breakstone, Teresa Ortega, Mark Smith, Mary E. Ryan, Mike Caulfield, and Susan Monas for comments on a previous draft. Any errors that remain are the responsibility of the authors. This research was supported by a grant from the Spencer Foundation, but no endorsement is intended. Corresponding author: Sam Wineburg, Margaret Jacks Professor of Education and of History (by courtesy), wineburg@stanford.edu


In October 2010 the Washington Post broke a story about a fourth-grade history textbook, Our Virginia: Past and Present, which claimed that thousands of African Americans fought for the Confederacy, “including two black battalions under the command of Stonewall Jackson” (Sieff, 2010). Given that Jackson died from friendly fire on May 10, 1863, these “Black Confederates” had to be taking up arms at the height of the Civil War, a time when the Union Army was still debating the recruitment of African American soldiers.

There’s one problem with this claim—no evidence supports it. The only Confederate document that addresses drafting Black soldiers is General Orders No. 14, a last-ditch effort to stall a Union victory issued seventeen days before Lee’s surrender at Appomattox on April 9, 1865. With almost all hope lost, the proposal was still so controversial that the Confederate leadership felt compelled to issue a disclaimer: “Nothing in this act shall be construed to authorize a change in the relation which the said slaves shall bear toward their owners.”1

How, then, did the fraudulent claim that thousands of African Americans took up arms for the Confederacy find its way into materials for school children?

When queried about her sources, author Joy Masoff explained to the Washington Post that she conducted her research . . . on the Internet. Among the sources she consulted was the website of the Sons of Confederate Veterans: “A patriotic, historical and educational organization, founded in 1896, dedicated to honoring the sacrifices of the Confederate soldier and sailor and to preserving Southern Culture” (Sons of Confederate Veterans, 1997).

Some might claim that Joy Masoff, a “digital immigrant” (Prensky, 2001), was out of her league—that today’s students, glued to screens almost since birth, would not have succumbed to such ruses. However, when the prowess of digital natives has been put to the test, it has proven illusory time and again (Bennett, 2012; Gasser, Cortesi, Malik, & Lee, 2012; Helsper & Eynon, 2009). Students, it turns out, struggle with nearly every aspect of gathering and evaluating information online. After studying how college students used academic databases, Asher and Duke (2011) summarized, “the majority of students…exhibited significant difficulties that ranged across nearly every aspect of the search process” (p. 73). They quickly abandoned searches when they did not return the desired results, relied only on the first page of results, and based their judgments of credibility primarily on an article’s title and abstract.

In one of the most extensive think-aloud studies to date, Hargittai, Fullerton, Menchen-Trevino, and Thomas (2010) observed over a hundred college students as they searched online. Screen and audio recordings of the sessions produced a trove of data: over 80 hours of tape and 770 pages of transcribed interviews. Students overwhelmingly ceded to Google the responsibility for determining the credibility of information—the higher it ranked in Google’s results, the more reliable they considered the site to be. Another study found that undergraduates ignored the valuable information contained in Google’s snippets (the few sentences accompanying each result), clicking instead on websites in higher positions even when they were “less relevant to the task” (Pan, Hembrooke, & Joachims, 2007, p. 816).

Wiley et al. (2009) found that college students rarely considered where information came from when evaluating reliability, a finding replicated across a range of studies with students of different ages and in different countries (e.g., Barzilai & Zohar, 2012; List, Grossnickle, & Alexander, 2016; Walraven, Brand-Gruwel, & Boshuizen, 2009). Young people are more likely to judge a website based on its relevance to their searching needs (Iding, Crosby, Auernheimer, & Klemm, 2009; Julien & Barker, 2009; Walraven et al., 2009), its appearance, or how easy it is to navigate (Agosto, 2002; Barzilai & Zohar, 2012).

These studies have focused on typical users; studies of what skilled users do are less common. Lucassen and Schraagen (2011) studied people active on a car enthusiasts’ forum as a proxy for expert knowledge about car engines. Unsurprisingly, people who knew more about cars were better able to detect errors in Wikipedia than those who knew less. Similarly, a group of Dutch researchers compared psychology students and psychology faculty as they selected online sources on psychological topics; faculty spent more time scanning search results while students made more superficial evaluations (Brand-Gruwel, Kammerer, van Meeuwen, & van Gog, 2017). In another study, researchers designated a group of graduate students in educational technology as “experts” and compared their online research processes with those of university freshmen (“novices”) (Brand-Gruwel, Wopereis, & Vermetten, 2005). But the authors provided few clues about how experts went about selecting and evaluating information.
 

The present study set out to understand in greater detail what experts do when judging information online. Before we could tackle this issue, though, we needed to figure out who qualifies as an expert.

We turned to a group of professionals who evaluate sources for a living: historians. Ample research has established how historians source documents, interrogating a document’s author and the circumstances of its creation as keys to determining its trustworthiness (Wineburg, 1991a, 1998; Leinhardt & Young, 1996; Shanahan & Shanahan, 2008). Shanahan, Shanahan, and Misischia (2011) found wide variations in sourcing among academics from different fields. While mathematicians explicitly ignored the author of a paper, as it “would only be a distraction and could help in no way with the process of making sense of the text,” historians engaged in “extensive sourcing,” speculating about “who the author was and what he or she represented” (2011, pp. 408-409).

Despite the growth of digital history, the majority of historians still conduct their research in archives of print documents. We thus set out to study a second group whose work is largely done on a computer screen: fact checkers, whose job it is to ascertain truth in digital form. These professionals are charged with evaluating claims and evidence, and spend much of their time vetting digital information.

Finally, we recruited a third group: undergraduates at Stanford University. In 2016, Stanford rejected 95% of its applicants, making it the most competitive university in the United States. Nearly all admitted students were in the top 10% of their high school classes and scored above the 90th percentile on the SAT (Stanford University, 2015). These young people attend a university in the heart of Silicon Valley, where technology startups sprout within campus labs and where computer science is the most popular major (Stanford University, 2017). These students are not garden-variety “digital natives,” but drawn from the upper tail of the ability distribution and earmarked—at least according to Stanford University brochures—to lead the digital future.

Method

Participants

Historians. Ten historians were recruited; all held a Ph.D. in history and were faculty at four-year colleges and universities in either California or Washington state. Six were male; four were female. Their ages ranged from 39 to 69 (M = 47).


Fact Checkers. The fact checkers were all employed at well-regarded news and political fact-checking organizations. Eight were located in New York City or Washington, DC; two were based on the West Coast. As with the historians, six were male and four female. Ages ranged from 23 to 60 (M = 34). Two participants held master’s degrees while one held a Ph.D.; the rest had bachelor’s degrees.

College Students. Students were recruited using fliers posted on campus. Each received a $25 Amazon gift card for participating. All students were enrolled in the second or third quarter of their first year and were between the ages of 18 and 19; 11 identified as male, 13 as female, and one as non-binary. Every student reported spending at least four hours online each day.

Protocol

We developed a set of six online tasks that took approximately 45 minutes to complete. Our focus was on evaluating digital sources that addressed social and political issues. Space limitations require that we narrow our discussion here to three of the main tasks participants completed (see Table 1).2

Table 1
Main Web Evaluations

Topic: Bullying in schools
URLs: https://www.acpeds.org/the-college-speaks/position-statements/societal-issues/bullying-at-school-never-acceptable and https://www.aap.org/en-us/about-the-aap/aap-press-room/pages/Stigma-At-the-Root-of-Ostracism-and-Bullying.aspx
Processes elicited: Evaluations internal and external to a site; comparing sites
Participants could: Scroll, click on links, and leave the site to access any information online
Time limit: 10 minutes

Topic: Minimum wage policy
URL: https://www.minimumwage.com/2014/10/denmarks-dollar-forty-one-menu/
Processes elicited: Evaluations internal and external to a site
Participants could: Scroll, click on links, and leave the site to access any information online
Time limit: 5 minutes

Topic: Teacher tenure: Funding for plaintiffs in Vergara v. California
Processes elicited: Open web search to find out who paid for the $1.2 million legal fees
Participants could: Access any information online
Time limit: 5 minutes

Procedure

Sessions with historians and fact checkers were conducted by the authors; sessions with students were conducted by one of the authors and other members of the research team. Participants were asked to complete a series of web-based tasks on a 13-inch MacBook Air. Websites were live and participants were able to search the Internet as they normally do—clicking on links, opening new tabs, and leaving a site to search elsewhere. Participants were encouraged to do what they normally would when evaluating information and determining its trustworthiness. Additionally, they were asked to verbalize their thoughts as they worked through the tasks (Ericsson & Simon, 1993; Pressley & Afflerbach, 1995).3

We used a variety of prompts to encourage natural behavior, including: “You can open up new tabs—do whatever you normally would to learn about a site” and “We’re interested in your take. You can stay on the page or go out to another website, anything you would normally do.” We repeated these instructions at the beginning of each task. We also noted the time limit for each task and gave participants a one-minute warning before time was up. We set time limits because the amount of time that people are willing to devote to a website is generally quite short—seconds instead of minutes (Haile, 2014; Nielsen, 2011). Researchers at Microsoft found that “dwell time” on websites was “no more than 70 seconds on 80% of the 205,873 pages” that users visited (Liu, White, & Dumais, 2010, p. 382). Efficient search and evaluation strategies are essential to anyone trying to manage the deluge of information that comes across one’s screen.

QuickTime Player version 10 was used to record audio and to capture video of the computer screen. We also used an iPhone 6 to video-record each session in case parts of the QuickTime audio files were muffled.

Data Analysis

We developed rubrics to rate the quality of participants’ conclusions for each task. These rubrics were developed after extensive pilot testing with Ph.D. graduate students and university professors (we describe these rubrics in greater detail in subsequent sections that describe each task).

Two coders (the second author and a research assistant who did not participate in the creation of the rubrics) tested for interrater reliability. We conducted reliability tests on about a quarter of the data, achieving an interrater agreement of 92% across the three tasks (Cohen’s Kappa = 0.90).
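For readers who want to run an analogous reliability check on their own coding data, the following is a minimal sketch in Python; it is not the analysis code used in this study, and the score lists are hypothetical stand-ins for two coders’ rubric ratings.

```python
# Minimal sketch of an interrater reliability check: percent agreement
# plus Cohen's Kappa. The ratings below are hypothetical placeholders;
# substitute the two coders' actual rubric scores.
from sklearn.metrics import cohen_kappa_score

coder_1 = [2, 2, 1, 0, 2, 1, 2, 0, 1, 2]  # first coder's scores (0-2 rubric)
coder_2 = [2, 2, 1, 0, 2, 1, 2, 1, 1, 2]  # second coder's scores (0-2 rubric)

# Percent agreement: proportion of items given identical scores.
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)

# Cohen's Kappa adjusts raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_1, coder_2)

print(f"Percent agreement: {agreement:.0%}")
print(f"Cohen's Kappa: {kappa:.2f}")
```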

Additional analyses varied by task. These included tracking the time participants took to settle on a conclusion; whether they stayed on or left a site and, if they left, which other sites they visited; and whether they took steps to find out more about the individuals or groups behind the sites they consulted.

Results

Task 1: Bullying


Participants evaluated articles about bullying on the websites of the American Academy of Pediatrics (“the Academy”) and the American College of Pediatricians (“the College”). Despite the similarity in names, the two organizations couldn’t be more different. The Academy, established in 1930, is the largest professional organization of pediatricians in the world, with 64,000 members and a paid staff of 450. The Academy publishes Pediatrics, the field’s flagship journal, and offers continuing education on everything from Sudden Infant Death Syndrome to the importance of wearing bicycle helmets during adolescence.

By comparison, the College is a splinter group that in 2002 broke from its parent organization over the issue of adoption by same-sex couples. It is estimated to have between 200 and 500 members and one full-time employee, and it publishes no journal (Throckmorton, 2011). The group has come under withering criticism for its virulently anti-gay stance, its advocacy of “reparative therapy” (currently outlawed for minors in nine U.S. states), and incendiary posts (one advocates adding P for pedophile to the acronym LGBT, since pedophilia is “intrinsically woven into their agenda”) (American College of Pediatricians, 2015). The Southern Poverty Law Center has labeled the College a hate group that is “deceptively named” and acts to “vilify gay people” (Lenz, 2012; Southern Poverty Law Center, 2016). The College’s portrayal of research findings on LGBT youth has provoked the ire of the nation’s leading scientists, including Francis Collins, director of the National Institutes of Health, who wrote that “the American College of Pediatricians pulled language out of context from a book I wrote . . . to support an ideology that can cause unnecessary anguish and encourage prejudice. The information they present is misleading and incorrect” (as cited in Bradshaw, Weight, & Packard, March 3, 2011).4

A quick glance at the College’s site might lead one to conclude that it is a politically neutral medical organization (Turban, 2017). The website bears an official-looking logo and the motto “Best for Children.” An anodyne “About Us” page informs the reader that the College “produce[s] sound policy, based upon the best available research, to assist parents and to influence society in the endeavor of childrearing.” At the same time, the College does not mask its social positions. The “Mission of the College” states: “We recognize the basic father-mother family unit, within the context of marriage, to be the optimal setting for childhood development.” The College’s “Position Statements” are transparent on issues ranging from abortion (prematurely and unnecessarily ending a human life) to corporal punishment (effective under certain circumstances).

Participants began by evaluating an article on the College website entitled “Bullying at School: Never Acceptable,” where a section labeled “Prevention” advises schools to refrain from recognizing any students as particularly at risk of being bullied:

    By focusing a program upon the special characteristic or activity of one student or group, the school opens the floodgates for other programs promoted by its advocates, i.e. over issues involving religion, ethnicity, stature, intelligence, race, or even athletic abilities. By focusing anti-bullying programs, instead, on the topic of general respectfulness, the school…avoids the pitfalls of calling undue attention to a particular group or perhaps venturing into controversial teachings. (Trumbull, 2013)

Multiple studies have shown that students who identify as LGBT are more likely to be bullied than their heterosexual peers—over 80% of LGBT students were “verbally harassed” and over 40% were “physically harassed at school…because of their sexual orientation,” according to a study cited in the White House Conference on Bullying (Espelage, 2011, p. 65). Yet the College implies that programs to reduce bullying against LGBT students amount to “special treatment,” and that these programs may “validat[e] individuals displaying temporary behaviors or orientations” (Trumbull, 2013).

The website of the 64,000-member American Academy of Pediatrics bears a logo and trademarked motto as well. Resources and professional education opportunities for members are featured, including details on membership, the group’s history since its founding in 1930, and opportunities to browse books and journals that it publishes. Participants viewed an article on the Academy website entitled “Stigma: At the Root of Ostracism and Bullying.” The article describes a symposium in which six papers were presented, including “Discrimination and Stigmatization of Non-heterosexual Children and Youth.” Additional presentations focused on factors that might place youth at risk for bullying, such as weight, sexual orientation, race, and income (American Academy of Pediatrics, 2014).

Participants were given up to five minutes per site to evaluate the trustworthiness of each as a source of information about bullying. If they did not explicitly compare the two sites before the ten minutes were up, we asked: “If you had to say which website was more reliable and which was less reliable, what would you say?”

We developed a rubric to characterize the quality of the conclusions participants reached about the sites: we awarded two points for specific, correct, and warranted descriptions of the sites, one point for vague or indecisive evaluations, and zero points when participants reached wrong conclusions (such as equating both organizations in terms of trustworthiness).

For the College website, a Kruskal-Wallis nonparametric analysis of variance indicated significant differences in the conclusions participants reached: fact checkers had a perfect mean score of 2 (SD = 0); historians, 0.7 (SD = 0.95); and students, 0.16 (SD = 0.37) (H (2) corrected for ties = 27.5, p < .001). Follow-up Mann-Whitney U tests showed significant differences between fact checkers and historians (p = .003) and fact checkers and students (p < .001).
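As an illustration of this analytic approach (a sketch only, not the study’s actual code or data), the omnibus Kruskal-Wallis test and its Mann-Whitney U follow-ups can be computed in Python as follows; the score lists are hypothetical 0-2 rubric values.

```python
# Sketch of the group comparison: a Kruskal-Wallis omnibus test followed by
# pairwise Mann-Whitney U follow-ups. Scores are hypothetical rubric values,
# not the study's data; scipy's kruskal corrects for ties automatically.
from scipy.stats import kruskal, mannwhitneyu

fact_checkers = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
historians    = [2, 2, 1, 1, 1, 0, 0, 0, 0, 0]
students      = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

# Omnibus test: do the three groups differ?
h_stat, p_omnibus = kruskal(fact_checkers, historians, students)
print(f"Kruskal-Wallis H = {h_stat:.1f}, p = {p_omnibus:.4f}")

# Pairwise follow-up comparisons against the fact checkers.
for name, group in [("historians", historians), ("students", students)]:
    u_stat, p_pair = mannwhitneyu(fact_checkers, group, alternative="two-sided")
    print(f"fact checkers vs. {name}: U = {u_stat:.1f}, p = {p_pair:.4f}")
```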

There were also significant differences in the quality of conclusion scores for the Academy site (H (2) corrected for ties = 25.2; p < .001). Fact checkers again had a perfect score (M = 2, SD = 0), historians a 1.2 (SD = 0.79), and students a 0.4 (SD = 0.58). Follow-up Mann-Whitney U tests yielded significant differences between fact checkers and historians (p = .01), fact checkers and students (p < .001), and historians and students (p = .007).

There were striking differences in which site participants judged the most reliable. Every fact checker unreservedly viewed the Academy’s site as the more reliable; historians often equivocated, expressing the belief that both sites were reliable; and students overwhelmingly judged the College’s site the more reliable (see Figure 1).

Figure 1. Percentage of participants in each group selecting the College or the Academy as more reliable.

Taking Bearings. Fact checkers’ success was closely tied to what we think of as taking bearings, a concept borrowed from the world of navigation. Exploring an unfamiliar forest, experienced hikers know how easy it is to lose their way. Only foolhardy hikers trust their instincts and go traipsing off. Instead they rotate their compass’s bezel to determine bearings—the angle, measured in degrees, between North and their desired destination. Obviously, taking bearings on the web is not as precise as measuring an angle in degrees. It begins, however, with a similar premise: When navigating unfamiliar terrain, first gain a sense of direction.

Checker C’s approach exemplified the advantages of taking bearings. He spent a mere eight seconds on the College’s landing page before going elsewhere. “The first thing I would do is see if I can find anything on the organization,” he said as he typed the organization’s name into Google. He clicked on Wikipedia’s entry about the College and read that it is a “socially conservative association of pediatricians…founded in 2002…as a protest against the [American Academy’s] support for adoption by gay couples.” Wikipedia’s entry linked to sources including a Boston Globe story (“Beliefs drive research agenda of new think tanks,” Kranish, 2005), a report from the Southern Poverty Law Center (“American College of Pediatricians Defames Gays and Lesbians in the Name of Protecting Children,” Lenz, 2012), and a brief from the American Civil Liberties Union (“Misinformation from Doctors . . . Out to Hurt Students?,” Coleman, 2010).

It was a full minute and twenty seconds before Checker C returned to the College’s article on bullying. Reading the abstract that he had glanced at in the task’s opening seconds (see Figure 2), he paused at the phrase “no group should be singled out,” and remarked that this is “often code for, you know, kids who are more likely to be bullied—students of color or gay or queer children,” adding, “That’s the kind of thing that I never would have known if I had just looked at [the article on bullying].”

Figure 2. Abstract of “Bullying at School: Never Acceptable” (emphasis added).


Rendered in under two minutes, Checker C’s conclusion was an accurate evaluation not only of the bullying article but also of the rest of the College’s website, which presents an anti-gay stance throughout.5 Overall, fact checkers left the landing page of the College in about half a minute (M = 32 s, SD = 29 s). In contrast, historians took almost three times as long (M = 88 s, SD = 103 s); eight of the 10 left the landing page, two did not. The 16 students who left the landing page (nine never did) took an average of 100 seconds (SD = 52 s).

Fact checkers’ comments as they left the landing page (see Table 2) showed an immediate impulse to take bearings. They understood the web as a maze filled with trap doors and blind alleys, where things are not always what they seem. Their stance toward the unfamiliar was cautious: while things may be as they seem, in the words of Checker D, “I always want to make sure.”

Table 2
Examples of Fact Checkers’ Comments Upon Leaving the Landing Page

Checker A: “I immediately want to know more about [the College]. So I’m going to go to About Us.”

Checker D: “My first move to figure out whether something is reliable is to click on the About Us page. . . . At face, the American College of Pediatricians sounds pretty formal, but I always want to make sure.”

Checker E: “I want to learn a lot more about the American College of Pediatricians.”

Checker H: “It’s kind of hard to tell how mainstream this organization is, so I might open another tab just to read a little bit more about, if this is the main American pediatricians’ professional organization or if this is a splinter group for some reason.”

Historians’ Reading. Two of ten historians resembled fact checkers in how they took bearings. Leaving the landing page after a 20-second glance, Historian H opened the site’s “Resources” tab and clicked on the link to focusonthefamily.com to confirm that it was in fact the organization founded by evangelist Dr. James Dobson. He returned to the College’s “Resources” page, but this time with a hypothesis: “They probably have an agenda to quote, cure, unquote homosexuality, which is another fundamentalist point of view.” Historian S also left the College’s site in less than half a minute. Googling the organization’s name, he clicked on a Breitbart headline, “American College of Pediatricians On Same-Sex Marriage Ruling: A Tragic Day for America’s Children.” He concluded that the College is “a heavily ideological site.”

Historians H and S were the exceptions. Asked whether the website of the splinter group or the 64,000-member Academy was the more trustworthy site, five of their colleagues equivocated. Seven of the historians never took bearings; one did so only after analyzing the bullying article for four minutes. After ten minutes of review, most scholars had learned virtually nothing about the respective agendas of the two pediatrics organizations.

Historians were often taken in by the College’s name and logo; its .org domain; its layout and aesthetics; and its “scientific” appearance, complete with abstract, references, and articles authored by medical doctors. Reading the “Bullying at School” article, Historian M commented on the presence of a scientific abstract and references, compared the site to WebMD, and noted that it was signed by a doctor (true, but it was not something she verified, since she never left the landing page). She concluded:

    I think I would probably find this pretty reliable on the basis that it’s written by an expert, it’s citing expert opinions, it’s been reviewed by at least some people from the College of Pediatricians, so it agrees with an expert opinion. But it is still nonetheless still an opinion piece, it’s just an opinion piece that I agree with, and…reflects the opinion of a group that I want to know the opinion of.

There was no basis for Historian M’s far-reaching conclusions other than the surface features of the site, its presentation of information, and the M.D. listed after the author’s name.

One feature played a key role in shaping historians’ judgment: the presence of references at the bottom of the College’s entry. Seven of 10 historians explicitly commented on them (see Table 3), viewing citations to Pediatrics and the Journal of Criminology, among others, as conferring legitimacy on the article’s content.

Table 3
Historians’ Comments About References

Historian A: “It has references to kind of standard scientific literature, of backing up some of its claims so it has a kind of authoritative tone to it.”

Historian B: “I would look at the references and see who the [author] is citing.”

Historian E: “These are all references to professional journals so that definitely reinforces my sense that it’s a genuine site and that the information found here can be trusted.”

Historian I: “I am looking at some of the footnotes and they all seem like perfectly credible sources. . . . I can trust this site.”

Historian K: “Who are they actually citing? So Pediatrics, okay, so they’re citing real journals so I trust them a little bit more. . . . So the citations suggest that it has some reputable characteristics.”

Historian L: “I like to look at the sources to see where they are getting things. These are all academic journals as opposed to random Google News, which you never know about.”

Historian N: “I am looking at the references now and to what extent they’re linked up to journals that strike me as peer-reviewed journals and have some kind of credibility. So, they all seem to come from something that strikes me—I don’t know, Pediatrics—but I assume it seems to be in some kind of academic form.”

Note: Not all references were to scientific articles. Among the 10 references, one was to Free Dictionary, two to Yahoo News blogs, one to Alliance Defense Fund, and the rest to refereed journal articles.

Students’ Reading. By the end of ten minutes, only three of the 25 students had successfully distinguished between the stances of the College and the Academy. Fully 60% of students chose the College as the more reliable site. Even the five who favored the Academy learned little about the vast differences between the two organizations.

Few students had the sense or inclination to take bearings when landing on an unfamiliar site. Nine of the 25 never left the original site; those who did tended to click on links that spoke to a personal interest rather than conduct a search designed to find out more about the organization behind the website. Student 19, who planned to major in either ancient Greek or bioengineering, based her evaluation almost exclusively on features like the organization’s name (“sounds pretty legitimate”); the site’s layout, which included bullet points (“nice to understand quickly”) and section headings (“that’s really smart”); and the absence of banner ads (“makes you focus on the article”). Largely on the basis of graphic design, she concluded that the College’s page was the more reliable of the two: “What struck me was how [the College’s site] was laid out.” Student 19’s approach was representative of how the majority of students conducted their evaluations (see Table 4).

Table 4
Students’ Comments About Why They Trusted the College’s Webpage

Scientific presentation (abstract, references, authored by a medical doctor):
“This seems like it’ll be pretty promising. There’s an abstract, so I feel like this is like a research thing.” (Student 12)
“So now I see an abstract, which makes me think that this is a very research-based paper. . . . This seems like a very scientific article, because everything is in list form and very specific. The diction and the language is pretty scientific in general. I like that they are citing their sources with links and stuff.” (Student 15)
“It’s written by a doctor. . . . There’re references. Seems like a legitimate article.” (Student 20)

Usefulness (amount of information, clarity and accessibility of article):
“It has a very clear title on what its view of bullying is. . . . I really like how it’s laid out with the little headings to easily find what you need, and bullet points are always easier to look through also. And the references are really useful if I were to be doing research project, because then I could just look at these references afterwards. Yeah, I think this would be a useful site. It does seem like they have a lot of information.” (Student 13)
“If I were writing a paper…then I would choose [the College] over [the Academy] simply because this just provides more information relevant to the topic.” (Student 6)
Answering which is more reliable, after looking at both sites: “The [College article] because that actually gave me more information about bullying.” (Student 11)

Graphic design (pleasant layout, color scheme, lack of advertisements):
“They seemed equally reliable to me. I enjoyed the interface of the [College website] better. But they seemed equally reliable. They’re both from academies or institutions that deal with this stuff every day.” (Student 5)
“Nice how there’s not really any advertisements on this site. Makes it seem much more legitimate.” (Student 19)

Organization’s apparent authority (name, logo, URL):
“I can automatically see this source and trust it just because of how official it looks—American College of Pediatricians, even the font and the way the logo looks makes me think this is a mind hive that compiled this.” (Student 7)
First statement on arriving at the site: “American College of Pediatricians. Seems like a credible website, run by pediatricians.” (Student 16)
First statement on arriving at the site: “.org. So this looks like it might have been subsidized by a government agency.” (Student 18)

Three of the 25 students selected the Academy as more trustworthy because they learned something about, and rejected, the College’s ideological stance. Two of the three stumbled upon information that provided insight into the College’s views, but did not deliberately seek it out. Only one student in 25 took bearings in a way that could be compared to the fact checkers’ approach. Even then, the student spent nearly four minutes reading “Bullying at School: Never Acceptable” before leaving the site.

Task 2: Minimum Wage

Participants evaluated an article entitled “Denmark’s Dollar Forty-One Menu” on the website minimumwage.com (see Figure 3). The article argues that if the U.S. followed the example of Denmark and raised wages, it would face higher food prices and diminished job opportunities. The article links to stories in the New York Times and Columbia Journalism Review, while the website includes tabs for research reports and news stories. Its “About” page says it is a project of the Employment Policies Institute (EPI), a group described as a “nonprofit research organization . . . [that] sponsors nonpartisan research which is conducted by independent economists at major universities.”

Figure 3. “Denmark’s Dollar Forty-One Menu” on minimumwage.com.

Despite their nonpartisan declarations, minimumwage.com and the Employment Policies Institute are the products of Berman and Company, a Washington, DC-based public relations firm that lobbies on behalf of the restaurant and hotel industries. Berman’s specialty, in the words of the New York Times, is to create “official-sounding nonprofit groups to disseminate information on behalf of corporate clients” (Lipton, 2014). None of this information, however, is available on minimumwage.com or the Employment Policies Institute website. A 2013 Salon article characterized the tactics of Berman and Company with the headline, “Industry P.R. Firm Poses as Think Tank” (Graves, 2013).

Participants were given up to five minutes to evaluate minimumwage.com. They could use any Internet resources (including leaving the site) to help them; we repeated the instructions to do what “they would normally do” when landing on an unfamiliar site. Participants who had not reached the Employment Policies Institute website after five minutes were given this prompt: “Minimumwage.com is paid for by another person or organization. Spend up to three minutes to figure out who is behind this site.”

We used the following rubric to rate participants’ responses:

Score 0: Evaluates minimumwage.com based on surface features; does not identify connection to the Employment Policies Institute.
Score 1: Determines that the Employment Policies Institute sponsors minimumwage.com, but learns nothing about the Employment Policies Institute.
Score 2: Determines that the Employment Policies Institute sponsors minimumwage.com; describes the Employment Policies Institute as a non-profit and non-partisan think tank or research organization.
Score 3: Determines that the Employment Policies Institute sponsors minimumwage.com; describes the Employment Policies Institute as an advocacy organization or raises substantial questions/concerns about its trustworthiness.
Score 4: Determines that the Employment Policies Institute sponsors minimumwage.com and that it is a front site created by Berman and Company, a public relations firm.

There were dramatic differences in what fact checkers, historians, and students learned during the task’s eight minutes. Before prompting, fact checkers’ conclusions averaged 3.3 (SD = 0.82) on the 5-point scale, versus historians’ average of 1.3 (SD = 1.4) and students’ 0.52 (SD = 1.16). A Kruskal-Wallis test showed significance (H (2) corrected for ties = 21.4, p < .001); follow-up Mann-Whitney U tests showed differences between fact checkers and historians (p = .003) and fact checkers and students (p < .001).

Without prompting, and in less than a minute, the fact checkers learned that EPI was minimumwage.com’s parent (see Figure 4; M = 51 s, SD = 43 s). Historians took nearly four times as long (M = 3 min, 40 s, SD = 2 min). Six of the 10 needed to be prompted to find EPI. Among the three groups, students took the longest to get to EPI: an average of 5 minutes and 18 seconds (SD = 1 min, 24 s); the overwhelming majority of students (four-fifths) needed prompting.

Every fact checker concluded that Richard Berman (or Berman and Company) sponsored EPI and minimumwage.com. Only six historians did so, and those who did took nearly twice as long as the checkers (Mcheckers = 3 min, 25 s, SD = 1 min, 42 s; Mhistorians = 6 min, SD = 2 min, 35 s). Only forty percent of students made it to Berman and Company; those that did took an average of nearly seven minutes (M = 6 min, 59 s, SD = 1 min, 51 s).

Figure 4. Average time for participants to determine Employment Policies Institute’s sponsorship of minimumwage.com; average time and percentage of each participant group to determine Richard Berman or Berman and Company’s sponsorship of both websites.

Reading Laterally. Fact checkers learned more about minimumwage.com and did so in less time than the others. They employed a powerful heuristic for taking bearings: lateral reading. Fact checkers almost immediately opened up a series of new tabs on the horizontal axis of their browsers before fully reading the article.

Checker A glanced at “Denmark’s Dollar Forty-One Menu” for six seconds before clicking on the page’s “About” tab, where she learned that the site was “a project of the Employment Policies Institute.” She used keyboard shortcuts (pressing the command key while clicking) to open the link to the Employment Policies Institute site in a new tab alongside minimumwage.com (see Figure 5). After just three seconds on EPI’s home page, she went to their “About Us,” scanned the bland description (“Founded in 1991, the Employment Policies Institute is a non-profit research organization dedicated to studying public policy issues”), and quipped, “This is profoundly not helpful.” In just over half a minute, she opened a new tab and Googled Employment Policies Institute.

Figure 5. Checker A’s lateral reading.

Scanning Google’s snippets, Checker A skipped the first four results and selected SourceWatch’s entry on EPI: “So this says it’s one of several front groups created by a PR firm.” She scrolled until she hit a linked quotation from a New York Times reporter who “detailed his visit to the EPI, saying, ‘I didn’t see any evidence at all that there was an Employment Policies Institute office.’” One minute and twenty-seven seconds into the task, she clicked on SourceWatch’s citation for this quote, which led to a National Public Radio story, “A Closer Look at How Corporations Influence Congress.” Rather than reading it, Checker A used Command-F to search for EPI and corroborate the claims made by SourceWatch. A little over two minutes into the task, she had EPI sized up:

    Obviously this isn’t a legitimate organization, based on the reporting of this New York Times reporter. He talks about actually going there, he doesn’t see any evidence at all that they actually had an office, there are no employees, all the staff there actually work for the PR firm.

Only then did she return to her original starting place, minimumwage.com, declaring, “[The New York Times reporter] is right. It’s a very legitimate looking website, but clearly, this is also advancing an agenda.”

With breakneck speed, Checker A deftly traversed a digital morass, ignoring massive amounts of material (she barely read the original article) to conclude that minimumwage.com and EPI were not what they seemed. Though slightly less efficient, the other checkers largely mirrored Checker A’s lateral approach. The average time they took to leave the starting page was just over half a minute (M = 37 s, SD = 41 s). None accepted EPI’s description at face value; instead, they read laterally, visiting an average of six sites before concluding that minimumwage.com and EPI were cloaked sites that represented corporate interests.

Historians’ Reading. Historians took longer, on average, to go from minimumwage.com to EPI than fact checkers took to conclude that both sites were the products of Berman and Company. Before prompting, only four of ten historians connected minimumwage.com to the Employment Policies Institute. As in the previous task, Historians H and S were the outliers. They left the landing page four times as fast as the others, averaging 26 seconds; their eight colleagues averaged 2 minutes, 5 seconds. Both were efficient lateral readers, wasting little time before opening additional tabs. Three of their colleagues, on the other hand, remained stuck on minimumwage.com for the entire task.

Even when some of the historians sought to read laterally—opening new tabs to research minimumwage.com or the Employment Policies Institute—they lacked essential searching skills. For example, a minute into the task, Historian K tried to learn more about minimumwage.com by opening a new tab to search for the name of the organization. But instead of putting the name of the organization in quotation marks and adding keywords like “funding” or “who is behind,” she typed [minimum wage.com] into the search bar, separating “minimum” from “wage” and adding no additional terms. The outcome was an entire page of results issued by the very organization she was trying to investigate. Sensing a dead end, she added [conservative?] to the search bar, which produced yet another page of fruitless results (see Figure 6).

Figure 6. Historian K’s search results for [minimum wage.com conservative?]

Stymied, the historian abandoned lateral reading and returned to the original “Denmark’s Dollar Forty-One Menu” page, no wiser than before. She clicked the page’s “Research” tab to engage in a more familiar task: “Let me see how I can interpret the legitimacy of their research.” Historian K was not alone: her colleagues fumbled such basic moves as putting terms in quotation marks so that Google could search for contiguous terms. Each of these historians was an astute reader, but reading skills alone weren’t enough to pull back the curtain from a cloaked website.

Students’ Reading. Students struggled to get to the bottom of minimumwage.com. They either spent too much time reading vertically, staying on the page and reading as they might a print document, or they engaged in fluttering, aimlessly moving across the screen, “touching or not touching pieces of information … unconscious to its value and without a plan” (Kirschner & van Merriënboer, 2013, p. 171). When five minutes were up and before being prompted, 80% of students had devoted no time to investigating who was behind minimumwage.com.

Although some students left the landing page quickly, their exit was a far cry from the strategy of taking bearings. Instead, they meandered to different parts of the site, making decisions about where to click based on aspects that struck their fancy. A prospective chemical engineering major quickly glanced at “Denmark’s Dollar Forty-One Menu” before scrolling to the bottom of the page and clicking on “In Your State,” an interactive map where users could click on different states and compare minimum wage rates and unemployment statistics. He spent two minutes playing with it, longer than he spent reading the initial article. Other students engaged in similar kinds of fluttering, clicking on features that piqued their curiosity rather than those that would justifiably inform their judgment about the trustworthiness of the site (see Table 5).

Table 5
Students’ Fluttering on Minimumwage.com

Link clicked: https://www.minimumwage.com/media/
Comment while clicking: “It’s interesting how the Media page is kept very minimalistic, and then you click on other things [clicking on ‘News Reports,’ which leads to an EPI page] and it brings you to different pages [clicks back to ‘Media’ page]. But I think it’s actually smart to keep that elsewhere just to organize it.” (Student 19)
Clicking sequence: Visited “Media” page after visiting the “Home,” “Myths,” “Research,” and “In Your State” pages.

Link clicked: https://www.minimumwage.com/research/
Comment while clicking: “I don’t really want to read their blog, and I’m not interested right now in what’s my state’s minimum wage and teen unemployment. . . . And videos and graphics are too time consuming.” (Student 3)
Clicking sequence: Explaining her reasoning for clicking on the “Research” page instead of the “Blog,” “In Your State,” or “Video and Graphics” pages.

Link clicked: https://www.minimumwage.com/news/
Comment while clicking: “I like the layout of the blog, I think it’s also just very clear and everything’s very cleanly laid out in a single column. Same with this [‘Research’] page. . . . Oh, and then here’s a description of the website. Um, this is a pretty cool page too.” (Student 12)
Clicking sequence: Clicked through several pages of the website, including “Home,” “In Your State,” “Blog,” “Research,” “About,” and “Myths.” On each page, she focused comments on the appearance and organization of the page.

Link clicked: https://www.minimumwage.com/media/
Comment while clicking: “Maybe this is an impartial website. Is there any such thing [clicks to ‘Videos and Graphics’ page] as an impartial website? I don’t know. [reading advertisements posted on site] ‘Unhappy New Year,’ ‘If 7 out of 10 doctors said you were sick, you would listen.’” (Student 1)
Clicking sequence: Clicked to “Media” and “Videos and Graphics” pages after viewing the “Home” and “In Your State” pages.

Task 3: Vergara v. California

In May 2012, lawyers in California filed a lawsuit on behalf of nine public school students, including one named Beatriz Vergara. They argued that the system of teacher tenure in California violated the state constitution by denying equal protection to students with ineffective teachers. In June 2014, a California Superior Court ruled in favor of the nine students. The case cost more than a million dollars to prosecute, a sum that typically exceeds the spending money of nine adolescents. In fact, the legal team was hired and financed by David Welch, a Silicon Valley entrepreneur who founded the organization Students Matter.

The press, however, often omitted this detail. What made for good copy was a David-versus-Goliath tale of adolescents taking on a powerful teachers’ union: nine students, mostly students of color, courageously confronting a rotten bureaucracy and demanding better teachers. A news item on the website of KABC, the Los Angeles ABC affiliate, reported that “The verdict is a win for nine students who sued the state saying that tenure policies have made it impossible for bad teachers to be fired” (“California Teacher Tenure,” 2014). It made no mention of Students Matter, David Welch, or any of the big money that backed the suit.

Unlike the two previous tasks, this one began with a paper stimulus: the 379-word article from KABC. We gave participants time to read the article before telling them that the nine students had a million-dollar legal bill. We then asked them to spend five minutes searching for who paid the tab. Participants needed to, as it were, “follow the money” by locating information that named Students Matter, and ultimately David Welch, as the main backer of the lawsuit.

Vergara was a politically charged case with far-reaching implications. Students Matter argued that the case was about getting rid of laws that were “handcuffing schools from doing what’s best for kids when it comes to teachers” (“Vergara v. California,” n.d.); the California Teachers Association painted it as a “lawsuit brought by wealthy corporate special interests looking to eradicate educators’ professional and due process rights” (“Vergara v. State of California,” n.d.). Given these conflicting claims and the number of bona fide news sources and partisan sites that were writing about the case, site selection and verification were essential. If participants could verify that Welch was the source of the plaintiffs’ funding across bona fide sources, they could be more certain that they had successfully navigated politically muddy waters to arrive at the correct answer.

The 25 Stanford students were the fastest to identify Welch as the source of funding (M = 1 min, 42 s, SD = 86 s). Fact checkers and historians were slower. Historians took 2 minutes, 1 second (SD = 56 s), and checkers averaged 2 minutes, 8 seconds (SD = 93 s). Although they were the slowest to reach their conclusions, fact checkers were the most selective when it came to the sites they visited, and took the most time to verify their answers.

We rated the quality of participants’ conclusions using a 5-point scale. Participants were given a 0 if they never identified Welch; a 1 if they identified Welch but did so only through a questionable source; a 2 if they identified and verified Welch’s role based on two or more questionable sources; a 3 if they identified Welch using a bona fide source; and a 4 if they identified and verified Welch’s role through at least one bona fide source and one additional source. (We defined bona fide sources as those with well-established credentials, such as the Los Angeles Times or the Wall Street Journal.)

Using our rubric, the fact checkers’ conclusions merited a 3.6 (SD = 0.70), versus historians’ 2.4 (SD = 1.3) and students’ 2.3 (SD = 1.5). Fifteen students scored a 0, 1, or 2, while all but one of the fact checkers’ responses scored a 3 or 4. A Kruskal-Wallis test showed significance (H (2) corrected for ties = 27.5, p < .001); follow-up Mann-Whitney U tests showed differences between fact checkers and students (p = .016).

The differences between the students’ and the fact checkers’ approaches can be seen by comparing Checker D with Student 17, a mathematical and computational science major. Both identified Welch in under a minute (34 seconds for the student, 50 seconds for the checker). The student spent just a few seconds on the results yielded by searching for [vergara v california]. He looked at the first result he came to (the Students Matter page), but quickly returned to the search results, reminding himself, “I’m looking for the ‘who paid.’” He selected vergaratrial.com, a partisan site created by the California Federation of Teachers, where he located Welch’s name. He never commented on the website’s political slant or on whether he found it trustworthy; he simply located Welch’s name and accepted it as fact.

Checker D initially searched for [vergara v california] before quickly adjusting it to [vergara v. california court records]. As she scrolled down the results, she said, “I’m coming up with a lot of different information. I’d rather click on some press reports.” She skipped the first three results, all of which were affiliated with Students Matter, along with vergaratrial.com and cacs.org (an organization she did not recognize), and instead opened articles from three news organizations and Wikipedia. Exhibiting what we call click restraint, she spent nearly 20 seconds scanning the results page and reading the snippets before clicking on any link. Although she opened four additional tabs (see Figure 7), her use of keyboard shortcuts meant that her eyes and focus never wavered from the results page.

Figure 7. Checker D’s search results showing the sites she opened.

Checker D went first to Wikipedia, where she skipped over most of the entry by using the “Contents” menu to navigate to “Litigants.” There, she read that “funding for the plaintiff school students was provided by David Welch, a Silicon Valley entrepreneur.” She then clicked on the Washington Post article she had opened in a different tab. She used the Command-F shortcut to search for Welch’s name and confirmed his role in the case.

Checker D took 16 seconds longer than Student 17 to find Welch’s name. However, she was more purposeful in the sites she opened, more discerning in the information she considered trustworthy, and more thorough in ascertaining that David Welch was indeed the money behind Vergara v. California.

Historians. Historians were only slightly better than students in the quality of their conclusions (Mhistorians = 2.4 versus Mstudents = 2.3). Although several historians excelled, quickly locating Welch’s name and verifying his role on trusted sites, two of them relied exclusively on partisan or questionable sources and made no attempt to verify their conclusions.

A third, Historian N, never made it to Welch. He searched for [Vergara v. California] and started with Wikipedia. Rather than using it to quickly locate Welch, Historian N went directly to the references to find “a link to the case itself.” For nearly three minutes, he examined the original court brief (number BC484642), scrolling up and down the PDF document, pausing at “Procedural History” and learning that the plaintiffs argued that the California Educational Code violated the equal protection clause of the state constitution. After searching in vain for the plaintiffs’ backers, he abandoned Wikipedia and initiated a new search, adding “plaintiffs” and “attorneys” to his original query.

He clicked on the first result (studentsmatter.org, Welch’s organization) and went to “Our Team,” where he recognized the name of the lead attorney (“someone I know … the Solicitor General under Bush”). By the end of the task the only thing he could say was that the plaintiffs were represented by a “team with deep legal pockets.”

He was correct, but then again, this was the starting point for the task—participants were told that legal fees in this case were “over a million dollars” and that their goal was to find out who paid them. By the task’s end, this historian was no closer to answering the question than when he started. How come?

The simplest answer was that Historian N did what historians are trained to do: search

for primary sources. Had the task been to write a history of the Vergara case, initiating the

research process with the court brief might’ve made sense. However, when the goal was to

quickly ascertain who backed the teenagers, a close reading of a labyrinthine legal

document—which, as it turned out, never mentioned Welch—took precious time and sapped

limited energy.

Limitations

The purpose of this exploratory study was to better understand the nature of expertise in

the evaluation of online information. We recognize, however, that any task that involves

researchers peering over the shoulders of participants creates an artificial environment that can

distort what people ordinarily do. Despite imperatives to “do what you normally do,” it must be

odd to be shown sites not of one’s choosing and given one-minute warnings to stop searching.

Studies are needed that observe people evaluating sites in more natural settings. At the same

time, we reasoned that tasks without time limits threaten ecological validity—just-in-time

searches are generally matters of minutes or seconds, not hours (Liu et al., 2010; Nielsen, 2011).


It’s also possible that a different sample of sites might have yielded different results. We sampled

sites that covered a range of topics and perspectives and that varied in the extent to which they

revealed their agendas. But even within the categories we selected, there are innumerable

options, each with unknown content effects. More extensive research is needed to know if the

strategies we identified are generalizable across topics, sites, and searches.

Additionally, it may have been the case that participants didn’t put forth their best efforts,

although we find that unlikely. Our sample comprised people with high levels of self-

regard and intellectual confidence. Looking foolish, especially when rendering judgments about

issues of social and political moment, would threaten that self-regard.

We are also aware that professional fact checkers were not the only possible group of

experts we could have sampled. Others, such as Wikipedia editors who have earned the highest

badges, specialists in cyber security, and professional librarians and information scientists, are

also worthy of study. In their approach to websites, two of the ten historians resembled the fact

checkers more than their fellow historians. Small sample sizes exaggerate differences: we can’t

rule out the possibility that doubling or tripling our sample would have produced different

results. Studies that require intensive protocol analysis are always a trade-off between sample

size and available resources. That said, a sample of 45 nearly hour-long protocols is on the

higher end in this genre of research.

Discussion

The participants in this study were all capable individuals. Historians had strings of

esteemed publications to their credit and held coveted positions in a field where such positions

are increasingly rare. The fact checkers worked for prestigious publications and rubbed shoulders


with famous authors who depended on them to get things right. Our college students were the

gifted winners of the college admissions lottery at the nation’s most competitive university. Yet,

despite our participants’ abundant talents, there were unmistakable differences in how they

navigated the web.

Only two of the ten historians adroitly evaluated digital information. Their colleagues

were often indistinguishable from college students in their meandering searches and general

befuddlement. Both groups often fell prey to the same digital ruses. Considering our participants’

intellectual caliber, we are left to ask: What is it about the Internet that bedevils intelligent

people? Why are they often no wiser after reviewing a website than before? What did fact

checkers do that allowed them to quickly and accurately discern the trustworthiness of

information? How is it that they often spent less time on a website but ended up learning more?

The answer lies with two concepts we introduced earlier: taking bearings and lateral

reading. To take bearings is to heed a simple imperative: before diving too

deeply into unfamiliar digital content, make a plan for moving forward. Taking bearings is what

sailors, aviators, and hikers do to plot their course toward a desired destination. Although correct

bearings do not guarantee that travelers will reach that destination, heading in the right direction

substantially increases their chances. To take bearings, web searchers obviously don’t use a

physical compass. But they need metaphorical compasses just as much as hikers need real ones.

The act of taking bearings separated the fact checkers from nearly everyone else.

Evaluating the pediatrics websites, checkers took bearings in every instance before rendering

judgment; historians did so only a quarter of the time and students did so barely at all. Because

errors could cost them their jobs, fact checkers were keenly attuned to the web’s wiles. They


understood that websites do not sprout by spontaneous generation but are designed, created, and

financed by groups seeking to promote particular—and often partisan—interests. Taking

bearings helped checkers get a fix on these interests.

In an Internet teeming with cloaked sites and astroturfers (front groups pretending to be

grassroots efforts), taking bearings often assumes the form of lateral reading. When reading

laterally, one leaves a website and opens new tabs along a horizontal axis in order to use the

resources of the Internet to learn more about a site and its claims. Lateral reading contrasts with

vertical reading. When we read vertically, our eyes go up and down a screen to evaluate the features of

a site. Does it look professional, free of typos and banner ads? Does it quote well-known

sources? Are bias or faulty logic detectable? In contrast, lateral readers paid little attention to

such features, leaping off a site after a few seconds and opening new tabs. They investigated a

site by leaving it.
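For readers who want the mechanics made concrete, the sketch below is our own schematic gloss on lateral reading, not a procedure any fact checker described. It leans on Google’s standard -site: operator, which excludes a site’s own pages from the results, so a query about a domain returns only what the rest of the web says about it (the domain shown is one of the sites from our tasks).

    from urllib.parse import quote_plus

    def lateral_query(domain: str) -> str:
        """Build a search URL that asks the rest of the web about a site.

        The -site: operator excludes the site's own pages, so the results
        show outside coverage of the domain rather than its self-description.
        """
        return "https://www.google.com/search?q=" + quote_plus(f"{domain} -site:{domain}")

    # Investigate a site by leaving it:
    print(lateral_query("acpeds.org"))
    # -> https://www.google.com/search?q=acpeds.org+-site%3Aacpeds.org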

Paradoxically, a key feature of lateral reading is not reading. Efficient searchers

intelligently ignore massive amounts of irrelevant (or less crucial) text when making an informed

judgment about the trustworthiness of digital information. But lateral reading doesn’t take place

in a vacuum. It requires knowledge of sources, knowledge of how the Internet and searches are

structured, and knowledge of strategies to make searching and navigating effective.

Fact checkers relied on a robust knowledge of sources to inform their decisions. They

understood and distinguished among an array of online sources, including how sites are spread

across the political spectrum (Daily Kos is liberal, Daily Caller conservative). They recognized

the characteristics that generally make a source reliable, as well as those that serve only as fallible proxies for

reliability. On its “About Us” page, the Employment Policies Institute describes itself as “a non-


profit research organization dedicated to studying public policy issues.” Checker A’s reaction

was simply, “This is profoundly not helpful.” She knew that a nonprofit status does not stamp an

organization as unquestionably altruistic. In contrast, high school students trying to decide if the

Employment Policies Institute was nonpartisan were often swayed by its nonprofit status

(McGrew, Ortega, Breakstone, & Wineburg, 2017).

Knowledge of sources was therefore necessary but not sufficient. Fact checkers also

possessed knowledge of online structures, particularly how search results are organized and

presented. They knew that the first result was not necessarily the most authoritative, and they

spent time scrolling through results, often scanning the entire first page (and sometimes the

second and third) before clicking on any links. They understood how search engine optimizers

use sophisticated keywords and other techniques to game results, pushing some sites to the front

of the line and more authoritative information to the back. Students, on the other hand, often

clicked on the first results, rarely articulating a rationale for why they selected them (a finding

well-documented by others; e.g., Hargittai et al., 2010; Kirschner & van Merriënboer, 2013; Pan

et al., 2007).

Lateral reading relies on canny strategies and techniques for navigating the Internet.

Although knowing how to right click to open a new tab might seem purely technical, for our

participants it proved anything but. Indeed, the failure to right click thwarts lateral reading, piling

new windows on top of each other and making it impossible to quickly scan multiple sources.

Another key to lateral reading involves choosing keywords and putting quotation marks around

phrases so that Google locates them as a single unit. Without this knowledge, Historian K was

stymied in her attempt to get to the bottom of minimumwage.com.
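The effect of quotation marks is easy to demonstrate in miniature. The toy corpus below is invented for illustration: an unquoted search only requires each keyword to appear somewhere in a text, while a quoted search requires the words to appear as one contiguous phrase, which is what isolates a specific organization or claim.

    docs = [
        "The institute studies employment and wage policies.",
        "Employment Policies Institute releases a minimum wage report.",
    ]
    words = ["employment", "policies", "institute"]

    # Unquoted search: every keyword present somewhere, in any order.
    loose = [d for d in docs if all(w in d.lower() for w in words)]

    # Quoted search: the keywords must occur as a single unit.
    exact = [d for d in docs if "employment policies institute" in d.lower()]

    print(len(loose), len(exact))  # 2 1 -- quoting isolates the organization itself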


Even possessing this knowledge did not guarantee success. Historians and students easily

distinguished between the New York Times and the National Enquirer, and most of the students

right-clicked with ease and fluidity. By any measure of critical thinking, our participants were far

above average. But this was not enough.

Yet, even the most critical thinkers are susceptible to cognitive biases that steer them in

the wrong direction. The majority of historians and students in our sample fell victim to what

Tversky and Kahneman (1974) called the representativeness heuristic, “in which probabilities

are evaluated by the degree to which A resembles B” (p. 1124). In a series of classic

experiments, they showed how people from all walks of life ignored crucial information when

deciding whether Steve (“shy and withdrawn” with a “need for order and structure” and “a

passion for detail”) belonged to the category of librarians or farmers. Subjects blithely

disregarded base rates, forming judgments about the degree to which Steve was “representative

of, or similar to, the stereotype of a librarian” (p. 1124). Facing “intricate and less transparent

problems” (p. 1130), even professional statisticians, who should have known better, succumbed

to the biases of the representativeness heuristic.
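A short calculation shows what ignoring base rates costs. The figures below are ours, chosen purely for illustration (Tversky and Kahneman report no such numbers): even if Steve’s description is five times as likely for a librarian as for a farmer, a 20-to-1 preponderance of farmers still makes “farmer” the better bet.

    # Illustrative numbers only, not Tversky & Kahneman's: suppose farmers
    # outnumber librarians 20 to 1, and Steve's description is 5x as
    # likely for a librarian as for a farmer.
    prior_librarian, prior_farmer = 1 / 21, 20 / 21
    likelihood_ratio = 5  # P(description | librarian) / P(description | farmer)

    posterior_librarian = (prior_librarian * likelihood_ratio) / (
        prior_librarian * likelihood_ratio + prior_farmer
    )
    print(round(posterior_librarian, 2))  # 0.2 -- the odds remain 4 to 1 for farmer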

Something similar was going on when historians and college students evaluated the site

of the American College of Pediatricians. The site resembled what participants expected from a bona

fide medical venue: an impressive-sounding name; an official logo and motto (“Best for

Children”); an .org URL; and no overt signs that might raise eyebrows (flashing banner ads,

misspellings, irregular fonts, and broken links). Moreover, the article about bullying conformed

to what people expect from a scientific text (Meyer, 2017): it had an abstract, brief section

headings, and references studded with names of reputable journals like Pediatrics and Journal of


Criminology. The website’s very blandness worked to its advantage. One historian thought that

even though the site lacked the “interactive features a website might provide,” this absence did not detract

from its authority because, in his opinion, it was “just meant to be a useful resource for people to

learn about bullying.”

While acknowledging that deploying heuristics can be “economical” and “effective,”

Tversky and Kahneman (1974, p. 1131) emphasized their negative qualities (indeed, the

representativeness heuristic was the crowning example of a “cognitive bias”). Our data provide

ample evidence that something akin to the representativeness heuristic steered many of our

participants down the wrong path. At the same time, our work shines a light on how some

heuristics—skillfully deployed under the right circumstances—can be powerful aids when

navigating a complex problem space.

In evaluating digital information, we distinguish between widely used but flawed weak

heuristics, such as using a domain designation as a proxy for trustworthiness, and strong

heuristics, like lateral reading, which not only save time but often lead to more accurate

judgments than more complex methods. Over the past two decades, Gigerenzer and colleagues

(see Gigerenzer & Gaissmaier, 2011, for review) have redeemed heuristics from the dungeon of

cognitive biases and demonstrated how they can help problem solvers make decisions “more

quickly, frugally, and/or accurately than more complex methods” (2011, p. 454). Lateral reading

fits this definition. Fact checkers read less and learned more—with a speediness that often left

other participants in the dust.

Similar strong heuristics have been identified in a growing number of fields (Gigerenzer,

2007). For example, in criminal profiling, police have relied on complicated mathematical


models to predict where a repeat offender is most likely to live, considering multiple inputs to

predict probabilities. A fast and frugal alternative that bested more complex methods is the

“circle” heuristic, which draws a circle around the two farthest-flung crime locations and predicts

that the offender will live in the center (Snook, Taylor, & Bennell, 2004). In emergency

medicine, researchers devised a fast and frugal heuristic to help doctors decide when a patient

complaining of chest pain should be assigned to the coronary care unit. A simple question tree of

three yes-or-no answers “sent fewer patients who suffered from a heart attack wrongly into a

regular bed and also nearly halved physicians’ high false-alarm rate” (Gigerenzer & Gaissmaier,

2011, p. 468).
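The circle heuristic is simple enough to state in a few lines of code. The sketch below is our rendering of Snook, Taylor, and Bennell’s verbal description, with hypothetical coordinates: locate the two crime sites farthest apart and predict the midpoint between them, the center of the circle they define.

    from itertools import combinations
    from math import dist

    def circle_heuristic(crime_sites):
        """Predict the offender's home as the midpoint of the two
        farthest-apart crime locations (the circle's center)."""
        a, b = max(combinations(crime_sites, 2), key=lambda pair: dist(*pair))
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    # Hypothetical (x, y) crime coordinates:
    print(circle_heuristic([(0, 0), (2, 1), (6, 8), (3, 3)]))  # (3.0, 4.0)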

We have focused a great deal on speed, and we shall come back to that presently. While

the college students were faster at finding the name of the financial backer in the Vergara case,

their speed came at the expense of quality. Students arrived at David Welch’s name by

promiscuous clicking, often without regard to a source’s impartiality. Fact checkers took longer

not because of faulty search strategies or unhelpful keywords, but because they slowed down to

review search results. They showed click restraint. Before clicking on any of the results, they

mined Google’s snippets for the wealth of information they contain. They examined each URL,

considered the source of the information, and scanned the brief but fecund sentence fragments

before alighting on a link to click. A searcher’s first click is often destiny, either putting

searchers on a path toward warranted conclusions or sending them into the wilderness of infinite

regress. Click restraint tips the balance toward the former.
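As a rough schematic (ours, not a procedure the checkers articulated), click restraint amounts to a two-pass read of a results page: scan every URL and snippet first, then click the most credible source rather than the highest-ranked one. The ranked results and the set of vouched-for outlets below are hypothetical.

    # Hypothetical ranked results: (domain, snippet).
    results = [
        ("minimumwage.com", "Facts and research about the minimum wage ..."),
        ("nytimes.com", "Fight over minimum wage illustrates web of industry ties ..."),
        ("wikipedia.org", "Employment Policies Institute is a nonprofit think tank ..."),
    ]

    # Sources this searcher already knows and can vouch for (illustrative).
    known_outlets = {"nytimes.com", "wikipedia.org"}

    def first_click(ranked_results):
        """Scan the whole page first; prefer a recognizable source over rank."""
        vetted = [r for r in ranked_results if r[0] in known_outlets]
        return (vetted or ranked_results)[0]

    print(first_click(results)[0])  # nytimes.com, not the top-ranked minimumwage.com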

On our other tasks, fact checkers were both quicker and more accurate in reaching

decisions. Speed matters. Had participants been given an hour to complete each task, they surely


would’ve reached better conclusions. Doing so, however, would have detached these tasks from

reality. Depending on what they’re searching for, people spend various amounts of time surfing

the web. But, as researchers have discovered, the amount of time people spend on a typical

search is some variation of “not very long” (Nielsen, 2011).

That’s because people do not have hours to research every social or political question

they encounter. Too many issues confront us in our already busy lives. There are emails from

organizations asking us to donate, volunteer, sign petitions; debates to watch and choices to

make about how to vote; arguments posed in comment sections to respond to or ignore; news

articles to pass on, Facebook posts to like, tweets to re-tweet. Facing this onslaught, we need

efficient strategies for separating truth from falsehood, good arguments from bad. Consider the

daunting challenge faced by California voters trying to sift through seventeen separate initiatives

on the 2016 ballot: plans to increase the tobacco tax, ban plastic bags, limit the sale of

ammunition, legalize recreational marijuana, require porn stars to wear condoms while filming,

approve a bond to build new schools, repeal the death penalty or make it easier to mete out, and

so on. If the average voter spent ten minutes researching each initiative, we would consider this

an act of responsible citizenship. The question for our age is this: How do we make those ten

minutes count?

This is neither a plea to banish books nor a call to turn all reading into ten-minute exercises.

Close reading, the careful, analytic search for pattern, detail, and nuance, is essential to any

thoughtful curriculum (Shanahan, 2012; Wolf, 2007). But when the goal is to quickly get up to

speed, the close reading of a digital source, undertaken before one knows whether the source can be

trusted (or is what it says it is), proves to be a colossal waste of time.


In the last few years, Connecticut, Washington, Rhode Island, and Utah have all passed

legislation related to the teaching of media literacy and digital citizenship. Other states have

similar legislation in the works (see medialiteracynow.org). But what if the problem is not that

we’re failing to teach media literacy, but that we’re teaching the wrong kind?

It is impossible to rule out this possibility after surveying some of the most widely

available materials for teaching web credibility. These materials often share a common feature:

they provide checklists to help students decide whether information should be trusted, ranging

from ten questions to as many as thirty (see Common Sense Media, 2012; Media Education Lab,

n.d.; News Literacy Project, n.d.). Long or short, checklists focus students on a website’s most

easily manipulated features. For example, college library websites often advise students to use

“Five Criteria for Web Evaluation,” which are based on an article from the Internet’s Stone Age

(Kapoun, 1998). These five criteria (“Authority, Accuracy, Objectivity, Currency, and

Coverage”)—or variations on the theme (including the CRAAP test: “Currency, Relevance,

Authority, Accuracy, and Purpose”)—can be found on websites from the University of

Alaska Fairbanks to Illinois State and everywhere in between.6

Even if we set aside the concern that students (and the rest of us) lack the patience to

spend fifteen minutes answering questions about a single site, a bigger problem remains:

designating an author, throwing together a reference list, and making sure a site is free of typos

don’t confer credibility. Recall that the Employment Policies Institute not only carried an .org

domain but was labeled a 501(c)(3) “charitable organization.” When the Internet is characterized

by polished web design, search engine optimization, and organizations vying to appear

trustworthy, such guidelines create a false sense of security. In fact, relying on checklists could


make students more vulnerable to scams, not less. Fact checkers succeeded on our tasks not

because they followed the advice we give to students. They succeeded because they didn’t.

Checkers never consulted a list of questions before initiating a search. The elements

emphasized by the checklists—what an organization claims on its “About” page, an .org URL, a

physical address and contact information—were taken with a grain of salt. That’s because the

checklist approach cuts searchers off from the most efficient route to learning more about a site:

finding out what the rest of the web has to say. This was the biggest lesson we learned from

watching these experts: They evaluated unfamiliar websites by leaving them. For fact checkers,

the direct route to credibility was indirect.

Before we set out on this study, the chief fact checker at a national publication told us

what she tells her staff: “The greatest enemy of fact checking is hubris.” Even for seemingly

innocuous topics, fact checkers are taught to be wary of the “duck test,” a homey example used

to illustrate the logic of abduction, the process of making inferences based on an entity’s most

observable characteristics. While a site may look like a duck, swim like a duck, and quack like a

duck, these professionals spend their days swimming in an Internet teeming with broad-billed,

web-footed creatures, only some of which turn out to be ducks. Before conferring “duckness,”

fact checkers do what fact checkers are trained to do: they check.

The immensity of the Internet makes it impossible to be familiar with every entry Google

spits out. In this treacherous terrain, the most thoughtful response is to become skeptical of one’s

own intelligence. Hubris on the web takes the form of trusting our eyes and brains to examine the

look of a page and its content in order to determine reliability. In contrast, taking bearings,

practicing lateral reading, and engaging in click restraint remind us that our eyes deceive, and


that we, too, can fall prey to professional-looking graphics, strings of academic references, and

the allure of .org domains. Practicing these strategies is an admission that we are more astute

when we turn to the entire web than when we try to brave it alone.

Rather than making students slog through strings of questions about easily manipulated

features on a single website, we should be teaching them that the World Wide Web is, in the

words of blogger and Internet critic Mike Caulfield (2017), “a web, and the way to establish

authority and truth is to use its web-like properties.” This is what professional fact checkers do.

It is what we should be teaching students to do as well.


References

Agosto, D. E. (2002). A model of young people’s decision-making in using the Web. Library &

Information Science Research, 24(4), 311–341. doi:10.1016/S0740-8188(02)00131-7

American Academy of Pediatrics (2014). Stigma: At the root of ostracism and bullying.

Retrieved from https://www.aap.org/en-us/about-the-aap/aap-press-room/pages/Stigma-At-the-Root-of-Ostracism-and-Bullying.aspx.

American College of Pediatricians (2015). “P” for pedophile. Retrieved from

https://www.acpeds.org/p-for-pedophile.

Asher, A. D., & Duke, L. M. (2011). Searching for answers: Student research behavior at Illinois

Wesleyan University. In L. M. Duke & A. D. Asher (Eds.), College libraries and student

culture: What we now know (pp. 71-86). Chicago, IL: ALA Editions.

Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online

sources. Cognition and Instruction, 30(1), 39-85. doi:10.1080/07370008.2011.636495

Bennett, S. (2012). Digital natives. In Z. Yan (Ed.), Encyclopedia of cyber behavior: Volume 1

(pp. 212-219). Hershey, PA: IGI Global.

Bradshaw, W., Weight, D. G., & Packard, T. (2011, May 3). Same sex attraction not a matter of

choice. The Salt Lake Tribune. Retrieved from

http://archive.sltrib.com/printfriendly.php?id=51356807&itype=cmsid

Brand-Gruwel, S., Kammerer, Y., van Meeuwen, L., & van Gog, T. (2017). Source evaluation

of domain experts and novices during Web search. Journal of Computer Assisted

Learning, 33, 234-251. doi:10.1111/jcal.12162

Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by


experts and novices: Analysis of a complex cognitive skill. Computers in Human

Behavior, 21(3), 487-508. doi:10.1016/j.chb.2004.10.005

California teacher tenure law unconstitutional, judge says. (2014, June 10). Retrieved from

http://abc7.com/education/california-teacher-tenure-law-unconstitutional/106228/.

Caulfield, M. (2017, March 20). How “news literacy” gets the web wrong [Blog post]. Retrieved

from https://hapgood.us/2017/03/04/how-news-literacy-gets-the-web-wrong/.

Coleman, T. (2010). Misinformation from doctors . . . out to hurt students? Retrieved from

https://www.aclu.org/blog/speakeasy/misinformation-doctorsout-hurt-students.

Common Sense Media. (2012). Identifying high-quality sites. Retrieved from

https://www.commonsensemedia.org/educators/lesson/identifying-high-quality-sites-6-8.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.).

Cambridge, MA: MIT Press.

Espelage, D. (2011). Bullying and the lesbian, gay, bisexual, transgender, questioning (LGBTQ)

community. Retrieved from https://www.stopbullying.gov/at-risk/groups/lgbt/white_house_conference_materials.

Gasser, U., Cortesi, S., Malik, M., & Lee, A. (2012). Youth and digital media: From credibility

to information quality. Cambridge, MA: Berkman Center for Internet and Society.

Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York, NY:

Viking.

Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of

Psychology, 62, 451-482. doi:10.1146/annurev-psych-120709-145346

Graves, L. (2013, November 13). Corporate America’s new scam: Industry P.R. firm poses as


think tank! Salon. Retrieved from http://www.salon.com/2013/11/13/corporate_americas_new_scam_industry_p_r_firm_poses_as_think_tank/.

Haile, T. (2014, May 9). What you think you know about the web is wrong. Time. Retrieved

from http://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/.

Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young

adults’ evaluation of web content. International Journal of Communication, 4, 468-494.

doi:1932–8036/20100468

Helsper, E. J., & Eynon, R. (2010) Digital natives: Where is the evidence? British

Educational Research Journal, 36(3), 503-520. doi:10.1080/01411920902989227

Iding, M. K., Crosby, M. E., Auernheimer, B., & Klemm, B. (2009). Web site credibility: Why

do people believe what they believe? Instructional Science, 37, 43-63. doi:10.1007/s11251-008-9080-7

Julien, H., & Barker, S. (2009). How high school students find and evaluate scientific

information: A basis for information literacy skills development. Library & Information

Science Research, 31, 12-17. doi:10.1016/j.lisr.2008.10.008

Kapoun, J. (1998). Teaching undergraduates Web evaluation: A guide for library instruction.

College and Research Libraries News, 59, 522-533.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban

legends in education. Educational Psychologist, 48(3), 169-183. doi:

10.1080/00461520.2013.804395

Kranish, M. (2005, July 31). Beliefs drive research agenda of new think tanks: Study on gay


adoption disputed by specialists. Boston Globe. Retrieved from

http://archive.boston.com/news/nation/washington/articles/2005/07/31/beliefs_drive_research_agenda_of_new_think_tanks/.

Leinhardt, G., & Young, K. M. (1996). Two texts, three readers: Distance and expertise in

reading history. Cognition and Instruction, 14, 441-486.

doi:10.1207/s1532690xci1404_2

Lenz, R. (2012). American College of Pediatricians defames gays and lesbians in the

name of protecting children. Retrieved from https://www.splcenter.org.

Lipton, E. (2014, February 9). Fight over minimum wage illustrates web of industry ties.

New York Times. Retrieved from http://www.nytimes.com/2014/02/10/us/politics/fight-over-minimum-wage-illustrates-web-of-industry-ties.html.

List, A., Grossnickle, E. M., & Alexander, P. A. (2016). Undergraduate students’ justifications

for source selection in a digital academic context. Journal of Educational Computing

Research, 54(1), 22-61. doi:10.1177/0735633115606659

Liu, C., White, R. W., & Dumais, S. (2010). Understanding web browsing behaviors through

Weibull analysis of dwell time. Proceedings of the 33rd International ACM SIGIR

Conference on Research and Development in Information Retrieval. Geneva,

Switzerland.

Lucassen, T., & Schraagen, J. M. (2011). Factual accuracy and trust in information: The role of

expertise. Journal of the American Society for Information Science and Technology,

62(7), 1232-1242. doi:10.1002/asi.21545

McGrew, S., Ortega, T., Breakstone, J., & Wineburg, S. (2017). The challenge that’s bigger than


fake news: Teaching students to engage in online civic reasoning. American Educator,

1-13.

McGrew, S., & Wineburg, S. (2017). Reading less and learning more: Expertise in evaluating

the credibility of online information. Paper presented at the annual meeting of the

American Educational Research Association, San Antonio, TX.

Media Education Lab. (n.d.) Who do you trust? Retrieved from

http://mediaeducationlab.com/secondary-school-unit-2-who-do-you-trust.

Meyer, K. (2017) How to present scientific findings online. Retrieved from

https://www.nngroup.com.

News Literacy Project (n.d.). Ten questions for fake news detection. Retrieved from

www.thenewsliteracyproject.org/sites/default/files/GO-TenQuestionsForFakeNewsFINAL.

Nielsen, J. (2011). How long do users stay on web pages? Retrieved from

https://www.nngroup.com.

Pan, B., Hembrooke, H., & Joachims, T. (2007). In Google we trust: Users’ decisions on rank,

position, and relevance. Journal of Computer-Mediated Communication, 12, 801-823.

doi:10.1111/j.1083-6101.2007.00351.x

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1–6. doi:

10.1108/10748120110424816

Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively

responsive reading. Hillsdale, NJ: Erlbaum.


Shanahan, C., Shanahan, T., & Misischia, C. (2011). Analysis of expert readers in three

disciplines: History, mathematics, and chemistry. Journal of Literacy Research, 43, 393-

429. doi:10.1177/1086296X11424071

Shanahan, T. (2012, June 18). What is close reading? [Blog post]. Retrieved from

http://shanahanonliteracy.com/blog/what-is-close-

reading#sthash.mxxi0paG.rtIrn0KW.dpbs

Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking

content-area literacy. Harvard Educational Review, 78(1), 40–59.

Sieff, K. (2010, October 20). Virginia 4th-grade textbook criticized over claims on black

Confederate soldiers. The Washington Post. Retrieved from www.washingtonpost.com.

Snook, B., Taylor, P. J., & Bennell, C. (2004). Geographic profiling: The fast, frugal, and

accurate way. Applied Cognitive Psychology, 18, 105-121.

Sons of Confederate Veterans (1997). Black history month, Black Confederate heritage.

Retrieved from http://www.scv.org/documents/edpapers/blackhistory .

Southern Poverty Law Center. (2017). Active hate groups 2016. Retrieved from

https://www.splcenter.org/fighting-hate/intelligence-report/2017/active-hate-groups-2016.

Stanford University. (2017). Facts 2017: Academics.

Stanford University. (2015). Our selection process: Applicant profile. Retrieved from

http://admission.stanford.edu/basics/selection/profile15.html.

Throckmorton, W. (2011, October 6). The American College of Pediatricians versus the

American Academy of Pediatrics: Who leads and who follows? [Blog post]. Retrieved

from http://www.patheos.com/blogs/warrenthrockmorton/2011/10/06/the-american-college-of-pediatricians-versus-the-american-academy-of-pediatrics-who-leads-and-who-follows/.

Todd, P. M., & Gigerenzer, G. (2017). What is ecological rationality? In P. M. Todd & G.

Gigerenzer (Eds.), Ecological rationality: Intelligence in the world (pp. 1-37). New York,

NY: Oxford University Press.

Trumbull, D. (2013). Bullying at school: Never acceptable. Retrieved from

http://www.acpeds.org/the-college-speaks/position-statements/societal-issues/bullying-at-school-never-acceptable.

Turban, J. (2017, May 8). The American College of Pediatricians is an anti-LGBT group [Blog

post]. Retrieved from https://www.psychologytoday.com.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases.

Science, 185(4157), 1124-1131. doi:10.1126/science.185.4157.1124

Vergara v. California. (n.d.). Retrieved from http://studentsmatter.org/case/vergara/.

Vergara v. State of California. (n.d.). Retrieved from http://www.cta.org/Vergara.



Walraven, A., Brand-Gruwel, S., & Boshuizen, H. (2009). How students evaluate information

and sources when searching the World Wide Web for information. Computers &

Education, 52(1), 234-246. doi:10.1016/j.compedu.2008.08.003

Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A.

(2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks.

American Educational Research Journal, 46(4), 1060-1106. doi:

10.3102/0002831209333183

Wineburg, S. (1998). Reading Abraham Lincoln: An expert/expert study in the interpretation of

historical texts. Cognitive Science, 22, 319-346.

Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in the

evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83, 73-87.

Wineburg, S., & McGrew, S. (2016, November 1). Why students can’t Google their way to truth.

Education Week.

Wolf, M. (2007). Proust and the squid: The story and science of the reading brain. New York,

NY: HarperCollins.


Footnotes

1 General Orders No. 14. Freedmen and Southern Society Project, University of Maryland,

http://www.freedmen.umd.edu/csenlist.htm, accessed January 3, 2017.

2 In addition to the tasks presented here, the full protocol included 1) brief evaluations of four

static sites, 2) an open web search on a historical question with contemporary ramifications, and

3) locating the registrant of a website. The findings from those tasks are broadly consistent with

what we present here. A description of the full protocol is available from the authors.

3 After introducing each task, we refrained from speaking unless the participant fell completely

silent. In that case, questions like, “What are you thinking?” were used to encourage participants

to verbalize their thoughts.

4 The statement from Collins, which was posted on the National Institutes of Health website, is

also available via the Web Archive:

http://web.archive.org/web/20110727115017/http://www.nih.gov/about/director/04152010_statement_ACP.htm.

5 The stance is prominent in other parts of the website, such as a “Position Statement” entitled

“On the Promotion of Homosexuality in the Schools,” which states that “the homosexual

lifestyle carries grave health risks”; that “validating a student’s same-sex attraction during the

adolescent years is premature and may be harmful;” and “sexual reorientation therapy can be

effective.” Retrieved from https://www.acpeds.org/wordpress/wp-content/uploads/On-the-Promotion-of...

6 The University of Alaska/Fairbanks guide is located at https://library.uaf.edu/ls101-evaluation,

while Illinois State University’s is https://guides.library.illinoisstate.edu/evaluating/craap.
