What do we do?

I got into a discussion with a friend who knows I left Facebook last summer because of their abysmal response to the civil rights audit that indicates they uphold white supremacy through their inability or unwillingness to stop hate groups and misinformation. She shared a post our mutual friend wrote the day after the domestic terrorist/white supremacist attack on the Capitol this week. She wanted to talk about several things, but one of them was what I thought we should do.

Well, for one, we all have to face what so many commentators noted this week: this IS America. We are a country where Black and brown people earn less, where white privilege is encoded in our laws and policies, and where law enforcement supports white supremacy either tacitly or actively, as broadcast around the world on January 6.

We also have to face the fact that, as I saw in a Twitter post this weekend, people who embrace untruths in spite of overwhelming evidence are no longer the fringe, and they seem incapable of examining information to discern truth. For example, people believe coronavirus conspiracy theories even though millions of people are sick and have died, or believe the election results are inaccurate even though election officials, judges, and Justice Department personnel (including many, many conservatives) have confirmed that the election was conducted and its results tallied fairly and accurately. While some news outlets are quick to point out that a majority of Americans DO trust the election results (around 60%, depending on where you look), it's very important not to overlook that 40% do not.

I’ve written about information pollution, filter bubbles, and information literacy here before. The idea that information literacy can backfire is not new. But we have millions of people who not only can’t seem to evaluate information critically enough to discover untruths, but also embrace misinformation. I strongly believe that some do so knowing they are spreading misinformation — not just foreign actors, but many politicians, public officials, and corporations who callously manipulate public opinion for their own benefit.

But in addition to these bad actors (who have always existed), there are also millions of people who feel confident that the untruths they embrace are true. They believe in their own ability to find truth — mostly online — in the sources they trust. And they believe others' sources are not trustworthy. That mistrust was evident in the anger and violence directed at the press during the insurrection this week, and during the last four years in particular, but more generally over my lifetime, as conservatives worked hard to convince their adherents that the media is too biased to believe. Progressives believe the media is biased too, for different reasons, and although that belief has not manifested in as much vitriol, it still undermines our ability as a society to find common ground, because we don't begin with any sort of shared understanding. Let me be clear: I'm not saying people shouldn't question the media or hold it accountable, but I do think wholesale mistrust of the media is unhelpful.

What should we do, my friend asked.

Talk about it, for one thing. And continue to try to teach information literacy carefully, including how information is created as well as how to evaluate it. Call on the government to direct resources toward stopping the state-sponsored misinformation that sows discontent and mistrust, and toward disrupting hate groups' (including white nationalists') communications.

For me, there are two more things: I'll try to continue to write to and call local, state, and federal officials to ask them to work on mitigating inequalities, large and small. And I'll actively seek to dismantle white supremacy. Neither of these is easy or straightforward. Both are pretty tedious and will involve making mistakes, apologizing, and trying again. Both require a lens through which everything — including being a librarian — is viewed, to reveal injustices and opportunities to correct them. The justice lens gets dirty sometimes, or slips, or cracks, and has to be cleaned or replaced.

None of us can do this alone. And those of us who are privileged — because of our whiteness or our socioeconomic status or our gender in particular — have to listen to and learn from those whose privilege has been systematically diminished. And then we have to act. As I prepare my library’s budget and consider my database renewals, I want to be sure we are spending our money on sources that not only support teaching and learning, but do so while making an effort to center Black experience and Black voices. For example, I’ve been discussing replacing CQ Researcher with my colleagues, in part because it is not making that effort.

I have no delusions that these actions are enough. But that’s my answer right now to “what should we do?”

Information pollution

I’ve been trying to be more active on Twitter, in part because a lot of open education people and librarians share experiences and thoughts there. This morning I saw a thread that a librarian I follow had retweeted, originally posted by @Viveka, that got me thinking about something that came across my desk from the ALA’s Center for the Future of Libraries in their newsletter Read for Later a couple of weeks ago. The thread was about a bot tweet making its way around Twitter, trying to drum up outrage about the way Disney has cast the forthcoming live-action Little Mermaid movie. I won’t go into detail because I don’t want to grant this message any more attention, but the gist is that it’s written to divide people and drum up racial strife while claiming on the surface that it’s not about race. That sounds like something a human might actually do: claim not to be racist, then make a point that attempts to divide people over race. But Viveka notes in the thread, “we know it’s a bot because it behaves unlike any human,” and then goes on to explain the tells that make this so.

If you are not as observant as Viveka, or just accept the content at face value without interrogating the tweet too carefully — and let’s face it, that’s how the majority of tweets are read, quickly and without much thought — you might miss it. Viveka points out, “It has the right hashtags, the petition link works, call to action is clear.” That might make it seem real enough that, in the seconds it takes to skim it, most people would either ignore it, engage with it, or feel moved enough to click the petition. But it’s been posted “about every ten minutes, as a reply to other tweets mentioning the movie or the actress” (@Viveka).

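Viveka's tells are behavioral: near-identical text posted at clockwork intervals, something no human does. As a rough illustration only (the function, thresholds, and sample posts below are my own invention, not anything from the thread or from Twitter's actual detection systems), that regularity tell could be checked like this:

```python
# Minimal sketch of one bot "tell": near-duplicate posts sent at
# suspiciously regular intervals. All names, thresholds, and sample
# data here are invented for illustration.
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from statistics import pstdev

def looks_like_bot(posts, similarity=0.9, max_jitter_s=60):
    """Flag an account whose posts are near-duplicates sent at
    clockwork intervals -- behavior "unlike any human"."""
    if len(posts) < 3:
        return False  # too few posts to judge a pattern
    texts = [text for _, text in posts]
    # Tell #1: every post is near-identical to the first one.
    if any(SequenceMatcher(None, texts[0], t).ratio() < similarity
           for t in texts[1:]):
        return False
    # Tell #2: the gaps between posts barely vary. Humans post
    # irregularly; a tiny spread in the gaps is suspicious.
    times = sorted(ts for ts, _ in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return pstdev(gaps) <= max_jitter_s

base = datetime(2019, 7, 6, 9, 0)
bot_posts = [(base + timedelta(minutes=10 * i),
              "This isn't about race, but sign the petition!")
             for i in range(6)]
print(looks_like_bot(bot_posts))  # True: identical text, every 10 minutes
```

A real detector would of course look at far more signals (account age, reply targets, network behavior), but even this crude version captures why "about every ten minutes" gives the bot away.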
A bot can do that; a human can’t. Which is why I went back and re-read the New York Times article by Cade Metz and Scott Blumenthal that appeared in Read for Later, “How AI Could Be Weaponized to Spread Disinformation.” Metz and Blumenthal write about two AI companies that are openly releasing fake news generators that keep getting better at mimicking human writing, so that researchers know what we’re up against as more and more content like the thread above proliferates. Why does it matter what people think about the casting of a Disney film? It doesn’t, but the humans behind this bot probably have an interest in dividing Americans over cultural issues. Perhaps so we will vote emotionally, or so we’ll be busy arguing while our government cages children, or tries to start a war somewhere, or . . . you get the idea. And AI makes it more likely that we’ll have trouble identifying what’s bot-generated and what’s not.

So why is this a library issue? Libraries of all sorts encourage information literacy, or in the case of school and academic libraries, teach it. Information literacy is a set of skills and habits of mind that allow people to seek, evaluate, and use information effectively and responsibly. Will we be able to keep doing this work if, as Metz and Blumenthal quote OpenAI researcher Alec Radford, “The level of information pollution that could happen with systems like this a few years from now could just get bizarre”?

I’m not sure. Yes, we can keep teaching people to examine and consider information carefully but we have to be careful not to go so far as to convince them to trust nothing, as danah boyd cautions in her article “Did Media Literacy Backfire?” which I re-read every few months to remind myself how hard this work is. Will the media be susceptible to information pollution in the same way social media is? Is it already, in the “balance bias” of its coverage of major issues like climate change?

There are no easy answers. I believe information literacy is a help, but knowing how to do something doesn’t mean someone will do it. Ultimately the fight against information pollution is a matter of will — we have to spend more than a few seconds scanning something online before deciding whether it’s valid or not. I’d like to think libraries have a role in encouraging that, but in the end, it’s probably up to each of us to be smart information consumers.