Fixing What the Web Broke

This article is part of the On Tech newsletter. You can sign up here to receive it on weekdays.

The January riot at the U.S. Capitol showed the harm that can result when millions of people believe an election was stolen, despite no evidence of widespread fraud.

The Election Integrity Partnership, a coalition of online information researchers, this week released a comprehensive analysis of the misinformation surrounding the presidential contest and recommended ways to avoid a repeat.

Internet companies were not solely responsible for the fiction of a stolen election, but the report concluded that they were hubs where false narratives were incubated, amplified and cemented. I'll summarize here three of the report's intriguing suggestions for how companies like Facebook, YouTube and Twitter can change to create a healthier climate of information about elections and everything else.

A general point: It may feel as though people's norms and behaviors on the internet are immutable and inevitable, but they are not. Digital life is still relatively new, and what's good or toxic is the result of deliberate choices by companies and by all of us. We can fix what's broken. And as another threat to the Capitol this week showed, it's imperative that we get this right.

1) Raising the bar for the most influential and repeat offenders: Kim Kardashian can change more minds than your dentist can. Research into the 2020 election showed that a relatively small number of prominent organizations and individuals, including President Donald Trump, played an outsized role in establishing the myth of a rigged vote.

Currently, sites like Facebook and YouTube mostly consider the content of a post or video divorced from the messenger when they determine whether it violates their guidelines. World leaders are given more leeway than the rest of us, and other celebrities sometimes get a pass for breaking companies' policies.

That makes no sense.

If internet companies did nothing else, it would make a world of difference if they changed the way they treat the influential people who are most responsible for spreading falsehoods or twisted facts, and who do so over and over again.

The EIP researchers proposed three changes: create stricter rules for influential people; prioritize faster decisions on prominent accounts that have previously broken the rules; and escalate the consequences for habitual super-spreaders of false information.

YouTube has long had such a “three strikes” system for accounts that repeatedly break its rules, and Twitter recently adopted versions of this system for posts it deems misleading about elections or coronavirus vaccinations.

The hard part, though, isn't necessarily making the policies. It's enforcing them when doing so can create a backlash.

2) Internet companies should tell us what they are doing and why: Large websites like Facebook and Twitter have detailed guidelines on what is not allowed – threatening others with violence or selling drugs, for example.

However, internet companies often apply their policies inconsistently and don't always give clear reasons when people's posts are flagged or deleted. The EIP report suggested that online companies do more to inform people about their policies and share evidence of why a post broke the rules.

3) Greater visibility and accountability for internet companies' decisions: News organizations have reported on Facebook's own research showing that its computer recommendations nudged some people toward fringe ideas and made people more polarized. But Facebook and other internet companies usually keep such analyses secret.

The EIP researchers suggested that internet companies publish their research on misinformation and their assessments of attempts to counteract it. This could improve people’s understanding of how these information systems work.

The report also proposed a change that journalists and researchers have long wanted: ways for outsiders to see posts that internet companies have deleted or flagged as false. That would create accountability for the decisions those companies make.

There are no easy solutions for rebuilding Americans' trust in a shared set of facts, particularly when websites let lies travel further and faster than the truth. But the EIP recommendations show that we have options and a way forward.

  • Amazon is getting big(ger) in New York: My colleagues Matthew Haag and Winnie Hu wrote about Amazon opening more warehouses in New York neighborhoods and suburbs to enable faster deliveries. A related On Tech newsletter from 2020: why Amazon needs more parcel hubs closer to people's homes.

  • Our houses are always watching: Police officers are increasingly seeking video from internet-connected doorbell cameras to solve crimes, but The Washington Post writes that the cameras can sometimes put officers at risk, too. In Florida, a man saw FBI agents approaching through his home camera and opened fire, killing two agents.

  • Square is buying Jay-Z's streaming music service: Yes, the company that swipes your credit card at the flea market vendor will own a streaming music company. No, it doesn't quite make sense. (Square said it's about finding new ways for musicians to make money.)

A cat in London refused to budge from the roof of a train for about two and a half hours. There are far too many silly jokes to make here about the cat surfing the train. (Or maybe just enough silly jokes?)

We want to hear from you. Tell us what you think of this newsletter and what else you would like us to explore. You can reach us at ontech@nytimes.com.

If you don't already get this newsletter in your inbox, please sign up here.
