New York Times Used Blockchain To ID Fake Photos

At first glance, you may wonder why anyone would use blockchain to identify fake photos. In reality, the immutable nature of blockchain technology may be an answer to catching fakers and con men; even police forces are already seriously looking at blockchain-based ID verification systems.

As for journalism, here’s how the Times is approaching it…

The Times’ News Provenance Project had a more straightforward goal: Can blockchain make it easier for news consumers to understand where the photos they see online came from? There’s a need for it: Pew reported last June that 46 percent of Americans say they find it difficult to recognize when images are false or have been doctored — and that was before Peak Deepfake.

In the months since then, staffers have been doing user research and building prototypes of such a tool — taking advantage of blockchain’s ability to store data immutably and to track its usage over time. Today, the News Provenance Project is releasing some of its initial findings.

To sum up: They learned a lot of interesting things about how news consumers evaluate the images they see online — and how different people value different contextual clues when determining whether or not a photo seems real and trustworthy. They made a proof-of-concept using blockchain to store more metadata for images. But they determined that a lot of things would have to change structurally about how photos work online for any solution to be widespread.
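The article doesn’t publish the proof-of-concept itself, but the underlying idea — an append-only, tamper-evident log of photo metadata — can be sketched in a few lines of Python. Everything below (the `ProvenanceChain` class, the field names) is a hypothetical illustration of the general technique, not the Times’ actual implementation:

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    """Deterministically hash a record (sorted keys give stable output)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceChain:
    """Append-only chain of photo-metadata records.

    Each entry commits to the hash of the previous entry, so altering
    any earlier record invalidates every hash that follows it.
    """

    def __init__(self):
        self.entries = []

    def append(self, metadata: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "metadata": metadata,
            "prev_hash": prev_hash,
            "hash": record_hash({"metadata": metadata, "prev_hash": prev_hash}),
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash from scratch; any edit breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            expected = record_hash({"metadata": e["metadata"], "prev_hash": prev})
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Tampering with any stored entry — say, rewriting a photo’s location — changes its recomputed hash and breaks verification. That tamper-evidence is the “immutability” property the project is relying on.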

“It’s less about it being a survey about where are people more susceptible to misinformation and more about people who are more likely to be fooled by misinformation,” said Marc Lavallee, executive director of Times’ R&D team. “What we wanted to focus on is: In a situation where if you are susceptible to that, what are the best safeguards to help head that off at the pass?”

Or as News Provenance Project lead Sasha Koren puts it:

Critical information that typically goes into a photo caption, such as time, date, location and the accurate identification of people and events shown, doesn’t travel with a photo when it’s posted to social media, where it can be reposted with egregious inaccuracies.

News organizations have this information. With a bit of work and some thoughtful designs, we could use it to help platforms and, more importantly, news consumers avoid inaccurate uses.

The Times conducted in-depth interviews (“qualitative user studies”) with 34 news consumers in three rounds to both understand how people currently process the news images they see and what information would help them gauge a photo’s validity.

  • Round 1: Interviews with 15 people about the current state of news and misinformation and their “mental models” of trust in news photos.
  • Round 2: Prototype testing with seven people and multiple prototype designs.
  • Round 3: Prototype testing with 12 people and one design.

Interviewees from Round 1 fell pretty evenly into one of four categories; see if you can picture them in your head:

Distrustful news skeptic (low trust, high awareness): Seeking to call out bias in mainstream media, a person in this category may use motivated reasoning to find any evidence to confirm their belief that the media is pushing a particular agenda. Building trust in specific news outlets is difficult in these cases: it may be part of a person’s identity to be skeptical of mainstream media and hyper-alert to perceived cues of bias. Importantly though, this skepticism applied more to the editorial framing of a story than a complete denial of facts, such as when and where a picture was taken.

Media-jaded localist (low trust, low awareness): This person may feel marginalized by mainstream media and uncritically accept hot takes from unofficial accounts as truths. They want news that feels local and authentic, but they don’t want to be misled by false information intended to deceive. Additionally, they need clearer cues to identify false and misleading content from unofficial accounts that they trust in good faith.

Late-adopter media traditionalist (high trust, low awareness): A person in this category may be more comfortable learning about news through older mediums such as television or newspapers, but less comfortable making sense of news online within the noise of social media. On this front, people need more education on misinformation and disinformation tactics, as well as clear cues to more readily distinguish credible content from media sources they already trust.

“What we saw was a tendency to accept almost all images at first glance, regardless of subject area,” Emily Saltz, the project’s UX lead, said. “People were more discerning the more they focused on a post…If someone was at all skeptical of a photo post and rated it as less credible, it tended to be a reaction to the perceived slant of a caption or headline, rather than a belief that it was made up or edited.”

Researchers identified two groups they felt would be most helped by better image provenance information: “those who trust the media, but lack the baseline digital literacy to reliably assess the credibility of posts, and those who are already confident in their abilities to distinguish credible news photography, but would benefit from even more context to factor into their understanding.”

They tested a number of variations on the kinds of context that could be provided with a photo. Among the things they found:

    • A checkmark — like the blue one Twitter uses for verified accounts — wasn’t enough to instill confidence in an image’s credibility.
    • Users preferred the term “sourced” (with the ability to follow up on information) to “verified” (which relied on the endorsement of other people).
    • Multiple images of an event were more helpful than a single photo’s edit history in convincing a news consumer that the event pictured had taken place.
    • Users want to see the process and to know that there will be oversight and accountability for misinformation.

Publishing incorrect information with a provenance signal — as might happen with something posted quickly during breaking news — can cause a user not to trust it again.

“We live in a world where the most well-intentioned things — like raising awareness about climate change, or even just that particular natural disaster — can be fueled by unintentional, well-meaning misinformation. What does that mean when people actually try to do this on purpose?” Lavallee said. “We started looking at the technology angle of it and very quickly from there what we realized is: We can build a perfect technology solution, but if it doesn’t actually matter to people, then it’s not going to work.”

What’s next for the News Provenance Project? The Times says its next phase “will shift from exploration to execution to show how an end-to-end solution can help users share trusted news with confidence. The team will explore other potential solutions in emerging technology and work across newsrooms and industry to imagine what a better internet might look like.”

Whatever progress can be made, it will be essential to make it workable not only for the big guys — the Timeses, Posts, and Journals of the world — but for local outlets and small publishers too.

“We do not succeed if we end up creating a two-tier information ecosystem where only large publishers have these added veracity signals, implicitly casting more scrutiny on smaller outlets,” Lavallee said. From a technical perspective, the solution needs to be as easy to install and use as something like WordPress. From a workflow and best practices perspective, it needs to be as easy and common sense as writing web-friendly headlines for articles.

The work ahead is making sure that metadata is captured in a consistent way and made available to other organizations, such as social platforms, so they can use it responsibly.
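One way to make “captured in a consistent way” concrete — a hypothetical sketch, not a standard the project has announced — is canonical serialization: if every organization serializes the same caption fields the same way, the same record always produces the same bytes, and a single hash can identify it across newsrooms and platforms:

```python
import hashlib
import json


def canonical_record(metadata: dict) -> str:
    """Serialize with sorted keys and fixed separators so any organization
    producing this record gets byte-identical output for the same fields."""
    return json.dumps(metadata, sort_keys=True, separators=(",", ":"))


def record_id(metadata: dict) -> str:
    """Stable identifier for a metadata record, independent of field order."""
    return hashlib.sha256(canonical_record(metadata).encode()).hexdigest()
```

Because field order no longer matters, a newsroom CMS and a social platform that each assemble the same caption data independently would still compute the same identifier — which is what makes cross-organization lookups feasible.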

What’s your opinion on this? Do you think a blockchain ID verification system will work? Share your thoughts with me.

This article was curated from www.niemanlab.org