We’re in a world now where we’re going to need to make individual and collective decisions concerning computers. Originally, computers were used to do things that humans did not do well, such as tireless calculations, tedious bookkeeping, and similar miscellanea. But recently – say, the last 20 years – they’ve become an integral part of things that most humans can do quite well on their own.
Such as communications.
Don’t get me wrong, they can certainly enhance communications – but, like all tools, they’re value-neutral. In the wrong hands – malicious or merely shallow-thinking, zealots or the painfully earnest – they can be tremendously damaging, multiplying the effects of, say, a racist note pinned to a corkboard in a restaurant a million-fold. The Nazis rose to power in part through misinformation campaigns, as have many other groups.
Computers make that easy, and the technology nerds make it hard to detect.
But before making a decision, we need to investigate whether and how to authenticate our communications. For years before the election we knew Fox News was a source of misinformation – a claim eventually documented by the conservative commentator Bruce Bartlett. And since the recent election we’ve been informed that we were flooded with false news items, and that the Russians were also in the ring, unseen but hitting below the belt.
Now New Scientist (19 November 2016, paywall) reports from the other side of the teeter-totter – the side that’s looking to authenticate the news, albeit a very small corner of it. Aviva Rutkin covers the work of Amnesty International’s Digital Verification Corps (DVC):
Pictures of what look like mass graves. Videos of explosions in city centres. The internet is awash with potential evidence of human rights abuses in some of the world’s most pressing conflicts.
But it can be tough to sift the real evidence from the fakes, or to work out exactly what an image shows. This is the challenge facing the Digital Verification Corps.
Launched by Amnesty International in October, the corps is training students and researchers to authenticate online images so they can help human rights organisations gather robust evidence on modern-day crimes.
“The use of smartphones has basically proliferated, and so too has the amount of potential evidence. But the actual verification of that is critical,” says Andrea Lampros at the University of California, Berkeley’s Human Rights Center (HRC). “That’s what makes it valid and usable – and that requires a tremendous amount of people power. We can help sift through those vast amounts of material and make them really useful to human rights groups and, potentially, courts.”
How will they do it?
The first step in any investigation is a reverse image search. By searching with tools like image search engine TinEye, corps members can pinpoint when a photo was first posted online and quickly rule out obvious fakes, whether shared deliberately or by mistake.
Next the corps tries to confirm when and where the image was taken. Social media often strips out valuable metadata, and this information can also be modified. Where metadata is available, the team might use those details to quiz someone who says the image is theirs. Does information about the type of camera used to take the photo, for example, match that person’s story?
Corps members are also trained to scour images for landmarks, like schools or mosques, which they can compare with satellite data.
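TinEye’s matching engine is proprietary, but the idea behind that first reverse-image-search step – flagging a “new” photo as a near-duplicate of one already indexed – can be sketched with off-the-shelf tools. Here’s a minimal illustration in Python using the open-source imagehash library; the file names are hypothetical placeholders, and a real search engine indexes millions of images rather than comparing two files:

```python
# A minimal sketch of near-duplicate detection via perceptual hashing --
# the general technique underlying reverse image search.
# Assumes Pillow and imagehash are installed (pip install Pillow imagehash).
from PIL import Image
import imagehash

def looks_like_reuse(candidate_path, known_path, threshold=8):
    """Return True if two images are perceptually similar.

    A small Hamming distance between perceptual hashes suggests the
    'new' image is a crop, re-encode, or resize of an older one --
    exactly the kind of recycled footage a reverse image search flags.
    """
    candidate = imagehash.phash(Image.open(candidate_path))
    known = imagehash.phash(Image.open(known_path))
    return (candidate - known) <= threshold  # Hamming distance in bits

if __name__ == "__main__":
    # Hypothetical files: a viral photo and an older archived one.
    if looks_like_reuse("viral_photo.jpg", "archive_2014.jpg"):
        print("Likely a recycled image -- check the earlier posting date.")
```

The point of the perceptual hash (rather than an exact checksum) is that it survives the recompression and resizing images undergo as they bounce around social media.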
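The metadata check is just as easy to make concrete. Social platforms usually strip EXIF data, but when a source hands over an original file, the relevant tags can be read with Pillow – a minimal sketch, again with a placeholder file name:

```python
# A minimal sketch of the EXIF check described above, using Pillow.
# Only useful when a source provides the original file, since social
# platforms typically strip this metadata on upload.
from PIL import Image
from PIL.ExifTags import TAGS

def camera_details(path):
    """Extract the EXIF tags useful for quizzing a source:
    camera make/model and the claimed capture time."""
    exif = Image.open(path).getexif()
    wanted = {"Make", "Model", "DateTime", "Software"}
    return {TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()
            if TAGS.get(tag_id) in wanted}

if __name__ == "__main__":
    for tag, value in camera_details("original_submission.jpg").items():
        print(f"{tag}: {value}")  # compare against the source's story
```

If the camera model or timestamp contradicts what the source claims, that’s a red flag – which is exactly the quiz Rutkin describes. And since EXIF fields can themselves be edited, a clean result is corroboration, not proof.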
This reminds me of another effort, bellingcat, subtitled “by and for citizen investigative journalists”. I have not kept up with them, but I do remember seeing articles on their investigations into pictures coming out of Ukraine during the Russian invasion, and into the downing of Malaysia Airlines Flight 17. Today? This excellent post on bellingcat by Eliot Higgins addresses the same issue the DVC is tackling:
The work of open source investigators frequently involves using content shared on social media. The reliability of those sources is something that is always under question, not only by the investigators themselves, but also by those who would try to discredit that type of content as being unreliable. …
The latest victims of their own efforts are the Syrian White Helmets, a rescue organisation whose members wear body cameras, and have emerged as one of the leading sources of evidence of air strikes against civilian infrastructure in the Syrian conflict.
Because of this, they have regularly been smeared by the Syrian and Russian governments, and decried as fakes and terrorists. Russian state TV outlet RT (formerly “Russia Today”), for example, ran an opinion piece on 26 October by writer Vanessa Beeley, who labeled them a “terrorist support group and Western propaganda tool”, while a separate report a week earlier questioned the White Helmets’ neutrality by claiming that they were funded by Western governments. As early as May, Kremlin wire Sputnik called the White Helmets a “controversial quasi-humanitarian organisation” which was “fabricating ‘evidence’ of Russia’s ‘disastrous’ involvement in Syria”. This Sputnik piece also quoted Beeley, as saying that the White Helmets “demonize the Assad government and encourage direct foreign intervention.”
So here’s the thing: are we all going to have to become experts at communications authentication? Is it safe to trust organizations such as bellingcat and the DVC? How do you feel about that?
Or will the Internet as a social communications medium shrivel up as people, realizing how they’re being misled, simply walk away?
Where’s Walter Cronkite when you need him?