Evelyn Douek on Lawfare surveys the near future, in which technology will soon be able to put words in your mouth convincingly enough to seem real – and what that means for the regulatory apparatus:
Trump’s purported disavowals of the “Access Hollywood” comments have not been taken seriously because shortly after the recording surfaced, he acknowledged it was his voice by issuing an apology. (There were also eight eyewitnesses who have not contested its veracity.) However, if fake audio and visual content become more widespread, those plagued by inconvenient tapes in the future may not be so quick to admit fault. And they will no doubt often be right to refrain from doing so. Based on our current information environment, public figures of all stripes will likely be the target of faked recordings in attempts to damage them. New technology must be developed to help identify this kind of false content. Wardle and Derakhshan suggest that the sharing of metadata between trusted partners might help verification processes. Currently, many images and videos are stripped of metadata to protect privacy and conserve data, but this can complicate the verification process. If companies that are responsible for the dissemination or promotion of content are trusted with this information, it could facilitate better fact-checking. For the moment, verification often relies on looking at shadows or seeing if audio syncs perfectly.
While better verification tools are being developed, regulatory responses to the fake news crisis need to be forward-looking. The current information pollutants are largely text-based, but this will not be the case for much longer. Technology will soon give an ominous new meaning to the old joke: “Who are you going to believe, me or your lying eyes?” There are increasing calls for platforms to regulate content that appears online as a reaction to the spread of Russian propaganda and other fake news stories during the 2016 election. Germany has just passed a law requiring social media companies to remove reported unlawful content within 24 hours. The EU and U.K. are both conducting public consultations with a view to increased regulation. Regulatory engagement with the problems of disinformation is essential, but it is also important to ensure that responses are not knee-jerk reactions to the most recent problems that fail to anticipate the next ones. Calls for platforms to bear the responsibility for their products’ information hygiene need to acknowledge that this also makes them arbiters of truth in certain ways. There is no easy answer to the question of how content should be verified, but the answer needs to grapple with the fact that more and more of reality will be contested.
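The metadata point is worth making concrete. Here’s a minimal Python sketch (using the Pillow library; the file name is hypothetical) that reads whatever EXIF tags survive in an image. On a photo straight off a camera, this typically includes the capture timestamp, device model, and sometimes GPS coordinates; run it on the same photo after it has passed through a platform that strips metadata and the dictionary comes back nearly empty – which is exactly what makes verification hard.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return whatever human-readable EXIF tags survive in an image file."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical usage: an original photo shows timestamps and device info;
# a copy re-shared through most social platforms shows little or nothing.
print(read_exif("suspect_photo.jpg"))
```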
Sounds like we need a way to bind together the audio and video a device captures, such that they can be verified to have actually happened, and to have happened simultaneously. I know the open source software movement uses cryptographic signatures to attest that something was built by a trusted entity, and the binaries can be verified against that signature. I don’t know whether anything similar has been proposed and implemented in this arena, but since Evelyn raises it as a critical problem of the future, it seems not. Since we’re taking in more and more of the world via technology, rather than witnessing it firsthand, this will become more and more critical – and those who are not “up to speed” will be separated from the rest of us by an invisible chasm.
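For illustration, here’s roughly what that signing model might look like applied to a recording, as a minimal Python sketch using the `cryptography` library. Everything here is an assumption about how such a scheme could work, not an existing standard: a capture device (or trusted authority) holds a signing key, signs the hash of the file at capture time, and anyone can later check that the bytes haven’t changed since.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: in a real scheme this key would live in the capture
# device's secure hardware, with the public half published by the vendor.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def sign_recording(path: str) -> bytes:
    """Sign the SHA-256 digest of a media file at capture time."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return device_key.sign(digest)

def verify_recording(path: str, signature: bytes) -> bool:
    """Check that a media file still matches a signature issued at capture."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        device_public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False
```

Any edit to the file – a spliced word, a swapped frame – changes the digest and fails verification; signing the audio and video digests together in a single message would also pin them to each other, addressing the simultaneity problem. A real scheme would further need trusted hardware and secure timestamping to bind the signature to the moment of capture.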
And I’m not sure I’m looking forward to a future in which every picture and video must be viewed with extreme skepticism. I suppose I already view them that way, to some extent, but it’s discouraging to think that so many people are willing to engage in deceit to further their political goals.