On Lawfare, professor Alan Rozenshtein weighs in on the problems of new services offered over the Internet and how they interact with our legal system:
Unfortunately, when it comes to policymaking, the platform-or-publisher question is a prime example of what the early-twentieth-century legal realist Felix Cohen called “transcendental nonsense”: the counterproductive attempt to answer practical questions through conceptual analysis. One of Cohen’s famous examples was the debate over whether a labor union was a “person” and thus could be sued. Instead of torturing ourselves about the essence of labor unions or personhood, Cohen argued, we should instead ask whether we’d rather live in a world in which labor unions could be sued; if yes, then we’ll say that labor unions are persons, and if not then we’ll say that they’re not. In this view, the label “person” isn’t driving the analysis but is rather just a shorthand way of describing those entities that the law allows to be sued. And since all definitions are just arbitrary conventions, there’s no purely logical reason to prefer one categorization over another. The real work has to be done by a combination of facts—what is the state of the world and what are the various options for changing it—and values—what sort of world do we want to live in.
In the case of technology companies and their obligations to moderate content—whether of foreign interference in elections, terrorist and extremist speech, or just everyday bullying and harassment—debating over whether companies are platforms or publishers is as backwards a strategy as is arguing over whether labor unions are people. Instead of having a theoretical discussion over what kind of entity a technology company is, and then using that categorization to determine its obligations, we should ask what obligations we want the company to have, and then use whatever label is most convenient to remind ourselves of what we decided. And to answer this latter question, we need to focus on facts—how many users, what kind of content, what sort of algorithms—and values—what tradeoffs are we willing to make between policing bad content and the inevitable infringements on user privacy and free expression that such policing entails. These are hard questions, and definitional debates over whether a technology giant is more like a newspaper or a telephone network won’t help.
In other words, let’s retire the tired debate over whether Facebook or Google or Twitter is a platform or a publisher (or some third, hybrid category). It’s just distracting us from the real issue: not what these companies are, but what they can do.
It’s a little fascinating watching someone dance around the fact that the law is currently inadequate by disputing the very processes whereby we make law, or damn near anything else, comprehensible. I think it all keys on this:
And since all definitions are just arbitrary conventions, there’s no purely logical reason to prefer one categorization over another.
Well, no. As any software engineer of the object-oriented variety (and, to a lesser extent, perhaps, the functional-paradigm programmer) knows, we categorize in order to simplify attaining our goals. Within the law, categorization means we can avoid enumerating every entity we wish to address within the framework of the law and, more importantly, extend that framework to future entities. The definitions are not arbitrary; they are driven by goal-oriented logical processes, and thus we invalidate Cohen’s remark (as cited by Rozenshtein).
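To make that concrete, here is a minimal sketch in TypeScript, entirely my own illustration and not anything from Rozenshtein’s post; the names Suable, Corporation, LaborUnion, and fileSuit are hypothetical. The point is that the category is defined by the goal we care about (which entities can be sued), the “court” only ever deals with the category rather than an enumeration of entities, and a future entity type joins simply by implementing it.

```typescript
// The goal: we want to know which entities can be sued.
// The category exists to serve that goal, not to capture some "essence".
interface Suable {
  name: string;
  serveLawsuit(claim: string): void;
}

// Existing entities the goal applies to...
class Corporation implements Suable {
  constructor(public name: string) {}
  serveLawsuit(claim: string): void {
    console.log(`${this.name} (corporation) served with: ${claim}`);
  }
}

class LaborUnion implements Suable {
  constructor(public name: string) {}
  serveLawsuit(claim: string): void {
    console.log(`${this.name} (labor union) served with: ${claim}`);
  }
}

// ...and the "court" only needs to know about the category, never the full
// enumeration of entities. A future entity type joins by implementing Suable.
function fileSuit(defendant: Suable, claim: string): void {
  defendant.serveLawsuit(claim);
}

fileSuit(new LaborUnion("Amalgamated Widget Workers"), "breach of contract");
```

The definition of Suable is anything but arbitrary: it contains exactly what is needed to accomplish the goal, and nothing else.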
Without having followed this discussion in any form, I suspect the real solution is going to start with something Rozenshtein would like to discard: the … third, hybrid category. But how to extend the categorization? Elsewhere in his post, he states that
But the companies, led by Google, are increasingly defending their algorithms as First Amendment–protected speech, which suggests a closer affinity to publishers like the New York Times or CNN than to pure communications platforms like AT&T or Verizon.
My understanding is that these algorithms are part of the content delivery system, rather than the content generation system (which, for you categorization buffs, means the users). Rozenshtein, though, uses their existence to suggest a likeness to members of the earlier categories, but on the basis of content generation, I think. I could be wrong.
But let’s reconsider, then, the previous generation of publishers and platforms. How did they deliver content?
First, there were the old corkboards in stores and other buildings of both private and public nature, of which you still see a few. People could leave messages of general or specialized interest, which might be answered through public or private means by other interested folks. This might be an example of a platform.
There was, and still is, the public sale of content. An example is the iconic sale of newspapers by newsboys. While this channel might, in principle, function purely to move buyer-generated content to other buyers, it was more often used by the providing company to sell its own content, thus making that company a publisher.
The postal service is the last one I shall mention, and I mention it last for a reason. Much like the previous category, public sales, it could be used either way, although more by publishers. Letters-of-comment (LOC) columns provided a minor way for readers (who were not always buyers, although they overwhelmingly fell into that category) to generate content, but this was, and is, a publisher-dominated content delivery system. Importantly, this mode of content delivery permitted a limited form of customization, because now the user of the delivery system knew who was receiving the content. In theory, each item could be modified based on knowledge of the reader at the given address. Insofar as I know, there was no regulation of such activity.
This slight diversion down memory lane should serve to awaken a question in the reader: how do the algorithms of Google, et al., fit into this picture of content delivery? The closest categorization is the last one listed, the postal-service delivery system, because each recipient can be, and in technical fact has to be, distinguished from the others. Once collection of data extraneous to the technical requirements of delivery commences, the items delivered can be modified based on that extraneous data (and possibly the non-extraneous data as well, to be entirely anal about it). Keeping in mind my long-term theme that computers are multipliers, the ability to modify items based on the extraneous data is boosted to the nth degree compared to the prior-generation postal delivery system. This ability to customize is then applied both to items actively collected by the company and to items from the independent content-generating entities who use the company’s delivery system to send content to readers.
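To illustrate what I mean by modification based on extraneous data, here is a deliberately simplified sketch; the field names and the scoring rule are my own assumptions and describe no actual company’s algorithm. The non-extraneous data is the address needed to deliver anything at all; the extraneous data is the inferred interest profile; and the delivery step reorders the items per recipient accordingly.

```typescript
interface ContentItem {
  id: string;
  author: string;    // the content generator, i.e. a user or advertiser
  topics: string[];
}

interface RecipientProfile {
  address: string;             // non-extraneous: enough to deliver at all
  inferredInterests: string[]; // extraneous: collected beyond delivery needs
}

// The delivery system no longer just routes items to an address; it reorders
// (or filters) them per recipient, using the extraneous data.
function customizeDelivery(items: ContentItem[], recipient: RecipientProfile): ContentItem[] {
  const score = (item: ContentItem): number =>
    item.topics.filter(t => recipient.inferredInterests.includes(t)).length;
  return [...items].sort((a, b) => score(b) - score(a));
}

const items: ContentItem[] = [
  { id: "1", author: "user-A", topics: ["gardening"] },
  { id: "2", author: "candidate-B", topics: ["politics", "taxes"] },
];
const reader: RecipientProfile = {
  address: "reader@example.com",
  inferredInterests: ["politics"],
};
console.log(customizeDelivery(items, reader).map(i => i.id)); // ["2", "1"]
```

The postal carrier of the prior generation could, in theory, do the same thing, but only by hand and only at the margin; the multiplier here is that the reordering happens for every recipient, on every delivery.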
And this, I contend in my legal ignorance, is what will end up generating an entirely new categorization for the noted companies, much to their dismay. This massive ability to customize, combined with the providing company’s lack of control over that content, will make them unique, and subject to regulation.
I hope the questions asked while formulating such regulations will include the appropriateness of customizing political content to separate users: if one political message generated by, or for, candidate A contradicts another from the same content generator, but the two are viewable only by non-intersecting subsets of the recipients, is this an appropriate and desirable use of the medium? If not, then what do you do about it? Ban the entire service a priori, or attempt to detect and punish a posteriori, after the damage is done?
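For the a-posteriori option, here is a rough sketch of what detection might look like, assuming, and this is a large assumption, that a regulator could inspect delivery logs and had already judged two messages to be contradictory; the data shapes and the flagging rule are hypothetical.

```typescript
interface DeliveredMessage {
  messageId: string;
  source: string;          // candidate A, or whoever generated the content
  recipients: Set<string>;
}

// Flag a pair of messages from the same source that (per some prior judgment
// of contradiction) were shown only to non-intersecting audiences.
function disjointAudiences(a: DeliveredMessage, b: DeliveredMessage): boolean {
  if (a.source !== b.source) return false;
  for (const r of a.recipients) {
    if (b.recipients.has(r)) return false; // a shared reader could spot the contradiction
  }
  return true;
}

const msg1: DeliveredMessage = {
  messageId: "m1",
  source: "candidate-A",
  recipients: new Set(["alice", "bob"]),
};
const msg2: DeliveredMessage = {
  messageId: "m2",
  source: "candidate-A",
  recipients: new Set(["carol", "dave"]),
};
console.log(disjointAudiences(msg1, msg2)); // true: no recipient saw both messages
```

The hard part, of course, is not the set arithmetic but deciding what counts as “contradictory” and who gets to look at the logs; that is where the values, in Rozenshtein’s sense, come in.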
In the end, categorization is a marvelous tool of the human intellect, but one must always remember that categories are often imperfectly defined; new categorizations should always be kept in mind, and they will always be driven by the goals of those doing the categorizing.