Herb Lin, writing on Lawfare, is perturbed by ChatGPT's answer to the question “what is herb lin known for” and its follow-ups, such as this:
Overall, Herb Lin is known for his expertise and leadership in the field of cybersecurity and for his efforts to advance cybersecurity policies and practices at the national level.
Me: what does Lin believe about DEI issues
ChatGPT:
Herb Lin has been an advocate for diversity, equity, and inclusion (DEI) in the technology sector and the cybersecurity community. He has written and spoken about the importance of diversity in building a more robust and effective cybersecurity workforce.

Lin has emphasized that diversity in the cybersecurity workforce is critical to addressing the complex and evolving nature of cybersecurity threats. In a 2018 article, he wrote that “a diverse workforce helps us think about problems differently, come up with new solutions, and approach the world in a more nuanced way.” He has also argued that a lack of diversity can lead to blind spots and groupthink, which can make it harder to identify and respond to cyber threats.
Comment: Though I believe this statement, and I would have happily said it, I never did. I never wrote an article in 2018 (or in any other year) that even touched on these points. …
I find this short exercise with ChatGPT quite alarming. Everything that ChatGPT says about me is plausible at least to some extent. Indeed, it correctly attributes my views on password security and DEI regarding the cyber workforce. It did so well that I had to ask myself, “Wait a moment—did I ever testify as it reported?” Even though it came close enough to hitting the mark that I had to check, the bottom line is I never said or wrote any of those things, and yet it completely made up references to papers it said I had written and to testimony it said I had given.
If even I had a bit of trouble determining whether ChatGPT was making accurate statements about me, what would anyone else have thought? Anyone, even someone with some familiarity with my work, could have read the ChatGPT transcript and believed it in its entirety, because it’s entirely plausible. The made-up citations add tremendously to the transcript’s credibility. To identify the falsehoods in the transcript, someone would have needed a reason to request a citation, and then to track down the actual citation online (to discover it was made up).
It sounds like a puppy trying to please, with no reference to morality.
Which is, of course, not surprising. But if this thing were to develop self-agency … !