If you're looking at a picture of a face and wondering whether it's a deepfake, take heart: the tools for creating deepfakes still have their weaknesses. Here's one:
Creating a fake persona online with a computer-generated face is easier than ever, but there is a simple way to catch these phony pictures – look at the eyes. The inability of artificial intelligence to draw circular pupils gives away faces that don't come from a real photograph.
Generative adversarial networks (GANs) – a type of AI that generates new images from random input rather than copying existing ones – can produce realistic-looking faces. Because each face is synthesized from scratch rather than lifted from a real photo, these images are less likely to be caught out as fake by simple checks like reverse image searches, which work by spotting the reuse of existing people's photos on fake profiles.
But they do have a tell. The pupils of GAN-generated faces aren't perfectly round or elliptical the way real ones are, and where a real person's two pupils are symmetrical with one another, computer-created pupils often have bumpy edges or don't match each other. [NewScientist (18 September 2021, paywall)]
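That pupil check can be sketched as a simple roundness test. Here's a minimal illustration in Python using the standard circularity measure 4πA/P², which is 1.0 for a perfect circle and drops as the outline gets bumpier. The `pupil_boundary` helper and its `wobble` parameter are hypothetical stand-ins for a real segmented pupil contour – the article doesn't describe any particular detector:

```python
import math

def circularity(points):
    """4*pi*area / perimeter^2: 1.0 for a perfect circle, lower for bumpy shapes."""
    n = len(points)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1              # shoelace formula
        perim += math.hypot(x2 - x1, y2 - y1)  # edge length
    area = abs(area) / 2.0
    return 4 * math.pi * area / (perim ** 2)

def pupil_boundary(radius=10.0, wobble=0.0, n=360):
    """Synthetic pupil outline; wobble > 0 mimics the bumpy edges of GAN pupils."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        r = radius * (1 + wobble * math.sin(9 * t))  # 9 lobes of irregularity
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

round_score = circularity(pupil_boundary(wobble=0.0))   # close to 1.0
bumpy_score = circularity(pupil_boundary(wobble=0.15))  # well below 1.0
print(f"round pupil: {round_score:.3f}, bumpy pupil: {bumpy_score:.3f}")
```

In a real detector the contour would come from segmenting the pupil out of the photo first; the roundness score is just the easy part.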
Somebody will find a way to fix that, probably by upgrading the discriminator – the "critic" network that judges the generator's output – to detect the eye problem and penalize it, so the generator learns to draw better pupils. If I understand GANs properly.