Going Undercover, Ctd

Further remarks on fooling facial recognition software after I provided a link to the original academic article, which is here on arXiv:

… here are some examples of what I saw before https://cvdazzle.com/. The article you shared shows a much more reasonable sort of makeup, one you could probably get away with wearing without drawing attention to yourself.

Another reader:

Yeah, I don’t think the CV Dazzle approach is gonna get you past the human actors in these systems. One issue I see is how much supervision the MUAs (makeup artists) had in applying the adversarial makeup. The example they showed with the adversarial, random, digital, and physical makeup had a nose-tip shape change in the physical version that wasn’t in the adversarial digital one. That change, above and beyond what the software predicted for the digital version, may be enough to change the real-time recognition percentage. To judge the validity of the computer-calculated changes, the physical implementation should be reviewed by several skilled people for how well it matches the digital suggestion.

Plus, as we agree, the sample size is awfully small. I’m not sure how broad a spectrum of FR software they had available to them, but comparing across multiple systems would also be useful. I’m sure all we’d really need to do to find out the feasibility of this is ask the Chinese, the Russians, and our own military and spook agencies. I’m pretty sure they’re doing this all the time.

It would be interesting to see a graph of the number of training subjects vs. the drop in recognition rate as makeup is applied.
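That graph could be prototyped even without access to a real FR system. Here is a minimal sketch of the idea, with random unit vectors standing in for face templates and a hypothetical `makeup_shift` perturbation standing in for the adversarial makeup; every number in it is an assumption for illustration, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank1_rate(n_subjects, makeup_shift, dim=128, noise=0.1, trials=200):
    """Fraction of probes matched to the correct gallery identity (rank-1).

    Embeddings are random unit vectors standing in for real face
    templates; `makeup_shift` is a made-up perturbation magnitude
    modelling the effect of the adversarial makeup.
    """
    # Enroll a gallery of n_subjects random identities.
    gallery = rng.normal(size=(n_subjects, dim))
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
    hits = 0
    for _ in range(trials):
        idx = rng.integers(n_subjects)
        # Probe = enrolled template + capture noise + makeup perturbation.
        probe = (gallery[idx]
                 + noise * rng.normal(size=dim)
                 + makeup_shift * rng.normal(size=dim))
        probe /= np.linalg.norm(probe)
        # Cosine-similarity rank-1 match against the whole gallery.
        if np.argmax(gallery @ probe) == idx:
            hits += 1
    return hits / trials

# Sweep gallery size with and without the simulated makeup.
sizes = [10, 100, 1000]
baseline = [rank1_rate(n, 0.0) for n in sizes]
with_makeup = [rank1_rate(n, 0.8) for n in sizes]
```

Feeding `sizes` against `baseline` and `with_makeup` into any plotting library would give the curve the reader describes; with a real system, `rank1_rate` would be replaced by actual enrollment-and-probe trials.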


About Hue White

Former BBS operator; software engineer; cat lackey.
