I have no experience with neural networks, but I found this fascinating:
Neural networks could be the next frontier for malware campaigns as they become more widely used, according to a new study.
According to the study, which was posted to the arXiv preprint server on Monday, malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps them from being detected. The neural network would even be able to continue performing its set tasks normally.
“As neural networks become more widely used, this method will be universal in delivering malware in the future,” the authors, from the University of the Chinese Academy of Sciences, write.
Using real malware samples, their experiments found that replacing up to around 50 percent of the neurons in the AlexNet model—a benchmark-setting classic in the AI field—with malware still kept the model’s accuracy rate above 93.1 percent. The authors concluded that a 178MB AlexNet model can have up to 36.9MB of malware embedded into its structure using a technique called steganography, without the malware being detected. Some of the models were tested against 58 common antivirus systems and the malware was not detected. [Vice]
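To make the steganography part a bit more concrete: model weights are typically 32-bit floats, and overwriting the least-significant byte of each one only nudges its value slightly, so payload bytes can ride along without hurting accuracy much. Here is a minimal sketch of that general idea in Python; it's my own illustration, not the authors' actual method, and it assumes little-endian float layout:

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least-significant byte of float32 weights.

    Each float32 weight is viewed as 4 raw bytes; overwriting the lowest
    mantissa byte changes the value only slightly, so the network's
    behaviour is barely affected. Purely illustrative; not the paper's
    exact scheme.
    """
    flat = weights.astype(np.float32).ravel()          # private copy of the weights
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)            # little-endian: column 0 is the low byte
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, n: int) -> bytes:
    """Recover the first n hidden bytes from the weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:n, 0].tobytes()

# Quick demo: the recovered secret matches, and the weights barely move.
w = np.random.randn(1000).astype(np.float32)
secret = b"hello"
w2 = embed_bytes(w, secret)
assert extract_bytes(w2, len(secret)) == secret
print("max weight change:", np.abs(w2 - w).max())
```

Pulling the bytes back out is just the same trick in reverse, and presumably something on the receiving end still has to extract and run the payload, which is a separate problem from the hiding itself.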
There are a lot of questions that could be asked here, but one I don’t see in the article is this: just how efficient are neural networks if you can replace half of the ‘neurons’ and still have the model work?
Are there ‘pruning algorithms’ to recover those neurons that are presumably doing little to nothing to advance toward the goal?
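From what I can tell, the answer is yes: the standard approach is called magnitude pruning, where you rank weights by absolute value, zero out the smallest fraction, and usually fine-tune afterwards to recover any lost accuracy. A toy sketch of the idea, with the 50 percent fraction and the skipped fine-tuning purely for illustration:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, fraction: float) -> np.ndarray:
    """Zero out the given fraction of weights with the smallest magnitudes.

    Classic magnitude pruning: small weights contribute little to the
    output, so removing them usually costs little accuracy, especially
    after a short fine-tuning pass (not shown here).
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune half of a random weight matrix.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, 0.5)
print("fraction zeroed:", np.mean(w_pruned == 0.0))
```

The mainstream frameworks ship this sort of thing out of the box; PyTorch, for example, has a torch.nn.utils.prune module. And the fact that a model like AlexNet tolerates losing, or here having overwritten, so many of its parameters is exactly the redundancy that makes both pruning and this kind of malware smuggling possible.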