Adversa AI Red Team Invents Technology for Ethical Hacking of Facial Recognition Systems
Adversa AI, an AI research startup, has demonstrated a new attack method against AI facial recognition applications. By making imperceptible changes to human faces, the method causes AI-driven facial recognition algorithms to misidentify people. Compared with similar approaches, it is transferable across AI models while being more accurate, stealthy, and resource-efficient.
Adversa AI Red Team has demonstrated a proof-of-concept attack against PimEyes, the most popular and advanced face search engine for public images. PimEyes is similar to Clearview, a commercial facial recognition database sold to law enforcement and governments. PimEyes was tricked into mistaking Adversa's CEO for Elon Musk in a photo.
Uniquely, the attack is a black-box one, developed without any detailed knowledge of the algorithms used by the search engine, and the exploit is transferable to different facial recognition engines. Because the attack allows malefactors to camouflage themselves in a variety of ways, the company has named it Adversarial Octopus, highlighting the animal's stealth, precision, and adaptability.
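Adversa has not published the internals of Adversarial Octopus, so the following is only a generic sketch of the family of techniques involved: small, gradient-guided perturbations (here a classic FGSM-style sign step, not the company's method) that nudge an input until a model's match score shifts toward a chosen impersonation target. The linear "embedding", the dimensions, and the step size below are all hypothetical stand-ins.

```python
import numpy as np

# Purely illustrative: a toy linear map standing in for a face-embedding
# network. Adversarial Octopus's real internals are not public.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))            # hypothetical "embedding" weights

def embed(x):
    return W @ x                        # toy embedding of a 64-dim "face" vector

def match_score(x, target_emb):
    # Stand-in for a similarity score between two faces.
    return float(embed(x) @ target_emb)

def fgsm_step(x, target_emb, eps=0.05):
    """One gradient-sign step pushing x's embedding toward target_emb."""
    grad = W.T @ target_emb             # gradient of <W x, t> with respect to x
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(0, 1, size=64)              # attacker's "face"
target = embed(rng.uniform(0, 1, size=64))  # impersonation target's embedding

before = match_score(x, target)
x_adv = x
for _ in range(10):                     # a few small perturbation steps
    x_adv = fgsm_step(x_adv, target)
after = match_score(x_adv, target)

print(after > before)                   # prints True: score toward target rises
```

Note that a real black-box attack like the one described above cannot read the model's gradients directly; it would have to estimate them through repeated queries, or craft the perturbation on surrogate models and rely on transferability between facial recognition engines.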
The existence of such vulnerabilities in AI applications, and in facial recognition engines in particular, may lead to dire consequences; they may be exploited in both poisoning and evasion scenarios, such as the following:
- Hacktivists may wreak havoc on AI-driven internet platforms that use face properties as input for decisions or further training. Attackers can poison or evade the algorithms of big internet companies by manipulating their profile pictures.
- Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity verification systems at banks, trading platforms, or other services that offer authenticated remote assistance. This attack can be even stealthier than traditional deepfakes in every scenario where they apply.
- Dissidents may secretly use it to hide their internet activities on social media from law enforcement. It resembles a mask or fake ID for the virtual world we currently live in.
Recently, Adversa AI released the world's first analytical report covering a decade of growing activity in the Secure and Trusted AI field. In the wake of interest in practical solutions for securing AI systems against advanced adversarial attacks, we have developed our own technology for testing facial recognition systems against such attacks. We are looking for early adopters and forward-thinking technology companies to partner with us on adding adversarial testing capabilities to their SDLC and MLLC, increasing trust in their AI applications and providing customers with best-of-breed solutions.