DeepLocker

Artificial intelligence (AI) is widely seen as a potential solution for automatically detecting malware and stopping cyberattacks before they can affect an organization.
However, the same technology can also be weaponized by threat actors to build a new generation of malware that evades even the best cybersecurity protections and infects a computer network, or launches an attack, only when the target's face is detected by the camera.
To demonstrate this scenario, security researchers at IBM Research built DeepLocker – a new breed of "highly targeted and evasive" AI-powered attack tool that conceals its malicious intent until it reaches a specific victim.
According to the IBM researchers, DeepLocker flies under the radar without being detected and "unleashes its malicious action as soon as the AI model identifies the target via indicators like facial recognition, voice recognition, and geolocation."
In contrast to the "spray and pray" approach of traditional malware, the researchers believe this kind of AI-concealed malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected.
Such malware could hide its malicious payload inside benign carrier applications, such as video-conferencing software, to avoid detection by most antivirus and malware scanners until it reaches specific victims, who are identified via indicators such as voice recognition, facial recognition, geolocation, and other system-level features.
"What is unique about DeepLocker is that the use of AI makes the 'trigger conditions' to unlock the attack almost impossible to reverse engineer," the researchers explain. "The malicious payload will only be unlocked if the intended target is reached."
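The concealed-trigger idea the researchers describe can be sketched abstractly: instead of shipping the trigger condition itself, the program ships only a cryptographic digest of it, plus a payload encrypted under a key derived from the AI model's output. An analyst who reverse engineers the binary sees a hash and an opaque blob, neither of which reveals the target or the payload. The sketch below is purely conceptual and not IBM's implementation; all names are hypothetical, a string stands in for a face embedding, and a toy XOR cipher stands in for real encryption.

```python
import hashlib

def derive_key(model_output: bytes) -> bytes:
    """Derive a symmetric key from a stable AI-model output (hypothetical)."""
    return hashlib.sha256(b"salt|" + model_output).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for a real symmetric cipher."""
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def try_unlock(model_output: bytes, key_digest: bytes, blob: bytes):
    """Decrypt blob only if the derived key hashes to the stored digest.

    For any non-target input the check fails and the payload stays opaque;
    the trigger condition itself is never present in the program.
    """
    key = derive_key(model_output)
    if hashlib.sha256(key).digest() != key_digest:
        return None  # wrong target: nothing to analyze
    return xor_stream(blob, key)

# Build a toy "locked" sample keyed to one identity.
target = b"alice-face-embedding"          # stand-in for a model's embedding
key = derive_key(target)
key_digest = hashlib.sha256(key).digest() # only the digest is shipped
blob = xor_stream(b"benign stand-in payload", key)

print(try_unlock(b"bob-face-embedding", key_digest, blob))  # None
print(try_unlock(target, key_digest, blob))                 # b'benign stand-in payload'
```

The point of the construction is that brute-forcing the trigger means inverting a hash over the space of all possible model outputs, which is what makes the condition "almost impossible to reverse engineer."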
To demonstrate DeepLocker's capabilities, the researchers developed a proof of concept that conceals the well-known WannaCry ransomware inside a benign video-conferencing application, where it went undetected by security tools, including antivirus engines and anti-malware protection.
Because of its built-in trigger condition, DeepLocker did not unlock or execute the ransomware on the system until it recognized the target's face, which it could match against publicly available photos of the target.
"Imagine that this video-conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the application would surreptitiously feed camera snapshots into the embedded AI model, yet otherwise behave normally for all users except the intended target," the researchers added.
"When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload would be secretly executed, because the victim's face was the preprogrammed key to unlock it."
So all DeepLocker needs in order to target you is your photo, which can easily be found on any of your social media profiles on LinkedIn, Facebook, Twitter, Google+, or Instagram.
Indeed, Trustwave recently open-sourced a facial recognition tool called Social Mapper, which can be used to search for targets across numerous social networks at once.
The IBM Research group will present more details and a live demonstration of its proof-of-concept implementation of DeepLocker at the Black Hat USA security conference in Las Vegas on Wednesday.