Voice assistants such as Alexa, Google Assistant and Siri can be targeted by attacks that use lasers to issue silent and often invisible commands, triggering unwanted actions such as unlocking doors, visiting websites, and locating and starting vehicles. By aiming a low-power laser beam at devices equipped with these voice assistants, the researchers who discovered the flaw were able to issue commands from up to 110 meters away.
Voice-controlled systems do not normally require the user to authenticate before each command, which allows the attack to be completed without knowing a password or PIN. Given the distance and the nature of the technique, these attacks can be carried out even from a different building or from outside the victim's building, provided the voice-controlled device sits near a window.
This type of attack exploits a vulnerability in microphones that use MEMS (micro-electro-mechanical systems) components: these components respond to the electromagnetic radiation of light, mistaking it for an acoustic wave. The researchers carried out their tests on Siri, Alexa, Google Assistant, Facebook Portal and a limited number of smartphones and tablets, but there is good reason to believe the problem could affect all devices that use MEMS microphones.
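The principle is simple to illustrate: the recorded voice command is used to amplitude-modulate the intensity of the laser, so the MEMS diaphragm produces an electrical signal that tracks the original audio. Below is a minimal sketch of that modulation step in Python; the file name, bias and modulation depth are illustrative assumptions, not values from the researchers' paper.

```python
import numpy as np
from scipy.io import wavfile

# Load the recorded voice command (hypothetical file name).
rate, audio = wavfile.read("ok_google_open_garage.wav")
audio = audio.astype(np.float64)
if audio.ndim > 1:                  # fold stereo down to mono
    audio = audio.mean(axis=1)
audio /= np.max(np.abs(audio))      # normalize to [-1, 1]

# Amplitude-modulate the laser intensity: a constant bias keeps the
# diode lasing, and the audio waveform rides on top of it.
bias = 0.5    # illustrative DC operating point (normalized power)
depth = 0.4   # illustrative modulation depth
intensity = np.clip(bias + depth * audio, 0.0, 1.0)

# A MEMS microphone hit by this beam responds to the intensity
# variation as if it were sound pressure, reproducing the command.
```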
The attack, called Light Commands, still has a number of limitations. First of all, as you may have guessed, the attacker needs a clear line of sight to the target device, and in many cases the laser beam must be aimed at a precise point on the microphone. Unless a laser outside the visible spectrum is used, the light could easily be noticed by someone near the target device. Finally, all of these voice assistants respond with acoustic feedback (whether a spoken sentence or a simple sound) when they receive a command to execute.
Beyond these limitations, the technique identified by the researchers is still significant because, in addition to being a new type of threat, it can be effectively replicated in real-world situations. The researchers admit, however, that they have not fully understood the physical reason why the technique works; a better understanding of the phenomenon could therefore lead to even more effective attacks.
"Voice control systems often do not have user authentication systems or, when present, are not implemented correctly. We have shown how an attacker could use commands given by a light beam to perform actions such as unlock the front door, open the garage door, make purchases online and locate and start vehicles if they are connected to the victim's Google account".
The researchers described in their publication several ways of performing the attack, with different kinds of equipment. A first setup uses a simple laser pointer, a laser driver and an audio amplifier, optionally adding telephoto optics to focus the beam for long-range attacks; the laser driver, however, is a laboratory instrument that requires familiarity with its use and configuration (a sketch of what it does follows below). Another setup used an infrared laser, invisible to the eye. A third configuration relied on a laser-excited phosphor light source, eliminating the need to aim the beam at a precise point on the MEMS microphone.
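The laser driver's job in this setup is to translate the modulated signal into a diode current that stays within safe operating limits. A hedged sketch of that mapping, self-contained and with threshold and maximum currents that are purely illustrative placeholders (a real driver would be configured from the diode's datasheet):

```python
import numpy as np

# Illustrative diode parameters -- placeholders, not values from the paper.
I_THRESHOLD_MA = 30.0   # current at which the diode begins lasing
I_MAX_MA = 100.0        # maximum safe drive current

def drive_current_ma(intensity_norm: np.ndarray) -> np.ndarray:
    """Map a normalized [0, 1] intensity envelope to a drive current in mA.

    Values below the lasing threshold would cut the beam off entirely,
    so the whole envelope is offset above I_THRESHOLD_MA.
    """
    clipped = np.clip(intensity_norm, 0.0, 1.0)
    return I_THRESHOLD_MA + clipped * (I_MAX_MA - I_THRESHOLD_MA)
```

The key design point is the linear mapping: keeping the modulation within the diode's linear operating region is what preserves the audio waveform in the optical output.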
In one of several attempts, the researchers carried out an attack from a distance of about 70 meters and through a glass window, while in another test telephoto optics were used to focus the laser and attack a device placed about 110 meters away: this was the maximum distance available in the researchers' test environment, which leaves open the possibility of conducting such attacks from even greater distances.
Source: https://www.hwupgrade.it/news/sicurezza-software/alexa-google-assistant-e-siri-basta-un-laser-per-ingannarli_85344.html