Hey, Siri: Hackers Can Control Smart Devices Using Inaudible Sounds

Published: 23/11/2024   Category: security




A technique, dubbed the Near-Ultrasound Inaudible Trojan (NUIT), allows an attacker to exploit smartphones and smart speakers over the Internet, using sounds undetectable by humans.



The sensitivity of voice-controlled microphones could allow cyberattackers to issue commands to smartphones, smart speakers, and other connected devices using near-ultrasound frequencies undetectable by humans for a variety of nefarious outcomes — including taking over apps that control home Internet of Things (IoT) devices.
The technique, dubbed a Near-Ultrasound Inaudible Trojan (NUIT), exploits voice assistants like Siri, Google Assistant, or Alexa and the ability of many smart devices to be controlled by sound. According to researchers at the University of Texas at San Antonio (UTSA) and the University of Colorado at Colorado Springs (UCCS), most devices are so sensitive that they can pick up voice commands even if the sounds are not in the normal frequency range of human voices. 
In a series of videos posted online, the researchers demonstrated attacks on a variety of devices, including iOS and Android smartphones, Google Home and Amazon Echo smart speakers, and Windows Cortana. 
In one scenario, a user might be browsing a website that is playing NUIT attack commands in the background, with a voice-control-enabled mobile phone in close proximity. The first command issued by the attacker might be to turn down the assistant's volume so that responses are harder to hear, and thus less likely to be noticed. After that, subsequent commands could ask the assistant to use a smart-door app to unlock the front door, say. In less concerning scenarios, commands could cause an Amazon Alexa device to start playing music or give a weather report.
The attack works broadly, but the specifics vary per device.
"This is not only a software issue or malware," said Guenevere Chen, an associate professor in the UTSA Department of Electrical and Computer Engineering, in a statement. "It's a hardware attack that uses the Internet. The vulnerability is the nonlinearity of the microphone design, which the manufacturer would need to address."
Attacks using a variety of audible and non-audible frequencies have a long history in the hacking world. In 2005, for example, a group of researchers at the University of California, Berkeley, found that they could recover nearly all of the English characters typed during a 10-minute sound recording, and that 80% of 10-character passwords could be recovered within the first 75 guesses. In 2019, researchers from Southern Methodist University used smartphone microphones to record audio of a user typing in a noisy room, recovering 42% of keystrokes.
The latest research appears to use the same techniques as a 2017 paper from researchers at Zhejiang University, which used ultrasonic signals to attack popular voice-activated smart speakers and devices. In the attack, dubbed the DolphinAttack, researchers modulated voice commands on an ultrasonic carrier signal, making them inaudible. Unlike the current attack, however, the DolphinAttack used a bespoke hardwired system to generate the sounds rather than using connected devices with speakers to issue commands.
The latest attack allows any device compatible with audio commands to be used as a conduit for malicious activity. Android phones could be attacked through inaudible signals playing in a YouTube video on a smart TV, for instance. iPhones could be attacked through music playing from a smart speaker and vice versa.
"In most cases, the inaudible voice does not even need to be recognizable as the authorized user," said UTSA's Chen in a recent statement announcing the research.
"Out of the 17 smart devices we tested, [attackers targeting] Apple Siri devices need to steal the user's voice, while other voice assistant devices can get activated by using any voice or a robot voice," she said. "It can even happen in Zoom during meetings. If someone unmutes themselves, they can embed the attack signal to hack your phone that's placed next to your computer during the meeting."
However, the receiving speaker has to be turned up fairly loud for an attack to work, and the malicious commands have to be shorter than 0.77 seconds, which can help mitigate drive-by attacks. Devices connected to earbuds and headsets are also less likely to be usable by an attacker, according to Chen.
"If you don't use the speaker to broadcast sound, you're less likely to get attacked by NUIT," she said. "Using earphones sets a limitation where the sound from earphones is too low to transmit to the microphone. If the microphone cannot receive the inaudible malicious command, the underlying voice assistant can't be maliciously activated by NUIT."
The technique is demonstrated in dozens of videos posted online by the researchers, who did not respond to a request for comment before publication.

