In 1957, a young boy named Joe Engressia discovered that whistling at a certain frequency allowed him to place long-distance phone calls, which were quite expensive at the time. The technique he used became known as phone phreaking, and he is regarded as the father of phreaking.
Such techniques have never stopped evolving, and their development has undoubtedly come a long way since. Many are put to good use, while some have caused real damage. Along the way, old techniques resurface from time to time in more sophisticated forms. The latest of these is the DolphinAttack, which is now troubling smartphone and OS manufacturers.
Ultrasonic audio commands are the key to the name. Humans generally cannot hear any sound with a frequency above 20,000 Hz, but microphones can pick it up. When voice commands are broadcast at these higher frequencies, assistants like Alexa and Siri readily respond to them. This is a threat that cannot be taken lightly, given that these assistants are empowered to perform almost any action on the device.
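The mechanism the researchers describe is amplitude modulation: the audible command is shifted up onto an ultrasonic carrier, and the slight nonlinearity of an ordinary microphone demodulates a copy of it back into the audible band. A minimal numpy sketch of the idea, using a 400 Hz tone as a stand-in for a voice command and a simple square term to model the microphone's nonlinearity (both are illustrative assumptions, not the actual attack hardware):

```python
import numpy as np

fs = 192_000          # sample rate high enough to represent ultrasound
carrier_hz = 25_000   # carrier above the ~20 kHz limit of human hearing
tone_hz = 400         # stand-in for a voice command's audible content
t = np.arange(fs) / fs  # one second of samples

baseband = np.sin(2 * np.pi * tone_hz * t)   # the "command"
carrier = np.sin(2 * np.pi * carrier_hz * t)
# Amplitude modulation shifts the command up around the carrier, so every
# component of the transmitted signal sits above 20 kHz -- inaudible to us.
transmitted = (1 + baseband) * carrier

# A microphone's nonlinearity (modeled here as a small square term)
# recreates a low-frequency copy of the command that the assistant hears.
received = transmitted + 0.1 * transmitted**2

spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
# Significant energy now appears back below 20 kHz (at 400 Hz and its
# second harmonic from the squaring), even though nothing audible was sent.
audible = freqs[(spectrum > spectrum.max() * 0.01)
                & (freqs > 0) & (freqs < 20_000)]
print(audible)  # prints [400. 800.]
```

The square term is only a toy model; real microphone circuits have more complex nonlinearities, but the effect is the same: the inaudible broadcast leaves an audible command in the recorded signal.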
So far, manufacturers have not acknowledged the claims, but more than one research team has reported the issue. The attack does have constraints: the broadcast must happen within about five feet of the phone, and assistants like Alexa and Siri must already be activated. What you really have to fear, though, is that these high-frequency voice commands could be embedded in videos streaming on websites or broadcast in public places.
Google Assistant offers an option that makes the device respond only to a specific voice. Researchers suggest that manufacturers equip devices with microphones that do not pick up sound above 20,000 Hz, and that smart home assistants stop responding to unrecognized voices.
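The filtering countermeasure is straightforward to illustrate: cut everything above 20 kHz before the signal reaches the recognizer. A minimal numpy sketch, where the 400 Hz "command" and the 25 kHz "attack" tone are assumed values for the demo rather than measurements from any real device:

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 400 * t)         # legitimate audible command
ultrasound = np.sin(2 * np.pi * 25_000 * t)  # inaudible injected signal
captured = speech + ultrasound               # what the microphone records

# Brick-wall low-pass at 20 kHz, implemented in the frequency domain:
# zero out every bin above the audible range, then transform back.
spectrum = np.fft.rfft(captured)
freqs = np.fft.rfftfreq(len(captured), 1 / fs)
spectrum[freqs > 20_000] = 0
filtered = np.fft.irfft(spectrum, n=len(captured))

# The audible command survives; the ultrasonic content is gone.
print(np.allclose(filtered, speech))  # prints True
```

A shipping product would do this in the microphone hardware or with a proper analog filter rather than an FFT, but the principle is the same: if nothing above 20 kHz reaches the recognizer, ultrasonic commands have nothing to exploit.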
Amazon has responded that it will review the papers released by the researchers, and both Amazon and Google have stressed that they take their customers' privacy and security seriously. Now it is up to time to provide an answer, or a solution.