Is Your Smart Speaker Spying On You?
Those personal assistants everyone seems to be getting for their home are convenient voice-activated tools for a growing number of tasks. But when Alexa and Google Home are constantly recording and uploading your voice to a third-party cloud server, you could be exposing yourself to hackers, government intrusion, and corporate spying. Are these legitimate concerns or paranoid conspiracy theories?
Amazon Echo and Alexa
By now, almost everyone is familiar with Amazon’s voice-command personal assistant, Alexa – arguably better than Siri, but not quite as intuitive as HAL 9000, though that’s probably for the best. Since the technology was introduced in 2014, Amazon and Google have released a string of iterations aimed at putting a smart speaker in every consumer’s home.
These speakers are able to play your favorite song, order pizza, activate other smart devices, and maintain your calendar. But some of the other conveniences they can handle involve personal data, like locating your phone, accessing bank account information, and calling your loved ones. All of this is accomplished by having a microphone ready to start recording at any given moment.
These days, new technological advances are a double-edged sword; they’re increasingly convenient, but also increasingly intrusive. That microphone recording your request to order takeout is also uploading it to a cloud server run by a third party that can do what it wishes with that information.
Now, it would take a lot of data storage to record and save everything all the time, so the device listens locally for a wake word before it starts recording. Your recording is then sent to the cloud to be translated by voice-recognition software.
Amazon calls this the Alexa Voice Service, or AVS, which you can access to build your own voice assistant with a single-board computer like a Raspberry Pi. That’s why stripped-down versions of the technology, like the Echo Dot, which lack the Echo’s full-size speaker, can be sold so inexpensively; each is basically just a microphone that sends your voice recordings to software in the cloud.
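To make that flow concrete, here is a minimal sketch in Python of the wake-word idea, assuming a heavily simplified design; the function names and the pretend cloud call are illustrative stand-ins, not Amazon’s actual AVS code.

```python
# A minimal, hypothetical sketch of the wake-word flow described above.
# These function names and the fake "cloud" call are illustrative stand-ins,
# not Amazon's actual Alexa Voice Service API.

WAKE_WORD = "alexa"

def detect_wake_word(transcript_chunk: str) -> bool:
    # Real devices use an on-device acoustic model; simple string matching
    # stands in for that here.
    return WAKE_WORD in transcript_chunk.lower()

def capture_audio(seconds: int) -> bytes:
    # Placeholder: a real device reads from its microphone. We return
    # silence (zeroed 16 kHz, 16-bit samples) so the sketch stays runnable.
    return b"\x00" * (16_000 * 2 * seconds)

def send_to_cloud(audio: bytes) -> str:
    # Placeholder for the encrypted upload to the voice-recognition service.
    # Only audio captured *after* the wake word ever reaches this point.
    print(f"Uploading {len(audio)} bytes for transcription...")
    return "Okay, ordering a pizza."  # pretend response

def main_loop(mic_stream):
    for chunk in mic_stream:            # stand-in for a continuous audio feed
        if detect_wake_word(chunk):     # everything before this stays local
            audio = capture_audio(seconds=5)
            print("Assistant:", send_to_cloud(audio))

if __name__ == "__main__":
    main_loop(["background chatter", "alexa, order a pizza"])
```

The point is the gate in the middle: the microphone is always listening locally, but in principle only the audio captured after the wake word is ever uploaded.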
So, if the technology is that simple, doesn’t that leave it vulnerable to hackers? Yes, it does. In fact, there have been several different successful hacks that have forced Amazon to warn users of design flaws and vulnerabilities.
On Amazon’s pre-2017 hardware, one programmer created malware that could be installed on Alexa-enabled devices to stream recorded conversations directly to his computer. He installed it by physically removing the device’s rubber base and soldering a connection between its internal hardware, an SD card reader, and his laptop.
Though this particular modification would have been blatantly obvious to the person whose device was tapped, he said that with more time and development he could produce a 3D-printed plate that would let the implant go unnoticed.
Amazon’s response? Don’t buy one of their devices from a third party. That might be all the advice most consumers need, but people unfamiliar with the technology may be unwittingly recorded in public places where Amazon products are starting to appear. Last year, the Wynn hotel in Las Vegas announced it would put Echo devices in all of its rooms, and Amazon is actively courting other hotel groups to do the same.
But that wasn’t the only security flaw found in these speakers. Hackers have discovered a way to translate voice commands into high-frequency signals that Alexa can hear but you can’t – kind of like a dog whistle. Again, this hack requires being close to the speaker, though it’s far more discreet than having to physically install something.
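For the curious, this is roughly the trick, sketched in Python: published research on these “dog whistle” commands (often reported under the name DolphinAttack) amplitude-modulates a spoken command onto an ultrasonic carrier that microphones demodulate but human ears miss. The sample rate, carrier frequency, and stand-in “voice” tone below are assumptions chosen only to illustrate the concept.

```python
# A toy illustration of the ultrasonic-command idea, not working attack code.
# The carrier frequency, sample rate, and stand-in "voice" tone are all
# assumptions chosen only to show the amplitude-modulation concept.
import numpy as np

SAMPLE_RATE = 192_000   # high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000     # above the ~20 kHz ceiling of human hearing

def to_ultrasound(voice: np.ndarray) -> np.ndarray:
    """Amplitude-modulate a command onto an inaudible carrier tone."""
    t = np.arange(len(voice)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    # Classic AM: a microphone's nonlinearity can recover `voice` from this
    # signal even though a person standing nearby hears nothing.
    return (1.0 + voice) * carrier

# One second of a fake 300 Hz "voice" tone keeps the sketch self-contained.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice = 0.5 * np.sin(2 * np.pi * 300 * t)
inaudible_command = to_ultrasound(voice)
print(inaudible_command[:5])
```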
While just about anything can be hacked by someone with enough know-how, there have also been cases in which these devices were recording everything from the moment they were switched on, straight from the factory. One tech blogger found that his Google Home Mini was recording and uploading his conversations without him ever saying the wake word. Google has since fixed the flaw, but it goes to show how easily it can “accidentally” happen.
Can Government Subpoena Your Voice Assistant?
The technology has only been available for a few years, and already police have tried to obtain recordings from one of these always-on devices. In a case in Arkansas, police sought access to an Amazon Echo in the home of a man charged with murder.
The company refused to hand over the information and told police that there wouldn’t even be anything there unless the wake word was used to activate the device. Eventually, the defendant agreed to allow the police access to the Echo, from which they found no incriminating evidence, and the case was dropped.
But the Amazon smart speaker wasn’t the only device the police and prosecution hoped would yield evidence; so was his smart water meter. Prosecutors pointed to a large amount of water used between 1 and 3 a.m., when they believed the defendant had hosed down his porch to wash away the victim’s blood. The defendant countered that the meter’s a.m./p.m. readings were inaccurate and that he had used that water 12 hours earlier to fill his hot tub.
It’s this level of detail that our web of smart devices can unwittingly reveal about our activity. A smart electric meter can even show what television programs we’re watching, by matching fluctuations in electricity use to the brightness of the screen. Essentially, each program creates a unique power signature that can be matched against a reference to monitor your viewing habits.
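As a back-of-the-envelope illustration, the matching step could look something like this toy Python sketch; the program names and numbers are invented for the example, not taken from any real analysis.

```python
# A toy sketch of the "power signature" matching idea. The signatures and
# program names below are invented; a real analysis would use recorded
# reference traces of how each program's screen brightness drives power draw.
import numpy as np

def best_match(meter_trace: np.ndarray, signatures: dict[str, np.ndarray]) -> str:
    """Return the program whose reference trace correlates most strongly
    with the observed smart-meter readings."""
    scores = {
        name: np.corrcoef(meter_trace, sig)[0, 1]
        for name, sig in signatures.items()
    }
    return max(scores, key=scores.get)

rng = np.random.default_rng(seed=0)
signatures = {
    "evening news": rng.normal(100, 5, size=600),    # steady studio lighting
    "action movie": rng.normal(100, 25, size=600),   # big scene-to-scene swings
}
# Pretend the household watched the action movie, with some measurement noise.
observed = signatures["action movie"] + rng.normal(0, 2, size=600)
print(best_match(observed, signatures))   # prints "action movie"
```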
Because these devices record data and upload it to a third-party server, that data can be made public or sold to advertisers, who can infer your habits from the metadata. It’s unclear how many companies actually employ this kind of intrusive data collection, but it’s highly likely that some do. And as more and more of our appliances join the Internet of Things, or IoT, every time you use them, data will be mined and analyzed to monitor your behavior.
The ACLU has proposed a set of rules for law enforcement and other third parties to follow to protect consumer privacy, though none of them have been written into legislation yet. It seems that unless we find a way to stop smart utilities from sharing all this personal information, we may be offering up access to every minute detail of our lives.
Corporations Are Testing Ways to Advertise to Us in Our Dreams
Every day we are under a constant barrage of advertising; by some estimates, the average person sees up to 10,000 ads a day. Now, are advertisers coming for our dreams?
Henry David Thoreau once wrote, “[D]reams are the touchstone of our characters.” But for some companies, dreams might be the touchstone for advertising their products.
According to theHUSTLE.com, companies like Molson Coors are conducting experiments to infiltrate your subconscious and make you dream about their products. Last year, volunteers were reportedly asked to watch a strange, trippy video laden with Coors imagery.
The volunteers then went to sleep while listening to audio from the video. Coors wanted to “shape and compel” the subconscious to dream about its products, and apparently, it worked.
About 30 percent of the participants reported Coors showing up in their dreams. One woman told theHUSTLE she had a series of “weird Coors dreams.” Later, she said, the participants were brought into a focus group: “We all felt like lab rats… it just didn’t really sit right.”