Holiday gifts have come a long way since the days of Red Ryder BB guns and RC cars.
Though the favorite childhood projectile launcher is still available in retro-kitsch fashion, RC cars have been superseded by app-connected quadcopter drones, and we now have a melting pot of internet-connected, cloud-managed, app-controlled devices: Android-powered refrigerators, voice-recognizing toy dolls, and home security systems that buzz an app on your mobile phone before calling the police.
There are enough pros, cons, and security risks to each Internet of Things (IoT) toy to fill blog posts between now and next December, but today we’re going to give you some information to consider before you pair that new toy doll to your home wireless network or your phone, and it might encourage you to keep a gift receipt…
So, in the spirit of the season, here’s to adding a little security and privacy into your new stash of connected devices!
People like to think of IoT devices controlled by an app as functioning in the same way as a TV remote control – “point to point”. Point the remote towards the device, it sends a message (change the TV channel), the device takes action. If I’m in my dining room and want to turn up the internet-connected thermostat that is so inconveniently on the wall in the hallway, I open the app from my phone and push a couple of buttons. Boom, my phone just told my thermostat to warm me up.
But what actually happened was that the app on my phone reached out over the internet and connected to a cloud server run out of who-knows-where, one that's owned, operated, and managed by the device manufacturer or their app-development subcontractor, and, using my login credentials, told the server: "Send this instruction to my thermostat – turn up the heat 2 degrees."
The thermostat was able to get the instruction (Turn up the heat 2 degrees) because it too was talking to the cloud server. All the time. Continually sending data and statistics, and waiting for its next instruction.
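That round trip can be sketched in a few lines of code. This is a simplified, hypothetical model – the class and method names are mine, not any real vendor's API – but it captures the key point: the phone never talks to the thermostat directly; both talk to the cloud, and the server decides whose commands get through.

```python
# Hypothetical sketch of the phone -> cloud -> device relay.
# All names here are invented for illustration, not a real vendor API.

class CloudServer:
    """Relays commands from authenticated apps to registered devices."""

    def __init__(self):
        self.pending = {}   # device_id -> queued commands
        self.accounts = {}  # device_id -> owner credentials

    def register(self, device_id, owner):
        self.accounts[device_id] = owner
        self.pending[device_id] = []

    def send_command(self, owner, device_id, command):
        # The app proves who it is; the server decides if it may act.
        if self.accounts.get(device_id) != owner:
            raise PermissionError("not your device")
        self.pending[device_id].append(command)

    def poll(self, device_id):
        # The device checks in continually and drains its command queue.
        commands, self.pending[device_id] = self.pending[device_id], []
        return commands


cloud = CloudServer()
cloud.register("thermostat-1", "HSmith@gmail.com")

# The phone app: "turn up the heat 2 degrees."
cloud.send_command("HSmith@gmail.com", "thermostat-1", {"heat_delta": +2})

# The thermostat, on its next check-in, receives the instruction.
print(cloud.poll("thermostat-1"))  # [{'heat_delta': 2}]
```

Notice that the thermostat only learns about the command when it checks in – which is exactly why these devices are "talking to the cloud all the time."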
I know the pictures above have a very subtle difference that doesn’t look like much, just one little icon in between my phone and my thermostat. So instead, imagine your TV remote control phoning home to your cable company or TV manufacturer every time you tell the TV to change the channel or turn the volume up.
In the second picture, your remote doesn’t tell the TV what to do. Your remote tells the TV manufacturer’s cloud server that “I, Mr. or Ms. Home-User Smith want to turn up the volume on television number #1277953.” The cloud server then tells your TV, which is also always talking to the cloud server, “Hey TV #1277953 owned by HSmith@gmail.com, turn up the volume.”
This is no big deal when we’re talking about house temperatures and TV volumes. Who cares if the manufacturer knows that I like my house at 73 degrees most of the time, or if my TV providers know what channel I’m watching—or that when I’m watching MTV Live, I like the volume up at 11?
But what if we’re talking about something that’s recording every sound and conversation in your living room?
What if we’re talking about a device that unlocks or opens the front door of your house?
What if we’re talking about an app-enabled nanny-cam sending an always-on video feed of your child’s bedroom?
Instead of point-to-point, phone to device, we’re now talking about phone-to-cloud-to-device. That adds a whole new network into the mix with servers and databases and firewalls owned and operated by someone else who may care more about their profit margins than the security of your house or the privacy of your conversations.
One common complaint by experts about the IoT craze is that manufacturers are driven by market realities to build a device as cheaply as possible, manufacture as many as possible as quickly as possible, then sell them all before the next version comes along.
Where do security and privacy fit into that plan? Or has the IoT market just given up trying?
Manufacturers of IoT teddy bears, door locks, and thermostats aren't making the same promises to protect your privacy and your data that you would demand from your bank, yet they can know things about you that even your bank doesn't know. They aren't talking about how they protect your account information, and they aren't sharing how they ensure that Mr. Sneaky-Thief can't open your IoT-connected front door lock by sending a few commands from a phone while sitting in your driveway.
Security is an afterthought for too many of these manufacturers. It's for that reason we have the Mirai botnet today, a digital army of IoT home security cameras taken over by malware thanks to jaw-droppingly inept security design choices by the manufacturers. These devices, and thousands more every day, get hacked, have their firmware altered, and are turned into drones for internet attacks that knock the likes of Amazon and Netflix out of service across major portions of the United States.
And that whole operation, it turns out, started because a few guys wanted to make some money off Minecraft.
This is a high-profile instance where the insecurity of a family of IoT products has led to major repercussions. I use it here to demonstrate what can happen with bad security designs—and surprise, surprise, we’re in for another round in 2018.
To be clear, I’m not implying that a robot uprising of internet-connected action figures is on the way (…yet). So, let’s look at some other examples about data and privacy leaks.
Back in February 2017, we learned that 800,000 people’s login and password credentials were “leaked”, along with 2 million recorded messages that the customers had mistakenly believed to be private.
Now, “leaked” implies that someone in the toy company decided that information needed to be out there. “Leaked” implies that there were security controls that got breached or taken down during a movie-montage hacking binge. But in this case, it was simply that the toy maker left a customer-data database completely open to the internet. No passwords needed to get to the data, no firewalls, no safeguards – just your data and personal voice messages online and available to any interested individual who knows how to use a search engine.
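To make concrete what "no passwords needed" means, here is a hypothetical sketch (all names invented for illustration) of the difference between the endpoint customers assumed existed and the one the toy maker actually deployed. The entire breach comes down to one missing credential check:

```python
# Hypothetical sketch: the difference between a breach and normal
# operation can be a single missing credential check.

RECORDINGS = {"msg-001": "voice message audio bytes..."}
API_KEYS = {"HSmith@gmail.com": "s3cret-key"}


def fetch_recording_secured(message_id, user, api_key):
    # What customers assumed was happening: prove identity first.
    if API_KEYS.get(user) != api_key:
        raise PermissionError("authentication required")
    return RECORDINGS[message_id]


def fetch_recording_open(message_id):
    # What was actually deployed: anyone who finds the server can
    # read every record. No login, no firewall, no safeguards.
    return RECORDINGS[message_id]


# Any interested stranger on the internet can do this:
print(fetch_recording_open("msg-001"))
```

The "hack" here requires no movie-montage skills at all – just knowing the server's address, which specialized search engines will happily index.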
In that same month, we also learned about an IoT doll, now banned in Germany, called “My Friend Cayla.” Labelling an insecurely designed child’s toy as an “espionage device” may sound a little too on-the-nose – but knowing that the doll has a microphone that can be remotely activated at any time, is it? And knowing that the sound clips are uploaded to a US company for voice-recognition processing, a company that also does work for intelligence agencies, are the German regulators wrong?
In the IoT home security and automation categories, one of my favorite stories of 2017 was about an app-connected garage door opener called “Garadget”. A customer bought the product, had trouble using the app, and then posted snarky messages and reviews about the product.
Whether the customer was right or wrong, the manufacturer decided this was no longer someone they wanted to do business with, so they informed the customer (Martin) that the device he had purchased and installed in his home “will be denied server connection.” Since the garage door opener was controlled entirely through the manufacturer’s cloud server, refusing it a connection effectively bricked it: the hardware still worked, but the cloud would no longer talk to it.
There were a few failings and lessons in this story, mainly how not to provide customer service to a paying but frustrated customer, but the key point is this: a cloud-controlled device only works for as long as the manufacturer’s server is willing to talk to it. You own the hardware, but they control whether it functions.
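The Garadget scenario boils down to a simple property of cloud-dependent designs, sketched below with invented names: the vendor's server, not the owner, decides whether the device works at all.

```python
# Hypothetical sketch: a cloud-dependent device only works for as
# long as the vendor's server agrees to talk to it.

class VendorCloud:
    def __init__(self):
        self.allowed = set()

    def activate(self, device_id):
        self.allowed.add(device_id)

    def deny_service(self, device_id):
        # One support decision away from a bricked purchase.
        self.allowed.discard(device_id)

    def relay(self, device_id, command):
        if device_id not in self.allowed:
            raise ConnectionRefusedError("server connection denied")
        return f"{device_id}: {command}"


cloud = VendorCloud()
cloud.activate("garage-door-42")
print(cloud.relay("garage-door-42", "open"))  # garage-door-42: open

cloud.deny_service("garage-door-42")  # the Garadget scenario
# cloud.relay("garage-door-42", "open") would now raise an error:
# the hardware is intact, but the product no longer works.
```

Nothing about the opener itself changed; the vendor simply removed it from an allow-list on a server the customer never sees.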
The stories above cover a wide range of examples, but they all show the same thing: what we take for granted about information, privacy, and security controls cannot be assumed in the world of IoT.
One recent example is the Amazon Key, the wi-fi-connected door lock designed to let Amazon drivers drop packages off inside your house. On the surface the concept makes sense, but a video from NBC’s The Today Show made it clear that Amazon Key isn’t nearly as foolproof as customers would like to think.
Unfortunately, most IoT device and security system manufacturers and integrators don’t provide much detail about either the design of their products, or the security of their device management networks. A risk manager would ponder questions like these: How is your account information protected? Who else can see your data, your recordings, or your video feeds? And what stops a stranger from sending commands to your door lock?
These are questions IoT and security device manufacturers don’t like to talk about, but which customers – you – deserve an answer to before you give Comcast, AT&T Wireless, or any other IoT home security device manufacturer the digital keys to your real front door locks.
We’ll wrap up with this quote from Motherboard:
“As we’ve seen time and time again in the last couple of years, so-called “smart” devices connected to the internet—what is popularly known as the Internet of Things or IoT—are often left insecure or are easily hackable, and often leak sensitive data. There will be a time when IoT developers and manufacturers learn the lesson and make secure by default devices, but that time hasn’t come yet. So if you are a parent who doesn’t want your loving messages with your kids leaked online, you might want to buy a good old fashioned teddy bear that doesn’t connect to a remote, insecure server.”
Just remember that teddy bears, garage door openers, security cameras, and app-controlled door locks, all share the same design principles that let you – and maybe someone else – use them over the internet. And the manufacturers all share the same business goals – make product, make money; lather, rinse, repeat.
Do you see security in there? Me neither.
Wishing you and your families a Happy, Safe, and Secure Holidays, from all of us at Critical Informatics!