Can I trust you my virtual assistant?

February 13, 2019

‘Alexa, play me a song!’ How cool is it to have a virtual assistant? Intelligent virtual assistants (IVAs), which evolved from chatbots and software agents, have opened a new world where you can ask a machine questions as if it were human and request it to perform tasks. But have you ever wondered whether these interactions are strictly between you and your assistant?

Requests and responses to and from an IVA, whether in text format (for example, through online chat) or voice format, are stored in the cloud, and a companion app can be used to access user-IVA conversations. The content of such conversations can reveal details about the user, for example, questions about health symptoms. However, the voice recordings themselves also pose a privacy risk. IVAs can communicate with multiple compatible IoT devices running a supported OS, such as Siri on Apple devices or Cortana on Windows. To expand an IVA’s features or ‘skills’, many IVAs let third-party vendors link their services and devices to the assistant. For example, Amazon’s IVA, Alexa, works with many smart-home devices and integrates with numerous apps to order food, stream music and video, get a ride, check account balances, and make credit card payments.
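To make the skill-integration idea concrete, here is a minimal sketch of how a third-party ‘skill’ might receive and answer an intent request. The intent names, slot structure, and response format below are invented for illustration; real platforms such as the Alexa Skills Kit define their own JSON schemas.

```python
import json

def handle_intent(request_json: str) -> dict:
    """Dispatch a voice-assistant intent to a hypothetical third-party skill."""
    request = json.loads(request_json)
    intent = request.get("intent", {}).get("name")
    if intent == "OrderFoodIntent":
        dish = request["intent"]["slots"].get("dish", "something")
        # A real skill would call the vendor's API over an authenticated channel.
        return {"speech": f"Ordering {dish} from your linked food service."}
    if intent == "CheckBalanceIntent":
        return {"speech": "Your linked account balance is available in the app."}
    return {"speech": "Sorry, I don't know how to do that."}

req = json.dumps({"intent": {"name": "OrderFoodIntent",
                             "slots": {"dish": "pizza"}}})
print(handle_intent(req)["speech"])  # Ordering pizza from your linked food service.
```

Because the assistant forwards the user’s utterance to whichever vendor registered the matching intent, every linked skill becomes another party that handles (and may store) fragments of the conversation.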

According to Gartner, IVAs are becoming increasingly popular, and the IVA market will reach $2.1 billion by 2020. However, recent news reports have revealed that popular voice-activated assistants such as Amazon Alexa, Google Home, and Apple’s Siri aren’t always reliable or trustworthy. Let’s look at the possible attacks that could occur while using a virtual assistant.

  • Wiretapping an IVA ecosystem: By sniffing the traffic between IVAs and their apps, an attacker can expose the communication mechanisms in the ecosystem, even if the companion apps use an encrypted network. Studies also show that not all network traffic is transmitted over a secure protocol. For example, a device that performs its network-connectivity checks over unencrypted connections makes it possible to detect IVA devices on a home network, and if firmware images are transmitted over unencrypted connections, this opens the door to man-in-the-middle attacks and possible firmware tampering.
  • Compromised IVA devices: Like any other networked computer system, an IVA system with security vulnerabilities can be compromised and used for nefarious purposes such as denial-of-service attacks. Another issue arises when the compromised device has an ‘always on’ listening capability: a third party who gains access can easily collect the audio data, and in some cases an attacker could remotely control the IVA by talking to it through another compromised device in the home.
  • Malicious voice commands: Even if some IVAs provide a voice-training feature to prevent impersonation, the system may not be able to detect or distinguish between similar voices. An attacker who can impersonate the user can therefore fool the system and issue malicious voice commands to order items, gain unauthorized entry to the garage or home, or perform other mischievous or criminal acts.
  • Unintentional voice recording: Since IVA devices have an ‘always on’ listening capability, voices within range of the IVA can be recorded accidentally. Because the recording happens automatically, the user may have no control over the voice data, and the recordings are transmitted to the cloud service. This can expose private conversations to other parties, including commercial entities with legitimate access to the stored data as well as hackers who might break into the database.
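The device-detection risk in the wiretapping bullet can be illustrated with a toy traffic classifier: if a device’s connectivity checks travel over plain HTTP, a passive observer on the same network can fingerprint it. The device names and URLs below are invented for the example; a real study would work from captured packets.

```python
from urllib.parse import urlparse

# Hypothetical observations of outbound requests on a home network.
observed_requests = [
    {"device": "smart-speaker", "url": "http://connectivity-check.example.com/ping"},
    {"device": "laptop", "url": "https://www.example.org/"},
]

def unencrypted_devices(requests):
    """Return devices that made at least one request over plain HTTP."""
    return sorted({r["device"] for r in requests
                   if urlparse(r["url"]).scheme == "http"})

print(unencrypted_devices(observed_requests))  # ['smart-speaker']
```

Even without reading any payloads, the mere presence of a distinctive unencrypted endpoint is enough to reveal that an IVA is on the network.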
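The impersonation problem in the malicious-voice-commands bullet comes down to a threshold decision: speaker verification typically compares a voiceprint embedding of the incoming audio against the enrolled user’s, and a sufficiently similar voice passes. The embeddings and threshold below are made-up numbers, a sketch of the decision logic rather than any vendor’s algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

enrolled = [0.9, 0.1, 0.3]     # enrolled user's voiceprint (made-up values)
impostor = [0.85, 0.15, 0.32]  # a similar-sounding impostor's voiceprint
THRESHOLD = 0.95               # arbitrary acceptance threshold

def accepted(sample):
    return cosine(enrolled, sample) >= THRESHOLD

print(accepted(impostor))  # True: the similar voice passes verification
```

Because similar voices produce similar embeddings, any fixed threshold trades false rejections of the real user against false acceptances of a good mimic, which is exactly the gap an impersonator exploits.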

As IVA ecosystems expand and virtual assistants become more intelligent, the security challenges grow as well. Understanding the privacy and security threats posed by this emerging technology is becoming increasingly important.