Can Google Assistant Kill? Potentially.


With the advent of tools like Siri and Google Assistant, there was immediate excitement. We could get our fridge doors to open for us – we could even boil the kettle, all while being fed interesting facts about our favorite sports stars, politicians, historical figures and anything else we like. Basically, these tools let us do many of the things we've enviously watched people do in movies.

However, while the future of this kind of hardware is obviously very exciting, there are some very legitimate concerns about how safe it is.

Take the story of Alexander Reben, a roboticist known for finding the dark side of even the most innocuous-seeming technology. For example, he built a mechanical arm that can stab out at people – giving us a sense of what it's like to share a space with a machine that has the capacity for violence.

Indeed, he decided to take things a step further and show us just how dangerous Google Assistant could be in the wrong hands. To do that, he used his Google Home speaker to fire a gun. In this case it was a pellet gun – but it's a simple yet terrifying example of what could happen.

By working with a gun and a virtual assistant, Reben managed to combine two of the most feared pieces of modern technology: AI assistance and a firearm. By bringing both together, Reben asked the following: “How much should a company be able to foresee how their technology will be used and how much can they ultimately control?

“Even more interestingly, what happens when machines start making the decisions?”

The official response

Of course, Google does not want to have its toys and tools associated with making home-made killing machines. As such, they were quick to release a statement which read: “This appears to be a homebrew project that’s controlled by a smart outlet, not something that’s programmed into the Assistant or uses any type of AI,

“This isn’t condoned by Google and could not launch as an Action for the Assistant because it’s against our Terms of Service, which prohibits Actions that promote gratuitous violence or dangerous activities.”

By simply saying the words "Okay, Google, activate gun," Reben was able to set off the sequence without much issue. While he was quick to point out that this could happen with, for example, an Amazon Echo device, his point wasn't to wage war with Google, but to show how easily unintended actions can be rigged up with a little thought.

"Part of this project was a response to some of the news about deep learning and artificial intelligence being used for military applications,

"This is a provocation, sure, but the simplicity of it is a good way to jump-start a conversation," he said.

"I don't have the answers, but I think we need to have a conversation."

Whoever does have the answers, it would be a good idea to find a way to put them into practice. This is just one example of the horrifying by-products of these kinds of tools: in the wrong hands, they become immensely dangerous.
