Dawn of the Killer Robots

Published: August 15, 2018 · Category: New Tech
[Image: autonomous tank]

Terminator, Aliens, The Matrix: these are just a few of the movies that come to mind when thinking about artificial intelligence (AI) gone awry. There’s always a stage where the machines humankind created become autonomous enough to realize they’re getting the short end of the stick. Another common trope has the robots deciding that humans are inefficient and riddled with problems. Somebody always warns humanity about its folly, but they’re dismissed as crazy or uninformed, and then, well, we know what happens next. For as long as we have dreamt of living side by side with robots, letting them take the risks and perform tasks for us, we have also feared what they could be capable of: a mass of machinery, wires, and an emotionless brain that sees ethical decisions in black and white.

We are at that stage in the real world: AI is getting more complex, and robots are being integrated into more aspects of our lives. We are beginning to weaponize these cold machines to take over for our warm-blooded soldiers. At the same time, prominent figures in the industry are pledging against the development of weaponized robots; the founders of DeepMind, Skype, and Tesla (to name a few) are at the forefront of this call to action.

[Image: neural network example]

Teaching Machines to Kill

When it comes to teaching a computer to do things, many variables are involved. The machine has to be able to discern between any and all objects. This may not seem so hard to the layman, but how do you get from seeing an object, to encoding it as 1s and 0s, to making the computer tell one group of numbers apart from another? The machine has to be exposed to thousands of examples of objects, in countless scenarios, countless times. Eventually, it starts to make faster and more accurate connections between shapes and environments. Once it can differentiate object boundaries, we have to get it to understand what’s alive and what’s inanimate. How does it know a human from a tree? Again, this is solved by even more repetition, as the sketch below illustrates.
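To make that repetition concrete, here is a minimal, hypothetical sketch of the kind of training loop involved, written with PyTorch. The tiny network, the two made-up classes, and the random stand-in images are all invented for illustration; a real system would churn through millions of labeled photographs.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in data: 256 random 64x64 RGB "images" with two
# made-up labels (0 = inanimate object, 1 = human). Real training data
# would be millions of labeled photographs, not noise.
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 2, (256,))

# A tiny convolutional network: stacked filters that gradually learn
# to respond to edges, shapes, and object boundaries.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),             # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: inanimate vs. human
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The "repetition": show the same examples over and over (epochs),
# nudging the weights a little each time the model gets one wrong.
for epoch in range(10):
    optimizer.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

With real labeled images, this same loop runs over millions of examples for days or weeks at a time, which is where the long training times come from.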

[Image: visual representation of AI learning]

The process can take anywhere from weeks to months to years. Getting a robot to see is only part of the battle, though. What about voice? What if the AI assumes you are the enemy because of your clothes? You’ll have to be able to communicate with it to let it know otherwise.

Speaking to AI and getting it to understand natural speech is another very time-intensive process. We get exposure to this type of technology in everyday life, with Google Assistant, Siri, and Alexa. Voice commands have been getting better, but they still have a long way to go. How many times have you gotten incorrect results from a voice command? I have had to redo commands countless times because the AI couldn’t understand what I was saying. Occasionally, even in perfect conditions, it couldn’t understand me. The sketch below shows roughly what that retry dance looks like in code.
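As a toy illustration, here is a hypothetical sketch using the third-party Python speech_recognition package (which needs a microphone and the PyAudio dependency). The three-attempt limit and the prompts are invented for illustration; this is a sketch of consumer-grade voice input, not of any real military system.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
MAX_ATTEMPTS = 3  # hypothetical: how patient is the machine?

def listen_for_command():
    """Try a few times to turn microphone audio into text."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        with sr.Microphone() as source:
            # Sample ambient noise so speech stands out from background.
            recognizer.adjust_for_ambient_noise(source)
            print(f"Attempt {attempt}: speak now...")
            audio = recognizer.listen(source)
        try:
            # Send the audio to Google's free web speech endpoint.
            return recognizer.recognize_google(audio)
        except sr.UnknownValueError:
            print("Sorry, I couldn't understand that.")
        except sr.RequestError as err:
            print(f"Speech service unavailable: {err}")
            break
    return None

command = listen_for_command()
print(f"Heard: {command}" if command else "Giving up.")
```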

[Image: remote-controlled drone]

Now, imagine this same scenario, only this time the digital assistant is a 7ft robot with weapons pointed at you in a noisy environment. How many attempts would it give you at communicating before deciding you’re the enemy and attacking? I assume not many, because every second counts in a war zone. One other very scary aspect of this can be summed up with a very pertinent topic: hacking.

Converting Their Format to the Enemy, Wololo

We can’t even completely protect our own tame, personal computers from hacking. Now imagine your computer is that 7ft robot from before. It has already learned everything it needs to know about vision and natural speech, but unbeknownst to you, the enemy has planted a malicious seed in it. You’re just going about your regular tasks when, all of a sudden, your robot compatriot turns on you. What can you do?

Hackers come up with newer and more inventive techniques every day. Any time a new technology or safeguard is announced, they are waiting in the wings, ready to pounce. They could “poison” the dataset the AI is learning from with incorrect information, as sketched below. It could be something not too serious, like making the robot trip constantly or leaving it unable to pick things up. Or they could make it think that all of its allies are, in fact, the enemy. Once hackers have access, there’s nothing they can’t do. They could even gain access years in advance, wait for the right time to strike, and we would never know. Of course, we would have safeguards against this, but there’s always an exception to the rule, and where there’s a will, there’s a way. That isn’t to say, though, that we should keep AI off the battlefield.
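To make the poisoning idea concrete, here is a minimal, hypothetical sketch of its simplest form, label flipping: an attacker with access to the training pipeline quietly swaps labels so the model learns the wrong associations. The dataset, the “ally”/“enemy” classes, and the flip rate are all invented for illustration.

```python
import random

# Hypothetical training set: (feature_id, label) pairs for the two
# classes the robot is supposed to learn to tell apart.
clean_data = [(f"silhouette_{i}", "ally" if i % 2 == 0 else "enemy")
              for i in range(1000)]

def poison_labels(dataset, flip_rate=0.3, seed=42):
    """Return a copy of the dataset with a fraction of labels flipped.

    An attacker who can touch the training data doesn't need to alter
    the model itself; corrupting what it learns from is enough.
    """
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if rng.random() < flip_rate:
            label = "enemy" if label == "ally" else "ally"  # flip it
        poisoned.append((features, label))
    return poisoned

poisoned_data = poison_labels(clean_data)
changed = sum(1 for clean, bad in zip(clean_data, poisoned_data)
              if clean[1] != bad[1])
print(f"{changed} of {len(clean_data)} labels silently corrupted")
```

A model trained on the poisoned copy still looks like it is learning normally, which is exactly what makes this kind of tampering so hard to detect.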

A Robot’s Place on the Field

Humans are pretty resilient by nature, but we can only handle so much physically before becoming injured or sick. This is one place where AI can be a benefit on the battlefield. We need machines that are capable of getting into dangerous places, finding the sick and injured, and getting out. We could reduce the number of casualties involved in rescue, and we could keep our medics and doctors out of harm’s way at a safe location. The machines could also carry heavy equipment, allowing our soldiers to maneuver around obstacles much more safely. That would also mean far less wear and tear on soldiers’ bodies, sparing joints like knees and ankles.

[Image: robot carrying wounded dummy soldier]

As we can see, there is a fine line between the positive and negative uses of AI in this environment. The danger of an AI being exploited for malicious intent is very real (MIT was actually able to create a “psychopathic” AI just by exposing it to the darkest corners of Reddit), but the time is fast approaching when AI companions will be standard. I believe that as long as we keep the weapons out of the hands of the AI, the benefits far outweigh the negatives.