Red vs Blue vs AI

Crypto Lock Graphic from Shutterstock

One of the top stories in information security recently is Russian hacking of U.S. infrastructure. According to NPR (National Public Radio), critical parts of North American cyber systems are vulnerable to foreign hackers. This is all part of recent accusations coming from the White House: the Trump administration has accused Russia of targeting the U.S. power grid.

NPR reports that, according to a Department of Homeland Security (DHS) report published on Thursday, "Since at least March 2016, Russian hackers attempted to infiltrate numerous sectors of American infrastructure, including energy, nuclear, commercial facilities, water, aviation and manufacturing."

In truth, this is nothing new. For over a decade, the DHS has been concerned about a growing shortage of talent capable of countering disruptive technologists. This has pushed many in government security to consider AI (Artificial Intelligence) as the answer to their problems.

Participants at a U.S. Cyber Challenge boot camp

For example, the DHS is a major supporter of the U.S. Cyber Challenge, a nationwide search for cybersecurity talent. In 2016, DEF CON hosted DARPA's Cyber Grand Challenge, a quest to build a smart computer system that finds and patches bugs more efficiently than humans.

Dr. Steven Walker, Director of DARPA

DARPA has even reported that this effort is off to a good start, with all of the teams in the Cyber Grand Challenge performing very well. The government now has AI for its Blue Team (defensive security).
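To make the idea concrete, here is a toy sketch of automated bug hunting in the spirit of the Cyber Grand Challenge. Real CGC systems combined fuzzing, symbolic execution, and automatic patching; this sketch shows only the simplest ingredient, random-input fuzzing, and the parse_record function is a made-up target, not actual CGC code.

```python
# A toy fuzzer: throw random inputs at a target function and report
# any input that makes it crash. Illustrative only.
import random
import string

def parse_record(data: str) -> str:
    """A contrived parser with a hidden crash condition."""
    if len(data) > 3 and data[3] == "!":
        raise ValueError("malformed record")  # the 'bug' a fuzzer can find
    return data.upper()

def fuzz(target, trials: int = 10_000) -> None:
    """Generate random strings and report the first one that crashes the target."""
    for _ in range(trials):
        candidate = "".join(
            random.choices(string.printable, k=random.randint(0, 8)))
        try:
            target(candidate)
        except Exception as exc:
            print(f"crash on input {candidate!r}: {exc}")
            return
    print("no crashes found")

if __name__ == "__main__":
    fuzz(parse_record)
```

A system like those in the Cyber Grand Challenge goes much further, triaging each crash and synthesizing a patch automatically, but the core loop of generating inputs and watching for failures is the same.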

What may have been missed, however, is that disruptive technologists (Red Team, hackers) have already been using AI to assist with hacks for years. In fact, this puts us on the verge of a new era in information security, one that has yet to be named or addressed. In this new era, AI will be used against humans to attack and defend networks. This could lead to new systems in disruptive technology designed to attack AI, which brings us to some useful information for the disruptive technologist.

In November 2017, a group of MIT students tricked an AI into classifying a 3D-printed turtle as a rifle. The students used what is called an adversarial image: a picture crafted to confuse AI by subtly perturbing the pattern it uses to classify objects. The perturbation is applied as an overlay and is nearly invisible to the naked eye.
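Below is a minimal sketch of one common way to craft such a perturbation, the Fast Gradient Sign Method (FGSM). This is not the MIT students' technique (their turtle attack used a more robust optimization over 3D textures); the model, image file, and label here are illustrative assumptions.

```python
# A minimal FGSM sketch: nudge each pixel in the direction that most
# increases the classifier's loss, producing a nearly invisible overlay.
# Assumes torch and torchvision are installed; the file name and label
# in the usage comment are hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def fgsm_attack(image_path: str, true_label: int, epsilon: float = 0.01):
    """Return a perturbed copy of the image that the model is more likely to mislabel."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    # Forward pass: compute the loss against the correct label.
    logits = model(x)
    loss = torch.nn.functional.cross_entropy(
        logits, torch.tensor([true_label]))

    # Backward pass: the gradient tells us which pixel changes
    # hurt the model the most.
    loss.backward()

    # Step each pixel slightly in that direction, then clamp back
    # to the valid image range. Small epsilon keeps the change subtle.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    return x_adv

# Usage (hypothetical file and label): compare predictions before and after.
# adv = fgsm_attack("turtle.jpg", true_label=35)
# print(model(adv).argmax(dim=1))
```

The key point for this article is that the overlay is bounded by epsilon, so a human sees the same picture while the classifier's prediction flips.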

This is important because the test proved that AI can be consistently fooled and disrupted. This finding is an ace card for a disruptive technologist who may have to take on AI in the future.

References:

  • https://www.engadget.com/2017/11/02/mit-students-trick-ai-object-recognition/
  • https://www.theregister.co.uk/2016/08/05/game_over_security_researchers_darpas_ai_bug_hunters_will_take_your_jobs/

Author

  • Andre Joseph, Vice President Global Cyber Security and Technology Controls at JPMorgan

    Andre Joseph is Vice President of Global Cyber Security & Technology Controls at JPMorgan Chase & Co. His many accomplishments include serving as the lead cloud security engineer on Project Atlas, one of the largest global cloud migrations in history, and overseeing a $3.3 trillion daily global cash infrastructure.
