One of the top stories in information security recently is the Russian hacking of U.S. infrastructure. According to NPR (National Public Radio), critical parts of North American cyber systems are vulnerable to foreign hackers. This follows recent accusations from the White House: the Trump administration accused Russia of targeting the U.S. power grid.
NPR reports that "since at least March 2016, Russian hackers attempted to infiltrate numerous sectors of American infrastructure, including energy, nuclear, commercial facilities, water, aviation and manufacturing," according to a Department of Homeland Security (DHS) report published on Thursday.
In truth, this is nothing new. Over the last decade, the DHS has been concerned with a growing shortage of disruptive-technologist talent on its side. This has pushed many in government security to consider AI (Artificial Intelligence) as the answer to their problems.
For example, the DHS is a major supporter of the U.S. Cyber Challenge, a nationwide search for cybersecurity talent. DARPA pushed the idea further: at DEF CON in 2016, its Cyber Grand Challenge set teams on a quest to build a smart computer system that finds, patches, and fixes bugs more efficiently than humans do.
DARPA has even reported that this effort is off to a good start, with all of the competing teams performing very well. The government now has AI for its Blue Team (defensive security).
What may have been missed, however, is that disruptive technologists (Red Team, hackers) have already been using AI to assist with hacks for years. In fact, this puts us on the verge of a new era in information security, one that has yet to be named or addressed. In this new era, AI will be used against humans to attack and defend networks. This could lead to new systems in disruptive technology designed to attack AI, which brings us to some useful information for the disruptive technologist.
In November of 2017, a group of MIT students tricked an AI into classifying a turtle as a rifle. The students used what is called an adversarial image: a picture designed to confuse AI by misleading the patterns it uses to classify objects. The perturbation is applied as an overlay and is nearly invisible to the naked eye.
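To make the idea concrete, here is a minimal sketch of one well-known way to build such an image, the Fast Gradient Sign Method (FGSM) from Goodfellow et al. This is not necessarily the method the MIT students used (their attack was optimized to survive 3D printing and changes in viewing angle); the model choice, the "turtle.jpg" file path, and the epsilon value below are illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Load a pretrained ImageNet classifier (any model works for the demo).
model = models.resnet18(pretrained=True)
model.eval()

# ImageNet normalization is omitted for brevity; the sketch still runs,
# but predictions on unnormalized inputs are only approximate.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # pixel values in [0, 1]
])

# "turtle.jpg" is a placeholder path; substitute any image.
image = preprocess(Image.open("turtle.jpg")).unsqueeze(0)
image.requires_grad_(True)

# Get the model's current prediction for the clean image.
output = model(image)
clean_label = output.argmax(dim=1)

# Backpropagate the classification loss to get the gradient of the
# loss with respect to the input pixels themselves.
loss = torch.nn.functional.cross_entropy(output, clean_label)
loss.backward()

# FGSM: nudge every pixel a tiny step (epsilon) in the direction that
# increases the loss. The change is too small to notice by eye, but it
# can flip the classifier's decision.
epsilon = 0.007  # perturbation size; an assumption for illustration
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image is often classified as something else entirely.
with torch.no_grad():
    adv_label = model(adversarial).argmax(dim=1)
print(f"clean label: {clean_label.item()}, adversarial label: {adv_label.item()}")
```

Even with a perturbation this small, the predicted label often flips. The MIT work went a step further by showing that, with careful optimization, the effect survives in the physical world across angles and lighting.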
This is important because the test proved that AI can be consistently fooled and disrupted. That finding is an ace in the hole for any disruptive technologist who may have to take on AI in the future.
References:
https://www.engadget.com/2017/11/02/mit-students-trick-ai-object-recognition/
https://www.theregister.co.uk/2016/08/05/game_over_security_researchers_darpas_ai_bug_hunters_will_take_your_jobs/