Cognitive Tools: Can They Succeed Where Human Beings Have Failed?
Current IT prevention and protection measures come nowhere close to addressing all of the cyber threats that governments, corporations, and utilities face each day. We put defenses in place and trust that they will sniff out a breach and, as one takes place, mitigate the damage a malicious intruder can inflict.
However, with millions of lines of code running on a company's systems, and with sometimes hundreds of employees opening and closing files and clicking through untold sites, the opportunities for a break-in are enormous.
AV-Test GmbH, a German IT security research institute, found 121.6 million new malware programs last year. This year the total stands at around 839.2 million known samples, with 11 million new ones found recently.
Experts say that human analysts are limited and rapidly become overwhelmed.
AI, on the other hand, can perform millions of calculations per second and can identify malicious activity that human beings miss.
Here’s the good news: machine learning, advances in AI, and advanced behavioral analytics might change the equation in security’s favor.
These cognitive tools are being used to scan and catalogue millions of known malware files, identifying similarities that can help flag new risks, such as zero-day malware, before they strike. Trained algorithms are now learning the signature traits of hackers themselves in order to stop their illicit entry into systems, and algorithms are modeling the behavior of in-house users to help detect an intruder.
All of these tools leverage AI's signature strengths: it can be taught to recognize a multitude of facts, make decisions, and identify visual patterns. Engineers can train an AI on the known characteristics of past malware files, such as coding, content, and size, in order to anticipate new ones. When a user clicks on a suspect file, the AI can immediately compare it against a database of malicious code and raise an alert if it finds a threat.
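The simplest form of that comparison is signature matching: hash the suspect file and look the digest up in a set of known-malware hashes. The sketch below is a toy illustration, not a real scanner; the "malware" entries are digests of made-up placeholder bytes.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database: in practice these would be digests of
# real malicious files; here they are digests of placeholder payloads.
known_malware = {sha256(b"evil payload v1"), sha256(b"evil payload v2")}

def scan(file_bytes: bytes) -> bool:
    """Flag the file if its digest matches a known-malware signature."""
    return sha256(file_bytes) in known_malware
```

A real engine would combine such exact signatures with fuzzier features (size, structure, code fragments) so that slightly modified variants are still caught.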
The Search for "True Intelligence"
These smart layers face two problems, however: they can slow in-house workflow, and they aren't always correct. They can generate false positives, flagging harmless files as threats, and for a big company facing a multitude of threats a day, the logjam that results from investigating each new alert can overwhelm staff.
Then there is the issue of bad code versus bad actors. A big corporation with its own proprietary software might be running millions of lines of code. And that code, constantly edited and revised by in-house developers, can be as flawed as the human beings writing it. By some estimates, new software programs may launch with as much as 40% defective or useless code.
And existing iterations of artificial intelligence are easy to confuse or trick. For example, an AI can be taught to tell a dog from a cat after being fed hundreds of images of each. But if even a single pixel is out of place, it can get confused, something that doesn't happen with a human.
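The brittleness can be made concrete with a deliberately naive sketch. This is not a neural network; it is a toy exact-match "classifier" on a hypothetical 3x3 image, but it shows the failure mode in miniature: one flipped pixel and recognition collapses.

```python
# Toy 3x3 "dog" image, flattened to a tuple of pixels (hypothetical data).
known_dog = (0, 1, 1, 0, 1, 0, 1, 1, 0)

def naive_classify(image):
    """Recognize only an exact pixel-for-pixel match."""
    return "dog" if tuple(image) == known_dog else "unknown"

# Flip a single pixel and the classifier no longer recognizes the dog.
perturbed = list(known_dog)
perturbed[4] ^= 1
```

A human shrugs off a one-pixel change; a brittle model, like this toy matcher, does not.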
That is where machine learning comes in: rather than being fed information, the software uses statistical methods to seek out data and learn from it. Automated learning is the objective of advanced behavioral analytics. Enterprises have used behavioral analytics for years to learn about consumer trends and behavior, building baseline statistics that help them market products and tailor offerings to a particular demographic or individual customer.
By applying statistical learning to a company's three primary areas of security concern (its assets, its users, and its network), the software can establish baseline behaviors and then sniff out anomalies in any, or all, of the three. For example, the AI may learn how often an asset such as a program or file is used, by whom in the organization, and which devices it communicates with. As anomalies are flagged and investigated, the system reinforces its own learning.
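A minimal version of that baselining is a z-score check: record how often a user touches an asset each day, then flag any day that deviates from the historical mean by more than a few standard deviations. The access counts below are hypothetical, and real systems use far richer models, but the idea is the same.

```python
import statistics

# Hypothetical baseline: one user's daily file-access counts over ten days.
baseline = [42, 39, 45, 41, 40, 44, 43, 38, 41, 42]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(todays_count: int, threshold: float = 3.0) -> bool:
    """Flag a day whose count is more than `threshold` stdevs from the mean."""
    return abs(todays_count - mean) > threshold * stdev
```

A normal day (44 accesses) passes quietly, while a sudden spike to 400, such as a bulk exfiltration, is flagged for review.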
However, even that learning falls short of true intelligence, because a computer cannot actually reason or show intuition.
In some areas, such as signal processing or image detection, artificial intelligence is powerful. In the security domain, however, it is still weak, in part because we do not yet know how the brain works.
That is one of the issues experts must solve: they do not know how reasoning and intuition form. They can predict how artificial intelligence will work in the near future, but the field keeps advancing beyond what they can now understand, into something they cannot yet imagine.
And while the majority of hackers aren't sophisticated, some are, and those few are likely tackling the same AI problems themselves, turning the technology into a weapon of their own. Most, however, are merely searching for vulnerabilities in a system, and those are all too easy to find, thanks to human thinking.
When a panel of experts was recently tasked with searching for flaws in the infrastructure of a major electrical utility, and particularly in its protocol ID anomalies, they did not need to look far.
According to the experts, employees were using passwords such as 123456, and everyone inside and outside the utility had them. Most places, they add, are already that insecure; you do not need sophisticated malware to get in.
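Catching this kind of weakness doesn't require AI at all; a simple audit against a list of common passwords finds it. The account data below is made up for illustration (a real audit would check password hashes, never plaintext).

```python
# Hypothetical common-password blocklist (real audits use much larger lists).
COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111", "letmein"}

# Hypothetical accounts; shown as plaintext only for the sake of the sketch.
accounts = {"ops_admin": "123456", "analyst1": "Tr0ub4dor&3"}

def audit(accounts: dict) -> list:
    """Return the users whose password appears on the blocklist."""
    return [user for user, pw in accounts.items() if pw in COMMON_PASSWORDS]
```

Here `audit(accounts)` would flag `ops_admin`, the kind of finding the panel reported.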
And in cases like that, you do not need a machine brain to stop the attackers either.
For more information on how TSI Cyber can protect your business, contact TSI today!