
An insight into how Artificial Intelligence is used in Penetration Testing

Posted by Nettitude on Sep 18, 2020


With the digital era well upon us, the state of affairs in the cybersecurity world has grown complex, and those of us who work in pen testing are no exception. Whilst traditional penetration testing techniques are still very much relevant today, there's no denying that a constant stream of new tools, techniques and responsibilities makes penetration testing, on the whole, a mammoth task. In light of this, it is increasingly difficult for human teams to stay on top of these requirements effectively, and increasingly necessary to lean on technological automation to support our cybersecurity endeavours.

In the following blog post, we'll explore how AI is used in pen testing exercises, evaluate what has changed since the days of traditional penetration testing methods, and look at how effectively new technologies can work alongside their human counterparts as a multi-disciplinary workforce.

The current state of pen testing

The current state of the traditional penetration testing service is probably best described as steady: a constant stream of updates to tools and techniques, with no clear industry-changing practices in sight. Faced with this stream of updates, even the best hackers among us reach a ceiling on how many techniques we can remember, and on how many different tools we can wield in order to provide a comprehensive penetration test. Many of these techniques and tools target relatively low-hanging fruit: vulnerabilities that are not all that difficult to find with the correct knowledge or insight, and that could most likely be automated if prerequisites such as application state, network state and reconnaissance flags are all met, as the short sketch below illustrates.
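To make that concrete, here is a minimal sketch (not from the original post) of what such an automated check might look like: a scripted test that only fires once its prerequisites are verified. The host, port and service details are hypothetical placeholders.

```python
# A minimal sketch of an automated "low-hanging fruit" check that
# backs off unless every prerequisite is met. Hostnames, ports and
# the flagged service are hypothetical placeholders.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Prerequisite check: is the service even reachable?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Prerequisite check: what does the service identify itself as?"""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace")
    except OSError:
        return ""

def run_low_hanging_fruit_check(host: str) -> None:
    # Only attempt the scripted check when every prerequisite holds;
    # one unmet condition and the automation does nothing.
    if not port_open(host, 21):
        return
    banner = grab_banner(host, 21)
    if "FTP" in banner.upper():
        print(f"{host}: FTP exposed ({banner.strip()}); flag for anonymous-login test")

if __name__ == "__main__":
    run_low_hanging_fruit_check("192.0.2.10")  # TEST-NET address; replace in a lab
```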

Managing all of this through traditional programmatic means, however, is most likely a recipe for disaster. Each environment is fairly unique: it only takes one small oversight for an attack not to work, or a slight configuration deviation for it to go undetected by automation. Meanwhile, the need for automation in the offensive cybersecurity industry is rising quickly, as the supply of consultants and experts in the area cannot keep up with the demand for these skill sets. If we can use automation in a clever or novel way, we can effectively bolster human expertise with machine efficiency. One of the proposed solutions is the use of Artificial Intelligence.

 

Doesn’t this go against the nature of penetration testing?

Now I know what you are thinking: mixing hacking with AI seems like a bad idea. If we were talking about general intelligence, that might be the case, but we are a long way off from artificial general intelligence (AGI) smart enough to learn to hack better than humans. However, using AI to surface insights that human hackers may miss, or training a model to estimate the probability that a certain vulnerability exists based on other observations, may be a good way to incorporate it into offensive security. A sketch of that probabilistic approach follows below.
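As an illustration of that idea, here is a minimal sketch, assuming scikit-learn, of training a model to emit a probability that a vulnerability is present given observed features. The features, data and labels are invented for illustration; in practice the training data would come from past engagement findings.

```python
# A minimal sketch, assuming scikit-learn, of a model that outputs a
# probabilistic signal that a vulnerability may exist, given features
# observed during reconnaissance. All features and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-host features: [open_port_count, outdated_service,
# default_page_seen, tls_grade (0=A .. 4=F)]
X = np.array([
    [12, 1, 1, 3],
    [3,  0, 0, 0],
    [25, 1, 0, 4],
    [5,  0, 1, 1],
    [18, 1, 1, 2],
    [2,  0, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = vulnerability later confirmed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# predict_proba yields the probabilistic value described above: not
# "this host is vulnerable", but "how strongly the observed features
# resemble past vulnerable hosts".
for host_features, p in zip(X_test, model.predict_proba(X_test)[:, 1]):
    print(f"features={host_features.tolist()} -> P(vulnerable) ~= {p:.2f}")
```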

The defensive side of cybersecurity has been harnessing AI-based technologies for years, with strong applications within network monitoring, behaviour tracking and host-based detection, allowing defensive security professionals to monitor large and complex networks with relatively small teams. These teams are constantly gaining more AI-based tooling to scale their efforts against ever-expanding network attack surfaces, curious user bases and an infinitely expanding Internet of Things. It is time for offensive security to join these efforts.
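For a flavour of what that defensive tooling can look like underneath, here is a minimal sketch, again assuming scikit-learn, of anomaly detection over network flow features. The features and figures are invented for illustration.

```python
# A minimal sketch of the kind of anomaly detection that underpins
# AI-assisted network monitoring. The per-flow features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow features: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30], scale=[1_000, 4_000, 10], size=(500, 3)
)
odd_flow = np.array([[900_000, 1_200, 2]])  # large upload, tiny reply, very short

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict returns -1 for outliers; a SOC team would triage those first.
print(detector.predict(odd_flow))            # -> [-1]
print(detector.predict(normal_traffic[:3]))  # mostly [1 1 1]
```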

 

Introducing offensive security techniques

So how exactly do we, as offensive security professionals, keep up with the pace of AI adoption in defensive techniques, as well as the industry's demand for more and more hackers? Well, Watson, the solution is fairly simple: we use AI ourselves.

Typically, the use of AI in offensive security has been inhibited by the complex tasks that security consultants must apply to a highly diverse set of environments, and by the fact that an attacker need only find one way into a system, whereas the defender must protect every avenue. A great example lies within web application testing. Although the use of popular frameworks is increasing, the way developers and organisations use them varies widely, and what counts as a vulnerability on one website built with React may be irrelevant to another using the same framework. From an infrastructure perspective, a Linux-based server VLAN looks considerably different to a Windows-based VLAN at both the reconnaissance and network traffic levels.

 

What other challenges do we face?

Appropriately signalling vulnerabilities based on the features of an environment, without knowing the environment beforehand, is a difficult task even for the most sophisticated AI. Training a model flexible enough to highlight the features that suggest a vulnerability may be present is a daunting and complex endeavour, but not an impossible one.

In addition, entering an environment without much background knowledge is one of the key parts of being a (human) penetration tester. Our general day-to-day consists of exploring new and often unique environments, so being able to quickly adapt to fundamental technology changes while still applying our security knowledge is imperative, and any AI built for offensive work needs to adapt in the same way. This flexibility is one of the hardest things to emulate programmatically, but as machine learning techniques evolve, accomplishing more complex tasks becomes realistic.

 

How can we get around this?

A more likely scenario is adding artificial intelligence capabilities to existing tools: consultants do the initial "environment scoping", and assistive AI then provides an "AI enhanced" penetration testing experience. Whilst this does not solve the lack of talent in the industry, it creates a more consistent environment for our capabilities and gives consultants more time to look at interesting and complex issues. Think less apocalyptic AI takeover and more Google Assistant for hackers.

Our darker-hatted adversaries have been utilising AI in swathes in recent times: from conducting advanced phishing campaigns using automation and AI, to creating a voice signature of a CEO and using it in a phishing attack to have an organisation send money to Hungary [1]. Malicious actors are also using AI to evade protections such as spam filters and CAPTCHAs, making it even more difficult for organisations to keep their systems from being brute-forced. On top of bypassing these protections, AI has been used to train neural networks, built on generative adversarial networks, that are more efficient at guessing passwords and cracking hashes [2].
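To give a flavour of the password-guessing approach cited at [2], here is a heavily simplified sketch of the generator half of such a system: a network that maps random noise to fixed-length password candidates. The architecture, character set and sampling scheme are invented; the real work trains a generator adversarially against a discriminator on large leaked-password corpora.

```python
# A heavily simplified sketch, loosely in the spirit of the GAN-based
# password guessing cited at [2]. Dimensions and charset are invented;
# this generator is untrained, so its output is noise until it is
# trained adversarially against a discriminator on real password data.
import torch
import torch.nn as nn

CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"
PW_LEN, NOISE_DIM = 8, 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Noise in; one distribution over the charset per position out.
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, PW_LEN * len(CHARSET)),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        logits = self.net(z).view(-1, PW_LEN, len(CHARSET))
        return torch.softmax(logits, dim=-1)

def sample_passwords(gen: Generator, n: int) -> list[str]:
    """Draw one character per position from the generator's distributions."""
    with torch.no_grad():
        probs = gen(torch.randn(n, NOISE_DIM))
        indices = torch.multinomial(
            probs.view(-1, len(CHARSET)), 1
        ).view(n, PW_LEN)
    return ["".join(CHARSET[i] for i in row) for row in indices.tolist()]

if __name__ == "__main__":
    gen = Generator()
    for candidate in sample_passwords(gen, 5):
        print(candidate)
```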

Based on current research, however, black hat adversaries have yet to harness artificial intelligence to conduct an entire attack from beginning to end. The examples identified above are generally cases where AI has been used to improve a specific technique within a specific attack; whilst there is merit in increasing the effectiveness and complexity of an attack this way, it still requires a significant amount of human input.

In conclusion, there are many challenges in creating a fully autonomous AI for offensive security, and most of the uses we have seen from black hat hackers have been specific to a single task rather than a comprehensive solution that layers machine learning upon machine learning. As the penetration testing industry continues to grow, and continues to require more people with the skills and talents to perform advanced security assessments, AI will become an invaluable part of every hacker's toolkit, providing consistent, quick and in-depth assessments of an organisation's security.

Want to find out more about which penetration testing exercises LRQA Nettitude can perform? Visit the webpage and get in touch.

 

[1] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

[2] https://arxiv.org/abs/1709.00440
