What’s the difference between machine learning and artificial intelligence (AI)?
Do these new terms have measurable impact on the state of cybersecurity today?
What’s stopping businesses from adopting AI?
RSA security solutions VP Ben Desjardins answers the big questions about AI’s potential to help secure businesses.
Generally, the most common misconception is that AI and machine learning are the same thing, when in fact the latter is a subset of the former.
In the context of cybersecurity, AI is the application of machine-generated insights through automated action.
This misconception fuels the idea that AI could go rogue and execute unintended actions.
It’s therefore important to understand that machine learning takes two forms: supervised and unsupervised.
In a supervised process, humans define the categories and supply labelled examples for the model to learn from; in an unsupervised process, the model finds patterns in unlabelled data on its own. In both cases, humans still control if and how the insights from the machine learning process are applied to specific actions.
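The distinction can be sketched in a few lines of Python. This is a toy illustration with invented login-duration data, not any vendor's actual model: the supervised classifier learns a threshold from human-supplied labels, while the unsupervised check flags statistical outliers with no labels at all.

```python
# Supervised: humans supply labels ("normal"/"suspicious"); the model
# learns a decision boundary from them. Here, a one-feature threshold
# classifier: the midpoint between the two class means.
labeled = [(2.0, "normal"), (3.0, "normal"), (2.5, "normal"),
           (9.0, "suspicious"), (11.0, "suspicious")]

def fit_threshold(samples):
    normal = [x for x, y in samples if y == "normal"]
    suspicious = [x for x, y in samples if y == "suspicious"]
    return (sum(normal) / len(normal) + sum(suspicious) / len(suspicious)) / 2

def classify(x, threshold):
    return "suspicious" if x > threshold else "normal"

# Unsupervised: no labels; the model flags statistical outliers instead.
# Here, anything more than two standard deviations from the mean.
unlabeled = [2.0, 3.0, 2.5, 2.2, 2.8, 10.5]

def find_outliers(samples, k=2.0):
    mean = sum(samples) / len(samples)
    std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    return [x for x in samples if abs(x - mean) > k * std]
```

Either way, a human still decides what happens to a flagged sample, which is the speaker's point about control.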
AI and machine learning have been assisting cybersecurity for some time.
In fact, perhaps the biggest misconception about AI is that its application in cybersecurity is new, when both AI and machine learning have been in use for years.
What has changed is that the technologies have advanced to a point where organisations can apply them to critical issues in cybersecurity.
AI has great potential to automate repeatable security actions and help organisations address the skills gap within security operations.
One of the biggest challenges I hear practitioners voice is cutting through the marketing jargon around AI to really understand how these models work and their potential impact within their organisations.
Some solutions with elements of AI or machine learning are presented as a “black box”, with a lot of mystery around the algorithms and models.
Users should test these models against specific valuable use cases they have identified.
Finally, while having cool new tech is great, getting true value comes from connecting the security insights and actions within a real business context.
At RSA, we’ve been using machine learning for quite some time.
Again, the science has advanced significantly but is not entirely new.
One example is using machine learning models to detect fraudulent transactions based on the analysis of large data sets of transactions.
Many fraudulent transactions hold similar characteristics that can be identified and correlated through pattern analysis.
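The pattern-analysis idea can be sketched as follows. This is a hypothetical toy, not RSA's fraud model: it estimates, from a small labelled history, how often each transaction characteristic (country, merchant type, amount band) has appeared in fraud, then scores a new transaction by the average historical fraud rate of its characteristics.

```python
from collections import defaultdict

# Invented labelled history: (characteristics, was_fraud)
history = [
    ({"country": "US", "merchant": "grocery", "band": "low"}, False),
    ({"country": "US", "merchant": "grocery", "band": "low"}, False),
    ({"country": "XX", "merchant": "gift_cards", "band": "high"}, True),
    ({"country": "XX", "merchant": "gift_cards", "band": "high"}, True),
    ({"country": "US", "merchant": "fuel", "band": "low"}, False),
]

def fraud_rates(transactions):
    """Per (feature, value) fraud rate estimated from labelled history."""
    seen, fraud = defaultdict(int), defaultdict(int)
    for features, is_fraud in transactions:
        for key in features.items():
            seen[key] += 1
            if is_fraud:
                fraud[key] += 1
    return {key: fraud[key] / seen[key] for key in seen}

def score(features, rates):
    """Average historical fraud rate of this transaction's characteristics."""
    vals = [rates.get(key, 0.0) for key in features.items()]
    return sum(vals) / len(vals)
```

A transaction sharing several characteristics with past fraud scores high and can be held for review; real systems learn such correlations over far richer feature sets.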
Another area is user behaviour, both in how users try to access (authenticate into) systems or applications and in the behaviours they exhibit once authenticated.
More advanced models apply deeper insights to identify anomalies that can be blocked or investigated.
As an example, our security solution can learn behaviours, such as the usual time a user accesses the system and their location.
So if a cybercriminal attempts to access the system from an unfamiliar location, it will prompt a step-up authentication request.
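The behaviour-learning idea above can be sketched minimally. The profile format and the challenge rule are assumptions for illustration, not RSA's actual implementation: a user profile records the hours and locations seen in past logins, and any attempt outside that history triggers a step-up challenge.

```python
def build_profile(logins):
    """logins: list of (hour_of_day, location) tuples from past sessions."""
    return {
        "hours": {hour for hour, _ in logins},
        "locations": {loc for _, loc in logins},
    }

def needs_step_up(profile, hour, location):
    """Challenge when either the time or the place is unfamiliar."""
    return hour not in profile["hours"] or location not in profile["locations"]

# Hypothetical history: morning logins from two UK cities.
profile = build_profile([(9, "London"), (10, "London"), (9, "Manchester")])
```

A familiar time and place passes silently, while an attempt at 3 a.m. or from a new country is challenged; production systems would use probabilistic models rather than exact set membership.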
As the science around AI and machine learning advances along with the valuable application, its impact will continue to grow.
Digital transformation is yielding huge amounts of data, all of which needs analysing and processing to identify patterns.
Being able to do this at scale will be key.
Combined with broad and deep visibility, metadata can help identify issues and alerts that are associated with the same attack campaign.
As the above capabilities of AI applied to cybersecurity advance, the technology has the potential to help security teams better keep pace with the fast-evolving threat landscape.
Protection from modern threats requires a combination of machine learning-driven AI to analyse traffic and behaviours, accurately detect attacks, and automate responses to threats where possible.
This frees security analysts to focus on higher-level tasks and investigations.