Interview: LogRhythm's Simon Howe on AI and cybersecurity
Machine learning (ML), an emerging technology under the artificial intelligence (AI) umbrella, is being rapidly adopted by major organisations worldwide, its main draws being the streamlining of processes and the diagnosis of issues.
It seems fitting, then, that this emerging tech can take on the threats that come with advancing cyber attack and malware technology.
This is where a user and entity behaviour analytics (UEBA) strategy can come in handy for IT security managers, says LogRhythm vice president of sales for APAC, Simon Howe.
Techday spoke to Howe about the benefits of UEBA, and how important AI and ML are for effective IT security.
What is UEBA, and what are the dangers facing organisations with more user and entity-based threats coming to light?
UEBA tools allow an organisation's security team to establish a baseline [using ML] that identifies what constitutes normal behaviour for each user on the network. Once this is in place, the tool can then monitor for any unusual activity.
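The baseline-then-monitor idea can be sketched with a toy statistical model. This is a minimal illustration, not LogRhythm's implementation: the user names, login-hour data, and z-score threshold are all invented for the example.

```python
from statistics import mean, stdev

# Hypothetical per-user login-hour history (hour of day, 0-23).
history = {
    "alice": [9, 9, 10, 8, 9, 10, 9],
    "bob": [14, 13, 15, 14, 13, 14, 15],
}

def build_baseline(samples):
    """Summarise 'normal' behaviour as a mean and a spread."""
    return mean(samples), stdev(samples)

def is_anomalous(user, observed_hour, baselines, z_threshold=3.0):
    """Flag activity that deviates strongly from the user's own baseline."""
    mu, sigma = baselines[user]
    if sigma == 0:
        return observed_hour != mu
    return abs(observed_hour - mu) / sigma > z_threshold

baselines = {u: build_baseline(s) for u, s in history.items()}

print(is_anomalous("alice", 9, baselines))   # False: within normal hours
print(is_anomalous("alice", 3, baselines))   # True: a 3am login is flagged
```

Real UEBA tools learn far richer features than login times, but the principle is the same: each user is compared against their own history rather than a global rule.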
The most effective UEBA tools need to be able to detect and respond to three key things: insider threats before fraud is perpetrated, compromised accounts before more systems are taken over, and privileged account abuse before sensitive data is accessed or operations are affected.
Insider threats can also occur as the result of activity by people outside an organisation.
External individuals can gain access to internal systems where they then masquerade as a user by taking over a legitimate account.
In what ways will machine learning protect against the rising threat of user and entity-based threats?
Machine learning makes it possible to teach context to security systems.
This enables them to synthesise various forms of data to create ‘white lists’ of normal behaviour for individuals and organisations. Activities which fall outside these parameters can then be flagged and addressed.
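In its simplest form, such a 'white list' is a learned set of observed-normal activities, with anything outside it flagged for review. The users, actions, and set contents below are purely illustrative assumptions:

```python
# Hypothetical 'white list' of (user, action) pairs learned from
# historical activity; names and actions are invented for illustration.
normal_behaviour = {
    ("alice", "read:crm"),
    ("alice", "read:email"),
    ("bob", "deploy:staging"),
}

def flag_unusual(events, whitelist):
    """Return the events that fall outside the learned-normal parameters."""
    return [e for e in events if e not in whitelist]

events = [("alice", "read:crm"), ("alice", "export:payroll")]
print(flag_unusual(events, normal_behaviour))  # [('alice', 'export:payroll')]
```

A production system would build and age these sets automatically from telemetry, but the flagging logic reduces to this kind of membership test.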
Machine learning itself offers a number of ways to improve an organisation’s infrastructure security. These include:
- Threat prediction and detection, where anomalous activity is assessed in order to recognise emerging threats
- Risk management, involving the monitoring and analysing of user activity, asset contents and configurations, network connections, and other asset attributes
- Vulnerability information prioritisation, by using learned information about an organisation’s assets and where weaknesses might exist
- Threat intelligence curation through which information within threat intelligence feeds is reviewed to improve quality
- Event and incident investigation and response, which involves reviewing and analysing information on events and incidents in order to identify next steps and organise the most appropriate response
In your view, is there a way forward for cybersecurity teams that does not include an AI or ML-based approach?
ML offers much better capabilities than humans can deliver when it comes to recognising and predicting certain types of patterns.
These new tools can also move beyond rule-based approaches that require knowledge of known patterns.
Instead, they can learn typical patterns of activity within an IT infrastructure and spot unusual deviations that could mark an attack.
However, while modern tools such as AI and ML can strengthen a CISO's arsenal, organisations still require human involvement to respond to and recover from incidents. For example, humans are still needed to decide whether an issue is a false positive, communicate with the affected team, and coordinate actions with other organisations.
Indeed, today’s security products cannot fully automate the Security Operations Centre (SOC) and completely eliminate the need for security analysts, incident responders, and other SOC staff, but ML can streamline and automate some processes to reduce the need for human responders.
Are there any pitfalls that security teams could fall into while implementing AI/ML tech? If so, how can they avoid them?
IT security teams will need to be mindful of some key steps that have to be taken while implementing AI/ML. These include:
- Providing ML-powered tools with real-time access to large sets of high-quality, rich structured data that shows all security-related events throughout the organisation
- Feeding the tools with the contextual information necessary to understand the meaning and importance of each observed activity and detected anomaly
- Performing supervised learning with extensive sets of high-quality training data to educate the tools on which activities are good and which are bad.
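The supervised-learning step above can be sketched with a tiny nearest-centroid classifier. The feature choice (failed logins, outbound megabytes) and the labelled data are invented assumptions for illustration, not real training data:

```python
from math import dist

# (failed_logins, bytes_out_mb) per labelled event -- hypothetical data.
training = [
    ((0, 1), "good"), ((1, 2), "good"), ((0, 3), "good"),
    ((9, 80), "bad"), ((8, 95), "bad"), ((10, 70), "bad"),
]

def train_centroids(examples):
    """Average the feature vectors of each class (nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in acc) for lbl, acc in sums.items()}

def classify(features, centroids):
    """Label a new event by its closest class centroid."""
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

centroids = train_centroids(training)
print(classify((0, 2), centroids))    # "good": looks like routine activity
print(classify((12, 90), centroids))  # "bad": resembles the labelled attacks
```

The quality of such a model depends entirely on the training data, which is why the interview stresses extensive, high-quality labelled sets.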
How will the impending Privacy Bill affect organisations' cybersecurity strategies in New Zealand?
The Privacy Bill has brought together people with expert knowledge and first-hand experience to develop the privacy legislation required to protect citizens.
The legislation will further help New Zealand businesses to enforce better security hygiene, which should ideally cover four important components:
- Enforce encryption
It’s important not to leave setting up encryption in the hands of users, as there is then no way to ensure it has been completed and that stored data is secure.
- Patch regularly
Patching is a basic step in any security strategy. It’s unwise to expect users to regularly update their devices, so this should become a responsibility of the IT department.
- Remote wipes
Another layer of security can be provided by ensuring the IT department has the ability to remotely wipe devices should they go missing. This ensures any data is removed before it can be misused, and the device is rendered useless to any criminal.
- Improve reporting
To ensure compliance with the new regulations, organisations should have the capacity to report on the status of all endpoints and data stores. It will be important to have in place tools that can automate this process, to ensure reports are complete and up to date at all times.