Cloud security alert - why log files are not the answer
Thu, 5th Apr 2018

Once production applications and workloads have been moved to the cloud, re-evaluating the company's security posture and adjusting the processes used to secure data and applications from cyberattacks are critical next steps.

Cloud infrastructure is ideal for providing resources on demand and significantly reducing the cost of acquiring, deploying and maintaining internal resources. In addition, organisations can quickly scale cloud resources up or down, eliminating the need to over-provision "just in case".

But losing control over the physical infrastructure means familiar tools can no longer be used to gain insight into what is happening in that infrastructure. Anyone responsible for IT security needs a strategy for monitoring what is happening in their company's cloud, so they can shut down any attacks that occur and limit the damage.

The use of log files

While users do not have direct access to public cloud infrastructure, cloud providers do offer access to logs of events that have taken place in the user's cloud, often for an additional cost. With logs, administrators can view, search, analyse, and even respond to specific events if they use APIs to integrate the event data with a security information and event management (SIEM) solution. So why aren't log files sufficient to maintain security?
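
As a concrete illustration of that kind of API integration, the sketch below pulls recent management events from AWS CloudTrail (one provider's log API, used purely as an example; the article is not specific to any provider) so they could be forwarded on to a SIEM. It assumes boto3 is installed and credentials are already configured.

```python
# Illustrative sketch only: pulling recent management events from AWS CloudTrail
# so they can be shipped to a SIEM. Assumes boto3 and configured AWS credentials.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
now = datetime.now(timezone.utc)

# Look up console-login events from the last 24 hours.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeName": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    MaxResults=50,
)

for event in response["Events"]:
    # Each record carries who did what and when; a SIEM forwarder would ship the
    # raw JSON payload (event["CloudTrailEvent"]) to its ingestion endpoint.
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```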

First, not all the necessary data may be collected through log files. While management events are automatically logged, data events are not. Some providers may support collection of custom logs, but users would need to specify and activate those logs ahead of time. This makes it difficult or sometimes impossible to go back and investigate areas that were not already being tracked.
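
The sketch below illustrates the "activate ahead of time" point using AWS CloudTrail event selectors as one example: object-level data events are simply not captured until a rule like this is in place. The trail and bucket names are hypothetical placeholders.

```python
# Illustrative sketch: data events (e.g. object-level S3 reads/writes) are not
# logged by default and must be switched on in advance. The trail and bucket
# names are hypothetical placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="example-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,   # management events are on by default
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash means "all objects in this bucket".
                    "Values": ["arn:aws:s3:::example-bucket/"],
                }
            ],
        }
    ],
)
```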

Second, while event logs are useful for identifying when an alert was triggered, they do not provide enough information to determine what caused the alert. More detailed information is needed to perform root cause analysis and execute timely remediation. Advanced persistent threats (APTs), now among the most damaging types of breach, cannot be stopped by merely analysing log files.

The most advanced network security solutions require detailed data in real time to have a chance of detecting APTs. Log files are typically generated at specified intervals, depending on the level of service the user pays for. Users then need to set up a mechanism for storing log files for future analysis; this is not the default. So, while data useful in a breach investigation can be collected, it is not available in real time, which limits the speed of containment and recovery.

Third, sophisticated adversaries are increasingly adept at moving inside an organisation without triggering any alerts. In many attacks, previously unseen malware enters an enterprise and lurks there undetected, exfiltrating data over a period of many months. Security today requires more rigorous oversight than log files provide.

And finally, in the long run, logs can be expensive to manage. Obtaining sufficient log data and sifting through it demands time, money, and a commitment to data integration. Existing security monitoring tools that use log data may not be sufficient to investigate new threats, and investment in additional tools may be required.

Security analysts could end up spending more time on complex data administration, rather than focusing on correlation analysis and incident response.

What can packet data do?

Data packets are like nested Russian dolls with the content enclosed inside various headers that work to move the packet efficiently through the network. The headers can be very informative, but security today is dependent on what is called deep packet inspection (DPI) of the packet's payload or content.

DPI exposes the specific websites, users, applications, files, or hosts involved in an interaction—information that is not available by inspecting header data alone.
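
As a rough illustration of the difference, the sketch below reads a previously captured trace with scapy: header inspection yields only addresses and ports, while looking into the payload (a crude form of DPI) can recover, for example, the Host header of a plaintext HTTP request. The capture file name is a placeholder.

```python
# Minimal sketch of header inspection vs. deep packet inspection using scapy.
# "capture.pcap" is a hypothetical file name for a previously captured trace.
from scapy.all import IP, TCP, Raw, rdpcap

for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt:
        # Header inspection: addresses and ports only.
        print(pkt[IP].src, "->", pkt[IP].dst, "port", pkt[TCP].dport)

        # Deep packet inspection: look inside the payload itself, e.g. to pull
        # the Host header out of a plaintext HTTP request.
        if Raw in pkt:
            payload = bytes(pkt[Raw].load)
            if payload.startswith((b"GET ", b"POST ")):
                for line in payload.split(b"\r\n"):
                    if line.lower().startswith(b"host:"):
                        print("  requested host:", line.split(b":", 1)[1].strip().decode())
```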

Cloud environments have many potential vulnerabilities that attackers can exploit. And attacks are frequently conducted in multiple stages that may not be caught by intrusion detection systems or next-generation firewalls.

To stay ahead of would-be attackers, security analysts increasingly use data correlation and multi-factor analysis to find patterns associated with illegitimate activity. These sophisticated solutions require granular data to work effectively. Most organisations have solutions like these deployed on-premises to evaluate packet data captured from physical infrastructure.
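
The toy sketch below gives a flavour of that kind of correlation: it flags any source that fans out to an unusually large number of distinct peers, a crude stand-in for lateral-movement detection. The flow records and threshold are invented for illustration and do not reflect any particular product's logic.

```python
# Simplistic stand-in for the kind of correlation a security platform performs:
# flag any internal source that talks to an unusually large number of distinct
# hosts (a crude lateral-movement heuristic). Flow records and the threshold
# are illustrative assumptions.
from collections import defaultdict

flows = [
    # (source_ip, destination_ip, destination_port)
    ("10.0.1.5", "10.0.2.7", 445),
    ("10.0.1.5", "10.0.2.8", 445),
    ("10.0.1.5", "10.0.2.9", 445),
    ("10.0.3.2", "10.0.2.7", 443),
]

DISTINCT_PEER_THRESHOLD = 3  # arbitrary, for illustration only

peers = defaultdict(set)
for src, dst, _port in flows:
    peers[src].add(dst)

for src, dsts in peers.items():
    if len(dsts) >= DISTINCT_PEER_THRESHOLD:
        print(f"suspicious fan-out from {src}: {len(dsts)} distinct peers")
```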

How to gain access to packet-level data in the cloud

Unlike physical infrastructure, which can be tapped to produce copies of data packets, cloud architecture is not directly accessible. In the event of an ongoing attack or data breach, a user may be frustrated to learn that the data they need to isolate and resolve the issue is not included in the Service Level Agreement they have with their provider. Fortunately, there are new methods to access packet-level data in clouds.

Container-based sensors have been developed that sit inside the cloud instances and generate copies of packet data. The sensors are automatically deployed inside every new cloud instance that is spun up, for unlimited scalability. Because the sensors sit inside each cloud instance, they have access to every raw packet that enters or leaves that instance. This cloud-native approach to data access ensures no data is missed, for strong cloud security.
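
Conceptually, such a sensor does something like the sketch below: copy every packet seen on the instance's interface and forward it off-instance to a collector. This is a generic illustration using scapy, not any vendor's implementation; the collector endpoint and interface name are made-up placeholders, and sniffing requires elevated privileges inside the instance.

```python
# Conceptual sketch of a container-based sensor: copy every packet seen on the
# instance's interface and send it to a collector. Not any vendor's product;
# the collector address and interface name are hypothetical placeholders.
import socket

from scapy.all import sniff

COLLECTOR = ("visibility.example.internal", 4789)  # hypothetical endpoint

out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def mirror(pkt):
    # Encapsulate the raw packet bytes and send a copy off-instance;
    # the original traffic is untouched.
    out.sendto(bytes(pkt), COLLECTOR)

# store=False keeps memory flat; prn is called once per captured packet.
sniff(iface="eth0", prn=mirror, store=False)
```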

What are the benefits of a cloud visibility platform?

Of course, having access to all the packet-level data from every cloud instance presents another problem: volumes of data that can overwhelm security solutions and even lead them to drop packets. A cloud visibility platform filters the raw packets according to user-defined rules and strips out unnecessary data, delivering only the relevant data to each security solution. This enables security solutions to work more efficiently.
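
A minimal sketch of such user-defined filtering, with invented tool names and rules: each security tool registers a predicate, and only packets matching that predicate are delivered to it.

```python
# Illustrative filtering layer: user-defined rules decide which packets each
# security tool receives, so no tool is flooded with traffic it cannot use.
# Tool names and rules are made up for the example.
from scapy.all import DNS, IP, TCP

RULES = {
    "ids": lambda pkt: TCP in pkt and pkt[TCP].dport in (80, 443),  # web traffic only
    "dns_monitor": lambda pkt: DNS in pkt,                          # DNS queries/answers
}

def dispatch(pkt, deliver):
    """Send the packet only to the tools whose rule matches."""
    if IP not in pkt:
        return
    for tool, rule in RULES.items():
        if rule(pkt):
            deliver(tool, pkt)

# Example delivery callback that just logs the routing decision.
def log_delivery(tool, pkt):
    print(f"{tool} <- {pkt[IP].src} -> {pkt[IP].dst}")
```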

Today, there are two types of visibility platforms available for cloud workloads. One uses a lift-and-shift approach, taking the visibility engine developed for the data centre and moving it to the cloud. The engine itself is a monolithic processor that aggregates and filters all the data in one location.

The other approach distributes data aggregation and filtering to each of the cloud instances and communicates the results to a cloud-based management interface. Data can either be delivered directly from the cloud instances to cloud-based security monitoring solutions or backhauled to the data centre.

The distributed solution has the advantage of being highly scalable, since the data does not need to be transported to a central location for processing. And the distributed solution is more reliable, since there is no single point of failure.

Whether responding to a security incident or data breach, or supporting litigation, an organisation needs a highly effective cloud visibility platform for accessing and preserving the digital traffic that impacts its business. Log files are simply not able to fulfil that requirement.

Conclusion

Ultimately, log files are diagnostic tools. They are not security solutions, and they cannot facilitate an effective response to a security threat or breach. With the rising use of advanced persistent threats and multi-stage attacks, effective security requires detailed packet-level data from every interaction that happens in the cloud.

The cost of capturing and filtering packet data will be offset by the increased ability of the security team to detect attacks and accelerate incident response.