The shift to the cloud has brought massive changes to infrastructure management and compute environments. With them, SIEMs have evolved and are now offered as a service (SaaS), for example by Microsoft Sentinel, IBM QRadar on Cloud, Splunk, and others.
So why would you want a cloud SIEM?
With a traditional SIEM, installed on-prem, you must apply updates manually and schedule downtime for them, allocate resources for the server hosting the SIEM, manage all the network-related settings, write all the rule logic yourself, find workarounds to connect log sources to the system, and spend a lot of time parsing the logs.
With a cloud SIEM you don't have to do any of that; you simply subscribe to a SaaS solution that the provider manages for you. Yes, you still have to configure the logs to be sent to the SIEM, but that is much easier in the cloud than on-prem. Usually, the SIEM vendors write parsers/modules for the third-party services that need to be monitored, or simply provide built-in solutions along with some sample rules as a starting point, which greatly reduces configuration complexity.
This is a huge advantage of cloud SIEMs: management becomes much easier, and analysts can focus on collecting events, devising new alert logic, proactively hunting through the logs for anomalies, writing automation for incidents, and more.
Is it beneficial for everyone?
In general, the answer is YES! Especially if you have a cloud environment or must comply with various regulations, as it brings a lot of flexibility and eases management. Cloud-based SIEMs usually offer out-of-the-box integrations with many other cloud tools, such as threat intelligence feeds, compliance platforms, DevOps and CI/CD tools, and vulnerability assessment services. All of this helps us create an advanced security-monitoring ecosystem that gives us a lot of flexibility in monitoring and detecting threat actors at different phases of a potential attack. Let's take one of the advanced cloud SIEMs, Microsoft Sentinel, as an example.
Sentinel is a cloud SIEM + SOAR platform offered by Microsoft. It lives in the Azure cloud, but it is great for monitoring any other cloud as well, not just Azure. It has all the capabilities of a SIEM, including automation (also known as SOAR). Sentinel sits on top of an Azure Log Analytics workspace: all logs and data are aggregated into the workspace, and Sentinel simply reads the data from there. Data connectors (the mechanisms for gathering logs from systems) send the data to Log Analytics via its API, and Microsoft provides solutions for them (they are quite fast with that), which simplifies integration. It's literally two clicks away! When there is no built-in integration, we can of course write one with Azure Functions (serverless applications), or even the Graph API. Sentinel also has an active GitHub community that constantly contributes new rules, integrations, playbooks, and dashboards to improve Sentinel. Sentinel updates automatically, too; every month new features and integrations are introduced. For example, last month a built-in integration with Amazon S3 became available. New rules are also introduced regularly, which let you monitor specific alerts in just one click. And if you want to automate an action for some alert, Azure Logic Apps are integrated into Microsoft Sentinel and make automation very simple.
Sentinel also has threat-hunting capabilities: it includes many built-in queries that analysts can run and analyze for suspicious behaviour or anomalies. You can also add your own queries using Kusto Query Language (KQL), which queries the Log Analytics workspace.
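As a minimal sketch of what such a custom hunting query can look like, the following KQL flags IP addresses with an unusually high number of failed Azure AD sign-ins (the `SigninLogs` table requires the Azure AD connector to be enabled; the thresholds and time window here are illustrative assumptions, not recommended values):

```kql
// Hunt for possible password spraying: IPs with many failed
// sign-ins against many distinct accounts in the last 24 hours.
// Thresholds are illustrative; tune them for your environment.
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"            // non-zero ResultType = failed sign-in
| summarize FailedAttempts = count(),
            TargetedAccounts = dcount(UserPrincipalName)
    by IPAddress
| where FailedAttempts > 50 and TargetedAccounts > 10
| order by FailedAttempts desc
```

A query like this can be saved as a hunting query, or wrapped in a scheduled analytics rule so matches raise incidents automatically.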
Does Microsoft Sentinel support only Azure-based products?
No, it supports most major public cloud providers, such as AWS, GCP, and others.
Let's assume you want to monitor events from AWS CloudTrail. This integration is built-in and provided by Microsoft; all we need to do is allow the API to access AWS by assuming a role, and the events are pulled into the Log Analytics workspace. After that, everything is automatic, and the logs are parsed automatically in Log Analytics. If you then want to create a new field from the parsed logs, that is possible too, and it is quite easy with the extend operator in KQL along with the parse_json or extract functions. A few built-in rules provided by Microsoft come with the integration, which is a good start, and we can then add our own rules by writing KQL queries with our desired logic. And that's it!

Let's take another case, where the integration isn't built-in: for example, an integration with Duo. Here we can use an already-written API function, which can be found on GitHub, and import it manually into our environment by creating an Azure Function specifically for that. The function uses Duo's API to fetch the logs and push them into the Log Analytics workspace. From there the process is the same as described above, except that there are no built-in rules; we need to create them all from scratch.
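To illustrate the extend step on custom logs, here is a minimal KQL sketch. The `Duo_CL` table name follows the `_CL` suffix convention that Log Analytics applies to custom tables ingested by an Azure Function, but the table and the `RawData`/`user`/`result` fields are assumptions for illustration, not the actual schema of any particular connector:

```kql
// Parse a JSON payload ingested as a string and promote two of
// its fields to top-level columns. Table and column names are
// hypothetical; adjust them to match your custom log schema.
Duo_CL
| extend Event = parse_json(RawData)
| extend UserName   = tostring(Event.user),
         AuthResult = tostring(Event.result)
| where AuthResult == "denied"
| project TimeGenerated, UserName, AuthResult
```

The same extend/parse_json pattern works for any custom source; once the fields are promoted, they can be used in analytics rules exactly like built-in columns.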
By Alex Shpilevoy, Cloud Security Specialist, and Dima Tatur, Head of Cybersecurity Division and CISO, at Commit