This Elastic integration collects logs from Azure.
What is an Elastic integration?
This integration is powered by Elastic Agent. Elastic Agent is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. Refer to our documentation for a detailed comparison between Beats and Elastic Agent.
Prefer to use Beats for this use case? See Filebeat modules for logs or Metricbeat modules for metrics.
See the integrations quick start guides to get started:
The Azure Logs integration collects logs for specific Azure services like Azure Active Directory (Sign-in, Audit, Identity Protection, and Provisioning logs), Azure Spring Cloud, Azure Firewall, and several others using the Activity and Platform logs.
You can then visualize that data in Kibana, create alerts to notify you if something goes wrong, and reference data when troubleshooting an issue.
For example, if you wanted to detect possible brute force sign-in attacks, you could install the Azure Logs integration to send Azure sign-in logs to Elastic. Then, set up a new rule in the Elastic Observability Logs app to alert you when the number of failed sign-in attempts exceeds a certain threshold. Or, perhaps you want to better plan your Azure capacity. Send Azure Activity logs to Elastic to track and visualize when your virtual machines fail to start due to an exceeded quota limit.
The Azure Logs integration collects logs.
Logs help you keep a record of events that happen on your Azure account. Log data streams collected by the Azure Logs integration include Activity, Platform, Active Directory (Sign-in, Audit, Identity Protection, Provisioning), and Spring Cloud logs.
You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.
Before using the Azure integration you will need:
Azure Diagnostic settings allow you to export metrics and logs from a source service (or resource) to a destination for analysis and long-term storage.
┌──────────────────┐ ┌──────────────┐ ┌─────────────────┐
│ Active Directory │ │ Diagnostic │ │ Event Hub │
│ <<source>> │─────▶│ settings │────▶│ <<destination>> │
└──────────────────┘ └──────────────┘ └─────────────────┘
Examples of source services:
The Diagnostic settings support several destination types. The Elastic Agent requires a Diagnostic setting configured with Event Hub as the destination.
Azure Event Hubs is a data streaming platform and event ingestion service. It can receive and temporarily store millions of events.
Elastic Agent with the Azure Logs integration will consume logs from the Event Hubs service.
┌────────────────┐ ┌────────────┐
│ adlogs │ │ Elastic │
│ <<event hub>> │─────▶│ Agent │
└────────────────┘ └────────────┘
To learn more about Event Hubs, refer to Features and terminology in Azure Event Hubs.
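To make the consumption model concrete, here is a minimal Python sketch of what an event hub consumer does, using the azure-eventhub package. The connection string, event hub name, and consumer group below are placeholders; the Elastic Agent performs the equivalent work for you, so this is only an illustration.

```python
# Minimal sketch of an event hub consumer (the role the Elastic Agent plays).
# Requires: pip install azure-eventhub
# The connection string and names below are placeholders.
from azure.eventhub import EventHubConsumerClient

CONNECTION_STRING = (
    "Endpoint=sb://<namespace>.servicebus.windows.net/;"
    "SharedAccessKeyName=<policy>;SharedAccessKey=<key>"
)

def on_event(partition_context, event):
    # Each event body carries a batch of exported Azure log records (JSON).
    print(f"partition {partition_context.partition_id}: {event.body_as_str()}")

client = EventHubConsumerClient.from_connection_string(
    conn_str=CONNECTION_STRING,
    consumer_group="$Default",
    eventhub_name="adlogs",
)

with client:
    # Blocks and dispatches events from every partition to on_event.
    # starting_position="-1" means "from the beginning of the retention period".
    client.receive(on_event=on_event, starting_position="-1")
```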
The Storage account is a versatile Azure service that allows you to store data in various storage types, including blobs, file shares, queues, tables, and disks.
The Azure Logs integration requires a Storage account container to work. The integration uses the Storage account container for checkpointing; it stores data about the Consumer Group (state, position, or offset) and shares it among the Elastic Agents. Sharing such information allows multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required.
┌────────────────┐ ┌────────────┐
│ adlogs │ logs │ Elastic │
│ <<event hub>> │────────────────────▶│ Agent │
└────────────────┘ └────────────┘
│
consumer group info │
┌────────────────┐ (state, position, or │
│ azurelogs │ offset) │
│ <<container>> │◀───────────────────────────┘
└────────────────┘
The Elastic Agent automatically creates one container for each enabled integration. In the container, the Agent will create one blob for each existing partition on the event hub.
For example, if you enable one integration to fetch data from an event hub with four partitions, the Agent will create one container for that integration containing four blobs, one for each partition.
The information stored in the blobs is small (usually < 500 bytes per blob) and accessed relatively frequently. Elastic recommends using the Hot storage tier.
You need to keep the storage account container as long as you need to run the integration with the Elastic Agent. If you delete a storage account container, the Elastic Agent will stop working and create a new one the next time it starts. By deleting a storage account container, the Elastic Agent will lose track of the last message processed and start processing messages from the beginning of the event hub retention period.
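For illustration, here is a hedged Python sketch of the same checkpointing pattern using the azure-eventhub and azure-eventhub-checkpointstoreblob packages. The account, key, container, and event hub names are placeholders, and the Agent's own checkpoint format may differ; the sketch only shows how a blob-backed checkpoint lets a consumer resume instead of re-reading the whole retention period.

```python
# Sketch of blob-backed checkpointing, analogous to what the Elastic Agent
# stores in the storage account container. All names and keys are placeholders.
# Requires: pip install azure-eventhub azure-eventhub-checkpointstoreblob
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

STORAGE_CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;"
    "AccountKey=<key>;EndpointSuffix=core.windows.net"
)
EVENTHUB_CONNECTION_STRING = (
    "Endpoint=sb://<namespace>.servicebus.windows.net/;"
    "SharedAccessKeyName=<policy>;SharedAccessKey=<key>"
)

# The checkpoint store keeps one blob per partition in this container.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    STORAGE_CONNECTION_STRING, container_name="azurelogs"
)

def on_event(partition_context, event):
    print(event.body_as_str())
    # Persist the offset so a restarted consumer resumes from this point
    # instead of re-reading the whole retention period.
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    conn_str=EVENTHUB_CONNECTION_STRING,
    consumer_group="$Default",
    eventhub_name="adlogs",
    checkpoint_store=checkpoint_store,
)

with client:
    client.receive(on_event=on_event)
```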
Elastic strongly recommends installing the individual integrations (for example, "Azure Active Directory logs" or "Azure Activity logs") instead of the collective one ("Azure Logs"). This allows you to have a dedicated event hub for each Azure service or log group, the recommended approach for optimal performance.
Before adding the integration, you must complete the following tasks.
The event hub receives the logs exported from the Azure service and makes them available to the Elastic Agent to pick up.
Here's the high-level overview of the required steps:
For a detailed step-by-step guide, check the quickstart Create an event hub using Azure portal.
Take note of the event hub Name, which you will use later when specifying an eventhub in the integration settings.
You should use the event hub name (not the event hub namespace name) as a value for the eventhub option in the integration settings.
If you are new to Event Hub, think of the event hub namespace as the cluster and the event hub as the topic. You will typically have one cluster and multiple topics.
If you are familiar with Kafka, here's a conceptual mapping between the two:
Kafka Concept | Event Hub Concept |
---|---|
Cluster | Namespace |
Topic | An event hub |
Partition | Partition |
Consumer Group | Consumer Group |
Offset | Offset |
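If you prefer to script the event hub creation instead of using the Azure portal, the following sketch uses the azure-mgmt-eventhub package. It assumes an existing namespace; the subscription, resource group, namespace, and event hub names are hypothetical.

```python
# Sketch: create an event hub in an existing namespace with azure-mgmt-eventhub.
# Requires: pip install azure-identity azure-mgmt-eventhub
# Subscription, resource group, namespace, and event hub names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.event_hubs.create_or_update(
    resource_group_name="logs-rg",
    namespace_name="elastic-logs-ns",   # the namespace ("cluster")
    event_hub_name="adlogs",            # the event hub ("topic") the Agent reads
    parameters={
        "partition_count": 4,
        "message_retention_in_days": 7,
    },
)
```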
Elastic strongly recommends creating one event hub for each Azure service you collect data from.
For example, if you plan to collect Azure Active Directory (Azure AD) logs and Activity logs, create two event hubs: one for Azure AD and one for Activity logs.
Here's a high-level diagram of the solution:
┌────────────────┐ ┌──────────────┐ ┌────────────────┐
│ Azure AD │ │ Diagnostic │ │ adlogs │
│ <<service>> │──▶│ settings │──▶│ <<event hub>> │──┐
└────────────────┘ └──────────────┘ └────────────────┘ │ ┌────────────┐
│ │ Elastic │
├──▶│ Agent │
┌────────────────┐ ┌──────────────┐ ┌────────────────┐ │ └────────────┘
│ Azure Monitor │ │ Diagnostic │ │ activitylogs │ │
│ <<service>> ├──▶│ settings │──▶│ <<event hub>> │──┘
└────────────────┘ └──────────────┘ └────────────────┘
Having one event hub for each Azure service is beneficial in terms of performance and ease of troubleshooting.
For high-volume deployments, we recommend one event hub for each data stream:
┌──────────────┐ ┌─────────────────────┐
│ Diagnostic │ │ signin (adlogs) │
┌─▶│ settings │──▶│ <<event hub>> │──┐
│ └──────────────┘ └─────────────────────┘ │
│ │
┌────────────────┐ │ ┌──────────────┐ ┌─────────────────────┐ │ ┌────────────┐
│ Azure AD │ │ │ Diagnostic │ │ audit (adlogs) │ │ │ Elastic │
│ <<service>> │─┼─▶│ settings │──▶│ <<event hub>> │──┼─▶│ Agent │
└────────────────┘ │ └──────────────┘ └─────────────────────┘ │ └────────────┘
│ │
│ ┌──────────────┐ ┌─────────────────────┐ │
│ │ Diagnostic │ │provisioning (adlogs)│ │
└─▶│ settings │──▶│ <<event hub>> │──┘
└──────────────┘ └─────────────────────┘
Like all other event hub clients, Elastic Agent needs a consumer group name to access the event hub.
A Consumer Group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple agents to each have a separate view of the event stream, and to read the logs independently at their own pace and with their own offsets.
Consumer groups allow multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required.
In most cases, you can use the default consumer group named $Default. If $Default is already used by other applications, you can create a consumer group dedicated to the Azure Logs integration.
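If you need a dedicated consumer group, you can create one programmatically with the azure-mgmt-eventhub package, as in this sketch; the resource names and the elastic-agent consumer group name are hypothetical.

```python
# Sketch: create a consumer group dedicated to the Azure Logs integration
# with azure-mgmt-eventhub. All names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.consumer_groups.create_or_update(
    resource_group_name="logs-rg",
    namespace_name="elastic-logs-ns",
    event_hub_name="adlogs",
    consumer_group_name="elastic-agent",  # use this value as consumer_group
    parameters={},
)
```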
The Elastic Agent requires a connection string to access the event hub and fetch the exported logs. The connection string contains details about the event hub used and the credentials required to access it.
To get the connection string for your event hub namespace:
Create a new Shared Access Policy (SAS):
When the SAS Policy is ready, select it to display the information panel.
Take note of the Connection string–primary key, which you will use later when specifying a connection_string in the integration settings.
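As an alternative to copying the value from the portal, the following sketch retrieves the connection string with the azure-mgmt-eventhub package. The resource names are hypothetical, and the authorization rule name should match the SAS policy you created (RootManageSharedAccessKey is the built-in namespace policy).

```python
# Sketch: read the connection string (primary key) of a SAS policy with
# azure-mgmt-eventhub instead of copying it from the portal.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

keys = client.namespaces.list_keys(
    resource_group_name="logs-rg",
    namespace_name="elastic-logs-ns",
    authorization_rule_name="RootManageSharedAccessKey",  # or the SAS policy you created
)

# Use this value as connection_string in the integration settings.
print(keys.primary_connection_string)
```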
The Diagnostic settings export the logs from Azure services to a destination; to use the Azure Logs integration, that destination must be an event hub.
To create a diagnostic setting to export logs:
On the diagnostic settings page, select the source log categories you want to export and then select their destination.
Each Azure service exports a well-defined list of log categories. Check the individual integration doc to learn which log categories are supported by the integration.
Select the subscription and the event hub namespace you previously created. Select the event hub dedicated to this integration.
┌────────────────┐ ┌──────────────┐ ┌────────────────┐ ┌────────────┐
│ Azure AD │ │ Diagnostic │ │ adlogs │ │ Elastic │
│ <<service>> ├──▶│ settings │──▶│ <<event hub>> │─────▶│ Agent │
└────────────────┘ └──────────────┘ └────────────────┘ └────────────┘
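Diagnostic settings can also be created programmatically. This is a hedged sketch using the azure-mgmt-monitor package; the resource URI, log category, and all names are placeholders, and note that some services (such as Azure Active Directory) configure their diagnostic settings from their own portal blade rather than the generic resource blade.

```python
# Sketch: create a diagnostic setting that exports one log category to the
# event hub, using azure-mgmt-monitor. Resource IDs, the category, and all
# names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.diagnostic_settings.create_or_update(
    resource_uri=(
        "/subscriptions/<subscription-id>/resourceGroups/logs-rg"
        "/providers/<provider>/<resource-type>/<resource-name>"
    ),
    name="elastic-diagnostics",
    parameters={
        "event_hub_authorization_rule_id": (
            "/subscriptions/<subscription-id>/resourceGroups/logs-rg"
            "/providers/Microsoft.EventHub/namespaces/elastic-logs-ns"
            "/authorizationRules/RootManageSharedAccessKey"
        ),
        "event_hub_name": "adlogs",
        "logs": [
            {"category": "<log-category>", "enabled": True},
        ],
    },
)
```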
The Elastic Agent stores the consumer group information (state, position, or offset) in a Storage account container. Making this information available to all agents allows them to share the logs processing and resume from the last processed logs after a restart.
To create the Storage account:
Take note of the Storage account name, which you will use later when specifying the storage_account in the integration settings.
When the new Storage account is ready, you can look for the access keys:
Take note of the Key value, which you will use later when specifying the storage_account_key in the integration settings.
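If you want to script this step, the following sketch looks up the access key with the azure-mgmt-storage package; the resource group and storage account names are hypothetical.

```python
# Sketch: read the storage account access key with azure-mgmt-storage.
# The resource group and storage account names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

keys = client.storage_accounts.list_keys("logs-rg", "elasticlogsstore")

# Use these values as storage_account and storage_account_key in the integration.
print(keys.keys[0].value)
```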
This is the final diagram of the setup for collecting Azure Active Directory logs with the Elastic Agent:
┌────────────────┐ ┌──────────────┐ ┌────────────────┐ ┌────────────┐
│Active Directory│ │ Diagnostic │ │ adlogs │ logs │ Elastic │
│ <<service>> ├──▶│ settings │──▶│ <<event hub>> │────────▶│ Agent │
└────────────────┘ └──────────────┘ └────────────────┘ └────────────┘
│
┌──────────────┐ consumer group info │
│ azurelogs │ (state, position, or │
│<<container>> │◀───────────────offset)──────────────┘
└──────────────┘
The Elastic Agent can use one Storage account container for all integrations.
The Agent will use the integration name and the event hub name to uniquely identify the blob that stores the consumer group information.
Use the following settings to configure the Azure Logs integration when you add it to Fleet.
eventhub : string
A fully managed, real-time data ingestion service. Elastic recommends using only letters, numbers, and the hyphen (-) character for Event Hub names to maximize compatibility. You can use existing Event Hubs having underscores (_) in the Event Hub name; in this case, the integration will replace underscores with hyphens (-) when it uses the Event Hub name to create dependent Azure resources behind the scenes (e.g., the storage account container to store Event Hub consumer offsets). Elastic also recommends using a separate event hub for each log type as the field mappings of each log type differ.
Default value: insights-operational-logs.
consumer_group : string
Enable the publish/subscribe mechanism of Event Hubs with consumer groups. A consumer group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets.
Default value: $Default.
connection_string : string
The connection string required to communicate with Event Hubs. See Get an Event Hubs connection string for more information.
A Blob Storage account is required to store/retrieve/update the offset or state of the Event Hub messages. This allows the integration to pick back up where it stopped processing messages.
storage_account : string
The name of the storage account where the state/offsets will be stored and updated.
storage_account_key : string
The storage account key. Used to authorize access to data in your storage account.
storage_account_container : string
The storage account container where the integration stores the checkpoint data for the consumer group. It is an advanced option to use with extreme care. You MUST use a dedicated storage account container for each Azure log type (activity, sign-in, audit logs, and others). DO NOT REUSE the same container name for more than one Azure log type. See Container Names for details on naming rules from Microsoft. The integration generates a default container name if not specified.
resource_manager_endpoint : string
Optional. By default, the integration uses the Azure public environment. To override this and use a different Azure environment, provide a specific resource manager endpoint.
Examples:
https://management.chinacloudapi.cn/
https://management.microsoftazure.de/
https://management.azure.com/
https://management.usgovcloudapi.net/
This setting can also be used to define your own endpoints, like for hybrid cloud models.
Visit the page for each individual Azure Logs integration to see details about exported fields and sample events.
Version | Details |
---|---|
1.5.13 | Enhancement View pull request Extend the Storage Account container documentation and add link to requirements and setup instructions |
1.5.12 | Enhancement View pull request Added categories and/or subcategories. |
1.5.11 | Enhancement View pull request Add a new message format to the AzureFirewallNetworkRule log category |
1.5.10 | Bug fix View pull request Check for 'event.original' already existing in Application Gateway and Event Hub ingest pipelines |
1.5.9 | Bug fix View pull request Check for 'event.original' already existing in firewall logs ingest pipeline |
1.5.8 | Bug fix View pull request Add storage_account_container option to the Application Gateway integration |
1.5.7 | Bug fix View pull request Fix parsing of authentication_processing_details field in signin logs |
1.5.6 | Bug fix View pull request Fix parsing error when client port is blank and adjust for timeStamp |
1.5.5 | Bug fix View pull request Rename identity as identity_name when the value is a string |
1.5.4 | Enhancement View pull request Enable Event Hub integration by default and improve documentation |
1.5.3 | Enhancement View pull request Data streams start as disabled on new installs |
1.5.2 | Bug fix View pull request Fix PR link in changelog |
1.5.1 | Bug fix View pull request Fix documentations formatting (remove extra 'Overview' heading) |
1.5.0 | Enhancement View pull request Add Azure Application Gateway data stream |
1.4.1 | Enhancement View pull request Update Azure Logs documentation |
1.4.0 | Enhancement View pull request Add two new data streams to the Azure AD logs integration: Azure Identity Protection logs and Provisioning logs |
1.3.0 | Enhancement View pull request Add the possibility to override the default generated storage account container |
1.2.3 | Enhancement View pull request Update docs with recommended Event Hub configuration |
1.2.2 | Enhancement View pull request Update package name and description to align with standard wording |
1.2.1 | Bug fix View pull request Fix Azure Sign-in logs ingest pipeline bug |
1.2.0 | Enhancement View pull request Support Azure firewall logs |
1.1.11 | Bug fix View pull request Improve support for event.original field from upstream forwarders. |
1.1.10 | Enhancement View pull request Update readme with links to Microsoft documentation |
1.1.9 | Bug fix View pull request Improve handling of IPv6 IP addresses. |
1.1.8 | Enhancement View pull request Update docs with details about Event Hub name recommendations |
1.1.7 | Bug fix View pull request Add geo.name and result_description fields in platformlogs |
1.1.6 | Bug fix View pull request Fix azure.activitylogs.identity with a concrete value Bug fix View pull request Add identity_name, tenant_id, level and operation_version into activity logs |
1.1.5 | Enhancement View pull request Add documentation for multi-fields |
1.1.4 | Bug fix View pull request Fix event.duration field mapping conflict in all Azure data streams. |
1.1.3 | Enhancement View pull request Added the forwarded tag by default to all log types. |
1.1.2 | Bug fix View pull request Add device_detail.is_compliant and device_detail.is_managed fields Bug fix View pull request Change authentication_requirement_policies to flattened type |
1.1.1 | Bug fix View pull request Fix field mapping conflict in the auditlogs data stream for client.ip . Changed azure-eventhub.offset and azure-eventhub.sequence_number from keyword to long in the eventhub data stream. |
1.1.0 | Enhancement View pull request Support new Azure audit logs and signin logs |
1.0.1 | Enhancement View pull request Remove beta release tag from data streams |
1.0.0 | Enhancement View pull request Move azure package to GA |
0.12.3 | Enhancement View pull request Update to ECS 8.0 |
0.12.2 | Bug fix View pull request Regenerate test files using the new GeoIP database |
0.12.1 | Bug fix View pull request Change test public IPs to the supported subset |
0.12.0 | Enhancement View pull request Release azure package for v8.0.0 |
0.11.0 | Enhancement View pull request Add Azure Event Hub Input |
0.10.1 | Enhancement View pull request Uniform with guidelines |
0.10.0 | Enhancement View pull request signinlogs - Add support for ManagedIdentitySignInLogs, NonInteractiveUserSignInLogs, and ServicePrincipalSignInLogs. |
0.9.2 | Bug fix View pull request Prevent pipeline script error |
0.9.1 | Bug fix View pull request Fix logic that checks for the 'forwarded' tag |
0.9.0 | Enhancement View pull request Update to ECS 1.12.0 |
0.8.6 | Bug fix View pull request Add ECS client.ip mapping |
0.8.5 | Enhancement View pull request Update docs and logo |
0.8.4 | Enhancement View pull request Convert to generated ECS fields |
0.8.3 | Enhancement View pull request Import geo_points from ECS |
0.8.2 | Enhancement View pull request Update error message |
0.8.1 | Enhancement View pull request Add support for springcloud logs inside the platformlogs pipeline |
0.8.0 | Enhancement View pull request Import ECS field definitions |
0.7.0 | Enhancement View pull request Add spring cloud logs |
0.6.2 | Enhancement View pull request update to ECS 1.11.0 |
0.6.1 | Enhancement View pull request Escape special characters in docs |
0.6.0 | Enhancement View pull request Update integration description |
0.5.1 | Enhancement View pull request Re-add pipeline changes for invalid json |
0.5.0 | Enhancement View pull request Add input groups |
0.4.0 | Enhancement View pull request Set "event.module" and "event.dataset" |
0.3.1 | Enhancement View pull request sync package with module changes |
0.3.0 | Enhancement View pull request update to ECS 1.10.0 and adding event.original options |
0.2.3 | Enhancement View pull request update to ECS 1.9.0 |
0.2.2 | Bug fix View pull request Correct sample event file. |
0.2.1 | Bug fix View pull request Add check for empty configuration options. |
0.2.0 | Enhancement View pull request Add changes to use ECS 1.8 fields. |
0.0.1 | Enhancement View pull request initial release |