Last updated: Apr 10th, 2023

ForgeRock

Collect audit logs from ForgeRock with Elastic Agent.

What is an Elastic integration?

This integration is powered by Elastic Agent. Elastic Agent is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. Refer to our documentation for a detailed comparison between Beats and Elastic Agent.

Prefer to use Beats for this use case? See Filebeat modules for logs or Metricbeat modules for metrics.

ForgeRock is a modern identity platform that helps organizations radically simplify identity and access management (IAM) and identity governance and administration (IGA). The ForgeRock integration collects audit logs from the ForgeRock Identity Cloud API.

Configuration

Authorization parameters for the ForgeRock Identity Cloud API (API Key ID and API Key Secret) can be created in the Identity Cloud admin UI.
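
To verify that the key pair works before enabling the integration, you can call the Identity Cloud log API directly. The snippet below is a minimal Python sketch; the /monitoring/logs endpoint, the am-access source name, and the x-api-key / x-api-secret headers are assumptions based on ForgeRock's Identity Cloud log API, so confirm them against the ForgeRock documentation for your tenant.

import requests

# Placeholders: use your tenant FQDN and the key pair created in the
# Identity Cloud admin UI.
TENANT = "https://openam-example.forgeblocks.com"
API_KEY_ID = "your-api-key-id"
API_KEY_SECRET = "your-api-key-secret"

# Fetch one page of AM access audit events. The endpoint path, query
# parameter, and header names are assumptions; adjust them to match
# your tenant's API.
response = requests.get(
    f"{TENANT}/monitoring/logs",
    params={"source": "am-access"},
    headers={"x-api-key": API_KEY_ID, "x-api-secret": API_KEY_SECRET},
    timeout=30,
)
response.raise_for_status()
for entry in response.json().get("result", []):
    print(entry)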

Logs

AM_Access events

This is the forgerock.am_access dataset. These logs capture all incoming Identity Cloud access calls as audit events. This includes who, what, when, and the output for every access request. More information about these logs.
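
Once the integration is running, these events are written to the logs-forgerock.am_access-<namespace> data stream. The following is a minimal sketch, assuming the default namespace, placeholder connection details, and the official Python Elasticsearch client, that lists the most recent access-attempt events.

from elasticsearch import Elasticsearch

# Placeholder connection details for your Elasticsearch deployment.
es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# List the ten most recent AM-ACCESS-ATTEMPT events from the am_access
# data stream (default namespace assumed).
resp = es.search(
    index="logs-forgerock.am_access-default",
    query={"term": {"event.action": "AM-ACCESS-ATTEMPT"}},
    sort=[{"@timestamp": "desc"}],
    size=10,
)
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["@timestamp"], doc.get("client", {}).get("ip"), doc["event"]["action"])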

An example event for am_access looks as follows:

{
    "@timestamp": "2022-10-05T18:21:48.248Z",
    "client": {
        "ip": "1.128.0.0"
    },
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "action": "AM-ACCESS-ATTEMPT",
        "id": "45463f84-ff1b-499f-aa84-8d4bd93150de-256203",
        "type": "access"
    },
    "forgerock": {
        "eventName": "AM-ACCESS-ATTEMPT",
        "http": {
            "request": {
                "headers": {
                    "accept": [
                        "text/plain,*/*"
                    ],
                    "content-type": [
                        "application/x-www-form-urlencoded"
                    ],
                    "host": [
                        "openam-chico-poc.forgeblocks.com"
                    ],
                    "user-agent": [
                        "Jersey/2.34 (HttpUrlConnection 11.0.9)"
                    ],
                    "x-forwarded-for": [
                        "34.94.38.177, 34.149.144.150, 10.168.0.8"
                    ],
                    "x-forwarded-proto": [
                        "https"
                    ]
                },
                "secure": true
            }
        },
        "level": "INFO",
        "realm": "/",
        "request": {
            "detail": {
                "grant_type": "client_credentials",
                "scope": "fr:idm:*"
            }
        },
        "source": "audit",
        "topic": "access"
    },
    "http": {
        "request": {
            "Path": "https://openam-chico-poc.forgeblocks.com/am/oauth2/access_token",
            "method": "POST"
        }
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "service": {
        "name": "OAuth"
    },
    "transaction": {
        "id": "1664994108247-9f138d8fc9f59d23164c-26466/0"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
client.domain
The domain name of the client system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment.
keyword
client.ip
IP address of the client (IPv4 or IPv6).
ip
client.port
Port of the client.
long
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.action
The action captured by the event. This describes the information in the event. It is more specific than event.category. Examples are group-add, process-started, file-created. The value is normally defined by the implementer.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.duration
Duration of the event in nanoseconds. If event.start and event.end are known this value should be the difference between the end and start time.
long
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
event.outcome
This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. event.outcome simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. Note that when a single transaction is described in multiple events, each event may populate different values of event.outcome, according to their perspective. Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with event.type:info, or any events for which an outcome does not make logical sense.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.http.request.headers
The headers of the HTTP request.
object
forgerock.http.request.headers.accept
The accept parameter for the request.
keyword
forgerock.http.request.headers.accept-api-version
The accept-api-version header of the HTTP request.
keyword
forgerock.http.request.headers.content-type
The content-type header of the HTTP request.
keyword
forgerock.http.request.headers.host
The host header of the HTTP request.
keyword
forgerock.http.request.headers.origin
The origin header of the HTTP request.
keyword
forgerock.http.request.headers.user-agent
The user-agent header of the HTTP request.
keyword
forgerock.http.request.headers.x-forwarded-for
The x-forwarded-for header of the HTTP request.
keyword
forgerock.http.request.headers.x-forwarded-proto
The x-forwarded-proto header of the HTTP request.
keyword
forgerock.http.request.headers.x-requested-with
The x-requested-with header of the HTTP request.
keyword
forgerock.http.request.queryParameters
The query parameter string of the HTTP request.
object
forgerock.http.request.secure
A flag describing whether or not the HTTP request was secure.
boolean
forgerock.level
The log level.
keyword
forgerock.objectId
Specifies the identifier of an object that has been created, updated, or deleted.
keyword
forgerock.realm
The realm where the operation occurred.
keyword
forgerock.request.detail
Details around the request.
object
forgerock.request.detail.action
Details around the request action.
keyword
forgerock.request.detail.grant_type
The request's grant type.
keyword
forgerock.request.detail.scope
The request's scope.
keyword
forgerock.request.detail.token_type_hint
The request's token type.
keyword
forgerock.request.operation
The request operation.
keyword
forgerock.request.protocol
The protocol associated with the request; REST or PLL.
keyword
forgerock.response.detail
Details around the response status.
object
forgerock.response.detail.active
A flag for whether or not the response was active.
boolean
forgerock.response.detail.client_id
The response's client ID.
keyword
forgerock.response.detail.revision
The response's revision.
keyword
forgerock.response.detail.scope
The response's scope.
keyword
forgerock.response.detail.token_type
The response's token type.
keyword
forgerock.response.detail.username
The response's username.
keyword
forgerock.response.elapsedTime
Time to execute event.
date
forgerock.response.elapsedTimeUnits
Units for response time.
keyword
forgerock.response.status
Status indicator, usually SUCCESS/SUCCESSFUL or FAIL/FAILED.
keyword
forgerock.roles
IDM roles associated with the request.
keyword
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
forgerock.trackingIds
Specifies a unique random string generated as an alias for each AM session ID and OAuth 2.0 token.
keyword
http.request.Path
The path of the HTTP request.
keyword
http.request.method
HTTP request method. The value should retain its casing from the original event. For example, GET, get, and GeT are all considered valid values for this field.
keyword
http.response.body.content
The full HTTP response body.
wildcard
http.response.body.content.text
Multi-field of http.response.body.content.
match_only_text
http.response.status_code
HTTP response status code.
long
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
server.ip
IP address of the server (IPv4 or IPv6).
ip
service.name
Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the service.name could contain the cluster name. For Beats the service.name is by default a copy of the service.type field if no name is specified.
keyword
source.ip
IP address of the source (IPv4 or IPv6).
ip
source.port
Port of the source.
long
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.id
Unique identifier of the user.
keyword

AM_Activity events

This is the forgerock.am_activity dataset. These logs capture state changes to objects that have been created, updated, or deleted by Identity Cloud end users. This includes session, user profile, and device profile changes. More information about these logs.

An example event for am_activity looks as follows:

{
    "@timestamp": "2022-10-05T20:55:59.966Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "action": "AM-SESSION-CREATED",
        "id": "45463f84-ff1b-499f-aa84-8d4bd93150de-438366",
        "reason": "CREATE"
    },
    "forgerock": {
        "level": "INFO",
        "objectId": "45463f84-ff1b-499f-aa84-8d4bd93150de-438033",
        "realm": "/",
        "source": "audit",
        "topic": "activity",
        "trackingIds": [
            "45463f84-ff1b-499f-aa84-8d4bd93150de-438033"
        ]
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "service": {
        "name": "Session"
    },
    "transaction": {
        "id": "5ff83988-8f23-4108-9359-42658fcfc4d1-request-3/0"
    },
    "user": {
        "effective": {
            "id": "id=d7cd65bf-743c-4753-a78f-a20daae7e3bf,ou=user,ou=am-config"
        },
        "id": "id=d7cd65bf-743c-4753-a78f-a20daae7e3bf,ou=user,ou=am-config"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.action
The action captured by the event. This describes the information in the event. It is more specific than event.category. Examples are group-add, process-started, file-created. The value is normally defined by the implementer.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.duration
Duration of the event in nanoseconds. If event.start and event.end are known this value should be the difference between the end and start time.
long
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
forgerock.after
Specifies the JSON representation of the object after the activity.
object
forgerock.after.sunAMAuthInvalidAttemptsData
Example JSON representation of the object after the activity.
keyword
forgerock.before
Specifies the JSON representation of the object prior to the activity.
object
forgerock.before.sunAMAuthInvalidAttemptsData
Example JSON representation of the object prior to the activity.
object
forgerock.changedFields
Specifies the fields that were changed.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.objectId
Specifies the identifier of an object that has been created, updated, or deleted.
keyword
forgerock.realm
The realm where the operation occurred.
keyword
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
forgerock.trackingIds
Specifies a unique random string generated as an alias for each AM session ID and OAuth 2.0 token.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
service.name
Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the service.name could contain the cluster name. For Beats the service.name is by default a copy of the service.type field if no name is specified.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.effective.id
Unique identifier of the user.
keyword
user.id
Unique identifier of the user.
keyword

AM_Authentication events

This is the forgerock.am_authentication dataset. These logs capture when and how a user is authenticated and related audit events. More information about these logs.
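
Because each event carries event.outcome and forgerock.principal, a terms aggregation can quickly surface accounts with failed logins. Below is a minimal sketch using the Python Elasticsearch client; the default namespace, the connection details, and the event.outcome value of failure (the standard ECS value for unsuccessful events) are assumptions.

from elasticsearch import Elasticsearch

# Placeholder connection details for your Elasticsearch deployment.
es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# Count failed authentication events per principal over the last 24 hours.
resp = es.search(
    index="logs-forgerock.am_authentication-default",
    size=0,
    query={
        "bool": {
            "filter": [
                {"term": {"event.outcome": "failure"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    aggs={"by_principal": {"terms": {"field": "forgerock.principal", "size": 20}}},
)
for bucket in resp["aggregations"]["by_principal"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])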

An example event for am_authentication looks as follows:

{
    "@timestamp": "2022-10-05T18:21:48.253Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "action": "AM-LOGIN-COMPLETED",
        "category": "authentication",
        "id": "45463f84-ff1b-499f-aa84-8d4bd93150de-256208",
        "outcome": "success"
    },
    "forgerock": {
        "entries": [
            {
                "info": {
                    "authIndex": "module_instance",
                    "authIndexValue": "Application",
                    "authLevel": "0",
                    "ipAddress": "1.128.0.0"
                },
                "moduleId": "Application"
            }
        ],
        "eventName": "AM-LOGIN-COMPLETED",
        "level": "INFO",
        "principal": [
            "autoid-resource-server"
        ],
        "realm": "/",
        "source": "audit",
        "topic": "authentication",
        "trackingIds": [
            "45463f84-ff1b-499f-aa84-8d4bd93150de-256204"
        ]
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "service": {
        "name": "Authentication"
    },
    "transaction": {
        "id": "1664994108247-9f138d8fc9f59d23164c-26466/0"
    },
    "user": {
        "id": "id=autoid-resource-server,ou=agent,ou=am-config"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
event.outcome
This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. event.outcome simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. Note that when a single transaction is described in multiple events, each event may populate different values of event.outcome, according to their perspective. Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with event.type:info, or any events for which an outcome does not make logical sense.
keyword
forgerock.entries
The JSON representation of the details of an authentication module, chain, tree, or node.
flattened
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.principal
The array of accounts used to authenticate.
keyword
forgerock.realm
The realm where the operation occurred.
keyword
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
forgerock.trackingIds
Specifies a unique random string generated as an alias for each AM session ID and OAuth 2.0 token.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
service.name
Name of the service data is collected from. The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. In the case of Elasticsearch the service.name could contain the cluster name. For Beats the service.name is by default a copy of the service.type field if no name is specified.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.id
Unique identifier of the user.
keyword

AM_Config events

This is the forgerock.am_config dataset. These logs capture access management configuration changes for Identity Cloud, including when each change was made and by whom. More information about these logs.

An example event for am_config looks as follows:

{
    "@timestamp": "2022-09-20T14:40:10.664Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "action": "AM-CONFIG-CHANGE",
        "category": "configuration",
        "id": "4e8550cd-71d6-4a08-b5b0-bb63bcbbc960-20605"
    },
    "forgerock": {
        "level": "INFO",
        "objectId": "ou=test,ou=agentgroup,ou=OrganizationConfig,ou=1.0,ou=AgentService,ou=services,o=alpha,ou=services,ou=am-config",
        "operation": "CREATE",
        "realm": "/alpha",
        "source": "audit",
        "topic": "config",
        "trackingIds": [
            "4e8550cd-71d6-4a08-b5b0-bb63bcbbc960-5563"
        ]
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "transaction": {
        "id": "1663684810619-c42f8145dec437c43428-2465/0"
    },
    "user": {
        "effective": {
            "id": "id=dsameuser,ou=user,ou=am-config"
        },
        "id": "id=d7cd65bf-743c-4753-a78f-a20daae7e3bf,ou=user,ou=am-config"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.action
The action captured by the event. This describes the information in the event. It is more specific than event.category. Examples are group-add, process-started, file-created. The value is normally defined by the implementer.
keyword
event.category
This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. event.category represents the "big buckets" of ECS categories. For example, filtering on event.category:process yields all events relating to process activity. This field is closely related to event.type, which is used as a subcategory. This field is an array. This will allow proper categorization of some events that fall in multiple categories.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
forgerock.changedFields
Specifies the fields that were changed.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.objectId
Specifies the identifier of an object that has been created, updated, or deleted.
keyword
forgerock.operation
The state change operation invoked.
keyword
forgerock.realm
The realm where the operation occurred.
keyword
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
forgerock.trackingIds
Specifies a unique random string generated as an alias for each AM session ID and OAuth 2.0 token.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.effective.id
Unique identifier of the user.
keyword
user.id
Unique identifier of the user.
keyword

AM_Core events

This is the forgerock.am_core dataset. These logs capture access management debug logs for Identity Cloud. More information about these logs.

An example event for am_core looks as follows:

{
    "@timestamp": "2022-12-05T19:29:20.845Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "reason": "Connection attempt failed: availableConnections=0, maxPoolSize=10"
    },
    "forgerock": {
        "context": "default"
    },
    "log": {
        "level": "DEBUG",
        "logger": "org.forgerock.opendj.ldap.CachedConnectionPool"
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "process": {
        "name": "LDAP SDK Default Scheduler"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
error.stack_trace
The stack trace of this error in plain text.
wildcard
error.stack_trace.text
Multi-field of error.stack_trace.
match_only_text
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
event.reason
Reason why this event happened, according to the source. This describes the why of a particular action or outcome captured in the event. Where event.action captures the action from the event, event.reason describes why that action was taken. For example, a web proxy with an event.action which denied the request may also populate event.reason with the reason why (e.g. blocked site).
keyword
forgerock.context
The context of the debug event.
keyword
input.type
Input type
keyword
log.level
Original log level of the log event. If the source of the event provides a log level or textual severity, this is the one that goes in log.level. If your source doesn't specify one, you may put your event transport's severity here (e.g. Syslog severity). Some examples are warn, err, i, informational.
keyword
log.logger
The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name.
keyword
observer.vendor
Vendor name of the observer.
keyword
process.name
Process name. Sometimes called program name or similar.
keyword
process.name.text
Multi-field of process.name.
match_only_text
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword

IDM_access events

This is the forgerock.idm_access dataset. These logs capture messages for the identity management REST endpoints and the invocation of scheduled tasks. This is the who, what, and output for every identity management access request in Identity Cloud. More information about these logs.

An example event for idm_access looks as follows:

{
    "@timestamp": "2022-11-01T15:04:50.110Z",
    "client": {
        "ip": "1.128.0.0",
        "port": 56278
    },
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "duration": 2000000,
        "id": "a9a32d9e-7029-45e6-b581-eafb5d502273-49025",
        "outcome": "success",
        "type": "access"
    },
    "forgerock": {
        "eventName": "access",
        "http": {
            "request": {
                "headers": {
                    "host": [
                        "idm"
                    ]
                },
                "secure": false
            }
        },
        "level": "INFO",
        "request": {
            "operation": "READ",
            "protocol": "CREST"
        },
        "response": {
            "elapsedTime": 2,
            "elapsedTimeUnits": "MILLISECONDS",
            "status": "SUCCESSFUL"
        },
        "roles": [
            "internal/role/openidm-reg"
        ],
        "source": "audit",
        "topic": "access"
    },
    "http": {
        "request": {
            "Path": "http://idm/openidm/info/ping",
            "method": "GET"
        },
        "response": {
            "status_code": 200
        }
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "server": {
        "ip": "175.16.199.0"
    },
    "transaction": {
        "id": "a9a32d9e-7029-45e6-b581-eafb5d502273-49021"
    },
    "user": {
        "id": "anonymous"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
client.ip
IP address of the client (IPv4 or IPv6).
ip
client.port
Port of the client.
long
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.http.request.headers.host
The host header of the HTTP request.
keyword
forgerock.http.request.secure
A flag describing whether or not the HTTP request was secure.
boolean
forgerock.level
The log level.
keyword
forgerock.request.operation
The request operation.
keyword
forgerock.request.protocol
The protocol associated with the request; REST or PLL.
keyword
forgerock.response.elapsedTime
Time to execute event.
date
forgerock.response.elapsedTimeUnits
Units for response time.
keyword
forgerock.response.status
Status indicator, usually SUCCESS/SUCCESSFUL or FAIL/FAILED.
keyword
forgerock.roles
IDM roles associated with the request.
keyword
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
http.request.Path
The path of the HTTP request.
keyword
http.request.method
HTTP request method. The value should retain its casing from the original event. For example, GET, get, and GeT are all considered valid values for this field.
keyword
http.response.status_code
HTTP response status code.
long
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
server.ip
IP address of the server (IPv4 or IPv6).
ip
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.id
Unique identifier of the user.
keyword

IDM_activity events

This is the forgerock.idm_activity dataset. These logs capture operations on internal (managed) and external (system) objects in Identity Cloud. The idm_activity logs record changes to identity content, such as adding or updating users and changing passwords. More information about these logs.
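
The forgerock.passwordChanged flag makes it easy to isolate password-change activity in this data stream. The sketch below, again assuming the Python Elasticsearch client, placeholder connection details, and the default namespace, lists recent activity events where a password was changed.

from elasticsearch import Elasticsearch

# Placeholder connection details for your Elasticsearch deployment.
es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# List recent idm_activity events where the password was changed.
resp = es.search(
    index="logs-forgerock.idm_activity-default",
    query={"term": {"forgerock.passwordChanged": True}},
    sort=[{"@timestamp": "desc"}],
    size=20,
)
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["@timestamp"], doc["forgerock"]["objectId"], doc.get("user", {}).get("id"))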

An example event for idm_activity looks as follows:

{
    "@timestamp": "2022-11-01T17:55:08.523Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "id": "a9a32d9e-7029-45e6-b581-eafb5d502273-259113",
        "outcome": "success"
    },
    "forgerock": {
        "eventName": "activity",
        "level": "INFO",
        "objectId": "internal/role/8713dd4e-3f4a-480d-9172-3a70a2dea73f",
        "operation": "PATCH",
        "passwordChanged": false,
        "revision": "6e415003-2c2e-46a1-9546-8eaa4e5f68d8-2451",
        "source": "audit",
        "topic": "activity"
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "transaction": {
        "id": "1667325297350-5f3959fa550528a7ef3d-23359/0"
    },
    "user": {
        "effective": {
            "id": "d7cd65bf-743c-4753-a78f-a20daae7e3bf"
        },
        "id": "d7cd65bf-743c-4753-a78f-a20daae7e3bf"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.message
Human readable text about the action.
keyword
forgerock.objectId
Specifies the identifier of an object that has been created, updated, or deleted.
keyword
forgerock.operation
The state change operation invoked.
keyword
forgerock.passwordChanged
Boolean specifying whether changes were made to the password.
boolean
forgerock.revision
Specifies the object revision number.
integer
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.effective.id
Unique identifier of the user.
keyword
user.id
Unique identifier of the user.
keyword

IDM_authentication events

This is the forgerock.idm_authentication dataset. These logs capture the result each time a user authenticates to an /openidm endpoint to complete certain actions on an object. More information about these logs.

An example event for idm_authentication looks as follows:

{
    "@timestamp": "2022-10-05T18:21:48.253Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "category": "authentication",
        "id": "45463f84-ff1b-499f-aa84-8d4bd93150de-256208",
        "outcome": "success"
    },
    "forgerock": {
        "entries": [
            {
                "info": {
                    "authIndex": "module_instance",
                    "authIndexValue": "Application",
                    "authLevel": "0",
                    "ipAddress": "1.128.0.0"
                },
                "moduleId": "Application"
            }
        ],
        "eventName": "authentication",
        "level": "INFO",
        "method": "MANAGED_USER",
        "principal": [
            "openidm-admin"
        ],
        "result": "SUCCESSFUL",
        "topic": "authentication",
        "trackingIds": [
            "45463f84-ff1b-499f-aa84-8d4bd93150de-256204"
        ]
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "transaction": {
        "id": "1664994108247-9f138d8fc9f59d23164c-26466/0"
    },
    "user": {
        "id": "id=user"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
forgerock.entries
The JSON representation of the details of an authentication module, chain, tree, or node.
flattened
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.method
The authentication method, such as JWT or MANAGED_USER.
keyword
forgerock.principal
The array of accounts used to authenticate.
keyword
forgerock.result
Status indicator, usually SUCCESS/SUCCESSFUL or FAIL/FAILED.
keyword
forgerock.topic
The topic of the event.
keyword
forgerock.trackingIds
Specifies a unique random string generated as an alias for each AM session ID and OAuth 2.0 token.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.id
Unique identifier of the user.
keyword

IDM_config events

This is the forgerock.idm_config dataset. These logs capture configuration changes to Identity Cloud, including when each change was made and by whom. More information about these logs.

An example event for idm_config looks as follows:

{
    "@timestamp": "2022-10-19T16:12:12.549Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "category": "configuration",
        "id": "5e787c05-c32f-40d3-9e77-666376f6738f-134332"
    },
    "forgerock": {
        "changedFields": [
            "/mappings"
        ],
        "eventName": "CONFIG",
        "level": "INFO",
        "objectId": "sync",
        "source": "audit",
        "topic": "config"
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "transaction": {
        "id": "1666195908296-b802a87436c00618a43e-13149/0"
    },
    "user": {
        "effective": {
            "id": "d7cd65bf-743c-4753-a78f-a20daae7e3bf"
        },
        "id": "d7cd65bf-743c-4753-a78f-a20daae7e3bf"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
forgerock.changedFields
Specifies the fields that were changed.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.objectId
Specifies the identifier of an object that has been created, updated, or deleted.
keyword
forgerock.source
The source of the event.
keyword
forgerock.topic
The topic of the event.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.effective.id
Unique identifier of the user.
keyword
user.id
Unique identifier of the user.
keyword

IDM_core events

This is the forgerock.idm_core dataset. These logs capture identity management debug logs for Identity Cloud. More information about these logs.

An example event for idm_core looks as follows:

{
    "@timestamp": "2022-12-05T20:01:34.448Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "reason": "Dec 05, 2022 8:01:34 PM org.forgerock.openidm.internal.InternalObjectSet readInstance"
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
tags
List of keywords used to tag each event.
keyword

IDM_sync events

This is the forgerock.idm_sync dataset. These logs capture changes to objects that trigger automatic synchronization (live sync and implicit sync) when a repository is mapped to Identity Cloud. More information about these logs.

An example event for idm_sync looks as follows:

{
    "@timestamp": "2022-10-19T16:09:17.900Z",
    "ecs": {
        "version": "8.7.0"
    },
    "event": {
        "id": "5e787c05-c32f-40d3-9e77-666376f6738f-130280",
        "outcome": "success"
    },
    "forgerock": {
        "action": "ASYNC",
        "eventName": "sync",
        "level": "INFO",
        "linkQualifier": "default",
        "mapping": "managedalpha_user_managedMarketinglist",
        "situation": "SOURCE_IGNORED",
        "source": "audit",
        "sourceObjectId": "managed/alpha_user/9d88b635-9b7a-48d3-9a57-1978b99a5f41",
        "topic": "sync"
    },
    "observer": {
        "vendor": "ForgeRock Identity Platform"
    },
    "transaction": {
        "id": "1666195747447-56a35455016b7da218a6-11991/0"
    },
    "user": {
        "id": "d7cd65bf-743c-4753-a78f-a20daae7e3bf"
    }
}

Exported fields

Field | Description | Type
@timestamp
Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events.
date
data_stream.dataset
The field can contain anything that makes sense to signify the source of the data. Examples include nginx.access, prometheus, endpoint etc. For data streams that otherwise fit, but that do not have dataset set we use the value "generic" for the dataset value. event.dataset should have the same value as data_stream.dataset. Beyond the Elasticsearch data stream naming criteria noted above, the dataset value has additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.namespace
A user defined namespace. Namespaces are useful to allow grouping of data. Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with default. If no value is used, it falls back to default. Beyond the Elasticsearch index naming criteria noted above, namespace value has the additional restrictions: * Must not contain - * No longer than 100 characters
constant_keyword
data_stream.type
An overarching type for the data stream. Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future.
constant_keyword
ecs.version
ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events.
keyword
event.dataset
Name of the dataset. If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name.
keyword
event.id
Unique ID to describe the event.
keyword
event.module
Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), event.module should contain the name of this module.
keyword
event.outcome
This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. event.outcome simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. Note that when a single transaction is described in multiple events, each event may populate different values of event.outcome, according to their perspective. Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with event.type:info, or any events for which an outcome does not make logical sense.
keyword
forgerock.action
The synchronization action, depicted as a Common REST action.
keyword
forgerock.eventName
The name of the audit event.
keyword
forgerock.level
The log level.
keyword
forgerock.linkQualifier
ForgeRock's link qualifier applied to the action.
keyword
forgerock.mapping
Name of the mapping used for the synchronization operation.
keyword
forgerock.situation
The synchronization situation, for example SOURCE_IGNORED.
keyword
forgerock.source
The source of the event.
keyword
forgerock.sourceObjectId
Object ID on the source system.
keyword
forgerock.targetObjectId
Object ID on the target system.
keyword
forgerock.topic
The topic of the event.
keyword
input.type
Input type
keyword
observer.vendor
Vendor name of the observer.
keyword
tags
List of keywords used to tag each event.
keyword
transaction.id
Unique identifier of the transaction within the scope of its trace. A transaction is the highest level of work measured within a service, such as a request to a server.
keyword
user.id
Unique identifier of the user.
keyword

Changelog

Version | Details
1.1.0 | Enhancement (View pull request): Update package to ECS 8.7.0.
1.0.0 | Enhancement (View pull request): Initial draft of the package.