View this page in Japanese (日本語) | Chinese (简体中文) | Back to README
On this page, we’ll walk you through how to load logs from each AWS service into SIEM on Amazon OpenSearch Service. Follow the steps below to configure each AWS service.
- Common Configurations
- Security, Identity, & Compliance
- Management & Governance
- Networking & Content Delivery
- Storage
- Database
- Analytics
- Compute
- Containers
- End User Computing
- Multiple regions / multiple accounts
- Loading logs from an existing S3 bucket
SIEM on OpenSearch Service determines the log type based on the name and path of the file that is put into the Amazon Simple Storage Service (Amazon S3) bucket. The initial value used for this is either the default output path or file name of each service. Additional identifiable information is used for services where the log type cannot be determined using the default values only. If you want to output logs to S3 using a file path different from the initial value, create user.ini and add your own file name or S3 object key to the “s3_key” field. See Changing Configurations of SIEM on OpenSearch Service on how to edit user.ini.
If you have the privilege of setting an arbitrary output path to the S3 bucket, include your AWS account ID and region in the output path (as the prefix). This information will be attached to the loaded logs. However, if the information is already contained in the logs, the information in the logs will be prioritized.
If you want to store files in the S3 bucket with AWS Key Management Service (AWS KMS) encryption enabled, use the AWS KMS customer-managed key that is automatically created when you deploy SIEM on OpenSearch Service. The default alias name is aes-siem-key. You can also use an existing AWS KMS customer-managed key. Click here to see how to do this.
The AWS account ID used here for instructional purposes is 123456789012. Replace it with your own AWS account ID when following the steps.
The initial value of s3_key: GuardDuty
(part of the default output path)
- Log in to the AWS Management Console
- Navigate to the GuardDuty console
- Choose [Settings] from the left pane
- Scroll to [Findings export options] panel
- Frequency for updated findings: Choose [Update CWE and S3 every 15 minutes] and then choose [Save] (recommended)
- Choose [Configure now] for S3 bucket and enter the following parameters:
- Check [Existing bucket In your account]
- Choose a bucket: Choose [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- Log file prefix: Leave blank
- KMS encryption: Check [Choose key from your account]
- Key alias: Choose [aes-siem-key]
- Choose [Save]
Configuration is now complete. Choose [Generate sample findings] on the same settings screen to verify that loading into SIEM on OpenSearch Service has been successfully set up.
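If you prefer to script the same configuration instead of using the console, the following is a minimal boto3 sketch. It assumes a single detector in the region; the account ID, region, and KMS key ARN are placeholders you must replace.

```python
import boto3

guardduty = boto3.client("guardduty")

# Assumes one detector exists in this region
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Recommended: publish updated findings every 15 minutes
guardduty.update_detector(
    DetectorId=detector_id,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)

# Export findings to the SIEM log bucket, encrypted with the SIEM KMS key
guardduty.create_publishing_destination(
    DetectorId=detector_id,
    DestinationType="S3",
    DestinationProperties={
        "DestinationArn": "arn:aws:s3:::aes-siem-123456789012-log",  # placeholder account ID
        "KmsKeyArn": "arn:aws:kms:ap-northeast-1:123456789012:key/REPLACE-WITH-KEY-ID",  # placeholder
    },
)
```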
The initial value of s3_key: Inspector2_Finding
(specified in the Firehose output path)
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Firehose delivery stream and sets up EventBridge to deliver events to it. A template common to Security Hub and Config Rules. |
The initial value of s3_key: /CloudHSM/
(specified in the Firehose output path)
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Firehose delivery stream and sets up CloudWatch Logs subscription filters to deliver CloudWatch Logs to it. The Firehose exports the logs to the S3 bucket. |
The initial value of s3_key : /DirectoryService/MicrosoftAD/
(specified in the Firehose output path)
- Navigate to the Directory Service console and forward logs to CloudWatch Logs.
- Configure with CloudFormation
AWS WAF comes in two versions: AWS WAF and AWS WAF Classic. AWS WAF logs can be exported to the SIEM S3 bucket either via Kinesis Data Firehose, or to an S3 bucket for WAF logs and then replicated to the SIEM S3 bucket; you can choose either method. AWS WAF Classic logs are exported to the SIEM S3 bucket via Kinesis Data Firehose.
The initial value of s3_key: aws-waf-logs-
or _waflogs_
(part of the default output path)
Please refer to the following official documentation for how to export AWS WAF logs to the S3 bucket for WAF logs.
Logging and monitoring web ACL traffic / Amazon Simple Storage Service
Here's how to export AWS WAF ACL traffic to SIEM S3 bucket via Kinesis Data Firehose. Kinesis Data Firehose names must start with [aws-waf-logs-], and because this prefix is included in the file names when they are output to the S3 bucket, we are using it to determine the log type.
First, deploy Kinesis Data Firehose
- Navigate to the Amazon Kinesis console and select the region where AWS WAF was deployed
- Choose [Delivery streams] from the left pane => [Create delivery stream]
- On the [New delivery stream] screen, enter the following parameters:
- Delivery stream name: Enter [aws-waf-logs-XXXX(any name)]
- Source: Check [Direct PUT or other sources]
- Choose [Next]
- On the [Process records] screen, choose the following parameters:
- Data transformation: [Disabled]
- Record format conversion: [Disabled]
- Choose [Next]
- On the [Choose a destination] screen, choose/enter the following parameters:
- Destination: [Amazon S3]
- S3 bucket: [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- S3 prefix: Enter [AWSLogs/123456789012/WAF/region/]
- S3 error prefix: Enter [AWSLogs/123456789012/WAF/region/error/]
- Replace 123456789012 with your AWS account ID and region with your region (e.g. ap-northeast-1). If the resource to which you attach the WAF is CloudFront, set the region to global
- On the [Configure settings] screen, enter the following parameters:
- Buffer size: Enter [any number]
- Buffer interval: Enter [any number]
- S3 compression: [GZIP]
- Leave the following parameters as default
- Choose [Next]
- Choose [Create delivery stream]
- Navigate to the WAFv2 console
- Choose [Web ACLs] from the left pane
- From the drop-down menu at the center of the screen, choose the [region] where you deployed WAF => Choose the name of the target WAF to collect logs from
- Choose [Logging and metrics] tab => [Enable logging]
- From the [Amazon Kinesis Data Firehose Delivery Stream] drop-down menu, choose the [Kinesis Firehose you created]
- Choose [Enable logging] to complete the configuration
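If you manage AWS WAF with the API instead of the console, the logging configuration can be attached with a call like the boto3 sketch below; the web ACL ARN, delivery stream ARN, and region are placeholders (use us-east-1 and a CloudFront-scoped web ACL for CloudFront).

```python
import boto3

# For a CloudFront web ACL, use region_name="us-east-1"
wafv2 = boto3.client("wafv2", region_name="ap-northeast-1")

# Attach the aws-waf-logs-* delivery stream created above to the web ACL
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": "arn:aws:wafv2:ap-northeast-1:123456789012:regional/webacl/example/REPLACE-ID",  # placeholder
        "LogDestinationConfigs": [
            "arn:aws:firehose:ap-northeast-1:123456789012:deliverystream/aws-waf-logs-example"  # placeholder
        ],
    }
)
```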
- Navigate to the WAF Classic console
- Choose [Web ACLs] from the left pane
- From the drop-down menu at the center of the screen, choose the [region] where you deployed WAF => Choose the name of the target WAF to collect logs from
- Choose the [Logging] tab at the top right of the screen => Choose [Enable logging]
- From the [Amazon Kinesis Data Firehose] drop-down menu, choose the [Kinesis Firehose you created]
- Choose [Create] to complete the configuration
The initial value of s3_key: SecurityHub
or securityhub
(specified in the Firehose output path)
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Firehose delivery stream and sets up EventBridge to deliver events to it. A template common to Security Hub and Config Rules. |
- Log output is sent via Kinesis Data Firehose, and since there is no standard save path, use the above s3_key as the prefix of the destination S3 bucket for Kinesis Data Firehose.
- Create Firehose and EventBridge rules for each region when aggregating Security Hub findings from multiple regions
Configuring Kinesis Data Firehose
- Navigate to the Amazon Kinesis console
- Choose [Delivery streams] from the left pane
- Choose [Create delivery stream] at the top left of the screen
- On the [New delivery stream] screen, enter the following parameters:
- Delivery stream name: Enter [aes-siem-firehose-securityhub]
- Source: Check [Direct PUT or other sources]
- [Enable server-side encryption for source records in delivery stream] is optional
- Choose [Next]
- On the [Process records] screen, choose the following parameters:
- Data transformation: [Disabled]
- Record format conversion: [Disabled]
- Choose [Next]
- On the [Choose a destination] screen, choose/enter the following parameters:
- Destination: [Amazon S3]
- S3 bucket: [aes-siem-123456789012-log]
- S3 prefix: Enter [AWSLogs/123456789012/SecurityHub/[region]/]
- S3 error prefix: Enter [AWSLogs/123456789012/SecurityHub/[region]/error/]
- Replace 123456789012 with your AWS account and [region] with your region.
- On the [Configure settings] screen, enter the following parameters:
- Buffer size: Enter [any number]
- Buffer interval: Enter [any number]
- S3 compression: [GZIP]
- Leave the following parameters as default
- Choose [Next]
- Choose [Create delivery stream] to complete deployment of Kinesis Data Firehose
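For reference, the same delivery stream can be created with the API. The boto3 sketch below mirrors the console settings above; the IAM role ARN and region are placeholders, and the role must allow writing to the SIEM log bucket.

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="aes-siem-firehose-securityhub",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/REPLACE-WITH-FIREHOSE-ROLE",  # placeholder
        "BucketARN": "arn:aws:s3:::aes-siem-123456789012-log",
        "Prefix": "AWSLogs/123456789012/SecurityHub/ap-northeast-1/",
        "ErrorOutputPrefix": "AWSLogs/123456789012/SecurityHub/ap-northeast-1/error/",
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",
    },
)
```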
Configuring EventBridge
- Navigate to the EventBridge console
- Choose [Rules] from the left pane => [Create rule]
- Enter the following parameters on the [Create rule] screen:
- Name: aes-siem-securityhub-to-firehose
- Define pattern: Choose Event pattern
- Event matching pattern: Pre-defined pattern by service
- Service provider: AWS
- Service Name: Security Hub
- Event Type: Security Hub Findings - Imported
- No change required for the “Select event bus” pane
- Target: Firehose delivery stream
- Stream: aes-siem-firehose-securityhub
- Choose any value for the rest
- Choose [Create] to complete the configuration
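The rule above can also be created programmatically. This boto3 sketch uses the same event pattern (Security Hub Findings - Imported) and targets the delivery stream created earlier; the role ARN is a placeholder and must allow firehose:PutRecord and firehose:PutRecordBatch.

```python
import json
import boto3

events = boto3.client("events")

# Match Security Hub findings as they are imported
events.put_rule(
    Name="aes-siem-securityhub-to-firehose",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Imported"],
    }),
    State="ENABLED",
)

# Deliver matched events to the Firehose created above
events.put_targets(
    Rule="aes-siem-securityhub-to-firehose",
    Targets=[{
        "Id": "firehose",
        "Arn": "arn:aws:firehose:ap-northeast-1:123456789012:deliverystream/aes-siem-firehose-securityhub",
        "RoleArn": "arn:aws:iam::123456789012:role/REPLACE-WITH-EVENTS-ROLE",  # placeholder
    }],
)
```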
The initial value of s3_key: _network-firewall_
(part of the default output path)
Ref: AWS Network Firewall - Developer Guide - Logging and monitoring in AWS Network Firewall(S3)
The initial value of s3_key: CloudTrail/
or CloudTrail-Insight/
(part of the default output path)
Follow the steps below to output CloudTrail logs to the S3 bucket:
- Log in to the AWS Management Console
- Navigate to the CloudTrail console
- Select [Trails] from the left pane => Choose [Create trail] at the top right.
- Enter the following parameters on the [Choose trail attributes] screen.
- Trail name: [aes-siem-trail]
- Enable for all accounts in my organization: any (Skip this step if the field is grayed out and you are unable to check the box)
- Storage location: Check [Use existing S3 bucket]
- Select [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- Log file SSE-KMS encryption: Recommended to check [Enabled]
- AWS KMS customer managed CMK: Check [Existing]
- AWS KMS alias: Choose [aes-siem-key]
- Log file validation: Recommended to check [Enable]
- SNS notification delivery: Don’t check Enabled
- CloudWatch Logs: Don’t check Enabled
- Tags: any
- Choose [Next]
- On the [Choose log events] screen, enter the following parameters:
- Event type
- Management events: [checked]
- Data events: any
- Insights events: any
- Management events
- API activity: Check both [Read] and [Write]
- Exclude AWS KMS events: any
- Event type
- Choose [Next]
- Choose [Create trail]
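The same trail can be created via the API. Below is a minimal boto3 sketch assuming the SIEM log bucket and the aes-siem-key alias; it creates a multi-Region trail with log file validation enabled.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="aes-siem-trail",
    S3BucketName="aes-siem-123456789012-log",  # placeholder account ID
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
    KmsKeyId="alias/aes-siem-key",
)

# A trail does not record events until logging is started
cloudtrail.start_logging(Name="aes-siem-trail")
```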
- Configuration History: The initial value of s3_key: _ConfigHistory_ (part of the default output path)
- Configuration Snapshot: The initial value of s3_key: _ConfigSnapshot_ (part of the default output path)
To export AWS Config logs to the S3 bucket, see "Delivery method" in the Developer Guide Setting Up AWS Config with the Console. You can simply choose the SIEM log S3 bucket as the S3 bucket name.
The initial value of s3_key: Config.*Rules
(specified in the Firehose output path)
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Firehose delivery stream and sets up EventBridge to deliver events to it. A template common to Security Hub and Config Rules. |
The initial value of s3_key: (TrustedAdvisor|trustedadvisor)
(specified in the Firehose output path)
In order to collect Trusted Advisor results, the AWS support plan must be Business Support, Enterprise On-Ramp Support, or Enterprise Support. Please refer to Compare AWS Support Plans for details.
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Lambda function and sets up EventBridge to export Trusted Advisor check results to S3. |
For CloudFront, you can record requests sent to a distribution in two ways: standard logs (access logs) and real-time logs. Click here to see the difference between the two.
The initial value of s3_key: (^|\/)[0-9A-Z]{12,14}\.20\d{2}-\d{2}-\d{2}-\d{2}.[0-9a-z]{8}\.gz$$
The log type is determined by the default output file name using regular expressions. The logs do not contain AWS account IDs, so you should include them in the S3 prefix.
- Log in to the AWS Management Console
- Navigate to the Amazon CloudFront console
- Choose [Logs] from the left pane => [Distribution logs] tab
- Choose the [Distribution ID] for which you want to load logs
- Choose [Edit] which is next to [Standard logs] title
- Enter the following parameters in the [Edit standard logs] window that pops up
- Set Standard logs to [Enabled]
- S3 bucket: [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- S3 bucket prefix: [AWSLogs/123456789012/CloudFront/global/distribution ID/standard/]
- Replace “123456789012” with your AWS account ID, and "distribution ID" with your CloudFront distribution ID
- Cookie logging: [Yes]
- Choose [Update] to complete the configuration
CloudFront real-time logs are delivered to the data stream that you choose in Amazon Kinesis Data Streams. Then the log data is sent to Amazon S3 via Amazon Kinesis Data Firehose.
The initial value of s3_key: CloudFront/.*/realtime/
Because real-time logs do not have a standard storage path, specify the S3 path above with a prefix. You can use any characters for .* (period and asterisk), so include information such as the region. CloudFront logs do not contain AWS account IDs or distribution IDs, so be sure to include them in the S3 prefix.
Configure them in the following order:
- Kinesis Data Stream
- Kinesis Data Firehose
- CloudFront
Configuring Kinesis Data Stream and Kinesis Data Firehose:
- Log in to the AWS Management Console
- Navigate to the Amazon Kinesis console in N.Virginia region
- Choose [Data streams] from the left pane => [Create a data stream]
- Enter the following parameters on the [Create a data stream] screen
- Data stream name: Enter [any name]
- Number of open shards: Enter [any number of shards]
- Choose [Create data stream]
- Now you’re ready to configure Kinesis Data Firehose. Wait until the status of the created data stream becomes [active] and then choose [Process with delivery stream] from the [Consumers] pane at the bottom of the screen.
- On the [New delivery stream] screen, enter the following parameters:
- Delivery stream name: Enter [any name]
- Source: Check [Kinesis Data Stream]
- Kinesis data stream: Choose the [Kinesis data stream created in the previous step]
- Choose [Next]
- On the [Process records] screen, choose the following parameters:
- Data transformation: [Disabled]
- Record format conversion: [Disabled]
- Choose [Next]
- On the [Choose a destination] screen, choose/enter the following parameters:
- Destination: [Amazon S3]
- S3 bucket: [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- S3 prefix: [AWSLogs/123456789012/CloudFront/global/distribution ID/realtime/]
- Replace 123456789012 with your AWS account ID
- S3 error prefix: [AWSLogs/123456789012/CloudFront/global/distribution ID/realtime/error/]
- On the [Configure settings] screen, enter the following parameters:
- Buffer size: Enter [any number]
- Buffer interval: Enter [any number]
- S3 compression: [GZIP]
- Leave the following parameters as default
- Choose [Next]
- Choose [Create delivery stream]
Configuring Amazon CloudFront:
- Navigate to the Amazon CloudFront console
- Choose [Logs] from the left pane => Choose [Real-time log configurations] tab
- Choose [Create configuration] on the right side of the screen
- Enter the following parameters on the [Create real-time log configuration] screen
- Name: Enter [any name]
- Sampling rate: [100]
- Importing all logs into SIEM on OpenSearch Service
- Fields: [Check all fields]
- All are checked by default
- Endpoint: Choose the [Kinesis data stream created two steps previously]
- IAM role: Choose [Create new service role CloudFrontRealtimeLogConfiguRole-XXXXXXXXXXXX]
- Distribution: Choose [the target distribution]
- Cache behavior(s): Choose [Default(*)]
- Choose [Create configuration] to complete the configuration
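For reference, the real-time log configuration can also be created with the API. In the boto3 sketch below, the configuration name, role ARN, and Kinesis stream ARN are placeholders, and the field list is abbreviated; the console steps above recommend selecting all fields.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_realtime_log_config(
    Name="aes-siem-realtime-log",  # placeholder name
    SamplingRate=100,              # send all requests to the stream
    EndPoints=[{
        "StreamType": "Kinesis",
        "KinesisStreamConfig": {
            "RoleARN": "arn:aws:iam::123456789012:role/REPLACE-WITH-CLOUDFRONT-ROLE",     # placeholder
            "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/REPLACE-STREAM",  # placeholder
        },
    }],
    # Abbreviated; pass every field you want to capture
    Fields=["timestamp", "c-ip", "sc-status", "cs-method", "cs-uri-stem"],
)
```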
The initial value of s3_key: vpcdnsquerylogs
(part of the default output path)
- Navigate to the Route 53 Resolver console
- Choose [Query logging] from the left pane
- Enter the following parameters on the [Configure query logging] screen
- Name: Enter [any name]
- Destination for query logs: Choose [S3 bucket]
- Amazon S3 bucket: Choose [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- VPCs to log queries for: [Add any VPC]
- Choose [Configure query logging] to complete the configuration
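A minimal boto3 sketch of the same query logging configuration; the configuration name and VPC ID are placeholders.

```python
import uuid
import boto3

resolver = boto3.client("route53resolver")

# Send resolver query logs directly to the SIEM log bucket
config = resolver.create_resolver_query_log_config(
    Name="aes-siem-query-logging",  # placeholder name
    DestinationArn="arn:aws:s3:::aes-siem-123456789012-log",
    CreatorRequestId=str(uuid.uuid4()),
)

# Associate every VPC whose queries you want to log
resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["ResolverQueryLogConfig"]["Id"],
    ResourceId="vpc-REPLACE",  # placeholder VPC ID
)
```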
The initial value of s3_key: vpcflowlogs
(part of the default output path)
Follow the steps below to output VPC flow logs to the S3 bucket:
- Log in to the AWS Management Console
- Navigate to the Amazon VPC console
- Choose [VPC] or [Subnet] from the left pane => Check the box of the resource to load
- Choose the [Flow logs] tab at the bottom of the screen => Choose [ Create flow log]
- Enter the following parameters on the Create flow log screen
- Name: any name
- Filter: any, but [All] is recommended
- Maximum aggregation interval: any, but setting this to 1 minute will increase the log volume
- Destination: Check [Send to an S3 bucket]
- S3 bucket ARN: [arn:aws:s3:::aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- Log record format: Check [AWS default format] or check "Custom format" and select "Log format".
- Tags: any
- Choose [Create flow log]
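A minimal boto3 sketch of the same flow log; the VPC ID is a placeholder, and you can use ResourceType="Subnet" for subnet-level flow logs.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-REPLACE"],   # placeholder VPC ID
    ResourceType="VPC",            # or "Subnet"
    TrafficType="ALL",             # [All] is recommended above
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::aes-siem-123456789012-log",
    MaxAggregationInterval=600,    # 60 (1 minute) increases the log volume
)
```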
The initial value of s3_key: vpcflowlogs
(part of the default output path)
Follow the steps below to output VPC flow logs to the S3 bucket:
- Log in to the AWS Management Console
- Navigate to the Amazon VPC console
- Choose [Transit gateway] or [Transit gateway attachments] from the left pane => Check the box of the resource to load
- Choose the [Flow logs] tab at the bottom of the screen => Choose [ Create flow log]
- Enter the following parameters on the Create flow log screen
- Name: any name
- Destination: Check [Send to an S3 bucket]
- S3 bucket ARN: [arn:aws:s3:::aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- Log record format: Check [AWS default format] or check "Custom format" and select "Log format".
- Log file format: any
- Hive-compatible S3 prefix: any
- Partition logs by time: any
- Tags: any
- Choose [Create flow log]
Follow the steps below to output each of the following three load balancer logs to the S3 bucket:
- Application Load Balancer(ALB)
- Network Load Balancer(NLB)
- Classic Load Balancer(CLB)
The initial value of s3_key is determined by the default output path and file name using regular expressions
- ALB:
elasticloadbalancing_.*T\d{4}Z_\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}_\w*\.log\.gz$$
- NLB:
elasticloadbalancing_.*T\d{4}Z_[0-9a-z]{8}\.log\.gz$$
- CLB:
elasticloadbalancing_.*T\d{4}Z_\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}_\w*\.log$$
- Log in to the AWS Management Console
- Navigate to the Amazon EC2 console
- Choose [Load balancers] from the left pane => [Check the box] of the target load balancer to collect logs from
- Choose [Description ] tab => Enter the following parameters for ALB/NLB/CLB:
- For ALB/NLB: Choose [Edit attributes]
- Access logs: Check [Enable]
- S3 location: Enter [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- Create this location for me: unchecked
- Choose [Save]
- For CLB: Choose [Configure Access Logs]
- Enable access logs: [checked]
- Interval: Choose [5 minutes or 60 minutes]
- S3 location: Enter [aes-siem-123456789012-log]
- Replace 123456789012 with your AWS account ID
- Create this location for me: unchecked
- Choose [Save] to complete the configuration
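For ALB/NLB, the same attributes can also be set via the API; a minimal boto3 sketch follows (the load balancer ARN is a placeholder). CLB uses the classic elb client instead.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable access logging to the SIEM log bucket for an ALB or NLB
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:loadbalancer/app/example/REPLACE",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "aes-siem-123456789012-log"},
    ],
)
```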
The initial value of s3_key: aws-fsx-
Amazon FSx for Windows File Server audit logs are exported from Kinesis Data Firehose to the S3 bucket. Kinesis Data Firehose names must start with [aws-fsx-], and because this prefix is included in the file names when they are output to the S3 bucket, we are using it to determine the log type.
- Configure with CloudFormation
- Navigate to the FSx Console and forward logs to Firehose.
Follow the steps below to output S3 access logs to the S3 bucket. If you are already capturing S3 logs using CloudTrail data events, click here to see the difference from S3 access logging.
The initial value of s3_key: s3accesslog
(there is no standard save path, so specify it using a prefix)
- Log in to the AWS Management Console
- Navigate to the Amazon S3 console
- From the bucket list, choose the S3 bucket you want to collect logs from.
- Choose [Properties] tab => [Server access logging]
- Check Enable logging
- Choose target bucket: aes-siem-123456789012-log
- Replace 123456789012 with your AWS account ID
- Target prefix: [AWSLogs/AWS account ID/s3accesslog/region/bucket name/ ]
- It’s important to have [s3accesslog] in the path
- Choose [Save]
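A minimal boto3 sketch of the same server access logging configuration; the source bucket name, region, and prefix are placeholders, and the prefix must contain s3accesslog as noted above.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="REPLACE-WITH-SOURCE-BUCKET",  # placeholder: the bucket to collect logs from
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "aes-siem-123456789012-log",
            # Keep "s3accesslog" in the path so the log type can be determined
            "TargetPrefix": "AWSLogs/123456789012/s3accesslog/ap-northeast-1/REPLACE-WITH-SOURCE-BUCKET/",
        }
    },
)
```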
You can publish the following logs to CloudWatch Logs and load them into SIEM on OpenSearch Service.
- Audit log
- Error log
- General log
- Slow query log
The initial value of s3_key (specified in the Firehose output path)
- Audit log:
(MySQL|mysql|MariaDB|mariadb).*(audit)
- Error log:
(MySQL|mysql|MariaDB|mariadb).*(error)
- General log:
(MySQL|mysql|MariaDB|mariadb).*(general)
- Slow query log:
(MySQL|mysql|MariaDB|mariadb).*(slowquery)
Please refer to the following documentation to publish logs to CloudWatch
- How do I publish logs for Amazon RDS or Aurora for MySQL instances to CloudWatch?
- How can I enable audit logging for an Amazon RDS MySQL or MariaDB instance and publish the logs to CloudWatch?
Configurations for CloudWatch Logs subscription filter and Firehose (Aurora MySQL / MySQL / MariaDB)
The CloudFormation templates below create a Firehose delivery stream for each log type and export the logs to the S3 bucket using CloudWatch Logs subscription filters.
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Firehose delivery stream for each type of log and sets up CloudWatch Logs subscription filters to deliver CloudWatch Logs to the Firehose. The Firehose exports the RDS logs to the S3 bucket. |
Destination S3 bucket:
- AWSLogs/123456789012/RDS/MySQL/[region]/[logtype]/
- Replace 123456789012 with your AWS account ID
If you have multiple database instances and want to reuse an already created Firehose, enter use_existing for CreateFirehose and the name of the existing Firehose for FirehoseName in the second template.
Note: If you configure the settings manually, do not enable compression when exporting the logs to the S3 bucket. Logs received from CloudWatch Logs are already gzip-compressed, so they would be double compressed and could not be processed properly.
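If you wire this up manually instead of with the templates, the subscription filter part looks roughly like the boto3 sketch below. The log group name, delivery stream ARN, and role ARN are placeholders; as noted above, the Firehose S3 destination should use CompressionFormat UNCOMPRESSED because the subscribed data is already gzip-compressed.

```python
import boto3

logs = boto3.client("logs")

# Subscribe an RDS log group to an existing Firehose delivery stream
logs.put_subscription_filter(
    logGroupName="/aws/rds/cluster/REPLACE-WITH-CLUSTER/audit",  # placeholder log group
    filterName="aes-siem-rds-audit",
    filterPattern="",  # deliver all events
    destinationArn="arn:aws:firehose:ap-northeast-1:123456789012:deliverystream/REPLACE-WITH-FIREHOSE",  # placeholder
    roleArn="arn:aws:iam::123456789012:role/REPLACE-WITH-CWL-ROLE",  # placeholder
)
```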
Reference:
- Aurora User Guide / MySQL database log files
- RDS User Guide / MySQL database log files
- RDS User Guide / MariaDB database log files
- Using advanced auditing with an Amazon Aurora MySQL DB cluster
The initial value of s3_key : Postgre
or postgre
(specified in the Firehose output path)
Please refer to the following documentation to publish logs to CloudWatch
- How do I enable query logging using Amazon RDS for PostgreSQL?
Parameter | Value | Description | Default |
---|---|---|---|
log_min_duration_statement | 10000 (ms) | Any SQL statement that runs for the specified amount of time or longer gets logged. | -1 (disabled) |
log_statement | ddl | Sets the type of statements logged. | None |
log_statement_stats | 1 (enabled) | Writes cumulative performance statistics to the server log. | 0 (disabled) |
log_lock_waits | 1 (enabled) | Logs long lock waits. By default, this parameter isn't set. | 0 (disabled) |
log_connections | 1 (enabled) | Logs each successful connection. | 0 (disabled) |
log_disconnections | 1 (enabled) | Logs the end of each session and its duration. | 0 (disabled) |
Configurations for CloudWatch Logs subscription filter and Firehose (Aurora PostgreSQL / PostgreSQL)
The CloudFormation templates below create a Firehose delivery stream for each log type and export the logs to the S3 bucket using CloudWatch Logs subscription filters.
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates a Firehose delivery stream for each type of log and sets up CloudWatch Logs subscription filters to deliver CloudWatch Logs to the Firehose. The Firehose exports the RDS logs to the S3 bucket. |
Destination S3 bucket:
- AWSLogs/123456789012/RDS/PostgreSQL/[region]/postgresql/
- Replace 123456789012 with your AWS account ID
If you have multiple database instances and want to reuse an already created Firehose, enter use_existing for CreateFirehose and the name of the existing Firehose for FirehoseName in the second template.
Note: If you configure the settings manually, do not enable compression when exporting the logs to the S3 bucket. Logs received from CloudWatch Logs are already gzip-compressed, so they would be double compressed and could not be processed properly.
Reference:
- Configuring and authoring Kibana dashboards
- How can I track failed attempts to log in to my Amazon RDS DB instance that's running PostgreSQL?
The initial value of s3_key: (redis|Redis).*(slow|SLOW)
(specified in the Firehose output path)
To export the Redis slow log to Firehose, see the User Guide Log delivery. Select the JSON format and deliver the logs to Firehose, then configure the Firehose to export them to the S3 bucket.
The initial value of s3_key: KafkaBrokerLogs
(part of the default output path)
The initial value of s3_key: (OpenSearch|opensearch).*(Audit|audit)
(specified in the Firehose output path)
To export OpenSearch audit logs to CloudWatch Logs, see the Developer Guide Monitoring audit logs in Amazon OpenSearch Service. Then configure CloudWatch Logs and Firehose to export them to the S3 bucket.
- OS system logs
  - The initial value of s3_key: /[Ll]inux/ (specified in the Firehose output path)
- Secure logs
  - The initial value of s3_key: [Ll]inux.?[Ss]ecure (specified in the Firehose output path)
Log output is sent via Kinesis Data Firehose, and since there is no standard save path, use the above s3_key as the prefix of the destination S3 bucket for Kinesis Data Firehose. Region information is not contained in the logs, so you can include it in your S3 key to capture it. There are two ways to load secure logs: loading logs as OS system logs and then classifying them as secure logs, or loading logs as secure logs from the beginning. The former method identifies secure logs by the process name, so choose the latter to ensure all secure logs are fully loaded. The latter, on the other hand, requires you to deploy a Firehose for each log destination.
The following are examples of sending logs to the S3 log bucket from Amazon Linux.
-
Create an IAM role and attach it to an EC2 instance
The role needs permissions to get and put configurations in AWS Systems Manager Parameter Store and to transfer logs to CloudWatch Logs.
The permissions:
- logs:CreateLogStream
- logs:CreateLogGroup
- logs:PutLogEvents
- ssm:GetParameter
- ssm:PutParameter
- ssm:UpdateInstanceInformation
-
Install CloudWatch Agent on EC2 instances deployed with Amazon Linux 2023 (AL2023) or Amazon Linux 2 (AL2). Install rsyslog additionally for AL2023.
```sh
# Amazon Linux 2023
sudo dnf install -y amazon-cloudwatch-agent rsyslog

# Amazon Linux 2
sudo yum install -y amazon-cloudwatch-agent
```
For more information, see the official documentation: Installing the CloudWatch agent
-
Create a configuration file for CloudWatch Agent
The steps below are an example configuration for forwarding logs to CloudWatch Logs. Please change the input values as appropriate, including settings for CloudWatch metrics, etc. Save your configuration in AWS Systems Manager Parameter Store. Subsequent EC2 instances can use the configuration file saved in Parameter Store, so this step is not necessary for them.
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
```

```
(snip)
Do you want to monitor any log files?
1. yes
2. no
default choice: [1]: [RETURN]
Log file path: /var/log/messages[RETURN]
Log group name:
default choice: [messages] /ec2/linux/messages[RETURN]
Log stream name:
default choice: [{instance_id}] [RETURN]
Log Group Retention in days
default choice: [1]: [RETURN]
Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]: [RETURN]
Log file path: /var/log/secure[RETURN]
Log group name:
default choice: [messages] /ec2/linux/secure[RETURN]
Log stream name:
default choice: [{instance_id}] [RETURN]
Log Group Retention in days
default choice: [1]: [RETURN]
Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]: 2[RETURN]
Do you want to store the config in the SSM parameter store?
1. yes
2. no
default choice: [1]: [RETURN]
What parameter store name do you want to use to store your config? (Use 'AmazonCloudWatch-' prefix if you use our managed AWS policy)
default choice: [AmazonCloudWatch-linux] [RETURN]
(snip)
```
For more information, see Create the CloudWatch agent configuration file
-
Forward logs to CloudWatch Logs
Use configuration files saved in parameter store
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:AmazonCloudWatch-linux
sudo systemctl start amazon-cloudwatch-agent
sudo systemctl enable amazon-cloudwatch-agent
# For the second and subsequent deployments
sudo systemctl restart amazon-cloudwatch-agent
```
-
Output logs to Firehose using a CloudWatch Logs subscription and choose the S3 bucket as the destination for Firehose output
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates two Firehose delivery streams and sets up CloudWatch Logs subscription filters to deliver CloudWatch Logs to them. The Firehose exports the Linux logs to the S3 bucket. |

Destination S3 bucket:
- [AWSLogs/123456789012/EC2/Linux/System/[region]/]
- [AWSLogs/123456789012/EC2/Linux/Secure/[region]/]
- Replace 123456789012 with your AWS account ID
The initial value of s3_key : /[Ww]indows.*[Ee]vent
(specified in the Firehose output path)
Log output is sent via Kinesis Data Firehose, and since there is no standard save path, use the above s3_key as the prefix of the destination S3 bucket for Kinesis Data Firehose. Region information is not contained in the logs, so you can include it in your S3 key to capture it.
Here’s an outline of the steps:
- Install CloudWatch Agent in the EC2 instance deployed as Windows Server
- Forward logs to CloudWatch Logs
- Configure with CloudFormation
- siem-log-exporter-core.template
- siem-log-exporter-cwl-nocompress.template
- Prefix to output logs : [AWSLogs/123456789012/EC2/Windows/Event/[region]/]
- Replace 123456789012 with your AWS account ID
You can import Apache logs in Common Log Format (CLF), Combined Log Format (combined), combinedio, or with X-Forwarded-For added at the beginning, from Apache installed on Amazon Linux 2023 or Amazon Linux 2.
- Apache access log
  - The initial value of s3_key: [Aa]pache.*[Aa]ccess/ (specified in the Firehose output path)
- Apache error log
  - The initial value of s3_key: [Aa]pache.*[Ee]rror/ (specified in the Firehose output path)
Log output is sent via Kinesis Data Firehose, and since there is no standard save path, use the above s3_key as the prefix of the destination S3 bucket for Kinesis Data Firehose. Region information is not contained in the logs, so you can include it in your S3 key to capture it.
The following are examples of sending logs to the S3 log bucket from Amazon Linux.
If you want to collect all logs from multiple websites (e.g. blog.example.net, shop.example.com, etc.), execute the CloudFormation template for each website to create CloudWatch Logs and Kinesis Data Firehose resources with different names.
-
Create an IAM role and attach it to an EC2 instance
-
Install Apache Web Server
If you are going through Amazon CloudFront or Elastic Load Balancer (ELB), it is recommended to change the Apache configuration file so that the actual client IP address is logged instead of the IP address of CloudFront or the ELB. Alternatively, by adding X-Forwarded-For at the beginning of access_log, the SIEM can extract the actual client IP address instead of the IP address of CloudFront or the ELB.
For more information, see How do I capture client IP addresses in the web server logs behind an ELB?
-
Create a configuration file for CloudWatch Agent
The steps below are an example configuration for forwarding logs to CloudWatch Logs. Please change the input values as appropriate, including settings for CloudWatch metrics, etc. Save your configuration in AWS Systems Manager Parameter Store. Subsequent EC2 instances can use the configuration file saved in Parameter Store, so this step is not necessary for them.
If you also want to transfer Linux OS logs, please refer to EC2 Instance (Amazon Linux 2/2023) and combine them.
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
```

```
(snip)
Do you want to monitor any log files?
1. yes
2. no
default choice: [1]: [RETURN]
Log file path: /var/log/httpd/access_log[RETURN]
Log group name:
default choice: [messages] /ec2/apache/access_log[RETURN]
Log stream name:
default choice: [{instance_id}] [RETURN]
Log Group Retention in days
default choice: [1]: [RETURN]
```
Set the necessary logs as follows.
Log file path | Log group name |
---|---|
/var/log/httpd/access_log | /ec2/apache/access_log |
/var/log/httpd/error_log | /ec2/apache/error_log |
/var/log/httpd/ssl_access_log | /ec2/apache/ssl_access_log |
/var/log/httpd/ssl_error_log | /ec2/apache/ssl_error_log |

```
Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]: 2[RETURN]
Do you want to store the config in the SSM parameter store?
1. yes
2. no
default choice: [1]: [RETURN]
What parameter store name do you want to use to store your config? (Use 'AmazonCloudWatch-' prefix if you use our managed AWS policy)
default choice: [AmazonCloudWatch-linux] AmazonCloudWatch-apache[RETURN]
(snip)
```
For more information, see Create the CloudWatch agent configuration file
-
Forward logs to CloudWatch Logs
Use configuration files saved in parameter store
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:AmazonCloudWatch-apache
sudo systemctl start amazon-cloudwatch-agent
sudo systemctl enable amazon-cloudwatch-agent
# For the second and subsequent deployments
sudo systemctl restart amazon-cloudwatch-agent
```
-
Output logs to Firehose using a CloudWatch Logs subscription and choose the S3 bucket as the destination for Firehose output
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates two Firehose delivery streams and sets up CloudWatch Logs subscription filters to deliver CloudWatch Logs to them. The Firehose exports the Apache logs to the S3 bucket. |

Destination S3 bucket:
- [ AWSLogs/aws-account-id=123456789012/service=apache-access/web-site-name=[sitename]/aws-region=[region]/ ]
- [ AWSLogs/aws-account-id=123456789012/service=apache-error/web-site-name=[sitename]/aws-region=[region]/ ]
- Replace 123456789012 with your AWS account ID
You can import NGINX logs in Combined Log Format (combined) or with X-Forwarded-For added at the end, from NGINX installed on Amazon Linux 2023 or Amazon Linux 2.
- NGINX access log
  - The initial value of s3_key: [Nn]ginx.*[Aa]ccess/ (specified in the Firehose output path)
- NGINX error log
  - The initial value of s3_key: [Nn]ginx.*[Ee]rror/ (specified in the Firehose output path)
Log output is sent via Kinesis Data Firehose, and since there is no standard save path, use the above s3_key as the prefix of the destination S3 bucket for Kinesis Data Firehose. Region information is not contained in the logs, so you can include it in your S3 key to capture it.
The following are examples of sending logs to the S3 log bucket from Amazon Linux.
If you want to collect all logs from multiple websites (e.g. blog.example.net, shop.example.com, etc.), execute the CloudFormation template for each website to create CloudWatch Logs and Kinesis Data Firehose resources with different names.
-
Create an IAM role and attach it to an EC2 instance
-
Install NGINX Web Server
If you are going through Amazon CloudFront or Elastic Load Balancer (ELB), it is recommended to change the NGINX configuration file so that the actual client IP address is logged instead of the IP address of CloudFront or the ELB. Alternatively, by adding X-Forwarded-For at the end of access.log, the SIEM can extract the actual client IP address instead of the IP address of CloudFront or the ELB.
For more information, see How do I capture client IP addresses in the web server logs behind an ELB?
If you are using NGINX as an HTTPS server, it is recommended to separate the HTTPS logs from access.log and error.log and save them as ssl_access.log and ssl_error.log. Internally, the SIEM processes access.log and error.log as HTTP logs, and ssl_access.log and ssl_error.log as HTTPS logs.
Example of the configuration to separate them:

```
server {
    listen 443 ssl;
    ...
    access_log /var/log/nginx/ssl_access.log main;
    error_log /var/log/nginx/ssl_error.log;
    ...
}
```
-
Create a configuration file for CloudWatch Agent
The steps below are an example configuration for forwarding logs to CloudWatch Logs. Please change the input values as appropriate, including settings for CloudWatch metrics, etc. Save your configuration in AWS Systems Manager Parameter Store. Subsequent EC2 instances can use the configuration file saved in Parameter Store, so this step is not necessary for them.
If you also want to transfer Linux OS logs, please refer to EC2 Instance (Amazon Linux 2/2023) and combine them.
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
```

```
(snip)
Do you want to monitor any log files?
1. yes
2. no
default choice: [1]: [RETURN]
Log file path: /var/log/nginx/access.log[RETURN]
Log group name:
default choice: [messages] /ec2/nginx/access.log[RETURN]
Log stream name:
default choice: [{instance_id}] [RETURN]
Log Group Retention in days
default choice: [1]: [RETURN]
```
Set the necessary logs as follows.
Log file path | Log group name |
---|---|
/var/log/nginx/access.log | /ec2/nginx/access.log |
/var/log/nginx/error.log | /ec2/nginx/error.log |
/var/log/nginx/ssl_access.log | /ec2/nginx/ssl_access.log |
/var/log/nginx/ssl_error.log | /ec2/nginx/ssl_error.log |

```
Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]: 2[RETURN]
Do you want to store the config in the SSM parameter store?
1. yes
2. no
default choice: [1]: [RETURN]
What parameter store name do you want to use to store your config? (Use 'AmazonCloudWatch-' prefix if you use our managed AWS policy)
default choice: [AmazonCloudWatch-linux] AmazonCloudWatch-nginx[RETURN]
(snip)
```
For more information, see Create the CloudWatch agent configuration file
error.log is a multi-line log. Specify multi-line setting in the CloudWatch Agent configuration file
AWS Systems Manager Parameter Store:
AmazonCloudWatch-nginx
{ (snip) "logs": { "logs_collected": { "files": { "collect_list": [ (snip) { "file_path": "/var/log/nginx/access.log", "log_group_name": "/ec2/nginx/access.log", "log_stream_name": "{instance_id}", "retention_in_days": -1 }, { "file_path": "/var/log/nginx/error.log", "log_group_name": "/ec2/nginx/error.log", "log_stream_name": "{instance_id}", "timestamp_format": "%Y/%m/%d %H:%M:%S", "multi_line_start_pattern": "{timestamp_format}", "retention_in_days": -1 }, { "file_path": "/var/log/nginx/ssl_error.log", "log_group_name": "/ec2/nginx/ssl_error.log", "log_stream_name": "{instance_id}", "timestamp_format": "%Y/%m/%d %H:%M:%S", "multi_line_start_pattern": "{timestamp_format}", "retention_in_days": -1 } ] } } } }
-
Forward logs to CloudWatch Logs
Use configuration files saved in parameter store
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:AmazonCloudWatch-nginx
sudo systemctl start amazon-cloudwatch-agent
sudo systemctl enable amazon-cloudwatch-agent
# For the second and subsequent deployments
sudo systemctl restart amazon-cloudwatch-agent
```
-
Output logs to Firehose using a CloudWatch Logs subscription and choose the S3 bucket as the destination for Firehose output
No | CloudFormation | Description |
---|---|---|
1 | link | CloudFormation for core resource. This template gets the S3 bucket name of the log forwarding destination and creates IAM roles. Commonly used in other AWS service settings. |
2 | link | This template creates two Firehose delivery streams and sets up CloudWatch Logs subscription filters to deliver CloudWatch Logs to them. The Firehose exports the NGINX logs to the S3 bucket. |

Destination S3 bucket:
- [ AWSLogs/aws-account-id=123456789012/service=nginx-access/web-site-name=[sitename]/aws-region=[region]/ ]
- [ AWSLogs/aws-account-id=123456789012/service=nginx-error/web-site-name=[sitename]/aws-region=[region]/ ]
- Replace 123456789012 with your AWS account ID
The initial value of s3_key: N/A. Create and configure Firehose for each container application
- ECS logs are sent to Firehose via FireLens (Fluent Bit) and output to S3
- The log type of each container application is determined by the S3 file path. So you need to provision Firehose for each log type
- Container information is captured from ECS metadata. Enable it in task definitions
- By default, STDERR is not loaded. If you want to load it, set ignore_container_stderr = False in user.ini. @timestamp is the time at which the SIEM log was received.
Configuring Kinesis Data Firehose
- Follow the steps in [Kinesis Data Firehose Settings] in Security Hub.
- Include the key that determines the application in the output path to S3 (apache, for example)
- Because the AWS account and region are captured from the logs stored in S3, it is optional to include these two parameters in the S3 output path
Configuring AWS FireLens
- For information about the task definition file for sending logs via FireLens and IAM permission settings, see official documentation and aws-samples’ Send to Kinesis Data Firehose in amazon-ecs-firelens-examples
Configuring SIEM
- Include the following for each log type in user.ini
```ini
# Specify that the logs are sent via FireLens
via_firelens = True
# Specify whether stderr should be ignored. If this is True, stderr logs are not loaded
ignore_container_stderr = True
```
The initial value of s3_key : (WorkSpaces|workspaces).*(Event|event)
(specified in the Firehose output path)
The initial value of s3_key : (WorkSpaces|workspaces).*(Inventory|inventory)
- Configure with CloudFormation
You can load logs from other accounts or regions into SIEM on OpenSearch Service by using S3 replication or cross-account output to the S3 bucket that stores logs. The output paths should follow the S3 keys configured above.
You can also load logs into SIEM on OpenSearch Service from an already existing S3 bucket and/or by using an AWS KMS customer-managed key. To use an existing S3 bucket or AWS KMS customer-managed key, you must grant permissions to Lambda function es-loader. See this to deploy using AWS CDK.