
Retrieve log data from CloudWatch – How to do it

Jul 25, 2021

Wondering how to retrieve log data from CloudWatch? We can help you!

Here at Bobcares, we handle similar requests from our customers as a part of our Server Management Services.

Today let’s see how our Support Engineers retrieve the log data for our customers.

How to Retrieve log data from CloudWatch

Following are the methods that our Support Engineers use to retrieve log data from CloudWatch:

Method 1: Use subscription filters

To retrieve log data from CloudWatch Logs in real time, we can use subscription filters.

A subscription filter can deliver logs to Kinesis Data Streams, Lambda, or Kinesis Data Firehose.

Logs that are sent to a receiving service through a subscription filter are Base64 encoded and compressed with the gzip format.

To create a subscription filter for Kinesis

1. First, we run the following command to create a destination Kinesis stream:

aws kinesis create-stream --stream-name "RootAccess" --shard-count 1

2. Next, we create the IAM role that grants CloudWatch Logs permission to put data into the Kinesis stream. Save the following trust policy as ~/TrustPolicyForCWL.json:

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.region.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

3. After that, we use the following create-role command to create the IAM role, specifying the trust policy file:

aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL.json

Note the Role.Arn value in the output; we will need it in a later step.

4. Then we create a permissions policy that defines the actions CloudWatch Logs can perform on the account, and save it as ~/PermissionsForCWL.json:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:region:123456789012:stream/RootAccess"
    }
  ]
}

5. Associate the permissions policy with the role using the following put-role-policy command:

aws iam put-role-policy  --role-name CWLtoKinesisRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json

6. After that, we create the CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to the Kinesis stream:

aws logs put-subscription-filter \
    --log-group-name "CloudTrail" \
    --filter-name "RootAccess" \
    --filter-pattern '{ $.userIdentity.type = "Root" }' \
    --destination-arn "arn:aws:kinesis:region:123456789012:stream/RootAccess" \
    --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"

Once we set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to the Kinesis stream.

7. We can verify this by first fetching a shard iterator and then running the Kinesis get-records command to fetch a few records:

aws kinesis get-shard-iterator --stream-name RootAccess --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
{
    "ShardIterator":
    "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"
}
aws kinesis get-records --limit 10 --shard-iterator "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"

8. The Data attribute in a Kinesis record is Base64 encoded and compressed with the gzip format. We can examine the raw data from the command line using the following Unix commands:

echo -n "<Content of Data>" | base64 -d | zcat

The Base64 decoded and decompressed data is formatted as JSON with the following structure:

{
    "owner": "111111111111",
    "logGroup": "CloudTrail",
    "logStream": "111111111111_CloudTrail_us-east-1",
    "subscriptionFilters": [
        "Destination"
    ],
    "messageType": "DATA_MESSAGE",
    "logEvents": [
        {
            "id": "31953106606966983378809025079804211143289615424298221568",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
            "id": "31953106606966983378809025079804211143289615424298221569",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
            "id": "31953106606966983378809025079804211143289615424298221570",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        }
    ]
}

Generally, filtering for log events is performed internally, which prevents CloudWatch API throttling. Amazon Kinesis Data Streams automatically retries throttled service API calls.

Method 2: Run a query in CloudWatch Logs Insights

To quickly search and analyze the log data, we can run a query in CloudWatch Logs Insights.
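
For example, here is a minimal sketch that runs an Insights query from the AWS CLI. The log group name "CloudTrail" and the one-hour window are assumptions for illustration; queries run asynchronously, so we poll for results with the returned query ID:

# Start an Insights query over the last hour (GNU date syntax assumed).
QUERY_ID=$(aws logs start-query \
    --log-group-name "CloudTrail" \
    --start-time $(date -d '1 hour ago' +%s) \
    --end-time $(date +%s) \
    --query-string 'fields @timestamp, @message | sort @timestamp desc | limit 20' \
    --query queryId --output text)

# Poll until "status" in the output is Complete, then read the matched events.
aws logs get-query-results --query-id "$QUERY_ID"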

Method 3: Export log data to Amazon S3 (batch use cases)

Here we create an export task to export logs from a log group to Amazon S3; a CLI sketch follows the console steps below.

To export data to Amazon S3 using the CloudWatch console

1. First, we sign in as the IAM user.

2. Then open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

3. From the dashboard go to Log groups.

4. And select the name of the log group.

5. After that go to Actions and click on Export data to Amazon S3.

6. Now, under Define data export, we must set the time range for the data to export using From and To.

If the log group has multiple log streams, we can provide a log stream prefix to limit the log group data to a specific stream.

7. We can click Advanced and enter the log stream prefix.

8. Now select the account associated with the Amazon S3 bucket.

9. For the S3 bucket name, select an Amazon S3 bucket. For the S3 bucket prefix, we can enter the randomly generated string that we specified in the bucket policy.

10. After that, we click Export to export the log data to Amazon S3.

11. To view the status of the exported log data, we go to Actions and click View all exports to Amazon S3.
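
As a rough CLI equivalent of the console steps above, we can create the export task with create-export-task. The bucket name, prefix, and millisecond timestamps below are placeholders; the bucket policy must already grant CloudWatch Logs write access:

# Create the export task; --from and --to are epoch milliseconds (placeholders here).
aws logs create-export-task \
    --task-name "CloudTrail-export" \
    --log-group-name "CloudTrail" \
    --from 1432826855000 \
    --to 1432830455000 \
    --destination "my-exported-logs" \
    --destination-prefix "cloudtrail"

# Check progress using the taskId returned by the previous command.
aws logs describe-export-tasks --task-id "<taskId>"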

Method 4: Call GetLogEvents or FilterLogEvents

To retrieve log data manually, we can call GetLogEvents or FilterLogEvents in the CloudWatch Logs API.
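
For instance, here is a minimal sketch using the AWS CLI equivalents of those API calls. The log group and stream names repeat the examples used earlier in this post and are assumptions for illustration:

# FilterLogEvents: search across all streams in a log group by pattern.
aws logs filter-log-events \
    --log-group-name "CloudTrail" \
    --filter-pattern '{ $.userIdentity.type = "Root" }' \
    --limit 10

# GetLogEvents: read events from a single, known log stream.
aws logs get-log-events \
    --log-group-name "CloudTrail" \
    --log-stream-name "111111111111_CloudTrail_us-east-1" \
    --limit 10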

[Need assistance? We can help you]

Conclusion

To conclude, we saw the steps that our Support Engineers follow to retrieve log data from CloudWatch for our customers.

