Check S3 Bucket Encryption Compliance Across AWS Account

Ensure your data's safety with S3 encryption. Explore how to check and implement best practices for S3 encryption.

Patrick Londa
Apr 6, 2022

If you are using S3 buckets to store data, you’ll want to make sure that your data is protected, both at rest and in transit.

In this post, we’ll walk you through how to check and implement data security measures for both types.


Types of AWS S3 Encryption

Understanding and implementing S3 encryption is the primary means of ensuring that your data remains secure while at rest. The two main options for S3 encryption are server-side encryption (SSE) and client-side encryption (CSE). CSE puts all of the control (and responsibility) in your hands, whereas SSE can be easily managed through the AWS Management Console or the AWS Command Line Interface (CLI). In this post, we’ll mostly focus on setting up SSE.

Encrypting Existing Objects

When you encrypt an existing object using SSE, you're actually replacing the object with an encrypted duplicate of itself. Because you are replacing the object, you should consider setting up S3 versioning so you can revert the change if necessary.
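
If versioning isn’t already turned on, you can enable it from the CLI before you start overwriting objects. Here is a minimal sketch, reusing the example bucket name from the commands below:

aws s3api put-bucket-versioning --bucket awsexamplebucket --versioning-configuration Status=Enabled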

To overwrite an object with an SSE-S3 encrypted copy of itself, use:

aws s3 cp s3://awsexamplebucket/myfile s3://awsexamplebucket/myfile --sse AES256

To overwrite an object with an SSE-KMS encrypted copy of itself, use:

aws s3 cp s3://awsexamplebucket/myfile s3://awsexamplebucket/myfile --sse aws:kms

To overwrite an object with an SSE-KMS encrypted copy of itself using a customer-managed key, use:

aws s3 cp s3://awsexamplebucket/myfile s3://awsexamplebucket/myfile \
--sse aws:kms --sse-kms-key-id arn:aws:kms:us-west-2:111122223333:key/3aefc301-b7d2-4601-9298-5a854cf9999d

To overwrite all of the objects in an S3 bucket with encrypted copies of themselves, use:

aws s3 cp s3://awsexamplebucket/ s3://awsexamplebucket/ \
--sse aws:kms --recursive

When you have replaced any existing non-encrypted objects with encrypted versions, then you can move on to setting rules for new objects.
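
One way to spot-check that an overwrite worked is to query the object’s metadata; head-object reports a ServerSideEncryption field for encrypted objects. A minimal sketch, reusing the example bucket and key from above:

aws s3api head-object --bucket awsexamplebucket --key myfile --query ServerSideEncryption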

Setting Default Encryption

Here are the different ways you can enable default encryption with either SSE-S3 or SSE-KMS for all new objects created in an S3 bucket.

To enable default SSE-S3 encryption, edit the rules section of the bucket as follows:

aws s3api put-bucket-encryption --bucket bucket-name \
--server-side-encryption-configuration '{
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"
            }
        }
    ]
}'

To enable default SSE-KMS encryption, edit the rules section of the bucket as follows:

aws s3api put-bucket-encryption --bucket bucket-name \
--server-side-encryption-configuration '{
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "KMS-Key-ARN"
            },
            "BucketKeyEnabled": true
        }
    ]
}'

Once you have made these changes, your data should be encrypted at rest, for both your existing objects and any new objects uploaded to the bucket.
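
To confirm the new default, or to find any buckets in your account that still have no default encryption configured, you can query each bucket with get-bucket-encryption. A minimal sketch, assuming the call returns an error for buckets without an encryption configuration:

for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  aws s3api get-bucket-encryption --bucket "$bucket" >/dev/null 2>&1 \
    || echo "No default encryption: $bucket"
done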

Finding S3 Buckets Not Meeting Your Encryption Standard

Now that you've handled encryption at rest, here's how to monitor and implement encryption for your data while it's in transit. First, check the bucket policy associated with each S3 bucket.
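
If you'd rather check from the command line than the console, you can pull each bucket's policy with get-bucket-policy and look for the relevant condition key. A minimal sketch, which simply flags buckets whose policy (if any) never mentions aws:SecureTransport:

for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  aws s3api get-bucket-policy --bucket "$bucket" --query Policy --output text 2>/dev/null \
    | grep -q 'aws:SecureTransport' || echo "No SecureTransport condition: $bucket"
done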

Updating S3 Buckets with Security Policies

If the “aws:SecureTransport” condition key isn't present in your bucket policy, you'll want to add it to ensure that your data remains secure while in transit. Data in transit is vulnerable to interception via a “man-in-the-middle” attack. You should also consider implementing “s3-bucket-ssl-requests-only”, an AWS Config rule that gives you an ongoing detective control.

To add or edit a bucket policy, first sign in to the AWS Management Console and open the S3 console. From the Buckets list, select the bucket you want to create or edit a policy for. Click Permissions and then Edit (under Bucket Policy). You should be able to see and edit a JSON file containing your bucket policy.

In this JSON, look for the “aws:SecureTransport” condition key. In the standard pattern, a Deny statement with “aws:SecureTransport” set to “false” rejects any request made over plain HTTP, so only HTTPS traffic is allowed.

To make a bucket compliant with “s3-bucket-ssl-requests-only”, you'll need to make some more edits to the JSON file. Modify the file to include these lines:

{
    "Sid": "AllowSSLRequestsOnly",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
    ],
    "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
    }
}

This ensures that your S3 bucket explicitly denies plain HTTP requests, which is the condition required by the “s3-bucket-ssl-requests-only” rule.
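
If you manage bucket policies from the CLI instead of the console, you can save the complete policy document to a local file and apply it with put-bucket-policy. A minimal sketch; the file name here is a placeholder:

aws s3api put-bucket-policy --bucket DOC-EXAMPLE-BUCKET --policy file://ssl-only-policy.json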

Blocking Public Access to Your S3 Buckets

By default, new buckets, access points, and objects are not accessible to the public. However, users can change bucket policies and permissions to enable public access. Amazon S3 offers its Block Public Access feature to prevent other users from enabling public access.

With Amazon S3 Block Public Access enabled, any changes to policies and permissions that would otherwise allow the public to access your data will be overridden. The four relevant settings (stored in the bucket’s public access block configuration) are: “BlockPublicAcls”, “IgnorePublicAcls”, “BlockPublicPolicy”, and “RestrictPublicBuckets”.

When “BlockPublicAcls” is set to “TRUE” on an S3 bucket, PUT Bucket acl and PUT Object calls fail if the specified access control list (ACL) is public.

When “IgnorePublicAcls” is set to “TRUE” on an S3 bucket, Amazon S3 will ignore all public ACLs on a bucket and any objects it contains. This will block public access while permitting PUT Object calls that include a public ACL.

When “BlockPublicPolicy” is set to “TRUE” on an S3 bucket, Amazon S3 will block all attempts to attach a policy to a bucket that permits public access.

When “RestrictPublicBuckets” is set to “TRUE” on an S3 bucket with a public policy, only authorized users and AWS service principals will be able to access the bucket.
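
All four settings can also be read and set from the CLI with get-public-access-block and put-public-access-block. A minimal sketch, with a placeholder bucket name:

aws s3api get-public-access-block --bucket DOC-EXAMPLE-BUCKET

aws s3api put-public-access-block --bucket DOC-EXAMPLE-BUCKET \
--public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true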

Automate Security Checks with Blink

For quick checks like this, using a specific CLI tool or script might get the job done, but it can be hard to incorporate it into your regular security practice. With Blink, you can schedule this specific check to run regularly.

Blink Automation: Check Amazon S3 Bucket Compliance

This automation is available in the Blink library. When it runs, it does the following steps:

  1. Gets S3 Block Public Access settings.
  2. Gets S3 buckets with public write access.
  3. Gets S3 buckets with public read access.
  4. Gets S3 bucket server-side encryption status.
  5. Gets S3 buckets' SSL enforcement.
  6. Sends reports as CSV files.

This simple automation is easy to customize. Run it on a schedule or send the report via email, Slack, or Teams.

There are over 5K automations in the Blink library to choose from, or you can build your own to match your unique needs.

Get started with Blink today and see how easy automation can be.
