
Sample Lambda functions that run based on an S3 event trigger on s3:ObjectCreated.

UPDATE

Uploading objects greater than 16 MB requires the additional event trigger s3:ObjectCreated:CompleteMultipartUpload.

Kudos to AWS Support for helping me with the troubleshooting.

1) Create an SNS topic with email, SMS (text), ... subscribers.

2) Create an S3 event trigger with the Lambda function as the target

s3-object-created-trigger-lambda-and-sns-notification

S3 events for:

  • s3:ObjectCreated:Put

  • s3:ObjectCreated:CompleteMultipartUpload

3) Target a Lambda function to run the logic below

Lambda sample code for an S3 presigned URL

  • DISCLAIMER >> Use at your own responsibility. <<

Sample Lambda code
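As an illustration only (not the original sample), here is a minimal sketch of what such a function could look like; the SNS_TOPIC_ARN environment variable and the one-hour expiry are assumptions:

```python
import os
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client('s3')
sns = boto3.client('sns')

def lambda_handler(event, context):
    """Generate a presigned URL for each newly created object and notify SNS."""
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # S3 event notifications URL-encode the key, so decode it first.
        key = unquote_plus(record['s3']['object']['key'])

        # Presigned GET URL, valid for 1 hour.
        url = s3.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket, 'Key': key},
            ExpiresIn=3600,
        )

        # SNS_TOPIC_ARN is an assumption - set it in the Lambda environment.
        sns.publish(
            TopicArn=os.environ['SNS_TOPIC_ARN'],
            Subject=f'New object in {bucket}',
            Message=f'{key} was uploaded. Presigned URL (1 hour): {url}',
        )
```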


UPDATE and other details

Uploading objects greater than 16 MB requires the additional event trigger s3:ObjectCreated:CompleteMultipartUpload.

Kudos to AWS Support for helping me with the troubleshooting.

S3 events trigger for:

  • s3:ObjectCreated:Put

  • s3:ObjectCreated:CompleteMultipartUpload

Objects greater than 16 MB are uploaded as multipart uploads.

Multipart upload allows us to upload a single object as a set of parts. Each part is a contiguous portion of the object's data.

S3 bucket event s3:ObjectCreated:Put provides notification when an object is created by an HTTP PUT operation.

S3 bucket event s3:ObjectCreated:CompleteMultipartUpload provides notification for an object that was created by the completion of an S3 multipart upload.
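To see the same behaviour from the SDK side, here is a minimal boto3 sketch; the bucket and file names are placeholders, and the 16 MB threshold is set explicitly (boto3's own default threshold is lower):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

# Force multipart uploads for anything above 16 MB; uploads above the
# threshold complete with CompleteMultipartUpload instead of a plain Put.
config = TransferConfig(multipart_threshold=16 * 1024 * 1024)

# 'my-bucket' and 'big-file.bin' are placeholders.
s3.upload_file('big-file.bin', 'my-bucket', 'big-file.bin', Config=config)
```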

Documentation

Happy learning

Antonio Feijao

AWS Security Hub with Python 3 and Boto3

Testing some commands with python3 and boto3

List of AWS Services from Boto3 clients

The error message below shows very useful information: I can see all the boto3.client names that I can use.

Does anyone know how to get the list without using the error? (One way is shown in the snippet after the count further below.)

Input

import boto3

boto3.client(dir)

Output (tweaked to show the service names as a multi-line list)

UnknownServiceError: Unknown service: '<built-in function dir>'.

Valid service names are:

accessanalyzer,
account,
acm,
acm-pca,
alexaforbusiness,
amp,
amplify,
amplifybackend,
amplifyuibuilder,
apigateway,
apigatewaymanagementapi,
apigatewayv2,
appconfig,
appconfigdata,
appflow,
appintegrations,
application-autoscaling,
application-insights,
applicationcostprofiler,
appmesh,
apprunner,
appstream,
appsync,
athena,
auditmanager,
autoscaling,
autoscaling-plans,
backup,
backup-gateway,
batch,
braket,
budgets,
ce,
chime,
chime-sdk-identity,
chime-sdk-meetings,
chime-sdk-messaging,
cloud9,
cloudcontrol,
clouddirectory,
cloudformation,
cloudfront,
cloudhsm,
cloudhsmv2,
cloudsearch,
cloudsearchdomain,
cloudtrail,
cloudwatch,
codeartifact,
codebuild,
codecommit,
codedeploy,
codeguru-reviewer,
codeguruprofiler,
codepipeline,
codestar,
codestar-connections,
codestar-notifications,
cognito-identity,
cognito-idp,
cognito-sync,
comprehend,
comprehendmedical,
compute-optimizer,
config,
connect,
connect-contact-lens,
connectparticipant,
cur,
customer-profiles,
databrew,
dataexchange,
datapipeline,
datasync,
dax,
detective,
devicefarm,
devops-guru,
directconnect,
discovery,
dlm,
dms,
docdb,
drs,
ds,
dynamodb,
dynamodbstreams,
ebs,
ec2,
ec2-instance-connect,
ecr,
ecr-public,
ecs,
efs,
eks,
elastic-inference,
elasticache,
elasticbeanstalk,
elastictranscoder,
elb,
elbv2,
emr,
emr-containers,
es,
events,
evidently,
finspace,
finspace-data,
firehose,
fis,
fms,
forecast,
forecastquery,
frauddetector,
fsx,
gamelift,
glacier,
globalaccelerator,
glue,
grafana,
greengrass,
greengrassv2,
groundstation,
guardduty,
health,
healthlake,
honeycode,
iam,
identitystore,
imagebuilder,
importexport,
inspector,
inspector2,
iot,
iot-data,
iot-jobs-data,
iot1click-devices,
iot1click-projects,
iotanalytics,
iotdeviceadvisor,
iotevents,
iotevents-data,
iotfleethub,
iotsecuretunneling,
iotsitewise,
iotthingsgraph,
iottwinmaker,
iotwireless,
ivs,
kafka,
kafkaconnect,
kendra,
kinesis,
kinesis-video-archived-media,
kinesis-video-media,
kinesis-video-signaling,
kinesisanalytics,
kinesisanalyticsv2,
kinesisvideo,
kms,
lakeformation,
lambda,
lex-models,
lex-runtime,
lexv2-models,
lexv2-runtime,
license-manager,
lightsail,
location,
logs,
lookoutequipment,
lookoutmetrics,
lookoutvision,
machinelearning,
macie,
macie2,
managedblockchain,
marketplace-catalog,
marketplace-entitlement,
marketplacecommerceanalytics,
mediaconnect,
mediaconvert,
medialive,
mediapackage,
mediapackage-vod,
mediastore,
mediastore-data,
mediatailor,
memorydb,
meteringmarketplace,
mgh,
mgn,
migration-hub-refactor-spaces,
migrationhub-config,
migrationhubstrategy,
mobile,
mq,
mturk,
mwaa,
neptune,
network-firewall,
networkmanager,
nimble,
opensearch,
opsworks,
opsworkscm,
organizations,
outposts,
panorama,
personalize,
personalize-events,
personalize-runtime,
pi,
pinpoint,
pinpoint-email,
pinpoint-sms-voice,
polly,
pricing,
proton,
qldb,
qldb-session,
quicksight,
ram,
rbin,
rds,
rds-data,
redshift,
redshift-data,
rekognition,
resiliencehub,
resource-groups,
resourcegroupstaggingapi,
robomaker,
route53,
route53-recovery-cluster,
route53-recovery-control-config,
route53-recovery-readiness,
route53domains,
route53resolver,
rum,
s3,
s3control,
s3outposts,
sagemaker,
sagemaker-a2i-runtime,
sagemaker-edge,
sagemaker-featurestore-runtime,
sagemaker-runtime,
savingsplans,
schemas,
sdb,
secretsmanager,
securityhub,
serverlessrepo,
service-quotas,
servicecatalog,
servicecatalog-appregistry,
servicediscovery,
ses,
sesv2,
shield,
signer,
sms,
sms-voice,
snow-device-management,
snowball,
sns,
sqs,
ssm,
ssm-contacts,
ssm-incidents,
sso,
sso-admin,
sso-oidc,
stepfunctions,
storagegateway,
sts,
support,
swf,
synthetics,
textract,
timestream-query,
timestream-write,
transcribe,
transfer,
translate,
voice-id,
waf,
waf-regional,
wafv2,
wellarchitected,
wisdom,
workdocs,
worklink,
workmail,
workmailmessageflow,
workspaces,
workspaces-web,
xray

A curiosity...
I echoed all the AWS service names into wc -l to get a count of the services.

See for yourself how many clients boto3 has, which should give us an idea of how many services AWS has.

echo "accessanalyzer, account, acm, acm-pca, alexaforbusiness, amp, amplify, amplifybackend, amplifyuibuilder, apigateway, apigatewaymanagementapi, apigatewayv2, appconfig, appconfigdata, appflow, appintegrations, application-autoscaling, application-insights, applicationcostprofiler, appmesh, apprunner, appstream, appsync, athena, auditmanager, autoscaling, autoscaling-plans, backup, backup-gateway, batch, braket, budgets, ce, chime, chime-sdk-identity, chime-sdk-meetings, chime-sdk-messaging, cloud9, cloudcontrol, clouddirectory, cloudformation, cloudfront, cloudhsm, cloudhsmv2, cloudsearch, cloudsearchdomain, cloudtrail, cloudwatch, codeartifact, codebuild, codecommit, codedeploy, codeguru-reviewer, codeguruprofiler, codepipeline, codestar, codestar-connections, codestar-notifications, cognito-identity, cognito-idp, cognito-sync, comprehend, comprehendmedical, compute-optimizer, config, connect, connect-contact-lens, connectparticipant, cur, customer-profiles, databrew, dataexchange, datapipeline, datasync, dax, detective, devicefarm, devops-guru, directconnect, discovery, dlm, dms, docdb, drs, ds, dynamodb, dynamodbstreams, ebs, ec2, ec2-instance-connect, ecr, ecr-public, ecs, efs, eks, elastic-inference, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, emr-containers, es, events, evidently, finspace, finspace-data, firehose, fis, fms, forecast, forecastquery, frauddetector, fsx, gamelift, glacier, globalaccelerator, glue, grafana, greengrass, greengrassv2, groundstation, guardduty, health, healthlake, honeycode, iam, identitystore, imagebuilder, importexport, inspector, inspector2, iot, iot-data, iot-jobs-data, iot1click-devices, iot1click-projects, iotanalytics, iotdeviceadvisor, iotevents, iotevents-data, iotfleethub, iotsecuretunneling, iotsitewise, iotthingsgraph, iottwinmaker, iotwireless, ivs, kafka, kafkaconnect, kendra, kinesis, kinesis-video-archived-media, kinesis-video-media, kinesis-video-signaling, kinesisanalytics, kinesisanalyticsv2, kinesisvideo, kms, lakeformation, lambda, lex-models, lex-runtime, lexv2-models, lexv2-runtime, license-manager, lightsail, location, logs, lookoutequipment, lookoutmetrics, lookoutvision, machinelearning, macie, macie2, managedblockchain, marketplace-catalog, marketplace-entitlement, marketplacecommerceanalytics, mediaconnect, mediaconvert, medialive, mediapackage, mediapackage-vod, mediastore, mediastore-data, mediatailor, memorydb, meteringmarketplace, mgh, mgn, migration-hub-refactor-spaces, migrationhub-config, migrationhubstrategy, mobile, mq, mturk, mwaa, neptune, network-firewall, networkmanager, nimble, opensearch, opsworks, opsworkscm, organizations, outposts, panorama, personalize, personalize-events, personalize-runtime, pi, pinpoint, pinpoint-email, pinpoint-sms-voice, polly, pricing, proton, qldb, qldb-session, quicksight, ram, rbin, rds, rds-data, redshift, redshift-data, rekognition, resiliencehub, resource-groups, resourcegroupstaggingapi, robomaker, route53, route53-recovery-cluster, route53-recovery-control-config, route53-recovery-readiness, route53domains, route53resolver, rum, s3, s3control, s3outposts, sagemaker, sagemaker-a2i-runtime, sagemaker-edge, sagemaker-featurestore-runtime, sagemaker-runtime, savingsplans, schemas, sdb, secretsmanager, securityhub, serverlessrepo, service-quotas, servicecatalog, servicecatalog-appregistry, servicediscovery, ses, sesv2, shield, signer, sms, sms-voice, snow-device-management, snowball, sns, sqs, ssm, ssm-contacts, ssm-incidents, sso, sso-admin, sso-oidc, 
stepfunctions, storagegateway, sts, support, swf, synthetics, textract, timestream-query, timestream-write, transcribe, transfer, translate, voice-id, waf, waf-regional, wafv2, wellarchitected, wisdom, workdocs, worklink, workmail, workmailmessageflow, workspaces, workspaces-web, xray" | tr ' ' '\n' | wc -l
299

299 boto3 clients, or 299 AWS services? (checked on 2022-03-24)
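As an aside, boto3 can produce the same list directly, without triggering an error - a minimal sketch:

```python
import boto3

# Every service name boto3 can build a client for, no error required.
services = boto3.session.Session().get_available_services()

print(len(services))  # 299 at the time of writing; varies with boto3 version
print(services[:3])   # ['accessanalyzer', 'account', 'acm']
```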


Working with the AWS boto3 client securityhub

Here is an example showing the second product returned. The index [1] selects the second position in the array; [0] would be the first position, the first value.

Input

import boto3

securityhub = boto3.client('securityhub')

securityhub.describe_products()['Products'][1]

Output

{'ProductArn': 'arn:aws:securityhub:xxxxxx:xxxxxx:product/armordefense/armoranywhere',
 'ProductName': 'Armor Anywhere',
 'CompanyName': 'ARMOR',
 'Description': 'Armor Anywhere delivers managed security and compliance for AWS.',
 'Categories': ['Managed Security Service Provider (MSSP)'],
 'IntegrationTypes': ['SEND_FINDINGS_TO_SECURITY_HUB'],
 'MarketplaceUrl': 'https://aws.amazon.com/marketplace/seller-profile?id=797425f4-6823-xxxxxx',
 'ActivationUrl': 'https://amp.armor.com/account/cloud-connections',
 'ProductSubscriptionResourcePolicy': '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"xxxxxx"},"Action":["securityhub:BatchImportFindings"],"Resource":"arn:aws:securityhub:xxxxxx:xxxxxx:product-subscription/armordefense/armoranywhere","Condition":{"StringEquals":{"securityhub:TargetAccount":"xxxxxx"}}},{"Effect":"Allow","Principal":{"AWS":"xxxxxx"},"Action":["securityhub:BatchImportFindings"],"Resource":"arn:aws:securityhub:xxxxxx:xxxxxx:product/armordefense/armoranywhere","Condition":{"StringEquals":{"securityhub:TargetAccount":"xxxxxx"}}}]}'}

Now, let's use a for loop to go through all the products and collect the company names.

Initialising the variable companies, then running a for loop to append each company name to it:

companies = []

for product in securityhub.describe_products()['Products']:
    print(product['CompanyName'])
    companies.append(product['CompanyName'])

(...)

Then, to remove duplicate company names from the variable companies, we can use dict.fromkeys, which drops duplicates while preserving the original order.

```python
list(dict.fromkeys(companies))

['3CORESec', 'ARMOR', 'AWS', 'Alert Logic', 'Amazon', 'Aqua Security', 'Atlassian', 'AttackIQ', 'Barracuda Networks', 'BigID', 'Blue Hexagon', 'Capitis', 'Caveonix', 'Check Point', 'Cloud Custodian', 'Cloud Storage Security', 'CrowdStrike', 'CyberArk', 'DisruptOps, Inc.', 'FireEye', 'Forcepoint', 'Fugue', 'Guardicore', 'HackerOne', 'IBM']
```
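Since the output above is effectively in sorted order anyway, a plain set would do the same job in this case:

```python
sorted(set(companies))  # duplicates removed, names sorted
```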

AWS: run multiple commands in multiple AWS accounts in an AWS Organization at the same time

** USE AT YOUR OWN RESPONSIBILITY **

I needed to run the same AWS CLI command in multiple accounts within an AWS Organization.

I like code that is easy to read, does what it is meant to do, is quick to run, and is "safe" and efficient.

It might not be perfect for everyone, but it does what I need it to do well, and it might be useful for others, or for me again in the future.

So, I built a script to do what I need in a way that can be re-used for multiple commands in multiple accounts at the same time.

If you have, for example, 100+ AWS accounts, you might want to "slow down" this script.

For example, I sometimes use the for loop below to "slow down" my script :) - the argument is the leading digit of the account ID (see the grep ^${number} in the script below), so each pass only works on a subset of the accounts.

for NUMBER in $(seq 0 9); do
    ./99-run-this-command-on-this-account.sh ${NUMBER} ; sleep 20
done

Script to run a command on multiple AWS accounts within the organisation

Important

cat 99-run-this-command-on-this-account.sh

#!/bin/bash


run_this_command () {
      account_id=$1
    account_name=$2
       role_name=$3

    echo "--------------------------------------------"
    echo "Going to run a command on ${account_id}, ${account_name}, using the role: ${role_name}"

    # assume a role in the account
    new_role=$(aws sts assume-role --role-arn "arn:aws:iam::${account_id}:role/${role_name}" --role-session-name ${account_id}-${role_name})

    AWS_ACCESS_KEY_ID=$(echo ${new_role} | jq -r '.Credentials.AccessKeyId' )
    AWS_SECRET_ACCESS_KEY=$(echo ${new_role} | jq -r '.Credentials.SecretAccessKey' )
    AWS_SESSION_TOKEN=$(echo ${new_role} | jq -r '.Credentials.SessionToken' )

    ACCOUNT_ID=$(echo ${new_role} | jq -r '.AssumedRoleUser.Arn' | cut -f 5 -d ':')

    export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    export AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}

    export ACCOUNT_ID=${ACCOUNT_ID}

    echo "Got key_id >>${AWS_ACCESS_KEY_ID}<< for account_id >>${account_id}, ${account_name}<< "

    #aws s3 ls

    aws securityhub update-standards-control \
        --standards-control-arn "arn:aws:securityhub:eu-west-2:${account_id}:control/cis-aws-foundations-benchmark/v/1.2.0/1.13" \
        --control-status "DISABLED" \
        --disabled-reason "SCP applied at the root level for the organisation will block any root actions."

    aws securityhub update-standards-control \
        --standards-control-arn "arn:aws:securityhub:eu-west-2:${account_id}:control/cis-aws-foundations-benchmark/v/1.2.0/1.14" \
        --control-status "DISABLED" \
        --disabled-reason "SCP applied at the root level for the organisation will block any root actions."

    aws securityhub update-standards-control \
        --standards-control-arn "arn:aws:securityhub:eu-west-2:${account_id}:control/aws-foundational-security-best-practices/v/1.0.0/IAM.6" \
        --control-status "DISABLED" \
        --disabled-reason "SCP applied at the root level for the organisation will block any root actions."

    #for bucket in $(aws s3api list-buckets | jq '.Buckets[].Name' | sed s/'"'//g) ; do
    #    echo "${account_id}, ${account_name}, enabling PublicAccessBlockConfiguration for bucket: ${bucket}" ;
    #    #aws s3api get-public-access-block --bucket ${MYBUCKET} | jq '.PublicAccessBlockConfiguration' ;
    #    aws s3api put-public-access-block \
    #          --bucket ${bucket} \
    #          --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
    #done

}




number=$1
role_name=<ROLE_NAME_IN_HERE>
filter_name="prod"

# DO I NEED to filter by account name?
#
# DO I NEED a specific list of accounts?  >> command to list account - 'aws organizations list-accounts'
#
# Below, sample filter to list account by creation date.
# cat ./results/00-list-of-accounts-in-org.txt
# jq -r '.Accounts[] | "\(.JoinedTimestamp);\(.Id);\(.Name)" ' ./results/00-list-of-accounts-in-org.txt  | sort | grep '2022-03' | awk -F ';' '{print $2 ";" $3}'
# jq -r '.Accounts[] | "\(.JoinedTimestamp);\(.Id);\(.Name)" ' ./results/00-list-of-accounts-in-org.txt  | sort | grep '2022-03' | awk -F ';' '{print $2 ";" $3}' > work-on-this-list-of-accounts.txt
#
# pick your for-loop
#
#for account in $(jq '.Accounts[] | .Id + ";" + .Name' ./results/00-list-of-accounts-in-org.txt | tr -d '"' | sort | grep ^${number} | grep -v ${filter_name} ); do
#for account in $(jq '.Accounts[] | .Id + ";" + .Name' ./results/00-list-of-accounts-in-org.txt | tr -d '"' | sort | grep ^${number} ); do
for account in $(cat ./work-on-this-list-of-accounts.txt | tr -d '"' | sort | grep ^${number} ); do

    #sample account result: "000123456789;my-super-aws-account-name"
    account_id=$(  echo ${account} | cut -f1 -d ';' )
    account_name=$(echo ${account} | cut -f2 -d ';' )

    echo "Getting ready to run a command on this account : ${account_id}; ${account_name}
        "
    run_this_command ${account_id} ${account_name} ${role_name} &

done

AWS Systems Manager (AWS SSM) to create a private networking tunnel to resources in a private subnet

Prerequisite

You need the Session Manager plugin for the AWS CLI installed on your laptop/desktop, and the SSM agent running on the target instance - documentation here: https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html

AWS SSM: create a tunnel to a Linux instance in an AWS private subnet

1) Assume a role in the account

2) Get the instance ID you want to tunnel to

3) Start a session with AWS SSM agent

Sample command for creating a tunnel between the instance on port 22 and my PC/laptop localhost on port 5222

# Using a variable, so it is easier to reuse the command

instance_id="i-Xxxxxx"


# Sample command for creating a tunnel to the instance on port 22 and my PC/laptop localhost on port 5222

aws ssm start-session --target ${instance_id} \
    --document-name AWS-StartPortForwardingSession \
    --parameters portNumber="22",localPortNumber="5222" \
    --region eu-west-2

After creating the tunnel to the instance, you still need a valid SSH key to SSH into the EC2 instance (for example, ssh -p 5222 ec2-user@localhost).

Via SSM in the console, you could add your public key to the instance's authorized_keys file - here is another website explaining that: https://www.ssh.com/academy/ssh/authorized-keys-file


Example using AWS SSM to create a private networking tunnel to remote-desktop into a Windows instance in a private subnet

Sample command for creating a tunnel to the instance on port 3389 and my PC/laptop localhost on port 5222

Same steps 1) and 2) as above.

3) Create the tunnel with the RDP port as a destination portNumber

# Using a variable, so it is easier to reuse the command

instance_id="i-Xxxxxx"


# Sample command for creating a tunnel to the instance on port 3389 and my PC/laptop localhost on port 5222

aws ssm start-session --target ${instance_id} \
    --document-name AWS-StartPortForwardingSession \
    --parameters portNumber="3389",localPortNumber="5222" \
    --region eu-west-2

Now, using your favourite remote desktop application, you can RDP to localhost:5222, which will be tunnelled to the Windows instance in the private subnet on port 3389.


Additional documentation

  • "Port Forwarding Using AWS System Manager Session Manager"

https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/

  • "... Tunnel through AWS Systems Manager to access my private VPC resources"

https://aws.amazon.com/premiumsupport/knowledge-center/systems-manager-ssh-vpc-resources/

AWS pages with dark-theme feature

Have you tried the AWS dark theme feature with the cookie named awsc-color-theme?

It is still a work in progress from AWS, but it looks pretty good already; if you prefer darker pages, I think you will like this tip.

To try this feature, on your browser do the following:

Open the browser Developer Tools (right-click then Inspect, or F12, or fn+F12 on macOS),
go to Application, then Storage, then Cookies, and choose the available cookie.

At the end of the list that opens, double-click after the last row to create a new cookie entry.

In this new cookie entry:

add the "Name" as awsc-color-theme ← !! "awsc-", not just "aws-"

add the "Value" as dark,

and finally the "Domain" as .amazon.com.

Then refresh the page.


creating the awsc-color-theme cookie entry

awsc-color-theme, dark, .amazon.com

00-awsc-color-theme-cookies.png


before the awsc-color-theme cookie entry

01-awsc-color-theme-s3-light.png


after the awsc-color-theme cookie entry

02-awsc-color-theme-s3-dark.png


Credits

I first found out about this feature from barney_parker,

and then found it here: https://twitter.com/rpadovani93/status/1419583500290859008


Happy learning!

Antonio Feijao UK

AWS EC2 userdata sample script to build a webpage

Sample AWS EC2 userdata script that installs Apache and automatically creates an index.html file as a landing webpage with information about the instance: instanceId, availabilityZone, instanceType and region. It can also be used in the launch configuration of an Auto Scaling Group (ASG), making it easy to show multiple instances serving behind an Elastic Load Balancing (ALB) load balancer.

At your own risk, always review what you are running.

To run this userdata script, add the following to the EC2 userdata:

#!/bin/bash
curl https://raw.githubusercontent.com/AntonioFeijaoUK/aws-ec2-userdata-samples/master/sample01-hello-world-region-az.sh | bash

Repository is here https://github.com/AntonioFeijaoUK/aws-ec2-userdata-samples

Direct link is here https://raw.githubusercontent.com/AntonioFeijaoUK/aws-ec2-userdata-samples/master/sample01-hello-world-region-az.sh
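If you prefer launching from code, here is a minimal boto3 sketch that passes the same userdata; the AMI ID, instance type and region are placeholders:

```python
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-2')  # region is a placeholder

USERDATA = """#!/bin/bash
curl https://raw.githubusercontent.com/AntonioFeijaoUK/aws-ec2-userdata-samples/master/sample01-hello-world-region-az.sh | bash
"""

# ImageId is a placeholder - pick a current Amazon Linux AMI in your region.
ec2.run_instances(
    ImageId='ami-xxxxxxxxxxxxxxxxx',
    InstanceType='t3.micro',
    MinCount=1,
    MaxCount=1,
    UserData=USERDATA,  # boto3 base64-encodes userdata for RunInstances
)
```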

Other samples on AWS

If you tried it and it helped you understand how it works better, please leave a comment.


Happy learning

Antonio Feijao UK

Docker images and containers - start with the basics, plus bonus advanced security

Sample basic commands for Docker containers and images

docker --help - list the help options for docker command

docker run --help - list the help option for docker run command

docker images - list your local docker images

docker ps - list local running docker containers

docker ps -a - list local running OR stopped docker containers

docker rm f66ae9b25d96 - remove docker container with ID f66ae9b25d96

docker run --rm centos:7 tail -f /dev/null - runs a docker container from image centos:7, keeps container running with tail -f /dev/null command

docker exec -it 513ee56fde09 /bin/sh - interactive shell on the container that is running

docker exec -it -u root 8891619cbcf0 /bin/sh - interactive shell as the root user (NOT RECOMMENDED, security risk)

docker kill 513ee56fde09 - kill (stops) the docker container that was running

docker run ubuntu - runs a docker container from the image ubuntu. It will download the image ubuntu:latest from Docker Hub if it does not find it locally on the machine.

docker pull amazonlinux - pulls or updates the local docker image named amazonlinux

docker commit 3808b8454239 centos-suresh:v01 - saves the current container as an image, from which you can run more containers afterwards

(...)


Bonus advanced security with Docker containers and images

chroot

unshare

....


Happy learning

Antonio Feijao UK

AWS VPC flow logs, CloudWatch Logs and CloudTrail logs filter examples

Official documentation - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html


Basic sample VPC flow logs filter

[version, account, eni, source, destination, srcport, destport="8000", protocol, packets, bytes, windowstart, windowend, action, flowlogstatus]


VPC flow logs example with combination of AND

[version, account, eni, source="185.2*", destination, (srcport!="80" && srcport!="443"), (destport!="80" && destport!="443"), protocol, packets, bytes, windowstart, windowend, action, flowlogstatus]


Basic Expression Operators

= -- EQUAL

!= -- NOT EQUAL

< -- SMALLER THAN

> -- GREATER THAN

<= -- SMALLER THAN OR EQUAL

>= -- GREATER THAN OR EQUAL

&& -- AND

|| -- OR


CloudTrail logs filter examples

  • filter by failed console logins, 'ConsoleLogin="Failure"'

{ $.eventSource = "signin.amazonaws.com" && $.responseElements.ConsoleLogin = "Failure" }

  • exclude known IP addresses

{ ($.sourceIPAddress != "52.123.123.5") && ($.sourceIPAddress != "33.123.123.*") && ($.sourceIPAddress != "*.amazonaws.com") }

  • AWS login without using MFA

{ $.eventSource="signin.amazonaws.com" && $.additionalEventData.MFAUsed="No" }
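These patterns can also be applied from code, for example to create a CloudWatch Logs metric filter for the failed-console-logins pattern above; a minimal boto3 sketch (the log group and metric names are placeholders):

```python
import boto3

logs = boto3.client('logs')

# Log group name is a placeholder - point it at your CloudTrail log group.
logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='console-login-failures',
    filterPattern='{ $.eventSource = "signin.amazonaws.com" && $.responseElements.ConsoleLogin = "Failure" }',
    metricTransformations=[{
        'metricName': 'ConsoleLoginFailures',
        'metricNamespace': 'CloudTrailMetrics',
        'metricValue': '1',
    }],
)
```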


Happy learning

Antonio Feijao UK