Our AWS account got compromised after their outage
Could there be any link between the two events?
Here is what happened:
Roughly 600 instances were spawned within 3 hours before AWS flagged it and sent us a health event. Numerous domains were verified in SES, and we could see that an SES quota increase request had been made.
We are still investigating the vulnerability on our end. Our initial suspect list has two candidates: an exposed API key, or console access where MFA wasn't enabled.
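If anyone is triaging the same two suspects, here's a minimal boto3 sketch (assuming credentials with IAM read permissions) that flags console users without MFA and shows when each access key was last used. It's a starting point, not a full audit.

```python
# Sketch: flag console users without MFA and list access key status / last use.
# Assumes boto3 and iam:List* / iam:Get* permissions on the account being audited.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]

        # Console password with no MFA device is one of the two suspects above.
        has_password = True
        try:
            iam.get_login_profile(UserName=name)
        except iam.exceptions.NoSuchEntityException:
            has_password = False
        mfa = iam.list_mfa_devices(UserName=name)["MFADevices"]
        if has_password and not mfa:
            print(f"{name}: console access WITHOUT MFA")

        # Stale or unexpected access keys are the other suspect.
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            used = last["AccessKeyLastUsed"].get("LastUsedDate", "never")
            print(f"{name}: key {key['AccessKeyId']} ({key['Status']}), last used {used}")
```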
I would normally say "that must be a coincidence", but I had a client account compromised as well, and it was very strange:
The client is a small org, and two very old IAM users suddenly had recent (yesterday) console logins and password changes.
I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.
These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop up on this particular day.
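For anyone checking their own dormant users: the IAM credential report makes this quick. A rough boto3 sketch (assuming permissions to generate and fetch the report) that prints creation time, last password use/change, and MFA status per user:

```python
# Sketch: pull the IAM credential report and look for users whose console password
# was changed or used recently (e.g. during the outage window).
# Assumes boto3 and iam:GenerateCredentialReport / iam:GetCredentialReport permissions.
import csv
import io
import boto3

iam = boto3.client("iam")
iam.generate_credential_report()  # may need a short retry loop while the report builds
report = iam.get_credential_report()["Content"].decode("utf-8")

for row in csv.DictReader(io.StringIO(report)):
    print(
        row["user"],
        "created:", row["user_creation_time"],
        "pwd last used:", row["password_last_used"],
        "pwd last changed:", row["password_last_changed"],
        "mfa:", row["mfa_active"],
    )
```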
CloudTrail events should be able to show WHAT created the EC2 instances. Off the top of my head, I think it's the RunInstances event.
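Something along these lines should surface it (a sketch, assuming boto3, cloudtrail:LookupEvents permission, and that the launches happened in us-east-1; lookup_events only covers the last 90 days of management events):

```python
# Sketch: list recent RunInstances events from CloudTrail to see which principal,
# access key, and source IP launched the EC2 instances.
import json
import boto3

ct = boto3.client("cloudtrail", region_name="us-east-1")  # region is an assumption

pages = ct.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}]
)
for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        print(
            event["EventTime"],
            detail["userIdentity"].get("arn"),
            detail["userIdentity"].get("accessKeyId"),
            detail.get("sourceIPAddress"),
        )
```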
A couple of folks on Reddit said that while they were refreshing during the outage, they were briefly logged in as an entirely different user.
I can't imagine it's related. If it is related, hello to Bloomberg News or whoever ends up reading this thread, because that would be a catastrophic breach of customer trust that would likely never fully recover.
Highly likely to be a coincidence. Typically it's an exposed access key. An exposed password for console access without MFA happens, but it's less common.
Any chance you did something crazy while troubleshooting the downtime (before you knew it was an AWS issue)? I've had to deal with a similar situation, and in my case I was lazy and pushed a key to a public repo. (Not saying you did, just saying in my case it was a leaked API key.)
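If you want a quick sanity check for that failure mode, here's a rough sketch that scans a checked-out repo for strings shaped like long-lived AWS access key IDs (they start with "AKIA"). It's not a substitute for proper tools like git-secrets or trufflehog, it doesn't walk git history, and the path is a placeholder:

```python
# Sketch: scan working-tree files for strings that look like AWS access key IDs.
import os
import re

KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")  # long-lived key IDs are 20 chars, AKIA-prefixed

for root, _dirs, files in os.walk("path/to/repo"):  # placeholder path
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if KEY_ID.search(line):
                        print(f"{path}:{lineno}: possible access key ID")
        except OSError:
            pass
```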
Sounds like a coincidence to me
Is it possible that people who had already managed to get access (and confirmed it) have been waiting for a hiccup in AWS infrastructure so they could hide among the chaos? Maybe the access token was exposed weeks or months ago, but instead of acting on it right away, they idled until something big was going on.
Certainly feels like a strategy I'd explore if I were on that side of the aisle.
If I were a burglar holding a stolen key to a house, waiting to pick a good day, a city-wide blackout would probably feel like a good one.
It's not uncommon for machines to get exposed during troubleshooting. Just look at the CrowdStrike incident the other year: people enabled RDP on a lot of machines to "implement the fix", and now many of those machines are more vulnerable than if they had never installed that garbage security software in the first place.
A lot of keys and passwords were panic-entered on insecure laptops yesterday.
Do not discount the possibility of regular malware.
us-east-1 is unimaginably large. The last public info I saw said it had 159 datacenters. I wouldn't be surprised if many millions of accounts are primarily located there.
While this could possibly be related to the downtime, I think this is probably an unfortunate case of coincidence.
Times of panic are when people are most vulnerable to phishing attacks.
Do a total password reset and tell your AWS representative. They usually let it slide in good faith.
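Part of that cleanup is killing any access keys you don't trust. A minimal sketch for deactivating every key on a suspect user (the user name here is a placeholder, not from this thread), so you can rotate deliberately afterwards:

```python
# Sketch: deactivate all access keys for a user believed to be compromised.
# Assumes boto3 and iam:ListAccessKeys / iam:UpdateAccessKey permissions.
import boto3

iam = boto3.client("iam")
suspect_user = "old-dormant-user"  # placeholder user name

for key in iam.list_access_keys(UserName=suspect_user)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=suspect_user,
        AccessKeyId=key["AccessKeyId"],
        Status="Inactive",
    )
    print("deactivated", key["AccessKeyId"])
```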