Creating AWS EC2 Spot Instances with a Launch Template

With EC2 you have a huge variety of instance types to choose from, with each type offering a range of sizes from small to large:

  • General Purpose
  • Compute Optimized
  • GPU instances
  • FPGA instances
  • Memory Optimized
  • Storage Optimized

For each of these types you have a further range of options for how you choose to provision an instance, which has an impact on how the instance is priced:

  • On-Demand: requested and provisioned as you need them
  • Spot Instances: spare-capacity instances auctioned at a lower price than On-Demand, but not always available. You request a maximum price; if an instance is available at that price it is provisioned, otherwise you wait until one becomes available (see current pricing here)
  • Reserved Instances
  • Dedicated Hosts
  • Dedicated Instances

I’ve never created a Spot Instance before, and I’m curious what the steps are. As with every service on AWS there’s more than one approach; I’m going to look at using a Launch Template:

By creating a Launch Template you can configure a number of settings for your instance (AMI image, EC2 instance type, etc.).
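If you'd rather script this step than click through the console, here's a minimal sketch using the AWS SDK for JavaScript – the template name and key pair name are placeholder assumptions; the AMI and instance type are the ones from my request:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

// create a Launch Template capturing the instance settings
var params = {
    LaunchTemplateName: 'spot-template-example',  // placeholder name
    LaunchTemplateData: {
        ImageId: 'ami-41e0b93b',     // AMI to launch
        InstanceType: 't2.small',    // instance type and size
        KeyName: 'my-keypair'        // placeholder: an existing EC2 key pair
    }
};

ec2.createLaunchTemplate(params, function(err, data) {
    if (err) console.log(err);
    else console.log(data.LaunchTemplate.LaunchTemplateId);
});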

From the Request Spot Instance page in the EC2 Management Console, you can now use your Launch Template which prepopulates most of the settings for your requested Spot Instance:

Further down on this page is where you set your pricing – it defaults to buying at the cheapest available price, capped at the current On-Demand price (if the spot price rises to match the On-Demand price then there are no savings from Spot pricing and you might as well use On-Demand instead):

After submitting, the request shows in a ‘submitted’ state:

So here’s my first error:

Repeated errors have occured processing the launch specification “t2.small, ami-41e0b93b, Linux/UNIX (Amazon VPC), us-east-1d”. It will not be retried for at least 13 minutes. Error message: com.amazonaws.services.ec2.model.AmazonEC2Exception: Network interfaces and an instance-level subnet ID may not be specified on the same request (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterCombination)

Network interfaces and an instance-level subnet ID – I did add a network interface because I wanted to select the public IP option. Let’s create a new version of the template and try again.
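The error says the request can't specify both a top-level subnet ID and a network interface. One way around it (an assumption on my part, not necessarily what the console does) is to specify the subnet on the network interface entry itself when creating the new template version – the template ID is a placeholder; the subnet ID is the one from my setup:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

// create a new template version with the subnet specified
// on the network interface, not at the instance level
var params = {
    LaunchTemplateId: 'lt-00000000000000000',   // placeholder template id
    SourceVersion: '1',
    LaunchTemplateData: {
        NetworkInterfaces: [{
            DeviceIndex: 0,
            SubnetId: 'subnet-f756c7bf',
            AssociatePublicIpAddress: true
        }]
    }
};

ec2.createLaunchTemplateVersion(params, function(err, data) {
    if (err) console.log(err);
    else console.log(data.LaunchTemplateVersion.VersionNumber);
});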

Now my request is in ‘pending fulfillment’ status:

Same error. One of the answers here suggests this is because I don’t have a Security Group. Ok, let’s add a Security Group to the launch template and try again.

Same error even though I added a Security Group to my template. Then I noticed this in the Request Spot Instance options – when you select your Template, if you’ve made version updates to it, make sure you select the latest version, as it defaults to 1. In other words, I was retrying with the original template version that I already know doesn’t work:
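If you script the request instead, you can sidestep this by asking for the ‘$Latest’ template version explicitly. A sketch of one way to launch a Spot Instance from a template in code, using RunInstances with a spot market option (the template ID is a placeholder):

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

// launch a Spot Instance from the latest version of the template
var params = {
    LaunchTemplate: {
        LaunchTemplateId: 'lt-00000000000000000',  // placeholder template id
        Version: '$Latest'   // avoid defaulting to version 1
    },
    InstanceMarketOptions: {
        MarketType: 'spot'   // request spot rather than On-Demand
    },
    MinCount: 1,
    MaxCount: 1
};

ec2.runInstances(params, function(err, data) {
    if (err) console.log(err);
    else console.log(data.Instances[0].InstanceId);
});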

Next error:

com.amazonaws.services.ec2.model.AmazonEC2Exception: The security group ‘sg-825bfe14’ does not exist in VPC ‘vpc-058f5d7c’ (Service: AmazonEC2; Status Code: 400; Error Code: InvalidGroup.NotFound)

Hmm. I’m interpreting this as my Security Group not belonging to the default VPC that my instance was assigned to, so let’s create a new VPC, and then a new Security Group in that VPC:
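For reference, the same two steps from the SDK might look like this – the CIDR block is the one from my setup; the group name and description are placeholders:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

// create a new VPC, then a Security Group inside it
ec2.createVpc({CidrBlock: '10.0.0.0/16'}, function(err, data) {
    if (err) return console.log(err);
    var vpcId = data.Vpc.VpcId;

    var sgParams = {
        GroupName: 'spot-example-sg',                         // placeholder
        Description: 'Security Group for spot instance example',
        VpcId: vpcId   // tie the group to the new VPC
    };
    ec2.createSecurityGroup(sgParams, function(err, data) {
        if (err) console.log(err);
        else console.log(data.GroupId);
    });
});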

Now create a new Template version with this new VPC and SG.

Next error:

Error message: com.amazonaws.services.ec2.model.AmazonEC2Exception: Security group sg-b756adc0 and subnet subnet-f756c7bf belong to different networks. (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameter)

SG and subnet belong to different networks. Ok, getting close. Let’s take a look.

On the VPC for my SG I have: 10.0.0.0/16

On my subnet for us-east-1d I have: 10.0.0.0/24

Ah, OK. Let’s add a new subnet for us-east-1d in the new VPC with the same CIDR block and try again.
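A sketch of that subnet creation – the VPC ID is a placeholder for the new VPC created above:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

// create a subnet in the new VPC, in the same AZ as the spot request
var params = {
    VpcId: 'vpc-00000000',          // placeholder: the new VPC from above
    CidrBlock: '10.0.0.0/24',       // sits inside the VPC's 10.0.0.0/16
    AvailabilityZone: 'us-east-1d'
};

ec2.createSubnet(params, function(err, data) {
    if (err) console.log(err);
    else console.log(data.Subnet.SubnetId);
});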

When creating your spot request, make sure you select your VPC and subnet to match:

Ahah! Now we’re looking good and my Spot Instance is being provisioned:

Ugh, next error:

Looks like in my template I didn’t give a device name (‘missing device name’) for my EBS volume, e.g. /dev/sdb. New template version, trying again.

Next error:

Error message: com.amazonaws.services.ec2.model.AmazonEC2Exception: The parameter iops is not supported for gp2 volumes. (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterCombination)

Geesh. OK, removing the iops value in the template and trying again (it would help to have some validation on the template form).
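Both EBS errors come down to the block device mapping in the template. A corrected mapping might look like this – a device name is supplied, and there's no Iops value since gp2 volumes don't accept one (the volume size is a placeholder assumption):

// block device mapping for the LaunchTemplateData of a
// createLaunchTemplateVersion call: DeviceName is required,
// and Iops must be omitted for gp2 volumes
var launchTemplateData = {
    BlockDeviceMappings: [{
        DeviceName: '/dev/sdb',
        Ebs: {
            VolumeSize: 8,       // GiB - placeholder size
            VolumeType: 'gp2'    // gp2: no Iops parameter allowed
        }
    }]
};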

And now:


Fulfilled, we made it, a Spot Instance provisioned!

At this point though my instance was started without a public IP. Now that I’ve got the Security Group and subnet issues sorted, I’ll go back to the template and add a network interface and select ‘assign public IP’. Rather than assigning this on the network interface though, it looks like it’s also an option in the subnet config, so I edited the subnet and enabled it there:
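The same subnet setting can be flipped from the SDK too – a minimal sketch, with a placeholder for the subnet ID created earlier:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

// enable auto-assign public IP on the subnet, so instances
// launched into it get a public IP by default
var params = {
    SubnetId: 'subnet-00000000',            // placeholder subnet id
    MapPublicIpOnLaunch: {Value: true}
};

ec2.modifySubnetAttribute(params, function(err, data) {
    if (err) console.log(err);
    else console.log('public IP auto-assign enabled');
});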

And now we’re up, with a public IP! Whether my User Data init script actually did what it was supposed to is the next thing to check – I’ll look at that in the next post!

Using Netflix Eureka with Spring Cloud / Spring Boot microservices (part 2)

Several months back I started to look at setting up a simple example app with Spring Boot microservices using Netflix Eureka as a service registry. I got distracted by other shiny things for a few months, but just went back to finish this off.

The example app comprises 3 Spring Boot apps:

  • SpringCloudEureka: runs the Eureka server, enabled with @EnableEurekaServer
  • SpringBootService1 with endpoint POST /service1/example1/address
    • registers with Eureka server with @EnableDiscoveryClient
    • uses a Ribbon load-balancer-aware RestTemplate to call Service2 to validate a zipcode
  • SpringBootService2 provides endpoint GET /service2/zip/{zipcode} which is called by Service1
    • also registers with Eureka server with @EnableDiscoveryClient so it can be looked up by Service1

SpringBootService1 and SpringBootService2 both register with the Eureka server using the annotation @EnableDiscoveryClient. With a Ribbon load-balancer-aware RestTemplate, SpringBootService1 is able to call SpringBootService2 using a regular Spring RestTemplate call, but one that is Eureka-aware and able to look up SpringBootService2 by service name in place of an absolute IP and port.

This allows the services to be truly decoupled. Service1 knows it needs to call Service2 for a particular purpose (in this case, to validate a zip code), but it doesn’t need to know where Service2 is deployed or what IP address/port it is available on.

Example code is available on github here.

Publishing a message from a webapp to an AWS SQS Queue via AWS Lambda

I’m building a simple project that needs to receive requests from a simple webpage and process them sequentially over time (more on this later!). An AWS SQS queue seems like a good fit for what I’m looking for. Rather than creating something heavyweight like a REST endpoint running in an EC2 instance, this also seemed like a good opportunity to look into calling an AWS Lambda function from a webpage. This has the benefit of not needing to pay for an EC2 instance that is up but idle.

To get started I created an AWS SQS queue using the AWS Console (the name of the queue might give away what I’m working on 🙂).

I then created a Lambda function to post a message to the queue, using the script from this gist here:
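The gist isn't reproduced here, but a minimal Node.js handler along these lines would do the job – the queue URL is a placeholder; substitute your own:

var AWS = require('aws-sdk');
var sqs = new AWS.SQS();

// placeholder: the URL of the queue created above
var QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/SOME_ID_HERE/test-messages';

exports.handler = function(event, context, callback) {
    var params = {
        QueueUrl: QUEUE_URL,
        MessageBody: JSON.stringify(event)  // pass the incoming event through
    };
    sqs.sendMessage(params, function(err, data) {
        if (err) {
            console.log('error: Fail Send Message: ' + err);
            callback('error');
        } else {
            console.log('messageId: ' + data.MessageId);
            callback(null, data.MessageId);
        }
    });
};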

Testing the Lambda from the AWS Console I get this error:

2017-11-12T17:07:19.969Z error: Fail Send Message: AccessDenied: Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
2017-11-12T17:07:20.007Z {"errorMessage":"error"}

Per the post here, we need to update the default policy we added during creation of the Lambda to include permission to post messages to the queue. The missing permissions are sqs:SendMessage and sqs:GetQueueUrl on your SQS Queue resource (insert the ARN for your queue in the Resource value):

{
  "Action": [
    "sqs:SendMessage",
    "sqs:GetQueueUrl"
  ],
  "Effect": "Allow",
  "Resource": "arn:aws:sqs:us-east-1:SOME_ID_HERE:test-messages"
}

Using the Saved Test Event, now we’re looking good!

2017-11-12T17:32:03.906Z	messageId: f04a...
END RequestId: 658a...
REPORT RequestId: 658a... Duration: 574.09 ms
Billed Duration: 600 ms
Memory Size: 128 MB
Max Memory Used: 42 MB

Let’s take a look in our queue from the SQS Management Console and see if our payload is there:

Now we’ve got our Lambda posting a message into our queue, how can we call it from a webpage using some JavaScript? Looking in the AWS docs there’s an example here. This page also walks through configuring the AWS SDK to use a Cognito identity pool for unauthenticated access to call the Lambda. Step-by-step instructions for creating Cognito pools via the AWS Console are in the docs here. It seems there’s a gap in the docs though, as they don’t explicitly state how to create a Cognito pool for unauthenticated access.

Just out of curiosity, if you attempt to call your Lambda function without any authentication, you get an error that looks like this:

Error: Missing credentials in config
 at credError (bundle.js:10392)
 at Config.getCredentials (bundle.js:10433)
 at Request.VALIDATE_CREDENTIALS (bundle.js:11562)

OK, so back to creating the Cognito pool. From the AWS Console, select Cognito. The option you need is ‘Manage Federated Identities’, which is where you create a pool that allows unauthenticated access:

Check the box: ‘Enable access to unauthenticated identities’:
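If you prefer to script it, the same pool can be created with the SDK – a sketch, with the pool name as a placeholder:

var AWS = require('aws-sdk');
var cognito = new AWS.CognitoIdentity({region: 'us-east-1'});

// create an identity pool that allows unauthenticated identities
var params = {
    IdentityPoolName: 'PostToQueuePool',    // placeholder name
    AllowUnauthenticatedIdentities: true    // equivalent of the checkbox above
};

cognito.createIdentityPool(params, function(err, data) {
    if (err) console.log(err);
    else console.log(data.IdentityPoolId);
});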

Now we’re back to the AWS SDK for JavaScript and can plug our Cognito pool id into this config:

AWS.config.update({region: 'REGION'});
AWS.config.credentials = new AWS.CognitoIdentityCredentials({IdentityPoolId: 'IdentityPool'});

My JavaScript to call the Lambda function so far looks like this:

var AWS = require('aws-sdk');

//init AWS credentials with unauthenticated Cognito Identity pool
AWS.config.update({region: 'us-east-1'});
AWS.config.credentials = new AWS.CognitoIdentityCredentials({IdentityPoolId: 'pool-id-here'});

var lambda = new AWS.Lambda();
// create payload for invoking Lambda function
var lambdaCallParams = {
    FunctionName : 'LightsOnMessageToQueue',
    InvocationType : 'RequestResponse',
    LogType : 'None'
};

function callLambda(){
    var result;
    lambda.invoke(lambdaCallParams, function(error, data) {
        if (error) {
            console.log(error);
        } else {
            result = JSON.parse(data.Payload);
            console.log(result);
        }
    });
}

module.exports = {
    callLambda: callLambda
}

Calling the JavaScript now, I get a different error:

assumed-role/Cognito_PostToQueueUnauthRole/CognitoIdentityCredentials
is not authorized to perform: lambda:InvokeFunction 
on resource: arn:aws:lambda:us-east-1:xxx:function:LightsOnMessageToQueue"}

The error is telling us that the permission lambda:InvokeFunction is missing for the role Cognito_PostToQueueUnauthRole, so let’s go back, edit the role, and add it. The role was created when we stepped through the Cognito setup, but to edit it we need to go to the IAM section of the AWS Console. Searching for Lambda-related policies to include in this role, it looks like this is what we’re looking for:

We don’t want to grant InvokeFunction on all (*) resources though, so we can use the JSON for this policy to add a new ‘inline policy’ to our role, and then edit it to specify the ARN for our function.
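The resulting inline policy would look something like this (the account id is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:SOME_ID_HERE:function:LightsOnMessageToQueue"
    }
  ]
}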

Back to the JavaScript app, we can now see the SDK making several XHR requests to AWS, including a POST to /functions/LightsOnMessageToQueue/invocations returning a 200.

Checking the AWS Console, we’re now successfully making calls to our Lambda function, and messages are being posted to the queue:

To host my simple webpage, since it’s static content it can easily be served from AWS S3. I created a new Bucket, granted public read access, and enabled the ‘static website hosting’ option:

To package the app for deployment, AWS have a sample webpack.config.js here. I did an ‘npm run build’ and then uploaded the index.html and bundle.js to my bucket.

So far this is one part of a project, I’ll post another update when I’ve made some progress on the next part.


Installing Docker in an AWS EC2 instance

AWS offers their own EC2 Container Service (ECS), which simplifies deploying Docker containers to EC2 instances (and clusters of instances) and managing your containers. If you want to do it yourself though, you can easily install Docker in your own instance.

For example, in an Ubuntu EC2 instance:

sudo apt-get install docker.io

Start the docker service with:

sudo service docker start
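To check the install worked, you can run the standard hello-world image:

sudo docker run hello-world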

If you want to manage your own Docker install on EC2, AWS have a guide walking you through what you need to know – for further details see here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

(The latest Docker apt packages are docker-ce and docker-ee, installed from Docker’s own repository rather than Ubuntu’s default docker.io package – see the Docker docs here for more info)