Table of Contents
- About this tutorial
- Content and duration
- What you will use
- Before you begin: prerequisites
- Step 1: Create a Docker repository on ECR
- Step 2: Create a cluster on ECS
- Step 3: Define a Task
- Step 4: Create a Service
- Step 5: Invoke REST operations on the Service
Amazon EC2 Container Service ("ECS" hereafter) is a scalable container management service that allows you to manage Docker containers on a cluster of Amazon EC2 instances. Some of the latest improvements, including auto-scaling at the container level and load balancing to multiple containers running the same image with dynamic port mapping on a given instance, make it a viable and arguably simpler alternative to Kubernetes and Docker Swarm.
TIBCO BusinessWorks Container Edition provides a simple, visual, and low-code way to create microservices, especially when it comes to orchestration workloads.
This tutorial, which should take approximately 20 to 30 minutes to complete (depending on how quickly you type and how savvy you are with keyboard shortcuts), will guide you through the steps required to make your first deployment of a microservice built with TIBCO BusinessWorks Container Edition on ECS. A sequel walks you through the additional steps of defining auto-scaling policies at both the container and instance levels.
Some of the terms we will use are closely related to ECS. A quick read about how ECS works and its basic concepts will give you a better grasp of what you can achieve in terms of container management and will enhance your experience.
We will make use of the Amazon Web Services CLI tools to perform a few operations related to the docker repository. You will need a valid Amazon Web Services account, and also have the AWS CLI set up and configured before working through this guide. Ensure that your account has the appropriate administrative privileges. Amazon provides extensive documentation on how to set up the AWS CLI should you need it.
Our base microservice will be the BookStore sample shipped with BusinessWorks Container Edition. Documentation is available online. Make sure you have reviewed its contents and tested it in your local development environment, both running in TIBCO Business Studio Container Edition and in a local Docker engine. As a result of that experimentation, you should have a ready-to-use bookstore application packaged as a BWCE-based Docker image named bwce-rest-bookstore-app.
Finally, we assume you have already set up the matching PostgreSQL database. In this tutorial, the database engine needs to be accessible from Amazon Web Services, so a good option is to host it on AWS Relational Database Service (RDS).
AWS ECS can pull images from external repositories; however, it requires the target repository to be secure, which makes external repositories a bit impractical for tutorial use, unless your organization has already set up a full-fledged, secure repository. Otherwise, creating one on Amazon's EC2 Container Registry (ECR) is the quickest way to fulfill the requirement.
Let us go through the (very few) steps required to do so.
Log on to AWS and navigate to the EC2 Container Service dashboard:
Now, kick start the creation of a new repository:
We shall name the repository bwce-bookstore as illustrated below:
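If you prefer the command line, the same repository can be created with the AWS CLI; the region shown below is only an example, so substitute your own:

```shell
# Create the bwce-bookstore repository on ECR (region is illustrative)
# The command prints the repository description, including its URI
aws ecr create-repository \
    --repository-name bwce-bookstore \
    --region ap-southeast-2
```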
Make a note of the URI of the repository that has been provisioned for you, as we will need it when setting up our ECS cluster.
The following screen also gives you instructions on how to interact with that repository and push images to it. Note that the build step is probably not necessary, as you already have a pre-built Docker image for the BookStore application. You will, however, need to adapt the fourth step to reflect the exact name of the local image you want to tag; by default, that is bwce-rest-bookstore-app.
Please perform all the listed steps, bearing in mind that you need to use the values provided in the instructions generated for you; the above screenshot pertains to my own AWS setup.
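For reference, those steps boil down to the following sketch. The account ID and region are placeholders to replace with the values from your own instructions; also note that recent versions of the AWS CLI use get-login-password, while older consoles suggested aws ecr get-login:

```shell
# Authenticate the local Docker client against your ECR registry
# (123456789012 and ap-southeast-2 are placeholders)
aws ecr get-login-password --region ap-southeast-2 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com

# Tag the locally built BookStore image with the repository URI (step 4, adapted)
docker tag bwce-rest-bookstore-app 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/bwce-bookstore:latest

# Push it to ECR
docker push 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/bwce-bookstore:latest
```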
Congratulations, you have performed the first step of this tutorial!
Let us start with a cluster of three instances spanning all the availability zones in our region - here Asia Pacific (Sydney). We will address the elasticity of the cluster in a further tutorial, so the initial cluster configuration is just a baseline. All the sizing figures can also be adjusted manually once the cluster has been created.
Within the ECS dashboard, pick the Clusters section on the left-hand side of the window, and click on the Create Cluster button as follows:
We will use the Community keyword in the name of every object created in this tutorial to avoid naming conflicts with your current EC2 setup, as some items are required to be uniquely named.
Let us hence begin with Community-Cluster, made of three m3.medium instances with 22 GiB of storage each - this is the minimum allowed by Amazon. We do not plan on remotely accessing the instances as part of this tutorial, so there is no need to specify a key pair to SSH in. These settings are illustrated below:
The next section of the form focuses on Network and Security settings. We will create a new VPC and use the default CIDR block, but will add a third Subnet so our VPC can span the three availability zones in Sydney; you may, however, adjust this if your own region has fewer availability zones. Let us also create a new Security Group and let (for now) all traffic in on port 8080 - remember that this is the default port the BookStore application uses. We will need to adjust the Security Group settings later on, and may consider closing direct traffic from the Internet to the individual instances of the cluster, as we will dispatch traffic through an upfront load balancer.
The last section of the form is about picking the right IAM role:
Let us now finish the creation of that cluster by clicking on the Create button at the bottom right:
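As an aside, a bare-bones logical cluster can also be registered from the AWS CLI; note that, unlike the console wizard we just used (which drives a CloudFormation stack), this command alone does not provision instances, a VPC, or Security Groups:

```shell
# Register the logical cluster only; instances, VPC, and Security Groups
# must then be provisioned separately
aws ecs create-cluster --cluster-name Community-Cluster
```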
The upcoming screen will let you know about the progress of that creation. Many AWS EC2 and VPC resources are generated as a result, so expect to wait for a minute for this to complete - or even longer if it is a busy time on AWS. A CloudFormation stack will also be generated to automate the creation and tear-down of the cluster. You may use that as a template to automate the provisioning of clusters in the future.
As designed, the VPC spans three Subnets distributed over the availability zones in our region, here ap-southeast-2a, ap-southeast-2b, and ap-southeast-2c. An auto-scaling group has already been set up for our cluster, but we will address auto-scaling in a second tutorial. Make a note of your VPC identifier, as we will need it in the next steps.
The outcome is illustrated below:
At this stage, we are done with the cluster settings: congratulations! A dashboard accessible from the ECS landing page will enable you to further control your cluster. You can get there now by clicking the View Cluster button (which is, this time, at the top).
This dashboard is illustrated below:
The most interesting tab at this stage is ECS Instances. Three instances have been created, spanning the availability zones:
We can now focus on defining the Task that will run on our cluster - in ECS lingo, this is the workload we will run as containers on the instances.
Click on the Task Definitions section on the left-hand side of your ECS landing page:
We will actually create a brand new Task Definition here, as illustrated below:
Let us fill out the first section of the form, which sets the basics. Our Task will be named Community-Task, and the Docker Network Mode will be Bridge, as we intend to bridge the ports of our image with those of the host in order to fulfill API calls. We won't discuss placement constraints in this tutorial, so leave that empty, as illustrated below.
We will also need an IAM Task Role here, so follow the pop-out link to the IAM Console and create a new IAM role as follows:
You will then be prompted to select the role type. It is a fair bit of scrolling down to find the role we need, Amazon EC2 Container Service Task Role:
Move on to Step 3 and add the AmazonEC2ContainerServiceAutoscaleRole policy in preparation for the upcoming tutorial on auto-scaling, as illustrated here:
Finally, let us name this role Community-TaskRole (even though you ultimately may want to choose a different name as the Role will be reusable across Task definitions), and proceed with the creation:
Now that the role is successfully created, close the pop-out tab and go back to the one pertaining to our Task Definition. There, refresh the list of roles and pick our latest creation:
The next section of our Task Definition form is all about the Docker image(s) to be used. In our case, we shall have a single one as the required PostgreSQL instance can be hosted on Amazon Web Services RDS, as suggested in the introduction. Several images could however be deployed for more complex use cases - explore this on your own once you have successfully completed this tutorial.
Let us add one container:
The first part of the Add container pop-in form is about the details of the container, which we will name Bookstore-Community-Container. Use the image details that you wrote down in Step 1 of this tutorial - the settings illustrated below pertain to my own repository. The suggested memory limit for the container (1024 MB) is plenty, considering the low footprint of the BookStore sample, but resource usage fine-tuning is not our main focus here.
Finally, the port mappings are simple, as we expose port 8080 in the image. By leaving the host port empty, you let ECS do dynamic port binding so that multiple containers can be deployed on the same host without port mapping conflicts. The application load balancer we will set up later on will distribute service calls to all containers from a single endpoint and automatically point to the appropriate destination ports as further containers are created.
The following section is about setting the environment required by our container. Let us ignore allocated CPU units here (as they are relevant mostly for multi-container Tasks) and keep this container as essential to the Task. Setting up entry points, commands, and a working directory is not required for the BookStore application. The Env Variables passed to Docker are much more interesting and should reflect the requirements set by the BookStore sample. We set BW.HOSTNAME to 0.0.0.0 so that all network adapters are bound, and use the standard port 8080. DB_URL, DB_USERNAME, and DB_PASSWORD should reflect the values set up for your PostgreSQL database.
We will disregard all the other sections in this tutorial, but a wealth of options is available on AWS ECS.
Let us now confirm this container addition at the very bottom:
Back in the Task Definition form, we will ignore the Volumes section (additional storage, potentially shared across containers) and go straight to the Task Definition creation:
This leads to the Task Definition summary form. Note that the Task Definition name has been suffixed with :1 to denote its first revision. From there, we can proceed with the next step of our tutorial: creating a Service based on this Task Definition.
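If you ever want to script this step, the whole Task Definition can be sketched as JSON and registered with the AWS CLI. Every value below (image URI, database settings) is a placeholder to adapt to your own environment:

```shell
# Write a Task Definition mirroring the console settings, then register it.
# hostPort 0 requests dynamic port binding, as in the console form.
cat > community-task.json <<'EOF'
{
  "family": "Community-Task",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "Bookstore-Community-Container",
      "image": "123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/bwce-bookstore:latest",
      "memory": 1024,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "BW.HOSTNAME", "value": "0.0.0.0" },
        { "name": "DB_URL", "value": "jdbc:postgresql://<rds-endpoint>:5432/bookstore" },
        { "name": "DB_USERNAME", "value": "<db-user>" },
        { "name": "DB_PASSWORD", "value": "<db-password>" }
      ]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://community-task.json
```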
From the Task definition summary, select Create Service from the drop-down next to the Task definition name:
The Service creation form enables you to pick the Cluster to run the Service on (so that you can create different Services running on different Clusters with different settings, reusing a Task Definition). Choose Community-Cluster and let us name this Service Bookstore-Community-Service. We will start with three Tasks deployed (i.e. one per EC2 instance) and set the minimum healthy percentage to 66% (i.e. roughly two), so that at least two Tasks remain available while more are being deployed:
The next section of the form is about the placement of Tasks (in our case it is a single Container) on the instances of the Cluster. The default (balancing across availability zones) is fine:
The last section is about load balancing and auto-scaling. We will discuss auto-scaling in an upcoming tutorial, so let us handle the load balancing now:
In order to do that, you will need a pre-existing EC2 Elastic Load Balancer, so let us create one straight away by opening a new browser tab, moving to your EC2 Dashboard, and selecting the Load Balancers section on the left-hand side.
Then click on the Create Load Balancer button:
We will need to use an Application Load Balancer:
Then set it up as follows, naming it Bookstore-Community-ELB and having a listener on port 8080. Choose the right VPC (as per your notes), and select all availability zones:
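The same Application Load Balancer can be sketched from the AWS CLI; the Subnet and Security Group identifiers below are placeholders from your own VPC notes, and the 8080 listener is attached afterwards with create-listener once the Target Group exists:

```shell
# Create the Application Load Balancer across the three Subnets
# (subnet-* and sg-* identifiers are placeholders)
aws elbv2 create-load-balancer \
    --name Bookstore-Community-ELB \
    --type application \
    --subnets subnet-aaaaaaaa subnet-bbbbbbbb subnet-cccccccc \
    --security-groups sg-xxxxxxxx
```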
Ignore the security warning in the second step and proceed to configure the Security Groups as follows - be mindful to pick the non-default Security Group, as your Load Balancer must be in the same Security Group as your Cluster instances to work properly:
Note that you will need to change the inbound rules of your Security Group to allow internal traffic on TCP ports 32768-61000. These ports will be dynamically mapped by ECS to our containers' port 8080, and the Load Balancer will need to place traffic on these ports within the Security Group. Consider doing this straight away in a new browser tab:
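This inbound rule can also be added from the AWS CLI; sg-xxxxxxxx is a placeholder for the Security Group shared by your Cluster instances and the Load Balancer:

```shell
# Allow the dynamic port range within the Security Group itself, so the
# Load Balancer can reach the dynamically mapped container ports
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 32768-61000 \
    --source-group sg-xxxxxxxx
```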
Then move to the fourth step of your Load Balancer settings and configure the routing Target Group, named Bookstore-Community-Target, as follows, using port 8080 and the /books HTTP URL to perform health checks on. We use /books for lack of a better endpoint, even though this specific operation may return a large payload in the BookStore application:
Before moving to the Register Targets step, ensure you adjust the Advanced health check settings as well. BWCE Docker containers may need some time to start, and the standard settings are likely to fail (especially if several containers are started at the same time on an instance, or if you choose a more modest instance type). Feel free to adjust these values if you experience unhealthy and draining instances right after Service creation, but these settings did work for me:
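For reference, the Target Group and its relaxed health checks can also be created from the AWS CLI. The threshold values below are illustrative assumptions (use the Advanced health check settings you chose in the console), and vpc-xxxxxxxx is a placeholder for your VPC identifier:

```shell
# Target group mirroring the console settings, with a generous interval and
# timeout so slow-starting BWCE containers are not marked unhealthy too early
aws elbv2 create-target-group \
    --name Bookstore-Community-Target \
    --protocol HTTP \
    --port 8080 \
    --vpc-id vpc-xxxxxxxx \
    --health-check-path /books \
    --health-check-interval-seconds 60 \
    --health-check-timeout-seconds 30 \
    --healthy-threshold-count 2 \
    --unhealthy-threshold-count 5
```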
In the fifth step, Register Targets, you do not have to register members, as the Cluster will eventually perform the registration for you (including for instances spun up as a result of auto-scaling):
Finally, review the settings and proceed with the creation of the Load Balancer:
Once the Load Balancer creation is confirmed, close the page:
Now let us move back to the browser tab pertaining to the Load Balancer settings for the Service. Make sure you select the Application Load Balancer type, keep the default IAM role, pick your recently created Bookstore-Community-ELB, and add the default container (Bookstore-Community-Container) to the ELB:
In the section that then appears, select the only available Listener port as well as the already created Target Group (Bookstore-Community-Target). This will grey out further options:
Finally, save your settings:
As we will explore auto-scaling in an upcoming tutorial, let us confirm the creation of the Service as it is:
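For completeness, the whole Service creation can also be performed from the AWS CLI once the Load Balancer and Target Group exist; the Target Group ARN and service role name below are placeholders:

```shell
# Create the Service with three Tasks, a 66% minimum healthy percentage, and
# the Target Group wired to the container's port 8080
aws ecs create-service \
    --cluster Community-Cluster \
    --service-name Bookstore-Community-Service \
    --task-definition Community-Task:1 \
    --desired-count 3 \
    --deployment-configuration "maximumPercent=200,minimumHealthyPercent=66" \
    --role ecsServiceRole \
    --load-balancers "targetGroupArn=<target-group-arn>,containerName=Bookstore-Community-Container,containerPort=8080"
```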
A page updating you on the creation is then displayed. A few tens of seconds may be needed for the operation to complete:
The Service summary page then shows that three Tasks (and, in this case, three Containers, as we defined the Task as having a single bwce-bookstore container) are running behind Elastic Load Balancing.
Et voilà! Your Service is now ready to respond to requests sent to the endpoint of your Elastic Load Balancer.
You are now able to invoke REST operations against the Service and have the load spread across containers and instances. In an upcoming tutorial, we will explore how to add auto-scaling to the mix so as to adapt our resource usage to the level of traffic on the Service.
curl -X GET -H "Content-type: application/json" -H "Accept: application/json" "http://Bookstore-Community-ELB-143712164.ap-southeast-2.elb.amazonaws.com:8080/books"
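To observe the Load Balancer at work, you can also fire a few requests in a row; replace the hostname below (a placeholder) with your own ELB DNS name:

```shell
# Issue five requests against the ELB endpoint; the Load Balancer spreads
# them across the three running containers
for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w "request $i: HTTP %{http_code}\n" \
    -H "Accept: application/json" \
    "http://<your-elb-dns-name>:8080/books"
done
```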
Relax and enjoy. Or continue on to Part Two of this tutorial.