TIBCO BusinessWorks™ Container Edition Tutorial: Deploy highly available and scalable microservices on AWS EC2 Container Service (part 2)


    Manoj Chaurasia



    About this tutorial

    Content and duration

    Amazon EC2 Container Service ("ECS" hereafter) is a scalable container management service that allows you to manage Docker containers on a cluster of Amazon EC2 instances.

    This sequel, which should take you about 15 minutes to complete, walks you through the additional steps needed to define Auto Scaling policies at both the container and instance level. Auto Scaling on ECS is all about reducing the resources used, and the cost incurred, by your ECS-based services when there is little volume, and increasing them again when volume resumes.

    What you will use

    Before you begin: prerequisites

    You should have fully completed part 1 of this tutorial before moving ahead, as we assume all the associated resources are available for us to use. We also recommend going through the basics of Amazon Web Services' CloudWatch to get a better grasp of the metrics that can be used for making scaling decisions.

    Step 1: Set up Auto Scaling at the container level

    To set up Auto Scaling at the container level, you will first need to log in to Amazon Web Services. We will start by creating two AWS CloudWatch alarms to trigger our Auto Scaling. To do so, navigate to the CloudWatch part of your AWS Dashboard:

    [Screenshot: cloudwatch_22.png]

    Then, move to the Alarms section:

    [Screenshot: cloudwatch_23.png]

    We will now create CloudWatch alarms for Low CPU usage and High CPU usage.

    Creating a CloudWatch LowCPU Alarm

    Click on the Create Alarm button.

    [Screenshot: cloudwatch_24.png]

    You will then be presented with a list of metric types your alarm can be based on. For this tutorial, we will look at CPU usage at the Cluster/Service level, but note that latency at the Elastic Load Balancer level could also be a sensible option:

    [Screenshot: cloudwatch_25.png]

    Then pick the CPUUtilization Metric for Cluster Community-Cluster and Service Bookstore-Community-Service, as follows:

    [Screenshot: cloudwatch_26.png]

    and click Next at the bottom of the dialog box:

    [Screenshot: cloudwatch_27.png]

    Then, define the Alarm. Use LowCPU as its name, and Whenever CPUUtilization <= 33 as the criterion. You can keep the default of 1 consecutive period of 5 minutes, or, if you would like scaling to kick in faster, try 2 consecutive periods of 1 minute. Both settings worked for us on M3-Medium instances in the Sydney region but, as already highlighted, starting BWCE Docker images is somewhat CPU-intensive on your instance for some time (30-45 seconds in our case), so it is a good idea to fine-tune these numbers as required:

    [Screenshot: cloudwatch_28.png]

    Finally, confirm the creation of the Alarm by clicking on the Create Alarm button at the bottom-right of the dialog box.

    Creating a CloudWatch HighCPU Alarm

    Repeat the steps above, this time using HighCPU as the Alarm name and CPUUtilization >= 66 as the criterion.
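    The two alarms created through the console can also be sketched as API parameters. The snippet below is a minimal illustration using the boto3 CloudWatch `put_metric_alarm` parameters; the names, thresholds, and dimensions mirror the console steps above, and the actual API calls (commented out) would require AWS credentials and the boto3 package.

```python
# Sketch of the LowCPU / HighCPU alarms as put_metric_alarm parameters.
def cpu_alarm(name, operator, threshold):
    """Build put_metric_alarm parameters for a service-level CPU alarm."""
    return {
        "AlarmName": name,
        "Namespace": "AWS/ECS",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "Dimensions": [
            {"Name": "ClusterName", "Value": "Community-Cluster"},
            {"Name": "ServiceName", "Value": "Bookstore-Community-Service"},
        ],
        "ComparisonOperator": operator,
        "Threshold": threshold,
        "Period": 300,           # 5-minute period, as in the console default
        "EvaluationPeriods": 1,  # 1 consecutive period
    }

low_cpu = cpu_alarm("LowCPU", "LessThanOrEqualToThreshold", 33.0)
high_cpu = cpu_alarm("HighCPU", "GreaterThanOrEqualToThreshold", 66.0)

# Requires AWS credentials:
# import boto3
# cloudwatch = boto3.client("cloudwatch")
# cloudwatch.put_metric_alarm(**low_cpu)
# cloudwatch.put_metric_alarm(**high_cpu)
```

    For the faster variant discussed above, you would set `"Period": 60` and `"EvaluationPeriods": 2` instead.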

    Setting up the ServiceLowCPU Auto Scaling Policy

    Now that you have defined the base Alarms, let us navigate back to the Bookstore-Community-Service definition. Use the following step-by-step guidance if needed.

    First navigate to the Amazon ECS part of the AWS Dashboard:

    [Screenshot: image31_1.png]

    Then, if you do not land there by default, focus on the Clusters resources:

    [Screenshot: snapshot_0a.png]

    In the previous tutorial, we created the Community-Cluster resource. You should navigate to its definition page using the associated hyperlink:

    [Screenshot: snapshot_3.png]

    Finally, within the Community-Cluster, we created a Service named Bookstore-Community-Service. This is precisely the base resource we will use to add Auto Scaling at the container level:

    [Screenshot: snapshot_5.png]


    Once you are on the Service's definition page, let us update this definition to add Auto Scaling:

    [Screenshot: snapshot_1.png]

    This is done by clicking on the Configure Service Auto Scaling button:

    [Screenshot: snapshot_6.png]

    We will configure Auto Scaling to adjust our service's desired count: a minimum of 2 tasks, a maximum of 6, and a desired count that remains at 3. On top of that, we will add Policies to automatically take action based on metrics:

    [Screenshot: snapshot_6.png]

    We now enter the details of the ServiceLowCPU Policy. Use ServiceLowCPU as the Policy name, and select the existing LowCPU Alarm as a trigger. The scaling action will be to remove one (1) task when the Alarm is raised. We define a Cooldown period of 120 seconds before any further scaling (up or down) action is considered, but you may want to adjust this as required.

    [Screenshot: snapshot_7.png]

    Save the Policy.

    Setting up the ServiceHighCPU Auto Scaling Policy

    Repeat the last few steps to add a second Policy named ServiceHighCPU, defined as follows:

    [Screenshot: snapshot_8.png]

    Save the second Policy.
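    Under the hood, the console configures these two Policies through the Application Auto Scaling service. As a rough sketch, the setup above corresponds to the following boto3 parameters; the resource ID mirrors the cluster and service names used in this tutorial, and the live calls (commented out) require AWS credentials.

```python
# Sketch of the service-level Auto Scaling setup as Application Auto Scaling
# parameters: a scalable target (min 2 / max 6 tasks) plus two step policies.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/Community-Cluster/Bookstore-Community-Service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 2,
    "MaxCapacity": 6,
}

def step_policy(name, adjustment):
    """Step-scaling policy adding/removing `adjustment` tasks, 120s cooldown."""
    bound = "MetricIntervalUpperBound" if adjustment < 0 else "MetricIntervalLowerBound"
    return {
        "PolicyName": name,
        "PolicyType": "StepScaling",
        "ServiceNamespace": "ecs",
        "ResourceId": scalable_target["ResourceId"],
        "ScalableDimension": scalable_target["ScalableDimension"],
        "StepScalingPolicyConfiguration": {
            "AdjustmentType": "ChangeInCapacity",
            "Cooldown": 120,
            "StepAdjustments": [{bound: 0.0, "ScalingAdjustment": adjustment}],
        },
    }

service_low = step_policy("ServiceLowCPU", -1)   # triggered by the LowCPU alarm
service_high = step_policy("ServiceHighCPU", 1)  # triggered by the HighCPU alarm

# Requires AWS credentials:
# import boto3
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**scalable_target)
# aas.put_scaling_policy(**service_low)
# aas.put_scaling_policy(**service_high)
```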

    Confirming the policies and updating Service

    Once the two Policies have been added, you should see a summary similar to the screenshot below. Confirm your settings by clicking on the Update Service button at the bottom:

    [Screenshot: snapshot_9.png]

    The Auto Scaling is now active at the service/container level!

    Tasks will be added or removed depending on CPU utilization. Note that using CPU only makes sense if you add instance-based Auto Scaling as well, especially because additional tasks are likely to increase the pressure on CPU even further. Another option would be to use latency at the Load Balancer level: in that case, adding tasks is likely to have a beneficial effect on that metric while still putting pressure on CPU utilization, which can then trigger the instance-level Auto Scaling.

    At any rate, all settings related to Auto Scaling need to be thoroughly tested before adopting them in production as they will be sensitive to your microservice performance, cluster settings, etc.
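    To make the combined effect of the thresholds and bounds concrete, here is a minimal, self-contained simulation of the decision logic configured above (thresholds 33/66, task bounds 2-6, one task per action). This is an illustration for reasoning about your settings, not how ECS evaluates alarms internally, and it ignores evaluation periods and cooldowns.

```python
def scaling_decision(cpu_utilization, current_tasks, min_tasks=2, max_tasks=6):
    """Return the new desired task count for one evaluation period."""
    if cpu_utilization >= 66 and current_tasks < max_tasks:
        return current_tasks + 1   # HighCPU alarm -> ServiceHighCPU adds one task
    if cpu_utilization <= 33 and current_tasks > min_tasks:
        return current_tasks - 1   # LowCPU alarm -> ServiceLowCPU removes one task
    return current_tasks           # no alarm raised, or a bound was reached

# A quiet service drifts down toward the floor of 2 tasks; a loaded one
# climbs toward the ceiling of 6, one task per period:
# scaling_decision(10, 3) -> 2,  scaling_decision(80, 5) -> 6
```

    Running a few hypothetical CPU readings through such a function is a cheap way to sanity-check that your thresholds do not oscillate (e.g. that removing a task will not immediately push CPU back above the high threshold).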

    Step 2: Set up Auto Scaling at the EC2 Instance level

    The creation of the ECS Cluster in part 1 of this tutorial automatically produced two associated objects: the Launch Configuration (a blueprint of the EC2 Instances that are part of the Cluster) and the Auto Scaling Group. Let us have a look at the latter by navigating to the Auto Scaling Groups section in the EC2 part of the AWS Dashboard:

    [Screenshot: instance_autoscaling_10.png]

    If you have played with Auto Scaling Groups before, you may want to use the filter with Community-Cluster as the keyword to find the appropriate one. The screenshot below gives you the details of the default settings resulting from part 1 of the tutorial (i.e. a desired count of 3 instances, max 3, min 0, with a fixed number of instances unless changed manually):

    [Screenshot: instance_autoscaling_11.png]

    Setting a minimum number of instances

    In the Auto Scaling Group main panel, you should set the minimum number of instances to 2 before experimenting with Auto Scaling, as otherwise the scale-down rules may bring the cluster down to 0 instances! You could also, if you wanted, change the desired and maximum number of instances:

    [Screenshot: instance_autoscaling_21.png]

    Now that this is done, we can set up the scaling policies.
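    The same group adjustment can be sketched against the EC2 Auto Scaling API. Note the group name below is an assumption: in practice you would use the generated name containing Community-Cluster that you located with the filter above. The live call (commented out) requires AWS credentials.

```python
# Sketch: raising the Auto Scaling Group floor to 2 instances via
# update_auto_scaling_group parameters.  The group name is a placeholder.
asg_update = {
    "AutoScalingGroupName": "EC2ContainerService-Community-Cluster-EcsInstanceAsg",
    "MinSize": 2,         # floor, so scale-down never reaches 0 instances
    "MaxSize": 3,
    "DesiredCapacity": 3,
}

# Requires AWS credentials:
# import boto3
# boto3.client("autoscaling").update_auto_scaling_group(**asg_update)
```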

    ClusterLowCPU Policy

    Navigate to the Scaling Policies tab and create a new Policy:

    [Screenshot: instance_autoscaling_12.png]

    [Screenshot: instance_autoscaling_13.png]

    The first Policy to be created will be named ClusterLowCPU, removing 1 instance when the underlying Alarm (to be created next) is triggered:

    [Screenshot: instance_autoscaling_13.png]

    The Alarm can be defined as illustrated below, based on an average CPU utilization across the cluster of less than 20% for at least 5 minutes:

    [Screenshot: instance_autoscaling_15.png]

    Confirming the creation of the Alarm will display the following alert (with a link to the associated CloudWatch artefact), which you can dismiss:

    [Screenshot: instance_autoscaling_16.png]

    Finally, confirm the creation of the Policy by hitting the Create button in the top-right corner:

    [Screenshot: instance_autoscaling_17.png]

    ClusterHighCPU Policy

    Now repeat the steps above to add the ClusterHighCPU Policy, as outlined below:

    [Screenshot: instance_autoscaling_20.png]
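    The two instance-level Policies can likewise be sketched as EC2 Auto Scaling API parameters (simple scaling, one instance per action). As before, the group name is a placeholder for the generated name you found with the filter, the cooldown value is our own assumption rather than a console default, and the live calls (commented out) require AWS credentials.

```python
# Sketch of the ClusterLowCPU / ClusterHighCPU policies as put_scaling_policy
# parameters for the EC2 Auto Scaling API.  Group name is a placeholder.
def cluster_policy(name, adjustment):
    """Simple-scaling policy adding/removing `adjustment` instances."""
    return {
        "AutoScalingGroupName": "EC2ContainerService-Community-Cluster-EcsInstanceAsg",
        "PolicyName": name,
        "AdjustmentType": "ChangeInCapacity",
        "ScalingAdjustment": adjustment,
        "Cooldown": 300,  # assumption: a conservative 5-minute cooldown
    }

cluster_low = cluster_policy("ClusterLowCPU", -1)   # avg cluster CPU < 20%
cluster_high = cluster_policy("ClusterHighCPU", 1)

# Requires AWS credentials:
# import boto3
# autoscaling = boto3.client("autoscaling")
# autoscaling.put_scaling_policy(**cluster_low)
# autoscaling.put_scaling_policy(**cluster_high)
```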

    Congratulations, you are done!

    Conclusion

    Congratulations on having set up Auto Scaling at both the Cluster Instance and Task/Container level!

    If you leave your service and cluster alone, you should see them scale down within a short amount of time, as follows:

    [Screenshot: autoscaling_conclusion.png]

    Conversely, if you apply heavy load on your service, you will see it scale up both in terms of Instances and Tasks:

    [Screenshot: autoscaling_conclusion_2.png]

    Relax and enjoy!


    Contributors: Emmanuel Schweitzer
