IBM MQ in ECS on AWS Fargate

AWS Fargate is a serverless compute engine specifically designed for running containers. With AWS Fargate, the deployment and management of the containers are controlled by either Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

Initially, users often get confused by the relationship between Fargate and the container management solution (EKS or ECS). Therefore, I want to labor the point of the difference: when deploying a container within the context of an enterprise, any solution will include two components:

  1. Compute resources: the machines where the containers run. This is Fargate.
  2. Container management: software that assures that the containers are deployed onto the compute based on the configuration. This is EKS or ECS.

Therefore, Fargate is often referred to as Fargate ECS or Fargate EKS, which are two different solutions for running and managing containers. One of the first decisions in any Fargate project is deciding if the target environment will be Fargate ECS or Fargate EKS. This paper will focus exclusively on Fargate ECS, and Fargate EKS will be left for another discussion.

The next conceptual leap is regarding the term “serverless”. There are various definitions of “serverless”, ranging from Functions as a Service to Containers as a Service with the ability to scale to zero. As many people highlight, there is always a server in serverless; it is instead a question of the visibility of the server from a development, deployment, maintenance, and, often most importantly, cost/billing standpoint. If we measure Fargate against this definition, it becomes clear that it qualifies for the term “serverless”.

This leads us to the strength and the weakness of Fargate ECS, or to be more precise, of ECS itself. The focus is on the first-time experience and a simplified set of concepts and artifacts to understand. This can be hugely powerful; however, experienced Kubernetes users may miss certain advanced security, scalability, and deployment features.

Usage Scenarios for IBM MQ and Fargate ECS

When thinking about Fargate and its serverless aspects, a normally stateful application such as IBM MQ may seem like an odd fit. There are certainly considerations, but there are many use cases where IBM MQ and Fargate ECS are a natural fit.

Within this paper, we will demonstrate how to set up and configure IBM MQ on Fargate ECS with EFS providing the storage layer. Prior to jumping into the steps, let's understand the key building blocks of a Fargate ECS deployment:

  1. Cluster: the logical grouping of compute resources onto which tasks and services are deployed.
  2. Task Definition: the template describing the container(s) to run, including the image, CPU and memory, ports, environment variables, and storage volumes.
  3. Service: the definition that keeps the desired number of tasks running on the cluster and ties them to the network configuration.

There are additional aspects to consider around security permissions and network configuration; these will be covered in the step-by-step instructions, but the three concepts above are the most important to understand.

Deploy IBM MQ Advanced Developer Edition on Fargate ECS

In this section, we will guide you through the process of deploying the freely available IBM MQ Advanced Developer Edition on Fargate ECS.

Pre-requisites

Prior to creating any Fargate ECS resources, there are several prerequisites. Depending on your AWS setup these steps may differ, but they have been verified in a new AWS account.

Creating EFS storage for IBM MQ

As mentioned earlier, there are various approaches to deploying IBM MQ on Fargate. These instructions focus on the scenario of deploying IBM MQ with EFS providing persistent storage. This was deliberately chosen as it is likely to be a common scenario and represents the more comprehensive setup. The EFS storage layer therefore needs to be created first. Like all the instructions, this will be completed using the standard AWS command line to minimize the likelihood of changes (which are more likely to occur in the AWS Web UI).

 
aws efs create-file-system \
      --performance-mode generalPurpose \
      --throughput-mode bursting \
      --encrypted \
      --tags Key=Name,Value=ibm-mq-fargate-storage 


The command returns a large JSON structure; review this output to identify the FileSystemId. This value will be needed at a later stage, so store it in a safe location.
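If you prefer to capture the identifier directly into a shell variable rather than copying it from the JSON, the following is a minimal bash sketch; it assumes the Name tag of ibm-mq-fargate-storage used above and a shell with the AWS CLI configured:

# Look up the file system by its Name tag and store the ID for later steps
FileSystemId=$(aws efs describe-file-systems \
      --query "FileSystems[?Name=='ibm-mq-fargate-storage'].FileSystemId" \
      --output text)
echo $FileSystemId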

Creation of EFS Mount Targets

The EFS storage has been created, but currently there are no mount targets, which makes the EFS storage not particularly useful as nothing is able to connect to it. Mount targets are created for each Availability Zone where containers may run. In the following case, we only have containers within a single Availability Zone, but the process is logically the same if you had multiple; you simply specify a subnet in each of the Availability Zones (a sketch of this follows the create-mount-target command below).

Prior to creating the mount target, we need to create a new security group that allows NFS communication into the service. This involves two commands:

 
aws ec2 create-security-group --description "IBM MQ EFS Security Group" --group-name ibmmqfargateEFSSG --vpc-id $vpcId


Where $vpcId is your AWS VPC identifier. If you are unsure of this value, you can retrieve it by running:

 
aws ec2 describe-vpcs
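
As a minimal bash sketch, assuming the first (or only) VPC returned is the one you want, the identifier can be captured directly into the $vpcId variable:

# Capture the VPC identifier into a shell variable (assumes a single or default VPC)
vpcId=$(aws ec2 describe-vpcs --query 'Vpcs[0].VpcId' --output text)
echo $vpcId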


The create-security-group command returns a JSON response that includes a GroupId value. Retain this for future use. An authorization rule must then be added to the security group to allow inbound NFS communication on port 2049.

 
aws ec2 authorize-security-group-ingress --group-id $GroupId --protocol tcp --port 2049 --cidr=0.0.0.0/0


Where $GroupId corresponds to the GroupId value from the previous command. The final piece of information required is the subnet to be associated with the mount target. You may already know this ID, but if you do not, you can run the following command to list the subnets:

 
aws ec2 describe-subnets
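
To narrow the output to the VPC used above, a minimal sketch (assuming the $vpcId variable is set as before) is:

# List only the subnets in the target VPC, together with their Availability Zones
aws ec2 describe-subnets \
      --filters Name=vpc-id,Values=$vpcId \
      --query 'Subnets[*].[SubnetId,AvailabilityZone]' \
      --output table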


With these pieces of information, you are ready to create the mount target by running the following command:

 
aws efs create-mount-target \
      --file-system-id $FileSystemId \
      --subnet-id $subnetid \
      --security-groups $GroupId
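
If your containers may run in multiple Availability Zones, the same command is simply repeated with a subnet in each zone. A minimal bash sketch, using hypothetical variables $subnetIdA and $subnetIdB for subnets in different Availability Zones:

# Create one mount target per Availability Zone by looping over the subnets
for subnet in $subnetIdA $subnetIdB; do
    aws efs create-mount-target \
          --file-system-id $FileSystemId \
          --subnet-id $subnet \
          --security-groups $GroupId
done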

 

Creation of EFS Access Point

An access point represents an application-specific entry point into an EFS file system that makes it easier to manage application access. In certain situations, such as running the container as the root user, it is not required; however, for completeness it can be created using the command below:

 
aws efs create-access-point \
      --file-system-id $FileSystemId \
      --posix-user "Uid=1001,Gid=0" \
      --root-directory "Path=/mqm,CreationInfo={OwnerUid=1001,OwnerGid=0,Permissions=777}" \
      --tags Key=Name,Value=MQAccessPoint


The EFS ID ($FileSystemId) in the above command needs to be customized for your environment, using the identifier returned when the file system was created earlier. A new directory called mqm is created on the EFS storage with the specified ownership and permissions. Any connection via the access point will also use Uid=1001 and Gid=0, which is the same user that the MQ container uses by default.
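
The create-access-point response includes an AccessPointId, which will be needed in the task definition later. If you did not capture it at creation time, a minimal sketch to look it up (assuming $FileSystemId is still set) is:

# Retrieve the access point ID for the file system created above
AccessPointId=$(aws efs describe-access-points \
      --file-system-id $FileSystemId \
      --query 'AccessPoints[0].AccessPointId' \
      --output text)
echo $AccessPointId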

Assumed to have been created already

These instructions assume you have completed the following:

  1. An ECS cluster has been created to deploy onto (referred to as $clusterName in the later commands).
  2. An IAM task execution role exists that ECS tasks can assume (referred to as $ExecutionRoleArn in the task definition).
  3. A CloudWatch log group called ibmmqfargate exists in your target region for the container logs.
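
If any of these do not exist yet, they can be created from the command line. The following is a minimal sketch; the names ibmmqcluster and ibmMqFargateExecutionRole are hypothetical, and the eu-west-2 region matches the log configuration used later, so adjust all three to your environment:

# Create an ECS cluster for the Fargate tasks
aws ecs create-cluster --cluster-name ibmmqcluster

# Create the CloudWatch log group referenced by the task definition
aws logs create-log-group --log-group-name ibmmqfargate --region eu-west-2

# Create a task execution role that ECS tasks are allowed to assume
aws iam create-role \
      --role-name ibmMqFargateExecutionRole \
      --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the AWS managed policy that grants image pull and CloudWatch logging permissions
aws iam attach-role-policy \
      --role-name ibmMqFargateExecutionRole \
      --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy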

Steps for Deployment

Now that all the pre-requisites have been completed, we are ready to create the ECS Task Definition and deploy the ECS Service.

Register the MQ Fargate ECS Task Definition

Below is a complete task definition with three values that require mandatory customization for your AWS environment:

  1. $ExecutionRoleArn: the ARN of the IAM task execution role.
  2. $FileSystemId: the identifier of the EFS file system created earlier.
  3. $AccessPointId: the identifier of the EFS access point created earlier.

There are many other customizations that you may want to make; for instance, you may want to include additional environment variables (to change the behavior of the deployed Queue Manager), adjust the memory and CPU limits, change the ulimits associated with the compute resources, or expose different ports. All these options and many more are available and described in the AWS RegisterTaskDefinition documentation.

 
{
    "executionRoleArn": "$ExecutionRoleArn",
    "containerDefinitions": [
        {
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "ibmmqfargate",
                    "awslogs-region": "eu-west-2",
                    "awslogs-stream-prefix": "ibmmq/fargate"
                }
            },
            "portMappings": [
                {
                    "hostPort": 9443,
                    "protocol": "tcp",
                    "containerPort": 9443
                },
                {
                    "hostPort": 1414,
                    "protocol": "tcp",
                    "containerPort": 1414
                }
            ],
            "environment": [
                {
                    "name": "LICENSE",
                    "value": "accept"
                },
                {
                    "name": "MQ_APP_PASSWORD",
                    "value": "password"
                },
                {
                    "name": "MQ_QMGR_NAME",
                    "value": "QM1"
                }
            ],
            "ulimits": [
                {
                    "name": "nofile",
                    "softLimit": 10240,
                    "hardLimit": 10240
                },
                {
                    "name": "nproc",
                    "softLimit": 4096,
                    "hardLimit": 4096
                }
            ],
            "user": "1001:0",
            "mountPoints": [
                {
                    "sourceVolume": "queuemanager",
                    "containerPath": "/mnt/mqm",
                    "readOnly": false
                }
            ],
            "image": "ibmcom/mq:9.2.3.0-r1-amd64",
            "healthCheck": {
                "command": [
                    "CMD-SHELL",
                    "chkmqhealthy || exit 1"
                ],
                "startPeriod": 300
            },
            "essential": true,
            "name": "ibmmq922"
        }
    ],
    "memory": "1024",
    "family": "ibmmq",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "512",
    "volumes": [
        {
            "efsVolumeConfiguration": {
                "transitEncryption": "ENABLED",
                 "authorizationConfig": {
                    "iam": "DISABLED",
                    "accessPointId": "$AccessPointId"
                },
                "fileSystemId": "$FileSystemId"
            },
            "name": "queuemanager"
        }
    ],
    "tags": [
       {
           "key": "productId",
           "value": " f3beb980b6ca487ea6a3db33262afa3c"
       }
   ]
}


Once you have completed the customizations and are ready to register the task definition, run the following command:

 
aws ecs register-task-definition --cli-input-json file://fargate-task-definition-ibm-mq.json


Where fargate-task-definition-ibm-mq.json corresponds to the location of the JSON file describing the task definition.
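
The register-task-definition response includes the taskDefinitionArn, which is required when creating the service ($taskarn below). As a minimal bash sketch, the ARN can be captured at registration time:

# Register the task definition and capture its ARN for the create-service step
taskarn=$(aws ecs register-task-definition \
      --cli-input-json file://fargate-task-definition-ibm-mq.json \
      --query 'taskDefinition.taskDefinitionArn' \
      --output text)
echo $taskarn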

Create the MQ Fargate ECS Service on the Cluster

To allow communication to the MQ deployment, a new security group needs to be created. This will allow communication on the MQ data port (1414) and access to the Web Console (9443). If you do not require access to both, customize as required. This involves three commands:

 
aws ec2 create-security-group --description "IBM MQ Fargate ECS Security Group" --group-name ibmmqfargateSG --vpc-id $vpcId


Where $vpcId is your AWS VPC identifier. If you are unsure of this value, you can retrieve it by running:

 
aws ec2 describe-vpcs


The create-security-group command returns a JSON response that includes a GroupId value. Retain this for future use. Two authorization rules must be added to allow inbound communication on ports 1414 and 9443.

 
aws ec2 authorize-security-group-ingress --group-id $GroupId --protocol tcp --port 9443 --cidr=0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-id $GroupId  --protocol tcp --port 1414 --cidr=0.0.0.0/0


Where $GroupId corresponds to the GroupId value from the previous command.

The Fargate ECS service will be exposed on the subnets within your VPC. You need to identify the subnet IDs and specify these when creating the service. If you are unsure what these values are, you can run the following command to discover them:

 
aws ec2 describe-subnets


You now have all the information required to run the create-service command:

 
aws ecs create-service \
       --cluster $clusterName \
       --service-name ibmmqdev \
       --task-definition $taskarn \
       --desired-count 1 \
       --launch-type FARGATE \
       --platform-version LATEST \
       --deployment-configuration maximumPercent=100,minimumHealthyPercent=0 \
       --network-configuration "awsvpcConfiguration={subnets=[$subnetids],securityGroups=[$GroupId],assignPublicIp=ENABLED}" \
       --tags key=productId,value=208423bb063c43288328b1d788745b0c \
       --propagate-tags SERVICE
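
Before looking up the public IP address, you may want to confirm the service has started its task successfully. A minimal sketch, assuming the same $clusterName used above:

# Check the deployment status of the new service (runningCount should reach 1)
aws ecs describe-services \
      --cluster $clusterName \
      --services ibmmqdev \
      --query 'services[0].{status:status,desired:desiredCount,running:runningCount}'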


Once the task is running, it will be assigned a public IP address. Within the AWS web UI this can be seen on the running task details page, but from the command line several commands are required:

Retrieve the running task ID:

 
aws ecs list-tasks --service-name ibmmqdev --cluster $clusterName

 

Retrieve the networkInterfaceId from the running task details:

 
aws ecs describe-tasks --tasks $runningTaskARN --cluster $clusterName

 

Retrieve the public IP address from the network interface:

 
aws ec2 describe-network-interfaces --network-interface-ids $networkInterfaceId | grep PublicIp
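
Alternatively, the three lookups can be chained together with the --query option. The following is a minimal bash sketch that assumes a single running task in the service:

# Get the ARN of the running task in the service
runningTaskARN=$(aws ecs list-tasks --service-name ibmmqdev --cluster $clusterName \
      --query 'taskArns[0]' --output text)

# Extract the elastic network interface attached to the task
networkInterfaceId=$(aws ecs describe-tasks --tasks $runningTaskARN --cluster $clusterName \
      --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text)

# Look up the public IP address associated with that network interface
aws ec2 describe-network-interfaces --network-interface-ids $networkInterfaceId \
      --query 'NetworkInterfaces[0].Association.PublicIp' --output text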


With this information, you can use a standard web browser to access the MQ Web UI on port 9443. Assuming you have not changed the task definition, the username is “admin” and the password is “passw0rd”.

MQ Web UI

Summary

AWS Fargate ECS is an easy-to-use container orchestration solution with the added advantage that resources can be quickly and easily scaled to zero. There are many scenarios where IBM MQ is a natural fit, and we have demonstrated how you can get started in minutes.
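
As an illustration of the scale-to-zero point, a minimal sketch that pauses the Queue Manager when it is not needed and brings it back later (assuming the same $clusterName as above; the Queue Manager data is retained on EFS):

# Scale the MQ service down to zero tasks
aws ecs update-service --cluster $clusterName --service ibmmqdev --desired-count 0

# Scale it back up when the Queue Manager is needed again
aws ecs update-service --cluster $clusterName --service ibmmqdev --desired-count 1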
