Setting up Fargate for ECS Exec

Examples for both CloudFormation and Terraform.

Michael Ludvig
Cognizant Servian


In the previous post I introduced ecs-session, a convenient new tool for logging in to your Fargate ECS containers through the ECS Exec mechanism. Today we’re going to explore the ECS configuration needed for ecs-session to work.

ECS Exec requirements

The containers that you want to access via ECS Exec / ecs-session must meet the following requirements:

  1. The task IAM role must permit SSM sessions. That is typically done by attaching the AmazonSSMManagedInstanceCore managed policy.
  2. The task or service must have the Execute Command setting enabled. Typically it’s a setting in your CloudFormation or Terraform template for the ECS Service.
  3. The tasks must have outbound access to the SSM service over port 443. This typically means opening egress on port 443 in the Security Group. The outbound access can be direct via a Public IP and IGW, or from a private subnet through a NAT gateway. It can also be achieved with VPC endpoints for the “SSM” and “SSM Messages” services.
  4. The tasks must run on ECS Platform Version 1.4 or later. The launch type doesn’t matter; both Fargate and EC2 work. (A quick CLI check for a running task follows below.)
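
Once a task is running, requirements 2 and 4 can be quickly verified with the AWS CLI. A minimal check, with the cluster name and task ID as placeholders:

~ $ aws ecs describe-tasks \
      --cluster my-cluster --tasks <task-id> \
      --query 'tasks[0].{ExecEnabled:enableExecuteCommand,Platform:platformVersion}'

ExecEnabled should come back true, and (for Fargate tasks) Platform should be 1.4.0 or later.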

Now let’s see how to configure all the above in both CloudFormation and Terraform as these are the most popular IaC tools for AWS.

CloudFormation example

I have included a sample CloudFormation template in the aws-ssm-tools GitHub repository that you can experiment with. Following are the key snippets…

First we have to configure the ECS Service: make sure it has the Execute Command setting enabled, that it uses the LATEST Platform Version, and that, depending on the subnet, it does or doesn’t get a Public IP address assigned. Unfortunately Fargate doesn’t obey the subnets’ public IP defaults and has to be told explicitly whether to assign one.

Service:
  Type: AWS::ECS::Service
  Properties:
    EnableExecuteCommand: true    # Execute Command enabled
    LaunchType: FARGATE           # Both FARGATE and EC2 work
    PlatformVersion: LATEST       # LATEST is 1.4 as of now
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED   # ENABLED if the task runs in a public
                                  # subnet with direct outside access,
                                  # or DISABLED when it runs in a private
                                  # subnet behind a NAT gateway.

The Security Group must have outbound access to port 443 or the tasks won’t be able to connect to the SSM service. Alternatively we can create VPC endpoints for the com.amazonaws.{region}.ssm and com.amazonaws.{region}.ssmmessages services (a rough sketch follows after the snippet).

SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security Group for the ECS tasks  # Required property
    SecurityGroupEgress:
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0   # We don't know the SSM IP addresses
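
If the tasks shouldn’t reach the internet at all, the VPC endpoints mentioned above are the alternative. Here is a rough sketch of the SSM endpoint; the Vpc, PrivateSubnet and EndpointSecurityGroup references are placeholders, and an analogous AWS::EC2::VPCEndpoint resource is needed for ssmmessages:

SsmEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub com.amazonaws.${AWS::Region}.ssm
    VpcEndpointType: Interface
    PrivateDnsEnabled: true
    VpcId: !Ref Vpc                                 # Placeholder references:
    SubnetIds: [!Ref PrivateSubnet]                 # use your own VPC, subnets
    SecurityGroupIds: [!Ref EndpointSecurityGroup]  # and endpoint SG here.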

And finally we have to add the right managed policy to the Task IAM role to let it contact the SSM service.

TaskRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      ...
    ManagedPolicyArns:
      # This must be in the TaskRole, not in the TaskExecutionRole!
      - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

Check out the complete CFN template in the sample-templates/template-ecs-task.yml file of the AWS SSM Tools GitHub repo.

Once deployed we can see the two running containers:

~ $ ecs-session --list
cfn-demo-Cluster-d7XiIQ4p service:cfn-demo-Service-qGL0q36I 5a94f6...c2e9f1c nginx 172.31.28.175
cfn-demo-Cluster-d7XiIQ4p service:cfn-demo-Service-qGL0q36I ed126b...17c3f3d nginx 172.31.41.201

Using one of the unique identifiers, for example the IP address, we can now log in to the container. By default it starts /bin/sh, hence the basic prompt.

~ $ ecs-session 172.31.28.175
Starting session with SessionId: ecs-execute-command-079d9d572be3f
# hostname
ip-172-31-28-175.ap-southeast-1.compute.internal
# ^D
Exiting session with sessionId: ecs-execute-command-079d9d572be3f

Check out the previous post for more information about ecs-session.

Terraform example

Terraform is another popular Infrastructure as Code tool, for AWS as well as for other clouds. Let’s see how to configure ECS Exec here. Again, just the important snippets; you can check out the full Terraform configuration on GitHub.

First let’s examine the ECS Service config. As in the CloudFormation example above, it sets the Platform Version to LATEST, enables the Execute Command setting, and configures the Public IP depending on the subnets.

resource "aws_ecs_service" "service" {
launch_type = "FARGATE"
platform_version = "LATEST" # LATEST is >= 1.4.0 -> ok
enable_execute_command = true # Enable ECS Exec
network_configuration {
assign_public_ip = true # Depends on the "subnets"
subnets = local.default_subnet_ids
security_groups = [aws_security_group.ecs_task_sg.id]
}
# ... other settings ...
}
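
The local.default_subnet_ids value is defined elsewhere in the full configuration. Purely as an illustration (the data source names below are our own, not part of the sample config), it could be derived from the default VPC like this:

locals {
  default_subnet_ids = data.aws_subnets.default.ids
}

data "aws_vpc" "default" {
  default = true   # Use the account's default VPC
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}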

Next we look at the Task Role (not to be confused with the Fargate Task Execution Role, which is a different one; see the sketch below the snippet). It has to have the AmazonSSMManagedInstanceCore managed policy attached.

resource "aws_iam_role" "ecs_task_role" {
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
managed_policy_arns = [
"arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
]
}
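
For contrast, the Task Execution Role that ECS itself uses to pull images and ship logs typically carries a different managed policy. A minimal sketch (the resource name is our own choice, not part of the sample config):

resource "aws_iam_role" "ecs_task_execution_role" {
  # Same ecs-tasks.amazonaws.com trust policy as the task role above
  assume_role_policy = aws_iam_role.ecs_task_role.assume_role_policy

  managed_policy_arns = [
    # Image pulls and CloudWatch Logs; note this is NOT the SSM policy
    "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  ]
}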

And finally the Security Group. Again, it must allow outbound access to port 443 in order to talk to the SSM service.

resource "aws_security_group" "ecs_task_sg" {
# ... other settings ...
egress {
description = "Outbound access from ECS tasks to SSM service"
protocol = "tcp"
from_port = 443
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
}
}

That’s pretty much it. Check out the complete Terraform configuration in the “sample-templates/terraform/ecs.tf” file in the AWS SSM Tools GitHub repo.

After we terraform init && terraform apply the config, we can list the containers and log in. This time let’s explicitly run /bin/bash for a better user experience.

~ $ ecs-session --list --cluster tf-demo-cluster
tf-demo-cluster service:tf-demo-service 660661...29bbaafe apache 172.31.21.98
tf-demo-cluster service:tf-demo-service 70ae6b...a240ec82 apache 172.31.38.224
~ $ ecs-session 172.31.21.98 --command /bin/bash
Starting session with SessionId: ecs-execute-command-0d081d1bcface
root@ip-172-31-21-98:/usr/local/apache2# hostname
ip-172-31-21-98.ap-southeast-1.compute.internal
root@ip-172-31-21-98:/usr/local/apache2# ^D
exit
Exiting session with sessionId: ecs-execute-command-0d081d1bcface

Happy ECS Exec’ing!

And remember: Containers are ephemeral, disposable resources that should self-configure at start. Just because you can doesn’t mean you should [log in to your containers] 😉
