Notes On AWS CloudFormation

Synopsis

pip install awscli --upgrade --user

# configure your profile 
aws configure --profile my-profile

aws cloudformation create-stack --stack-name myteststack \
    --template-body file://sampletemplate.json \
    --parameters file://parameters.json \
    --profile my-profile

    # Alternatively ...
    --parameters ParameterKey=KeyPairName,ParameterValue=my-key-pair
                 ParameterKey=SubnetIDs,ParameterValue=ID1\\,ID2


aws --region us-east-1 ec2 describe-images --owners amazon  \
  --filters "Name=virtualization-type,Values=hvm"  \
  "Name=architecture,Values=x86_64" \
  --query 'Images[*].{ID:ImageId,Name:Name,CreationDate:CreationDate}'

Notes

  • You cannot edit CloudFormation templates in place, but you can update a stack by supplying a new template file for the same stack (which is, in effect, a form of editing the template).
  • You cannot rename a stack.
  • To find all the properties supported by a resource type, for example S3, see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html
  • Templates Components:
    • Resources
    • Parameters
    • Mappings: E.g. maps for regions, variables for dev vs prod, etc.
    • Outputs: References to what has been created (can be imported into another stack). See the skeleton below.
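
A minimal template skeleton showing all four components (the parameter, mapping, and resource names are illustrative; the AMI ID is the CentOS 7 one reused from the "Create EC2 Instance" example later in these notes):

Parameters:
  EnvType:
    Type: String
    Default: dev
    AllowedValues: [ dev, prod ]

Mappings:
  EnvMap:
    dev:
      InstanceType: t2.micro
    prod:
      InstanceType: m1.large

Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-9887c6e7       # CentOS 7 AMI (see "Create EC2 Instance" below)
      InstanceType: !FindInMap [ EnvMap, !Ref EnvType, InstanceType ]

Outputs:
  InstanceId:
    Description: ID of the created EC2 instance
    Value: !Ref MyInstance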

YAML vs JSON

  • Every JSON file is also a valid YAML file.

  • JSON does not support inline comments but YAML does (with # lines )

  • The top-level JSON element must be an Object or an Array, e.g. { "key" : "val" } or [5]. A bare string or number (e.g. "hello" or 5) is not a valid JSON document.

  • Note: If the first (non-comment) line of a YAML file starts with "-", then the top-level object is an Array; otherwise it is an Object.

  • Indentation level is very important for array elements in YAML.

  • Example JSON :

    {
      "simple-key": "simple-value",
      "key1": [ "str1", "long string str2" ],
      "key2": [ "1", 2 ],
      "array_contains_objects": [  
         { "key11" : "val11", "key12" : "val12" },
         { "key21" : "val21", "key22" : "val22" }
      ],
      "object": {
        "key": "value",
        "array": [ { "key1": "val1", "key2" : "val2" } ]
      },
      "myparagraph": "first line \n second line\n", 
      "mycontent": "First Line\n Second line has no newline at end"
    }
    
  • Example YAML:

    ---
    # "---" (three dashes) marks the start of a YAML document (it is not a comment).
    # Lines starting with # are comments.
    #
    simple-key : simple-value
    
    # Array elements may be indented at the same level as the key, or one level deeper.
    # Every element of the array must start with -
    key1:
     - str1
     - long string str2    
    
    # Numbers auto-recognized as numerical type. Force quote if needed.
    key2:
    - "1"
    - 2                   
    
    # Note: Recognize this pattern for array of objects.
    array_contains_objects:
    - key11: val11
      key12: val12
    - key21: val21
      key22: val22
    
    # Object elements must be indented. 
    object:
     key: value
     array:
      - key1 : val1
      - key2 : val2
    myparagraph: |
      first line
      last line has newline at end
    
    mycontent: |-      # No new line at end. Similar to echo -n
      First Line
      Second line has no newline at end
    

Stack options

You can set the following stack options:

  • Tags: Tags are arbitrary key-value pairs.
  • Permissions: An existing AWS IAM service role that AWS CloudFormation can assume. By default, it uses your account credentials.

You can also set the following advanced options for stack creation:

  • Stack policy: A JSON policy meant to prevent updates to certain resources, i.e. Allow/Deny statements with Action: Update:* and a Resource pattern (see the example after this list).
  • Rollback configuration: Specify CloudWatch alarms that trigger a rollback.
  • Notification options: Specify an SNS topic for stack events.
  • Stack creation options (not applicable for stack updates):
    • Rollback on failure: true by default, i.e. all operations are rolled back on failure.
    • Timeout: In minutes; by default there is no timeout for stack creation.
    • Termination protection: Prevents accidental deletion. Disabled by default.
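
A sketch of a stack policy that denies updates to one resource and allows everything else (the logical resource ID "ProductionDatabase" is only an illustration):

{
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "*"
    },
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "LogicalResourceId/ProductionDatabase"
    }
  ]
}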

CFN Init

Typically, you want to auto-configure your EC2 instances, Auto Scaling groups, etc. using some sort of custom script -- a few yum commands and so on.

A user-data script can be configured in the EC2 console under Configure Instance Details: Advanced Details. Just paste your script there.

Using CloudFormation:

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        Fn::Base64:  |
           #!/bin/bash
           yum update -y
           yum install -y httpd24 php56 mysql55-server php56-mysqlnd
           ...
  • It is clumsy if the script is a large one.
  • What if you could fetch contents from a URL into an arbitrary directory and run a script?
  • What if you could specify all the init dependencies as Metadata in the template?
  • That is exactly what CloudFormation implements with its cfn-init helper.

CFN helper scripts

There are 4 Python scripts that help automate the initialization:

  • cfn-init - e.g. /opt/aws/bin/cfn-init -s <stack_name> -r <resource_name>
  • cfn-signal
  • cfn-get-metadata
  • cfn-hup - A daemon that runs custom hooks when certain metadata changes.

These 4 scripts are part of the aws-cfn-bootstrap yum package: yum install -y aws-cfn-bootstrap

The CloudFormation Metadata for an EC2 instance can specify the configuration: which packages to install, a tarball to extract from a remote HTTP location, and so on.

Example

WebServerHost:
 Type: AWS::EC2::Instance
 Metadata:
   Comment: Install a simple PHP application
   AWS::CloudFormation::Init:
     config:
       packages:
         yum:
           httpd: []
           php: []
       groups:
         apache: {}
       users:
         "apache":
           groups:
             - "apache"
       sources:
         "/home/ec2-user/aws-cli": "https://github.com/aws/aws-cli/tarball/master"                                                     
         ...

 Properties:
   UserData:
     Fn::Base64:
       !Sub |         # Note: Sub means substitute
         #!/bin/bash
         yum install -y aws-cfn-bootstrap
         /opt/aws/bin/cfn-init --stack ${AWS::StackName} --resource WebServerHost
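
cfn-signal is typically paired with a CreationPolicy attribute on the resource: CloudFormation then waits for the signal (or times out) before marking the resource CREATE_COMPLETE. A minimal sketch of how the example above could be extended (the 5-minute timeout is an arbitrary choice):

WebServerHost:
  Type: AWS::EC2::Instance
  CreationPolicy:
    ResourceSignal:
      Count: 1
      Timeout: PT5M    # ISO 8601 duration: wait up to 5 minutes for one signal
  ...
  # and at the end of the UserData script:
  #   /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServerHost --region ${AWS::Region}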

Recipes

Custom Parameters

In the template, use custom parameters like this ...

Parameters:
  InstanceTypeParameter:
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - m1.small
      - m1.large
    Description: Enter t2.micro, m1.small, or m1.large. Default is t2.micro.

To use this parameter, specify something like this:

Resources:
  Ec2Instance:
   Type: AWS::EC2::Instance
   Properties:
     InstanceType:
       Ref: InstanceTypeParameter
     # Alternate syntax:
     # InstanceType: !Ref  InstanceTypeParameter
     # InstanceType:  { "Ref" : "InstanceTypeParameter" }
     ImageId: ami-0ff8a91507f77f867

The type of parameters could be: String, Number, CommaDelimitedList, List<Type>

Parameters can have Constraints: AllowedValues, AllowedPattern, Min/Max Value, Min/Max Length, etc.

If the Type is AWS::EC2::KeyPair::KeyName, then only existing key pairs can be assigned.
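
A sketch combining parameter constraints with an AWS-specific parameter type (the parameter names here are illustrative):

Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Name of an existing EC2 key pair
  DBPassword:
    Type: String
    NoEcho: true                      # do not echo the value in console/CLI output
    MinLength: 8
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]+"
    ConstraintDescription: must be 8-41 alphanumeric characters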

Typical use of CommaDelimitedList :

Parameters: 
  DbSubnetIpBlocks: 
    Description: "Comma-delimited list of three CIDR blocks"
    Type: CommaDelimitedList
    Default: "10.0.48.0/24, 10.0.112.0/24, 10.0.176.0/24"

DbSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    AvailabilityZone: !Sub
      - "${AWS::Region}${AZ}"
      - AZ: !Select [0, !Ref VpcAzs]
    VpcId: !Ref VPC
    CidrBlock: !Select [0, !Ref DbSubnetIpBlocks]
    # "CidrBlock" : { "Fn::Select" : [ "0", {"Ref" : "DbSubnetIpBlocks"} ] }

Note the use of Fn::Select in the above example!

Intrinsic Functions

Fn::Base64
Fn::Cidr
Condition functions (Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, etc.)
Fn::FindInMap
Fn::GetAtt
Fn::GetAZs
Fn::ImportValue
Fn::Join
Fn::Select
Fn::Split
Fn::Sub
Fn::Transform
Ref

Example use of If condition :

# !If [condition_name, value_if_true, value_if_false]
SecurityGroups:
- !If [CreateNewSecurityGroup, !Ref NewSecurityGroup, !Ref ExistingSecurityGroup]
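
The condition name used by !If must be declared in a top-level Conditions section. A minimal sketch, assuming a parameter ExistingSecurityGroup where the literal value "NONE" means "create a new group":

Conditions:
  CreateNewSecurityGroup: !Equals [ !Ref ExistingSecurityGroup, "NONE" ]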

Example use of Condition function:

"MyOrCondition" : {
   "Fn::Or" : [
      {"Fn::Equals" : ["sg-mysggroup", {"Ref" : "ASecurityGroup"}]},
      {"Condition" : "SomeOtherCondition"}
   ]
}
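
A few of the other intrinsic functions from the list above, sketched with illustrative names (MyS3Bucket and my-queue are placeholders):

# Fn::Join: concatenate strings with a delimiter
BucketArn: !Join [ "", [ "arn:aws:s3:::", !Ref MyS3Bucket ] ]

# Fn::Sub: substitute ${...} references inside a string
QueueUrl: !Sub "https://sqs.${AWS::Region}.amazonaws.com/${AWS::AccountId}/my-queue"

# Fn::GetAZs + Fn::Select: pick the first AZ of the current region
AvailabilityZone: !Select [ 0, !GetAZs "" ]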

Create S3 Bucket

When you create the stack, you are prompted for a stack name -- let us say it is "thava-s3-stack".

CloudFormation template file contents:

---
Resources:
  MyS3Bucket:
    Type: "AWS::S3::Bucket"
    Properties: {}      

Since you have not specified a bucket name, one is derived as:

"thava-s3-stack-mys3bucket-xxxUUIDxxx"

The logical ID of this resource is "MyS3Bucket". The generated bucket name is also the physical ID.

Make S3 Bucket in the stack Public

Choose stack => Update Stack => Then supply new updated template file :

---
Resources:
  MyS3Bucket:
    Type: "AWS::S3::Bucket"
    Properties:
      AccessControl: PublicRead

Preview the changes; it tells you the modification is done in place, i.e. Replacement: false. For certain updates, the resource may have to be deleted and re-created.

If you had updated the BucketName property, the bucket would be re-created (Replacement: true).

Create EC2 Instance

You need an ImageId to spin up an instance. Some common ones are:

Ubuntu Server 18.04 LTS (HVM) - ami-024a64a6685d05041 (64-bit x86)
RHEL 8 (HVM), SSD Volume Type - ami-098bb5d92c8886ca1 (64-bit x86)
CentOS Linux 7 x86_64 2019.01 - ami-9887c6e7

Example template:

---
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: us-east-1a
      ImageId:  ami-9887c6e7
      InstanceType: t2.micro

Optional Attributes of Resources

  • DependsOn: Though CloudFormation auto-detects dependencies between resources, it really helps to specify this attribute when the dependency is not obvious. It also helps CloudFormation roll back properly if creation fails.
  • DeletionPolicy: You can decide to retain a resource (e.g. an RDS instance) even if the stack is deleted. See the sketch below.
  • CreationPolicy: See CFN Init.
  • Metadata: Examples in CFN Init.
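
A minimal sketch of DependsOn and DeletionPolicy (the resource names are illustrative):

Resources:
  MyDB:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot       # or Retain; the default is Delete
    Properties:
      ...
  MyApp:
    Type: AWS::EC2::Instance
    DependsOn: MyDB                # wait for the DB before creating this instance
    Properties:
      ...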

Cross Stack References

Use the Outputs feature of the template. For a resource, you have a logical ID (e.g. MyBucket), a physical ID (the real bucket name), and an export Name (the Outputs: => Export: => Name value):

Outputs:
  # MyFirstSecGroup is used as one "Key" in Outputs. Unused in cross stack ref.
  MyFirstSecGroup:
    Description: My first security Group
    Value: !Ref MySsg1
    Export:
      # You can import this later. e.g. !ImportValue MySshPrimaryGroup
      # This Name is created in the global namespace for my account!
      Name: MySshPrimaryGroup

In another stack, we can import this security group as follows :

Resources:
  MyEC2:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-a4c7edb2
      SecurityGroups:
       - !ImportValue MySshPrimaryGroup
      ....

Using GetAtt

See this example to create a Volume in the same availability zone as my EC2 instance:

NewVolume:
 Type: "AWS::EC2::Volume"
 Condition: CreateProdResources
 Properties:
   Size: 100
   AvailabilityZone:
     !GetAtt EC2Instance.AvailabilityZone    

Grouping of Parameters for a Better User Interface

Example:

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      # specify a Label and the list of parameters to be grouped under it,
      # then another (Label, parameters) pair, etc. A concrete sketch follows below.
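
For instance (the group labels and parameter names below are illustrative; the listed parameters must exist in the template's Parameters section):

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: "Network Configuration"
        Parameters:
          - VPCID
          - SubnetId
      - Label:
          default: "EC2 Configuration"
        Parameters:
          - InstanceTypeParameter
          - KeyName
    ParameterLabels:
      VPCID:
        default: "Which VPC should this be deployed to?"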

Detect Stack Drifts

If the resources defined in a stack have been changed manually (using the console or CLI), the stack is said to have drifted.

In the AWS console stack listing, a set of stack attributes is shown as table columns (e.g. stack name, status). Using the settings, enable the Drift Status column. You can then click "Detect Drift" to get an updated drift status.

AWS cli command :

# Starts drift detection and returns a StackDriftDetectionId
aws cloudformation detect-stack-drift --stack-name <value>

Detect Available EC2 Images using CLI

aws --region us-east-1 ec2 describe-images --owners amazon  \
  --filters "Name=virtualization-type,Values=hvm"  \
  "Name=architecture,Values=x86_64" \
  --query 'Images[*].{ID:ImageId,Name:Name,CreationDate:CreationDate,Description:Description}'

Nested Stacks

  • Nested stacks are one of the recommended best practices.
  • Remember: ECS (Elastic Container Service) allows you to deploy Docker containers in an easily scalable manner, i.e. you run EC2 instances within an Auto Scaling group (which restarts failing instances) and define scaling policies to scale on demand. You manage the scaling between min capacity and max capacity, and you create and attach a load balancer to the ASG. You pay for the EC2 instances.
  • AWS Fargate takes ECS to the next level: you pay per running task (i.e. job) instead of per instance. Each task contains one or more containers, and each task that runs in Fargate comes with a dedicated Elastic Network Interface (ENI) with a private IP address. All containers of the same task can communicate with each other via localhost. A public IP address can be enabled as well. Scaling is transparent to the user. Much like AWS Lambda, you just schedule the task (a set of Docker containers?) and everything is taken care of for you.
  • Traditional ECS supports only container instances; modern ECS also supports Fargate. Fargate has limitations -- EFS and EBS integration are not possible.

Exporting From Stacks Vs Nested Stacks

Export resources for Global use across your account:

Outputs:
  StackVPC:
    Description: Team VPC
    Value: !Ref TeamVPC
    Export:
      Name: !Sub "${AWS::StackName}-TeamVPC"

The exporting approach is natural, but you need some sort of scripting to orchestrate the creation of the bigger stack.

A nested stack is a resource of type AWS::CloudFormation::Stack within your CloudFormation template. You pass it the URL of a template file in S3, along with the parameters needed:

Resources:

  baseFargate:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        ClusterName:
          !Ref ClusterName
      TemplateURL: https://s3.amazonaws.com/bucket-name/fargate-cluster.yaml

Service Slayer Demo Project

  • It has three containerised NodeJS apps:

    containers/defense-api containers/offense-api containers/arbiter-api

  • We'll need an ECR (Elastic Container Registry) repository for each app.

Synopsis:

# Create ECR (if not already existing)
api=service-slayer-arbiter-api
aws ecr create-repository --repository-name $api
...

ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
# Log in to the ECR registry (AWS CLI v1 style: evaluates the emitted docker login command)
$(aws ecr get-login --no-include-email --region us-east-1)

docker build -t $api ./arbiter-api/
docker tag $api:latest \
      $ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/$api:latest
docker push $ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/$api:latest
...

BUCKET_NAME=devopstar

## Creates S3 bucket
aws s3 mb s3://$BUCKET_NAME

## S3 cloudformation deployments
### Base
aws s3 cp fargate-cluster.yaml s3://$BUCKET_NAME/../fargate-cluster.yaml
...
### CI/CD
aws s3 cp codebuild.yaml s3://$BUCKET_NAME/.../codebuild.yaml

The main deploy.yaml file is the parent stack: it contains a bunch of nested stacks with some predefined dependencies:

defenseService:
  # This dependency does the real orchestration !!!
  DependsOn: [ baseFargate, baseNetworking ]
  Type: AWS::CloudFormation::Stack
  Properties:
    Parameters:
      ....
      VPCId:
        !GetAtt [ baseNetworking, Outputs.VPC ]
      PublicSubnetIDs:
        !GetAtt [ baseNetworking, Outputs.SubnetsPublic ]
      FargateCluster:
        !GetAtt [ baseFargate, Outputs.FargateCluster ]
      ....

Make sure your nested stack really outputs the value that you fetch:

Outputs:

  VPC:
    Description: 'VPC.'
    Value: !Ref VPC

The outputs of the top level stack can refer to outputs of base stacks:

Outputs:

  ArbiterApiEndpoint:
    Description: API Endpoint for the Arbiter Service
    Value: !GetAtt [ arbiterService, Outputs.EndpointUrl ]

The Parameters can be saved in a separate file :

[
  { "ParameterKey":"ProjectName", "ParameterValue":"Service Slayer" },
  { "ParameterKey":"BucketName",  "ParameterValue":"devopstar" },
  ...
]

Command example :

aws cloudformation create-stack \
    --stack-name "service-slayer" \
    --template-body file://cloudformation/deployment.yaml \
    --parameters file://cloudformation/deployment-params.json \
    --capabilities CAPABILITY_IAM

# Note: update-stack with same options is also supported.

Limitations

  • You cannot create a dynamic number of resources (i.e. there is no way to simulate a for loop).