Attach an IAM Role to an EC2 Instance with CloudFormation

CloudFormation allows you to manage your AWS infrastructure by defining it in code.

In this post, I will show you how to create an EC2 instance and attach an IAM role to it so it can access your S3 buckets.

First, you’ll need a template that specifies the resources you want in your stack. For this step, you can use a sample template that I have already prepared:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Attach IAM Role to an EC2",
  "Parameters" : {
    "KeyName" : {
      "Description" : "EC2 Instance SSH Key",
      "Type" : "AWS::EC2::KeyPair::KeyName"
    },
    "InstanceType" : {
      "Description" : "EC2 instance specs configuration",
      "Type" : "String",
      "Default" : "t2.micro",
      "AllowedValues" : ["t2.micro", "t2.small", "t2.medium"]
    }
  },
  "Mappings" : {
    "AMIs" : {
      "us-east-1" : {
        "Name" : "ami-8c1be5f6"
      },
      "us-east-2" : {
        "Name" : "ami-c5062ba0"
      },
      "eu-west-1" : {
        "Name" : "ami-acd005d5"
      },
      "ap-southeast-2" : {
        "Name" : "ami-8536d6e7"
      }
    }
  },
  "Resources" : {
    "Test" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : {
          "Ref" : "InstanceType"
        },
        "ImageId" : {
          "Fn::FindInMap" : [
            "AMIs",
            {
              "Ref" : "AWS::Region"
            },
            "Name"
          ]
        },
        "KeyName" : {
          "Ref" : "KeyName"
        },
        "IamInstanceProfile" : {
          "Ref" : "ListS3BucketsInstanceProfile"
        },
        "SecurityGroupIds" : [
          {
            "Ref" : "SSHAccessSG"
          }
        ],
        "Tags" : [
          {
            "Key" : "Name",
            "Value" : "Test"
          }
        ]
      }
    },
    "SSHAccessSG" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Allow SSH access from anywhere",
        "SecurityGroupIngress" : [
          {
            "FromPort" : "22",
            "ToPort" : "22",
            "IpProtocol" : "tcp",
            "CidrIp" : "0.0.0.0/0"
          }
        ],
        "Tags" : [
          {
            "Key" : "Name",
            "Value" : "SSHAccessSG"
          }
        ]
      }
    },
    "ListS3BucketsInstanceProfile" : {
      "Type" : "AWS::IAM::InstanceProfile",
      "Properties" : {
        "Path" : "/",
        "Roles" : [
          {
            "Ref" : "ListS3BucketsRole"
          }
        ]
      }
    },
    "ListS3BucketsPolicy" : {
      "Type" : "AWS::IAM::Policy",
      "Properties" : {
        "PolicyName" : "ListS3BucketsPolicy",
        "PolicyDocument" : {
          "Statement" : [
            {
              "Effect" : "Allow",
              "Action" : [
                "s3:List*"
              ],
              "Resource" : "*"
            }
          ]
        },
        "Roles" : [
          {
            "Ref" : "ListS3BucketsRole"
          }
        ]
      }
    },
    "ListS3BucketsRole" : {
      "Type" : "AWS::IAM::Role",
      "Properties" : {
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement" : [
            {
              "Effect" : "Allow",
              "Principal" : {
                "Service" : ["ec2.amazonaws.com"]
              },
              "Action" : [
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path" : "/"
      }
    }
  },
  "Outputs" : {
    "EC2" : {
      "Description" : "EC2 IP address",
      "Value" : {
        "Fn::Join" : [
          "",
          [
            "ssh ec2-user@",
            {
              "Fn::GetAtt" : [
                "Test",
                "PublicIp"
              ]
            },
            " -i ",
            {
              "Ref" : "KeyName"
            },
            ".pem"
          ]
        ]
      }
    }
  }
}

The template creates a basic EC2 instance that uses an IAM role with an S3 list policy. It also creates a security group that allows SSH access from anywhere.

Note: I also used the Parameters section to declare values that can be passed to the template when you create the stack.

Now that the template is defined, sign in to the AWS Management Console, navigate to CloudFormation, and click on “Create Stack“. Upload the JSON file:
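
Alternatively, if you prefer the command line, the same stack can be created with the AWS CLI (the stack name, file name and key pair value below are just examples); the --capabilities flag is required because the template creates IAM resources:

aws cloudformation create-stack \
  --stack-name attach-iam-role-demo \
  --template-body file://template.json \
  --parameters ParameterKey=KeyName,ParameterValue=my-key \
  --capabilities CAPABILITY_IAM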



You will be asked to assign a name to the stack and to choose your EC2 instance type and SSH key pair:



Make sure to check the box “I acknowledge that AWS CloudFormation might create IAM resources“ so that the IAM policy and role can be created:



Once launched, you will see the following screen with the stack creation events:



After a while, you will get the CREATE_COMPLETE message in the status tab:



Once done, on the Outputs tab, you should see how to connect to your instance via SSH:



If you run the command shown in the Outputs tab from your terminal, you should be able to connect to the server via SSH:

ssh ec2-user@52.91.239.135 -i key.pem


Let’s check if we can list the S3 buckets using the AWS CLI:

aws s3 ls


Awesome! We are able to list the buckets, but what if we want to create a new bucket?



It didn’t work, which is expected: the IAM role attached to the instance doesn’t have the s3:CreateBucket permission.
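
If you did want to allow bucket creation, one option (shown here as a sketch, not part of the original template) would be to add the s3:CreateBucket action to the ListS3BucketsPolicy statement and update the stack:

"PolicyDocument" : {
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : [
        "s3:List*",
        "s3:CreateBucket"
      ],
      "Resource" : "*"
    }
  ]
}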

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Build a Serverless Memes Function with OpenFaaS

In this quick post, I will show you how to build a Serverless function in Go to get the latest 9Gag Memes using OpenFaaS.



This tutorial assumes that you have:

  • faas-cli installed – The easiest way to install the faas-cli is through cURL:
curl -sSL https://cli.openfaas.com | sudo sh
  • Swarm or Kubernetes environment configured – See Docs.

1 – Create a function

Create a handler.go file with the following content:

package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "os"

    "github.com/mlabouardy/9gag"
)

func main() {
    tag, err := ioutil.ReadAll(os.Stdin)
    if err != nil {
        log.Fatalf("Unable to read standard input: %s", err.Error())
    }
    gag9 := gag9.New()
    memes := gag9.FindByTag(string(tag))
    rawJson, _ := json.Marshal(memes)
    fmt.Println(string(rawJson))
}

The code is self-explanatory: it uses the 9Gag web crawler library to parse the website and fetch memes by tag.

2 – Docker Image

I wrote a simple Dockerfile using the multi-stage builds technique to keep the image size small:

FROM golang:1.9.1 AS builder
MAINTAINER mlabouardy <mohamed@labouardy.com>
WORKDIR /go/src/github.com/mlabouardy/Memes9Gag
RUN go get -d -v github.com/mlabouardy/9gag
COPY handler.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

FROM alpine:latest
RUN apk --no-cache add ca-certificates
ADD https://github.com/openfaas/faas/releases/download/0.5.1-alpha/fwatchdog /usr/bin
RUN chmod +x /usr/bin/fwatchdog
WORKDIR /root/
COPY --from=builder /go/src/github.com/mlabouardy/Memes9Gag/app .
ENV fprocess="/root/app"
CMD ["fwatchdog"]

3 – Configuration file

provider:
  name: faas
  gateway: http://localhost:8080

functions:
  memes-9gag:
    lang: Dockerfile
    handler: ./function
    image: mlabouardy/memes-9gag

Note: if you are pushing to a remote registry, change the name from mlabouardy to your own Docker Hub account.

4 – Build

Issue the following command:

faas-cli build -f ./stack.yml

5 – Deploy

faas-cli push -f ./stack.yml
faas-cli deploy -f ./stack.yml

6 – Tests

Once deployed, you can invoke the function via:

cURL:

curl http://localhost:8080/function/memes-9gag -d "GoT"

FaaS CLI:

echo "GoT" | faas-cli invoke memes-9gag

UI:



Note: all the code used in this demo is available on my GitHub 😍

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Preventing race conditions in Docker

It’s easy to run into race conditions with Docker & Compose. Take a common pattern, for example: the application server depends on the database, but since the database container hasn’t had time to initialize by the time the application starts, the application simply fails to connect to it.

A race condition example with NodeJS app & MySQL:

var MySQL = require('mysql'),
    express = require('express'),
    app = express();

var connection = MySQL.createConnection({
  host     : process.env.MYSQL_HOST || 'localhost',
  user     : process.env.MYSQL_USER || '',
  password : process.env.MYSQL_PASSWORD || ''
});

connection.connect(function(err){
  if(err){
    console.log('error connecting:', err.stack);
    process.exit(1);
  }
  console.log('connected as id:', connection.threadId);
})

app.get('/', function(req, res){
  res.send('Hello world :)');
})

app.listen(3000, function(){
  console.log('Server started ....');
})

To build the application container, I used the following Dockerfile:

FROM node:8.7.0
MAINTAINER mlabouardy <mohamed@labouardy.com>

WORKDIR /app

RUN npm install mysql express

COPY server.js .

EXPOSE 3000

CMD node server.js

To deploy the stack, I used docker-compose:

version: "3.0"

services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=root
    networks:
      - db-net

  app:
    build: .
    ports:
      - 3000:3000
    environment:
      - MYSQL_HOST=mysql
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    networks:
      - db-net

networks:
  db-net:
    driver: bridge

Let’s build the image:

docker-compose build


Then, create the containers:

docker-compose up -d 


Let’s see the status:

docker-compose ps


The application failed to start. Let’s see why:

docker-compose logs -f app


RACE CONDITION! The application container came up before the database, tried to connect to MySQL, and failed with a connection error. There are several ways to avoid this:

  • Adding a retry mechanism in the application code that waits for the DB to be up before connecting to it (see the sketch after this list)
  • Using restart policy – Docker Docs
  • Holding the container until the database is up and running
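
For the first option, a minimal sketch of a retry loop in the NodeJS app could look like this (the retry count and delay are arbitrary, and a fresh connection is created on each attempt):

function connectWithRetry(retriesLeft) {
  // Create a fresh connection for every attempt
  var connection = MySQL.createConnection({
    host     : process.env.MYSQL_HOST || 'localhost',
    user     : process.env.MYSQL_USER || '',
    password : process.env.MYSQL_PASSWORD || ''
  });

  connection.connect(function(err){
    if(err && retriesLeft > 0){
      console.log('DB not ready, retrying in 5s...');
      return setTimeout(function(){ connectWithRetry(retriesLeft - 1); }, 5000);
    }
    if(err){
      console.log('error connecting:', err.stack);
      process.exit(1);
    }
    console.log('connected as id:', connection.threadId);
  });
}

connectWithRetry(10);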

I will go with the third solution, using an open source tool called Dockerize. Its advantage is that it simply polls the database socket until it opens, then launches the web app.

Note: Dockerize gives you the ability to wait for services on a specified protocol (file, tcp, tcp4, tcp6, http, https and unix)

So just update the Dockerfile to install Dockerize:

FROM node:8.7.0
MAINTAINER mlabouardy <mohamed@labouardy.com>

RUN apt-get update && apt-get install -y wget

ENV DOCKERIZE_VERSION v0.5.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz

WORKDIR /app

RUN npm install mysql express

COPY server.js .

EXPOSE 3000

CMD dockerize -wait tcp://mysql:3306 -timeout 1m && node server.js

Then, build the new image:



docker-compose up -d
docker-compose ps


docker-compose logs -f app


It’s working!

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Highly Available Bastion Hosts with Route53

Instances in a private subnet don’t have a public IP address, so without a VPN or a Direct Connect option, a bastion host (jump box) is the expected mechanism to reach your servers. Therefore, we should make it highly available.

In this quick post, I will show you how to set up highly available bastion hosts with the following targets:

  • Bastion hosts will be deployed in two Availability Zones to support immediate access across the VPC & withstand an AZ failure.
  • Elastic IP addresses are associated with the bastion instances to make sure the same trusted Elastic IPs are used at all times.
  • Bastion Hosts will be reachable via a permanent DNS entry configured with Route53.


In order to easily set up the infrastructure described above, I used Terraform:

git clone https://github.com/mlabouardy/terraform-aws-labs
cd bastion-highavailability

Note: I did a tutorial on how to set up a VPC with Terraform, so make sure to read it for more details.

Update the variables.tfvars file with your SSH Key Pair name and an existing Hosted Zone ID. Then, issue the following command:

terraform apply -var-file=variables.tfvars

That will bring up the VPC, and all the necessary resources:



Now in your AWS Management Console you should see the resources created:

EC2 Instances:



DNS Record:



Finally, create an SSH tunnel using the DNS record to your private instance:

ssh -f ec2-user@bastion.slowcoder.com -i /d/aws/vpc.pem -L 2800:10.0.3.218:22 -N

Once done, you should be able to access your private instances via SSH:

ssh ec2-user@localhost -p 2800 -i /d/aws/vpc.pem


Want to take it further? Instead of defining a fixed number of bastion hosts, we could run the bastion host inside an Auto Scaling group with a minimum size of 1, as sketched below.
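
A rough Terraform sketch of that idea (the resource names, AMI variable and subnet references are placeholders; a real launch configuration would reuse the bastion AMI and security group from the existing templates):

resource "aws_launch_configuration" "bastion" {
  image_id        = "${var.bastion_ami}"
  instance_type   = "t2.micro"
  key_name        = "${var.key_name}"
  security_groups = ["${aws_security_group.bastion_sg.id}"]
}

resource "aws_autoscaling_group" "bastion" {
  launch_configuration = "${aws_launch_configuration.bastion.id}"
  vpc_zone_identifier  = ["${aws_subnet.public_1.id}", "${aws_subnet.public_2.id}"]
  min_size             = 1
  max_size             = 1

  tag {
    key                 = "Name"
    value               = "bastion"
    propagate_at_launch = true
  }
}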

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Create Front-End for Serverless RESTful API

In this post, we will build a UI for the Serverless REST API we built in the previous tutorial, so make sure to read it before following this part.

Note: make sure to enable CORS for the endpoint. In the API Gateway console, under Actions, choose Enable CORS:



The first step is to clone the project:

git clone https://github.com/mlabouardy/movies-dynamodb-lambda.git

Head into the ui folder and modify js/app.js with your own API Gateway Invoke URL:

angular.module('app', [])
  .controller('MainCtrl', function($scope, $http){
    var self = $scope;
    var apiUrl = 'https://kbouwyuvoc.execute-api.us-east-1.amazonaws.com/prod/movies'; // replace with API Gateway Invoke URL

    self.movies = [];
    self.movie = {};
    self.error = '';

    self.getMovies = function(){
      $http.get(apiUrl).then(function(res){
        self.movies = res.data;
      })
    }

    self.create = function(){
      $http.post(apiUrl, self.movie).then(function(res){
        self.getMovies();
        self.movie = '';
        self.error = '';
      }, function(err){
        self.error = err.data.status;
      });
    }

    self.getMovies();
  });


Once done, you are ready to create a new S3 bucket:

aws s3 mb s3://movies-tutorial

Copy all the files in the ui directory into the bucket:

aws s3 cp ui/ s3://movies-tutorial --recursive --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers

Finally, turn on website hosting for your bucket:

aws s3 website s3://movies-tutorial --index-document index.html

After running these commands, all of our static files should appear in our S3 bucket:



Your bucket is now configured for static website hosting, and you have an S3 website URL like http://<bucket_name>.s3-website-us-east-1.amazonaws.com



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Setup AWS Lambda with Scheduled Events

This post is part of my “Serverless” series. In this part, I will show you how to set up a Lambda function that sends emails on a defined scheduled event from CloudWatch.

1 – Create Lambda Function

Start by cloning the project:

git clone https://github.com/mlabouardy/schedule-mail-lambda.git

I implemented a simple Lambda function in NodeJS that sends an email using the Mailgun library:

'use strict';
var mg = require('mailgun-js')({
  apiKey: process.env.MAILGUN_API_KEY || 'YOUR_API_KEY',
  domain: process.env.MAILGUN_DOMAIN || 'DOMAIN_NAME'
});

exports.sendEmail = function(event, context, callback){
  mg.messages().send({
    from: 'mohamed.labouardy@gmail.com',
    to: 'mohamed@labouardy.com',
    subject: 'Hello',
    text: 'Sent from lambda on a defined schedule'
  }, function(err, body){
    callback(err, body)
  })
}

Note: you could use another service like AWS SES or your own SMTP server.
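
For example, a rough sketch of the same handler using AWS SES instead (assuming the sender address is verified in SES and the execution role is allowed to call ses:SendEmail):

'use strict';
var AWS = require('aws-sdk');
var ses = new AWS.SES();

exports.sendEmail = function(event, context, callback){
  // Same handler signature, but the mail goes through SES instead of Mailgun
  ses.sendEmail({
    Source: 'mohamed.labouardy@gmail.com',
    Destination: { ToAddresses: ['mohamed@labouardy.com'] },
    Message: {
      Subject: { Data: 'Hello' },
      Body: { Text: { Data: 'Sent from lambda on a defined schedule' } }
    }
  }, function(err, data){
    callback(err, data)
  })
}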

Then, create a zip file:
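
The exact command isn’t reproduced here, but it boils down to zipping the function code and its dependencies (file names assumed from the repository layout):

zip -r schedule-mail-lambda.zip index.js node_modules/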



Next, we need to create an Execution Role for our function:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS" : "*" },
    "Action": "sts:AssumeRole"
  }]
}

aws iam create-role --role-name lambda_execution --assume-role-policy-document file://lambda_role_policy.json


Execute the following Lambda CLI command to create the Lambda function. We need to provide the zip file and the IAM role ARN we created earlier, and set MAILGUN_API_KEY and MAILGUN_DOMAIN as environment variables.

aws lambda create-function --region us-east-1 --function-name mail-scheduler \
--zip-file fileb://schedule-mail-lambda.zip --role arn:aws:iam::3XXXXXXX3:role/lambda_execution \
--handler index.sendEmail --runtime nodejs6.10 \
--environment Variables="{MAILGUN_API_KEY=key-6XXXXXXXXXXXXXXXXXXXXXX5,MAILGUN_DOMAIN=sandboxXXXXXXXXXXXXXX.mailgun.org}"

Note: the --runtime parameter uses Node.js 6.10, but you can also specify Node.js 4.3.

Once created, AWS Lambda returns function configuration information as shown in the following example:



Now if we go back to the AWS Lambda dashboard, we should see that our function has been successfully created:



2 – Configure a CloudWatch Rule

Create a new rule that will trigger our Lambda function every 5 minutes:



Note: you can specify the value as a rate or in the cron expression format. All schedules use the UTC time zone, and the minimum precision for schedules is one minute.
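
If you prefer the CLI, roughly the same rule can be created with CloudWatch Events commands (the rule name is arbitrary, and the account ID is a placeholder matching the one above); the add-permission call allows CloudWatch Events to invoke the function:

aws events put-rule --name mail-scheduler-rule --schedule-expression "rate(5 minutes)"

aws lambda add-permission --function-name mail-scheduler \
  --statement-id events-invoke --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:3XXXXXXX3:rule/mail-scheduler-rule

aws events put-targets --rule mail-scheduler-rule \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:3XXXXXXX3:function:mail-scheduler"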

If you go back to the Lambda function console and navigate to the Triggers tab, you should see that the CloudWatch Events rule has been added:



After 5 minutes, CloudWatch will trigger the Lambda Function and you should get an email notification:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Setup Docker Swarm on AWS using Ansible & Terraform

This post is part of my “IaC” series explaining how to use Infrastructure as Code concepts with Terraform. In this part, I will show you how to set up a Swarm cluster on AWS using Ansible & Terraform, as shown in the diagram below (1 master and 2 workers), in less than 1 min ⏱:



All the templates and playbooks used in this tutorial can be found on my GitHub (https://github.com/mlabouardy/terraform-aws-labs/tree/master/docker-swarm-cluster). 😎

Note: I did some tutorials about how to get started with Terraform on AWS, so make sure you read them before going through this post.

1 – Setup EC2 Cluster using Terraform

1.1 – Global Variables

This file contains environment-specific configuration like the region name, instance type, etc.

variable "aws_region" {
  description = "AWS region on which we will setup the swarm cluster"
  default     = "us-east-1"
}

variable "ami" {
  description = "Amazon Linux AMI"
  default     = "ami-4fffc834"
}

variable "instance_type" {
  description = "Instance type"
  default     = "t2.micro"
}

variable "key_path" {
  description = "SSH Public Key path"
  default     = "/home/core/.ssh/id_rsa.pub"
}

variable "bootstrap_path" {
  description = "Script to install Docker Engine"
  default     = "install-docker.sh"
}

1.2 – Config AWS as Provider

provider "aws" {
  region = "${var.aws_region}"
}

1.3 – Security Group

This SG allows all the inbound/outbound traffic:

resource "aws_security_group" "default" {
  name = "sgswarmcluster"

  # Allow all inbound
  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Enable ICMP
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

1.4 – EC2 Instances

resource "aws_key_pair" "default" {
  key_name   = "clusterkp"
  public_key = "${file("${var.key_path}")}"
}

resource "aws_instance" "master" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${aws_key_pair.default.id}"
  user_data              = "${file("${var.bootstrap_path}")}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  tags {
    Name = "master"
  }
}

resource "aws_instance" "worker1" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${aws_key_pair.default.id}"
  user_data              = "${file("${var.bootstrap_path}")}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  tags {
    Name = "worker 1"
  }
}

resource "aws_instance" "worker2" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${aws_key_pair.default.id}"
  user_data              = "${file("${var.bootstrap_path}")}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  tags {
    Name = "worker 2"
  }
}

Bootstrap script to install the latest version of Docker:

#!/bin/sh
yum update
yum install -y docker
service docker start
usermod -aG docker ec2-user

2 – Transform to Swarm Cluster with Ansible

The playbook is self-explanatory:

---
- name: Init Swarm Master
  hosts: masters
  gather_facts: False
  remote_user: ec2-user
  tasks:
    - name: Swarm Init
      command: docker swarm init --advertise-addr {{ inventory_hostname }}

    - name: Get Worker Token
      command: docker swarm join-token worker -q
      register: worker_token

    - name: Show Worker Token
      debug: var=worker_token.stdout

    - name: Master Token
      command: docker swarm join-token manager -q
      register: master_token

    - name: Show Master Token
      debug: var=master_token.stdout

- name: Join Swarm Cluster
  hosts: workers
  remote_user: ec2-user
  gather_facts: False
  vars:
    token: "{{ hostvars[groups['masters'][0]]['worker_token']['stdout'] }}"
    master: "{{ hostvars[groups['masters'][0]]['inventory_hostname'] }}"
  tasks:
    - name: Join Swarm Cluster as a Worker
      command: docker swarm join --token {{ token }} {{ master }}:2377
      register: worker

    - name: Show Results
      debug: var=worker.stdout

    - name: Show Errors
      debug: var=worker.stderr

Now that we have defined all the required templates and the playbook, we only need two commands to bring up the Swarm cluster:

terraform apply
ansible-playbook -i hosts playbook.yml

Note: make sure to update the hosts file with the public IP of each EC2 instance.
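
For reference, a hosts inventory matching the group names used in the playbook would look roughly like this (the IP addresses are placeholders to be replaced with the Terraform outputs):

[masters]
52.x.x.10

[workers]
52.x.x.11
52.x.x.12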

Setting up the Swarm cluster in action is shown below 😃:

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

How ChatOps can help you DevOps better!

When people hear DevOps, they often relate it to “automation”, “teamwork” and many “tools”, which is right. DevOps is all about CAMS: culture, automation, measurement and sharing. The purpose of this article is to show how ChatOps can boost DevOps by bringing CAMS to everyday practice.

What is DevOps?

DevOps is a set of practices that emphasizes the collaboration and communication of both software engineers and IT & infrastructure operations to reduce a product's time to market. One main goal of DevOps is to deploy features into production quickly and to detect and correct problems when they occur, without disrupting other services (self-healing, blue/green deployment, canary updates, etc.).

There are several guidelines for achieving DevOps maturity; here are a few you need to know:

  • Continuous Integration is the practice of integrating, building and testing code within the development environment. It requires the development team to integrate code into a shared repository (version control system). The CI server checks out the code and runs all the pre-deployment tests (which do not require the code to be deployed to a server); if they pass, it compiles and packages the code into an artifact (JAR, Docker image, gem, etc.) and pushes it to an artifact repository manager. The artifact is then deployed inside an immutable container to a test environment (quality assurance), where post-deployment tests (functional, integration & performance tests) are executed.
  • Continuous Delivery is an extension of the continuous integration pipeline. It aims to make any change to the system releasable; a person or business rule decides when the final push to production should occur.
  • Continuous Deployment is an advanced evolution of continuous delivery. It’s the practice of deploying all the way into production without any human intervention.

In addition to the practices discussed above, most DevOps teams today embrace collaborative messaging platforms, such as Slack, to communicate with each other and get real-time updates about the system through online chat. That's certainly the spirit behind ChatOps.

What is ChatOps?

“Placing tools directly in the middle of the conversation” — Jesse Newland, GitHub

Collaboration and conversation are a force that lets us work and learn together to produce new things. And this is happening in an exponential way that accelerates every year.

ChatOps (an amalgamation of chat and operations) is an emerging movement that eases the integration between teams and the various tools/platforms of DevOps and beyond. It is about conversation-driven development, bringing the tools into the conversation. Bots are now members of your team to whom you can send a request and get an instant response.

ChatOps is a model where people, tools, processes and automation are connected in a transparent flow. It also helps teams collaborate and control pipelines in one window.

Today, the DevOps toolchain brings in many kinds of tools, including development software, network and server management, testing, monitoring, etc. Collaborating on and controlling these pipelines in one window has helped development teams work in a more efficient and agile way.

There are three main components in ChatOps:

Collaboration tool: the chat client where stakeholders and teams are connected to each other and to the systems they work on. There are several chat platforms:

  • Slack: a leading chat platform for teams that has accumulated more than 4 million daily active users. It is also one of the first platforms to integrate bots into its system.
  • HipChat by Atlassian is a group chat, file sharing, video chat & screen sharing tool built for teams & businesses.

Bot: the core of the ChatOps methodology. The bot sits between the collaboration tool and the DevOps tools. It receives requests from team members and retrieves information from integrated systems by executing sets of commands (scripts).

  • Hubot, a leading bot tool for ChatOps. It is a valuable open source robot (CoffeeScript) for automating chat rooms, made by GitHub back in 2013. Hubot is useful and powerful through scripts, which define the skills of your Hubot instance. Hundreds of them are written and maintained by the community, and you can create your own. It mainly helps automate most ops-related tasks (see the short example after this list).
  • Lita is a framework for bots dedicated to company chat rooms, written in Ruby and heavily inspired by Hubot. The framework can be used to build operational task automations and has a very comprehensive list of plugins, which means it can be integrated with many chat platforms such as Slack, Facebook Messenger and others.
  • Cog, made by Operable, is another chatbot framework to help automate DevOps workflows. It's designed to be chat-platform and language agnostic, and uses a Unix-style pipeline to activate complex functionality.
  • Errbot is a chatbot daemon that sits between a chat platform and DevOps tools. It is written in Python and aims to make it easy to integrate any tool that provides an API with a chat platform via commands.
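
To make the idea of Hubot scripts concrete, here is a minimal, illustrative sketch (the command name and reply are made up):

# scripts/deploy.coffee -- an illustrative Hubot script
module.exports = (robot) ->
  # Respond to "hubot deploy <app>" in the chat room
  robot.respond /deploy (.*)/i, (msg) ->
    app = msg.match[1]
    msg.reply "Deploying #{app}... (this is where you would call your CI/CD tooling)"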

System integration: the third key element in ChatOps. Simply put, these are the DevOps tools that enable more productivity, such as:

  • Issue tracking: JIRA, OTRS, TeamForge …
  • Version Control Systems: Github, Gitlab, Bitbucket …
  • Infrastructure as Code (IaC): Terraform, Vagrant, Packer, Swarm, Kubernetes, Docker, AWS CloudFormation …
  • Configuration Management tools (Provisioning): Ansible, Salt, Chef, Puppet …
  • Continuous Integration Servers: Jenkins, Travis CI, Bamboo …
  • Monitoring: Grafana, Kibana, Prometheus …

Nowadays, ChatOps is operational. Several teams around the world have already connected their chat platforms to their build systems to get notifications and to query and execute processes on their continuous integration servers. The same goes for QA (quality assurance) teams, support teams and others.

With ChatOps, trust is built within the team, especially since work is shared and brought into the foreground by putting it all in one place. Your chat platform is your new command line.

Conversation-driven collaboration is not new, but with ChatOps we observe a combination of the oldest form of collaboration and the newest technologies. We are not surprised that this combination has changed the way staff members work. It should push people to build software that makes this collaboration more contributory, smooth and secure.

For further information on DevOps and ChatOps approaches, check out our DevOps wiki on GitHub.

Thanks for reading,

Running Docker on AWS EC2

In this quick tutorial, I will show you how to install Docker 🐋 on AWS EC2 instance and run your first Docker container.

1 – Setup EC2 instance

I already did a tutorial on how to create an EC2 instance, so I won’t repeat it. There are a few ways you’ll want to differ from that tutorial:

We select the “Amazon Linux AMI 2017.03.1 (HVM), SSD Volume Type” as the AMI. The exact versions may change with time.
We configure the security groups as below. This setting allows access to port 80 (HTTP) from anywhere, as well as SSH access.



Go ahead and launch the instance; it will take a couple of minutes:



2 – Install Docker

Once your instance is ready to use, connect via SSH to the server using the public DNS and the public key:



Once connected, use the yum package manager to install Docker by typing the following commands:

sudo yum update -y
sudo yum install -y docker

Next, start the docker service:
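
The screenshot isn’t reproduced here, but the command is the standard service start (the same one used in the bootstrap scripts elsewhere in this series):

sudo service docker start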



In order to use the docker command without root privileges (sudo), we need to add ec2-user to the docker group:

sudo usermod -aG docker ec2-user

To verify that docker is correctly installed, just type:
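
The screenshot essentially shows a version check; either of the following commands will do:

docker --version
docker info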



As you can see, the latest version of Docker has been installed (v17.03.1-ce).

Congratulations! 💫 🎉 You now have an EC2 instance with Docker installed.

3 – Deploy Docker Container

It’s time to run your first container 😁. We will create an nginx container with this command:
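
The command from the screenshot isn’t shown here, but it boils down to running the official nginx image and publishing port 80 (the container name is just an example):

docker run -d --name nginx -p 80:80 nginx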



If we run the list command “docker ps”, we can see that an nginx container has been created from the official nginx image.



Finally, if you visit your instance’s public DNS name in your browser, you should see something like this:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.
