How to play PUBG on AWS

AWS GPU instances are best known for deep learning, but they can also be used to run video games. This tutorial goes through how to set up your own GPU-optimised EC2 instance to run the top-selling and most-played game, “PlayerUnknown’s Battlegrounds (PUBG)”.

To get started, make sure you are in the AWS region closest to you, select Microsoft Windows Server as the AMI and set the instance type to g2.2xlarge. This instance is backed by an NVIDIA GRID GPU (Kepler GK104), 8 hardware hyper-threads from an Intel Xeon E5-2670 and 15 GB of RAM.



For more resource-intensive games, you should use the next generation of GPU instances: P2, P3 and G3 (the latter offers up to 4 NVIDIA Tesla M60 GPUs).

After this is done, click on “Launch Instances”, and you should see a screen showing that your instance is being created:



To connect to your Windows instance, you must retrieve the initial administrator password and specify this password when you connect to your instance using Remote Desktop:

Before you attempt to log in using Remote Desktop Connection, you must open port 3389 on the security group attached to your instance.
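With the AWS CLI, and assuming sg-xxxxxxxx is the security group attached to your instance, that rule can be added like this (ideally restrict the source to your own IP rather than the whole internet):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 3389 --cidr YOUR_IP/32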



After you connect, install Google Chrome first (it saves a lot of time), then install Microsoft DirectX 11:



Next, install the NVIDIA graphics driver for maximum gaming performance:



Once installed, make sure to reboot the instance for changes to take effect:



Then, install Steam, log in with your account and install PUBG from the “Library” section:



You can take advantage of AWS’s high network performance (up to 10 Gbps of bandwidth) while downloading the game:



Once the game is installed, you can play PUBG on your virtualized GPU instance:



You can take this further and use the Steam In-Home Streaming feature to stream your game from your EC2 instance to your Mac:



Enjoy the game! You can now play your games on any device connected to the same network:



You might want to bake an AMI from your instance to avoid setting everything up again the next time you want to play, and use Spot Instances to reduce the instance cost. Also, make sure to stop your instances when you’re done for the day to avoid incurring charges; GPU instances are costly (disk storage also costs money, and can be significant if you have a large disk footprint).
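Both can be done from the AWS CLI; the instance ID below is a placeholder:

# Stop the instance when you're done for the day (you keep paying for the EBS volume, not the compute)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Bake an AMI from the configured instance so you can relaunch it quickly later (e.g. on a Spot Instance)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "pubg-gaming-rig"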

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Docker on Elastic Beanstalk Tips

AWS Elastic Beanstalk is one of the most widely used PaaS offerings today. It allows you to deploy your application without provisioning the underlying infrastructure while maintaining high availability. However, it can be painful to use due to the lack of documentation and real-world scenarios. In this post, I will walk you through how to use Elastic Beanstalk to deploy Docker containers from scratch, followed by how to automate your deployment process with a continuous integration pipeline. By the end of this post, you should be familiar with advanced topics like debugging and monitoring of your applications in EB.



1 – Environment Setup

To get started, create a new Application using the following AWS CLI command:

aws elasticbeanstalk create-application --application-name avengers \
--region eu-west-3

Create a new environment. Let’s call it “staging”:

aws elasticbeanstalk create-environment --application-name avengers \
--environment-name staging \
--solution-stack-name "64bit Amazon Linux 2017.09 v2.9.2 running Docker 17.12.0-ce" \
--option-settings file://options.json \
--region eu-west-3
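
The command above references an options.json file that isn’t reproduced here; a minimal example, assuming you just want to set the instance type and the default Elastic Beanstalk instance profile, could look like this:

[
  {
    "Namespace": "aws:autoscaling:launchconfiguration",
    "OptionName": "InstanceType",
    "Value": "t2.micro"
  },
  {
    "Namespace": "aws:autoscaling:launchconfiguration",
    "OptionName": "IamInstanceProfile",
    "Value": "aws-elasticbeanstalk-ec2-role"
  }
]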

Head back to AWS Elastic Beanstalk Console, your new environment should be created:



Point your browser to the environment URL; a sample Docker application should be displayed:



Let’s deploy our application. I wrote a small web application in Go that returns a list of Marvel Avengers (I see you, Thanos 😉):

package main

import (
    "encoding/json"
    "io/ioutil"
    "log"
    "net/http"
)

type Avenger struct {
    Character string `json:"character"`
    Name      string `json:"name"`
}

var avengers []Avenger

func init() {
    data, _ := ioutil.ReadFile("avengers.json")
    json.Unmarshal(data, &avengers)
}

func IndexHandler(w http.ResponseWriter, r *http.Request) {
    response, _ := json.Marshal(avengers)
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(200)
    w.Write(response)
}

func main() {
    http.HandleFunc("/", IndexHandler)
    if err := http.ListenAndServe(":3000", nil); err != nil {
        log.Fatal(err)
    }
}

Next, we will create a Dockerfile to build the Docker image. Go is a compiled language, so we can use Docker’s multi-stage build feature to produce a lightweight image:

FROM golang:1.10 as builder
WORKDIR /go/src/github.com/mlabouardy/docker-eb-ci-mon
COPY main.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

FROM alpine:latest
MAINTAINER mlabouardy
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/mlabouardy/docker-eb-ci-mon/app .
COPY avengers.json .
EXPOSE 3000
CMD ["./app"]

Next, we create a Dockerrun.aws.json file that describes how the container will be deployed in Elastic Beanstalk:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "mlabouardy/avengers",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ]
}

Now that the application is defined, create an application bundle by creating a ZIP package:

zip -r deployment.zip .

Then, create an S3 bucket to store the different versions of your application bundles:

aws s3 mb s3://avengers-docker-eb --region AWS_REGION

And create a new application version from the application bundle:

aws elasticbeanstalk create-application-version --application-name avengers \
--version-label v1 \
--source-bundle S3Bucket="avengers-docker-eb",S3Key="deployment.zip" \
--auto-create-application \
--region AWS_REGION


Finally, deploy the version to the staging environment:

aws elasticbeanstalk update-environment --application-name avengers \
--environment-name staging \
--version-label v1 --region AWS_REGION

Give it a few seconds while it’s deploying the new version:



Then, point your browser to the environment URL again; a list of Avengers will be returned in JSON format:



Now that our Docker application is deployed, let’s automate this process by setting up a CI/CD pipeline.

2 – CI/CD Pipeline

I opted for CircleCI, but you’re free to use whichever CI server you’re familiar with; the same steps apply.

Create a circle.yml file with the following content:

version: 2
jobs:
  build:
    docker:
      - image: circleci/golang:1.10

    working_directory: /go/src/github.com/mlabouardy/docker-eb-ci-mon

    steps:
      - checkout

      - setup_remote_docker

      - run:
          name: Install AWS CLI
          command: |
            sudo apt-get update
            sudo apt-get install -y awscli

      - run:
          name: Test
          command: go test

      - run:
          name: Build
          command: docker build -t mlabouardy/avengers:latest .

      - run:
          name: Push
          command: |
            docker login -u$DOCKERHUB_LOGIN -p$DOCKERHUB_PASSWORD
            docker tag mlabouardy/avengers:latest mlabouardy/avengers:${CIRCLE_SHA1}
            docker push mlabouardy/avengers:latest
            docker push mlabouardy/avengers:${CIRCLE_SHA1}

      - run:
          name: Deploy
          command: |
            zip -r deployment-${CIRCLE_SHA1}.zip .
            aws s3 cp deployment-${CIRCLE_SHA1}.zip s3://avengers-docker-eb --region eu-west-3
            aws elasticbeanstalk create-application-version --application-name avengers \
              --version-label ${CIRCLE_SHA1} --source-bundle S3Bucket="avengers-docker-eb",S3Key="deployment-${CIRCLE_SHA1}.zip" --region eu-west-3
            aws elasticbeanstalk update-environment --application-name avengers \
              --environment-name staging --version-label ${CIRCLE_SHA1} --region eu-west-3

The pipeline first prepares the environment by installing the AWS CLI, then runs the unit tests. Next, a Docker image is built and pushed to Docker Hub. The last step creates a new application bundle and deploys it to Elastic Beanstalk.

In order to grant CircleCI permission to call AWS operations, we need to create a new IAM user with the following IAM policy:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"elasticbeanstalk:*",
"s3:*",
"ec2:*",
"cloudformation:*",
"autoscaling:*",
"elasticloadbalancing:*"
],
"Resource": "*"
}
]
}

Generate AWS access and secret keys for that user. Then head back to CircleCI, open the project settings and paste in the credentials:



Now, every time you push a change to your code repository, a build will be triggered:



And a new version will be deployed automatically to Elastic Beanstalk:



3 – Monitoring

Monitoring your applications is mandatory. Unfortunately, CloudWatch doesn’t expose useful metrics like the memory usage of your applications in Elastic Beanstalk. In this part, we will solve this by collecting our own custom metrics.

I will install a data collector agent on the instance. The agent will collect metrics and push them to a time-series database.

To install the agent, we will use the .ebextensions folder, in which we will create three configuration files:

  • 01-install-telegraf.config: install Telegraf on the instance
container_commands:
  01downloadpackage:
    command: "wget https://dl.influxdata.com/telegraf/releases/telegraf-1.6.0-1.x86_64.rpm -O /tmp/telegraf.rpm"
    ignoreErrors: true
  02installpackage:
    command: "yum localinstall -y /tmp/telegraf.rpm"
    ignoreErrors: true
  03removepackage:
    command: "rm /tmp/telegraf.rpm"
    ignoreErrors: true
  04enablereboot:
    command: "chkconfig telegraf on"
    ignoreErrors: true
  05fixpermission:
    command: "usermod -a -G docker telegraf"
    ignoreErrors: true
  • 02-config-file.config: create a Telegraf configuration file to collect system usage and Docker container metrics.
files:
  "/etc/telegraf/telegraf.conf":
    mode: "000666"
    owner: root
    group: root
    content: |
      [global_tags]
      hostname="Avengers"

      # Read metrics about CPU usage
      [[inputs.cpu]]
      percpu = false
      totalcpu = true
      fieldpass = [ "usage*" ]
      name_suffix = "_vm"

      # Read metrics about disk usage
      [[inputs.disk]]
      fielddrop = [ "inodes*" ]
      mount_points=["/"]
      name_suffix = "_vm"

      # Read metrics about network usage
      [[inputs.net]]
      interfaces = [ "eth0", "eth1" ]
      fielddrop = [ "icmp*", "ip*", "tcp*", "udp*" ]
      name_suffix = "_vm"

      # Read metrics about memory usage
      [[inputs.mem]]
      name_suffix = "_vm"

      # Read metrics about swap memory usage
      [[inputs.swap]]
      name_suffix = "_vm"

      # Read metrics about system load and uptime
      [[inputs.system]]
      name_suffix = "_vm"

      # Read metrics from the Docker socket API
      [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
      container_names = []
      name_suffix = "_docker"

      [[outputs.influxdb]]
      database = "instances"
      urls = ["http://172.31.38.51:8086"]
      namepass = ["*_vm"]

      [[outputs.influxdb]]
      database = "containers"
      urls = ["http://172.31.38.51:8086"]
      namepass = ["*_docker"]
  • 03-start-telegraf.config: start Telegraf agent.
container_commands:
  01starttelegraf:
    command: "service telegraf start"
    ignoreErrors: true

Once the application version is deployed to Elastic Beanstalk, metrics will be pushed to your time-series database. In this example, I used InfluxDB as the data store and created dynamic dashboards in Grafana to visualize the metrics in real time:

Containers:



Hosts:



Note: for an in-depth explanation of how to configure Telegraf, InfluxDB & Grafana, read my previous article.

Full code can be found on my GitHub. Make sure to drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy

Infrastructure Cost Optimization with Lambda

Having multiple environments is important for building a continuous integration/deployment pipeline and being able to reproduce bugs in production with ease, but this comes at a price. To reduce the cost of your AWS infrastructure, instances that run 24/7 unnecessarily (sandbox & staging environments) should be shut down outside of regular business hours.

The figure below describes an automated process to schedule the stopping and starting of instances to help cut costs. The solution is a perfect example of serverless computing.



Note: full code is available on my GitHub.

Two Lambda functions will be created; they will scan all environments looking for a specific tag. The tag we use is named ‘Environment’. Instances without an Environment tag will not be affected:

func getInstances(cfg aws.Config) ([]Instance, error) {
instances := make([]Instance, 0)

svc := ec2.New(cfg)
req := svc.DescribeInstancesRequest(&ec2.DescribeInstancesInput{
Filters: []ec2.Filter{
ec2.Filter{
Name: aws.String("tag:Environment"),
Values: []string{os.Getenv("ENVIRONMENT")},
},
},
})
res, err := req.Send()
if err != nil {
return instances, err
}

for _, reservation := range res.Reservations {
for _, instance := range reservation.Instances {
for _, tag := range instance.Tags {
if *tag.Key == "Name" {
instances = append(instances, Instance{
ID: *instance.InstanceId,
Name: *tag.Value,
})
}
}
}
}

return instances, nil
}

The StartEnvironment function will call the StartInstances API with the list of instance IDs returned by the previous function:

func startInstances(cfg aws.Config, instances []Instance) error {
instanceIds := make([]string, 0, len(instances))
for _, instance := range instances {
instanceIds = append(instanceIds, instance.ID)
}

svc := ec2.New(cfg)
req := svc.StartInstancesRequest(&ec2.StartInstancesInput{
InstanceIds: instanceIds,
})
_, err := req.Send()
if err != nil {
return err
}
return nil
}

Similarly, the StopEnvironment function will call the StopInstances API:

func stopInstances(cfg aws.Config, instances []Instance) error {
instanceIds := make([]string, 0, len(instances))
for _, instance := range instances {
instanceIds = append(instanceIds, instance.ID)
}

svc := ec2.New(cfg)
req := svc.StopInstancesRequest(&ec2.StopInstancesInput{
InstanceIds: instanceIds,
})
_, err := req.Send()
if err != nil {
return err
}
return nil
}

Finally, both functions will post a message to a Slack channel for real-time notifications:

func postToSlack(color string, title string, instances string) error {
message := SlackMessage{
Text: title,
Attachments: []Attachment{
Attachment{
Text: instances,
Color: color,
},
},
}

client := &http.Client{}
data, err := json.Marshal(message)
if err != nil {
return err
}

req, err := http.NewRequest("POST", os.Getenv("SLACK_WEBHOOK"), bytes.NewBuffer(data))
if err != nil {
return err
}

resp, err := client.Do(req)
if err != nil {
return err
}

return nil
}
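
The snippets above are the building blocks; a minimal sketch of a handler wiring them together for StartEnvironment, in the same package as the functions above, might look like the following (it assumes the aws-lambda-go runtime and the same SDK version used above; the StopEnvironment handler is identical except that it calls stopInstances):

package main

import (
    "fmt"
    "os"
    "strings"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go-v2/aws/external"
)

func handler() error {
    cfg, err := external.LoadDefaultAWSConfig()
    if err != nil {
        return err
    }

    // Find all instances tagged with the target environment.
    instances, err := getInstances(cfg)
    if err != nil {
        return err
    }

    if err := startInstances(cfg, instances); err != nil {
        return err
    }

    // Build a readable list of instance names for the Slack notification.
    names := make([]string, 0, len(instances))
    for _, instance := range instances {
        names = append(names, instance.Name)
    }
    return postToSlack("good", fmt.Sprintf("Starting %s instances", os.Getenv("ENVIRONMENT")), strings.Join(names, "\n"))
}

func main() {
    lambda.Start(handler)
}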

Now that our functions are defined, let’s build the deployment packages (ZIP files) using the following Bash script:

#!/bin/bash

echo "Building StartEnvironment binary"
GOOS=linux GOARCH=amd64 go build -o main start/*.go

echo "Creating deployment package"
zip start-environment.zip main
rm main

echo "Building StopEnvironment binary"
GOOS=linux GOARCH=amd64 go build -o main stop/*.go

echo "Creating deployment package"
zip stop-environment.zip main
rm main

The functions require an IAM role to be able to interact with EC2. The StartEnvironment function has to be able to describe and start EC2 instances:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:CreateLogGroup",
"ec2:DescribeInstances",
"ec2:StartInstances"
],
"Resource": [
"*"
]
}
]
}

The StopEnvironment function has to be able to describe and stop EC2 instances:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:CreateLogGroup",
"ec2:DescribeInstances",
"ec2:StopInstances"
],
"Resource": [
"*"
]
}
]
}

Finally, create an IAM role for each function and attach the above policies:

#!/bin/bash

echo "IAM role for StartEnvironment"
arn=$(aws iam create-policy --policy-name StartEnvironment --policy-document file://start/policy.json | jq -r '.Policy.Arn')
result=$(aws iam create-role --role-name StartEnvironmentRole --assume-role-policy-document file://role.json | jq -r '.Role.Arn')
aws iam attach-role-policy --role-name StartEnvironmentRole --policy-arn $arn
echo "ARN: $result"

echo "IAM role for StopEnvironment"
arn=$(aws iam create-policy --policy-name StopEnvironment --policy-document file://stop/policy.json | jq -r '.Policy.Arn')
result=$(aws iam create-role --role-name StopEnvironmentRole --assume-role-policy-document file://role.json | jq -r '.Role.Arn')
aws iam attach-role-policy --role-name StopEnvironmentRole --policy-arn $arn
echo "ARN: $result"

The script will output the ARN for each IAM role:



Before jumping to the deployment part, we need to create a Slack webhook to be able to post messages to the Slack channel:



Next, use the following script to deploy your functions to AWS Lambda (make sure to replace the IAM roles, Slack WebHook token & the target environment):

#!/bin/bash

START_IAM_ROLE="arn:aws:iam::ACCOUNT_ID:role/StartEnvironmentRole"
STOP_IAM_ROLE="arn:aws:iam::ACCOUNT_ID:role/StopEnvironmentRole"
AWS_REGION="us-east-1"
SLACK_WEBHOOK="https://hooks.slack.com/services/TOKEN"
ENVIRONMENT="sandbox"

echo "Deploying StartEnvironment to Lambda"
aws lambda create-function --function-name StartEnvironment \
--zip-file fileb://./start-environment.zip \
--runtime go1.x --handler main \
--role $START_IAM_ROLE \
--environment Variables="{SLACK_WEBHOOK=$SLACK_WEBHOOK,ENVIRONMENT=$ENVIRONMENT}" \
--region $AWS_REGION


echo "Deploying StopEnvironment to Lambda"
aws lambda create-function --function-name StopEnvironment \
--zip-file fileb://./stop-environment.zip \
--runtime go1.x --handler main \
--role $STOP_IAM_ROLE \
--environment Variables="{SLACK_WEBHOOK=$SLACK_WEBHOOK,ENVIRONMENT=$ENVIRONMENT}" \
--region $AWS_REGION \


rm *-environment.zip

Once deployed, sign in to the AWS Management Console and navigate to the Lambda Console; you should see that both functions have been deployed successfully:

StartEnvironment:



StopEnvironment:



To automate invoking the Lambda functions at the right time, AWS CloudWatch Scheduled Events will be used.

Create a new CloudWatch rule with the cron expression below (it will be invoked every day at 9 AM):



And another rule to stop the environment at 6 PM:



Note: all times are GMT.
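
If you prefer the AWS CLI over the console, the two rules can be created along these lines (the account ID and region are placeholders, and the add-permission calls grant CloudWatch Events the right to invoke each function):

# Start the environment every day at 9 AM GMT
aws events put-rule --name StartEnvironment --schedule-expression "cron(0 9 * * ? *)"
aws lambda add-permission --function-name StartEnvironment --statement-id start-rule \
    --action lambda:InvokeFunction --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:ACCOUNT_ID:rule/StartEnvironment
aws events put-targets --rule StartEnvironment \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:ACCOUNT_ID:function:StartEnvironment"

# Stop the environment every day at 6 PM GMT
aws events put-rule --name StopEnvironment --schedule-expression "cron(0 18 * * ? *)"
aws lambda add-permission --function-name StopEnvironment --statement-id stop-rule \
    --action lambda:InvokeFunction --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:ACCOUNT_ID:rule/StopEnvironment
aws events put-targets --rule StopEnvironment \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:ACCOUNT_ID:function:StopEnvironment"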

Testing:

a – Stop Environment



Result:



b – Start Environment



Result:



The solution is easy to deploy and can help reduce operational costs.

Full code can be found on my GitHub. Make sure to drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Build a Serverless Production-Ready Blog

Are you tired of maintaining your CMS (WordPress, Drupal, etc.)? Of paying expensive hosting fees? Of fixing security issues every day?



Not long ago I discovered a new blogging framework called Hexo, which lets you publish Markdown documents as blog posts. So, as always, I got my hands dirty and wrote this post to show you how to build a production-ready blog with Hexo and use AWS S3 to make your blog serverless and pay only for what you use. Along the way, I will show you how to automate the deployment of new posts by setting up a CI/CD pipeline.

To get started, Hexo requires Node.js & Git to be installed. Once these requirements are installed, issue the following command to install the Hexo CLI:

npm install -g hexo-cli

Next, create a new empty project:

hexo init slowcoder.com

Modify the blog’s global settings in the _config.yml file:

# Site
title: SlowCoder
subtitle: DevOps News and Tutorials
description: DevOps, Cloud, Serverless, Containers news and tutorials for everyone
keywords: programming,devops,cloud,go,mobile,serverless,docker
author: Mohamed Labouardy
language: en
timezone: Europe/Paris

Start a local server with “hexo server“. By default, this is at http://localhost:4000. You’ll see Hexo’s pre-defined “Hello World” test post:



If you want to change the default theme, you just need to go here and find a new one you prefer.

I opted for the Magnetic theme as it includes many features:

  • Disqus and Facebook comments
  • Google Analytics
  • Cover image for posts and pages
  • Tags Support
  • Responsive Images
  • Image Gallery
  • Social Accounts configuration
  • Pagination

Clone the theme GitHub repository as below:

git clone https://github.com/klugjo/hexo-theme-magnetic themes/magnetic

Then update your blog’s main _config.yml to set the theme to magnetic. Once done, restart the server:



Now you are almost done with your blog setup. It is time to write your first article. To generate a new article file, use the following command:

hexo new POST_TITLE
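
The generated Markdown file starts with a YAML front-matter block that you can edit before writing the post body; a typical example (the title, date and tags below are placeholders) looks like this:

---
title: My First Post
date: 2018-04-01 10:00:00
tags:
  - aws
  - devops
---

Write the article body here in Markdown.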


Now, sign in to the AWS Management Console, navigate to the S3 dashboard and create an S3 bucket, or use the AWS CLI to create one:

aws s3 mb s3://slowcoder.com

Add the following policy to the S3 bucket to make all objects public by default:

{
"Id": "Policy1522074684919",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1522074683215",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::slowcoder.com/*",
"Principal": "*"
}
]
}

Next, enable static website hosting on the S3 bucket:

aws s3 website s3://slowcoder.com --index-document index.html

To automate deploying the blog to S3 each time a new article is published, we will set up a CI/CD pipeline using CircleCI.

Sign in to CircleCI using your GitHub account, then add the circle.yml file to your project:

version: 2
jobs:
  build:
    docker:
      - image: node:9.9.0
    working_directory: ~/slowcoder.com
    steps:
      - checkout
      - run:
          name: Install Hexo CLI
          command: npm install -g hexo-cli
      - restore_cache:
          keys:
            - npm-deps-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: npm-deps-{{ checksum "package.json" }}
          paths:
            - node_modules
      - run:
          name: Generate static website
          command: hexo generate
      - run:
          name: Install AWS CLI
          command: |
            apt-get update
            apt-get install -y awscli
      - run:
          name: Push to S3 bucket
          command: cd public/ && aws s3 sync . s3://slowcoder.com

Note: make sure to set the AWS Access Key ID and Secret Access Key in your project’s Settings page on CircleCI (the IAM user needs at least the s3:PutObject permission).

Now every time you push changes to your GitHub repo, CircleCI will automatically deploy the changes to S3. Here’s a passing build:



Finally, to make our blog user-friendly, we will set up a custom domain name in Route53 as below:
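
If you prefer the AWS CLI over the console, an alias A record pointing the apex domain at the S3 website endpoint can be created roughly as follows; the hosted zone ID, region and the S3 website endpoint’s hosted zone ID are placeholders you need to fill in (note that the bucket name must match the domain for S3 website hosting):

aws route53 change-resource-record-sets --hosted-zone-id YOUR_HOSTED_ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "slowcoder.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "S3_WEBSITE_HOSTED_ZONE_ID",
        "DNSName": "s3-website.REGION.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'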



Note: You can go further and setup a CloudFront Distribution in front of the S3 bucket to optimize delivery of blog assets.

You can now test your brand new blog by typing the following address: http://slowcoder.com:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Add new users to EC2 and give SSH Key access

In this quick post, I will show you how to add a new user to an EC2 instance and SSH with your own private key rather than having to authenticate using the private key generated by AWS.



Connect via SSH into your instance using its public IP:



Next, create a new user using the following command:

sudo adduser labouardy


Next, we switch the shell session to the new account:

sudo su labouardy

Create a .ssh directory and change its permissions to 700 (only the file owner can read, write or open the directory):

mkdir .ssh
chmod 700 .ssh

Note: ensure you are in the new user’s home directory (example: /home/labouardy)

Create an empty file called authorized_keys in the .ssh directory and change its permissions to 600 (only the file owner can read or write to the file):

touch authorized_keys
chmod 600 authorized_keys


Finally, edit the authorized_keys file and paste in your public key:
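For example, assuming your public key is the usual one-line ssh-rsa string, you can append it like this (the key below is a placeholder):

echo "ssh-rsa AAAAB3Nza...your-public-key... you@laptop" >> ~/.ssh/authorized_keys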



Once you’ve done this, exit back to your own machine, then try to SSH in using the new user account and the key you’ve created:



We are now logged in as the user labouardy 😄

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Attach an IAM Role to an EC2 Instance with CloudFormation

CloudFormation allows you to manage your AWS infrastructure by defining it in code.

In this post, I will show you guys how to create an EC2 instance and attach an IAM role to it so you can access your S3 buckets.

First, you’ll need a template that specifies the resources that you want in your stack. For this step, you use a sample template that I already prepared:

{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Attach IAM Role to an EC2",
"Parameters" : {
"KeyName" : {
"Description" : "EC2 Instance SSH Key",
"Type" : "AWS::EC2::KeyPair::KeyName"
},
"InstanceType" : {
"Description" : "EC2 instance specs configuration",
"Type" : "String",
"Default" : "t2.micro",
"AllowedValues" : ["t2.micro", "t2.small", "t2.medium"]
}
},
"Mappings" : {
"AMIs" : {
"us-east-1" : {
"Name" : "ami-8c1be5f6"
},
"us-east-2" : {
"Name" : "ami-c5062ba0"
},
"eu-west-1" : {
"Name" : "ami-acd005d5"
},
"ap-southeast-2" : {
"Name" : "ami-8536d6e7"
}
}
},
"Resources" : {
"Test" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"InstanceType" : {
"Ref" : "InstanceType"
},
"ImageId" : {
"Fn::FindInMap" : [
"AMIs",
{
"Ref" : "AWS::Region"
},
"Name"
]
},
"KeyName" : {
"Ref" : "KeyName"
},
"IamInstanceProfile" : {
"Ref" : "ListS3BucketsInstanceProfile"
},
"SecurityGroupIds" : [
{
"Ref" : "SSHAccessSG"
}
],
"Tags" : [
{
"Key" : "Name",
"Value" : "Test"
}
]
}
},
"SSHAccessSG" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Allow SSH access from anywhere",
"SecurityGroupIngress" : [
{
"FromPort" : "22",
"ToPort" : "22",
"IpProtocol" : "tcp",
"CidrIp" : "0.0.0.0/0"
}
],
"Tags" : [
{
"Key" : "Name",
"Value" : "SSHAccessSG"
}
]
}
},
"ListS3BucketsInstanceProfile" : {
"Type" : "AWS::IAM::InstanceProfile",
"Properties" : {
"Path" : "/",
"Roles" : [
{
"Ref" : "ListS3BucketsRole"
}
]
}
},
"ListS3BucketsPolicy" : {
"Type" : "AWS::IAM::Policy",
"Properties" : {
"PolicyName" : "ListS3BucketsPolicy",
"PolicyDocument" : {
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"s3:List*"
],
"Resource" : "*"
}
]
},
"Roles" : [
{
"Ref" : "ListS3BucketsRole"
}
]
}
},
"ListS3BucketsRole" : {
"Type" : "AWS::IAM::Role",
"Properties" : {
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Principal" : {
"Service" : ["ec2.amazonaws.com"]
},
"Action" : [
"sts:AssumeRole"
]
}
]
},
"Path" : "/"
}
}
},
"Outputs" : {
"EC2" : {
"Description" : "EC2 IP address",
"Value" : {
"Fn::Join" : [
"",
[
"ssh ec2-user@",
{
"Fn::GetAtt" : [
"Test",
"PublicIp"
]
},
" -i ",
{
"Ref" : "KeyName"
},
".pem"
]
]
}
}
}
}

The template creates a basic EC2 instance that uses an IAM Role with S3 List Policy. It also creates a security group which allows SSH access from anywhere.

Note: I also used the Parameters section to declare values that can be passed to the template when you create the stack.

Now that the template is defined, sign in to the AWS Management Console, navigate to CloudFormation, and click on “Create Stack“. Upload the JSON file:



You will be asked to assign a name to the stack and choose your EC2 instance configuration & SSH key pair:



Make sure to check the box “I acknowledge that AWS CloudFormation might create IAM resources” so that the IAM policy & role can be created:
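
If you prefer the AWS CLI, the same stack can be created with the create-stack command; the --capabilities CAPABILITY_IAM flag is the CLI equivalent of that checkbox (the stack name, template file name and key pair name below are placeholders):

aws cloudformation create-stack --stack-name ec2-iam-role \
    --template-body file://template.json \
    --parameters ParameterKey=KeyName,ParameterValue=my-key ParameterKey=InstanceType,ParameterValue=t2.micro \
    --capabilities CAPABILITY_IAM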



Once launched, you will get the following screen with launching process events:



After a while, you will get the CREATE_COMPLETE message in the status tab:



Once done, on the Outputs tab, you should see how to connect to your instance via SSH:



If you run the command shown in the Outputs tab from your terminal, you should be able to connect to the server via SSH:

ssh ec2-user@52.91.239.135 -i key.pem


Let’s check if we can list the S3 buckets using the AWS CLI:

aws s3 ls


Awesome! So we are able to list the buckets, but what if we want to create a new bucket?



It didn’t work, which is expected because the IAM role attached to the instance doesn’t have sufficient permissions (the s3:CreateBucket action).

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Highly Available Bastion Hosts with Route53

Instances in a private subnet don’t have a public IP address, and without a VPN or a Direct Connect connection, a bastion host (jump box) is the expected mechanism for reaching your servers. Therefore, we should make it highly available.

In this quick post, I will show you how to set up highly available bastion hosts with the following targets:

  • Bastion hosts will be deployed in two Availability Zones to support immediate access across the VPC & withstand an AZ failure.
  • Elastic IP addresses are associated with the bastion instances to make sure the same trusted Elastic IPs are used at all times.
  • Bastion Hosts will be reachable via a permanent DNS entry configured with Route53.


To easily set up the infrastructure described above, I used Terraform:

git clone https://github.com/mlabouardy/terraform-aws-labs
cd bastion-highavailability

Note: I did a tutorial on how to set up a VPC with Terraform, so make sure to read it for more details.

Update the variables.tfvars file with your SSH key pair name and an existing hosted zone ID.
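
I haven’t reproduced the repository’s exact variable names here, so treat the following variables.tfvars as an illustrative sketch (the real names may differ in the repo):

key_name       = "vpc"
hosted_zone_id = "ZXXXXXXXXXXXXXX"

Then, issue the following command: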

terraform apply -var-file=variables.tfvars

That will bring up the VPC and all the necessary resources:



Now in your AWS Management Console you should see the resources created:

EC2 Instances:



DNS Record:



Finally, create an SSH tunnel using the DNS record to your private instance:

ssh -f ec2-user@bastion.slowcoder.com -i /d/aws/vpc.pem -L 2800:10.0.3.218:22 -N

Once done, you should be able to access your private instances via SSH:

ssh ec2-user@localhost -p 2800 -i /d/aws/vpc.pem


Want to take it further? Instead of defining a fixed number of bastion hosts, we could run the bastion host inside an Auto Scaling group with the minimum size set to 1.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Running Docker on AWS EC2

In this quick tutorial, I will show you how to install Docker 🐋 on AWS EC2 instance and run your first Docker container.

1 – Setup EC2 instance

I already did a tutorial on how to create an EC2 instance, so I won’t repeat it. There are a few ways you’ll want to differ from that tutorial:

We select “Amazon Linux AMI 2017.03.1 (HVM), SSD Volume Type” as the AMI. The exact version may change over time.
We configure the security groups as below. This setting allows HTTP access on port 80 from anywhere, as well as SSH access.



Go ahead and launch the instance; it will take a couple of minutes:



2 – Install Docker

Once your instance is ready to use, connect to the server via SSH using its public DNS name and your key pair:



Once connected, use the yum package manager to install Docker by typing the following commands:

sudo yum update -y
sudo yum install -y docker

Next, start the docker service:
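On Amazon Linux this is done with the service command:

sudo service docker start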



In order to use the docker command without root privileges (sudo), we need to add ec2-user to the docker group:

sudo usermod -aG docker ec2-user

To verify that docker is correctly installed, just type:
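For example (note that you may need to log out and back in first so the docker group membership takes effect):

docker --version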



As you can see, the latest version of Docker has been installed (v17.03.1-ce).

Congratulations! 💫 🎉 You now have an EC2 instance with Docker installed.

3 – Deploy Docker Container

It’s time to run your first container 😁. We will create an nginx container with this command:
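The exact command isn’t reproduced here; a typical invocation that publishes the container’s port 80 on the host would be:

docker run -d -p 80:80 --name nginx nginx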



If we run the list command “docker ps”, we can see that an nginx container has been created from the official nginx image.



Finally, if you visit your instance’s public DNS name in your browser, you should see something like this:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.
