How to play PUBG on AWS

AWS GPU instances are best known for deep learning workloads, but they can also run video games. This tutorial walks through how to set up your own GPU-optimised EC2 instance to run the top-selling and most-played game, “PlayerUnknown’s Battlegrounds” (PUBG).

To get started, make sure you are in the AWS region closest to you, select Microsoft Windows Server as the AMI, and set the instance type to g2.2xlarge. The instance is backed by an NVIDIA GRID GPU (Kepler GK104), 8 hardware hyper-threads from an Intel Xeon E5-2670, and 15 GB of RAM.



For resource-intensive games, you should use the next generation of GPU instances: P2, P3, and G3 (with up to 4 NVIDIA Tesla M60 GPUs).

After this is done, click on “Launch Instances”, and you should see a screen showing that your instance is being created:



To connect to your Windows instance, you must retrieve the initial administrator password and specify this password when you connect to your instance using Remote Desktop:

Before you attempt to log in using Remote Desktop Connection, you must open port 3389 on the security group attached to your instance.
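
If you prefer the command line, here is a minimal sketch of the same rule with the AWS CLI (the security group ID below is a placeholder for your own):

# Hypothetical security group ID: replace with the group attached to your instance
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3389 \
  --cidr 0.0.0.0/0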



After you connect, install Chrome first (it saves a lot of time), then install Microsoft DirectX 11:



Next, install the graphics driver for maximum gaming performance:



Once installed, make sure to reboot the instance for the changes to take effect:



Then, install Steam, log in with your account, and install PUBG from the “Library” section:



You can take advantage of AWS’s high network performance (up to 10 Gbps of bandwidth):



Once the game is installed, you can play PUBG on your virtualized GPU instance:



You can take this further and use the Steam In-Home Streaming feature to stream your game from your EC2 instance to your Mac:



Enjoy the game! You can now play your games on any device connected to the same network:



You might want to bake an AMI from your instance to avoid setting it all up again the next time you want to play, and use Spot Instances to reduce the instance cost. Also, make sure to stop your instances when you’re done for the day to avoid incurring charges. GPU instances are costly (disk storage also costs something and can be significant if you have a large disk footprint).
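
For example, baking the AMI and stopping the instance from the AWS CLI might look like this (a sketch; the instance ID and image name are placeholders):

# Hypothetical instance ID and image name
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "pubg-gaming-rig"
aws ec2 stop-instances --instance-ids i-0123456789abcdef0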

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Episode 3: Deploy a Highly Available Jenkins Cluster on AWS

This is the third video in a 10-part series by Mohamed Labouardy, showing how to build a simple DevOps pipeline, including built-in security at each stage. To follow the series, please join the community to receive a notification as each episode is released.



Episode 2: Build an AWS VPC using Infrastructure as Code

This is the second video in a 10-part series by Mohamed Labouardy, showing how to build a simple DevOps pipeline, including built-in security at each stage. To follow the series, please join the community to receive a notification as each episode is released.



Episode 1: Core Concepts

This is the first video in a 10-part series by Mohamed Labouardy, showing how to build a simple DevOps pipeline, including built-in security at each stage. To follow the series, please join the community to receive a notification as each episode is released.



Highly Available Docker Registry on AWS with Nexus

Have you ever wondered how you can build a highly available & resilient Docker repository to store your Docker images?



In this post, we will set up an EC2 instance inside a Security Group and create an A record pointing to the server’s Elastic IP address, as follows:



To provision the infrastructure, we will use Terraform as an IaC (Infrastructure as Code) tool. The advantage of using this kind of tool is the ability to quickly spin up a new environment in a different AWS region (or on a different IaaS provider) in case of an incident (disaster recovery).

Start by cloning the following GitHub repository:

git clone https://github.com/mlabouardy/terraform-aws-labs.git

Inside the docker-registry folder, update variables.tfvars with your own AWS credentials (make sure you have the right IAM policies).

resource "aws_instance" "default" {
  ami             = "${lookup(var.amis, var.region)}"
  instance_type   = "${var.instance_type}"
  key_name        = "${aws_key_pair.default.id}"
  security_groups = ["${aws_security_group.default.name}"]

  user_data = "${file("setup.sh")}"

  tags {
    Name = "registry"
  }
}

I specified a shell script to be used as user_data when launching the instance. It simply installs the latest version of Docker CE and puts the instance into Docker Swarm mode (to benefit from the replication and high availability of the Nexus container).

#!/bin/sh
yum update -y
yum install -y docker
service docker start
usermod -aG docker ec2-user
docker swarm init
docker service create --replicas 1 --name registry --publish 5000:5000 --publish 8081:8081 sonatype/nexus3:3.6.2

Note: You could also use a configuration management tool like Ansible or Chef to provision the server once it’s created.

Then, issue the following command to create the infrastructure:

terraform apply -var-file=variables.tfvars

Once created, you should see the Elastic IP of your instance:



Connect to your instance via SSH:

ssh ec2-user@35.177.167.36

Verify that the Docker Engine is running in Swarm Mode:
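
One quick way to check from the shell (a sketch; this should print "active" when the node is part of a swarm):

docker info --format '{{.Swarm.LocalNodeState}}'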



Check if Nexus service is running:
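
For example (the service name "registry" comes from the setup script above):

docker service ls
docker service ps registry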



If you go back to the AWS Management Console and navigate to the Route 53 dashboard, you should see that a new A record has been created, pointing to the instance IP address.



Point your favorite browser to the Nexus dashboard URL (registry.slowcoder.com:8081). Log in and create a Docker hosted registry as below:



Edit the /etc/docker/daemon.json file so that it has the following content:

{
  "insecure-registries" : ["registry.slowcoder.com:5000"]
}

Note: For production, it’s highly recommended to secure your registry with a TLS certificate issued by a known CA.

Restart Docker for the changes to take effect:

service docker restart

Log in to your registry with the Nexus credentials (admin/admin123):
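
A sketch of the login step, with the registry host and credentials as configured above:

docker login -u admin -p admin123 registry.slowcoder.com:5000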



In order to push a new image to the registry:

docker push registry.slowcoder.com:5000/mlabouardy/movies-api:1.0.0-beta
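
Note that a local image must first be tagged with the registry host before it can be pushed; a hypothetical example:

docker tag movies-api:1.0.0-beta registry.slowcoder.com:5000/mlabouardy/movies-api:1.0.0-beta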


Verify that the image has been pushed to the remote repository:



To pull the Docker image:

docker pull registry.slowcoder.com:5000/mlabouardy/movies-api:1.0.0-beta


Note: Sometimes you end up with many unused & dangling images that can quickly take up a significant amount of disk space:



You can either use the Nexus CLI tool or create a Nexus task to clean up old Docker images:



Populate the form as below:



The task above will run every day at midnight to purge unused Docker images from the “mlabouardy” registry.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Build a CI/CD pipeline for Dockerized Microservices and Serverless Functions in AWS

Learn DevSecOps best practices with free hands-on labs and workshops. My workshop will focus on a range of subjects, from building a CI/CD pipeline in AWS for Dockerized microservices and serverless functions to monitoring and logging. Join for the rollout this Wednesday, January 16, with new sessions added each week for the next 10 weeks.



There is no cost to participate in the workshops other than your contact information.


Join Here

Hosting a Free Static Website on Google Cloud Storage

This guide walks you through setting up a free bucket to serve a static website through a custom domain name using Google Cloud Platform services.

Sign in to Google Cloud Platform, navigate to Cloud DNS service and create a new public DNS zone:



By default, it will have an NS (nameserver) record and an SOA (start of authority) record:



Go to your domain registrar; in my case, I purchased a domain name from GoDaddy (super cheap). Add the nameservers that were listed in your NS record:



PS: It can take some time for the changes on GoDaddy to propagate through to Google Cloud DNS.

Next, verify that you own the domain name using the Google Search Console. Many methods are available (HTML meta tag, Google Analytics, etc.). The easiest one is DNS verification through a TXT record:



Add the TXT record to your DNS zone created earlier:



DNS changes might take some time to propagate:



Once you have verified the domain, you can create a bucket with Cloud Storage under the verified domain name. The storage class should be “Multi-Regional” (a geo-redundant bucket, in case of an outage):



Copy the website static files to the bucket using the following command:

gsutil rsync -R . gs://www.serverlessmovies.com/

After the upload completes, your static files should be available on the bucket as follows:



Next, make the files publicly accessible by adding the allUsers entity with the Object Viewer role to the bucket permissions:
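
The same can be done from the command line with gsutil (a sketch, assuming the bucket name used above):

gsutil iam ch allUsers:objectViewer gs://www.serverlessmovies.com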



Once shared publicly, a link icon appears for each object in the public access column. You can click on this icon to get the URL for the object:



Verify that content is served from the bucket by requesting the index.html link in your browser:



Next, set the main page to be index.html from the “Edit website configuration” section:



Now, we need to map our domain name to the bucket we created earlier. Create a CNAME record that points to c.storage.googleapis.com:
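
With the gcloud CLI, that might look like the following sketch (the zone name "serverlessmovies" is a placeholder for the DNS zone created earlier):

gcloud dns record-sets transaction start --zone=serverlessmovies
gcloud dns record-sets transaction add c.storage.googleapis.com. \
  --name=www.serverlessmovies.com. --ttl=300 --type=CNAME --zone=serverlessmovies
gcloud dns record-sets transaction execute --zone=serverlessmovies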



Point your browser to your domain name, your website should be served:



While our solution works like a charm, we can access our content through HTTP only (Google Cloud Storage only supports HTTP when using it through a CNAME record). In the next post, we will serve our content through a custom domain over SSL using a Content Delivery Network (CDN).

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Deploy Private Docker Registry on GCP with Nexus, Terraform and Packer

In this post, I will walk you through how to deploy Sonatype Nexus OSS 3 on Google Cloud Platform and how to create a private Docker hosted repository to store your Docker images and other build artifacts (Maven, npm, PyPI, etc.). To achieve this, we need to bake our machine image using Packer to create a gold image with Nexus preinstalled and configured. Terraform will then be used to deploy a Google Compute instance based on the baked image. The following schema describes the build workflow:



PS: All the templates used in this tutorial can be found on my GitHub.

To get started, we need to create the machine image to be used with Google Compute Engine (GCE). Packer will create a temporary instance based on the CentOS image and use a shell script to provision the instance:

{
  "variables" : {
    "zone" : "YOUR ZONE",
    "project" : "YOUR PROJECT ID",
    "source_image" : "centos-7-v20181210",
    "ssh_username" : "packer",
    "credentials_path" : "PATH/account.json"
  },
  "builders" : [
    {
      "type": "googlecompute",
      "account_file": "{{user `credentials_path`}}",
      "project_id": "{{user `project`}}",
      "source_image": "{{user `source_image`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "zone": "{{user `zone`}}",
      "image_name" : "nexus-v3-14-0-04"
    }
  ],
  "provisioners" : [
    {
      "type" : "file",
      "source" : "./nexus.rc",
      "destination" : "/tmp/nexus.rc"
    },
    {
      "type" : "file",
      "source" : "./repository.json",
      "destination" : "/tmp/repository.json"
    },
    {
      "type" : "shell",
      "script" : "./setup.sh",
      "execute_command" : "sudo -E -S sh '{{ .Path }}'"
    }
  ]
}

The shell script will install the latest stable version of Nexus OSS, following the official documentation, wait for the service to be up and running, and then use the Scripting API to post a Groovy script:

#!/bin/bash

NEXUS_USERNAME="admin"
NEXUS_PASSWORD="admin123"

echo "Install Java JDK 8"
yum update -y
yum install -y java-1.8.0-openjdk wget

echo "Install Nexus OSS"
mkdir /opt/nexus
cd /opt/nexus
wget https://download.sonatype.com/nexus/3/latest-unix.tar.gz
tar -xvf latest-unix.tar.gz
rm latest-unix.tar.gz
mv nexus-3.14.0-04 nexus
useradd nexus
chown -R nexus:nexus /opt/nexus/
ln -s /opt/nexus/nexus/bin/nexus /etc/init.d/nexus
cd /etc/init.d
chkconfig --add nexus
chkconfig --levels 345 nexus on
mv /tmp/nexus.rc /opt/nexus/nexus/bin/nexus.rc
service nexus restart

# Wait until Nexus is up and responding on port 8081
until curl --output /dev/null --silent --head --fail http://localhost:8081; do
  printf '.'
  sleep 2
done

echo "Upload Groovy Script"
curl -v -X POST -u $NEXUS_USERNAME:$NEXUS_PASSWORD --header "Content-Type: application/json" 'http://localhost:8081/service/rest/v1/script' -d @/tmp/repository.json

echo "Execute it"
curl -v -X POST -u $NEXUS_USERNAME:$NEXUS_PASSWORD --header "Content-Type: text/plain" 'http://localhost:8081/service/rest/v1/script/docker-repository/run'

The script will create a Docker private registry listening on port 5000:

import org.sonatype.nexus.blobstore.api.BlobStoreManager;
import org.sonatype.nexus.repository.storage.WritePolicy;

repository.createDockerHosted('mlabouardy', 5000, 443, BlobStoreManager.DEFAULT_BLOBSTORE_NAME, true, true, WritePolicy.ALLOW)

Once the template files are defined, issue the packer build command to bake our machine image:
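
A sketch, assuming the template above is saved as nexus.json:

packer build nexus.json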



If you head back to the Images section of the Compute Engine dashboard, a new image (named nexus-v3-14-0-04 in the template above) should be created:



Now we are ready to deploy Nexus. We will create a Nexus server based on the machine image we baked with Packer. The template file is self-explanatory: it creates a set of firewall rules to allow inbound traffic on ports 8081 (Nexus GUI) and 22 (SSH) from anywhere, and a Google Compute instance based on the Nexus image:

provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.project}"
  region      = "${var.region}"
}

resource "google_compute_firewall" "nexus" {
  name    = "nexus-firewall"
  network = "${google_compute_network.nexus.name}"

  allow {
    protocol = "tcp"
    ports    = ["22", "8081"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "nexus" {
  name = "nexus-network"
}

resource "google_compute_instance" "nexus" {
  name         = "nexus"
  machine_type = "${var.instance_type}"
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "${var.image_name}"
      size  = 100
    }
  }

  metadata {
    sshKeys = "${var.ssh_user}:${file(var.ssh_pub_key_file)}"
  }

  network_interface {
    network       = "${google_compute_network.nexus.name}"
    access_config = {}
  }
}

On the terminal, run the terraform init command to download and install the Google provider, as shown below:



Create an execution plan (dry run) with the terraform plan command. It shows you in advance what will be created, which is good for debugging and ensuring that you’re not doing anything wrong, as shown in the next screenshot:



When you’re ready, go ahead and apply the changes by issuing terraform apply:



Terraform will create the needed resources and display the public IP address of the Nexus instance in the output section. Jump back to the GCP Console; your Nexus instance should be created:



If you point your favorite browser to http://instance_ip:8081, you should see the Sonatype Nexus Repository Manager interface:



Click the “Sign in” button in the upper right corner and use the username “admin” and the password “admin123”. Then, click on the cogwheel to go to the server administration and configuration section. Navigate to “Repositories”; our private Docker repository should have been created, as shown below:



The Docker repository is published as expected on port 5000:



Hence, we need to allow inbound traffic on that port, so update the firewall rules accordingly:

resource "google_compute_firewall" "nexus" {
  name    = "nexus-firewall"
  network = "${google_compute_network.nexus.name}"

  allow {
    protocol = "tcp"
    ports    = ["22", "8081", "5000"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "nexus" {
  name = "nexus-network"
}

Issue the terraform apply command to apply the changes:



Your private Docker registry is ready at instance_ip:5000. Let’s test it by pushing a Docker image.

Since we have exposed the private Docker registry on a plain HTTP endpoint, we need to configure the Docker daemon that will act as a client to the private Docker registry to allow insecure connections.



  • On Windows or Mac OS X: Click on the Docker icon in the tray to open Preferences. Click on the Daemon tab and add the IP address on which the Nexus GUI is exposed, along with the port number 5000, in the Insecure registries section. Don’t forget to Apply & Restart for the changes to take effect, and you’re ready to go.
  • Other OS: Follow the official guide (see the sketch after this list).
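
On Linux, for instance, this amounts to editing /etc/docker/daemon.json, as we did earlier (a sketch; instance_ip is a placeholder):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries" : ["instance_ip:5000"]
}
EOF
sudo service docker restart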

You should now be able to log in to your private Docker registry using the following command:
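
A sketch (you will be prompted for the Nexus credentials, admin/admin123 by default):

docker login instance_ip:5000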



And push your Docker images to the registry with the docker push command:
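
For a hypothetical local image, tag it with the registry host and push:

docker tag movies-api:latest instance_ip:5000/mlabouardy/movies-api:latest
docker push instance_ip:5000/mlabouardy/movies-api:latest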



If you head back to the Nexus dashboard, your Docker image should be stored with the latest tag:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Deploy a Docker Swarm cluster on GCP using Terraform in 8 steps

Kubernetes might be the ultimate choice when deploying heavy workloads on Google Cloud Platform. However, Docker Swarm has always been quite popular among developers who prefer fast deployments and simplicity — and among ops who are learning to get comfortable with an orchestrated environment.

In this post, we will walk through how to deploy a Docker Swarm cluster on GCP using Terraform from scratch. Let’s do it!



All the templates and playbooks used in this tutorial can be found on my GitHub.

Get Started

To get started, sign in to your Google Cloud Platform console and create a service account private key from IAM:



Download the JSON file and store it in a secure folder.

For simplicity, I have divided my Swarm cluster components into multiple template files — each file is responsible for creating a specific Google Compute resource.

1. Set up your Swarm managers

In this example, I have defined the Docker Swarm managers based on the CoreOS image:

resource "google_compute_instance" "managers" {
  count        = "${var.swarm_managers}"
  name         = "manager"
  machine_type = "${var.swarm_managers_instance_type}"
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "${var.image_name}"
      size  = 100
    }
  }

  metadata {
    sshKeys = "${var.ssh_user}:${file(var.ssh_pub_key_file)}"
  }

  network_interface {
    network       = "${google_compute_network.swarm.name}"
    access_config = {}
  }
}

2. Set up your Swarm workers

Similarly, I have defined a set of Swarm workers based on the CoreOS image, and used the resource dependencies feature of Terraform to ensure the Swarm managers are deployed first. Please note the usage of the depends_on keyword:

resource "google_compute_instance" "workers" {
  count        = "${var.swarm_workers}"
  name         = "worker${count.index + 1}"
  machine_type = "${var.swarm_workers_instance_type}"
  zone         = "${var.zone}"

  depends_on = ["google_compute_instance.managers"]

  boot_disk {
    initialize_params {
      image = "${var.image_name}"
      size  = 100
    }
  }

  metadata {
    sshKeys = "${var.ssh_user}:${file(var.ssh_pub_key_file)}"
  }

  network_interface {
    network       = "${google_compute_network.swarm.name}"
    access_config = {}
  }
}

3. Define your network rules

Also, I have defined a network interface with a list of firewall rules that allow inbound traffic for cluster management, Raft sync communications, Docker overlay network traffic, and SSH from anywhere:

resource "google_compute_firewall" "swarm" {
  name    = "swarm-firewall"
  network = "${google_compute_network.swarm.name}"

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22", "2377", "7946"]
  }

  allow {
    protocol = "udp"
    ports    = ["7946", "4789"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "swarm" {
  name = "swarm-network"
}

4. Automate your inventory with Terraform

To take automation to the next level, let’s use the Terraform template_file data source to generate a dynamic Ansible inventory from the Terraform state file:

data "template_file" "inventory" {
  template = "${file("templates/inventory.tpl")}"

  depends_on = [
    "google_compute_instance.managers",
    "google_compute_instance.workers",
  ]

  vars {
    managers = "${join("\n", google_compute_instance.managers.*.network_interface.0.access_config.0.nat_ip)}"
    workers  = "${join("\n", google_compute_instance.workers.*.network_interface.0.access_config.0.nat_ip)}"
  }
}

resource "null_resource" "cmd" {
  triggers {
    template_rendered = "${data.template_file.inventory.rendered}"
  }

  provisioner "local-exec" {
    command = "echo '${data.template_file.inventory.rendered}' > ../ansible/inventory"
  }
}

The template file has the following format; the placeholders will be replaced by the Swarm managers’ and workers’ IP addresses at runtime:

[managers]
${managers}

[workers]
${workers}

Finally, let’s define Google Cloud to be the default provider:

provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.project}"
  region      = "${var.region}"
}

5. Set up Ansible roles to provision instances

Once the templates are defined, we will use Ansible to provision our instances and turn them into a Swarm cluster. Hence, I created 3 Ansible roles:

  • python: as its name implies, it will install Python on the machine. CoreOS ships only with the basics; it’s a minimal Linux distribution without much beyond tools centered around running containers.
  • swarm-init: execute the docker swarm init command on the first manager and store the swarm join tokens.
  • swarm-join: join the node to the cluster using the token generated previously.

By now, your main playbook will look something like:

---
- name: Install Python
  hosts: managers:workers
  gather_facts: False
  roles:
    - python

- name: Init Swarm cluster
  hosts: managers
  gather_facts: False
  roles:
    - swarm-init

- name: Join Swarm cluster
  hosts: workers
  gather_facts: False
  vars:
    token: ""
    manager: ""
  roles:
    - swarm-join

6. Test your configuration

To test it out, open a new terminal session and issue the terraform init command to download the Google provider:



Create an execution plan (dry run) with the terraform plan command. It shows you in advance what will be created, which is good for debugging and ensuring that you’re not doing anything wrong, as shown in the next screenshot:



You will be able to examine Terraform’s execution plan before you deploy it to GCP. When you’re ready, go ahead and apply the changes by issuing the terraform apply command.

The following output will be displayed (some parts were cropped for brevity):



If you head back to Compute Engine Dashboard, your instances should be successfully created:



7. Create your Swarm cluster with Ansible

Now that our instances are created, we need to turn them into a Swarm cluster with Ansible. Issue the following command:

ansible-playbook -i inventory main.yml


Next, SSH into the manager instance using its public IP address:



If you run docker node ls, you will get a list of nodes in the swarm:



Deploy the visualizer service with the following command:

docker service create --name=visualizer --publish=8080:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer


8. Update your network rules

The service is exposed on port 8080 of the instance. Therefore, we need to allow inbound traffic on that port. You can use Terraform to update the existing firewall rules:

resource "google_compute_firewall" "swarm" {
  name    = "swarm-firewall"
  network = "${google_compute_network.swarm.name}"

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22", "2377", "7946", "8080"]
  }

  allow {
    protocol = "udp"
    ports    = ["7946", "4789"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_network" "swarm" {
  name = "swarm-network"
}

Run terraform apply again to create the new ingress rule; Terraform will detect the changes and ask you to confirm them:



If you point your favorite browser to http://instance_ip:8080, the following dashboard will be displayed, which confirms our cluster is fully set up:



In an upcoming post, we will see how we can take this further by creating a production-ready Swarm cluster on GCP inside a VPC — and how to provision Swarm managers and workers on-demand using instance groups based on increases or decreases in load.

We will also learn how to bake a CoreOS machine image with Python preinstalled with Packer, and how to use Terraform and Jenkins to automate the infrastructure deployment!

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Build a Ruby based Lambda Function

At AWS re:Invent 2018, it was announced that Ruby is now a supported language for AWS Lambda. In this post, I walk you through how to write your very first Ruby-based Lambda function from scratch, followed by how to configure, deploy, and test a Lambda function.



API Gateway will forward incoming requests to the target Ruby based Lambda function, which will call the corresponding DynamoDB operation on the movies table.

To get started, create a Lambda execution role with permission to invoke the Scan operation on the DynamoDB table:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": "dynamodb:Scan",
      "Resource": [
        "arn:aws:dynamodb:eu-west-3:*:table/movies",
        "arn:aws:dynamodb:eu-west-3:*:table/movies/index/*"
      ]
    }
  ]
}

The function entry point below is self-explanatory: it uses the AWS SDK (the package is pre-installed in Lambda) to instantiate a DynamoDB client in the appropriate region and issue the Scan operation on the DynamoDB table (whose name is defined in an environment variable):

require 'aws-sdk'
require 'json'

def lambda_handler(event:, context:)
  dynamodb = Aws::DynamoDB::Client.new(region: ENV['AWS_REGION'])

  resp = dynamodb.scan({
    table_name: ENV['TABLE_NAME'],
  })
  { statusCode: 200, body: JSON.generate(resp.items) }
end

The AWS SDK for Ruby is included in the Lambda execution environment by default.

Now that our handler is defined, head to the Lambda creation form and select the IAM role (you might need to refresh the page for the changes to take effect) from the Existing role drop-down list. Then, click the Create function button:



Set the table name as an environment variable:
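
This can also be done with the AWS CLI (a sketch; ScanMovies is the function name used below, and movies is the table name):

aws lambda update-function-configuration --function-name ScanMovies \
  --environment "Variables={TABLE_NAME=movies}"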



The movies table contains a set of movies:



Create a deployment package (zip file) and update the function’s code using the AWS CLI command:

zip -r deployment.zip handler.rb
aws lambda update-function-code --function-name ScanMovies --zip-file fileb://./deployment.zip

Make sure to set the Lambda function handler to handler.lambda_handler.

Once the function has been deployed, invoke it manually using the sample event data by clicking on the “Test” button in the top right of the console.
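
Alternatively, invoke it from the AWS CLI (a sketch, assuming the same function name):

aws lambda invoke --function-name ScanMovies response.json
cat response.json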



So far, we learned how to build our first Lambda function with Ruby. We also learned how to invoke it manually from the console. To leverage the power of Lambda, we are going to learn how to trigger this Lambda function in response to incoming HTTP requests (event-driven architecture) using the AWS API Gateway service:



Create a deployment stage and open your favorite browser with the API Invoke URL; you should see a message like the one shown in the following screenshot:



The following screenshot shows a properly configured Ruby based Lambda function with IAM access to DynamoDB:



Like what you’re reading? Check out my book and learn how to build, secure, deploy and manage production-ready Serverless applications in Golang with AWS Lambda.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.
