Highly Available Bastion Hosts with Route53

Instances in a private subnet don’t have a public IP address, and without a VPN or a Direct Connect option, a Bastion Host (jump box) is the expected mechanism to reach your servers. Therefore, we should make it Highly Available.

In this quick post, I will show you how to set up Highly Available Bastion Hosts with the following targets:

  • Bastion hosts will be deployed in two Availability Zones to support immediate access across the VPC and to withstand an AZ failure.
  • Elastic IP addresses are associated with the bastion instances so that the same trusted Elastic IPs are used at all times.
  • Bastion Hosts will be reachable via a permanent DNS entry configured with Route53 (sketched below).
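Conceptually, the DNS piece boils down to a single record that resolves to both bastion Elastic IPs. The CLI call below is only an illustrative sketch of the kind of record this setup ends up creating; the hosted zone ID and the two IP addresses are placeholders:

aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "bastion.slowcoder.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "52.0.0.10"}, {"Value": "52.0.0.11"}]
    }
  }]
}'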


In order to easily set up the infrastructure described above, I used Terraform:

git clone https://github.com/mlabouardy/terraform-aws-labs
cd bastion-highavailability

Note: I did a tutorial on how to set up a VPC with Terraform, so make sure to read it for more details.

Update the variables.tfvars file with your SSH Key Pair name and an existing Hosted Zone ID. Then, issue the following command:

terraform apply -var-file=variables.tfvars

That will bring up the VPC, and all the necessary resources:



Now in your AWS Management Console you should see the resources created:

EC2 Instances:



DNS Record:



Finally, create an SSH tunnel using the DNS record to your private instance:

ssh -f ec2-user@bastion.slowcoder.com -i /d/aws/vpc.pem -L 2800:10.0.3.218:22 -N

Once done, you should be able to access your private instances via SSH:

ssh ec2-user@localhost -p 2800 -i /d/aws/vpc.pem
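As an alternative to managing a local tunnel, recent OpenSSH versions can jump through the bastion directly with the -J (ProxyJump) flag. The private IP below is the same illustrative address used above; loading the key into ssh-agent first lets both hops authenticate:

ssh-add /d/aws/vpc.pem
ssh -J ec2-user@bastion.slowcoder.com ec2-user@10.0.3.218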


Want to take it further? Instead of defining a fixed number of bastion hosts, we could run the bastion host inside an Auto Scaling group with the minimum capacity set to 1.

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Create Front-End for Serverless RESTful API

In this post, we will build a UI for the Serverless REST API we built in the previous tutorial, so make sure to read it before following this part.

Note: make sure to enable CORS for the endpoint. In the API Gateway Console, select the resource, then choose Actions and Enable CORS:



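Once CORS is enabled and the API is redeployed, you can optionally verify it from the command line; the invoke URL below is the same placeholder used later in app.js, and the Origin header matches the S3 website we will create at the end:

curl -i -X OPTIONS \
  -H "Origin: http://movies-tutorial.s3-website-us-east-1.amazonaws.com" \
  -H "Access-Control-Request-Method: GET" \
  https://kbouwyuvoc.execute-api.us-east-1.amazonaws.com/prod/movies

The response headers should include Access-Control-Allow-Origin.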
The first step is to clone the project:

git clone https://github.com/mlabouardy/movies-dynamodb-lambda.git

Head into the ui folder, and modify the js/app.js with your own API Gateway Invoke URL:

angular.module('app', [])
.controller('MainCtrl', function($scope, $http){
  var self = $scope;
  var apiUrl = 'https://kbouwyuvoc.execute-api.us-east-1.amazonaws.com/prod/movies'; // replace with API Gateway Invoke URL

  self.movies = [];
  self.movie = {};
  self.error = '';

  // Fetch the list of movies from the API
  self.getMovies = function(){
    $http.get(apiUrl).then(function(res){
      self.movies = res.data;
    });
  };

  // Create a new movie, then refresh the list
  self.create = function(){
    $http.post(apiUrl, self.movie).then(function(res){
      self.getMovies();
      self.movie = {}; // reset the form model (an object, not a string)
      self.error = '';
    }, function(err){
      self.error = err.data.status;
    });
  };

  self.getMovies();
});


Once done, you are ready to create a new S3 bucket:

aws s3 mb s3://movies-tutorial

Copy all the files in the ui directory into the bucket:

aws s3 cp ui/ s3://movies-tutorial --recursive --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers

Finally, turn on website hosting for your bucket:

aws s3 website s3://movies-tutorial --index-document index.html

After running this command all of our static files should appear in our S3 bucket:



Your bucket is now configured for static website hosting, and you have an S3 website URL like this: http://<bucket_name>.s3-website-us-east-1.amazonaws.com



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Setup AWS Lambda with Scheduled Events

This post is part of my “Serverless” series. In this part, I will show you how to set up a Lambda function that sends emails on a scheduled event defined in CloudWatch.

1 – Create Lambda Function

So start by cloning the project:

git clone https://github.com/mlabouardy/schedule-mail-lambda.git

I implemented a simple Lambda function in Node.js that sends an email using the Mailgun library:

'use strict';
var mg = require('mailgun-js')({
  apiKey: process.env.MAILGUN_API_KEY || 'YOUR_API_KEY',
  domain: process.env.MAILGUN_DOMAIN || 'DOMAIN_NAME'
});

exports.sendEmail = function(event, context, callback){
  mg.messages().send({
    from: 'mohamed.labouardy@gmail.com',
    to: 'mohamed@labouardy.com',
    subject: 'Hello',
    text: 'Sent from lambda on a defined schedule'
  }, function(err, body){
    callback(err, body);
  });
};

Note: you could use another service like AWS SES or your own SMTP server

Then, create a zip file:
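Assuming the handler above lives in index.js and the mailgun-js dependency has been installed locally with npm install, something like this produces the archive:

zip -r schedule-mail-lambda.zip index.js node_modules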



Next, we need to create an Execution Role for our function:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}

Save it as lambda_role_policy.json, then create the role:

aws iam create-role --role-name lambda_execution --assume-role-policy-document file://lambda_role_policy.json
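Optionally, you can also attach the AWS-managed basic execution policy to the role so the function can write its logs to CloudWatch Logs:

aws iam attach-role-policy --role-name lambda_execution \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole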


Execute the following Lambda CLI command to create the function. We need to provide the zip file and the IAM role ARN we created earlier, and set MAILGUN_API_KEY and MAILGUN_DOMAIN as environment variables.

aws lambda create-function --region us-east-1 --function-name mail-scheduler \
--zip-file fileb://schedule-mail-lambda.zip --role arn:aws:iam::3XXXXXXX3:role/lambda_execution \
--handler index.sendEmail --runtime nodejs6.10 \
--environment Variables="{MAILGUN_API_KEY=key-6XXXXXXXXXXXXXXXXXXXXXX5,MAILGUN_DOMAIN=sandboxXXXXXXXXXXXXXX.mailgun.org}"

Note: the --runtime parameter uses Node.js 6.10, but you can also specify Node.js 4.3.

Once created, AWS Lambda returns function configuration information as shown in the following example:



Now if we go back to the AWS Lambda Dashboard, we should see that our function has been successfully created:



2 – Configure a CloudWatch Rule

Create a new rule which will trigger our Lambda function every 5 minutes:



Note: you can specify the value as a rate or in the cron expression format. All schedules use the UTC time zone, and the minimum precision for schedules is one minute
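If you prefer the CLI over the console, the same schedule can be wired up roughly as follows; the account ID and ARNs are placeholders and the statement ID is arbitrary:

aws events put-rule --name mail-scheduler-every-5-minutes \
  --schedule-expression "rate(5 minutes)"

aws lambda add-permission --function-name mail-scheduler \
  --statement-id cloudwatch-mail-scheduler --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:3XXXXXXX3:rule/mail-scheduler-every-5-minutes

aws events put-targets --rule mail-scheduler-every-5-minutes \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:3XXXXXXX3:function:mail-scheduler"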

If you go back now to the Lambda Function Console and navigate to the Triggers tab, you should see that the CloudWatch Events rule has been added:



After 5 minutes, CloudWatch will trigger the Lambda Function and you should get an email notification:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

How ChatOps can help you DevOps better!

When people hear DevOps, they often relate it to “automation”, “teamwork” and many “tools”, which is right. DevOps is all about CAMS: Culture, Automation, Measurement, and Sharing. The purpose of this article is to show how ChatOps can boost DevOps by bringing CAMS into everyday practice.

What is DevOps?

DevOps is a set of practices that emphasize the collaboration and communication of both Software Engineers and IT & Infrastructure Operations to reduce the time to market of a product. One main goal of DevOps is to deploy features into production quickly and to detect and correct problems when they occur, without disrupting other services (self-healing, Blue/Green deployment, Canary updates …).

There are several guidelines for achieving DevOps maturity; here are a few you need to know:

  • Continuous Integration is the practice of integrating, building and testing code within the development environment. It requires the development team to integrate code into a shared repository (Version Control System). A CI server checks out the code and runs all the pre-deployment tests (which do not require the code to be deployed to a server). If they pass, it compiles and packages the code into an artifact (JAR, Docker image, gem …) and pushes it to an Artifact Repository Manager. The artifact is then deployed inside an immutable container to a test environment (Quality Assurance). Once deployed, post-deployment tests (functional, integration & performance tests) are executed.
  • Continuous Delivery is an extension of the continuous integration pipeline. It aims to make any change to the system releasable; it requires a person or a business rule to decide when the final push to production should occur.
  • Continuous Deployment is an advanced evolution of continuous delivery. It’s the practice of deploying all the way into production without any human intervention.

In addition to the practices discussed above, most DevOps teams today embrace collaborative messaging platforms, such as Slack, to communicate with each other and to get real-time updates about the system through online chat. And that’s certainly the spirit behind ChatOps.

What is ChatOps?

“Placing tools directly in the middle of the conversation” — Jesse Newland, GitHub

Collaboration and conversation are a force that lets people work and learn together to produce new things, and this is happening in an exponential way that accelerates every year.

ChatOps (an amalgamation of chat and operations) is an emerging movement that eases the integration between teams and the various tools/platforms of DevOps and beyond. It is about conversation-driven development, bringing the tools into the conversations. Robots are today members of your team to whom you can send a request and get an instant response.

ChatOps is a model where people, tools, processes and automation are connected in a transparent flow. It also helps you collaborate on and control pipelines in one window.

Today’s DevOps toolchain brings in many kinds of tools, including development software, network and server management, testing, monitoring, etc. Collaborating on and controlling these pipelines in one window has helped development teams work in a more efficient and agile way.

There are three main components in ChatOps:
Collaboration tool: the chat client where stakeholders and teams are connected to each other and to the systems they work on. There are several chat platforms:

  • Slack: a leading chat platform for teams which has accumulated more than 4 million daily active users. It is also one of the first platforms that integrated bots into its system.
  • HipChat by Atlassian is a group chat, file sharing, video chat & screen sharing built for teams & business.

Bot: the core of the ChatOps methodology. The bot sits between the collaboration tool and the DevOps tools.
It receives requests from team members and retrieves information from integrated systems by executing a set of commands (scripts).

  • Hubot, a leading bot tool for ChatOps. It is a valuable open source robot (written in CoffeeScript) for automating chat rooms, made by GitHub back in 2013. Hubot is useful and powerful thanks to scripts: they define the skills of your Hubot instance. Hundreds of them are written and maintained by the community, and you can create your own. It mainly helps automate most ops-related tasks.
  • Lita is a framework for bots dedicated to company chat rooms, written in Ruby. It is heavily inspired by Hubot. This framework can be used to build operational task automations and has a very comprehensive list of plugins, which means it can be integrated with many chat platforms such as Slack, Facebook Messenger and others.
  • Cog, made by Operable, is another chatbot framework to help automate DevOps workflows. It’s designed to be chat platform and language agnostic, and uses a Unix-style pipeline to activate complex functionality.
  • ErrBot is a chatbot daemon that generates bots that sit between a chat platform and DevOps tools. It is written in Python and aims to make it easy to integrate any tool that provides an API with a chat platform via commands.

System integration: the third key element in ChatOps. Simply put, these are the DevOps tools that enable more productivity, such as:

  • Issue tracking: JIRA, OTRS, TeamForge …
  • Version Control Systems: Github, Gitlab, Bitbucket …
  • Infrastructure as Code (IaC): Terraform, Vagrant, Packer, Swarm, Kubernetes, Docker, AWS CloudFormation …
  • Configuration Management tools (Provisioning): Ansible, Salt, Chef, Puppet …
  • Continuous Integration Servers: Jenkins, Travis CI, Bamboo …
  • Monitoring: Grafana, Kibana, Prometheus …

Nowadays, ChatOps is operational. Several teams around the world have already connected their chat platforms to their build systems to get notifications and to query and execute processes on their continuous integration servers. The same thing happens in QA (Quality Assurance) teams, support teams and the rest.

With ChatOps, trust is built within the team, especially since work is shared and brought into the foreground by putting it all in one place. Your chat platform is your new command line.

Conversation-driven collaboration is not new, but with ChatOps we observe a combination of the oldest form of collaboration and the newest technologies. We are not surprised that this combination has changed the way staff members work. This should push people toward building software that makes this collaboration more contributory, smooth and secure.

For further information on DevOps and ChatOps approaches, check out our DevOps Wiki on GitHub.

Thanks for reading,

Setup Docker Swarm on AWS using Ansible & Terraform

This post is part of the “IaC” series explaining how to use Infrastructure as Code concepts with Terraform. In this part, I will show you how to set up a Swarm cluster on AWS using Ansible & Terraform, as shown in the diagram below (1 master and 2 workers), in less than 1 min ⏱:



All the templates and playbooks used in this tutorial can be found on my GitHub: https://github.com/mlabouardy/terraform-aws-labs/tree/master/docker-swarm-cluster 😎

Note: I did some tutorials about how to get started with Terraform on AWS, so make sure you read them before you go through this post.

1 – Setup EC2 Cluster using Terraform

1.1 – Global Variables

This file contains environment specific configuration like region name, instance type …

variable "aws_region" {
  description = "AWS region on which we will setup the swarm cluster"
  default     = "us-east-1"
}

variable "ami" {
  description = "Amazon Linux AMI"
  default     = "ami-4fffc834"
}

variable "instance_type" {
  description = "Instance type"
  default     = "t2.micro"
}

variable "key_path" {
  description = "SSH Public Key path"
  default     = "/home/core/.ssh/id_rsa.pub"
}

variable "bootstrap_path" {
  description = "Script to install Docker Engine"
  default     = "install-docker.sh"
}

1.2 – Config AWS as Provider

provider "aws" {
  region = "${var.aws_region}"
}

1.3 – Security Group

This SG allows all the inbound/outbound traffic:

resource "aws_security_group" "default" {
  name = "sgswarmcluster"

  # Allow all inbound
  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Enable ICMP
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

1.4 – EC2 Instances

resource "aws_key_pair" "default" {
  key_name   = "clusterkp"
  public_key = "${file("${var.key_path}")}"
}

resource "aws_instance" "master" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${aws_key_pair.default.id}"
  user_data              = "${file("${var.bootstrap_path}")}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  tags {
    Name = "master"
  }
}

resource "aws_instance" "worker1" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${aws_key_pair.default.id}"
  user_data              = "${file("${var.bootstrap_path}")}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  tags {
    Name = "worker 1"
  }
}

resource "aws_instance" "worker2" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${aws_key_pair.default.id}"
  user_data              = "${file("${var.bootstrap_path}")}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  tags {
    Name = "worker 2"
  }
}

Bootstrap script to install the latest version of Docker:

#!/bin/sh
yum update -y
yum install -y docker
service docker start
usermod -aG docker ec2-user

2 – Transform to Swarm Cluster with Ansible

The playbook is self explanatory:

---
- name: Init Swarm Master
  hosts: masters
  gather_facts: False
  remote_user: ec2-user
  tasks:
    - name: Swarm Init
      command: docker swarm init --advertise-addr {{ inventory_hostname }}

    - name: Get Worker Token
      command: docker swarm join-token worker -q
      register: worker_token

    - name: Show Worker Token
      debug: var=worker_token.stdout

    - name: Master Token
      command: docker swarm join-token manager -q
      register: master_token

    - name: Show Master Token
      debug: var=master_token.stdout

- name: Join Swarm Cluster
  hosts: workers
  remote_user: ec2-user
  gather_facts: False
  vars:
    token: "{{ hostvars[groups['masters'][0]]['worker_token']['stdout'] }}"
    master: "{{ hostvars[groups['masters'][0]]['inventory_hostname'] }}"
  tasks:
    - name: Join Swarm Cluster as a Worker
      command: docker swarm join --token {{ token }} {{ master }}:2377
      register: worker

    - name: Show Results
      debug: var=worker.stdout

    - name: Show Errors
      debug: var=worker.stderr

Now that we have defined all the required templates and the playbook, we only need to type two commands to bring up the Swarm cluster:

terraform apply
ansible-playbook -i hosts playbook.yml

Note: make sure to update the hosts file with the public IP of each EC2 instance.
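A minimal hosts inventory matching the group names used in the playbook might look like this (the IP addresses are placeholders):

[masters]
34.200.X.X

[workers]
34.201.X.X
52.90.X.X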

Setting up the Swarm cluster in action is shown below 😃 :
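Once the playbook finishes, you can SSH into the master node and confirm that all three nodes joined the cluster:

docker node ls

You should see one manager (the Leader) and two workers, all in Ready state.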

Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Running Docker on AWS EC2

In this quick tutorial, I will show you how to install Docker 🐋 on AWS EC2 instance and run your first Docker container.

1 – Setup EC2 instance

I already did a tutorial on how to create an EC2 instance, so I won’t repeat it. There are a few ways you’ll want to differ from that tutorial:

We select the “Amazon Linux AMI 2017.03.1 (HVM), SSD Volume Type” as the AMI. The exact version may change with time.
We configure the security groups as below. This setting allows access to port 80 (HTTP) from anywhere, as well as SSH access.



Go ahead and launch the instance; it will take a couple of minutes:



2 – Install Docker

Once your instance is ready to use, connect to the server via SSH using the public DNS and the private key:



Once connected, use the yum package manager to install Docker by typing the following commands:

sudo yum update -y
sudo yum install -y docker

Next, start the docker service:
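On the Amazon Linux AMI this is done with:

sudo service docker start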



In order to use the docker command without root privileges (sudo), we need to add ec2-user to the docker group:

sudo usermod -aG docker ec2-user

To verify that docker is correctly installed, just type:
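For example (note that the docker group change above only takes effect after you log out and back in, so a fresh SSH session may be needed to run these without sudo):

docker --version   # prints the client version, e.g. 17.03.1-ce
docker info        # talks to the daemon; requires the group change (or sudo)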



As you can see, the latest version of Docker has been installed (v17.03.1-ce).

Congratulations! 💫 🎉 You now have an EC2 instance with Docker installed.

3 – Deploy Docker Container

It’s time to run your first container 😁. We will create an nginx container with this command:
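A typical way to do this looks like the following; the container name is arbitrary, and the -p flag maps port 80 of the host to port 80 inside the container:

docker run -d -p 80:80 --name nginx nginx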



If we run the list command “docker ps”, we can see that an nginx container has been created from the official nginx image.



Finally, if you visit your instance’s public DNS name in your browser, you should see something like this:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.

Build a Facebook Messenger bot with Go and Messenger API

In this first tutorial of the “ChatOps” series, I will show you how to quickly create a Facebook Messenger bot in Golang. All the code used in this demo can be found on my GitHub.

1 – Messenger bot

We will start by creating a dummy web server with one endpoint that prints a hello world message. I’ll use the “gorilla/mux” package to do that; I found it much easier for setting up our server’s routes than the Go standard library.

We first need to install the “gorilla/mux” library:

go get github.com/gorilla/mux

Then, create a file called app.go with the following content:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func HomeEndpoint(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello from mlabouardy :)")
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", HomeEndpoint)
	if err := http.ListenAndServe(":8080", r); err != nil {
		log.Fatal(err)
	}
}

So basically, we have a main function which creates a new route “/” and associates with this path a HomeEndpoint function that uses the ResponseWriter reference to print a message. Then it starts an HTTP server on port 8080.

To verify that things are working, start your local server with:

go run app.go

Then in your browser, visit http://localhost:8080 and you should see:

Now that you’ve got your first endpoint working, we will need to expose two endpoints to make it work with the Facebook platform:

1.1 – Verification Endpoint: (GET /webhook)

func VerificationEndpoint(w http.ResponseWriter, r *http.Request) {
	challenge := r.URL.Query().Get("hub.challenge")
	token := r.URL.Query().Get("hub.verify_token")

	if token == os.Getenv("VERIFY_TOKEN") {
		w.WriteHeader(200)
		w.Write([]byte(challenge))
	} else {
		w.WriteHeader(404)
		w.Write([]byte("Error, wrong validation token"))
	}
}

It simply looks for the Verify Token and responds with the challenge sent in the verification request.
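You can simulate Facebook’s verification request against the local server with a quick curl call (assuming the endpoint is registered on GET /webhook and VERIFY_TOKEN is set to the same value); the token and challenge values here are just examples:

curl "http://localhost:8080/webhook?hub.mode=subscribe&hub.verify_token=MY_SECRET_TOKEN&hub.challenge=1234567890"

The server should respond with the challenge value, 1234567890.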

1.2 – Messages Handler Endpoint: (POST /webhook)

func MessagesEndpoint(w http.ResponseWriter, r *http.Request) {
	var callback Callback
	json.NewDecoder(r.Body).Decode(&callback)
	if callback.Object == "page" {
		for _, entry := range callback.Entry {
			for _, event := range entry.Messaging {
				ProcessMessage(event)
			}
		}
		w.WriteHeader(200)
		w.Write([]byte("Got your message"))
	} else {
		w.WriteHeader(404)
		w.Write([]byte("Message not supported"))
	}
}

It serializes the request body into a Callback object, then parses it, fetches the message object and passes it as an argument to the ProcessMessage function, which uses the Facebook Graph API to send the response to the user (in this case we will send an image):

func ProcessMessage(event Messaging) {
	client := &http.Client{}
	response := Response{
		Recipient: User{
			ID: event.Sender.ID,
		},
		Message: Message{
			Attachment: &Attachment{
				Type: "image",
				Payload: Payload{
					URL: IMAGE,
				},
			},
		},
	}
	body := new(bytes.Buffer)
	json.NewEncoder(body).Encode(&response)
	url := fmt.Sprintf(FACEBOOK_API, os.Getenv("PAGE_ACCESS_TOKEN"))
	req, err := http.NewRequest("POST", url, body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Add("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
}

Our local server URL http://localhost:8080 is not reachable by other people on the internet and doesn’t support HTTPS, which is necessary for a Facebook Messenger bot. Therefore, we need to expose it to the public.

2 – Deployment

Note: You could also use a tool like ngrok. It basically creates a secure tunnel to your local machine along with a public URL you can use for browsing your local server. Keep in mind that to use your bot in production, you need to use a real IaaS like AWS, Heroku, Clever Cloud, etc.
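For local testing, exposing the server from the previous section is a one-liner; ngrok prints a public HTTPS URL that you can paste into the Facebook webhook settings:

ngrok http 8080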

In this tutorial I will choose Clever Cloud as the provider; it deploys your Go application for free and offers extra add-ons like monitoring, logs, scaling, continuous delivery …

In order to deploy to CleverCloud you’ll need a CleverCloud user account. Signup is free and instant. After signing up:

We click on “Add an application”; then you can either upload your app code from a local repository or import it from GitHub:

Next, we choose Go as our server-side language. Then, we click on “Next”.

We leave all fields at their defaults, and we click on “Create”.

Our server does not use any external resources (MySQL, Redis …), so we will simply skip this part by clicking on “I don’t need any add-on”.

Congratulations! You have successfully deployed your server.

Note: The app URL is:

https://ID.cleverapps.io

where ID is the string on the bottom right of the dashboard.

3 – Facebook Setup

3.1 – Create Facebook Page

If you don’t already have one, you need to create a Facebook page that we will connect our bot to.

Just give it a name, and that’s it: you have now created a Facebook page for your bot.

3.2 – Create Facebook App

Once the page is created, we will create a Facebook app, which works as middleware connecting your webhook server (the app URL) and your public page.

You need to give your app a name, then click on “Skip and Create App ID”.

After creating the app, we need to add Messenger as a platform: click on “Add Product”, then select Messenger as a product.

Now you’re in the Messenger settings. There are a few things in here you’ll need to fill out in order to get your chatbot wired up to the server endpoints we set up earlier.

3.2.1 – Generate a Page Access Token

Using the page we created earlier, you’ll get a random “Page Access Token”.

You need to copy the token to your clipboard. We’ll need it as an environment variable (PAGE_ACCESS_TOKEN) for our server.

3.2.2 – Setup subscription

Next, we click on the “Setup webhook” button in the “Webhooks” section, and a new popup will show up:

  • Callback URL: the Clever Cloud app URL we set up earlier, pointing to the /webhook endpoint
  • Verify Token: a secret token that will be sent to your bot in order to verify the request is coming from Facebook. Make sure to remember the value, because we will need it as a second environment variable (VERIFY_TOKEN) for the server
  • Subscription Fields: represents which events you want Facebook to notify your webhook about; in this case, we will choose “messages”

After you’ve configured your subscription, you’ll need to subscribe to the specific page you want to receive message notifications for.

3.2.3 – Set environment variables

Once you’ve gotten your PAGE_ACCESS_TOKEN and VERIFY_TOKEN, make sure you add them both as environment variables for the server in the Clever Cloud dashboard.

Then restart the application and you should be good to go!

4 – Test the Bot

Go to your Facebook page and send it a message. You should see a GIF sent back to you.

5 – Customize your Bot’s behavior

In this quick tutorial I showed you how to build a simple (and dumb) bot for Facebook Messenger. To make it smarter and support richer interactions with the user, we need to use an NLP backend like api.ai (Google), wit.ai (Facebook) or motion.ai. That will be the subject of my upcoming tutorial.
