Bots using Microsoft Bot Platform and Heroku: Customer Life-cycle Management

This post is about using the Microsoft Bot Platform with Heroku to build a bot!

The demo scenario is very simple:

  1. User starts the conversation
  2. Bot asks for an account number
  3. Customer provides an account number or indicates they are not a customer
  4. Bot retrieves details, if available, for a personalised greeting and asks how it can be of help today
  5. Customer states the problem/reason for contact
  6. Bot uses sentiment analysis to provide the appropriate response


Bots are nothing but automated programs that carry out a well-defined set of tasks. They are old technology (think web-crawlers).

Recent developments, such as Facebook and Skype platform APIs being made available for free, the easy availability of cloud-computing platforms and the relative sophistication of machine learning as a service, have renewed interest in this technology, especially for customer life-cycle management applications.

Three main components of a modern, customer facing bot app are:

  • Communication Platform (e.g. Facebook Messenger, Web-portal,  Skype etc.): the eyes, ears and mouth of the bot
  • Machine Learning Platform: the brain of the bot
  • Back end APIs for integration with other systems (e.g. order management): the hands of the bot

Other aspects include giving a proper face to the bot in terms of branding, but from a technical perspective the above three are sufficient.

Heroku Setup

Heroku provides various flavours of virtual containers (including ‘free’ and ‘hobby’ ones) for different types of applications. To be clear: a ‘dyno’ is a lightweight Linux container which runs a single command that you specify.

Another important reason to use Heroku is that it provides an ‘https’ endpoint for your app, which makes it more secure. This is very important as most platforms (e.g. Facebook Messenger) will not allow you to use a plain ‘http’ endpoint. So unless you are ready to fork out big bucks for proper web-hosting and SSL certificates, start out with something like Heroku.

Therefore for a Node.JS dyno you will run something like node <js file name>.

The cool thing about Heroku (in my view) is that it integrates with Git so deploying your code is as simple as ‘git push heroku <branch name to push from>’.
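Before that push will work, Heroku needs to know which command to run in the dyno. For a Node.JS app this is declared in a one-line Procfile at the root of your repo; a minimal sketch (the entry-point name app.js is a placeholder for your own main file):

```shell
# Create a one-line Procfile telling Heroku's web dyno what to run.
# "app.js" is a placeholder entry-point name for this sketch.
echo "web: node app.js" > Procfile
cat Procfile
# Deploying is then just:
#   git push heroku master
```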

You will need to follow a step-by-step process to make yourself comfortable with Heroku (including installing the Heroku CLI); the Heroku Dev Center walks you through this.

We will be using a Node.JS flavour of Heroku ‘dynos’.

Heroku has an excellent ‘hello world’ guide (the ‘Getting Started with Node.js’ tutorial) in its Dev Center.


Microsoft Bot Platform

The Microsoft Bot Platform allows you to create, test and publish bots easily. It also provides connectivity to a large number of communication platforms (such as Facebook Messenger). Registration and publishing is FREE at the time of writing.

You can find more information on the Node.js base framework in the official Bot Builder SDK documentation.

The dialog framework in the MS Bot Platform is modelled on REST-style paths (dialogs are addressed like routes). This is a very important concept to master before you can start building bots.


Microsoft provide a publishing platform to register your bot.

Once you have the bot correctly published on a channel (e.g. Web, Skype etc.) messages will be passed on to it via the web-hook.

To publish your bot you need to provide an endpoint (i.e. the web-hook) to a web app in Node.JS which implements the bot dialog framework. This web app is, in essence, the front door to your ‘bot’.

You can test the bot locally by downloading the Microsoft Bot Framework Emulator.

The demo architecture is outlined below:

[Diagram: Bot Demo Architecture]

Detailed Architecture for the Demo

There are three main components to the above architecture as used for the demo:

  1. Publish the bot in the Bot Registry (Microsoft) for a channel – you will need your Custom Bot application endpoint to complete this step. In the demo I am publishing only to a web channel, which is the easiest to work with in my opinion. Once registered you will get an application id and secret which you will need to add to the bot app to ‘authorise’ it.
  2. Custom Bot Application (Node.JS) with the embedded bot dialog – the endpoint where the app is deployed needs to be public, a HTTPS endpoint is always better! I have used Heroku to deploy my app which gives me a public HTTPS endpoint to use in the above step.
  3. Machine Learning Services – these provide the functionality that makes the bot intelligent. We could have a statically scripted bot with just the embedded dialog, but where is the fun in that? For the demo I am using the Watson Sentiment Analysis API to detect the user’s sentiment during the chat.

*One item that I have purposely left out of the Custom Bot app, in the architecture, is the service that provides access to the data which drives the dialog (i.e. Customer Information based on the Account Number). In the demo a dummy service is used that returns hard-coded values for Customer Name when queried with an Account Number.

The main custom bot app Javascript file is available below, right click and save-as to download.

Microsoft Bot Demo App


Javascript: Playing with Prototypes – II

Let us continue the discussion about Prototypes in Javascript and show the different ways in which inheritance can work. Inheritance is very important because whether you are trying to extend the JQuery framework or trying to add custom event sources in Node.JS you will need to extend an existing JS object.

Let us remember the most important mantra in JS – “nearly everything interesting is an object, even functions”

Objects are mutable, primitives (e.g. strings) are NOT!
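A quick illustration of that mantra: string methods return new strings, while objects can be changed in place.

```javascript
var s = "abc";
s.toUpperCase();      // returns a NEW string; the original is untouched
console.log(s);       // abc

var o = { label: "abc" };
o.label = "ABC";      // objects are mutable: changed in place
console.log(o.label); // ABC
```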

Let us first introduce the example. There is a base object: Person which has two properties ‘id’ and ‘age’ and getter/setter methods for these. We want to create a child object: Student, which should inherit the id and age properties from Person and add its own read-only ‘student id’ property.

// Base object: Person
function Person(id) {
  this.id = 0;
  this.age = 0;
}

// Add set/get methods for Age and Id
Person.prototype.setId = function(id) {
  this.id = id;
};
Person.prototype.getId = function() {
  return this.id;
};
Person.prototype.setAge = function(age) {
  this.age = age;
};
Person.prototype.getAge = function() {
  return this.age;
};

// Child object Student which should extend properties and methods from Person
function Student(sid) {
  this.sid = sid;
  Person.call(this); // Constructor for Person (to be safe)
}

// <Inheritance Method: Student.prototype is assigned here - see the patterns below>

// Student Id getter
Student.prototype.getSid = function() {
  return this.sid;
};


There are different ways (patterns) of implementing ‘inheritance’, based on how Student.prototype is assigned (the ‘Inheritance Method’):

  • Pattern 1: Student.prototype = Object.create(Student);
  • Pattern 2: Student.prototype = Object.create(Person.prototype);
  • Pattern 3: Student.prototype = new Person;
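One detail worth knowing with all three patterns: reassigning Student.prototype also replaces the default constructor property, so it is common to restore it explicitly. A minimal sketch using pattern 2, with simplified Person/Student definitions for illustration:

```javascript
function Person() { this.id = 0; }

function Student(sid) {
  Person.call(this); // run the parent constructor on the new instance
  this.sid = sid;
}

// Pattern 2: chain the prototypes
Student.prototype = Object.create(Person.prototype);
Student.prototype.constructor = Student; // restore the constructor reference

var s = new Student(101);
console.log(s instanceof Student);      // true
console.log(s instanceof Person);       // true
console.log(s.constructor === Student); // true
```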

Below is the snippet of code we use to probe what happens in each of the three cases. Two instances of Student are created (s1 and s2). Then we examine the prototypes and assign values to some of the properties.

// <Inheritance Method: one of the three options above>
var s1 = new Student(101);
var s2 = new Student(102);
console.log("S1:", s1);
console.log("S2:", s2);
console.log("Proto S1", Object.getPrototypeOf(s1));
console.log("Proto S2", Object.getPrototypeOf(s2));
if (Object.getPrototypeOf(s1) == Object.getPrototypeOf(s2)) {
  console.log("Compare prototypes:", true);
}
s1.setAge(30);
console.log("S1", s1.getAge());
s1.setId(1);
s2.setId(2);
console.log("Compare Id S1:S2", s1.getId(), s2.getId());
console.log("S2 set age 20");
s2.setAge(20);
console.log("S1 age", s1.getAge());
console.log("S2 age", s2.getAge());


Let us look at what happens in each case:

1) Student.prototype = Object.create(Student);


S1: { sid: 101, id: 0, age: 0 }
S2: { sid: 102, id: 0, age: 0 }
Proto S1: { getSid: [Function] }
Proto S2: { getSid: [Function] }
Compare prototypes: true
TypeError: Object object has no method 'setId'
at Object.<anonymous> (/Users/azaharmachwe/node_code/NodeTest/thisTest.js:73:4)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)
at node.js:901:3


The surprising result is that an exception is thrown. It seems there is no method ‘setId’ on the Student instance. This means that inheritance did not work. We can confirm this by looking at the prototype of S1 and S2 instance. Only the getter for student id defined in the Student object is present. We have not inherited any of the methods from Person.

But if we look at the list of attributes we see ‘id’ and ‘age’ present. So it seems the attributes were acquired somehow.

If we look at the way we define the Person object, we actually add the ‘id’ and ‘age’ attributes to the instance (i.e. we use this.id and this.age inside the constructor) whereas the accessor methods are added on the prototype. So even with Student.prototype = Object.create(Student), the attributes are still correctly set, because they are defined at the instance level (via the call to the Person constructor inside Student).

If the Person.call(this) line is removed from the Student constructor then you will only see the Student-level attribute (‘sid’).


2) Student.prototype = Object.create(Person.prototype);


S1: { sid: 101, id: 0, age: 0 }
S2: { sid: 102, id: 0, age: 0 }
Proto S1: { getSid: [Function] }
Proto S2: { getSid: [Function] }
Compare prototypes: true
S1 30
Compare Id S1:S2 1 2
S2 set age 20
S1 age 30
S2 age 20

No errors this time.

So we see both S1 and S2 instances have the correct attributes (Person + Student), the prototypes for both contain the getter defined in Student, and both have the same prototype. More interesting is the fact that we can set ‘age’ and ‘id’ on them as well, showing that the attribute setters/getters have been inherited from Person.

But why can’t we see the get/set methods for ‘age’ and ‘id’ on the Student prototype? The reason is that the call to Object.create with the Person.prototype parameter chains the prototype of Person to that of Student. To see the get/set methods for ‘age’ and ‘id’ that the Student instance is actually using, add the following line to the probe commands:

console.log("Proto of Proto S1", Object.getPrototypeOf(Object.getPrototypeOf(s1)));
This proves that the object is inheriting these methods at the prototype level and not at the object level. This is the recommended pattern for inheritance.

3) Student.prototype = new Person;

This is a method you may see in some examples out there. But this is not the recommended style. The reason is that in this case you are linking the prototype of Student with an instance of Person. Therefore you get all the instance variables of the super-type included in the sub-type’s prototype.


S1: { sid: 101 }
S2: { sid: 102 }
Proto S1: { id: 0, age: 0, getSid: [Function] }
Proto S2: { id: 0, age: 0, getSid: [Function] }
Compare prototypes: true
S1 30
Compare Id S1:S2 1 2
S2 set age 20
S1 age 30
S2 age 20

Note the presence of ‘id’ and ‘age’ attributes with default values in the prototypes of S1 and S2. If the attributes are array or object type (instead of a primitive type as in this case), we can get all kinds of weird, difficult to debug behaviours. This is the case with frameworks where a base object needs to be extended to add custom functionality. I came across this issue while trying to create a custom Node.JS event source.
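The hazard is easy to reproduce with a mutable attribute. Here is a small sketch of how an array placed on the (shared) prototype leaks state between instances:

```javascript
function Tracker() {}
// BAD: the array lives on the prototype, shared by every instance
Tracker.prototype.events = [];

var t1 = new Tracker();
var t2 = new Tracker();
t1.events.push("t1 was here");
console.log(t2.events); // [ 't1 was here' ] - t2 sees t1's data!
```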

Wrong way to extend: A Node.JS example

I have seen many Node.JS custom event emitter examples that use pattern number (3). The correct pattern to use is pattern (2). Let us see why.

The code below extends the Node.JS EventEmitter (in the ‘events’ module) to create a custom event emitter. Then two instances of this custom event emitter are created. Different event handling callback functions for the two instances are also defined. This will allow us to clearly identify which instance handled the event.

In the end we cause the custom event to fire on both the instances.

var ev = require("events");
// Create a custom event emitter by extending the Node.JS event emitter
function myeventemitter(id) {
  this.id = id;
}
// Try different ways of extending
myeventemitter.prototype = new ev.EventEmitter;
myeventemitter.prototype.fire = function() {
  this.emit("go", this.id);
};
// Initialise two instances of the custom event emitter
var myee1 = new myeventemitter("A");
var myee2 = new myeventemitter("B");
// Define callbacks on the custom event ('go')
myee1.on("go", function(id) {
  console.log("My EE1: Go event received from", id);
});
myee2.on("go", function(id) {
  console.log("My EE2: Go event received from", id);
});
// Cause the custom event to fire on both the custom event emitters
console.log("Fire A");
myee1.fire();
console.log("Fire B");
myee2.fire();
// Dump the prototype of our custom event emitter
console.log(Object.getPrototypeOf(myee1));

Note we are using pattern (3) to extend the EventEmitter:

myeventemitter.prototype = new ev.EventEmitter;

We expect that custom events fired on instance 1 will result in the event handling function on instance 1 being called. The same thing should happen for instance 2. Let us look at the actual output:

Fire A
My EE1: Go event received from A
My EE2: Go event received from A
Fire B
My EE1: Go event received from B
My EE2: Go event received from B
{ domain: null,
_events: { go: [ [Function], [Function] ] },
_maxListeners: 10,
fire: [Function] }

This looks wrong! When we cause instance 1 to fire its custom event it actually triggers the event handling functions in both the instances! Same happens when we try with instance 2.

The reason, as you may have already guessed, is that when we use pattern (3) we actually attach the object that holds the individual event handling functions (variable name: _events) to the prototype. This can be seen in the above output.

Therefore both instances of the custom event emitter will have the same set of event handling functions registered because there is only one such set.

To correct this just switch the extension pattern to (2):

Fire A
My EE1: Go event received from A
Fire B
My EE2: Go event received from B
{ fire: [Function] }

The output now looks correct. Only the instance specific callback function is called and the prototype does not store the event handling functions. Therefore each instance of the custom event emitter has its own set for storing event handling functions.

Horizontal Web-app Scaling with Nginx and Node.JS

One highly touted advantage of using Node.JS is that it makes applications easy to scale. This is true to an extent especially when it comes to web-apps.

A stateless request-response mechanism lends itself to parallelisation. This is as easy as spinning up another instance of the request handling process on the same or different machine.

Where stateful request-response is required (say, to maintain session information), then to scale up, the ‘state’ must be shared safely across the different instances of the request handling processes. This separates the ‘functional’ aspects of the request handling mechanism from the side-effect related code.

To tie in all the different web-app instances under a single public address and to load-balance across them we need a ‘reverse-proxy’. We will use Nginx for this.

Software needed:

  • Nginx (v 1.7.10)
  • Node.JS (v 0.10.12)

First let us setup the Nginx configuration:

events {
	worker_connections 768;
}

http {
	upstream localhost {
		server 127.0.0.1:18081;
		server 127.0.0.1:18082;
		server 127.0.0.1:18083;
	}

	server {
		listen 80;

		location / {
			proxy_pass http://localhost;
		}
	}
}

More information about setting up and running Nginx can be found in the official Nginx documentation.

This configuration sets up the public address as localhost:80 with three private serving instances on the same machine at ports 18081, 18082 and 18083.

Let us also create a serving process in Node.JS using the Express framework:

var express = require("express");
var app = express();

var name = process.argv[2];
var PORT = process.argv[3] || 18080;

app.get("/", function(request, response) {
  console.log("You have been served by: ", name, "on", PORT);
  response.write("Served by :" + name + " on " + PORT);
  response.end();
});

app.listen(PORT);
console.log("Server online: ", name, ":", PORT);


This takes in server name and port as the arguments.

We will spin up three instances of this serving process on the same machine with the  port numbers as in the Nginx config.

If we name the above as server.js then the instances can be spun up as:

node server.js <server_name> <port>

*Make sure you use the correct port (as provided in the Nginx config file).

[Screenshot: three instances of server.js running in separate terminal windows]


Then just point your browser to localhost:80 and you should see:

[Screenshot: browser at localhost:80 showing the ‘Served by’ response]


Press refresh multiple times and you should see your request being served by different instances of the web-app. Nginx by default uses ‘round-robin’ load-balancing, therefore you should see each of the instances being named one after the other (almost!).

[Screenshots: refreshing shows the response served by different instances in turn]


Scaling out is as simple as spinning up a new instance and adding its IP and port to the Nginx configuration and reloading it.


Understanding the NodeJS EventLoop

The EventLoop is the secret sauce in any NodeJS based app.

It provides the ‘magical’ async behaviour and takes away the extra pain involved in explicit thread-based parallelisation. On the flip side, you have to account for the resulting single-threaded JavaScript engine that processes the callbacks from the EventLoop. If you don’t, then the traditional style of writing ‘blocking’ code can and will trip you up!

LIBUV has an EventLoop which loops through the queue of events and executes the associated JS callback function (on a single thread at any given time).

You can have multiple event sources (Event Emitters in NodeJS land) running in LIBUV on multiple threads (e.g. doing file I/O and socket I/O at the same time) that put events in the queue. But there is always ONE thread for executing JS, therefore Node can only ‘handle’ one of those events at a time (i.e. execute the associated JS callback function).

Keeping this in mind let us look at a few such ‘natural’ errors where the code looks fine to the untrained eye but the expected output is not produced.

1) Wave bye bye to While Loops with Flags!

A common scenario is a while loop controlled by a flag variable. If you wanted to read from the console till the user types ‘exit’, you would write something like this using blocking functions:

while (command != 'exit')
    // Do something with the command
    command = reader.nextLine()
end while

It will work because the loop will always be blocked till the nextLine() method executes and gives us a valid value for the command or throws an exception.

If you try to do the same in NodeJS using the async functions, you might be tempted to re-write it as below. First we register a callback function which triggers when the enter key is hit on the console. It accepts as a parameter the full line typed on the console. We promptly put this into the global command variable and finish. After setting up the callback, we start an infinite loop waiting for ‘exit’. In case the command is undefined (null) we just loop again (‘burning rubber’, so to say).

var command = null
// Register a callback function
reader.on('data', function (data) { command = data })
while (command != 'exit')
    if (command != null)
        // Do something with the command
        command = null
    end if
end while

Unfortunately this code will never work. Any guesses what will be the output? If you guessed that it will go into an infinite loop with command always equal to ‘null’ you are correct!

The reason is very simple: JS code in NodeJS is processed by a single thread. In this case that single thread will be kept busy going through the while loop. Thus it will never get a chance to handle the console input event by executing the callback. Thus command will always stay ‘null’.
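You can see this starvation directly. This sketch registers a zero-delay timer and then hogs the JS thread; the callback cannot run until the synchronous code finishes:

```javascript
var fired = false;
setTimeout(function () {
  fired = true;
  console.log("callback finally ran");
}, 0);

// Hog the single JS thread for ~100ms
var end = Date.now() + 100;
while (Date.now() < end) { /* burn CPU */ }

console.log("fired while looping?", fired); // fired while looping? false
```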

This can be fixed by removing the while loop.

var command = null
// Register a callback function
reader.on('data', function (data)
    command = data
    if (command == 'exit')
        // Clean up and stop reading
    end if
    // Here we can either parse the command
    // and perform the required action, or
    // we can emit a custom event which all
    // the available command processors listen for
    // but only the target command processor responds
)

2) Forget the For Loop (at least long running ones)

This next case is a very complex one, because it is very hard to figure out whether it’s the for loop that’s to blame. The symptoms may not show up all the time and they may not even show up in the output of your app. The symptoms can also change depending on things like the hardware configuration and the configuration of the database servers your code is interacting with (if any).

Let us take a simple example of inserting a fixed length array of data items into a database. In case the insert function is blocking the following code will work as expected.

for (var i = 0; i < data.length; i++)
    insert(data[i])
end for

In case the insert function is non-blocking (e.g. NodeJS) then we can experience all kinds of weird behaviour depending on the length of the array, such as incomplete insertions, sporadic exceptions and even instances where everything works as expected!

In case of the while loop example, the JS thread is blocked forever so no callbacks are processed. In case of for loops, the JS thread is blocked till the loop finishes running. This means in our example if we are using non-blocking insert the loop will execute rapidly without waiting for the insert to complete. Instead of blocking, the insert operation will generate an event on completion.
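This can be seen with a small sketch where setImmediate stands in for the non-blocking insert; the loop (and everything after it) finishes before a single ‘insert’ completes:

```javascript
var order = [];
for (var i = 0; i < 3; i++) {
  // stand-in for a non-blocking database insert
  setImmediate(function () { order.push("insert completed"); });
  order.push("insert " + i + " requested");
}
order.push("loop done");
console.log(order);
// [ 'insert 0 requested', 'insert 1 requested',
//   'insert 2 requested', 'loop done' ]
```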

This is part of the reason why NodeJS applications can get a lot of work done without resorting to explicit thread management.

If the array is big enough we can end up flooding the receiver leading to buffer overflows along the way and resulting in dropped inserts. In some cases if the array is not that big the system may behave normally.

The question of how big an array can we deal with is also difficult to answer. It changes from case to case, as it depends on the hardware, the configuration of the target database (e.g. buffer sizes) and so on.

The solution involves getting rid of the long-running for loop and using events and callbacks. This throttles the insert rate by making the inserts sequential (i.e. making sure the next insert is triggered only when the previous insert has completed).

var count = 0
// Callback function to insert the next data item
function insertOnce()
    if (count == data.length)
        // Exit process by closing any external connections (e.g. database)
        // and clearing any timers. Ending the process by force is another option
        // but it is not recommended
        return
    end if
    insert(data[count++],
        function ()
            // Called once current data has been inserted
            event_listener.emit('inserted')
        end function)
end function
// Call insertOnce on the inserted event
event_listener.on('inserted', insertOnce)
// Start the insertion by doing the first insert manually.
insertOnce()

3) Are we done yet?

Blocking is not always a bad thing. It can be used to track progress, because when a function returns you know it has completed its work one way or the other.

One way to achieve this in NodeJS is to use some kind of global counter variable that counts down to zero or up to a fixed value. Another way is to set and clear timers, in case you are not able to get a count value. This technique works well when you have to monitor the progress of a single stage of an operation (e.g. inserting data into a database, as in our example above).
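A sketch of the counter technique, using setImmediate as a stand-in for an async insert (the name insertAsync is made up for illustration):

```javascript
var data = [10, 20, 30, 40, 50];
var done = 0;

// Stand-in for a non-blocking database insert
function insertAsync(item, callback) {
  setImmediate(callback);
}

for (var i = 0; i < data.length; i++) {
  insertAsync(data[i], function () {
    done++;
    if (done === data.length) {
      console.log("all", done, "inserts finished"); // all 5 inserts finished
    }
  });
}
```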

But what if we had multiple stages that we wanted to make sure execute one after the other? For example:

1) Load raw data into database

2) Calculate max/min values

3) Use max/min values to normalise raw data and insert into a new set of tables

There are some disadvantages with the counter/timer approach:

1) Counters and timers add unwanted bulk to your code

2) Global variables are easy to override accidentally especially when using simple names like ‘count’

3) Your code begins to look like a house with permanent scaffolding around it

Furthermore once you detect that the one stage has finished, how do you proceed to the next stage?

Do you get into callback hell and just start with the next stage there and then, ending up with a single code file with all three stages nested within callbacks (Answer: No!)?

Do you try and break your stages into separate code files and use spawn/exec/fork to execute them (Answer: Yes)?

It is a rather dull answer but it makes sure you don’t have too much scaffolding in any one file.

Javascript: Playing with Prototypes – I

The popularity of Javascript (JS) has skyrocketed ever since it made the jump from the browser to the server-side (thank you Node.JS). Therefore a lot of the server-side work previously done in Java and other ‘core’ languages is now done in JS. This has resulted in a lot of Java developers (like me) taking a keen interest in JS.

Things get really weird when you try and map a ‘traditional’ OO language (like Java) to a ‘prototype’ based OO language like JS. Not to mention functions that are really objects and can be passed as parameters.

That is why I thought I would explore prototypes and functions in this post with some examples.

Some concepts:

1) Every function is an object! Let us see, with an example, the way JS treats functions.

function Car(type) {
  this.type = type;
  // New function object is created
  this.getType = function() {
    return this.type;
  };
}

// Two new Car objects
var merc = new Car("Merc");
var bmw = new Car("BMW");

/*
 * Functions should be defined once and reused
 * but this proves that the two Car objects
 * have their own instance of the getType function
 */
if (bmw.getType == merc.getType) {
  console.log(true);
} else {
  // Output is false
  console.log(false);
}

The output of the above code is ‘false’ thereby proving the two functions are actually different ‘objects’.


2) Every function (as it is also an object) can have properties and methods. By default each function is created with a ‘prototype’ property which points to a special object that holds properties and methods that should be available to instances of the reference type.

What does this really mean? Let us change the previous example to understand what’s happening. Let us play with the prototype object and add a function to it which will be available to all the instances.

function Car(type) {
  this.type = type;
}

Car.prototype.getType = function() {
  return this.type;
};

// Two new Car objects
var merc = new Car("Merc");
var bmw = new Car("BMW");

/*
 * Functions should be defined once and reused
 * This proves that the two Car objects
 * have the same instance of the getType function
 */
if (bmw.getType == merc.getType) {
  // Output is true
  console.log(true);
} else {
  console.log(false);
}

We added the ‘getType’ function to the prototype object for the Car function. This makes it available to all instances of the Car function object. Therefore we can think of the prototype object as the core of a Function object. Methods and properties attached to this core are available to all the instances of the function Object.

This core object (i.e. the prototype) can be manipulated in different ways to support OO behaviour (e.g. Inheritance).


3) Methods and properties can be added to both the core or the instance. This enables method over-riding as shown in the example below.

function Car() {
}

// Adding a property and function to the prototype
Car.prototype.type = "BLANK";

Car.prototype.getType = function() {
  return this.type;
};

// Two new Car objects
var merc = new Car();
var bmw = new Car();

// Adding a property and a function to the INSTANCE (merc)
merc.type = "Merc S-Class";
merc.getType = function() {
  return "I own a " + this.type;
};

// Output
console.log("Merc Type: ", merc.getType());
console.log("BMW Type: ", bmw.getType());
console.log("Merc Object: ", merc);
console.log("BMW Object: ", bmw);


The output:

Merc Type:  I own a Merc S-Class

> This shows that the ‘getType’ on the instance is being called.

BMW Type:  BLANK

> This shows that the ‘getType’ on the prototype is being called.

Merc Object:  { type: ‘Merc S-Class’, getType: [Function] }

> This shows the ‘merc’ object structure in JSON format. We see the property and function on the instance.

BMW Object:  {}

> This shows the ‘bmw’ object structure in JSON format. We see there are no properties or functions attached to the instance.