Building a Chat Bot in under 60 seconds with Karmaloop AIML Bot Server

The Rise of the Bots

We have entered the century of Artificial Intelligence; it is finally here. However, we won’t have super smart, human-like AI from day one. We will ease into the process with dumb bots taking over mundane jobs and automating them. As an example, I am already done with manually setting up alarms; that job has gone to Siri and Alexa. I am also done setting reminders; they have taken that too. In this article, let us see how we can build a bot in under 60 seconds.

Karmaloop AIML Bot Framework

Brief History

Chat bots have been around since the 1990s. In the late 90s, Dr. Richard Wallace created an XML-based specification called AIML, or Artificial Intelligence Markup Language. It was clean, simple and easy to customize. I was going through my Computer Science degree in 2005 when I stumbled upon Dr. Richard Wallace’s work with ALICE. ALICE was a super smart chat bot for her time, and her AIML files were carefully crafted. At that time, I was learning C# and decided to build an AIML parser in C# to bring ALICE to life on my computer, and also use her as my final semester project. Long story short, it all went well; I got through my engineering degree and got busy with a job and, later, business. It was not until 2016 that the bug bit me again and I restarted work on the libraries, and in 2018 I finally released the first open source version of what I called the Karmaloop AIML Bot Server. Today, I will show you how you can get started with the solid underlying foundation of the ALICE bot and add your custom capabilities on top.

Download and Run Karmaloop AIML Bot Server

Get the binaries from the Releases page on Github –

If you are on Windows, extract the archive, navigate to the folder and run KarmaloopAIMLBotServer.exe to start the bot server. If you are on macOS or Linux, install Mono and use it to run the same executable:

mono KarmaloopAIMLBotServer.exe

Awesome! Now you should have ALICE bot running on your machine. Open a web browser window and point it to http://localhost:8880/api/ChatUi/index.html

On Windows, you may need to run the following command to make sure the server can open port 8880 for listening to incoming API requests.

netsh http add urlacl url=http://*:8880/ user=Everyone listen=yes

Now chat and have fun!

Customize and add your own conversations

To explore this topic in full, you may first want to apprise yourself of the AIML 2.0 specification –

Step 1.

In the folder where you extracted the binaries, there should be a folder called “aiml”. This is where the AIML files are stored by default. Let us create a new AIML file called magicmirror.aiml and copy-paste the following code into it:

<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0">
  <category>
    <pattern>MAGIC MIRROR IN MY HAND WHO IS THE FAIREST IN THE LAND</pattern>
    <template>My Queen, you are the fairest in the land.</template>
  </category>
</aiml>

Save the file, and restart your bot server. Then at the chat prompt, type the question – Magic mirror in my hand, who is the fairest in the land?

The conversation should look somewhat like below:

As expected! The response is exactly what we coded into the AIML file. That is it. If this took you under 60 seconds, then you built your first chat bot in less than a minute.
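As a small next step, AIML patterns can also contain wildcards. Here is a hedged sketch of a category you could append to magicmirror.aiml (the star tag echoes whatever the asterisk matched):

```xml
<category>
  <pattern>MAGIC MIRROR WHO IS *</pattern>
  <template>The mirror has not yet judged <star/>, my Queen.</template>
</category>
```

Restart the server and try asking: Magic mirror who is Snow White?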

OK, but how can I produce dynamic responses?

Hmm… so you are ready to build something more complex and would like to mash external data into your responses, right? Like a weather skill that can tell you the weather of any city in the world. Let us keep that complex bit for a later post, but if your curiosity is insatiable, simply open the file called “zweather.aiml”. Then open the source code (which you can download from Github) and look up the weather skill. Set breakpoints and watch how the skill executes. If you are good, you should already be able to see the simplicity and power of this approach – but hey, more on that later!

Where to go from here

Now that you know how to create a basic conversational bot with Karmaloop AIML Bot Server, you may be itching to build your own bot and handle complex conversations. Maybe you want to automate arguments with your wife (LOL). Whatever your need may be, you will invariably want to start by reading up on the AIML 2.0 spec, the link to which is shared above. You may also want to visit the Github page and see how to compile from source on your platform. If you need any help, do post in the comments section and I will surely help.

Happy Bot Building!

Programmatically Triggering a Task Scheduler Job with Windows Events


Do you use Windows Task Scheduler to run scheduled batch jobs? It can be used for a great many things, e.g. sending out scheduled email reports or running machine processes at a timed interval. I use it for running daily data imports, running optimization scripts on the database, sending scheduled emails and Push Notifications, etc. In this quick tutorial, I will show you how to trigger a scheduled job programmatically. And just in case you are thinking, “What’s the big deal? Use SCHTASKS,” you should already know that SCHTASKS fails at running tasks that require elevated permissions on the same local server instance.

Problem Statement

Why trigger a scheduled job programmatically?

Task Scheduler is used to schedule a batch job / script / program to run on a schedule. Software like Google Chrome, Apple iTunes, graphics drivers, Adobe Flash, etc. all schedule jobs in the Task Scheduler to check for updates. These are fairly simple use cases and work perfectly well as long as you only want to run the job on a schedule. The problem is when you want to trigger them manually. An example would be an updater script that runs periodically to check for software updates. Now what if you want to give the end user a way to also trigger the update check manually by pressing a button in your app’s UI? Or what if you have a financial web app running on a server that uses Task Scheduler to send out monthly billing statements to customers? You would need to give your users a way to request the last statement at the press of a button as well, or you risk having frustrated customers.

Why not use SCHTASKS?

OK, for those who don’t know, SCHTASKS is the command line interface to Task Scheduler and can be used to create, edit and run jobs manually. Since it can run jobs, the first thought would be to kick off the job using a simple command like SCHTASKS /Run /TN <task-name>

From within C# you could use Process.Start to run that command. But try the same thing when your job needs elevated permissions or another user account to run, and you will see SCHTASKS failing. This is critical because a scheduled job may need to access restricted resources like an encrypted folder or perform administrative system functions. So how do we get around that?


Scheduled jobs can be configured to have triggers. The primary trigger in 90% of use cases is a scheduled time at which the job kicks off. However, there are a few more trigger options, as you will see in the screenshot below.

You will notice that your job can be triggered by a number of trigger mechanisms, a schedule being the most popular. But we want a programmable trigger. “On an event” is perfect for our case because we can generate an event in the Event Log. Let’s get working and see how to make it happen.

Step 1. Create the “Event Source”

You can use the eventcreate.exe command or the PowerShell command shown below.

Command Line

eventcreate /T INFORMATION /ID 100 /L APPLICATION /SO MYEVENTSOURCE /D "Initializing event source"

(Note that eventcreate only accepts IDs from 1 to 1000; it registers the source as a side effect of writing the first event.)
PowerShell 2.0

New-EventLog -LogName Application -Source MYEVENTSOURCE

What the above does is create a distinct event source for us. This allows our job to “listen” for events coming from a specific event source. If we define our own event source specific to our application, we can then take custom actions based on the event ID.

Now, let us review where we are:

  • We know how to create Scheduled Jobs/Tasks with Event based Triggers
  • We know how to create a Custom event source for our application
  • We can define a set of unique event IDs specific to our application (they can be any numbers) that our code knows how to handle
  • What remains to be seen is how can we log an event into the Windows Event Log programmatically

Step 2. Create a Scheduled Task with an Event Trigger

The screenshot below explains it all. Scheduled tasks are easy to create. Most tasks have a time schedule, for example, daily at 12 am. All we are doing differently is adding a trigger that fires on an event, as shown in the screenshot below.

Now define the trigger to listen to an event as shown below

Notice how we picked our custom MYEVENTSOURCE in the Source select box. The event ID, 24311, is a custom event ID and can be any number you want. You may have different tasks listen for different event IDs. For example, I may configure a job to run for Client A when the event ID is 24311, or for Client B when the event ID is 24312. It is purely our choice.
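For reference, the trigger you configure through this UI is stored as an XML event query, which you can also write by hand via the trigger’s “Custom” / “New Event Filter” XML tab. A sketch for our example source and event ID, assuming the event is written to the Application log:

```xml
<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">
      *[System[Provider[@Name='MYEVENTSOURCE'] and EventID=24311]]
    </Select>
  </Query>
</QueryList>
```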

Step 3. Log an Event from code

You can do this with almost any programming language, but in my example I will use C#.

// requires: using System.Diagnostics;
var source = "MYEVENTSOURCE";
try {
    EventLog.WriteEntry(source, "Run Import", EventLogEntryType.Information, 24311);
} catch (Exception ex) {
    Console.WriteLine(ex.Message); // e.g. the event source is not registered
}

Yep, now watch your Scheduled Task kick off automatically when this C# snippet logs an event.
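If your caller is a script rather than an app, the same event can be written from Windows PowerShell as well; a sketch using the built-in Write-EventLog cmdlet:

```powershell
# Writes event 24311 from our custom source; the task listening for it will kick off
Write-EventLog -LogName Application -Source MYEVENTSOURCE -EventId 24311 -EntryType Information -Message "Run Import"
```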


Well, now you know a neat trick that can start your job programmatically, on demand. I have found this extremely useful in scenarios where I was earlier planning to use a Windows Service; I could eventually avoid writing and configuring services by simply using the powerful Task Scheduler that is part of your Windows OS and configuring programmatic event triggers. Hope this helps you in some way.

REST APIs – .NET Core Web API 2 versus SailsJS versus Express TypeORM

Building a REST API driven application? You are certainly spoilt for choice with the numerous frameworks in all kinds of tech stacks that have hit the market of late. While we cannot practically cover the pros and cons of all of them, there are three frameworks that stand out in the open source world – .NET Core Web API 2, which comes from the Microsoft shop; SailsJS, which internally leverages Express and the Waterline ORM; and an Express + TypeORM combination. All three have very distinctive advantages and drawbacks that we will look at. Before we start, I do have a confession: Loopback has intentionally been left out of this discourse. I have a bit of a history with that framework. Not that I hate it, but back in 2015, I went from choosing Loopback to build a complex enterprise grade API backend to phone calls with IBM sales staff explaining to me why paying several thousands of dollars was the way to go for us. Ever since I hung up on the sales people at IBM, I have never looped back to Loopback (pun intended), and discovered far more interesting projects instead. However, things have changed dramatically with the release of Loopback 4, and it deserves to be covered in a completely separate write up.

SailsJS (NodeJS + Express + Waterline ORM)

SailsJS beats every other framework hands down when it comes to the time it takes to get your first API set up and running. There are very good reasons for it to have earned that title. I have architected several solutions with a SailsJS backend – a high security chat app, a social networking website for pet owners, a job portal and a food ordering system, to name a few.


  • End to end API ecosystem – When you start with SailsJS, the only additional dependency you need to address is the connector to the Database, and nothing else practically is required to be configured (at least not right away). But even that is optional, you could start with API development even before having finalized your Database system. Let the argument between NoSQL vs RDBMS rage on while your team can actually start building the API on the side. Sails comes packaged with sails-disk which stores/retrieves the data from local disk or in memory. Being based off of Express means it supports policies, familiar req res objects, controllers and models almost the same way that Express does
  • Blueprints API – When you can start making GET, PUT, POST, DELETE calls almost immediately – life really doesn’t get better than this. Imagine running these short few commands to get your first API up and running:

    # npm i -g sails
    # sails new todo-api
    # cd todo-api
    # sails generate api items
    # sails lift

    Off you go! Now point your Postman to localhost:1337/items and you can do your first POST (post any JSON; it will get stored in sails-disk), GET, PUT, DELETE, etc.
  • Express under the hood – In the Node world, Express is the gold standard for REST API development and that’s what is kicking stuff under the hood in Sails. The good part is, if you are familiar with Express, you already know how to work with 50% of SailsJS
  • Connectors – You have connectors for every major Database system – RDBMS as well as NoSQL.
  • Waterline ORM – While Waterline isn’t the best ORM in the world, it gets the basic job done, and pretty quickly. It lacks advanced features, specifically with respect to NoSQL database systems, which makes it a challenge to work with for more advanced tasks. For example, you cannot query nested JSON objects with the ease or simplicity that the native query language provides.
  • Sails Sockets – Web sockets are first class citizens with Sails. You can listen to changes on any model you may have created from the get go without having to write any additional code


The cons? There are several. Read on.

  • Waterline ORM – The underlying Waterline ORM can be a terrific boost when you are building a quick API backend for a relatively simple app. The moment you get into advanced queries, cracks start showing up. The biggest advantage that any ORM provides is database vendor independence. Yes, Waterline gets you that, but not efficiently. Inevitably, you have to fall back on native queries, and that’s when you lose vendor independence
  • TypeScript Support – While it will be incorrect to say that TypeScript support is missing, it is by no means a first class citizen in the Sails world. You can still write your Models and Controllers in TypeScript, but that is the end of it. The SailsJS framework itself still relies on the loosely typed underlying Javascript objects. Those who understand the perils of loosely typed backend programming are going to be immediately turned off by this, and understandably so
  • Production Deployment – OK, most problems related to production deployment are shared across NodeJS based frameworks, including Express. When you want to take your API to a serious production environment, several key challenges creep in. For instance, there is no standard, simple way of configuring high grade web servers like Apache or IIS to work well with SailsJS or Express apps. The steps are convoluted and unorganized. E.g. you would like to use pm2 to manage your NodeJS process, but how do you get pm2 to work efficiently with Apache or IIS? Go figure.

.NET Core Web API 2

Microsoft under Satya Nadella has gotten some things done right in a phenomenal way. The first was to embrace open source rather than compete with it. As a result, we got .NET Core and a bunch of application building capabilities along with it. We can now build and deploy .NET command line apps, ASP.NET MVC web apps and RESTful APIs with Web API 2 using .NET Core, and remain fully in the realm of a cross-platform, open source code base. Having said that, why would we discuss a .NET based framework amongst Node based frameworks? Because in my opinion, these frameworks are the fastest way to build RESTful APIs across the entire technology landscape (not just in the Node and .NET worlds), and I would love to be challenged on that thought. Back to Web API 2.


  • C# – Strongly typed, highly object oriented, mature and beautiful. Very few languages can pride themselves on these qualities and be backed by hundreds of thousands of programmers. For the folks who are saying in their heads, “Ok smarty, tell me one language feature that gets me to move my butt onto the C# side”, for them I have one word answer – LINQ
  • .NET Core – Open source, cross-platform and fast, .NET Core negates all the drawbacks that were associated with the traditional .NET Framework. Plus, the benefit of having a large programmer base working with these technologies is a BIG plus in tackling the learning curve
  • Entity Framework Core – Those who have worked with EntityFramework 6 or earlier, know the power of this now-mature ORM framework. Without going too much into the nuts and bolts of the framework itself, let me say this in a nutshell – Waterline x Nitro boosters = EntityFramework. EFCore is fast, efficient, powerful and highly configurable.
  • Rapid API Development – While beating Sails in speed of getting started is quite a challenge for most frameworks, .NET Core Web API 2 isn’t too far behind, thanks to EFCore. You can create migrations for your Models to the database or scaffold models based on your database schema relatively easily. In most cases it is only a few short commands
  • Visual Studio Code, Azure, IIS and more – This is one of the major strengths of building with .NET Core. You feel right at home with other Microsoft offerings. Develop on VS Code, deploy on Azure, all in just minutes! Interestingly, Visual Studio Code does not come prepackaged with C# support. You need to add a plugin, but once that’s done, it’s easy as pie. You can build, debug and deploy apps from right within Visual Studio Code and work on your favorite Linux distro or on a Mac while at it (OK, you get the point – for me nothing is more important than being cross-platform, not only for deployment, but also during development)
  • Compatible with Enterprise Systems – When building for Enterprises, you will encounter challenges you never would while developing for smaller projects. Take an example of Integrated Security with SQL Server. I am building an API and I cannot store plain text passwords in connection string configurations. The only way I can connect to a SQL Server system is by using Integrated Security. I have gone through this painful exercise with an Express API and let me tell you, it was not pretty. We tried convincing the IT Security department of why it was just fine to store credentials in environment variables but all of that didn’t fly. We eventually found a workaround but the lack of Integrated Security support with most Node based SQL Server connector frameworks was a harsh reality we faced much, much further down the line during deployment.
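To give a flavor of the “few short commands” mentioned under Rapid API Development above, a typical EF Core migration flow looks roughly like this (a sketch; assumes the dotnet-ef CLI tool and a project with a DbContext, and the connection string is purely illustrative):

```shell
# install the EF Core command line tool (once)
dotnet tool install --global dotnet-ef

# create a migration from your model classes and apply it to the database
dotnet ef migrations add InitialCreate
dotnet ef database update

# or go the other way: scaffold entity classes from an existing database
dotnet ef dbcontext scaffold "Server=.;Database=Shop;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer
```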


  • Not Enough Connectors – Entity Framework Core does not support every database system and its brother. You have to research carefully before you make this jump. Here is a list of what’s supported so far. If you are certain that the database vendor will not change in the future and what you have now is supported by EFCore, go for it. You could still code for unsupported databases using direct queries, but that would take you away from EFCore and the benefits it brings. It would then be equivalent to writing an API in Express (instead of Sails)
  • No TypeScript Support – Full stack engineers are the future. And they love their programming language to be the same across the front and back ends. While you can achieve that with something like Angular, TypeORM and Express, you cannot claim that with .NET Core Web API 2, at least for now. The reason I say that is that conceptually, .NET Core is language independent and may someday support compilation of TypeScript into Intermediate Language (IL). But till that happens, you will end up coding in different front end and back end languages
  • Microsoft – Those who have been in the industry long enough know the perils of locking yourself in with Microsoft. Even though, as I said at the start, Microsoft has changed significantly under its new CEO, anything coming out of Redmond has to be taken with caution

TypeORM and Express

As the title suggests, this is not a single API framework like SailsJS; it is a combination of two frameworks that achieves the same or a similar result. TypeORM is exclusively an ORM framework. Express, on the other hand, is exclusively an API framework and has no built-in ORM, or even database connectivity for that matter. The beauty is that both frameworks focus on their strengths while playing well with each other, just like Web API 2 + EFCore.


  • TypeScript – Finally! We can say that our programming language would remain the same in the front and the back end. This is a major advantage for those aiming to build cross-functional teams working on Angular or React web apps
  • Excellent ORM features – What bogged down SailsJS was Waterline, but in the case of TypeORM, being a truly world class ORM is the entire focus. One look at TypeORM’s documentation will have you convinced that it can handle most of your complex ORM needs. It still lacks the beauty of LINQ with EFCore, but it gets the job done really well! It has the most powerful Query Builder in the NodeJS world, IMHO
  • Support for Indices – This is important if you are looking for database vendor independence. Telling the ORM which columns have indices is a neat way to ensure that whenever you migrate to a different provider, the indices go along. Plus, it is great for Continuous Integration because your test database can be rebuilt with indices
  • Listeners, Migrations, Query Builder and more – I will stop short of explaining each of these, but do read about them. These features make TypeORM stand out and a much better candidate for enterprise grade use cases
  • Well Documented – The official website should answer most of your questions. It is a well documented framework
  • Connectors – Has a wide selection of database connectors. The current connectors support includes MySQL / MariaDB / Postgres / CockroachDB / SQLite / Microsoft SQL Server / Oracle / sql.js / MongoDB
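To illustrate the Query Builder mentioned above, here is a hedged TypeScript sketch; the User entity, its columns and the connection setup are hypothetical, and the typeorm package is assumed to be installed and configured:

```typescript
import { Entity, PrimaryGeneratedColumn, Column, getRepository } from "typeorm";

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @Column()
  city: string;
}

// Find users in a given city, newest first, using the Query Builder
export async function usersInCity(city: string): Promise<User[]> {
  return getRepository(User)
    .createQueryBuilder("user")
    .where("user.city = :city", { city })
    .orderBy("user.id", "DESC")
    .getMany();
}
```

The parameter placeholder syntax (:city) is what keeps such queries safe from SQL injection while remaining portable across the supported databases.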


  • Poor “Getting Started” experience – The setup is nowhere near as simple as SailsJS or Web API + EFCore. You are on your own to define the project structure and code layout. In short, you either start with a boilerplate or create your own structure from scratch. There are decent boilerplates to start with, but you have to watch out for how up-to-date they are. You will need to set up a project with TypeORM, Express and any other dependencies you anticipate
  • Not an end-to-end API ecosystem – Unlike Web API 2 + EF Core, these two frameworks don’t recognize each other out of the box. They rely on your boilerplate or your project setup to get anywhere close to a “point-and-shoot with Postman” scenario

Processing Excel XLSX files on browser client-side with Angular

Every programmer working on data-driven applications finds themselves in this situation eventually – upload and process an XLSX Excel file. You have a plethora of frameworks and libraries to handle Excel XLSX files on the server side, be it in C# or Java. But with the world moving towards Single Page Application frameworks like Angular and React, we developers face a catch-22 – should we handle XLSX file uploads on the server side or the client side? Which is better? Having gone through this very recently, read on for my notes on the subject.

How the heck on the browser client-side?

There is one framework in particular to the rescue – SheetJS/js-xlsx – it gets you exactly what you need in a nifty little package. Browser support is fantastic, with every modern web browser supported – yes, even the mobile ones! Check out the browser support matrix –

Browser support matrix

For the Javascript world, my usual saying is “If the dang thing works on IE9, you call it a good night…” and that belief hasn’t failed me in years.
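To make this concrete, parsing a user-selected file in the browser takes only a few lines. A hedged sketch (assumes the SheetJS global XLSX is loaded on the page, and that this handler is wired to a file input’s change event):

```javascript
function handleFile(event) {
  const file = event.target.files[0];
  const reader = new FileReader();
  reader.onload = function (e) {
    // Parse the raw bytes into a workbook object
    const workbook = XLSX.read(new Uint8Array(e.target.result), { type: "array" });
    // Convert the first sheet into an array of row objects
    const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
    const rows = XLSX.utils.sheet_to_json(firstSheet);
    console.log(rows);
  };
  reader.readAsArrayBuffer(file);
}
```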

And Server Side?

Explaining how to do this in C# or Java would be a drag; there are countless ways to do it and the web is full of them. Just look it up. What you probably want to use here, though, is Node. Guess what? The same js-xlsx module works on NodeJS too, with practically the same code. The major difference is in how you access the raw file ArrayBuffer. Read their server side demo code and you will see how straightforward it is – js-xlsx Server Side

Where do you handle it then? Browser or the Server?

This is the trick question, and the answer always starts with “it depends”. It really does depend on a few factors. The first one is: how heavy is your Excel file? Does it run into hundreds of thousands of rows or more? Stick with the server side. Handling files that large would suck up the memory on most browsers and make your app prone to crashes. In my experience so far, files with under 10,000 rows are fine to process on the browser client side on most machines (even ones with as little as 2 GB of RAM).

OK why even bother with Browser side then?

Speed. No really, speed. Picking an XLSX file in the FileChooser and having it rendered as an editable table takes just milliseconds. Your users could be selecting an XLSX file, editing or viewing the contents, and being on their way in a fraction of the time it would take for the file to be uploaded, processed on the server and brought back to the client side for review or editing. It really boils down to user experience. As long as you can process the XLSX files on the browser side without crashing it, you will be making your users’ lives so much simpler, and you should prefer handling the workbooks on the client side. Heavy transactional or big data workloads should, without a second thought, be handled on the server side.


I hope this short write up helps you decide what direction to take when processing XLSX workbooks. I hope you also find the referenced js-xlsx library handy and use it to handle workbooks in Javascript code (whether on the server or the client side).

PostgreSQL – NoSQL and Javascript Functions with PLV8 – Part 2

PostgreSQL and its NoSQL features are awesome, right? If you don’t know what I am talking about, you should check my previous post for an introduction to NoSQL in Postgres. While it doesn’t go in depth into the NoSQL features, it does get you started much faster, knowing that you can manipulate and operate on JSONB/JSON data using the Javascript language.

In this post we are going to talk about a bit more of the NoSQL features and functions that we can use to navigate our way around NoSQL columns. Some of the things we are going to look at:

  • Making JSONB/JSON columns part of our SELECT or UPDATE (SQL) queries
  • Modifying JSONB fields partially (or filling in details that need to come from other relational fields)

Querying JSONB/JSON NoSQL columns

In the simplest of use cases, you will find a need to select your rows based on a field’s value within the JSONB. For the purpose of our exercise, let us assume the following table and possible values of the column cells.


id | owner_id | post_text          | comments
---+----------+--------------------+----------
 1 |       27 | PostgreSQL is fun! | [ ]

In the above example, we have a posts table which has a column called comments of type JSONB. Why JSONB? Because then we don’t need a separate comments table with a post_id column holding a foreign key reference to our posts table. The comments JSON structure could look like the following:

[
  {
    "comment": "Yes indeed it's fun",
    "timestamp": 1234567890,
    "owner": 12
  },
  {
    "comment": "Where I live, Mongo rules!",
    "timestamp": 1234567890,
    "owner": 18
  }
]

The above structure, if you notice, is firstly a JSON array, since it starts with square brackets. Secondly, in each comment in the array, we do not repeat information like owner_name or owner_profile_picture_url. We make a safe assumption that these details are available in a single profiles table which holds the profile details of each user in the system. So far so good, right? It absolutely seems like the perfect spot to use the NoSQL JSONB datatype. But there are some problems we will encounter later, when we get down to building usable APIs against our table that need to be consumed by our front end apps.
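Since each comment stores only the owner’s id, whatever reads this structure eventually has to stitch the profile details back in. Here is a plain JavaScript sketch of that stitching, with hypothetical profile data, just to make the shape of the final payload concrete:

```javascript
// Hypothetical profiles, keyed by id (in the real system this is the profiles table)
const profiles = {
  12: { name: "Asha", profile_picture_url: "/pics/12.png" },
  18: { name: "Ravi", profile_picture_url: "/pics/18.png" }
};

// The comments JSONB array, exactly as stored on the post row
const comments = [
  { comment: "Yes indeed it's fun", timestamp: 1234567890, owner: 12 },
  { comment: "Where I live, Mongo rules!", timestamp: 1234567890, owner: 18 }
];

// Enrich each comment with the owner's display details
const enriched = comments.map(c => ({
  ...c,
  owner_name: profiles[c.owner].name,
  owner_profile_picture_url: profiles[c.owner].profile_picture_url
}));

console.log(enriched[0].owner_name); // "Asha"
```

The rest of this post is about getting Postgres to produce this enriched shape for us, instead of doing it in application code.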

Problem 1

How do I get the owner_name and owner_profile_picture_url for each “owner” of a comment? In the traditional world of RDBMS, it would be a simple join. That is what we will do with the owner_id in the posts table.

SELECT o.name, o.profile_picture_url, p.post_text
FROM posts p INNER JOIN profiles o ON p.owner_id = o.id;

Let us now see how we would do something similar with the comments JSONB array. But before we get to the array, let us see what difference it would have made if, instead of being an array, it was a single comment.

{
  "comment": "Yes indeed it's fun",
  "timestamp": 1234567890,
  "owner": 12
}

In this case we would use one of the arrow selectors that are part of PostgreSQL, and do something like this:

SELECT o.name, o.profile_picture_url, p.post_text,
  p.comments->>'comment' AS comment,
  co.name AS comment_owner_name,
  co.profile_picture_url AS comment_owner_profile_picture_url
FROM posts p INNER JOIN profiles o ON p.owner_id = o.id
LEFT JOIN profiles co ON (p.comments->>'owner')::int = co.id;

Does that give you a fair idea of how to pick a field from a JSONB column? By extension, if you have a meta JSONB column in a table, you could extend the meta column with additional future fields and never have to change the schema itself!

But the above hasn’t solved our problem yet, i.e. how do we do the same for a JSONB array? Before that, we need to look at a few more basic things. For instance, selecting is one thing, but how do we filter based on the JSONB array? Let us look at that next.

Searching Through a JSONB Array

The simplest way to search a JSONB array from within a SQL query is to use the @> (containment) operator. This operator lets you search through the individual elements of an array. Let us presume a Facebook style ‘likes’ column on the posts table. Instead of solving it with a traditional RDBMS style approach, where we would introduce a likes table holding the ids of all the users who have liked a certain post along with a post_id foreign key, we will use a JSONB array to store all the liker IDs right within the posts table. In the example below, the post with id 1 has been liked by the profiles with ids 18 and 4.

id | owner_id | post_text          | comments | likes
---+----------+--------------------+----------+---------
 1 |       27 | PostgreSQL is fun! | [ ]      | [18, 4]

Now, if we need to find all the posts which have been liked by the user with id 4, we would do something like this –

SELECT * FROM posts p WHERE likes @> '[4]'::jsonb

We have to take explicit care with the operator, the operands and the data types of the operands. It is very easy to get sucked down a rabbit hole if you miss these details.

Problem 2

How do we join the details from a JSONB array with the concerned tables and fetch more details, like the name of the user or their profile picture? We left a similar question unanswered in Problem 1, which we will now answer.

The jsonb_array_elements function is almost a must-use when we get down to opening up an array, picking the elements we need and joining them with another table. What this function does is open up and break the array into its elements, as if each element were a cell in a set of rows. So if you were to do something like this –

SELECT jsonb_array_elements(likes) AS profile_id FROM posts
WHERE id = 1;

– it would return something like this –

|    profile_id   |
|-----------------|
|       18        |
|       4         |

Now we can use this to join on the profiles table and fetch the user details. How we choose to do it is up to us as individual developers, but here is a simple way of doing it:

SELECT, p.profile_picture_url
FROM profiles p
     INNER JOIN
     (SELECT jsonb_array_elements(likes) AS profile_id FROM posts
      WHERE id = 1) pid
     -- profile_id comes out as jsonb, so cast the bigint id for the comparison
     ON pid.profile_id = to_jsonb(;

This is extremely powerful. It simplifies our otherwise complex ER diagram by an order of magnitude. I no longer need redundant data, or the several tables that result from normalization with id columns spread all over. Things can be kept concise as long as you are cognizant of your NoSQL and RDBMS hybrid design. Keep in mind that NoSQL by definition does not enforce a schema, which means that you could throw anything out of place into the JSONB columns and it wouldn’t complain. So an obvious design choice would be to use the relational data model where it makes sense and NoSQL where it doesn’t.

For the final leg of this post, we will answer the question posed in Problem 1, i.e. how we can modify the JSON output so that it includes all of the information needed to send out to our consumers, e.g. an API service.

Modifying Your JSONB Response

Being able to search through and connect data with JSONB is one step in the right direction. However, being able to return concise information in modern data formats like JSON right from within your queries is what we are gunning for. The simplest way to achieve this is using PLV8, which embeds a native JavaScript programming environment in Postgres. You can modify JSON objects just the way you would in web environments.

But in order to be cognizant of performance, it pays off to learn some of the functions in Postgres that let you modify a JSON/JSONB column on the fly during a query. Let us revisit the problem we left unsolved in Problem 1 and also return the names and profile picture URLs for the people who have commented on a post. For this we will use the function called jsonb_set.

SYNTAX: jsonb_set

jsonb_set(target jsonb, path text[], new_value jsonb[,create_missing boolean])
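As a mental model, jsonb_set walks the path into the target and replaces the value at the end of it, or inserts it when create_missing is true. Here is a rough JavaScript sketch of those semantics for simple object keys (my own illustration of the behaviour, not the real implementation, which also handles array indices):

```javascript
// Rough sketch of jsonb_set(target, path, newValue, createMissing)
// for object keys only.
function jsonbSet(target, path, newValue, createMissing) {
  // jsonb_set returns a new value; it does not mutate its input
  var copy = JSON.parse(JSON.stringify(target));
  var node = copy;
  // walk down to the parent of the final path element
  for (var i = 0; i < path.length - 1; i++) {
    node = node[path[i]];
  }
  var lastKey = path[path.length - 1];
  // only create a brand-new key when createMissing is true
  if (lastKey in node || createMissing) {
    node[lastKey] = newValue;
  }
  return copy;
}

var comment = { owner: 1, comment: 'i like it' };
// Equivalent of: jsonb_set(comment, '{name}', '"chris"', true)
console.log(jsonbSet(comment, ['name'], 'chris', true));
```

Note how the original comment object is left untouched, matching the immutable behaviour of the SQL function.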

Once you get the hang of using jsonb_set, you can manipulate JSONB objects right from within a query. It’s time to solve our problem –

SELECT jsonb_set(p.comments, '{name}', ('"' || || '"')::jsonb, true) AS comments
FROM profiles pf INNER JOIN (SELECT
                                      jsonb_array_elements(comments) AS comments
                               FROM posts) p
         -- ->> returns text, so cast it before comparing with the bigint id
         ON (p.comments->>'owner')::bigint =;

{"name": "chris", "owner": 1, "comment": "i like it"}
{"name": "abi", "owner": 2, "comment": "me too!"}

In the above solution, we added a name field to the JSONB comment. If you notice, this combines the solutions from both Problem 1 and Problem 2 to produce the right JSONB output. Just the way we added the comment owner’s name to the JSONB, we can fetch as many details as we like and append them to create a formatted JSON that is just the way our end consumer apps want it!

Now venture out into the world of NoSQL and JavaScript PLV8 and tell me if it enhances or spices up a relational DB setup. Bon voyage, mes amis!

PostgreSQL – Writing Javascript Functions in PLV8 with NoSQL

PostgreSQL is awesome, right?! We are on our fourth successful project with PostgreSQL 10.3 as our data persistence layer (database), and with each implementation, we are loving it more. At first, it was the NoSQL features within a relational environment that got us hooked. It is just so much easier to convince the dinosaurs (old techies in their post-50s who tend to have an adverse opinion of any new tech) to go flirt with NoSQL. In my experience, as long as we stayed in their comfort zones by keeping 90% of our data model relational and only about 10% NoSQL in our structure – everyone was happy.

How did we introduce NoSQL in the traditional world of RDBMS developers?

The first thing we did was to add a “meta” column of type JSONB to almost every table. It was almost invisible to the naked eye on an ER diagram, and no one bothered to ask much about a column named “meta” at first. That changed drastically over time. Any time someone realised that they actually needed a many-to-many relationship between two tables, we would lap up the opportunity to show off what NoSQL could achieve with a minimal amount of changes – and with elegance. The neglected poor old “meta” column that had spent most of its lifetime remaining “null” now sprang into action and solved a real-world problem.

To take an example, assume two tables, one called Restaurants and another called Menus. Initially we designed the system believing that a restaurant can have multiple menus (while in real life it’s only a single menu for most), so we addressed the issue with a one-to-many relationship between Restaurants and Menus – i.e. one Restaurant could have several Menus. As time went on, we encountered a client who had several restaurants, and each restaurant had several menus (depending on what time of the day you went there). Now, unfortunately, our old-fashioned approach needed work-arounds to solve this problem, because there was no easy way to make a menu be part of several restaurants. We decided to solve it using two approaches. The first one was a traditional crosslink table. The second was adding a JSON array field called restaurants into the meta JSON of the menus table, and vice versa in the restaurants table. I won’t go into much detail, but you already get the idea about which solution was more elegant. NoSQL clearly won the preference.

PLV8 JavaScript Functions

OK, now we are ready to dive into the world of NoSQL inside of JavaScript and look at what PLV8 can do for us. The biggest criticism we would usually take for adding NoSQL into our data model was about how non-standard and cumbersome it was for us to use PostgreSQL’s JSON functions to play around with the NoSQL data. And yes, it isn’t pretty, and neither is it a standardized approach that someone from the world of Oracle or SQL Server could easily familiarise themselves with. Say hello to the PLV8 extension! Now we have a standard programming language called JavaScript that is understood and known by a large group of developers. The adventurous kinds in the area of RDBMS have at some point or another dipped their toes in NoSQL and encountered JavaScript along the way. Those were the ones I convinced to explore PLV8 – and eureka! – in a short amount of time we had a good chunk of functions written in JavaScript living right beside the traditional PL/pgSQL functions.

OK enough, show me how it’s done

Step 1.
Add the PLV8 extension to your PostgreSQL database.

CREATE EXTENSION IF NOT EXISTS plv8;

Step 2.
Write your first function!

CREATE OR REPLACE FUNCTION public.getreviews(postid bigint)
 RETURNS jsonb
 LANGUAGE 'plv8'
 COST 100
AS $BODY$
 var plan = plv8.prepare('SELECT COALESCE(reviews, \'[]\'::jsonb) as reviews FROM posts WHERE id = $1', ['bigint']);
 var reviews = plan.execute([postid])[0].reviews;
 for (var index = 0; index < reviews.length; index++) {
    var review = reviews[index];
    // do something with your review object
 }
 return reviews;
$BODY$;

What the above function achieves is simply to take a postid and return all the reviews from a NoSQL field. But if you are a JavaScript junkie, then you already know how to open Pandora’s box now! You can manipulate the JSON far more easily than with the inbuilt JSON functions in PostgreSQL, and pass it around. In the above example, note a few things of importance. Number one is the plv8 object, which acts as our bridge to the PostgreSQL database. Second is the fact that your regular PL/pgSQL is no longer a first-class citizen within those $BODY$ start and end markers. We have gone JavaScript!
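For instance, once the reviews are an ordinary JavaScript array inside the function body, reshaping them is plain JavaScript. Here is a hedged sketch of what the loop’s “do something with your review object” could be (the review fields rating and text are hypothetical, not from a real schema):

```javascript
// Hypothetical review objects, as they might come out of the reviews column.
var reviews = [
  { rating: 5, text: 'great' },
  { rating: 2, text: 'meh' },
  { rating: 4, text: 'good' }
];

// Keep only positive reviews and tag each with a label – this exact code
// would run unchanged inside a PLV8 function body before "return".
var positive = reviews
  .filter(function (r) { return r.rating >= 4; })
  .map(function (r) { r.label = 'positive'; return r; });

console.log(positive.length); // 2
```

No jsonb_set gymnastics, no casting – just array methods, which is precisely the appeal of PLV8.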

I have kept this short to serve as an introduction and motivation to help interested developers push the NoSQL agenda. Cheers!

India-China Standoff – Checkmate – Part 2

In this post, we will further explore the strategic implications of the ongoing standoff and look at the broader perspective. We covered initial ground in part 1 of this post, which you can read via the link.

It’s all about CPEC (OBOR)

Analysts have been speculating about the reason for and timing of the Doklam standoff for more than two months now. Several scenario outlines pointed out in my last post have become reality, for example the statement from the Chinese military that they don’t foresee a military engagement, especially the short border war that the Chinese media had been hyping up. Is that the acceptance of defeat we have been looking for? Not quite. Doklam, for both China and India, is not as important as the CPEC, and the two countries are at opposite poles on the subject. China understands how critical it is for them to bypass the Indian-controlled Indian Ocean to keep their economic growth on the right trajectory. At the same time, for India it is a violation of the disputed territory of Kashmir, which India claims in its entirety. China has been “salami slicing” its way into Kashmir, and anyone with enough interest can check the border regions with China on Google Maps in satellite view and clearly see how China is encroaching upon Indian territory year by year. Go zoom in on Demchok and you will see their roads being built well inside undisputed Indian territory. Why are the Chinese doing this? It’s all about the CPEC. China understands that to continue its dominant economic rise it needs to export more and export faster, and of course, cheaper is always better. The CPEC forms a large part of that equation. It reduces China’s bottleneck at the South China Sea and the Indian Ocean’s Strait of Malacca. Looked at the other way round, China’s oil imports go predominantly through the Strait of Malacca, a spot the Indian Navy dominates due to its strategic positioning in the Andamans.

Why does CPEC terrify India?

Not in 5 years, not in 10 years, but in 20 years the CPEC can eventually become the lasso around India’s neck. Any time India does something against Chinese whims and fancies, the lasso can be tightened and India can be brought to its knees. China’s biggest pain point in its north-western theatre is supply lines and local support. The CPEC changes that by making the region economically active, with more and more establishments sprouting up to support the supply lines. The local population, which is largely anti-China, then starts tilting towards Beijing due to the economics involved. If China ever decides to help and support Pakistan, India would stand a lesser chance of lasting than it does today.

What can India do today?

India has already done something today to at least begin addressing the issue. Firstly, it has called China’s frenemy bluff. China has spewed venom all over India via its state-controlled media, exposing the dragon’s true intent all along. China has always considered India a rival and Pakistan a puny pawn in the game. Its philosophy has throughout been that of keeping India at bay, letting it tackle Pakistan, while it itself silently upgrades infrastructure and boosts trade. If India has to rise, it has to take the dragon head on, and the Doklam standoff is a step towards that eventual goal.

How will things turn out from here on?

China will remember Doklam. It will certainly lose the Doklam battle, but it won’t forget it that easily. Depending on Xi’s fate, China will either respond militarily in another sector where it has the military advantage, or it will make amends to show the world while slowly lighting up the “insurgency fire” in the north-eastern states. It would get its little pawn in the game, Pakistan, to play the terrorism card in the Kashmir valley.

What if there is a Short-war?

Both sides will be affected even if it’s a short war. China’s economy may be the second largest, but it is in tatters. The debt-to-GDP ratio is piling up on them unrealistically, and the gamble can totally backfire. If China had wanted a short war, it would have done it by now. It knows the risk. Very little is known about the real state of the Chinese economy, but it is by no means doing well. Figures known to the public tell a grim story. Is it a balloon set to explode? Possibly. But so is India’s. India isn’t doing all that well itself, but it is clearly in a far superior position to China. Militarily, China will lose, and there is a simple explanation for this. India is positioned strategically and has supply lines closer to the mainland. China, on the other hand, will have to fetch supplies from exorbitant distances, making it easier for India to break its supply lines. In fact, the Doklam standoff is precisely about ensuring that India never loses control of its supply lines.

Even more, India may want to use Chinese first-shot aggression as an excuse to render the CPEC unstable and useless. How? Siachen. We do control the highest battlegrounds in the region and could use that to our advantage, giving the Chinese a tough time with the CPEC. If India can stand by its recent statement about re-acquiring PoK, and put the plans into action, it would be a disaster for China. It would lose the Strait of Malacca as well as the CPEC, leading to an eventual crash of the Chinese economy. It would be far more dangerous for China to attack India than to merely use it as rhetoric.


While India may rest assured of a victorious outcome in a short war, it does need to worry about China’s increasing military supplies to insurgents and the Maoists. A proxy war is what India is really bad at fighting. We will have to invent and invest in our own bag of proxy wars against China, especially with Tibet and Xinjiang. We will then have to brace for the long haul and patiently wait for India to fight it out. All of this becomes highly unlikely, though, if the Modi government succumbs to defeat in the 2019 elections.

India-China Standoff – Checkmate – Part 1

India and China have been in a military standoff in the Doklam region and the tri-junction area of the Sikkim sector. The disagreement, from China’s angle, is that Indian troops entered undisputed Chinese territory and blocked their road construction. They also claim that the Doklam region is disputed between China and Bhutan, and that India has no part to play in a bilateral dispute. India’s angle is not very clear to the public, apart from defending the Siliguri corridor, also known as the Chicken’s Neck. Arguments from both sides, in my humble opinion, are foolish, because both governments have strategic objectives. For China, building a road leading to nowhere is foolish and serves no purpose other than sending military equipment out to the front more quickly and effectively. India, too, is giving unwarranted importance to the Doklam plateau, because in the event of a full-scale military confrontation China would lose Doklam first, due to the clear advantages the Indian military has in the region.

It is imperative we look at what the geography of this region looks like and what may happen in case of a full confrontation or a limited tactical confrontation.

India - China border standoff 2017

The interesting bit is that this standoff has two very distinct sides: one public, and the other private yet strategic. We will be exploring the strategic angle in this post series because, in all honesty, that is the angle the government is unable to disclose publicly, yet it is what it has in mind as the primary objective. Being no expert by any means on military strategy, I would still like to make a humble effort at decoding it. Here goes the outline from India’s perspective:

  • A strong protest by India (read Prime Minister Modi) against China’s dual play – play with us or play against us – pick a camp and stick to it
  • We will block any attempts by Chinese at “Salami-Slicing” tactics
  • If we can’t have a full scale confrontation with Pakistan due to Nuclear deterrence issues, neither can China with India – hence – the checkmate
  • Force China to up the ante and explore all confrontation options – come to the realisation it can’t do much – and then get to the negotiation table to re-build the entire partnership
  • Realising a full scale confrontation will destroy India and severely damage China, tame the dragon to play along bilaterally, because China isn’t a Kamikaze state but a mature world power
  • Having a humongous trade deficit between the two countries, let China know it has the lion’s share to lose even in the case of a limited-conflict war theatre
  • Send a message to all countries having disputes with China, especially in the South China Sea, that the dragon has met its match in India, regardless of Chinese rhetoric about how inferior the Indian military is in comparison to the Chinese military
  • Exploit the negative rhetoric against China in India to push the manufacturing sector in India
  • Let China be very well reminded of how upset (and terrified) India is of the CPEC/OBOR because of the easy troop movement it allows China while encircling India
  • Let the reverse of the above also be known: it is China that will have much to lose if India starts exploring pre-emptive blocking strategies against the CPEC/OBOR

I hope to cover each of the strategic objectives in detail in subsequent posts and play out mock confrontation scenarios. Much of this has probably already happened and been discussed behind closed doors on both sides; however, it is not in the open or public domain.


And I am back!

Fellas! My blog is back after being dead for more than a year. It took some maintenance and technical fixing to get the website back up and running. I intend to be more active with my blogging from now on, and to talk about subjects that I care about from the bottom of my heart. The latest addition to my loved subjects is Artificial Intelligence (AI) and my work with a new startup called Karmaloop AI.

Post your comments!

C# dynamic objects

The best use of dynamic variables in C# that I have found is in deserializing JSON data for which I may not have the schema, or which I know has a frequently changing schema.
For everything else, I just avoid using them.

What do I use? Well, Newtonsoft’s JSON deserializer and this simple statement:

dynamic apiData = JsonConvert.DeserializeObject<dynamic>(jsonData);

Simple and neat.
#CSharp #dynamicvariables