REST APIs – .NET Core Web API 2 versus SailsJS versus Express TypeORM

Building a REST API driven application? You are certainly spoilt for choice with the numerous frameworks, across all kinds of tech stacks, that have hit the market of late. While we cannot practically cover the pros and cons of all of them, there are three frameworks that stand out in the open source world – .NET Core Web API 2, which comes from the Microsoft shop; SailsJS, which internally leverages Express and the Waterline ORM; and an Express + TypeORM combination. All three have very distinctive advantages and drawbacks that we will look at. Before we start, I do have a confession: Loopback has intentionally been left out of this discourse. I have a bit of a history with that framework. Not that I hate it, but back in 2015 I went from choosing Loopback to build a complex enterprise-grade API backend to sitting through phone calls with IBM sales staff explaining to me why paying several thousand dollars was the way to go for us. Ever since I hung up on the sales people at IBM, I have never looped back to Loopback (pun intended), and discovered far more interesting projects instead. However, things have changed dramatically with the release of Loopback 4, and it deserves to be covered in a completely separate write-up.

SailsJS (NodeJS + Express + Waterline ORM)

SailsJS beats every other framework hands down when it comes to the time it takes to get your first API up and running. There are very good reasons for it to have earned that title. I have architected several solutions with a SailsJS backend – a high-grade-security chat app, a social networking website for pet owners, a job portal and a food ordering system, to name a few.

Pros

  • End to end API ecosystem – When you start with SailsJS, the only additional dependency you need to address is the connector to the database; practically nothing else needs to be configured (at least not right away). Even that is optional: you could start API development before you have even finalized your database system. Let the NoSQL vs RDBMS argument rage on while your team actually starts building the API on the side. Sails comes packaged with sails-disk, which stores/retrieves the data from local disk or in memory. Being based on Express means it supports policies, the familiar req and res objects, and controllers and models in almost the same way that Express does
  • Blueprints API – When you can start making GET, PUT, POST, DELETE calls almost immediately – life really doesn’t get better than this. Imagine running these short few commands to get your first API up and running:

    $ npm i -g sails
    $ sails new todo-api
    $ cd todo-api
    $ sails generate api items
    $ sails lift

    Off you go! Now point your Postman to localhost:1337/items and you can do your first POST (post any JSON, it will get stored onto sails-disk), GET, PUT, DELETE, etc.
  • Express under the hood – In the Node world, Express is the gold standard for REST API development and that’s what is kicking stuff under the hood in Sails. The good part is, if you are familiar with Express, you already know how to work with 50% of SailsJS
  • Connectors – You have connectors for every major Database system – RDBMS as well as NoSQL.
  • Waterline ORM – While Waterline isn’t the best ORM in the world, it gets the basic job done and pretty quickly. It lacks advanced features specifically in respect to NoSQL database systems which make it a challenge to work with for more advanced tasks. For example, you cannot query nested JSON objects with as much ease or simplicity as the native query language provides.
  • Sails Sockets – Web sockets are first class citizens with Sails. You can listen to changes on any model you may have created from the get go without having to write any additional code
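
To make the Blueprints idea concrete, here is a framework-free TypeScript sketch of the verb-plus-path mapping that `sails generate api items` wires up behind the scenes. The action names are my own illustration of the CRUD semantics, not Sails internals:

```typescript
// Rough sketch of blueprint routing for a model named "items":
// each HTTP verb + path pair maps onto a CRUD-style action.
type Action = "find" | "findOne" | "create" | "update" | "destroy";

function blueprintRoute(method: string, path: string): Action | null {
  // Match "/items" or "/items/<id>"
  const m = path.match(/^\/items(?:\/([^/]+))?$/);
  if (!m) return null;
  const id = m[1];
  switch (method.toUpperCase()) {
    case "GET":    return id ? "findOne" : "find";
    case "POST":   return id ? null : "create";
    case "PUT":
    case "PATCH":  return id ? "update" : null;
    case "DELETE": return id ? "destroy" : null;
    default:       return null;
  }
}
```

So a `GET /items` lists records, `POST /items` creates one, and `DELETE /items/42` removes record 42 – which is exactly why Postman can start talking to a freshly lifted Sails app with zero controller code.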

Cons

There are several. Read on.

  • Waterline ORM – The underlying Waterline ORM can be a terrific boost when you are building a quick API backend for a relatively simple app. The moment you get into advanced queries, cracks start showing up. The biggest advantage that any ORM provides is database vendor independence. Yes, Waterline gets you that, but not efficiently. Inevitably, you would have to fall back onto native queries, and that's when you lose vendor independence
  • TypeScript Support – While it would be incorrect to say that TypeScript support is missing, it is by no means a first class citizen in the Sails world. You can still write your Models and Controllers in TypeScript, but that is the end of it. The SailsJS framework itself still relies on the loosely typed underlying JavaScript objects. Those who understand the perils of loosely typed backend programming are going to be immediately turned off by this, and understandably so
  • Production Deployment – OK, most problems related to production deployment are shared across NodeJS based frameworks, including Express. When you want to take your API to a serious production environment, several key challenges creep in. For instance, there is no standard, simple way of configuring high-grade web servers like Apache or IIS to work well with SailsJS or Express apps. The steps are convoluted and unorganized. E.g. you would like to use pm2 to manage your NodeJS process – but how do you get pm2 to work efficiently with Apache or IIS? Go figure.
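
As a concrete illustration of the nested-JSON limitation mentioned above: once Waterline criteria can no longer express a filter, you end up fetching broadly and filtering in application code by hand. A hypothetical TypeScript sketch of what that hand-rolled filter looks like (the record shape and the dotted-path helper are my own, not Waterline APIs):

```typescript
// A generic record, e.g. a row whose JSON column was fetched wholesale.
type Rec = Record<string, unknown>;

// Walk a dotted path like "profile.address.city" into a nested object.
function getPath(obj: Rec, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (o, key) => (o && typeof o === "object" ? (o as Rec)[key] : undefined),
    obj
  );
}

// The "query" the ORM couldn't express: filter rows by a nested value.
function whereNested(rows: Rec[], path: string, value: unknown): Rec[] {
  return rows.filter(r => getPath(r, path) === value);
}
```

This works, but notice what was lost: the database no longer does the filtering, so you pay for the full fetch – which is exactly why you end up reaching for native queries and giving up vendor independence.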

.NET Core Web API 2

Microsoft under Satya Nadella has gotten some things right in a phenomenal way. The first was to embrace open source rather than compete with it. As a result, we got .NET Core and a bunch of application building capabilities that came along with it. We can now build and deploy .NET command line apps, ASP.NET MVC web apps and RESTful APIs with Web API 2 using .NET Core, and remain fully in the realm of a cross-platform, open source code base. Having said that, why would we discuss a .NET based framework amongst Node based frameworks? Because in my opinion, these frameworks are the fastest way to build RESTful APIs across the technology landscape (not just in the Node and .NET world), and I would love to be challenged on that thought. Back to Web API 2.

Pros

  • C# – Strongly typed, highly object oriented, mature and beautiful. Very few languages can pride themselves on these qualities and be backed by hundreds of thousands of programmers. For the folks who are saying in their heads, “Ok smarty, tell me one language feature that gets me to move my butt onto the C# side”, for them I have a one-word answer – LINQ
  • .NET Core – Open source, cross-platform and fast, .NET Core negates all the shortcomings that were associated with the traditional .NET Framework. Plus, having a large programmer base working with these technologies is a BIG plus in tackling the learning curve
  • Entity Framework Core – Those who have worked with EntityFramework 6 or earlier, know the power of this now-mature ORM framework. Without going too much into the nuts and bolts of the framework itself, let me say this in a nutshell – Waterline x Nitro boosters = EntityFramework. EFCore is fast, efficient, powerful and highly configurable.
  • Rapid API Development – While beating Sails in speed of getting started is quite a challenge for most frameworks, .NET Core Web API 2 isn’t too far behind, thanks to EFCore. You can create migrations for your Models to the database or scaffold models based on your database schema relatively easily. In most cases it is only a few short commands
  • Visual Studio Code, Azure, IIS and more – This is one of the major strengths of building with .NET Core. You feel right at home with other Microsoft offerings. Develop on VS Code, deploy on Azure, all in just minutes! Interestingly, Visual Studio Code does not come prepackaged with C# support. You need to add a plugin, but once that's done, it's as easy as pie. You can build, debug and deploy apps right from within Visual Studio Code, and work on your favorite Linux distro or on a Mac while at it (OK, you get the point – for me nothing is more important than being cross-platform, not only for deployment but also during development)
  • Compatible with Enterprise Systems – When building for enterprises, you will encounter challenges you never would while developing smaller projects. Take the example of Integrated Security with SQL Server. I am building an API and I cannot store plain-text passwords in connection string configurations; the only way I can connect to a SQL Server system is by using Integrated Security. I have gone through this painful exercise with an Express API, and let me tell you, it was not pretty. We tried convincing the IT Security department that it was just fine to store credentials in environment variables, but none of that flew. We eventually found a workaround, but the lack of Integrated Security support in most Node based SQL Server connector frameworks was a harsh reality we faced much, much further down the line during deployment.

Cons

  • Not Enough Connectors – Entity Framework Core does not support every database system and its brother. You have to research carefully before you make this jump. Here is a list of what's supported so far. If you are certain that the database vendor will not change in the future, and what you have now is supported by EFCore, go for it. You could still code for unsupported databases using direct queries, but that would take you away from EFCore and the benefits it brings. It would then be equivalent to writing an API in Express (instead of Sails)
  • No TypeScript Support – Full stack engineers are the future. And they love their programming language to be the same across the front and the back. While you can achieve that with something like Angular, TypeORM and Express, you cannot claim that with .NET Core Web API 2, at least for now. I say “for now” because conceptually .NET Core is language independent and may someday support compilation of TypeScript into Intermediate Language (IL). But until that happens, you will end up coding in different front end and back end languages
  • Microsoft – Those who have been in the industry long enough know the perils of locking yourself in with Microsoft. While, as I said at the start, Microsoft has changed significantly under its new CEO, anything coming out of Redmond still has to be taken with caution

TypeORM and Express

As the title gives away, this is not a single API framework like SailsJS; it is a combination of two frameworks that achieves the same (or a similar) result. TypeORM is exclusively an ORM framework. Express, on the other hand, is exclusively an API framework and has no built-in ORM, or even a database connection for that matter. The beauty is, both these frameworks focus on their strengths while playing well with each other, just like Web API 2 + EFCore.

Pros

  • TypeScript – Finally! We can say that our programming language remains the same on the front end and the back end. This is a major advantage for those aiming to build cross-functional teams working on Angular or React web apps
  • Excellent ORM features – What bogged down SailsJS was Waterline; TypeORM, by contrast, focuses on being a truly world-class ORM. One look at TypeORM’s documentation will have you convinced that it should be able to handle most of your complex ORM needs. It still lacks the beauty of LINQ with EFCore, but it gets the job done really well! It has the most powerful query builder in the NodeJS world, IMHO
  • Support for Indices – This is important if you are looking at database vendor independence. Telling the ORM which columns have indices is a neat way to ensure that whenever you migrate to a different provider, the indices go along. Plus, it is great for Continuous Integration, because your test database can be rebuilt with indices
  • Listeners, Migrations, Query Builder and more – I will stop short of explaining each of these, but do read about them. These features make TypeORM stand out and a much better candidate for an Enterprise-grade use case
  • Well Documented – The official website should answer most of your questions. It is a well documented framework
  • Connectors – Has a wide selection of database connectors. The current connectors support includes MySQL / MariaDB / Postgres / CockroachDB / SQLite / Microsoft SQL Server / Oracle / sql.js / MongoDB

Cons

  • Poor “Getting Started” experience – The setup is nowhere near as simple as SailsJS or Web API + EFCore. You are on your own to define the project structure and code layout. In short, you either start with a boilerplate or create your own structure from scratch. There are decent boilerplates to start with, but you have to watch out for how up-to-date they are. You will need to set up a project with TypeORM, Express and any other dependencies you anticipate
  • Not an end-to-end API ecosystem – Unlike Web API 2 + EF Core, these two frameworks don’t recognize each other out of the box. They rely on your boilerplate or your project setup to get anywhere close to a “point-and-shoot by Postman” scenario

Getting your cheap Android Phone/Tablet to get detected for Debugging by Linux (Mint or Ubuntu)

Welcome to a post about another roadblock I recently solved in my Android development saga. I got myself a cheap Android tablet (Byond Mi-1). In an effort to use it for Android development with Linux Mint / Ubuntu, I had to work through quite a few steps beyond what is normal. Let’s go step by step:

  1. Figure out your tablet’s vendor ID – Use the lsusb command. It will dump out the details of all the USB devices connected to your machine. Usually your cheap tablet will not show up with a name in the dump; however, in all likelihood it will be the last item on that list. To be sure, copy the output of the lsusb command into a text editor or spreadsheet. Then connect your tablet to the computer and turn on Mass Storage (on the tablet). Run lsusb again, grab the dump and put it into a text editor or spreadsheet. There should be an extra line pertaining to your device, containing an ID in the form ID 1234:5678. 1234 is your vendor ID. Take note of it.
  2. Run the command:
    sudo gedit /etc/udev/rules.d/51-android.rules
    Copy paste these lines:
    SUBSYSTEM=="usb", ATTR{idVendor}=="1234", MODE="0666", GROUP="plugdev"
    SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="1234/*", MODE="0666"
    SUBSYSTEM=="usb", SYSFS{idVendor}=="1234", MODE="0666"

    Please change 1234 appropriately to your device’s vendor ID.

  3. Run the following command to create an adb_usb.ini file in the .android folder in your home directory.
    sudo gedit ~/.android/adb_usb.ini
    Simply write your vendor ID in this format:
    0x1234
    Save and exit
  4. Reboot your computer
  5. Unlock your tablet and go to settings. Find Developer Settings and switch on USB debugging. This step will depend on your Android version.
  6. Connect your tablet to the computer
  7. Get to your android sdk’s platform tools folder and run the command:
    ./adb devices
  8. If your device is listed, then woohoo – your cheap tablet is ready for development.
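
The vendor-ID hunt in step 1 is really just a diff of the two lsusb dumps. A small TypeScript sketch of the same idea (the sample output lines and the function are illustrative, not part of any Android tooling):

```typescript
// Compare the lsusb output before and after plugging in the device:
// the extra line belongs to the tablet, and its "ID vvvv:pppp" field
// carries the vendor ID (vvvv) needed for the udev rules.
function findNewVendorId(before: string, after: string): string | null {
  const seen = new Set(before.trim().split("\n"));
  for (const line of after.trim().split("\n")) {
    if (seen.has(line)) continue; // line already present before plugging in
    const m = line.match(/ID ([0-9a-fA-F]{4}):[0-9a-fA-F]{4}/);
    if (m) return m[1].toLowerCase();
  }
  return null; // no new device showed up
}
```

Feed it the two dumps you pasted into your text editor and it hands back the four hex digits to drop into /etc/udev/rules.d/51-android.rules and adb_usb.ini.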

Pretty cool eh!?

Laptop LCD screen brightness in Linux Mint 13 or Ubuntu 12.04

I recently set up a Linux workstation, and based on my research into the best distributions available, two came to the fore: Ubuntu 12.04 and Linux Mint 13 (Maya). Ubuntu has always been a fantastic Linux distro, but as I learned that Linux Mint is actually based on Ubuntu and does a better job of being a full-featured OS, I decided to set it up on my desktop. I have been very pleased so far!
One of the issues I faced was the inability to control the brightness of the screen. I could not do so from the keys on the keyboard, and neither did system settings work. The fix was easy once I learned about it on other forums. Here is the link to fix the problem:
http://shellboy.com/linux-mint-13-on-dell-xps-15-brightness-keys-not-working.html

Upgrading MonoDevelop to the latest stable build on Linux Mint / Ubuntu

All those developing with MonoDevelop on Linux Mint or Ubuntu must have noticed that the software repository does not provide the latest release of MonoDevelop (3.0.3.5 as of this writing). The only way to get the updated version is to compile it on your own. Compiling a big project like MonoDevelop on Linux usually scares the crap out of some, especially those migrating in from Windows backgrounds. There is nothing special about it though: you satisfy project dependencies and compile using the provided tools. It is basically the standard Linux three-step process – configure, make and make install.

In spite of all that, there are some of us who believe in keeping things simple. That allows us to channel our creative energy and spirit into other things that matter. I obviously don’t want to fight dependency after dependency and have no energy left to work on my own project. So here is the best way I have found to make a clean build of MonoDevelop from fresh stable source code. Actually, the credit goes to John Ruiz, who put up a simple shell script that does the job for us. Get his script from https://raw.github.com/jar349/tools/master/install-monodevelop.sh and save it to a folder. Usually it will land in the Downloads folder in your home. Make sure to give it “execute” permissions: you can use the UI – right-click the file, go to Properties, select the Permissions tab, and check the box that says “Allow executing file as a program”. With that done, start your Terminal, navigate to the Downloads folder and run the script as ./install-monodevelop.sh

It will do a bunch of stuff, and by the end of its run it will have MonoDevelop built and installed. Simply type monodevelop on the command line to run it! Yep, you are done!

Wiki Article Reliability Algorithm/Software

Let me start by saying that I am a supporter of Wikipedia: I contribute articles and information wherever I think I have sufficient knowledge, and I donate a certain amount to Wikipedia annually. Having said that, it does hurt sometimes when people rubbish you for quoting something from Wikipedia, or for giving them a Wikipedia link in an attempt to prove your point. People who don’t know how Wikipedia works, or have only surface knowledge of it, seem to disregard it with much ease. I read somewhere that teachers in most schools discredit any Wikipedia sources in research. Yes, they dislike it because in many cases it contradicts their text books. In reality, Wikipedia is a mighty flattener of the world, giving the general public free access to information and the ability to author it. Let me quote an example. Have you heard of the famous saying, “History is written by conquerors”? Not anymore. With the rising popularity of Wikipedia, every piece of historical writing is being subjected to views from all directions. One such example would be the role of the “Aryan Invasion Theory” in Indian history. For more than a century we have heard the Aryan Invasion theory and taken it as practical history – of course, until now. Without going into the details, you will notice that the Wikipedia article on the subject stays neutral by presenting both sides of the argument.

Now coming to the original intention of writing this article: I propose to write first an algorithm, and then a practical implementation of the algorithm as a web service/site that other applications can use. Yes, everything will be open source and free. The purpose of the algorithm is to present the reader with a version of the Wikipedia page (or, for that matter, any wiki page) that the algorithm thinks is the most stable/reliable version. How the algorithm works is a set of steps that I will detail next.

  • Access the History page of the article
  • Fetch a list of all the authors
  • Loop through all edits made by non-registered-users i.e. random edits
  • Check these edits against the article lifecycle, i.e. how far into the stable life of the article the edit was made
  • If the edit was made and no registered user edit was made after it, remove it
  • Mark every other random edit as “Candidate for Removal”
  • Fetch a list of newly registered users who have recently modified the page
  • Check if the author has made edits to other pages; if yes, look at the activity interval. If there are rapid edits, the author could be a spammer. If the edit was very recent, mark it as “Recent Edits” and a “Candidate for Removal”.
  • Mark every content line that has a [citation needed] marking as a “Candidate for Removal”
  • Find trustworthy authors, i.e. every author that has been editing on Wikipedia for quite a long time
  • Promote their edits to “Trustworthy Info”
  • Find any “Candidates for Removal” in the “Trustworthy Info” and let “Trustworthy Info” suppress Candidate for Removals
  • Based on the stringency of user settings, curate the “Candidates for Removal” in the final rendering of the article
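
A minimal TypeScript sketch of the classification pass described above. The edit fields, thresholds and labels are my own assumptions for illustration, not Wikipedia API shapes:

```typescript
// A simplified view of one edit in an article's history.
interface Edit {
  author: string;
  registered: boolean;
  accountAgeDays: number; // how long the author has been active on the wiki
  editsLastHour: number;  // rapid-fire edits hint at a spammer
}

type Label = "trustworthy" | "candidate-for-removal" | "neutral";

// Classify a single edit per the rules above (thresholds are assumptions).
function classifyEdit(e: Edit): Label {
  if (!e.registered) return "candidate-for-removal";        // random edits
  if (e.editsLastHour > 20) return "candidate-for-removal"; // likely spammer
  if (e.accountAgeDays > 365) return "trustworthy";         // long-time editor
  return "neutral";
}

// "Trustworthy Info" suppresses removal candidates touching the same content.
function finalLabel(labels: Label[]): Label {
  if (labels.includes("trustworthy")) return "trustworthy";
  if (labels.includes("candidate-for-removal")) return "candidate-for-removal";
  return "neutral";
}
```

The stringency setting from the last step would then decide which “candidate-for-removal” spans actually get curated out of the rendered article.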

This could just turn out to be the quick moderator you need while browsing the excellent and superb Wikipedia! And this doesn’t just apply to Wikipedia – it also applies to the technical wikis we use at work. There are many people writing and modifying wiki pages, and if it’s a big organization, I bet there are many new joiners and interns who are not necessarily the most trusted people to edit wikis. However, the best use of it is on public wiki sites, where the trustworthiness of an article becomes a big question for a few.

Hobby Open Source Projects

As I have mentioned about myself on several occasions, I love programming. You can find the software I have written on this blog. I just posted a link to a program called CopyFat with its installer and source code. Below I will mention the projects I have done in the past, and I will eventually post them with source code and installers.

  • LECIDE – Learners/Experts Configurable Integrated Development Environment
    • With LECIDE I intended to make something like the Eclipse IDE well before it showed up. Obviously my effort was far more juvenile; to be honest, I started by creating a Notepad clone in VB6 and ended up creating a very complex IDE with syntax coloring, instant help tooltips, multiple compiler support, and BLADE – a drag and drop GUI designer for creating C++ dialog resources. Unfortunately, the way the code was written was horrible. If I wanted to do something new with it today, I would rather jump off a building than take a dive back into its code. However, the project was pretty extensive for the time it was built in, and a lot can be learned from it. Download the executables and installer here. Download the source code here. Kindly run it in Windows 7 compatibility mode, because certain components don’t work on Vista/Win7.
  • CopyFat 2.0 – File copy program
    • You can read more about it here. It was one of the most useful utilities I ever wrote. It helped me and my friends on several occasions!
  • CyberBrowser – Tabbed IE based browser
    • A pretty simple browser with Tabbed browsing capability back in the days when IE was still a single window browser. A good reference for those wanting to learn how to use the WebBrowser control in VB6. Download the source code here.
  • Winsock Based FTP client
    • While learning socket programming, I implemented my own FTP client. The important thing to note is, I didn’t use any third party components to derive FTP functionalities. The code actually talks to the FTP server by opening ports and opening parallel channels for file downloads etc. Great stuff if you are learning socket programming in VB6! Get the source code right here.

CopyFat – Life saver file copy program!

I wrote this program back in my early college years, when CDs still reigned over media storage and most of them had scratches. This program did one simple thing: it would copy as much of a file as possible. Wondering how retrieving a damaged file helps at all? Well, some file types, especially video media files, are more tolerant of missing chunks of data. Most video players know how to skip over damaged frames and move on. This made it a great tool for recovering movies from damaged CDs/DVDs. I also enabled bulk copy, so that it would copy an entire folder using the same trick mentioned above. I am uploading an installer as well as the source code for it.
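
The core trick can be sketched in a few lines of TypeScript using Node’s file APIs (CopyFat itself was written in VB6; this is just an illustration of the idea, and the function is my own):

```typescript
import { openSync, readSync, writeSync, closeSync, statSync } from "node:fs";

// Copy `src` to `dest` in fixed-size chunks. A chunk that fails to read
// (think: a bad sector on a scratched CD) is replaced with zeros instead
// of aborting the whole copy -- the core idea behind CopyFat.
function tolerantCopy(src: string, dest: string, chunkSize = 64 * 1024): number {
  const size = statSync(src).size;
  const input = openSync(src, "r");
  const output = openSync(dest, "w");
  const buf = Buffer.alloc(chunkSize);
  let damaged = 0;
  for (let pos = 0; pos < size; pos += chunkSize) {
    const want = Math.min(chunkSize, size - pos);
    let got = 0;
    try {
      got = readSync(input, buf, 0, want, pos);
    } catch {
      buf.fill(0, 0, want); // unreadable chunk: pad with zeros and move on
      got = want;
      damaged++;
    }
    writeSync(output, buf, 0, got, pos);
  }
  closeSync(input);
  closeSync(output);
  return damaged; // number of chunks that could not be read
}
```

A video player will happily skip over the zeroed frames, which is exactly why this salvages movies that a normal copy would give up on.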