REST APIs – .NET Core Web API 2 versus SailsJS versus Express TypeORM

Building a REST API driven application? You are certainly spoilt for choice with the numerous frameworks across all kinds of tech stacks that have hit the market of late. While we cannot practically cover the pros and cons of all of them, we do have three frameworks that stand out in the Open Source world – .NET Core Web API 2, which comes from the Microsoft shop, SailsJS, which internally leverages Express and the Waterline ORM, and an Express + TypeORM combination. All three have very distinctive advantages and drawbacks that we will look at. Before we start, I do have a confession. Loopback has intentionally been left out of this discourse. Well, I have a bit of a history with that framework. Not that I hate it, but back in 2015, I went from choosing Loopback to build a complex enterprise-grade API backend to phone calls with IBM sales staff explaining to me why paying several thousand dollars was the way to go for us. Ever since I hung up on the sales people at IBM, I have never looped back to Loopback (pun intended), and discovered far more interesting projects instead. However, things have changed dramatically with the release of Loopback 4, and it deserves to be covered in a completely separate write-up.

SailsJS (NodeJS + Express + Waterline ORM)

SailsJS beats every other framework hands down when it comes to the time it takes to get your first API set up and running. There are very good reasons for it to have earned that title. I have architected several solutions with a SailsJS backend – a high-security chat app, a social networking website for pet owners, a job portal and a food ordering system, to name a few.

Pros

  • End to end API ecosystem – When you start with SailsJS, the only additional dependency you need to address is the connector to the database, and practically nothing else needs to be configured (at least not right away). Even that is optional: you could start API development before having finalized your database system. Let the NoSQL vs. RDBMS argument rage on while your team actually starts building the API on the side. Sails comes packaged with sails-disk, which stores/retrieves the data from local disk or in memory. Being based on Express means it supports policies, the familiar req/res objects, controllers and models in almost the same way that Express does.
  • Blueprints API – When you can start making GET, PUT, POST, DELETE calls almost immediately – life really doesn’t get better than this. Imagine running these few short commands to get your first API up and running:

    # npm i -g sails
    # sails new todo-api
    # cd todo-api
    # sails generate api items
    # sails lift

    Off you go! Now point your Postman to localhost:1337/items and you can do your first POST (post any JSON, it will get stored onto sails-disk), GET, PUT, DELETE, etc.
  • Express under the hood – In the Node world, Express is the gold standard for REST API development, and that is what is doing the work under the hood in Sails. The good part is, if you are familiar with Express, you already know how to work with 50% of SailsJS.
  • Connectors – You have connectors for every major Database system – RDBMS as well as NoSQL.
  • Waterline ORM – While Waterline isn’t the best ORM in the world, it gets the basic job done and pretty quickly. It lacks advanced features, specifically with respect to NoSQL database systems, which makes it a challenge to work with for more advanced tasks. For example, you cannot query nested JSON objects with the ease or simplicity that the native query language provides.
  • Sails Sockets – Web sockets are first class citizens with Sails. You can listen to changes on any model you may have created from the get-go without having to write any additional code (see the client-side sketch after this list).
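To make the last two points concrete, here is a minimal sketch of consuming the blueprint routes and the built-in socket support from a browser page using sails.io.js. The /items endpoint and the item event name assume the `sails generate api items` example above; treat this as an illustration of the idea, not the canonical Sails client code.

// Assumes sails.io.js has been loaded on the page (Sails serves it by default),
// exposing the global `io.socket` client.
declare const io: any; // typing shim for the sails.io.js global, for illustration only

// Fetch current records; the GET over a socket also subscribes this client
// to future changes on the Item model.
io.socket.get('/items', (items: any[]) => {
  console.log('current items:', items);
});

// Fired whenever an item is created, updated or destroyed via the blueprint API.
io.socket.on('item', (msg: { verb: string; data?: any }) => {
  console.log('item event:', msg.verb, msg.data);
});

// Create a record over the same blueprint route, this time over the socket.
io.socket.post('/items', { title: 'Write the API' }, (created: any) => {
  console.log('created:', created);
});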

Cons

There are several. Read on.

  • Waterline ORM – The underlying Waterline ORM can be a terrific boost when you are building a quick API backend for a relatively simple app. The moment you get into advanced queries, cracks start showing up. The biggest advantage that any ORM provides is database vendor independence. Yes, Waterline gets you that, but not efficiently. Inevitably, you will have to fall back onto native queries, and that is when you lose vendor independence.
  • TypeScript Support – While it would be incorrect to say that TypeScript support is missing, it is by no means a first class citizen in the Sails world. You can still write your Models and Controllers in TypeScript, but that is the end of it. The SailsJS framework itself still relies on the loosely typed underlying Javascript objects. Those who understand the perils of loosely typed backend programming are going to be immediately turned off by this, and understandably so.
  • Production Deployment – OK, most problems related to production deployment are shared across NodeJS-based frameworks, including Express. When you want to take your API to a serious production environment, several key challenges creep in. For instance, there is no standard, simple way of configuring high grade web servers like Apache or IIS to work well with SailsJS or Express apps. The steps are convoluted and unorganized. For example, you would like to use pm2 to manage your NodeJS process, but how do you get pm2 to work efficiently with Apache or IIS? Go figure (see the sketch after this list).
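For what it is worth, the pm2 half of that equation is the easy part. A minimal ecosystem file lets pm2 cluster and restart the Sails/Express process; the app name, script path and port below are placeholders of my own, and the genuinely painful part remains wiring Apache or IIS as a reverse proxy in front of the port the app listens on.

// ecosystem.config.js – a minimal pm2 ecosystem sketch (names and paths are hypothetical)
module.exports = {
  apps: [
    {
      name: 'todo-api',        // any label you like
      script: 'app.js',        // the entry point created by `sails new`
      instances: 'max',        // one worker per CPU core
      exec_mode: 'cluster',
      env: {
        NODE_ENV: 'production',
        PORT: 1337             // the port your Apache/IIS reverse proxy forwards to
      }
    }
  ]
};

Start it with `pm2 start ecosystem.config.js`, then point the web server's reverse proxy (for example Apache's ProxyPass) at localhost:1337.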

.NET Core Web API 2

Microsoft under Satya Nadella has got some things done right in a phenomenal way. The first was to embrace Open Source rather than compete with it. As a result, we got .NET Core and a bunch of application building capabilities that came along with it. We can now build and deploy .NET command line apps, ASP.NET MVC web apps and RESTful APIs with Web API 2 using .NET Core and remain fully in the realm of a cross-platform, open source code base. Having said that, why would we discuss a .NET based framework amongst Node based frameworks? Because in my opinion, these frameworks are the fastest way to build RESTful APIs across the technology landscape (not just in the Node and .NET world), and I would love to be challenged on that thought. Back to Web API 2.

Pros

  • C# – Strongly typed, highly object oriented, mature and beautiful. Very few languages can pride themselves on these qualities and be backed by hundreds of thousands of programmers. For the folks who are saying in their heads, “Ok smarty, tell me one language feature that gets me to move my butt onto the C# side”, I have a one-word answer – LINQ.
  • .NET Core – Open Source, cross-platform and fast, .NET Core negates all the shortcomings that were associated with the traditional .NET Framework. And the large programmer base around these technologies is a BIG plus in tackling the learning curve.
  • Entity Framework Core – Those who have worked with EntityFramework 6 or earlier know the power of this now-mature ORM framework. Without going too much into the nuts and bolts of the framework itself, let me say this in a nutshell – Waterline x Nitro boosters = EntityFramework. EFCore is fast, efficient, powerful and highly configurable.
  • Rapid API Development – While beating Sails in speed of getting started is quite a challenge for most frameworks, .NET Core Web API 2 isn’t too far behind, thanks to EFCore. You can create migrations from your Models to the database or scaffold models based on your database schema relatively easily. In most cases it takes only a few short commands.
  • Visual Studio Code, Azure, IIS and more – This is one of the major strengths of building with .NET Core. You feel right at home with other Microsoft offerings. Develop on VS Code, deploy on Azure, all in just minutes! Interestingly, Visual Studio Code does not come prepackaged with C# support. You need to add a plugin, but once that’s done, it’s as easy as pie. You can build, debug and deploy apps from right within Visual Studio Code and work on your favorite Linux distro, or on a Mac, while at it (OK, you get the point – for me nothing is more important than being cross-platform, not only for deployment but also during development).
  • Compatible with Enterprise Systems – When building for Enterprises, you will encounter challenges you never would while developing smaller projects. Take the example of Integrated Security with SQL Server. I am building an API and I cannot store plain text passwords in connection string configurations. The only way I can connect to a SQL Server system is by using Integrated Security. I have gone through this painful exercise with an Express API and let me tell you, it was not pretty. We tried to convince the IT Security department that it was just fine to store credentials in environment variables, but all of that didn’t fly. We eventually found a workaround, but the lack of Integrated Security support in most Node based SQL Server connector frameworks was a harsh reality we faced much, much further down the line during deployment.

Cons

  • Not Enough Connectors – Entity Framework Core does not support every database system and its brother. You have to research carefully before you make this jump. Here is a list of what’s supported so far. If you are certain that the database vendor will not change in the future and what you have now is supported by EFCore, go for it. You could still code for unsupported databases using direct queries, but that would take you away from EFCore and the benefits it brings. It would then be equivalent to writing an API in Express (instead of Sails).
  • No TypeScript Support – Full stack engineers are the future. And they love their programming language to be the same across the front end and the back end. While you can achieve that with something like Angular, TypeORM and Express, you cannot claim that with .NET Core Web API 2, at least for now. The reason I say that is because conceptually .NET Core is language independent and may some day support compilation of TypeScript into Intermediate Language (IL). But till that happens, you will end up coding in different front end and back end languages.
  • Microsoft – Those who have been in the industry long enough know the perils of locking yourself in with Microsoft. As I said at the start, Microsoft has changed significantly under their new CEO, but anything coming out of Redmond still has to be approached with caution.

TypeORM and Express

If the title hasn’t already given it away, this is not a single API framework like SailsJS. It is a combination of two frameworks used to achieve the same or a similar result. TypeORM is exclusively an ORM framework. Express, on the other hand, is exclusively an API framework and has no built-in ORM, or even a database connection for that matter. The beauty is that both these frameworks focus on their strengths while playing well with each other, just like Web API 2 + EFCore.

Pros

  • TypeScript – Finally! We can say that our programming language remains the same on the front end and the back end. This is a major advantage for those aiming to build cross-functional teams working on Angular or React web apps.
  • Excellent ORM features – What bogged down SailsJS was Waterline, but in the case of TypeORM, being a truly world class ORM is the entire focus. One look at TypeORM’s documentation will have you convinced that it should be able to handle most of your complex ORM needs. It still lacks the beauty of LINQ with EFCore, but it gets the job done really well! It has the most powerful Query Builder in the NodeJS world, IMHO (see the sketch after this list).
  • Support for Indices – This is important provided you are looking at database vendor independence. Telling the ORM which columns have indices is a neat way to ensure that whenever you migrate to a different provider, the indices go along. Plus, it is great for Continuous Integration because your test database can be rebuilt with indices.
  • Listeners, Migrations, Query Builder and more – I will stop short of explaining each of these, but do read about these features. They make TypeORM stand out as a much better candidate for Enterprise grade use cases.
  • Well Documented – The official website should answer most of your questions. It is a well documented framework.
  • Connectors – There is a wide selection of database connectors. Currently supported connectors include MySQL / MariaDB / Postgres / CockroachDB / SQLite / Microsoft SQL Server / Oracle / sql.js / MongoDB.
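To give a flavour of the index support and the Query Builder mentioned above, here is a small sketch. The Post entity and its fields are invented for illustration; the decorators and createQueryBuilder calls are the standard TypeORM API, and the connection options are assumed to live in an ormconfig file.

import 'reflect-metadata';
import { Entity, PrimaryGeneratedColumn, Column, Index, createConnection, getRepository } from 'typeorm';

@Entity()
export class Post {
  @PrimaryGeneratedColumn()
  id: number;

  @Index()               // the index travels with the model to any supported database
  @Column()
  ownerId: number;

  @Column('text')
  postText: string;
}

async function findRecentPostsByOwner(ownerId: number): Promise<Post[]> {
  // The Query Builder gives SQL-like control while staying vendor independent
  return getRepository(Post)
    .createQueryBuilder('post')
    .where('post.ownerId = :ownerId', { ownerId })
    .orderBy('post.id', 'DESC')
    .take(20)
    .getMany();
}

async function main() {
  await createConnection();  // reads connection options from ormconfig or environment variables
  console.log(await findRecentPostsByOwner(27));
}

main().catch(console.error);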

Cons

  • Poor “Getting Started” experience – The setup is nowhere near as simple as SailsJS or Web API + EFCore. You are on your own to define the project structure and code layout. In short, you either start with a boilerplate or create your own structure from scratch. There are decent boilerplates to start with, but you have to watch out for how up to date they are. You will need to set up a project with TypeORM, Express and any other dependencies you anticipate (see the sketch after this list).
  • Not an end-to-end API ecosystem – Unlike Web API 2 + EF Core, these two frameworks don’t recognize each other out of the box. They rely on your boilerplate or your project setup to get anywhere close to a “point-and-shoot with Postman” scenario.
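As a rough idea of what that project setup looks like in practice, here is a minimal, hand-rolled wiring of Express and TypeORM – essentially what a boilerplate would give you. The Item entity, route paths and port are placeholders of my own choosing, and the example assumes the sqlite3 driver is installed for a throwaway local database.

import 'reflect-metadata';
import express from 'express';
import { Entity, PrimaryGeneratedColumn, Column, createConnection, getRepository } from 'typeorm';

@Entity()
class Item {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  title: string;
}

async function bootstrap() {
  // In a real project the connection options (database type, credentials, entities) live in ormconfig
  await createConnection({ type: 'sqlite', database: 'dev.sqlite', entities: [Item], synchronize: true });

  const app = express();
  app.use(express.json());

  // A hand-written "blueprint": list and create items
  app.get('/items', async (_req, res) => {
    res.json(await getRepository(Item).find());
  });

  app.post('/items', async (req, res) => {
    const item = getRepository(Item).create(req.body);
    res.status(201).json(await getRepository(Item).save(item));
  });

  app.listen(3000, () => console.log('API listening on http://localhost:3000'));
}

bootstrap().catch(console.error);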

Processing Excel XLSX files on browser client-side with Angular

Every programmer working on data-driven applications finds themselves in this situation eventually – upload and process an XLSX Excel file. You have a plethora of frameworks and libraries to handle Excel XLSX files on the server side, be it in C# or Java. But with the world moving towards Single Page Application frameworks like Angular and React, we developers face a catch-22 – do we handle XLSX file uploads on the server side or the client side? Which is better? I went through this very recently, so read on for my notes on the subject.

How the heck on the browser client-side?

There is one library in particular to the rescue – SheetJS/js-xlsx – it gets you exactly what you need in a nifty little package. The browser support is fantastic, with every modern web browser supporting it – yes, even the mobile ones! Check out the browser support matrix –

Browser support matrix

For the Javascript world, my usual saying is “If the dang thing works on IE9, you call it a good night…” and that belief hasn’t failed me in years.
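Here is a rough sketch of the client-side flow in a TypeScript/Angular setting: read the chosen file as an ArrayBuffer, parse it, and convert the first sheet to JSON. The handler name and wiring are mine for illustration; XLSX.read and sheet_to_json are the js-xlsx calls I have used for exactly this.

import * as XLSX from 'xlsx';

// Hypothetical handler wired to <input type="file" (change)="onFileChange($event)"> in an Angular template
export function onFileChange(event: Event): void {
  const input = event.target as HTMLInputElement;
  const file = input.files && input.files[0];
  if (!file) { return; }

  const reader = new FileReader();
  reader.onload = () => {
    const data = new Uint8Array(reader.result as ArrayBuffer);
    const workbook = XLSX.read(data, { type: 'array' });   // parse the XLSX entirely in the browser
    const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
    const rows = XLSX.utils.sheet_to_json(firstSheet);      // array of row objects keyed by header
    console.log(rows);                                      // render as an editable table from here
  };
  reader.readAsArrayBuffer(file);
}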

And Server Side?

Explaining how to do this in C# or Java would be a drag – there are countless ways to do it, and the web is full of them. Just look it up. What you probably want to do here, though, is use Node. Guess what? The same js-xlsx module works on NodeJS too, and with practically the same code. The major difference is in how you access the raw file ArrayBuffer. Read their server side demo code and you will see how straightforward it is – js-xlsx Server Side.
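On the Node side the same module can simply read the workbook from disk (or from a Buffer received in an upload). This short sketch assumes a local path of my own choosing and mirrors the browser code above.

import * as XLSX from 'xlsx';

// Read an uploaded workbook from disk – the path is a placeholder
const workbook = XLSX.readFile('/tmp/upload.xlsx');
const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
const rows = XLSX.utils.sheet_to_json(firstSheet);
console.log(`parsed ${rows.length} rows on the server`);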

Where do you handle it then? Browser or the Server?

This is the trick question, the answer to which always starts with “It depends”. Yes, it really depends on a few factors. The first one is – how heavy is your Excel file? Does it run into hundreds of thousands of rows or more? Stick with the server side. Handling files that large would definitely suck up the memory on most browsers and make your app prone to crashes. In my experience so far, files with under 10,000 rows are good to be processed on the browser client side on most machines (even the ones with as little as 2 gigs of RAM).

OK why even bother with Browser side then?

Speed. No really, speed. Picking an XLSX file in the FileChooser and having it rendered as an editable table literally takes just milliseconds. Your users could be selecting an XLSX file, editing or viewing the contents, and being on their way in a fraction of the time it would take for the file to be uploaded and processed on the server and then brought back to the client side for review or editing. It really boils down to user experience. As long as you can manage processing the XLSX files on the browser side without crashing it, you will be making the user’s life so much simpler, and you should definitely prefer handling the Workbooks on the client side. Heavy transactional or big data loads should, without a second thought, be handled on the server side.

Conclusion

I hope this short write-up helps you decide what direction to take when processing XLSX workbooks. I hope you also find the referenced js-xlsx library handy and use it to handle workbooks in Javascript code (whether on the server or the client side).

PostgreSQL – NoSQL and Javascript Functions with PLV8 – Part 2

PostgreSQL and its NoSQL features are awesome, right? If you don’t know what I am talking about, you should check my previous post to familiarize yourself with an introduction to NoSQL in Postgres. While it doesn’t go in depth on the NoSQL features, it does get you started much faster knowing that you can manipulate and operate on JSONB/JSON data using the Javascript language.

In this post we are going to talk about a bit more of the NoSQL features and functions that we can use to navigate our way around NoSQL columns. Some of the things we are going to look at:

  • Making JSONB/JSON columns part of our SELECT or UPDATE (SQL) queries
  • Modifying JSONB fields partially (or filling in details that need to come from other relational fields)

Querying JSONB/JSON NoSQL columns

In the simplest of use cases, you will find a need to select your rows based on a field’s value within the JSONB. For the purpose of our exercise, let us assume the following table and possible values of the column cells.

Posts

id | owner_id | post_text          | comments
---+----------+--------------------+---------
1  | 27       | PostgreSQL is fun! | [ ]

In the above example, we have a Posts table which has a column called comments of type JSONB. Why JSONB? Because then we don’t need a separate table called comments with a post_id column holding a foreign key reference to our posts table. The comments JSON structure could look like the following:

[{
   "comment": "Yes indeed it's fun",
   "timestamp": 1234567890,
   "owner": 12
},
{
   "comment": "Where I live, Mongo rules!",
   "timestamp": 1234567890,
   "owner": 18
}]

The above structure, if you notice, is firstly a JSON array, since it starts with square brackets. Secondly, in each comment in the array, we do not repeat information like owner_name or owner_profile_picture_url. We make a safe assumption that these details are available in a single Profiles table which has the profile details of each user in the system. So far so good, right? It absolutely seems like the perfect spot to use the NoSQL JSONB datatype. But there are some problems we will encounter later when we get down to building usable APIs against our table that need to be consumed by our front end apps.

Problem 1

How do I get the owner_name and owner_profile_picture_url for each “owner” of the comment? In the traditional world of RDBMS, it would be a simple join. That is what we will do with the owner_id in the Posts table.

SELECT 
    o.name, o.profile_picture_url, p.post_text
FROM
    posts p INNER JOIN profiles o ON p.owner_id = o.id

Let us now see how we would do something similar with the comments JSONB array. But before we get to the array, let us see what difference it would have made if, instead of being an array, it was a single comment.

{
   "comment": "Yes indeed it's fun",
   "timestamp": 1234567890,
   "owner": 12
}

In this case we would use one of the arrow operators that are part of PostgreSQL, and do something like this:

SELECT 
 o.name, o.profile_picture_url, p.post_text, 
 p.comments->>'comment' as comment,
 co.name as comment_owner_name, co.profile_picture_url as comment_owner_profile_picture_url
FROM
 posts p INNER JOIN profiles o ON p.owner_id = o.id
 LEFT JOIN profiles co ON p.comments->>'owner' = co.id::text

Does that give you a fair idea of how to pick a field from a JSONB column? For instance, if you have a meta JSONB column in a table, you could extend the meta column with additional fields in the future and never have to change the schema itself!

But the above hasn’t solved our problem yet, i.e. how do we do the same for a JSONB array? Before that, we need to look at a few more basic things. For instance, selecting is one thing, but how do we filter based on the JSONB array? Let us look at that next.

Searching Through a JSONB Array

The simplest way to search a JSONB array from within a SQL query is to use the @> operator. This operator lets you search through the individual elements of an array. Let us presume a Facebook-style ‘likes’ column on the posts table. Instead of solving it with a traditional RDBMS style approach, where we introduce a likes table holding the ids of all the users who have liked a certain post along with a post_id foreign key, we will instead use a JSONB array to store all the owner IDs right within the posts table. In the example below, the post with id 1 has been liked by the profiles with ids 18 and 4.

id | owner_id | post_text          | comments | likes
---+----------+--------------------+----------+--------
1  | 27       | PostgreSQL is fun! | [ ]      | [18, 4]

Now, if we need to find all the posts which have been liked by the user with id 4, we would need to do something like this –

SELECT * FROM posts p WHERE likes @> '[4]'::jsonb

We have to take explicit care about the operator, operands and the data types of the operands. It is very easy to get sucked down the rabbit hole if you miss these details.

Problem 2

How do we join the details from a JSONB array with the concerned tables and fetch more details, like the name of the user or their profile picture? We left a similar question unanswered in Problem 1, and we will find the answer now.

The jsonb_array_elements function is almost a must-use when we are down to opening up an array, picking up the elements we need and joining them with another table. What this function does is open up the array and break up each element, as if each element were a cell in a set of rows. So if you were to run something like this –

SELECT jsonb_array_elements(likes) as profile_id FROM posts
WHERE posts.id = 1;

it would return something like this –

profile_id
----------
18
4

Now we can use this to join on the profiles table and fetch the user details. How we choose to do it is up to us as individual developers, but here is a simple way of doing it:

SELECT 
    p.name, p.profile_picture_url
FROM
    profiles p INNER JOIN
    (SELECT jsonb_array_elements(likes) as profile_id FROM posts
     WHERE posts.id = 1) pid
     ON pid.profile_id = p.id

This is extremely powerful. It simplifies our otherwise complex ER Diagram by an order of magnitude. I no longer need redundant data or the several tables that result from normalization, with id columns spread all over. Things can be kept concise as long as you are cognizant of your NoSQL and RDBMS hybrid design. Keep in mind that NoSQL is, by definition, non-enforcing, which means that you could throw anything out of place into the JSONB columns and it wouldn’t complain. So an obvious design choice would be to use a relational data model where it makes sense and NoSQL where it doesn’t.

For the final leg of this post, we will find the answer to the question posed in Problem 1, i.e. how can we modify the JSON output so that it includes all of the information we need to send out to our consumers, e.g. an API service.

Modifying Your JSONB Response

Being able to search through and connect data with JSONB is one step in the right direction. However, being able to turn around concise information in modern data formats like JSON right from within your queries is what we are gunning for. The simplest way to achieve this is using PLV8, a native Javascript programming environment inside Postgres. You can modify JSON objects just the way you would in web environments.

But in order to be cognizant of performance, it pays to learn some of the functions in Postgres that let you modify a JSON/JSONB column on the fly during a query. Let us revisit the problem we left unsolved in Problem 1 and also return the names and profile picture URLs of the people who have commented on a post. For this we will use the function called jsonb_set.

SYNTAX: jsonb_set

jsonb_set(target jsonb, path text[], new_value jsonb[,create_missing boolean])

Once you get the hang of using jsonb_set, you can manipulate JSONB objects right from within a query. It’s time to solve our problem –

SELECT 
       jsonb_set(p.comments, '{name}', ('"' || pf.name || '"')::jsonb, true) as comments
FROM
     profiles pf INNER JOIN (SELECT
                                      jsonb_array_elements(comments) as comments
                               FROM posts) p
         ON p.comments->>'owner' = pf.id::text

RESULT:
{"name": "chris", "owner": 1, "comment": "i like it"}
{"name": "abi", "owner": 2, "comment": "me too!"}

In the above solution, we added a name field to the JSONB comment. If you notice, this combines the solutions from both Problem 1 and Problem 2 to produce the right JSONB output. Just the way we added the comment owner’s name to the JSONB, we can fetch as many details as we like and append them to create formatted JSON that is shaped just the way our end consumer apps want!

Now venture out into the world of NoSQL and Javascript PLV8 and tell me if it enhances or spices up a relational DB setup. Bon voyage, mes amis!

PostgreSQL – Writing Javascript Functions in PLV8 with NoSQL

PostgreSQL is awesome, right?! We are doing our fourth successful project with PostgreSQL 10.3 as our Data Persistence Layer (database), and with each implementation, we are loving it. At first, it was the NoSQL features within a relational environment that got us hooked. It just is so much easier to convince the dinosaurs (old techies past their 50s who tend to have an adverse opinion of any new tech) to go flirt with NoSQL. In my experience, as long as we stayed in their comfort zones by keeping 90% of our Data Model relational, and only about 10% NoSQL in our structure – everyone was happy.

How did we introduce NoSQL in the traditional world of RDBMS developers?

The first thing we did was to add a “meta” column of JSONB type to almost every table. It was almost invisible to the naked eye on an ER diagram. No one bothered to ask much about a column named “meta” at first. That changed drastically over time. Here is an example. Any time someone realised that they actually needed a many to many relationship between two tables, we would lap up the opportunity to show off what NoSQL could achieve with a minimal amount of change – and with elegance. The neglected, poor old “meta” column that had spent most of its lifetime remaining “null” now sprang into action and solved a real world problem. To take an example, assume two tables, one called Restaurants and another called Menus. Initially we designed the system believing that a restaurant can have multiple menus (while in real life it’s only a single menu for most), so we addressed the issue by having a one to many relationship between Restaurants and Menus – i.e. one Restaurant could have several Menus. As time went on, we encountered a client who had several restaurants and each restaurant had several menus (depending on what time of the day you went there). Now, unfortunately, our old fashioned approach needed work-arounds to solve this problem because there was no easy way to make a menu be a part of several restaurants. We decided to solve it using two approaches. The first one was a traditional crosslink table. The second was adding a JSON array field called restaurants into the meta JSON in the menus table, and vice versa in the restaurants table. I won’t go into much detail, but you already get the idea about which solution was more elegant. NoSQL clearly won the preference.

PLV8 JavaScript Functions

OK, now we are ready to dive into the world of NoSQL inside of JavaScript and look at what PLV8 can do for us. The biggest criticism we would usually take for adding NoSQL into our Data Model was about how non-standard and cumbersome it was for us to use PostgreSQL JSON functions to play around with the NoSQL data. And yes, it isn’t pretty, and neither is it a standardized approach that someone from the world of Oracle or SQL Server could easily familiarise themselves with. Say hello to the PLV8 extension! Now we have a standard programming language called JavaScript that is understood and known by a large group of developers. The adventurous kinds in the area of RDBMS have at some point or another dipped their toes in NoSQL and encountered JavaScript along the way. Those were the ones I convinced to explore PLV8 – and eureka! – in a short amount of time we had a good chunk of functions written in JavaScript living right beside the traditional PL/pgSQL functions.

OK enough, show me how it’s done

Step 1.
Add the PLV8 extension to your PostgreSQL database.

CREATE EXTENSION plv8;

Step 2.
Write your first function!

CREATE OR REPLACE FUNCTION public.getreviews(postid bigint)
 RETURNS json
 LANGUAGE 'plv8'
 COST 100
 VOLATILE
AS $BODY$

 // Fetch the reviews JSONB for the given post, defaulting to an empty array
 var plan = plv8.prepare('SELECT COALESCE(reviews, \'[]\'::jsonb) as reviews FROM posts WHERE id = $1', ['bigint']);
 var reviews = plan.execute([postid])[0].reviews;
 for (var index = 0; index < reviews.length; index++) {
    var review = reviews[index];
    //do something with your review object
 }
 plan.free();
 return reviews;

$BODY$;

What the above function achieves is to simply take a postid and return all the reviews stored in a NoSQL field. But if you are a JavaScript junkie, then you already know how to open Pandora’s box now! You can manipulate the JSON far more easily than by using the inbuilt JSON functions in PostgreSQL, and pass it around. In the above example, note a few things of importance. Number one is the plv8 object, which acts as our bridge to the PostgreSQL database. Second is the fact that your regular PL/pgSQL is no longer a first class citizen within those $BODY$ start and end markers. We have gone JavaScript!

I have kept this short to serve as an introduction and a motivator to help interested developers push the NoSQL agenda. Cheers!

C# dynamic objects

The best use of dynamic variables in C# I have found is in deserializing JSON data for which I may not have the schema, or whose schema changes frequently.
For everything else, I just avoid using them.

What do I use? Well, Newtonsoft’s JSON deserializer and this simple statement:

dynamic apiData = JsonConvert.DeserializeObject<dynamic>(jsonData);

Simple and neat.
#CSharp #dynamicvariables

Few tips on improving speed of your MongoDB database

Those of you who have done a project with MongoDB will notice that it functions and behaves quite differently from traditional RDBMS systems. Going from super fast queries to all of a sudden taking forever to return 10 documents is something beginners always face with MongoDB. I am no expert, but these are the steps I took, and Mongo worked much better than it had earlier.

  • Configure “mongod” to run as a service – Many beginners make this mistake, and it is a very common one. Make sure you run it as a service, which allows MongoDB to do better performance management and handle incoming queries a lot better.
  • Indexing – This should not even need to be mentioned, but with Mongo don’t do blind indexing. Think of the fields you group the documents on the most in your queries and set the indexes with that in mind. This will do a lot to speed up your MongoDB.
  • Start using _id – Again, this is something people do a lot, i.e. they don’t use the inbuilt _id field. You should use it over your own ids. Since it’s an ObjectID, it indexes better and is truly unique, reducing the programmer headache of creating unique id fields.
  • Create a re-indexer service – Like any other database, MongoDB needs to be re-indexed occasionally. One of the easiest ways is to create a daemon or service in your favorite language and have it do some maintenance like re-indexing and data cleanups.
  • Implement Paging in your queries – This is good to do in most projects. When showing large data sets, try to page your data so that you only show enough to start with, and then fetch more as you go. Mongo has an advantage over other databases in this regard in terms of speed. Please keep in mind that the field you page on should have a unique index (see the sketch after this list).
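To illustrate the indexing and paging points, here is a sketch using the official Node.js driver. The connection string, database, collection and field names are made up for the example; range-paging on the already-indexed, unique _id avoids large skip scans.

import { MongoClient, ObjectId } from 'mongodb';

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const posts = client.db('blog').collection('posts');

  // Index the field you filter/sort on most; _id is already indexed and unique
  await posts.createIndex({ createdAt: -1 });

  // Page on _id instead of skip/limit so each page is an indexed range scan
  const pageSize = 20;
  let lastSeenId: ObjectId | null = null; // in real code, the last _id of the previous page
  const query = lastSeenId ? { _id: { $lt: lastSeenId } } : {};
  const page = await posts.find(query).sort({ _id: -1 }).limit(pageSize).toArray();

  console.log(page);
  await client.close();
}

main().catch(console.error);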

So these are a few observations I had while designing my project in MongoDB. I will be adding more improvement techniques as I go forward. If you think some of my above points are erroneous do let me know. Also share your tricks with me!

Getting your cheap Android Phone/Tablet to get detected for Debugging by Linux (Mint or Ubuntu)

Welcome to a post about another roadblock I recently solved in the Android development saga. I got myself a cheap Android tablet (Byond Mi-1). In an effort to use it for Android development with Linux Mint / Ubuntu, I had to get through quite a few steps beyond what is normal. Let’s go step by step:

  1. Figure out your Tablet’s Vendor ID – Use the lsusb command. It will dump out the details of all the USB devices connected to your machine. Usually your cheap tablet will not show up with a name on the dump; however, in all likelihood it will be the last item on that list. To be sure, copy the output of the lsusb command into a text editor or spreadsheet. Then connect your Tablet to the computer and turn on Mass Storage (on the tablet). Run lsusb again, grab the dump and put it into a text editor or spreadsheet. There should be an extra line pertaining to your device. There will be an ID in the form of ID 1234:5678. 1234 will be your Vendor id. Take note of it.
  2. Run the command:
    sudo gedit /etc/udev/rules.d/51-android.rules
    Copy paste these lines:
    SUBSYSTEM=="usb", ATTR{idVendor}=="1234", MODE="0666", GROUP="plugdev"
    SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="1234/*", MODE="0666"
    SUBSYSTEM=="usb", SYSFS{idVendor}=="1234", MODE="0666"

    Please change 1234 appropriately to your correct vendor id.

  3. Run the following command to create an adb_usb.ini file in the .android folder in your home directory.
    sudo gedit ~/.android/adb_usb.ini
    Simply write your vendor id in this format:
    0x1234
    Save and exit
  4. Reboot your computer
  5. Unlock your tablet and go to settings. Find Developer Settings and switch on USB debugging. This step will depend on your Android version.
  6. Connect your tablet to the computer
  7. Get to your android sdk’s platform tools folder and run the command:
    ./adb devices
  8. If your device is listed, then woohoo, you have got your cheap tablet ready for development.

Pretty cool eh!?

Laptop LCD screen brightness in Linux Mint 13 or Ubuntu 12.04

I recently set up a Linux workstation, and based on my research into the best distributions available, two came to the fore: Ubuntu 12.04 and Linux Mint 13 (Maya). Ubuntu has always been a fantastic Linux distro, but as I learned that Linux Mint is actually based off of Ubuntu and does a better job of being a full featured OS, I decided to set it up on my desktop. I have been very pleased so far!
One of the issues I faced was the inability to control the brightness of the screen. I could not do so from the keys on the keyboard, and neither did the system settings work. The fix was easy, as I learned about it on other forums. Here is the link to fix the problem:
http://shellboy.com/linux-mint-13-on-dell-xps-15-brightness-keys-not-working.html

Upgrading MonoDevelop to the latest stable build on Linux Mint / Ubuntu

All those developing with MonoDevelop on Linux Mint or Ubuntu must have noticed that the software repository does not provide the latest release of MonoDevelop (3.0.3.5 as of this writing). The only way to get to the updated version is to compile it on your own. Compiling a big project like MonoDevelop on Linux usually scares the crap out of some, especially those migrating in from Windows backgrounds. Although there is nothing special about it – you satisfy project dependencies and compile using the provided tools – it is basically the standard Linux three step process: configure, make and make install.

In spite of all of that, there are some of us who believe in keeping things simple. That allows us to channel our creative energy and spirit into other things that matter. I obviously don’t want to fight dependency after dependency and have no energy left to work on my own project. So here is the best way I have found to make a clean build of MonoDevelop from fresh stable source code. Actually, the credit goes to John Ruiz, who put up a simple shell script that does the job for us. Get his script from https://raw.github.com/jar349/tools/master/install-monodevelop.sh and save it to a folder. Usually it will land in the Downloads folder in your home. Make sure to give it “execute” permissions. You can use the UI: right click the file, go to Properties, select the Permissions tab, and check the box that says “Allow executing file as a program”. With that done, start your Terminal, navigate to the Downloads folder and run the script as ./install-monodevelop.sh

It will do a bunch of stuff, and by the end of its run it will have MonoDevelop built and installed. Simply type monodevelop on the command line to run it! Yep, you are done!