PostgreSQL – NoSQL and Javascript Functions with PLV8 – Part 2

PostgreSQL and its NoSQL features are awesome, right? If you don’t know what I am talking about, you should check my previous post for an introduction to NoSQL in Postgres. While it doesn’t go deep into the NoSQL features, it does get you started much faster, knowing that you can manipulate and operate on JSONB/JSON data using the JavaScript language.

In this post we are going to talk a bit more about the NoSQL features and the functions we can use to navigate our way around NoSQL columns. Some of the things we are going to look at:

  • Making JSONB/JSON columns part of our SELECT or UPDATE (SQL) queries
  • Modifying JSONB fields partially (or filling in details that need to come from other relational fields)

Querying JSONB/JSON NoSQL columns

In the simplest of use cases, you will find a need to select your rows based on a field’s value within the JSONB. For the purpose of our exercise, let us assume the following table and some possible values of its column cells.

Posts

id  |  owner_id  |  post_text           |  comments
----+------------+----------------------+-----------
1   |  27        |  PostgreSQL is fun!  |  [ ]

In the above example, we have a Posts table which has a column called comments of type JSONB. Why JSONB? Because then we don’t need a separate table called comments with a post_id column holding a foreign key reference to our posts table. The comments JSON structure could look like the following:

[{
   "comment": "Yes indeed it's fun",
   "timestamp": 1234567890,
   "owner": 12
},
{
   "comment": "Where I live, Mongo rules!",
   "timestamp": 1234567890,
   "owner": 18
}]

The above structure, if you notice, is firstly a JSON array, since it starts with square brackets. Secondly, in each comment in the array, we do not repeat information like owner_name or owner_profile_picture_url. We make a safe assumption that these details are available in a single Profiles table which holds the profile details of each user in the system. So far so good, right? It absolutely seems like the perfect spot to use the NoSQL JSONB datatype. But there are some problems we will encounter later when we get down to building usable APIs against our table that need to be consumed by our front end apps.
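
For reference, here is a minimal sketch of what such a table definition could look like (the schema is assumed from the example above):

CREATE TABLE posts (
    id         bigserial PRIMARY KEY,
    owner_id   bigint REFERENCES profiles (id),
    post_text  text,
    comments   jsonb DEFAULT '[]'::jsonb  -- the NoSQL column
);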

Problem 1

How do I get the owner_name and owner_profile_picture_url for each “owner” of the comment? In the traditional world of RDBMS, it would be a simple join. That is what we will do with the owner_id in the Posts table.

SELECT 
    o.name, o.profile_picture_url, p.post_text
FROM
    posts p INNER JOIN profiles o ON p.owner_id = o.id

Let us now see how we would do something similar with the comments JSONB array. But before we get to the array, let us see what difference it would have made if, instead of being an array, it was a single comment.

{
   "comment": "Yes indeed it's fun",
   "timestamp": 1234567890,
   "owner": 12
}

In this case we would use one of the arrow operators that are part of PostgreSQL, and do something like this:

SELECT 
 o.name, o.profile_picture_url, p.post_text, 
 p.comments->>'comment' as comment,
 co.name as comment_owner_name, co.profile_picture_url as comment_owner_profile_picture_url
FROM
 posts p INNER JOIN profiles o ON p.owner_id = o.id
 -- ->> returns text, so the id is cast to text for the comparison
 LEFT JOIN profiles co ON p.comments->>'owner' = co.id::text

Does that give you a fair idea of how to pick a field from a JSONB column? Note the cast in the join: the ->> operator returns text, so we compare against co.id::text. By extension, if you have a meta JSONB column in a table, you could keep extending it with additional fields in the future and never have to change the schema itself!
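
For instance, assuming a hypothetical meta column of type JSONB on the posts table, and a made-up pinned flag inside it, a feature added long after the schema was designed could be queried like this:

SELECT p.post_text, p.meta->>'pinned' AS pinned
FROM posts p
WHERE p.meta->>'pinned' = 'true';  -- ->> yields text, hence the quoted 'true'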

But the above hasn’t solved our problem yet, i.e. how do we do the same for a JSONB array? Before that, though, we need to look at a few more basic things. For instance, selecting is one thing, but how do we filter based on the JSONB array? Let us look at that next.

Searching Through a JSONB Array

The simplest way to search a JSONB array from within a SQL query is to use the @> containment operator. This operator lets you search through the individual elements of an array. Let us presume a facebook-style ‘likes’ column on the posts table. Instead of solving it with a traditional RDBMS-style approach, where we would introduce a likes table carrying the ids of all the users who have liked a certain post along with a post_id foreign key, we will rather use a JSONB array to store all the owner IDs right within the posts table. In the example below, the post with id 1 has been liked by the profiles with ids 18 and 4.

id  |  owner_id  |  post_text           |  comments  |  likes
----+------------+----------------------+------------+----------
1   |  27        |  PostgreSQL is fun!  |  [ ]       |  [18, 4]

Now, if we need to find all the posts which have been liked by the user with id 4, we would do something like this –

SELECT * FROM posts p WHERE likes @> '[4]'::jsonb

We have to take explicit care with the operator, the operands, and the data types of the operands. It is very easy to get sucked down a rabbit hole if you miss these details.
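
A few quick variations to illustrate the point (the likes column is assumed to be JSONB, as above):

-- containment of a one-element array, as used above
SELECT * FROM posts p WHERE likes @> '[4]'::jsonb;

-- a JSONB array also "contains" a matching bare scalar
SELECT * FROM posts p WHERE likes @> '4'::jsonb;

-- this fails: there is no jsonb @> integer operator
-- SELECT * FROM posts p WHERE likes @> 4;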

Problem 2

How do we join the details from a JSONB array with the concerned tables and fetch more details like the name of the user or their profile picture etc.? We left a similar question unanswered in Problem 1, which we will find answers to now.

The jsonb_array_elements function is almost a must-use when we are down to opening up an array, picking up the elements we need, and joining them with another table. What this function does is expand the array into a set of rows, one element per row. So if you were to do something like this –

SELECT jsonb_array_elements(likes) as profile_id FROM posts
WHERE posts.id = 1;

it would return you something like this –

 profile_id
------------
 18
 4

Now we can use this to join on the profiles table and fetch the user details. How we choose to do it is up to us individual developers, but here is a simple way of doing it –

SELECT 
    p.name, p.profile_picture_url
FROM
    profiles p INNER JOIN
    -- the _text variant returns text instead of jsonb, which we can then cast
    (SELECT jsonb_array_elements_text(likes) as profile_id FROM posts
     WHERE posts.id = 1) pid
     ON pid.profile_id::int = p.id

This is extremely powerful. It simplifies our otherwise complex ER diagram by an order of magnitude. I no longer need redundant data or several tables that are a result of normalization, with id columns spread all over. Things can be kept concise as long as you are cognizant of your NoSQL and RDBMS hybrid design. Keep in mind that NoSQL is by definition non-enforcing, which means that you could throw anything out of place into the JSONB columns and it wouldn’t complain. So an obvious design choice would be to use the relational data model where it makes sense and use NoSQL where it doesn’t.

For our final leg in this post, we will find the answer to the question posed in Problem 1, i.e. how can we modify the JSON output so that it includes all of the information needed to send out to our consumers e.g. an API service.

Modifying Your JSONB Response

Being able to search through and connect data with JSONB is one step in the right direction. However, being able to turn around concise information in modern data formats like JSON right from within your queries is what we are gunning for. The simplest way to achieve this is using PLV8, which embeds a native JavaScript programming environment in Postgres. You can modify JSON objects just the way you would in web environments.
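
As a small taste of that, here is a hedged sketch of a PLV8 function (the posts table is assumed from earlier, and the preview field is made up) that reshapes the comments with plain JavaScript:

CREATE OR REPLACE FUNCTION format_comments(postid bigint)
 RETURNS jsonb
 LANGUAGE plv8
AS $BODY$
 // fetch the comments array for one post
 var rows = plv8.execute('SELECT comments FROM posts WHERE id = $1', [postid]);
 var comments = (rows.length && rows[0].comments) || [];
 // plain JavaScript manipulation, just like in a browser
 comments.forEach(function (c) {
    c.preview = String(c.comment).substring(0, 20);
 });
 return comments;
$BODY$;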

But in order to be cognizant of performance, it pays off to learn some of the functions in Postgres that let you modify a JSON/JSONB column on the fly during a query. Let us revisit the problem we left unsolved in Problem 1 and also return the names and profile picture URLs for the people who have commented on a post. For this we will use the function called jsonb_set.

SYNTAX: jsonb_set

jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean])
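
A quick standalone example of its behaviour (values made up for illustration):

-- create_missing is true, so the missing "likes" key gets added
SELECT jsonb_set('{"comment": "hi"}'::jsonb, '{likes}', '5'::jsonb, true);
-- result: {"likes": 5, "comment": "hi"}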

Once you get the hang of using jsonb_set, you can manipulate JSONB objects right from within a query. It’s time to solve our problem –

SELECT 
       jsonb_set(p.comments, '{name}', ('"' || pf.name || '"')::jsonb, true) as comments
FROM
     profiles pf INNER JOIN (SELECT
                                      jsonb_array_elements(comments) as comments
                               FROM posts) p
         ON p.comments->>'owner' = pf.id::text

RESULT:
{"name": "chris", "owner": 1, "comment": "i like it"}
{"name": "abi", "owner": 2, "comment": "me too!"}

In the above solution, we added a name field to the JSONB comment. If you notice, this combines the solutions from both Problem 1 and Problem 2 to produce the right JSONB output. Just the way we added the comment owner’s name to the JSONB, we can fetch as many details as we like and append them to create a formatted JSON that is just the way our end consumer apps want it!
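
One caveat: the query above returns one row per comment. If the consuming app wants the whole comments array back per post, a sketch along these lines (carrying the post id through the subquery and aggregating with jsonb_agg) could reassemble it:

SELECT 
       p.post_id,
       jsonb_agg(jsonb_set(p.comments, '{name}', to_jsonb(pf.name), true)) as comments
FROM
     profiles pf INNER JOIN (SELECT
                                      id as post_id,
                                      jsonb_array_elements(comments) as comments
                               FROM posts) p
         ON p.comments->>'owner' = pf.id::text
GROUP BY p.post_id

Here to_jsonb is just a tidier way of producing the quoted JSONB string than concatenating quote characters by hand.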

Now venture out into the world of NoSQL and JavaScript PLV8 and tell me if it enhances or spices up a relational DB setup. Bon voyage, mes amis!

PostgreSQL – Writing Javascript Functions in PLV8 with NoSQL

PostgreSQL is awesome, right?! We are doing our fourth successful project with PostgreSQL 10.3 as our data persistence layer (database), and with each implementation we are loving it. At first, it was the NoSQL features within a relational environment that got us hooked. It just is so much easier to convince the dinosaurs (old techies in their post-50s who tend to have an adverse opinion of any new tech) to go flirt with NoSQL. In my experience, as long as we stayed in their comfort zones by keeping 90% of our data model relational and only about 10% of our structure NoSQL – everyone was happy.

How did we introduce NoSQL in the traditional world of RDBMS developers?

The first thing we did was to add a “meta” column of JSONB type to almost every table. It was almost invisible to the naked eye on an ER diagram. No one bothered to ask much about a column named “meta” at first. That changed drastically over time. Here is an example. Any time someone realised that they actually needed a many-to-many relationship between two tables, we would lap up the opportunity to show off what NoSQL could achieve with a minimal amount of changes – and with elegance. The neglected poor old “meta” column that had spent most of its lifetime remaining “null” now sprang into action and solved a real world problem.

To take an example, assume two tables, one called Restaurants and another called Menus. Initially we designed the system believing that a restaurant can have multiple menus (while in real life it’s only a single menu for most), so we addressed the issue by having a one-to-many relationship between Restaurants and Menus – i.e. one Restaurant could have several Menus. As time went on, we encountered a client who had several restaurants, and each restaurant had several menus (depending on what time of the day you went there). Now, unfortunately, our old-fashioned approach needed work-arounds, because there was no easy way to make a menu be part of several restaurants. We decided to solve it using two approaches. The first one was a traditional crosslink table. The second was adding a JSON array field called restaurants into the meta JSON of the menus table, and vice versa in the restaurants table. I won’t go into much detail, but you already get the idea about which solution was more elegant. NoSQL clearly won the preference.
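
To make the second approach concrete, here is a rough sketch (table and column names assumed, ids made up): each menu carries an array of restaurant ids inside its meta column, and the @> containment operator then finds all the menus for a given restaurant.

-- attach menu 42 to restaurants 3 and 7 inside its meta column
UPDATE menus
SET meta = jsonb_set(COALESCE(meta, '{}'::jsonb), '{restaurants}', '[3, 7]'::jsonb, true)
WHERE id = 42;

-- fetch all the menus belonging to restaurant 3
SELECT * FROM menus WHERE meta->'restaurants' @> '3'::jsonb;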

PLV8 JavaScript Functions

OK, now we are ready to dive into the world of NoSQL inside of JavaScript and look at what PLV8 can do for us. The biggest criticism we would usually take for adding NoSQL into our data model was about how non-standard and cumbersome it was for us to use PostgreSQL JSON functions to play around with the NoSQL data. And yes, it isn’t pretty, and neither is it a standardized approach that someone from the world of Oracle or SQL Server could easily familiarise themselves with. Say hello to the PLV8 extension! Now we have a standard programming language called JavaScript that is understood and known by a large group of developers. The adventurous kinds in the area of RDBMS have at some point or another dipped their toes in NoSQL and encountered JavaScript along the way. Those were the ones I convinced to explore PLV8 – and eureka! – in a short amount of time we had a good chunk of functions written in JavaScript living right beside the traditional PL/pgSQL functions.

OK enough, show me how it’s done

Step 1.
Add the PLV8 extension to your PostgreSQL database.

CREATE EXTENSION plv8;

Step 2.
Write your first function!

CREATE OR REPLACE FUNCTION public.getreviews(postid bigint)
 RETURNS json
 LANGUAGE plv8
 COST 100
 VOLATILE
AS $BODY$

 // prepare a parameterized query; COALESCE guards against a null reviews column
 var plan = plv8.prepare('SELECT COALESCE(reviews, \'[]\'::jsonb) as reviews FROM posts WHERE id = $1', ['bigint']);
 var reviews = plan.execute([postid])[0].reviews;
 for (var index = 0; index < reviews.length; index++) {
    var review = reviews[index];
    // do something with your review object
 }
 plan.free();
 return reviews;

$BODY$;

What the above function achieves is to simply take a postid and return all the reviews stored in a NoSQL field. But if you are a JavaScript junkie, then you already know how to open Pandora’s box now! You can manipulate the JSON way more easily compared to using the inbuilt JSON functions in PostgreSQL, and pass it around. In the above example, note a few things of importance. Number one is the plv8 object, which acts as our bridge to the PostgreSQL database (note also that the function’s postid argument is available directly inside the JavaScript body). Second is the fact that your regular PL/pgSQL is no longer a first class citizen between those $BODY$ start and end markers. We have gone JavaScript!
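
Calling it is no different from calling any other Postgres function:

SELECT public.getreviews(1) AS reviews;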

I have kept this short to serve as an introduction and a motivational piece to help interested developers push the NoSQL agenda. Cheers!

C# dynamic objects

The best use of dynamic variables in C# I have found is in deserializing JSON data for which I may not have the schema, or which I know has a frequently changing schema.
For everything else I just avoid using them.

What do I use? Well, Newtonsoft’s JSON deserializer and this simple statement:

dynamic apiData = JsonConvert.DeserializeObject<dynamic>(jsonData);

Simple and neat.
#CSharp #dynamicvariables

Few tips on improving speed of your MongoDB database

Those of you who have done a project with MongoDB will have noticed that it functions and behaves quite differently from traditional RDBMS systems. Going from super fast queries to all of a sudden taking forever to return 10 documents is something beginners always face with MongoDB. I am no expert, but these are the steps I took, and Mongo worked much nicer than it had earlier.

  • Configure mongod to run as a service – Many beginners make this mistake and it’s a very common one. Make sure you run it as a service, which allows MongoDB to do better performance management and handle incoming queries a lot better.
  • Indexing – This should not even need to be mentioned, but with Mongo don’t do blind indexing. Think of the fields you filter and group your documents on the most in your queries and set the indexes with those in mind (see the sketch after this list). This will do a lot to speed up your MongoDB.
  • Start using _id – Again, this is a mistake people make a lot, i.e. they don’t use the inbuilt _id field. You should use it over your own ids. Since it’s an ObjectId, it indexes better and is truly unique, reducing the programmer headache of creating unique id fields.
  • Create a re-indexer service – Like any other database, MongoDB needs to be re-indexed occasionally. One of the easiest ways is to create a daemon or service in your favorite language and make it do some maintenance like re-indexing and data cleanups.
  • Implement paging in your queries – This is good to do in most projects. When showing large data sets, try to page your data so that you only show enough to start with, and then fetch more as you go. Mongo has an advantage over other databases in this regard in terms of speed. Please keep in mind that the field you page on should be backed by a unique index.
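
As a quick illustration of the indexing tip above, a sketch in the mongo shell (collection and field names are assumed):

// compound index on the fields the queries filter and sort on most
db.posts.createIndex({ owner_id: 1, created_at: -1 });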

So these are a few observations I had while designing my project in MongoDB. I will be adding more improvement techniques as I go forward. If you think some of my above points are erroneous do let me know. Also share your tricks with me!

Getting your cheap Android Phone/Tablet to get detected for Debugging by Linux (Mint or Ubuntu)

Welcome to a post about another road block I recently solved in the Android development saga. I got myself a cheap Android tablet (Byond Mi-1). In an effort to use it for Android development with Linux Mint / Ubuntu, I had to go through quite a few steps beyond what is normal. Let’s go step by step:

  1. Figure out your Tablet’s Vendor ID – Use the lsusb command. It will dump out the details of all the USB devices connected to your machine. Usually your cheap tablet will not show up with a name in the dump; however, in all likelihood it will be the last item on that list. To be sure, copy the output of the lsusb command into a text editor or spreadsheet. Then connect your tablet to the computer and turn on Mass Storage (on the tablet). Run lsusb again, grab the dump, and put it into a text editor or spreadsheet. There should be an extra line pertaining to your device, with an ID in the form of ID 1234:5678. 1234 will be your vendor id. Take a note of it.
  2. Run the command:
    sudo gedit /etc/udev/rules.d/51-android.rules
    Copy paste these lines:
    SUBSYSTEM=="usb", ATTR{idVendor}=="1234", MODE="0666", GROUP="plugdev"
    SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="1234/*", MODE="0666"
    SUBSYSTEM=="usb", SYSFS{idVendor}=="1234", MODE="0666"

    Please change 1234 appropriately to your correct vendor id.

  3. Run the following command to create an adb_usb.ini file in the .android folder in your home (no sudo needed, since the file lives in your own home directory):
    gedit ~/.android/adb_usb.ini
    Simply write your vendor id in this format:
    0x1234
    Save and exit
  4. Reboot your computer
  5. Unlock your tablet and go to settings. Find Developer Settings and switch on USB debugging. This step will depend on your Android version.
  6. Connect your tablet to the computer
  7. Get to your android sdk’s platform tools folder and run the command:
    ./adb devices
  8. If your device is listed, then woohoo, you’ve got your cheap tablet ready for development.

Pretty cool eh!?

Upgrading MonoDevelop to the latest stable build on Linux Mint / Ubuntu

All those developing with MonoDevelop on Linux Mint or Ubuntu must have noticed that the software repository does not provide the latest release of MonoDevelop (3.0.3.5 as of this writing). The only way to get the updated version is to compile it on your own. Compiling a big project like MonoDevelop on Linux usually scares the crap out of some, especially those migrating in from Windows backgrounds. Although there is nothing special about it: you satisfy project dependencies and compile using the provided tools. It is basically the standard Linux three-step process: configure, make, and make install.

In spite of all of that, there are some of us who believe in keeping things simple. That allows us to channel our creative energy and spirit into other things that matter. I obviously don’t want to fight dependency after dependency and have no energy left to work on my own project. So here is the best way I have found to make a clean build of MonoDevelop from fresh stable source code. Actually, the credit goes to John Ruiz, who put up a simple shell script that does the job for us. Get his script from https://raw.github.com/jar349/tools/master/install-monodevelop.sh and save it to a folder; usually it would land in the Downloads folder in your home. Make sure to give it “execute” permissions. You can use the UI: right click the file, go to Properties, select the Permissions tab, and check the box that says “Allow executing file as a program”. With that done, start your Terminal, navigate to the Downloads folder, and run the script as ./install-monodevelop.sh

It will do a bunch of stuff, and by the end of its run it will have MonoDevelop built and installed. Simply type monodevelop on the command line to run it! Yep, you are done!

Complexity Adaptive User Interface (COMPAD UI)

In a Nutshell

How about a UI that doesn’t present you with all the complex features of the application at once? Instead, it slowly adapts in that direction based on your usage pattern.

The Need

Applications, as they move up in release versions, start cramming the UI with features. This is a gradual progression we see in most software applications available today. In a world where simplicity speaks volumes, we might be better off showing less. Why would I hide features when my application supports them, you ask? The answer is simple – users may only need to do certain things with your application, and they will never use every single feature from day one (unless they are used to previous versions, of course).

The Solution

Bring about a UI which understands the user’s usage pattern, and then gradually starts enabling/showing features. This would allow the end user to start with a minimalist interface and then, as they get comfortable with the core functionality, start using more features.

Practical Example

Like all theories, it’s better to put the point across with a practical example.

The best example I have from my own experience is the story of Winamp (from my perspective). Like most fans of music during the late 90s, I too was a Winamp + Napster fan. Winamp had always been my media player of choice ever since I had started listening to MP3s. In spite of my dedicated loyalty towards all the versions up until 2.81, something happened with Winamp 3 that totally threw me off and almost made me regret my decision to upgrade. It had become such a bloated piece of software compared to the original version that in less than 10 minutes I lost all desire to use it. My gripe was simple – I had no interest in Music Libraries or the tons of new features that it shoved at me. Plus all of those features made it slower to load. It didn’t take me much time to roll back to Winamp 2.81, which had been my previous version.

One day I got this newsletter bragging about the launch of Winamp 5. Honestly, I was not too excited to give it a try given my past experience with version 3. What I did notice is that this time it came with a Lite version as well! Instinctively, I downloaded that and fired it up. Expecting to see a worsened avatar of version 3, what eventually showed up was a surprise – the UI was almost just like 2.81. Wow, what a relief that it retained the 2.81 simplicity, I was instantly telling my geeky friends! I immediately got to using it, and no second thoughts about reverting to the old version haunted me. What this meant was that I was not the only one complaining about the cramped Winamp 3 UI; I was clearly sharing a general consensus. Over the next few days I unlocked almost all of the features I had seen in Winamp 3, plus a few more. I didn’t necessarily use all of them, but at least I knew they were there, and I would use them when the need (or the urge) arose. An important lesson was learnt that day – make it comfortable for someone to fit into what is otherwise new.

Learning off of that experience, a thought came to my mind – what if the software understood when a user was ready to be presented with more of the features they might want? If that happened, then even a non-geeky user could eventually be roped in to use some of the more advanced features.

Alright, we are sold, how do we implement Complexity Adaptive UI (COMPAD UI)?

What I plan to suggest, in a series of successive posts, is a set of configuration XML markup structures, terms, and design patterns to enable a successful implementation of this feature in the programming language and technology of your choice. I am also in the process of setting up a Wiki page so that more of us can collaborate on the idea.

Wiki Article Reliability Algorithm/Software

Let me start by saying that I am a supporter of Wikipedia; I contribute articles and information wherever I think I have sufficient knowledge. I also donate a certain amount to Wikipedia annually. Having said that, it does hurt me sometimes when people rubbish you if you quote them something from Wikipedia or you give them a Wikipedia link in an attempt to prove your point. People who don’t know how Wikipedia works, or have very little surface knowledge of it, seem to disregard it with much ease. I read an article somewhere about how teachers in most schools discredit any Wikipedia sources in research. Yes, they dislike it because in many cases it contradicts their text books. In reality, Wikipedia is a mighty flattener of the world, providing free access to information, and the ability to author it, to the general public. Let me quote an example. Have you heard of the famous saying, “History is written by conquerors”? Not anymore. With the rising popularity of Wikipedia, every piece of historical writing is being subjected to views from all directions. One such example would be the role of the “Aryan Invasion Theory” in Indian history. For more than a century we have heard the Aryan Invasion Theory and taken it as practical history, of course until now. Without going into the details, you will notice the Wikipedia article on the subject seems to stay neutral by presenting both sides of the argument.

Now coming to the original intention of writing this article: I propose to write first an algorithm, and then a practical implementation of the algorithm as a web service/site that other applications can use. Yes, everything will be open source and free. The purpose of the algorithm would be to present the reader with a version of the Wikipedia page (or, for that matter, any wiki page) that the algorithm thinks is the most stable/reliable version. How the algorithm will work is a set of steps that I will detail next.

  • Access the History page of the article
  • Fetch a list of all the authors
  • Loop through all edits made by non-registered users, i.e. random edits
  • Check these edits against the article lifecycle, i.e. how far into the stable life of the article the edit was made
  • If such an edit was made and no registered user edit was made after it, remove it
  • Mark every other random edit as a “Candidate for Removal”
  • Fetch a list of newly registered users who have recently modified the page
  • Check if the author has made edits to other pages; if yes, look at the activity interval. If there are rapid edits, the author could be a spammer. If the edit made was very recent, mark it as “Recent Edits” and a “Candidate for Removal”.
  • Mark every content line that has a [citation needed] marking as a “Candidate for Removal”
  • Find trustworthy authors, by finding every author that has been editing on Wikipedia for quite a long time
  • Promote their edits to “Trustworthy Info”
  • Find any “Candidates for Removal” within the “Trustworthy Info” and let “Trustworthy Info” suppress the “Candidates for Removal”
  • Based on the stringency of the user’s settings, curate the “Candidates for Removal” in the final rendering of the article

This could just turn out to be the quick moderator you need while browsing the excellent and superb Wikipedia! And this doesn’t just apply to Wikipedia; it also applies to the technical wikis we use at work. There are many people writing and modifying wiki pages. If it’s a big organization, I bet there are many new joiners and interns who are not necessarily the most trusted people to edit wikis. However, the best use for it is on public wiki sites, where the trustworthiness of an article becomes a big question for a few.

The Mind That Cracked

This has been one of the most interesting stories from my days of hobby programming. Like most of the hacking community, the biggest turn-on for writing programs is “Challenge”. Of the many challenges and victories, I specifically remember this as one of the most interesting.

I had won the software programming contest at a national level tech fest called Aureole, held at JEC. I was riding high on confidence and had the feeling that I could do anything with computers and programming. I even started considering myself one of the best programmers in the area. The only person I thought came close, was at par, or was better, was my senior in college, Kunal.

That summer I came across a software program called MindsReader by MindsArray, being distributed by a local coaching class professor. Two things made me look at the software the professor was using to distribute notes. Firstly, the software was being touted as uncrackable. Secondly, the notes being distributed on it were really good. To get those notes you had to buy the software and individual notes packages. Considering I had never paid for software in my life (till then), and having released all my projects as open source, it was unlikely that I was going to pay for this one either. All the motivation needed to crack it was in place.

Let the hacking begin…

I obtained a copy of MindsReader from a friend who had bought it and got it installed on my PC. At first I thought it would be as simple as installing with the same key that my friend got. Didn’t work. It generated some key and wanted me to take it to the center, or go on the web, and generate an activation key for it. I had seen enough such software to know that cracking this one would require me to decipher the key generation algorithm… in assembly language! I certainly didn’t have time for that but still gave it a try. Got a copy of SoftICE, a debugger, and pointed it to the points where I thought the software triggered the key generation algorithm. I was sure that the software was using some kind of hardware ID to tie the generated code to the activation code, because the activation code that one person got never ran on a machine other than the one the code was generated for.

Having spent hours trying to figure out which piece of code was actually generating the hardware ID, my head was aching from all the staring at assembly I had done for nearly 10 hours. My plan was simple: locate the hardware ID generation logic and hard-code it with the ID from my friend’s computer. Yet finding that place in the code was the toughest thing I had done in a while!

The Eureka moment!

I started looking at the problem from the perspective of a developer. If I had to generate a unique computer ID, how would I do that? Google. Of the many results I got, the one that jumped out at me was a simple DOS executable program that you could embed with your own program, reading the hardware ID off its output. The sun finally decided to shine on me, and I found the exact same DOS exe being used by MindsReader in its “bin” folder! The solution had been right there in front of me all the while, and I had been bothering myself with the uneasy, painful path. The uncrackable software was now to be cracked. The solution was simpler than one would think. When I ran the DOS exe, I captured its output on my friend’s machine. I fired up LECIDE (my self-developed C++ IDE) and wrote simple cout << statements to dump the exact same output as the DOS exe. I compiled and linked the code, named the exe the same as the original hardware ID generator, and replaced it in the bin folder, prayed to the Holy Spirit of computer programming, and fired up MindsReader. Re-entered the registration screen, tried again with the activation key I had borrowed from my friend, and the program ran! The feeling of looking at the program run was pure bliss… I had accomplished something that very few dared to try. Yes, it wasn’t as tough as cracking some crazy algorithm would be, but so what? I was the first one to have done it in my small sleepy town, and that made me proud enough…

Being a supporter of free software, I was now going to enable other peers of mine to run the same software. I would allow them to emulate hardware IDs and also give them a UI to adequately set up hardware id/activation key/notes package combinations. The reason this was helpful is that not everyone bought every notes package. So you could ask different people which notes package they had purchased and easily manage the emulation process for each package. I christened my tool very appropriately, CrackMind 🙂

Just before I was about to release this cracker out into the wild, I called Kunal and showed him what I had accomplished. After all, showing off your accomplishments to another able programmer is what drives the open source industry. I expected him to be surprised (or at least act it), but he shocked me by confessing that he was the developer MindsArray had hired to develop MindsReader. My sense of victory was dampened a bit, because now I couldn’t release it to everyone or that would get Kunal in trouble. So instead I just distributed the tool to my entire group, who reaped the benefits of being my friends. Ohh yeah, I reaped benefits out of them as well, and I still do!

Hobby Open Source Projects

As I have written on several occasions, I love programming. You can find the software I have written on this blog. I have just added a link to a program called CopyFat, with its installer and source code. Below I will mention the projects I have done in the past, and I will eventually post them with source code and installers.

  • LECIDE – Learners/Experts Configurable Integrated Development Environment
    • With LECIDE I intended to make something like the Eclipse IDE, much before it showed up. Obviously my effort was far more juvenile; to be honest, I started by creating a Notepad clone in VB6 and ended up creating a very complex IDE with syntax coloring, instant help tooltips, multiple compiler support, and BLADE – a drag and drop GUI designer for creating C++ dialog resources. Unfortunately, the way the code was written was horrible. If I wanted to do something new with it today, I would rather jump off a building than take a dive back into its code. However, the project was pretty extensive for the time it was built in and can be used to learn a lot. Download the executables and installer here. Download the source code here. Kindly run it in Windows 7 compatibility mode, because certain components don’t work on Vista/Win7.
  • CopyFat 2.0 – File copy program
    • You can read more about it here. It was one of the most useful utilities I ever wrote. It helped me and my friends on several occasions!
  • CyberBrowser – Tabbed IE based browser
    • A pretty simple browser with Tabbed browsing capability back in the days when IE was still a single window browser. A good reference for those wanting to learn how to use the WebBrowser control in VB6. Download the source code here.
  • Winsock Based FTP client
    • While learning socket programming, I implemented my own FTP client. The important thing to note is, I didn’t use any third party components to derive FTP functionalities. The code actually talks to the FTP server by opening ports and opening parallel channels for file downloads etc. Great stuff if you are learning socket programming in VB6! Get the source code right here.