Are you doing PageSpeed tests? Because you should.

How often do you open up a blog that has good content but a shitty loading time? Not very often. The content you provide to your readers is as important as how fast you can deliver it. The competition knows that all too well. That is why we have techniques like code minification and data compression to minimize loading time. To that effect, Google introduced a beautiful tool in its online app store called PageSpeed Insights, and we all should learn how to use it.

PageSpeed is among the best analysis tools a developer can use to evaluate web content. It gives insights on multi-tier issues: Javascript, CSS, media, the DOM, HTTP headers, you name it. A site that performs well on PageSpeed usually delivers content faster. Much of this involves tweaking your code to minimize the amount of data you need to send the end user in order to display the same web content as before, but faster. Some of it also deals with “best practices” that avoid bloated content and other issues on devices unsuitable for your code (like customising the viewport).

You can download PageSpeed from the Chrome Store using the link:

Let’s see a demo of what we can do using PageSpeed. Let’s create a test.html and a test.css file.



<!DOCTYPE html>
<html lang="en">
<head>
 <title>Chrome Test</title>
 <meta name="viewport" content="width=device-width,initial-scale=1" />
 <link rel="stylesheet" href="./test.css" />
 <link rel="stylesheet" href="" />
 <script src=""></script>
</head>
<body>
 <div>A div</div>
 <div>Another div</div>
 <div>Yet another div</div>
</body>
</html>

And test.css

div:nth-of-type(1) {
 font-size: 22pt;
 background-color: pink;
 text-align: center;
 color: blue;
}
div:nth-of-type(2) {
 -webkit-user-select: none;
 -moz-user-select: none;
 background-color: violet;
 text-align: right;
 color: white;
}
div:nth-of-type(3) {
 background-color: maroon;
 text-align: justify;
 color: green;
}

If you open up Chrome, press F12 to bring up Developer Tools, navigate to the Network section and then open this page, you will find something like this:


You can see that the Load time is quite high – 2.34s

So now we turn to PageSpeed for suggestions. Open the PageSpeed tab on the Developer Console and press the Analyze button once it appears. This will reload the page and give you a list of things you should do to improve your score. Let’s see what we got for this demo.


The suggestions were:

1. Minify Javascript

Notice that we included the uncompressed version of JQuery, which is why we got this message. We can get rid of it by using the minified version of JQuery. Change the script tag to:
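A tag like the following pulls the minified build from the official jQuery CDN (the version here is an assumption, since the exact one isn’t critical):

```html
<!-- minified jQuery from the official CDN; version assumed -->
<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
```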



2. Defer parsing Javascript

There is a possibility that the javascript content changes the DOM structure of the page. The browser has no way to know in advance whether this will happen, and hence is forced to load the entire javascript code before displaying the web page. In a Progressive Web App, we must assume that no Javascript runs beforehand. This assumption does not come without limitations on what we can achieve, but it makes a web page faster and more responsive. Therefore, to avoid a script blocking memory and resources, PageSpeed recommends that we add the script tags towards the end of the document (preferably, at the end of the body).

Another, perhaps more efficient, way to achieve non-blocking script execution is the async attribute. Adding async to the script tag tells the browser to continue building the DOM while downloading the script in parallel.
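For instance (the script name here is just a placeholder):

```html
<script async src="app.js"></script>
```

The browser fetches app.js in the background and runs it as soon as it arrives, without pausing DOM construction.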

3. Leverage Browser Caching

Caching directives are sent as HTTP headers from the web server to tell the browser which static content our site will be using every time it loads. For example, JQuery can be cached so that the user need not download it every time. Similarly, other scripts can be cached on the user’s machine for future use. Caching has grown much from its roots, thanks in particular to the new Web Cache API that lets sites control and manage cached resources. This is particularly helpful when you have large content, a part of which changes periodically while the rest stays the same. Using the Cache API, you can let the browser compare the version of a file the user has with the version you are currently serving, so that it updates only the necessary parts of the page without reloading the whole thing. Read more at Google Developer’s Docs.
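On the server side, this boils down to sending Expires or Cache-Control headers for your static assets. A sketch, assuming an Apache server with mod_expires enabled (the durations are just examples):

```apacheconf
# cache static assets for a month; revalidate HTML on every visit
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType text/html "access plus 0 seconds"
</IfModule>
```

On nginx or other servers the directives differ, but the resulting Cache-Control headers express the same idea.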

4. Minify CSS

Just like Javascript, CSS minification can go a long way towards saving data and thus ensuring smaller loading times. By switching to a minified version in our example, i.e. loading the Bootstrap minified version:

<link rel="stylesheet" href="" />

We expect to see a drop in the load time.
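To see what minification actually does, here is a rule like the ones in our test.css, before and after (whitespace, newlines and the final semicolon are stripped; nothing else changes):

```css
/* before */
div {
  font-size: 22pt;
  background-color: pink;
}

/* after minification */
div{font-size:22pt;background-color:pink}
```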

However, there is another issue that needs our attention. While CSS minification helps, we must also try to make our web page progressive. For instance, we would prefer our viewers to see a checkbox like this:


However, we can settle for at least something that looks more or less like this:


What we can do here is have 2 CSS files: a base CSS declaration and an advanced formatting CSS file. By placing the base CSS file in the head and the advanced CSS file at the end of the body, we ensure that DOM content loading is not blocked by the CSS formatting. A typical user on a fast connection won’t even notice the difference, while one on a slower connection is not blocked from viewing the actual content because of styling.
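A sketch of that split (the file names are assumptions):

```html
<head>
  <link rel="stylesheet" href="base.css" />  <!-- layout and readable defaults -->
</head>
<body>
  ...
  <link rel="stylesheet" href="fancy.css" /> <!-- heavy, purely cosmetic rules -->
</body>
```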

So after all this struggle, let us see where PageSpeed landed us.

The results are:


Here we see a divergence from the previous result. The loading time has grown much smaller, as we can all see. But what’s even more intriguing is that the DOM loading time (242ms) is now much smaller than the actual loading time (627ms). This means the screen was rendered well before all the content was loaded, so we have achieved better perceived performance. PageSpeed does help quite a bit, don’t you think?

These are also some of the fundamentals of Progressive Web Apps, which are growing more popular by the day thanks to their versatility and a design that delivers prettier UX without compromising on content.

Keeping all that in mind, and with PageSpeed at your side, you can make your web page far faster than you imagine and perhaps even improve your SEO ranking dramatically.

Thank you for reading. I am a little late on my other AI discussion but it will be arriving soon, so stay tuned.


How to make a NES-style game in HTML5

Most of you must have played cool MMORPGs or other browser-based games at some point in your life. Speaking of games, remember the Nintendo GameBoy games back in the good ol’ vintage-filtered days? Since we will be learning how to create a game, why not try and make one of those? How about “Racer”?


(Almost) the classic NES Racer

Design of the game

Let’s break the game down a bit, shall we?

  1. We have pixels, not our screen pixels but the game pixels (which, as you can see, are quite a bit larger).
  2. We have 2 states for each pixel: OFF and ON. The pixels defining the racer cars are in ON state while all others are in OFF state.
  3. We have a racer who is controlled by buttons (or additionally, keyboard).
  4. We have randomly generated bad guys.

Now while I personally would love to see you define a racer by code (i.e. draw the active pixels using the Canvas API), I will leave that bit out and use pre-packaged PNGs. Why? Because I already tried rendering the elements by hand. It was slow! And resource-intensive. In fact, I researched a bit and found that it’s often better to just use pre-made sprites instead of drawing them every frame. So, we need some assets.

Gathering Assets

You can download the boilerplate here.

Make a folder named ‘nesracer’ in your localhost root. Extract the zip into the folder. The structure of your folder should look like this:

  • root
    • nesracer
      • sprites
      • racer.html
      • LCD_Solid.ttf

Open up racer.html inside the folder in your favorite text editor. We need a function to draw our Sprites in every frame:

function Sprite(img, width, height){
  this.img = img;
  this.width = width;
  this.height = height;
}
Sprite.prototype = {
  draw: function(ctx, x, y){
    ctx.drawImage(this.img, x, y, this.width, this.height); //paint the sprite at (x,y)
  }
};
We don’t want our player going out of the screen. So we will create a function to prevent this from happening.

function calcBounds(x, y){
  cx = (x > (canvas.width - 50)) ? (canvas.width - 50) : (x < 0) ? 0 : x;
  cy = (y > (canvas.height - 50)) ? (canvas.height - 50) : (y < 0) ? 0 : y;
  return [cx, cy];
}
For the collision detection function, we are going to use a function by Joseph Lenton with a minor change. Credits to the original author. But if you want something a bit more sophisticated, you can always use Ninja Physics or some other physics engine.

/**
 * @author Joseph Lenton
 * @param first An ImageData object from the first image we are colliding with.
 * @param x The x location of 'first'.
 * @param y The y location of 'first'.
 * @param other An ImageData object from the second image involved in the collision check.
 * @param x2 The x location of 'other'.
 * @param y2 The y location of 'other'.
 * @param isCentred True if the locations refer to the centre of 'first' and 'other', false to specify the top left corner.
 */
function isPixelCollision( first, x, y, other, x2, y2, isCentred )
{
  // we need to avoid using floats, as we're doing array lookups
  x  = Math.round( x );
  y  = Math.round( y );
  x2 = Math.round( x2 );
  y2 = Math.round( y2 );

  var w  = first.width,
      h  = first.height,
      w2 = other.width,
      h2 = other.height;

  // deal with the images being centred
  if ( isCentred ) {
    // fast rounding, but positive only
    x  -= ( w/2  + 0.5) << 0;
    y  -= ( h/2  + 0.5) << 0;
    x2 -= (w2/2 + 0.5) << 0;
    y2 -= (h2/2 + 0.5) << 0;
  }

  // find the corners of the overlapping area
  var xMin = Math.max( x, x2 ),
      yMin = Math.max( y, y2 ),
      xMax = Math.min( x+w, x2+w2 ),
      yMax = Math.min( y+h, y2+h2 );

  // no overlap at all, so no collision
  if ( xMin >= xMax || yMin >= yMax ) {
    return false;
  }

  var xDiff = xMax - xMin,
      yDiff = yMax - yMin;

  // get the pixels out from the images
  var pixels  = first.data,
      pixels2 = other.data;

  // if the area is really small,
  // then just perform a normal image collision check
  if ( xDiff < 4 && yDiff < 4 ) {
    for ( var pixelX = xMin; pixelX < xMax; pixelX++ ) {
      for ( var pixelY = yMin; pixelY < yMax; pixelY++ ) {
        if (
            ( pixels [ ((pixelX-x ) + (pixelY-y )*w )*4 + 3 ] !== 0 ) &&
            ( pixels2[ ((pixelX-x2) + (pixelY-y2)*w2)*4 + 3 ] !== 0 )
        ) {
          return true;
        }
      }
    }
  } else {
    /* What is this doing?
     * It is iterating over the overlapping area,
     * across the x then y axis,
     * checking if the pixels are on top of each other.
     * What is special is that it increments by incX or incY,
     * allowing it to quickly jump across the image in large increments
     * rather than slowly going pixel by pixel.
     * This makes it more likely to find a colliding pixel early.
     */

    // Work out the increments,
    // it's a third, but ensure we don't get a tiny
    // slither of an area for the last iteration (using fast ceil).
    var incX = xDiff / 3.0,
        incY = yDiff / 3.0;
    incX = (~~incX === incX) ? incX : (incX+1 | 0);
    incY = (~~incY === incY) ? incY : (incY+1 | 0);

    for ( var offsetY = 0; offsetY < incY; offsetY++ ) {
      for ( var offsetX = 0; offsetX < incX; offsetX++ ) {
        for ( var pixelY = yMin+offsetY; pixelY < yMax; pixelY += incY ) {
          for ( var pixelX = xMin+offsetX; pixelX < xMax; pixelX += incX ) {
            if ( (( pixels === undefined ) ||
                  ( pixels2 === undefined )) ||
                 (( pixels [ ((pixelX-x ) + (pixelY-y )*w )*4 + 3 ] !== 0 ) &&
                  ( pixels2[ ((pixelX-x2) + (pixelY-y2)*w2)*4 + 3 ] !== 0 ))
            ) {
              return true;
            }
          }
        }
      }
    }
  }

  return false;
}
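One thing to note: isPixelCollision expects ImageData objects, not the Image elements we load. One way to get pixel data for a sprite (sizes assumed to be our 50×50 sprites) is to rasterize it once onto an offscreen canvas:

```javascript
//rasterize an Image onto an offscreen canvas and return its ImageData,
//which is the shape isPixelCollision expects for 'first' and 'other'
function getSpriteData(img, w, h){
  var off = document.createElement('canvas');
  off.width = w;
  off.height = h;
  var octx = off.getContext('2d');
  octx.drawImage(img, 0, 0, w, h);
  return octx.getImageData(0, 0, w, h);
}
//e.g. isPixelCollision(getSpriteData(rx,50,50), meX, meY, getSpriteData(re,50,50), bx, by, false)
```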

And finally, we write down our main code. I have explained each step in the comments, so you should be good to go. But you can always ask me for more details in the comments section.

/**
 * @author Sagnik Modak
 * @param px The pixels of NES in OFF state
 * @param rx The image of the user-controlled racer
 * @param re The image of the bad guys (basically, an inverted racer)
 * @param background The background pattern from px data
 * @param canvas Reference to the Canvas Element
 * @param ctx 2D Context of the Canvas
 * @param meX X position of player
 * @param meY Y position of player
 * @param timerfeel To store the last time any enemy moved (so it feels like NES)
 * @param lastrateupdated The last time the rate of enemy movement was increased
 * @param rate The current rate at which enemies move (increases slowly)
 * @param lost Boolean value for player status. Initially 'false'; if lost, set to 'true'
 */
var px = new Image();
px.src = 'sprites/racer-px.png'; //the off pixels on the screen
var rx = new Image();
rx.src = 'sprites/racer.png'; //the racer
var re = new Image();
re.src = 'sprites/racer-enemy.png'; //the enemies
var background, racer, badGuys = []; //storing the patterns makes it efficient
var canvas = document.getElementById('game-canvas'); //reference to the canvas
var ctx = canvas.getContext('2d'); //2D context of canvas

var meX = 80, meY = 140; //initial position of player
var timerfeel,lastrateupdated; //so that the FPS feels like NES
var rate = 10; //the rate at which enemies progress towards you.
var lost = false; //Cannot afford to lose before you start to win right?
px.onload = function (){
  createBackground(); //create background once off pixel loaded
};
rx.onload = function (){
  createRacers(); //create racer once racer image loaded
};
re.onload = function(){
  createBadGuys(); //create bad guys once bad guys image (WTF?) loaded
};
window.onload = function (){
  startGame(); //like it says, duh?
};
function createBackground(){
  background = ctx.createPattern(px,"repeat"); //create a pattern by repeating px
  drawBackground(); //draw the background
}
function drawBackground(){
  ctx.rect(0,0,canvas.width,canvas.height); //cover the whole canvas
  ctx.fillStyle = background; //set pattern as fill style
  ctx.fill(); //fill it
}
function createRacers(){
  racer = new Sprite(rx,50,50); //racer sprite
  racer.draw(ctx,80,140); //draw the racer
  document.addEventListener("keydown",function (event){
    redrawRacer(event); //when user presses any key, redraw racer
  });
}
function createBadGuys(){
  badGuys[0] = new Sprite(re,50,50); //create the bad guys
  badGuys[1] = new Sprite(re,50,50);
  badGuys[0].draw(ctx,0,165); //draw the bad guys
  badGuys[1].draw(ctx,50,0);
  badGuys[0].x = 0, badGuys[0].y = 165; //initial position of bad guy 1
  badGuys[1].x = 50, badGuys[1].y = 0; //initial position of bad guy 2
}
function redrawRacer(e){
  if(e.which == 37){
    meX -= 10; //user pressed Left Arrow Key
  }else if(e.which == 39){
    meX += 10; //user pressed Right Arrow Key
  }else if(e.which == 38){
    meY -= 10; //user pressed Up Arrow Key
  }else if(e.which == 40){
    meY += 10; //user pressed Down Arrow Key
  }else{
    meX = meX, meY = meY; //no point in changing anything, just included for clarity
  }
  c = calcBounds(meX,meY); //see if the new position is within bounds and if not, stop at the max or min allowed value.
  meX = c[0];
  meY = c[1];
}
function redrawRacerButtons(code){ //same as redrawRacer, but this time with the buttons on the NES controller.
  if(code == 4){
    meX -= 10; //Left button
  }else if(code == 2){
    meX += 10; //Right button
  }else if(code == 3){
    meY -= 10; //Up button
  }else if(code == 1){
    meY += 10; //Down button
  }else{
    meX = meX, meY = meY;
  }
  c = calcBounds(meX,meY);
  meX = c[0];
  meY = c[1];
}
function redrawBadGuy(){ //draw the bad guys at their current positions
  badGuys[0].draw(ctx,badGuys[0].x,badGuys[0].y);
  badGuys[1].draw(ctx,badGuys[1].x,badGuys[1].y);
}
function advanceBadGuys(rate){
  for(i = 0; i < badGuys.length; i++){
    y = badGuys[i].y + rate; //move this bad guy down the screen
    if(y > 300){ //to loop the bad guys around
      cy = 0;
      cx = Math.floor(Math.random()*13)*15; //random x position for bad guys
    }else{
      cy = y;
      cx = badGuys[i].x;
    }
    badGuys[i].y = cy;
    badGuys[i].x = cx;
    //pixel-perfect check between the racer's region and this bad guy's region
    if(isPixelCollision(ctx.getImageData(meX,meY,50,50), meX, meY,
                        ctx.getImageData(cx,cy,50,50), cx, cy, false)){
      lost = true; //the racer collided with the bad guys, you lost
    }
  }
}
function redraw(){
  drawBackground(); //repaint the road
  racer.draw(ctx,meX,meY); //repaint the racer
  if((Date.now() - timerfeel) > 1000){ //move the bad guys only once per second (so it feels like NES)
    advanceBadGuys(rate);
    timerfeel = Date.now();
  }
  if((Date.now() - lastrateupdated) > 30000){ //speed the bad guys up every 30 seconds
    rate += 10;
    lastrateupdated = Date.now();
  }
  redrawBadGuy(); //repaint the bad guys
  if(lost){
    gameOver(); //whoops, try again?
  }
}
function startGame(){
  lastrateupdated = timerfeel = Date.now();
  gameloop = setInterval(redraw,30); //the game loop
}
function restartGame(){
  meX = 80, meY = 140;
  rate = 10;
  lost = false;
  lastrateupdated = timerfeel = Date.now();
}
function gameOver(){
  ctx.fillStyle = 'black';
  ctx.textAlign = 'center';
  ctx.font = '32px LCDPixels';
  ctx.fillText("Game Over",canvas.width/2,canvas.height/2);
  ctx.font = '16px LCDPixels';
  ctx.fillText("Press Start to begin",canvas.width/2,(canvas.height/2)+20);
}

And there you have it. Save the file and open it in your browser (via localhost, of course).

You can see the entire source code at –

And that’s all for today, folks. Thank you for reading. If you want to learn how to code a game AI for a complex game, stay tuned till Monday. If you have any comments or queries, go ahead and tap it out in the comments down below. A rating would go a long way.

Yours truly,


Top 5 Presentation Creation Tools for your website

Developers, type no more! There are tools on the market that you probably didn’t know about that let you create awe-striking slides and carousels with just a few clicks. While Bootstrap carousels are in every designer’s toolbox, these plugins and apps work just as well and maybe even better. This list of the top 5 slide creation tools will give you a real headstart in your app development.


Reveal JS Home

Being innovative has never been easier on the web. With this presentation framework, you can build One-Page-Wonders. Forget about traditional carousels and sliders – this gives you a whole new level of smooth. Don’t take my word for it; see it to believe it. It also features a bunch of other capabilities like exporting to PDF, sharing online and writing content in Markdown. Read more on Github.


CSS Slider

CSS Slider App Demo

Create beautiful sliders with the CSS Slider app

Are you very picky about your plugins? Or do you simply want a purely CSS-based library? Either way, this is the right choice. It helps you create seamless CSS slides without a line of code (yes, you heard me right!). Downloadable for Windows and Mac, it lets you customize your slider from the bottom up using a WYSIWYG interface. And as icing on the cake, it also supports Retina displays for better resolutions. So fret no more. Download today!


Slicebox demo

Slicebox 3D rotate effect

While 2D gets the job done quite well, sometimes you need a 3D plugin to build a great design. The Slicebox JQuery plugin is the way to go. It’s small, it’s free and it’s Open Source! Read their docs over at Github and download the code.

MaterializeCSS Slider

MaterializeCSS Slider Demo

Right-aligned MaterializeCSS Fullscreen slider

MaterializeCSS is a beautiful CSS/JQuery framework for the material design popularized by Google. Are you a fan of material design? Do you want a carousel tailored to your site’s material blocks? MaterializeCSS understands. It provides one of the richest material 3D carousel experiences. They also have a plain slider feature in case you wanna keep it clean. You are missing out if you haven’t tried it yet.


SlidesJS Standard Demo

The standard SlidesJS demo

Want something simple but elegant? You are bound to fall for this eye-catchingly simple slideshow plugin for JQuery. Download the code at and start creating beautiful sliders.

If you find this post helpful, please don’t forget to like, share and comment. Thanks for reading.

A brief history of Torrents

Today, let’s talk about a decade-old technology that drives a third of all Internet traffic.

The internet has grown a lot. High-speed data cables, 4G connectivity, wireless routers, etc. have eased our online life quite a bit. In the midst of all these “connectivity augmentations”, torrents occupy an indelible place. For the average netaholic, a day without uTorrent is a day wasted. We need it for all kinds of software and media, especially the pirated ones. Torrent has become the brand logo for piracy in the contemporary media industry, and it owes this infamous title to torrent sharing websites like Kickass, ThePirateBay, etc. While whether or not Kickass was right to do so is a whole new post on its own, we are going to study the origins of this technology and not one specific (but massive) implication.


File sharing has been around ever since ARPANET and maybe even before. Torrents were not the first of their kind; there were several predecessors, including USENET, most of which were very short-lived. For a detailed history of file sharing before torrents you can head over to this TorrentFreak post. Wikipedia also has a timeline of file sharing, and BitTorrent is quite a bit down the list. Files are nothing but streams of data that can be sent and received like any other packet over the internet. But what makes file sharing harder than hosting a web page is this:

  1. Files usually contain more sensitive data than publicly visible web pages.
  2. A typical file is usually larger than a typical web page and therefore consumes more time and data.
  3. The internet protocols of the time were suited for web pages with limited markup and variation, but not so much for other files.

All these contributed to the need for a new protocol for file sharing. In 2001, programmer Bram Cohen (then, a student at the University at Buffalo, New York) rose to the cause and outlined a new protocol for faster and more reliable sharing of large files over the internet. He called it the BitTorrent Protocol (first released in April, 2001). He went on to make a client for this protocol which goes by the same name as the protocol. What makes it unique is the way in which it handles downloading of large files across the internet.

Before the BitTorrent protocol, a proprietary file owned by a web admin was served only from the admin’s own server. This meant that whenever N users wanted to download the file, the server had to open N connections and serve individual instances of the file to all the requesters. Also, since the data could only be sent sequentially over each connection, the download was linear. This posed 2 main problems:

  1. There was a lot of pressure on the central server, so its maintenance costs were high. It also meant that if the central server failed, the resource became unavailable to all.
  2. The file download rate was limited by the server’s ability to handle the open connections simultaneously.

Existing solutions to the aforesaid problems either involved additional hardware or setting up additional paid, trusted servers across the world. The BitTorrent Protocol reformed this completely. Instead of centralizing file downloads, the idea was to decentralize file sharing.


In order to understand how it worked, we first need to familiarize ourselves with some basic terminology.


Pieces

A file was divided into several pieces. The records for these pieces were stored in a separate, much smaller file (called a torrent file) along with additional metadata about the file (like content length, file size, etc.). Optionally, it contained a list of trackers (described below).


Seeders

A seeder was a computer connected to the internet that had the torrent file, the BitTorrent client, and the complete downloaded file contents. Seeders acted as servers which served the whole file over the internet. A peer could then download the entire file, or a piece of it, from a connection to a seeder.

The central computer has the entire file and is acting as a seeder in the above figure.


Leechers

A leecher was a computer with partial file contents. Suppose someone has downloaded 2 of the 5 pieces of a file. He could then relay those pieces to another peer while simultaneously downloading the remaining 3 (maybe even from another leecher!).


Trackers

As you can already tell, there is a lot of confusion going on over this hypothetical file of ours. Someone is seeding it, someone is leeching it; how will your client know which system to connect to? All this fuss could be avoided if some systems agreed to keep track of the file sharing (i.e. the status of the seeders and leechers). These came to be known as trackers, as they kept track of the seeders and leechers and directed new clients to make the appropriate connections for downloading a particular file. In most cases, the server acted as a tracker too.


Peers

Peers are the people who have the torrent file and are in the process of downloading its contents. The reason for a separate term is this: a peer may or may not be a leecher. If someone limits his upload speed to 0 Kb/s, he can still download the file contents but is no longer uploading anything to the swarm.


Swarm

An interconnected network of several seeders, peers and trackers sharing the same torrent is known as a swarm.

The BitTorrent Protocol

It worked like this:

  •  All computers having the BitTorrent Client and the Torrent file could share the file over the internet.
  • A seeder seeded the file or pieces of it to requesting clients
  • A leecher requested the pieces it needed while simultaneously (partially) seeding the pieces it had already downloaded.
  • Trackers kept track of the file transactions and connections. All clients were required to inform the Tracker of their activities and status.

This can be better illustrated by this animated GIF from Wikipedia.


Animation of protocol use: The colored dots beneath each computer in the animation represent different parts of the file being shared. By the time a copy to a destination computer of each of those parts completes, a copy to another destination computer of that part (or other parts) is already taking place between users. The tracker (server) provides only a single copy of the file, and all the users clone its parts from one another.

Now file sharing was decentralized, and downloading became non-linear (non-sequential), which meant faster downloads. Also, if the central server failed, connected peers could keep downloading the file from a different source. The potential security loophole came from the man-in-the-middle attack, whereby a malicious user could serve a modified version of the file to the requesting client. But this was soon rectified by adding cryptographic hashes of the file pieces to the torrent file. Since the torrent file was downloaded from a trusted source, one could be sure that the contents fetched from a remote seeder/leecher were legit just by verifying their hashes.

Anatomy of a Torrent File

A torrent file contains the following UTF-8 encoded information:

  • announce—the URL of the tracker
  • info—this maps to a dictionary whose keys are dependent on whether one or more files are being shared:
    • files—a list of dictionaries each corresponding to a file (only when multiple files are being shared). Each dictionary has the following keys:
      • length—size of the file in bytes.
      • path—a list of strings corresponding to subdirectory names, the last of which is the actual file name
    • length—size of the file in bytes (only when one file is being shared)
    • name—suggested filename where the file is to be saved (if one file)/suggested directory name where the files are to be saved (if multiple files)
    • piece length—number of bytes per piece. This is commonly 2^8 KiB = 256 KiB = 262,144 B.
    • pieces—a hash list, i.e., a concatenation of each piece’s SHA-1 hash. As SHA-1 returns a 160-bit hash, pieces will be a string whose length is a multiple of 160-bits. If the torrent contains multiple files, the pieces are formed by concatenating the files in the order they appear in the files dictionary (i.e. all pieces in the torrent are the full piece length except for the last piece, which may be shorter).
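Putting that together, the decoded metadata of a single-file torrent might look like this (all values are made up for illustration):

```javascript
//hypothetical decoded contents of a single-file torrent
var torrent = {
  "announce": "http://tracker.example.com/announce",
  "info": {
    "name": "example.iso",               //suggested filename
    "length": 1485881344,                //file size in bytes
    "piece length": 262144,              //256 KiB per piece
    "pieces": "<concatenated 20-byte SHA-1 hashes, one per piece>"
  }
};

//the number of pieces follows from length and piece length
var numPieces = Math.ceil(torrent.info.length / torrent.info["piece length"]);
console.log(numPieces); // 5669
```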

Magnet Links

A newer, more widely accepted method of downloading torrents is the Magnet URI scheme, whereby the cryptographic hash values are calculated by the client, not the server, and are served via a plain-text link only. This removes the need for a separate file to store the data while maintaining the security of the shared file contents. A typical magnet link is of the format:

magnet:?xt=urn:btih:<info-hash>&dn=<display-name>, plus some additional parameters.

Read more about Magnet URI Scheme here.
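To make the format concrete, here is a quick sketch of pulling the fields out of a magnet link with Javascript’s URLSearchParams (the hash below is made up, not a real torrent):

```javascript
var link = "magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=example.iso";

//everything after the '?' is an ordinary query string
var params = new URLSearchParams(link.slice(link.indexOf("?") + 1));
var infoHash = params.get("xt").replace("urn:btih:", ""); //the content's hash
var name = params.get("dn"); //the display name

console.log(infoHash); // "c12fe1c06bba254a9dc9f519b335aa7c1367a88a"
console.log(name);     // "example.iso"
```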

Analysis of Torrent File health

The health of a torrent is a very good estimate of how good or bad the torrent is. For example, if the number of seeders is very small compared to the number of leechers, then the torrent will surely take longer to download. However, the seeders in a swarm might be faster than the leechers, so the number of seeders is not a very accurate representation of torrent health. Even a smaller but faster group of seeders can guarantee better download speeds than a larger but slower group. Hence, a better representation of a torrent’s health is the ratio of the average number of bytes uploaded to the average number of bytes downloaded, known as the seed ratio of the torrent. Clearly, the seed ratio is dynamic. When the seed ratio of a torrent becomes zero, an average of 0 bytes of the torrent is uploaded per second, which means you can expect virtually no download speed. Such torrents are known as dead torrents. It is, however, possible to resurrect a dead torrent: a person who has the complete file and chooses to manually reseed it is called a reseeder.
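As a back-of-the-envelope illustration (the swarm statistics here are invented), the seed ratio is just that upload/download quotient:

```javascript
//made-up swarm statistics
var avgBytesUploaded = 250000;   //average bytes uploaded per second, swarm-wide
var avgBytesDownloaded = 500000; //average bytes downloaded per second, swarm-wide

var seedRatio = avgBytesUploaded / avgBytesDownloaded;
console.log(seedRatio); // 0.5 — below 1: downloads outpace uploads; 0 would mean a dead torrent
```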

A list of the best Torrent Clients

  1. uTorrent
  2. BitTorrent
  3. Vuze
  4. BitComet

Decentralization of Torrent Sharing Platforms

There are several emerging projects in this new domain of file sharing. A decentralized torrent sharing platform ensures that there is no one person or community to point a finger at. Magnet links were just the first step in this direction. Some of the most notable projects are –

  1. Open Bay: The Open Bay platform lets you create a local copy of some of the most infamous torrent sharing websites, like Kickass and ThePirateBay, on your PC. This local copy can also serve the purpose of a tracker and a seeder.
  2. RIVR: An open-source torrent search engine. It lets you scrape torrents off Kickass, ThePirateBay, Isohunt, etc. It is a relatively new project and the author wishes to make it a distributed search engine. Your contributions towards this project could be beneficial to all.
  3. TOR: While this project is not really related to torrents, it is an important part of the process. The Onion Router allows us to relay data from a different IP address belonging to the Tor network. This lets us stay anonymous while downloading data off the internet. Torrent sites are being blocked by governments in several countries; using a TOR circuit, one can download torrent resources which are otherwise blocked in their respective countries.
  4. eMule Project: An offshoot of the earlier eDonkey project with improved file sharing and nice GUI support. Active since 2002.
  5. Equabit: A new project aimed at decentralizing torrent sharing platforms, which means you share your part of the torrent database with the world wide web anonymously. This is also one of my own projects. Head over to the website’s Participation page to know more.

Further Reading

For a more technical study of Torrents and Analysis of Torrent as File-Sharing medium, you can refer to the following reads-

  1. BTWorld: towards observing the global BitTorrent file-sharing network 
  2. Daily BitTorrent Statistics ( from IKnowWhatYouDownload* )
  3. LimeWire – Wikipedia
  4. History of P2P Networks and File Sharing – WikiSpaces
  5. eDonkey Network

*This freaky website claims that it can record your download information over BitTorrent Protocol unless you are behind a VPN or other such services. Read more about it here at IFLScience!

If you have any queries, suggestions or comments on this topic please feel free to express the same in the comment section down below. I will surely answer them as soon as possible. Also, please do not forget to leave a rating for this post. If you like it, please share this post. Thank you for reading.

Optimization Algorithms (and their uses)

So I have not written a post in days, partly because of my workshops and exams and partly because I was deeply immersed in studying Machine Learning. One of the questions that provoked me was this: which aspect of Machine Learning can we truly call AI? Now I know this is not an original question and there must be hundreds of discussion threads on the topic, but I will try to illustrate what I have understood. From one perspective, there is no single ‘best’ Artificial Intelligence technique.

An Optimization Algorithm is one which finds you an optimal solution for a given problem within a particular problem domain. The problem domain may be fixed or variable, and the same base algorithm can often be applied to several problem domains. In other words, for a given list of possible ways to solve a problem (or come close to solving it), an optimization algorithm searches for the way that offers the least resistance and the most benefit. There are several such algorithms; in fact, there is an entire directory of them over at Wikipedia, and you can follow this link to read more about them.
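To make the idea concrete, here is a minimal sketch of one of the simplest optimization algorithms, hill climbing: repeatedly step to a neighbouring solution whenever it scores better. The function and names below are illustrative, not from any particular library.

```javascript
// Hill climbing: maximize f(x) by repeatedly moving to a better neighbour.
function hillClimb(f, x, step, iterations) {
    for (let i = 0; i < iterations; i++) {
        // look at the two neighbouring candidate solutions
        for (const c of [x - step, x + step]) {
            if (f(c) > f(x)) x = c;   // keep the better neighbour
        }
    }
    return x;
}

// Example objective: a parabola that peaks at x = 3.
const f = x => -(x - 3) * (x - 3);
const best = hillClimb(f, 0, 0.5, 100);   // → 3
```

The catch, of course, is that hill climbing only finds a *local* optimum; the more elaborate algorithms in that Wikipedia directory exist largely to escape that trap.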

Now, if you are familiar with these systems, you can argue that Neural Networks or Genetic Algorithms are suited to almost any problem domain; but the counter-argument, that they are (fundamentally) less suited to solving problems within one specific domain, seems to outweigh it. Suppose I have a game of chess. I could use a GA-based system and train it for days, and I know it still would not come anywhere near today's chess AIs, which are primarily based on Minimax or one of its variants. This is because their heuristic functions are designed for that game and that game only. You cannot expect a chess AI to play good poker, but you should expect it to play veteran chess. So the problem boils down to what our objective in building the autonomous system is: unless you want to invent a Terminator, you pick the algorithm best suited for the task at hand. When I attended my first AI workshop, I saw the lecturer train a neural network with a carefully chosen sample, yet he kept calling it a 'random sample'. At first I thought he was cheating us, because it all seemed counter-intuitive to me. So I went straight to him and asked. He gave me what I believe is a very precise answer –

“Would I teach you the letters A, B, C and D today and expect you to write G and H tomorrow?”

I realised that even a random sample must follow certain criteria. The better your training set, the better the results. And as a generalisation of that hypothesis: the better suited your algorithm and (if applicable) the better your training set, the better the outcome on the test set.
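The chess argument above can be made concrete with a toy Minimax. The sketch below runs over an explicit two-ply game tree rather than a real board (the tree and its leaf scores are invented for illustration); leaves hold heuristic scores from the maximizing player's point of view, and the opponent is assumed to always pick the move that is worst for us.

```javascript
// Minimax over an explicit game tree. Inner nodes are arrays of children;
// leaves are numbers (heuristic evaluations for the maximizing player).
function minimax(node, maximizing) {
    if (typeof node === 'number') return node;          // leaf: heuristic value
    const scores = node.map(child => minimax(child, !maximizing));
    return maximizing ? Math.max(...scores) : Math.min(...scores);
}

// Two plies: we move (max), then the opponent replies (min).
const tree = [[3, 5], [2, 9], [1, 8]];
const value = minimax(tree, true);   // → 3: the best worst-case outcome
```

All the game-specific intelligence lives in the heuristic that produces the leaf scores, which is exactly why a chess evaluator is useless at poker.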

I have written seven programs in the past month and shared the two most interesting ones over at my website. I have provided the links below.

Please note that I have just started learning AI, so there might be numerous faults in my understanding. I would be grateful if you could help me rectify my misconceptions, so if you have any queries or suggestions, please do leave them in a comment below. And finally, thank you for reading. Kudos! 😉

Grapher (A JAVA desktop application to solve mazes or plot paths)

A tic tac toe game using MaxiMin algorithm and heuristic determination

Optimizing Page Speed by reducing headers

I have not written anything for quite a while now, so I decided to put my cursor down on the screen again. Web pages are becoming increasingly beautiful. All those eye-catching banners and color combinations, matched with animations and 3D transitions, make us want more out of the internet than our grandparents could ever have imagined. Accordingly, the complexity of the code behind these styles is growing faster than ever.

We have media at our disposal! Images, videos, audio clips, several fonts (each more beautiful than the last), 3D rendering frameworks, and so on. Standing where we are, we need to load several files on which our original hypertext file depends, and each of those requests carries its own HTTP headers. Most of you must already be familiar with HTTP headers. For those who are not, I shall not be discussing them here, but you can read about them here or watch this video.

In this post, I will discuss a technique to reduce HTTP requests (and their headers) in order to decrease DOM loading time and exclude unused data. First, let's understand the problem with a classic example.

CSS Fonts

If you are reasonably familiar with CSS stylesheets, you are probably also familiar with CSS3 fonts. Now, for any big website, we may require different fonts on different pages depending on our needs. But consider this: what if I have, say, five or more pages with a similar layout and design that can be defined in just one CSS file, but each requires different fonts? You might say the answer is to include the required fonts using link tag(s) and include the common CSS file on every page. But what about a dynamic single-page app, where you never know which fonts the user wants, or when?

What I want to do is load whatever is mission-critical first and then, depending on whether the user needs a particular file, load it onto the DOM.

OK, so how do we do this?

The answer is easy: JavaScript. The exact code will vary with your needs, but the concept remains the same. Load if and when needed.

Let’s look at a sample CSS file:


.f-dreamy {
  font-family: 'Josefin Sans', sans-serif;
}
.f-office {
  font-family: 'Poppins', Roboto Slab, Times, sans-serif;
}
.f-hand {
  font-family: 'Satisfy', cursive;
}
.f-spray {
  font-family: 'Sedgwick Ave Display', cursive;
  font-size: larger;
}
.f-blog {
  font-family: 'Roboto', Helvetica, Arial, sans-serif;
}

Now most of these are custom fonts, i.e., they are generally not packaged with the OS by default. We need to load them from an external source, which in this case is the Google Fonts library. Google Fonts offers two methods for this.

  1. We can add the link tag given by Google Fonts to the document head.
  2. We can use an @import CSS statement to import the required fonts.

The problem with either of these is that they load all the fonts irrespective of whether we need them. Suppose one page uses only the f-hand class and none of the others, and another uses only the f-office class and none of the others. In either case, all the fonts are loaded beforehand, even though most go unused.

The JS Way

Now, to the solution:

For this specific example, the JS code is:


function updateTyp(){
    // build a <link> to the Google Fonts CSS API, listing only the
    // font families actually used on the current page
    var imp = '<link title="TypFonts" href="https://fonts.googleapis.com/css?family=';
    if(document.body.querySelectorAll('.f-dreamy').length > 0){
        imp += 'Josefin+Sans|';
    }
    if(document.body.querySelectorAll('.f-office').length > 0){
        imp += 'Poppins:300|';
    }
    if(document.body.querySelectorAll('.f-hand').length > 0){
        imp += 'Satisfy|';
    }
    if(document.body.querySelectorAll('.f-spray').length > 0){
        imp += 'Sedgwick+Ave+Display|';
    }
    if(document.body.querySelectorAll('.f-blog').length > 0){
        imp += 'Roboto:300|';
    }
    if(document.body.querySelectorAll('.dropcap').length > 0){
        imp += 'Alegreya+SC|';
    }
    if(document.body.querySelectorAll('blockquote').length > 0){
        imp += 'Quattrocento+Sans|';
    }
    if(document.body.querySelectorAll('h1').length > 0){
        imp += 'Raleway|';
    }
    imp += '" rel="stylesheet">';
    // replace any previously injected font link before adding the new one
    var old = document.querySelector('link[title=TypFonts]');
    if(old !== null){
        old.parentNode.removeChild(old);
    }
    document.getElementsByTagName('head')[0].innerHTML += imp;
}
You get the picture, right? We append each font to the link tag's URL only when we need it.
But how do I make this work for dynamic pages?
For that, we need to detect changes in the DOM and execute the above function whenever a change is detected. The following code does just that, and has been proudly and professionally ripped off from this Stack Overflow answer.

var observeDOM = (function(){
    var MutationObserver = window.MutationObserver || window.WebKitMutationObserver,
        eventListenerSupported = window.addEventListener;

    return function(obj, callback){
        if( MutationObserver ){
            // define a new observer
            var obs = new MutationObserver(function(mutations, observer){
                if( mutations[0].addedNodes.length || mutations[0].removedNodes.length )
                    callback();
            });
            // have the observer watch obj for changes in its children
            obs.observe( obj, { childList:true, subtree:true });
        }
        else if( eventListenerSupported ){
            // fallback for browsers without MutationObserver
            obj.addEventListener('DOMNodeInserted', callback, false);
            obj.addEventListener('DOMNodeRemoved', callback, false);
        }
    };
})();

observeDOM(document.getElementsByTagName('body')[0], function (){
    updateTyp();
});

I have used fonts as the example in this demonstration, but the same concept can be extended to images, videos, stylesheets and other assets. In fact, loading any such resource on demand works much like any AJAX call.
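As a sketch of that generalization (the selectors, file names and helper function here are all hypothetical), the decision of what to fetch can be kept in a small pure function, with the DOM query and the tag injection around it:

```javascript
// Map CSS selectors to the resource each one depends on (illustrative names).
const resources = {
    '.gallery':   'gallery.css',
    '.chart':     'charts.js',
    '.map-embed': 'map-widget.js'
};

// Pure helper: given a predicate that says whether a selector is present
// on the page, return only the URLs that actually need to be fetched.
function neededResources(map, isPresent) {
    return Object.keys(map).filter(isPresent).map(sel => map[sel]);
}

// In the browser, the predicate would be
//   sel => document.body.querySelectorAll(sel).length > 0
// and each returned URL would be appended as a <link> or <script> tag,
// exactly as in the font example.
const urls = neededResources(resources, sel => sel === '.chart');
```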

Note that for the DOM observer to work, the script should be appended only at the end of the body; otherwise it will not be able to observe any DOM element(s) below its line of declaration, which can lead to unexpected and often annoying results.

If you like the post, give it a 5-star rating below. If you want to add any suggestions or corrections, please leave a comment below. And do read the other posts on this blog. Thank you for reading.

Some CSS3 features you probably never heard of

Hey folks! With CSS4 on the way to the marketplace and the buzz surrounding it, we naturally don't have much more to discuss about CSS3 and its predecessors. But let us look back and see what we might have missed about CSS3.

CSS3 introduced many new features and was a great improvement over its popular predecessor, CSS 2.1. It added media queries, namespaces, gradients, animations, and more. In this post I am going to talk only about some of its most rarely seen features.

@supports
Browser features differ widely from device to device and browser to browser. Most of the time, the browser engines define vendor-specific prefixes (like -webkit-, -moz-, etc.) to allow access to incomplete versions of a feature. But sometimes even that is not enough; sometimes there is simply no way around. In such cases, we can use the @supports query to decide whether to use a feature or to fall back to another method. For example,

@supports (condition 1) or (condition 2) ... (condition N) {
  /* properties applied when the condition(s) is/are supported */
}

@supports not ((condition 1) or (condition 2) ... (condition N)) {
  /* properties applied when the condition(s) is/are not supported */
}
Read on at MDN Docs

vw and vh

We all know about responsive typography using media queries. But there is another way to scale text with the display: the vw and vh measurement units. As opposed to the traditional px and em, these are relative to the viewport's width and height, so they scale up and down as the viewport does. There are also two widely supported variations, vmax and vmin, which take the larger or the smaller of the viewport's two dimensions respectively.
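The units are just percentages of the viewport size: 1vw is 1% of the viewport width and 1vh is 1% of its height. A sketch of the conversion to pixels (the function names are mine):

```javascript
// 1vw = 1% of viewport width, 1vh = 1% of viewport height.
function vwToPx(vw, viewportWidth)  { return (vw / 100) * viewportWidth; }
function vhToPx(vh, viewportHeight) { return (vh / 100) * viewportHeight; }
// vmin/vmax use the smaller/larger viewport dimension.
function vminToPx(v, w, h) { return (v / 100) * Math.min(w, h); }
function vmaxToPx(v, w, h) { return (v / 100) * Math.max(w, h); }

// On a 1280×720 viewport, a 5vw font is 5% of 1280px wide.
const px = vwToPx(5, 1280);
```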

Read more at MDN Docs

CSS Speech Module

This feature is still under consideration and most known browsers do not support it yet. This module, according to the documentation, is essentially a wrapper over SSML (Speech Synthesis Markup Language). The syntax is very simple. Let's take a small example (the selector here is just a placeholder):

.spoken-intro {
  voice-family: male;
  voice-stress: moderate;
  cue-before: url(./pre.wav);
  voice-volume: medium 6dB;
}

Read more at the Docs

Multiple Backgrounds

CSS3 support for multiple backgrounds is becoming increasingly common. Check the list of support here. The multiple-backgrounds syntax lets you set several backgrounds on an element, stacked one on top of the other. The syntax is as follows:

background: background1, background2, ... ;

Check it out at the MDN Docs

Background Blend Mode

Now that we have talked about adding several backgrounds, will they always look good over one another? To help ensure that they do, there is another property, background-blend-mode, which lets us define how the backgrounds blend with each other.

Read more at MDN Docs

3D Transforms, Translates and Scales

3D transforms are already quite popular, and most probably you already know or have guessed what they are. Anyway, I will state the obvious just in case: 3D transforms allow us to make CSS transformations... in 3D. Just one more axis, the Z-axis, but it can create cool effects! There are a few interesting sub-functions: translate3d(), rotate3d() and scale3d(). Under the hood, these are all shorthands for a 4×4 transformation matrix.

Read more at MDN Docs – scale3d(), MDN Docs – rotate3d() and MDN Docs – translate3d()

Backface Visibility

Since we already talked about 3D transformations in CSS3, we might as well talk about this property. It defines whether the back face of an element should be visible when it is rotated in 3D space. Browser support is currently increasing at a steady rate, and the property should be available in most major browsers.

Read more at MDN Docs

Appearance
The appearance property is now supported in WebKit, and vendor-prefixed versions are available for other engines. It allows us to change the default appearance of an HTML element to suit our design needs. There are virtually no restrictions: from inputs to buttons, it can modify any native HTML element.

Read more at MDN Docs

::backdrop
The CSS ::backdrop pseudo-element can be used to create a backdrop without any fancy code or CSS hacks. An example usage, here attached to a dialog element, is as follows:

dialog::backdrop {
  background: rgba(255, 0, 0, .25);
}

Read more at MDN Docs

Basic Shapes

How do you create a triangle using CSS? Zeroing out the dimensions, setting some borders to transparent, blah blah... right? Well, how do you make a hexagon? That would surely be a lot of work with that technique. Or you can just switch to the basic shapes that can be set using the clip-path or shape-outside properties. They are cleaner and more interesting.
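The basic-shape functions also lend themselves to being generated programmatically. As a sketch (the helper is mine, not a browser API), here is a function that builds a clip-path polygon() string for any regular N-gon, with percentages relative to the element's box:

```javascript
// Build a clip-path polygon() for a regular polygon with `sides` vertices,
// inscribed in the element's box and starting from the top edge.
function regularPolygon(sides) {
    const points = [];
    for (let i = 0; i < sides; i++) {
        const angle = (2 * Math.PI * i) / sides - Math.PI / 2; // start at top
        const x = 50 + 50 * Math.cos(angle);  // percentages of the box
        const y = 50 + 50 * Math.sin(angle);
        points.push(x.toFixed(1) + '% ' + y.toFixed(1) + '%');
    }
    return 'polygon(' + points.join(', ') + ')';
}

const hexagon = regularPolygon(6);
// In the browser: element.style.clipPath = hexagon;  // no border hacks needed
```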

Read more at MDN Docs

Print Styles
This feature is certainly not untapped, but I added it just in case. Print stylesheets allow your document to look as good on printed paper as it does on the screen.

Read more at MDN Docs

User Zooming

The user-zoom property lets you decide whether or not a user can change the zoom factor defined by the viewport. Zooming can break the page layout and can be quite annoying at times; this property lets you prevent that from happening.

Read more at MDN Docs

Object Fit

The object-fit property controls how an image or video is resized to fit its container element, letting you choose whether the media is scaled, cropped or stretched to fill the available box.
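The arithmetic behind the two most common values is simple: cover scales the media by the larger of the two width/height ratios (filling the box and cropping the overflow), while contain scales by the smaller one (fitting entirely, possibly leaving gaps). A sketch with illustrative function names:

```javascript
// Scale factor applied by object-fit: cover — fill the box, crop overflow.
function coverScale(mediaW, mediaH, boxW, boxH) {
    return Math.max(boxW / mediaW, boxH / mediaH);
}
// Scale factor applied by object-fit: contain — fit entirely inside the box.
function containScale(mediaW, mediaH, boxW, boxH) {
    return Math.min(boxW / mediaW, boxH / mediaH);
}

// A 400×300 image in a 200×200 box: contain halves it, cover scales by 2/3.
const contain = containScale(400, 300, 200, 200);
const cover = coverScale(400, 300, 200, 200);
```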

Read more at MDN Docs

Cue Change

The CSS ::cue pseudo-element lets you style the text cues in a WebVTT track.

Read more at MDN Docs

Caret Coloring

Ever wanted to change the color of the caret (the blinking pointer showing where you are typing) in a text field? You can use the caret-color property to change it to just about any named color, hex value or rgb value.

Read more at MDN Docs

Will Change

Worried about how the browser will optimize your content? The will-change property gives the developer partial control over the browser's optimization strategies by declaring in advance which properties are expected to change.

Read more at MDN Docs

Orientation
On handheld devices, orientation is a big issue: a webpage might look good in portrait mode, but the same page might not be so beautiful in landscape. Using the orientation descriptor of @viewport, you can (to some extent) force the page to be viewed in one specific mode, either portrait or landscape:

@viewport {
  /* one of: */
  orientation: auto;
  orientation: portrait;
  orientation: landscape;
}

Font Language Override

Does anyone here speak Azkabanian? Probably not. But hey, I want to write in Azkabanian! Oh, wait: Hogwartsian is somewhat like Azkabanian and is supported in this browser. But I guess there is no provision to use Hogwartsian conventions for Azkabanian text, right? There is. Using the font-language-override property, you can tell the browser to typeset an unsupported language using the conventions of a similar supported one. And although Azkabanian and Hogwartsian are just imaginary languages whose names are ripped off from the world of Harry Potter by the infamous me, it works just as well with real languages.

Read more at MDN Docs

Fallback Soldiers !

When nothing works in the feature battle, save yourself! The last thing anybody wants is bad PR over nothing more than unsupported features. For fallbacks, we most often have to lean on JavaScript, but that should not be considered bad. After all, these features exist only to beautify the presentation of your content, not the content itself. Still, presentation carries points too, right?

Those are all the features I could add to the list. Try them out if you haven't, and let me know how it goes (a link to your work would be awesome). If you want to add something to this list, let me know in the comments section. I am open to suggestions and corrections. And as always, thanks for reading.