Wednesday, 4 May 2016

Angular 1.5 Components

A really good article on Angular Components.

I am copying it here as-is, in case the original is ever deleted. But please read the original one; it has better UI and readability.

Components in angular 1.5.x
Posted on May 1, 2016 by Kristof Degrave
Source: kristofdegrave

In my previous article I described my view on a component based architecture. In this post I want to focus on how I applied this in a real life application with angular. Since the 1.5 version was released, components have become first class citizens just like in Angular 2, Ember 2, React,… On the other hand components aren’t so revolutionary. We have already known them for a long time as directives, but we never used them like this. Generally, we would only use directives for manipulating the DOM directly or in some cases to build reusable parts.
The components in angular 1.5 are a special kind of directive with a bit more restrictions and a simpler configuration. The main differences between directives and components are:
Components are restricted to elements. You can’t write a component as an attribute.
Components always have an isolated scope. This enforces a clearer dataflow. No longer will we have code that will change data on shared scopes.
No more link functions, but only a well-defined API on the controller.
A component is defined by an object and no longer by a function call.
Well-defined Lifecycle

Components have a well-defined lifecycle:
$onInit: This callback is called on each controller after all controllers for the element are constructed and their bindings initialized. In this callback you are also sure that all the controllers you require on are initialized and can be used. (since angular 1.5.0)
$onChanges: This callback is called each time the one-way bindings are updated. The callback provides an object containing the changed bindings with their current and previous values. Initially this callback is called before $onInit with the original values of the bindings at initialization time. This is why it is the ideal place for cloning the objects passed through the bindings, to ensure modifications will only affect the inner state.
Please be aware that the changes callback on the one-way bindings for objects will only be triggered if the object reference changes. If a property inside the object changes, the changes callback won’t be called. This avoids adding a watch to monitor the changes made on the parent scope (works correctly since angular 1.5.5)
$postLink: This is called once all child elements are linked. This is similar to the (post) link function inside directives. Setting up DOM handlers or direct DOM manipulation can be done here. (since angular 1.5.3)
$onDestroy: This is the equivalent of the $destroy event emitted by the scope of a directive/controller. The ideal place to clean up external references and avoid memory leaks. (since angular 1.5.3)
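As a sketch, the hooks above can be hung on a plain controller function. The component and binding names below are invented for illustration, and JSON-based cloning stands in for angular.copy so the snippet runs on its own:

```javascript
// Sketch: lifecycle hooks on a controller for a hypothetical "userCard"
// component. In an AngularJS 1.5+ app it would be registered roughly as:
//   module.component("userCard", { bindings: { user: "<" }, controller: UserCardController });
function UserCardController() {
  var $ctrl = this;

  $ctrl.$onChanges = function (changes) {
    // runs before $onInit on the first call, and on every one-way binding
    // change afterwards; clone here so later edits stay inside the component
    if (changes.user) {
      $ctrl.user = JSON.parse(JSON.stringify(changes.user.currentValue));
    }
  };
  $ctrl.$onInit = function () {
    // all bindings (and required controllers) are initialized at this point
    $ctrl.ready = true;
  };
  $ctrl.$postLink = function () {
    // child elements are linked; DOM handlers can be set up here
  };
  $ctrl.$onDestroy = function () {
    // release external references and handlers here to avoid memory leaks
  };
}
```

AngularJS calls these hooks itself once the controller is registered; calling them by hand, as a unit test would, exercises the same contract.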
Well-defined structure

Components also have a clean structure. They consist of 3 parts:
A view, this can be a template or an external file.
A controller which describes the behaviour of the component.
Bindings, the in- and outputs of the component. The inputs will receive the data from a parent component and by using callbacks, will inform the parent component of any changes made to the component.

Because components should only modify their internal state and should never directly modify any data outside their own scope, we should no longer make use of the commonly used two-way binding (bindings: { twoWay: "=" }). Instead, since angular 1.5, we have a one-way binding for expressions (bindings: { oneWay: "<" }).
The main difference with the two-way binding is that the bound properties won't be watched inside the component. So if you assign a new value to the property, it won't affect the object on the parent. But be careful: this doesn't apply to the fields of the property, which is why you should always clone the objects passed through the bindings if you want to change them. A good way to do this is working with named bindings; this way you can reuse the name of the bound property inside the component without affecting the object in the parent scope.

module.component("component", {
  template: "<div>{{$ctrl.item}}</div>",
  bindings: {
    inputItem: "<item"
  },
  controller: function(){
    var $ctrl = this;
    $ctrl.$onChanges = function(changesObj){
      if(changesObj.inputItem){
        $ctrl.item = angular.copy(changesObj.inputItem.currentValue);
      }
    };
  }
});

Another way to pass data is by using the “@” binding. This can be used if the value you are passing is a string value. The last way is using a “&” binding or a callback to retrieve data through a function call on the parent. This can be useful if you want to provide an auto complete component with search results data.
For exposing the changes inside the component we can only make use of the “&” binding. Calling a callback in the parent scope is the only way that we can and should pass data to the parent scope/component.
Let’s rephrase a little:
“=” binding (two-way) should no longer be used to avoid unwanted changes on the parent’s scope.
“<” binding (one-way) should be used to retrieve data from the parent scope passed as an expression.
“@” binding (string) should be used to retrieve string values from the parent scope.
“&” binding (callback) can be used to either retrieve data from the parent scope or be used as the only way to pass data to the parent scope.
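As a sketch of that last rule, here is a controller for a hypothetical itemEditor component (all names invented for illustration): data comes in through a "<" binding and leaves only through an "&" callback. JSON-based cloning stands in for angular.copy so the snippet is self-contained:

```javascript
// Hypothetical "itemEditor" controller. Registration would look like:
//   module.component("itemEditor", {
//     bindings: { item: "<", onSave: "&" },
//     controller: ItemEditorController
//   });
function ItemEditorController() {
  var $ctrl = this;

  $ctrl.$onChanges = function (changes) {
    // clone the one-way bound object so edits never leak to the parent scope
    if (changes.item) {
      $ctrl.draft = JSON.parse(JSON.stringify(changes.item.currentValue));
    }
  };

  $ctrl.save = function () {
    // "&" callbacks are invoked with an object of named parameters;
    // the parent template would wire it up as: on-save="$ctrl.handleSave(item)"
    $ctrl.onSave({ item: $ctrl.draft });
  };
}
```

The component edits only its own draft copy, and the parent learns about changes exclusively through the callback.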

The components way of working in angular 1.5 narrows the gap for migrating your code to an angular 2 application. By building your apps in the angular 2 style you are already thinking on the same level, which will ease the migration path as you shift away from the controller way of working.
Directives will still exist alongside components. There will still be cases where you want to use attributes instead of elements, or need a shared scope for UI state. But in most cases components will do the trick.
Kristof Degrave, one of our experts, regularly blogs at kristofdegrave.

Saturday, 30 April 2016

A little history about web browsers and layout engines.

Browsers’ story

Usage share on web

Usage share of web browsers

Share of non-mobile browser traffic on WikiMedia (Feb ‘14)


KHTML and forks



Blink is the rendering engine used by Chromium.

Blink’s Mission:

To improve the open web through
technical innovation and good citizenship

General Idea Diagram

KHTML (Konqueror) → forked into WebKit (Safari, Chrome) → forked into Blink (Chrome, Opera).

Trident and forks


Version History

Trident | MSHTML.dll | IE     | IE Mobile
NA      | 4.x        | IE 4   | NA
NA      | 5.x        | IE 5   | NA
NA      | 5.5.x      | IE 5.5 | NA
NA      | 6.x        | IE 6   | NA
NA      | 7.x        | IE 7   | NA
3.1     | 7          | NA     | 7
4       | 8          | 8      | NA
5       | 9          | 9      | 9
6       | 10         | 10     | 10
7       | 11         | 11     | 11



“Tasman is a discontinued layout engine developed by Microsoft for inclusion in the Macintosh version of Internet Explorer 5. Tasman was an attempt to improve support for web standards, as defined by the World Wide Web Consortium. At the time of its release, Tasman was seen as the layout engine with the best support for web standards such as HTML and CSS.”

Tantek Çelik, as Software Development Lead, led the team that developed the Tasman engine.


General Idea Diagram

Trident (IE 4 … IE 10, IE 11) → forked into Tasman (IE 5 for Mac) and later into EdgeHTML.

Microsoft Edge

Replaced IE 11 and IE Mobile.


IE 8 Architecture.

The architecture of IE8. Previous versions had a similar architecture, except that both tabs and the UI were within the same process. Consequently, each browser window could have only one "tab process".


Presto was the layout engine of the Opera web browser for a decade.

Presto (Opera 7+) → replaced by Blink (Opera 15+).

Presto on WikiPedia



  1. Mozilla considered WOFF a superior alternative to SVG fonts. That's why FF4 failed 4 SVG-based tests in the ACID3 test suite and initially scored 96/100.

Saturday, 16 April 2016

Accessing array through enum values

Originally posted on Saturday, 11 January 2014 as Array index through enum in C

Accessing array through enum values

We have been taught that an array size or index in C can be either a literal or a #defined value.

But we can also use enum values to specify the size of an array at compile time.


#include <stdio.h>

enum indices{
  A,
  B = 3,
  C,
  D,
  E
};

int main(){
  int arr[A+E] = {1,2,3};
  int i = 0;
  for( i = 0; i < A+E; i++)
    printf("[%d]=%d\n", arr[i]);

  return 0;
}
Output would be 1 2 3 0 0 0

Err… there's a bug in the above code. It compiles without any warning, but the bug is there. Can you spot it?

Without shooting that bug down, all we will get is garbage.

So, we saw that we can use enum'd values while declaring the size of arrays. That's kind of a no-brainer, because enum values in their entirety are constants. But it is still better than using #define UB 6

Try this simple problem:


#include <stdio.h>

enum indices{
  A,
  B = 3,
  C,
  D,
  E
};

int main(){
  int arr[A+E] = {1,2,3};
  enum indices index = E;
  printf("%d\n", arr[index>>1 +1]);
  return 0;
}

Here, look at the line: printf("%d\n", arr[index>>1 +1]);

If index is 6, the output of index>>1 should be 3, and then 3+1 = 4. So the output should be 0, since the array contains a 0 at the 4th index (5th position).

But output is actually 2.


Facts that are working in the above code:

  1. After assigning a value to enum member, all the members that follow it will get assigned incremental values from there.

  2. if array size is bigger than the number of the elements provided in the initializer list, the remaining members would be 0, here in case of int.

  3. The addition operator has higher precedence than the shift operators, so index>>1 +1 is evaluated as index >> (1+1), that is 6 >> 2 = 1, and arr[1] is 2.

  4. enum members are constant values.

With that, I wish good luck to you.

Saturday, 9 April 2016

EcmaScript Learnings - 10th of April, 2016

String reversal


const means read-only

const name = "Anubhav";
name = "AS";

The above code will result in an error: name is read-only.

Functions that have formal parameters with default values create a second function Environment Record.

function someFun (arr, index, get = function (){ return arr[index]; }) {
    arr = [1, 2, 3, 4];
    console.log(get()); // fetches 3
}
someFun([1, 3, 5, 7, 9], 2);

The code above fetches 3 out of arr, even though 5 was expected from the call. Contrast it with the code below, which fetches 5, but for a different reason:

function someFun (arr, index, get = function (){ return arr[index]; }) {
    var arr = [1, 2, 3, 4];
    console.log(get()); // fetches 5
}
someFun([1, 3, 5, 7, 9], 2);

Reason for that: var creates a variable arr in the second function Environment Record, so the original arr formal parameter is not updated.

The get function has arr bound to the formal parameter arr, which is in the first function Environment Record.

Function parameters default values

  1. While defining functions, you can make formal parameters have default values.

    function sum(a = 10, b = 20) {
        return a+b;
    }
    sum(10, 20); // 30
    sum();       // 30
    sum(1, 10);  // 11
    sum(1);      // 21
  2. Passing undefined will trigger default behavior

    sum (undefined);    // 30
    sum (undefined, 1); // 11
    sum (1, undefined); // 21
  3. Defaults can refer to earlier formal parameters.

    function isSquare(x = 10, xSq = x*x) {
        console.log(x*x === xSq);
    }
    isSquare();     // true
    isSquare(2, 4); // true
    isSquare(2, 9); // false

Done for the day.

Reversing a string in ES2015

Really how to reverse a string in ES2015

let name = "Anubhav Saini";
let reversedName = [...name].reverse().join(""); // "iniaS vahbunA"

I know!
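One reason to reach for the spread operator here: a sketch showing that [...str] iterates by code points, so characters outside the Basic Multilingual Plane survive reversal, while str.split("") would tear their surrogate pairs apart:

```javascript
// Reverse a string by code points (spread), not by UTF-16 units (split("")).
const reverse = (s) => [...s].reverse().join("");

console.log(reverse("Anubhav"));     // "vahbunA"
console.log(reverse("a\u{1D541}b")); // "b\u{1D541}a" — the astral "𝕁" stays intact
```

(Combining marks would still need more care; this only handles surrogate pairs.)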

Thursday, 7 April 2016

Simple async and await example in EcmaScript 2015

Async - Await example in ES2015

async function sum(a, b){
  return a+b;
}

async function main(){
  let s = 0;
  for(let i=1, j=10; i < 20; i++, j++){
    s = await sum(i, j);
    console.log(s); // 11, 13, 15, … 47
  }
}

main();


And this is pretty much as simple as it gets.

Output however is expectedly linear:


Monday, 4 April 2016

Git commit as fundamental work unit.

I am not sure how I used to code before git came. I am not sure why people coded without it for 40 years. But now git is here, and we are here to make use of it. Today I am going to discuss units of work.

Some people claim that git was created by a self-indulgent programmer who wanted to solve his own problems. I think whoever said that might be right. git is weird! But we are not discussing its weirdness either. We have a very specific thing to discuss:

git commit as a fundamental unit of work.

git has a command, commit, which lets you save your work in the history of the project. When you do a git commit you get to put your stuff safely in the eternal history. Well, let's not get that dramatic. The documentation describes git commit as:

git-commit - Record changes to the repository

git commit [-a | --interactive | --patch] [-s] [-v] [-u<mode>] [--amend]
[--dry-run] [(-c | -C | --fixup | --squash) <commit>]
[-F <file> | -m <msg>] [--reset-author] [--allow-empty]
[--allow-empty-message] [--no-verify] [-e] [--author=<author>]
[--date=<date>] [--cleanup=<mode>] [--[no-]status]
[-i | -o] [-S[<keyid>]] [--] [<file>…]

Yup, there are many things that this simple command does. So Unixy.

Let me tell you the most common form (that I use):

git commit -m "my commit message"

And voila, we are done. All we need to do to put our names in history forever is: push it. Some other day maybe.

We are going to discuss unit-of-work.

unit of work

When you type code, what do you type your code for? I am guessing that you are on a bandwagon (Agile, XP, Scrum, RAD, whatever) where you know what to achieve at the end of the typing.

Here we find a problem.

Sometimes, work is distributed across files.

Case A: C# class and interface.

Usually people put classes and interface in different files. This is a good habit. But, if you need to update a tiny detail in say three of the ten files that use that interface, you are in a whack.

What should you do?

  1. Edit each file separately and commit each separately. OR
  2. Edit all the files and commit once.

Both of these ways have their merits and demerits. And we are going to discuss them.

Edit and commit for each file.

If you create a commit for each file that is changed, you would supposedly avoid merge conflicts.

I lied; this way doesn't have any real-world benefit. Check these commits:

(screenshot: commit history with repeated commit messages)

This is the history of the project. What can you gather from it? It says “linking of pages” 5-6 times and “removed inline styling” 4 times.

What’s the direct result?

(screenshot: more commits than files changed)

You get more commits than files changed during those commits.

This should raise questions in your head:

  1. What is a commit really?
  2. Is it logical to have multiple commits that alter fewer files than the commit count?

Well, wait for it. We also have to discuss the reviewer's side of the story.

reviewing the work.

A reviewer comes to the task; these are the things that he checks for:

  1. Commit count and commit messages.
  2. File count.
  3. File changes.

Now, if a reviewer is going to review, how is he/she going to feel about the repetitive messages?

Do you think that, if there are a lot of files, he might lose track of which commits he has already reviewed and which are yet to be reviewed?

Look at the image above; it's hard to track the commits. If there were more, he/she just might give up.

Moral of the story:

Do not repeat commit messages.

Why, you may ask?

Commit history should read like the story of a project, and the journey its programmer took to reach the end.

Repetitive commit messages sound bad now. Don’t they?

Well, let's bust the argument that this practice will reduce or prevent merge conflicts.

But this will prevent the merge conflicts.

No it won’t.

So, say I am going to work on a file that you are working on. You created 10 commits and made a PR (pull request).

When that PR gets merged, I still will get the merge conflicts.

Listen, people: merge conflicts are a part of shared development.

Anyway, merge conflicts are the result of a poor work-assignment process and, in general, a bad tool.

This is where that self-indulged-programmer’s product thesis gains ground. But meh, still not going that way.

Yes, git is not a perfect tool. It could have been better. We are stuck with it. diff tools are available, use them.

And now we have to discuss other way commits can be organized.

Edit all files based on a logical requirement and commit once.

This way, you’d get all the logical changes in one commit.

Note: When I say logical I do not contrast it with illogical.

Logical (here): things that belong together, as in a class or data-structure.

An example would be removing inline styles from all your files.

Say you have 5 files from which you want to remove the inline styles.

  1. Go ahead, remove inline styles from all the files.
  2. Go ahead, create an external stylesheet.
  3. Go ahead, put the links to the external stylesheet in each page.
  4. Commit the changes.
  5. Push.
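The five steps above can be sketched as shell commands. The file names are hypothetical, and a throwaway repository is created so the sketch is self-contained; the push is only indicated, since it needs a remote:

```shell
# One logical change ("remove inline styles"), one commit.
# Work in a throwaway repo so the sketch runs anywhere.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# 1-3. Remove inline styles, create an external stylesheet, link it in each page:
echo 'body { margin: 0; }' > styles.css
for f in page1.html page2.html page3.html page4.html page5.html; do
  echo '<link rel="stylesheet" href="styles.css">' > "$f"
done

# 4. One commit covering all six touched files:
git add styles.css page1.html page2.html page3.html page4.html page5.html
git commit -q -m "Inline styles removed"

# 5. Push (only shown; it needs a remote):
# git push origin master

git show --stat HEAD
```

git show --stat then reports a single commit touching all six files: one unit of work, one commit.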

The thing is, there are going to be 6 files that are touched and one commit saying: “Inline styles removed.” When a reviewer gets to the task of reviewing, one would be able to look at the terse commit message and the number of files, and get to reviewing.

What is one going to find? Well, there are 6 files that are touched, and all contain similar patterns. Easy-peasy-lemon-squeezy.

Now, let’s discuss the questions that we saw:

  1. What is a commit really?
  2. Is it logical to have multiple commits that alter fewer files than the commit count?

2. It is logical to have a single file change across multiple commits.

Its explanation depends upon the first point.

1. What is a commit really?

A commit is a fundamental unit of work. Some units of work are:

  1. Center a div inside another div.
  2. Create a modal box using CSS3.
  3. Beautify/Uglify your code.
  4. Removing inline styles.
  5. blah blah

Each work item may update or touch many files. You have to commit once against each work item.

Workflow -> work item -> work -> commit.

That means:

Center a div -> work item -> work/changes -> commit.

Create a modal box using CSS3 -> work item -> work/changes -> commit.