# Converting to on()

jQuery 1.7 introduced new methods for handling DOM events in JavaScript: on() and off(). In this article we will focus on on().

It is intended that, from now on, on() should be used wherever you’d previously have used any of bind(), live() or delegate(). In particular, live() has already been deprecated, so usage of it in your projects should cease immediately. Converting from any of these methods to on() is easy; you just have to follow these conversion rules:

1. If you were previously using bind(), simply change the function name to on(); on() supports the same method signatures as bind().

$('.foo').bind('click', function () { alert('Hello'); });

… will now be…

$('.foo').on('click', function () { alert('Hello'); });

2. If you were previously using delegate(selector, map), where selector identified the elements whose events you wanted to handle, and map was an object which mapped event types to handlers, swap the selector and map arguments around.

$('div').delegate('a', {
    mouseover: function () { alert('Mouse Over!'); },
    mouseleave: function () { alert('Mouse Out!'); }
});

… will now be…

$('div').on({
    mouseover: function () { alert('Mouse Over!'); },
    mouseleave: function () { alert('Mouse Out!'); }
}, 'a');

3. All other uses of delegate() can be converted to on() by swapping the order of the first two parameters (the selector and the event list).

$('div').delegate('a', 'click', function () { alert('Clicked!'); });

… will now be…

$('div').on('click', 'a', function () { alert('Clicked!'); });

4. All uses of live() can be converted to use on() by inserting the selector as the second argument to on(), and calling on() on document instead of what-used-to-be-the-selector:

$('a').live('click', function () { alert('Clicked!'); });

… will now be…

$(document).on('click', 'a', function () { alert('Clicked!'); });


… and that’s all there is to it! To find out more about the new on() method, you can read its extensive documentation on the jQuery website.

# Why should(n’t) you rely on CDNs for hosting libraries?

1. The speed of the download should be quicker. CDNs work by installing endpoints in locations closer to the user, both geographically and topologically. It’s also likely that a CDN offered by the likes of Google is more optimized and beefy than anything a developer could provide themselves. All of these factors should result in lower latency and a faster download.
2. The chances of a cache hit are increased. For a file to be in the user’s cache, the user must have accessed it before. If multiple websites link to the same resource (as many websites use the resources on Google’s CDN), the chances of the user having already downloaded the file are higher than if you hosted the file yourself (so that only your website used it).
3. It allows parallel downloads. Browsers are restricted in how many requests they can make concurrently to a single domain/IP address; the limit is browser-specific. By hosting resources on different domains/IP addresses, you increase the number of resources you can download concurrently.
4. It frees up your server’s resources. Hitting a CDN rather than your server means your server has one less request to deal with. That’s less bandwidth, fewer connections, less load and less money spent!

Sounds good so far! Are there any disadvantages?

1. Hosting files on an additional domain requires an additional DNS lookup to resolve it, which means a request to a DNS server. It’s worth noting that DNS entries are cached, so this lookup won’t hit you every time. Whether the speed increase of the quicker download outweighs this additional delay will vary.
2. CDNs do go down. Two outages spring to mind that had a major effect on websites relying on them: the MathJax CDN went down a few weeks ago, and the Google CDN was unavailable for some users a few months ago. If your website relies on the resources on a CDN, your website goes down with it.

It just so happens you can do something about the second disadvantage. It’s possible (and easy!) to detect whether the CDN you’re using is unavailable, and if it is, revert to your own copy of the resource. There’s no speed penalty using this approach if the CDN is working fine.

To do this, place an additional <script> tag after the <script> tag referencing the CDN. Inside it, check whether the resource was loaded (in the example below, we check that jQuery is available). If it was, you don’t need to do anything. If it wasn’t, simply load your own copy of the resource:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script>
if (typeof window.jQuery === "undefined") {
document.write('<script src="/assets/js/jquery-1.7.1.min.js"><\/script>');
}
</script>


For further reading, there are a number of Stack Overflow questions which address these points.

# The use of, and reasons behind, “use strict”

ECMAScript 5 introduces a new mode of execution called strict mode.

As its name may suggest, when enabled it enforces stricter checks on the executing code, to reduce the number of silent errors that can easily be introduced in JavaScript. Instead of failing silently, the code will throw various types of errors, such as ReferenceError or TypeError. This has the huge benefit of making common programming errors much easier to notice (as they throw a dirty great red line in your console), rather than failing silently and leaving you, as a developer, to spend hours tracking weird and wonderful bugs down to a stupid typo. Additionally, strict mode prevents usage which has been deemed bad practice over the years, encouraging better and more eloquent code.

The beauty of enabling strict mode is that it is entirely backwards compatible and gives the developer great flexibility over where to enable it. Strict mode can be enabled either at script level or at function level, by adding the "use strict" statement at the top of the script or function, for example:

"use strict"; // enable strict mode on a script level.

function foo() {

}

var bar = 4;


Because the statement "use strict"; by itself is perfectly valid JavaScript (i.e. it initializes a string, but does nothing with it), backwards compatibility is maintained. Your strict-mode-capable script will still work in all the old versions of IE, albeit in non-strict mode.
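Function-level strict mode is handy when you can’t (or don’t want to) make a whole script strict. A minimal sketch of the difference, with function names of my own choosing:

```javascript
function strictFn() {
    "use strict"; // strict mode applies only inside this function

    try {
        undeclaredVariable = 1; // throws a ReferenceError in strict mode
        return "no error";
    } catch (e) {
        return e.name;
    }
}

function sloppyFn() {
    try {
        anotherUndeclared = 2; // silently creates a global in non-strict code
        return typeof anotherUndeclared;
    } catch (e) {
        return e.name; // only reached if the surrounding script is itself strict
    }
}
```

Because the directive sits inside strictFn(), the rest of the script (including sloppyFn()) keeps its existing non-strict behaviour.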

The changes in behaviour introduced in strict mode can be perused in all their dryness in the ES5 specification; however, the changes I’m most excited about are as follows:

• Assignment of an undeclared variable will throw a ReferenceError rather than create a global variable implicitly.

foo = 4; // ReferenceError is thrown

• Invalid assignments to object attributes throw a TypeError rather than failing silently. An assignment may be invalid due to the property being un-[[Writable]] or the object being un-[[Extensible]].

var obj = Object.create(Object.prototype, { // create an object with a non-writable "x" attribute
    x: {
        writable: false,
        value: "test"
    }
});

obj.x = "changed!"; // Throws a TypeError

• Declaring an attribute multiple times via object literal syntax, or using the same argument name more than once, throws a SyntaxError.

var obj = {
foo: "bar",
foo: "baz" // SyntaxError is thrown
};

• Using the bad parts of JavaScript, such as with, or accessing the caller, callee or arguments attributes of functions, throws errors (with throws a SyntaxError; the remainder throw a TypeError).
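A quick sketch of both cases. Since with is rejected at parse time in strict code, the only way to demonstrate it from running code is to smuggle it in via eval:

```javascript
// `with` in strict code is caught when the code is parsed,
// so we feed it to eval to observe the SyntaxError:
function tryWith() {
    try {
        eval('"use strict"; with ({ x: 1 }) { x; }');
        return "parsed";
    } catch (e) {
        return e.name; // "SyntaxError"
    }
}

// Accessing arguments.callee inside a strict function throws a TypeError:
function tryCallee() {
    "use strict";
    try {
        return typeof arguments.callee;
    } catch (e) {
        return e.name; // "TypeError"
    }
}
```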

For further reading on the subject of strict mode, see the MDC article on strict mode.

# The risk of JSON hijacking

JSON hijacking is an exploit which has not had the publicity it perhaps deserves. It is a real risk to website security, as much as CSRF or XSS.

It’s easier to understand how JSON hijacking works if you have a basic understanding of the differences between JSON and JSONP.

Traditionally, to utilize a JSON response we need to make a request via XHR, retrieve the responseText, and then parse it using JSON.parse() (or similar). As XHR requests are restricted by the same origin policy (SOP), this is only relevant if we’re making the request to the same domain we’re on; and because we (usually) control that domain, there is little risk.

JSONP requests (made by inserting <script> tags) are not restricted by the SOP. The saving grace here is that a JSON response normally has no effect when evaluated; i.e. there’s no problem retrieving a JSON response via this method, but there’s no way to capture or utilize the response:

<script src="http://www.remote-server.com/get-json.php">
{
"user_id": 1234
}
</script>


However, clever folks discovered that, in some browsers, you can cause an effect to happen:

1. You can override the Array constructor to do something with the array:

Array = function () {
    for (var i = 0; i < this.length; i++) {
        // each element of the evaluated array is readable here
    }
};

2. To access object attributes, you can define a setter on Object.prototype to capture the setting of that attribute:

Object.prototype.__defineSetter__("user_id", function (value) {
alert("user_id is being set to: " + value);
});


Bear in mind that neither of these exploits is cross-browser. However, this is a real threat that has been used to target the likes of Google and Twitter.

So how exactly does this work?

1. An attacker injects the above snippet(s) into a website you visit (perhaps a website he owns).
2. When the request for JSON from the third-party domain is made, any cookies you have for that domain are sent with the request; if you’ve got a session on that website, the website is none the wiser that it’s not actually you requesting that page.
3. When the response is returned, the above snippets are executed, and the attacker can manipulate/steal the response; any personal/important details contained in there are now his.
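The setter technique from step 3 can be sketched as follows. Note this is only a simulation: modern engines have long since patched the hole, and evaluating an object literal no longer triggers inherited setters, so the snippet fires the setter with an explicit assignment rather than by evaluating a real response.

```javascript
var captured = null;

// The attacker's setter, planted before the <script>-based request:
Object.prototype.__defineSetter__("user_id", function (value) {
    captured = value; // the "stolen" value
});

// Simulate the evaluated response assigning the property:
var response = {};
response.user_id = 1234;

// captured now holds 1234, without the victim site's cooperation
```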

Fortunately, there are a number of ways to fix this;

1. Rigorously check for the X-Requested-With header to verify the request came via XHR (a <script> tag cannot add custom headers).
2. Add breaking code to the JSON response.
• Facebook leads the JSON response with for (;;); (an infinite loop).
• Google leads the JSON response with throw 1; (throwing an error).

Because of these additions, when the browser evaluates the response, the object/array declarations are never reached. Obviously, in your own code, you’ll need to strip the leading mush out before you attempt to parse the JSON:

var json = JSON.parse(this.responseText.slice("for (;;);".length));


or

var json = JSON.parse(this.responseText.slice("throw 1;".length));
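Both cases can be wrapped in a small helper; parseGuardedJSON is a name of my own invention:

```javascript
// Strip a known guard prefix (if present) before parsing the JSON.
function parseGuardedJSON(text, guard) {
    if (text.indexOf(guard) === 0) {
        text = text.slice(guard.length);
    }
    return JSON.parse(text);
}

var data = parseGuardedJSON('for (;;);{"user_id":1234}', "for (;;);");
// data.user_id === 1234
```

Checking for the prefix (rather than blindly slicing) means the same helper also copes with responses that carry no guard at all.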


For further reading, you may be interested in the following articles: JSON Hijacking on http://haacked.com, and JSON Hijacking on http://thespanner.co.uk.

# Checking for null or undefined, and introducing typeof === undefined

It can be difficult to decide whether you should be checking for null or undefined in JavaScript; on the face of things it may seem that they do the same job.

Fortunately, there’s quite an easy rule to follow here: unless you specifically set something to null, you should most likely be checking for undefined.

As ever there is an exception, and here it is: document.getElementById() returns null if no element could be found.

Let’s go through some situations where things will be undefined:

1. Checking a variable’s default value. An uninitialized variable will be undefined. A caveat though: if a variable is undeclared, referencing it will throw an error.

var foo; // foo === undefined

2. The default return value of a function.

function foo() { }         // foo() === undefined
function bar() { return; } // bar() === undefined


3. The value of an unspecified attribute of an object.

var obj = {}; // obj.foo === undefined

4. The value of an out-of-bounds array index.

var array = []; // array[0] === undefined


However, should you really be checking against undefined? Are there any caveats of doing so? It turns out there are!

1. As mentioned above, comparing an undeclared variable against undefined will throw a ReferenceError: foo is not defined.
2. undefined is a property on the global object. Prior to ES5, this value was writable, allowing problems like this:

var foo;
undefined = 42;
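The typeof check stays reliable even when undefined has been tampered with (or shadowed, as some old code accidentally did via an unused function parameter); a sketch:

```javascript
// A function whose second parameter shadows undefined:
function check(value, undefined) {
    // direct comparison uses the shadowed, possibly-bogus undefined:
    var direct = (value === undefined);
    // typeof doesn't care what the identifier undefined currently holds:
    var viaTypeof = (typeof value === "undefined");
    return { direct: direct, viaTypeof: viaTypeof };
}

var result = check(void 0, 42); // shadow undefined with 42
// result.direct === false, result.viaTypeof === true
```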


To avoid these caveats, it turns out you should use the typeof operator instead, and compare the result against the string "undefined":

var foo;

if (typeof foo === "undefined") {
    // safe even if foo was never declared, and immune to
    // undefined having been overwritten
}