
Interaction is Key: Progressive Enhancement and JavaScript

Progressive enhancement is desirable but difficult, especially for JavaScript-heavy websites. With the right design process and capable tools, we can build rich interactions.

Progressive enhancement is a process in web development that starts with a rather simple version of a web site. This version provides the core features to the user. Then you enhance the site with more demanding technologies, step by step. Web clients that do not support these technologies simply render the simpler version instead of the enhanced one.

Robust, server-rendered web sites

The first version has low technical requirements. It supports all devices with web access. It uses a subset of the existing web technologies, and uses them properly. This makes sure all users can access the content and perform the basic tasks.

For example, you write a site in plain HTML and CSS. You upload it to a web server that serves it statically. This rather simple site loads fast and works on all devices.

You may add sophisticated server-side logic without impairing interoperability and accessibility. The backend may be a content management system, an e-commerce shop or a search engine. You can choose the server technology stack freely. The server is – more or less – under your control. Adding complexity on the server does not raise the bar for the client because in the end it’s just HTML served over HTTP.

This paradigm sees web sites as a collection of hypertext documents that consist mostly of text, links, forms and other media with text fallback. For any non-trivial user interaction, a server roundtrip is necessary to generate a new HTML page.

Client-side web applications

Ten years ago, the possibilities of front-end technologies skyrocketed. A different paradigm slowly took over: the web as a platform for applications that run in the browser. Today we have HTML5, complex CSS layout techniques, high-performance JavaScript engines and powerful device APIs. The browser is an open runtime environment with capabilities similar to those of closed, vendor-specific platforms.

Used properly, these new technologies can make a web site more dynamic and improve the overall user experience. They offer a range of features that server-side logic cannot provide, for example access to the user’s location in real time, or meaningful offline functionality.
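To illustrate, here is a minimal sketch of layering such a capability on top of a working baseline – the Geolocation API is standard, but showNearbyPlaces is a hypothetical function of your own application:

  // Enhance the page with the user's location only if the API is available.
  if ('geolocation' in navigator) {
    navigator.geolocation.getCurrentPosition(
      function (position) {
        // Hypothetical function that uses the coordinates for the enhancement.
        showNearbyPlaces(position.coords.latitude, position.coords.longitude);
      },
      function (error) {
        // The user declined or the lookup failed – the core content still works.
      }
    );
  }

Without geolocation support, the server-rendered content remains fully usable; the enhancement simply never kicks in.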

Progressive enhancement embraces technical progress and acknowledges the benefits for users. But using the latest front-end technologies should not exclude users whose devices have limited standards support.

Progressive enhancement and JavaScript

The discussion of progressive enhancement is more than a decade old, and the case for progressive enhancement is still strong. I’ve written several articles on the topic as well, for example in 2005 and in 2010 (both in German).

In the last couple of years, the discussion has revolved around the proper usage of JavaScript again. In 2005, the layer model of front-end technologies put JavaScript in its place: it’s just the icing on your cake, the chocolate coating on your peanuts, the topping on your ice cream, the jelly on your peanut butter (I’m running out of bad metaphors here). You should never depend on JavaScript, we were told.

Back to 2015. In the age of JavaScript frameworks and so-called single-page web applications, it’s a sad fact that almost all web sites require JavaScript and particular JavaScript features to work properly. When JavaScript fails to download or execute, the user often sees nothing at all.

Lately there has been a controversy between authors of JavaScript client-side frameworks and proponents of progressive enhancement. My impression is that the gap between these “camps” just gets bigger over time. Sharing their experiences does not seem to bring them closer.

Reliability and making assumptions

A recent article by Aaron Gustafson, a long-time author on progressive enhancement, is called Interaction is an Enhancement. The core argument is – once again – that the client is not under the developer’s control. The browsers out there are a hostile environment for your JavaScript code. There are plenty of ways JavaScript can fail that you might not even be aware of.

There are two things that bother me about this article. The key quote is:

The fact is that you can’t absolutely rely on the availability of any specific technology when it comes to delivering your website to the world.

I’m trying to find the practical implications of this viewpoint.

It is true that you can’t “absolutely rely” on front-end technologies. Yet this statement is at odds with our daily development work. We do rely on specific technologies to build complex web sites. And that’s fine to some degree.

Writing HTML, CSS and JavaScript bakes in implicit assumptions about the network, the browser and the device, not to mention the user. For example, any trivial client-side JavaScript relies on basic ECMAScript and HTML DOM support. Think about how many assumptions are made in widely-deployed front-end libraries like Bootstrap and jQuery. You need to be aware of these dependencies, but they are not necessarily a problem.

As web developers, we need to rely on specific technologies – that’s what the web standards movement fought for. In practice, we do not treat each piece of code as an enhancement step. First we establish a baseline. Then we assess where enhancement steps make sense. We assess when to make implicit assumptions – and when to introduce explicit checks.
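As a small sketch of that distinction – the selector and the loadSection function are made up for illustration: the first line silently assumes basic DOM support, while the enhancement behind it is guarded by an explicit check for the History API.

  // Implicit assumption: basic ECMAScript and DOM support are present.
  var links = document.querySelectorAll('a[data-enhance]');

  // Explicit check: only add client-side navigation if the History API exists;
  // otherwise the links keep working as ordinary page loads.
  if (window.history && typeof window.history.pushState === 'function') {
    for (var i = 0; i < links.length; i++) {
      links[i].addEventListener('click', function (event) {
        event.preventDefault();
        history.pushState(null, '', this.href);
        loadSection(this.href); // hypothetical function that fetches the content
      });
    }
  }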

* { reliability: relative }

Statements like “you can’t count on anything on the web, it’s just messy!” are abstract and of little help when writing robust code. JavaScript developers have come up with several techniques to make sites robust. An incomplete list:

  • Manual testing with multiple browsers, devices, connections
  • Unit and integration tests, continuous integration with real devices
  • Feature detection, for example with Modernizr
  • Polyfills to create a level playing field
  • Encapsulated code that does not interfere with other code, e.g. few global variables, module systems, not changing core prototypes (except for polyfills)
  • Abstraction libraries like Lodash and jQuery to even out browser differences
  • Strict mode (introduced in ECMAScript 5), which forbids error-prone coding practices
  • Compilers that transform JavaScript into a more widely supported version, e.g. Babel, which compiles ECMAScript 6 to ECMAScript 5
  • Linters that check for potential errors and compatibility issues
  • Last but not least: reducing the code complexity, finding simpler solutions, reducing the amount of code, reducing the JavaScript usage

These techniques don’t give us absolute reliability, but relative reliability. Removing JavaScript from the critical rendering path entirely increases the reliability further, but this is easier said than done.
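Two of the techniques from the list in a nutshell – a sketch rather than production code: an explicit feature check before using localStorage, and a polyfill that fills in Array.prototype.forEach where it is missing.

  // Feature detection: only use localStorage if it is present and writable
  // (it may throw in private browsing modes or be disabled entirely).
  function storageAvailable() {
    try {
      localStorage.setItem('__test__', '1');
      localStorage.removeItem('__test__');
      return true;
    } catch (error) {
      return false;
    }
  }

  // Polyfill: provide Array.prototype.forEach for pre-ECMAScript-5 engines.
  if (!Array.prototype.forEach) {
    Array.prototype.forEach = function (callback, thisArg) {
      for (var i = 0; i < this.length; i++) {
        if (i in this) {
          callback.call(thisArg, this[i], i, this);
        }
      }
    };
  }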

When interaction is key

What bothers me most is the title of the article, “Interaction is an enhancement”. Titles in web publishing are usually catchy and simplistic, but this title sums up the article quite well.

As a negative example, Aaron Gustafson mentions the Gawker Media online magazines (e.g. Gawker.com, Kotaku, Jezebel). They provide articles, basically structured text with images. Yet these sites used to be fragile single-page applications with JavaScript as the single point of failure.

Is it technically possible to serve an article to the browser without loading 1 MB of complex JavaScript? Isn’t server-rendered HTML more robust? Hell yes.

But is interaction an enhancement? Not for these web sites, and not for a great share of web sites that are built today.

It’s easy to pick on sites like Gawker and Bustle. We should at least try to understand what they are aiming at and why they built single-page applications. For these sites, interaction is not an enhancement, it’s the fundamental feature, the core functionality.

Delivering text to the user in the most robust and performant way is not their main priority. What makes these sites unique is a slick interface to browse and discover content, to subscribe to topics and authors, to interact with the authors and other readers. The business behind these sites runs on user engagement and retention. In short: interaction is key.

One might question these goals and prioritization. One might argue that the user’s main interest is to read articles without being forced to log in, subscribe, comment, vote, like and share.

Probably that’s right. But blaming the developers isn’t appropriate. These publishing businesses are subject to economic constraints that affect their conceptual and technical decisions. Again, one might criticize this business model.

Putting progressive enhancement in practice

In my day job, I have built several JavaScript single-page applications as well as web sites with core features requiring JavaScript. This means I’ve experienced all the pleasure and pain of this approach.

I’m working for 9elements, a software and design agency. Most of our customers have a tight budget and specific goals. They choose us because we fuse design, function, interaction and performance.

I have read articles on progressive enhancement for more than 10 years with great interest. Still, they have helped little in my daily work developing web sites that need to provide an excellent user experience.

What did help was the recent “universal JavaScript” movement: Writing JavaScript code that runs on the server and the client, on Node.js and in the browser. This way it’s possible to generate HTML on the server and add client-side behavior on top of it easily. Interestingly, the driving force behind this movement was not progressive enhancement, but the need for a decent start-up performance.
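A minimal sketch of the idea – this is not the API of any particular framework; Express is used only as an example server, and loadArticle is a hypothetical data-access function:

  // article-view.js – a template function shared by Node.js and the browser.
  function renderArticle(article) {
    return '<article><h1>' + article.title + '</h1>' +
           '<p>' + article.body + '</p></article>';
  }
  if (typeof module !== 'undefined') {
    module.exports = renderArticle;
  }

  // server.js – responds with full HTML so the page works without JavaScript.
  var express = require('express');
  var renderArticle = require('./article-view');
  var app = express();
  app.get('/articles/:id', function (request, response) {
    var article = loadArticle(request.params.id); // hypothetical data access
    response.send('<!DOCTYPE html><title>' + article.title + '</title>' +
                  renderArticle(article) +
                  '<script src="/client.js"></script>');
  });
  app.listen(3000);

  // client.js – picks up the server-rendered markup and reuses renderArticle
  // for subsequent in-page updates, instead of rendering from scratch.

The key point is that the same view code produces the initial server-rendered HTML and the later client-side updates, so the robust baseline and the enhanced behavior do not have to be written twice.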

GED VIZ: A field report on progressive enhancement in data visualization

In the last few years, I have worked on data visualization and interactive storytelling. I’d like to pick an example client project: GED VIZ from the Bertelsmann Foundation, an open-source tool for journalists to tell stories about data in a new way. GED VIZ was launched in 2013.

In this project, we went to great lengths to implement progressive enhancement. We decided to be compatible with browsers that have little or zero support for essential visualization techniques like SVG.

We did not cover the case that JavaScript is unavailable or fails for reasons beyond our control. We did not actively support blind or visually impaired users. Still, there are several “versions” of the site depending on the abilities of the browser.

Each enhancement step brings more interactivity, a richer visualization and better performance. IE 6 and 7 get a non-interactive, server-rendered visualization. IE 8 gets a slow and simple but interactive VML visualization. IE 9 and other modern browsers get the full SVG version.
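A simplified sketch of how such tiering can be decided on the client – this is not the actual GED VIZ code, and the three render functions are hypothetical:

  // Classic SVG detection: can the browser create a working SVG element?
  function supportsSvg() {
    return !!(document.createElementNS &&
      document.createElementNS('http://www.w3.org/2000/svg', 'svg').createSVGRect);
  }

  // VML only exists in old Internet Explorer (document.namespaces).
  function supportsVml() {
    return typeof document.namespaces !== 'undefined';
  }

  if (supportsSvg()) {
    renderSvgChart();          // full interactive SVG version (IE 9+ and others)
  } else if (supportsVml()) {
    renderVmlChart();          // slower interactive VML version (IE 8)
  } else {
    showServerRenderedImage(); // static server-rendered fallback (IE 6 and 7)
  }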

We spent around 20% of the time creating a meaningful visualization for old browsers. A few percent of our users were still on old versions of IE, and we did not want to lose them.

“At least that’s something!”

Let’s recall why we built GED VIZ. Interactive data visualization is a unique way to present information. It makes data meaningful, tangible, accessible. It is not merely an “enhancement” over naked numbers in a table. It is a distinct thing, a new quality.

So did we reach these goals for the users of old browsers? Looking back from today, I think we missed the mark. The technical limits of IE 6-8 were too pressing. Despite all efforts, the user experience in these browsers is miserable. What we achieved is to deliver at least something to old browsers. But the result has little to do with what we were trying to achieve.

Data visualization and interactive storytelling are somewhat special. But there are plenty of cases where particular front-end technologies like JavaScript and SVG are necessary for the core functionality. You can’t narrow down all web experiences to HTML and HTTP. The web is more colorful than that, and that’s great.

Overcoming difficulties with the right concept and the right tools

What I miss in recent calls for progressive enhancement are two simple truisms: “it depends” and “it’s complicated”.

Jeremy Keith is one of the most renowned advocates of progressive enhancement. In a recent article, he argues that progressive enhancement feels difficult because we’re not yet used to developing web sites this way. With every project, it will become easier and easier, he predicts:

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

[…]

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

I partly agree with this. Yes, our top-down design process is broken. Once the product is designed or even fully developed, we think about adding “fallbacks”. The cost of interoperability and accessibility is much higher when it’s an afterthought.

But I do think browser technologies and frameworks are crucial for progressive enhancement. Tom Dale, co-creator of the Ember JavaScript framework, noted in a talk: we need tools – especially JavaScript tools – that have progressive enhancement built in for free. It needs to get easier for individual web developers to “enhance progressively”. In JavaScript land, Rendr, universal React and Ember FastBoot are good starting points. But we are not there yet; the tools are still in their infancy.

I’m glad Jeremy Keith admits that progressive enhancement is hard work. But essentially he is saying: We put the cart before the horse, that’s why progressive enhancement seems so difficult.

In my experience, changing your mindset won’t change the underlying technical challenges.

Skyscrapers without elevators

We have been talking about progressive enhancement since the beginning of the web. Still, only a few notable sites take advantage of the concept. Why is that?

My explanation is that progressive enhancement gets even harder as the capabilities of the web grow. Web devices are more diverse than ever before. Web applications are more complex than ever before. JavaScript is more versatile and powerful than ever before.

Today’s web sites are skyscrapers, just without elevators. There’s a race to build higher and higher. Each floor is a front-end technology, an HTML, CSS or JavaScript feature. With each floor, it’s harder to make the skyscraper accessible for everyone.

That does not mean we should not try our best. But it explains why only a few teams manage to cut the Gordian knot, given the technologies and limited budgets they have.

Achieving robustness

As supporters of the open web, we should continue to lobby for web standards support and more reliable front-end technology. We should stop telling people to accept failure, fragility, inconsistency and loss of control. Eventually, this burden drives developers toward proprietary software ecosystems and application runtimes, which are the biggest threat to the web today.

Ubiquity is a unique strength of the web, but error-prone devices, incapable browsers and unreliable front-end technologies are not. Let’s fix these issues.

More on JavaScript and progressive enhancement

Please see my other articles on the topic:

For a full list of references, see my bookmarks tagged with progressive enhancement.

To learn more about GED VIZ, the project I mentioned, see these articles and talks:

I’d love to hear your feedback! Send an email to zapperlott@gmail.com or message me on Twitter: @molily.