Semantics
Image by Luca Cremonini.
Does “Web 2.0” Mean Anything?
Although the term’s use is impressionistic, inconsistent and overwrought, there is the kernel of an idea there. The Wikipedia article on Web 2.0 gives a useful and concise overview. Apparently the term was coined by O’Reilly Media for the Web 2.0 conference back in 2004. Founder Tim O’Reilly discusses the development in his 2005 piece “What is Web 2.0” (hinting that one of the answers is “less punctuation;” another is less “e”).
For my money Web 2.0 refers to two main themes:
- That the web has become a development platform and the resulting applications are extremely social (meaning collaborative) in nature because the platform is an open network rather than an operating system, and they reside on servers not workstations.
- The products built on this base are increasingly application-like in their form and function (i.e. “rich internet applications” like Gmail and Basecamp, but the concept extends to cover blogging tools and wiki software).
Web 2.0 does not define a set of standards about the future development of the web. It’s an idea that is much more thematic than systematic — understandably this has resulted in no end of arguing about whether the term means anything at all. Web standards, however, are definitely related: the emergence of a standards-based approach to constructing the web is one of the developments that has made “Web 2.0” applications possible in the first place. Why? Two reasons: complexity and the growing importance of the user interface.
The Bad Old Days
In the early days of the Internet, a single language was used to “describe” (construct) web documents: HTML. From the mid-1990s the browser manufacturers competed fiercely to add features to HTML to improve the look of web pages. This began with fonts and colors, spread to layout techniques using HTML tables and better image handling (more formats, image maps, etc.), and soon included scripting capabilities (via JavaScript, a second language) to provide interactive or “dynamic” effects for menus, links, etc. Competition amongst browser manufacturers (the big two were Microsoft and Netscape) soon meant that web designers/developers had to build two of everything, since each software company was doing similar things in different ways and the user base was closely split between Netscape Navigator and Internet Explorer.
Style vs. Structure
The development of Web Standards was the first sign of sanity. The push for standards emerged in the late 1990s as a set of best practices for the use of HTML. This resulted in the decision to abandon some of the “enhanced” elements from earlier versions in favour of a stricter version that was more structure-conscious: XHTML (eXtensible HTML). The development of a third technology for web documents, CSS (Cascading Style Sheets), was closely related. CSS was not immediately adopted by browser makers, so for a while it existed more as a recommendation than as a reality. But it could do a lot more than the enhanced HTML features that Netscape and Microsoft had been packing into their browsers’ rendering engines. CSS was all about separating the way that a document is presented (i.e. its styling) from the content that it contained (i.e. its structure). From a web standards perspective, structure should be (X)HTML’s job, and styling would belong to CSS.
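To make the split concrete, here is a minimal sketch (the class name, font and colour are invented for illustration) of the same heading written both ways:

```html
<!-- Old-school presentational markup: the styling is baked
     into the document itself -->
<font face="Verdana" color="#cc0000" size="4"><b>Latest News</b></font>

<!-- Standards-based equivalent: the markup describes structure only -->
<h2 class="news">Latest News</h2>

<!-- ...and the presentation lives in a stylesheet -->
<style type="text/css">
  h2.news {
    font-family: Verdana, sans-serif;
    color: #cc0000;
  }
</style>
```

The second version renders the same, but the document now says *what* the text is (a heading) rather than *how* it should look.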
CSS was therefore conceived with an eye on the problems that “old school” HTML was creating for developers and designers who wanted to push the boundaries of the web experience. CSS was designed to make presentation (layout, colours, hover effects, typography) scalable and much more flexible than HTML was capable of. A major hurdle was to convince browser manufacturers to include it in their browsers. A lot of the early ground was covered by Eric Meyer and his team at Netscape. The tipping point came when Microsoft developed their first serious CSS-capable browser: Internet Explorer 5 for the Macintosh, led by Technorati’s Tantek Çelik. Internet Explorer 6 for Windows provided better support than previous versions, but lagged behind due to several deviations from the standard and an astonishing number of bugs. Then Firefox came along and offered a truly popular alternative. Firefox 1 supported CSS very well, better than any version of IE at the time, and set new expectations for the browsing experience. (Internet Explorer 7, released in late 2006, is the best Microsoft browser yet in terms of CSS.)
Separation of Concerns
By the early 2000s there was a rapidly expanding user base with CSS-rendering browsers. This is where things really got moving for the Web Standards movement, whose protagonists saw the potential for the web to leap forward. I would bet that the increasing popularity of server-side scripting languages like PHP for managing web sites was one of the motivators. Better server-side technologies made it possible for individuals and organisations to automate the production of HTML for entire sites, freeing up resources for design and application-building (e.g. a web interface to a database, an online store, a community portal, an online newspaper). CSS offered not only superior design capabilities, but also a way of managing design concerns separately from page code and application logic.
The move from web “pages” to web “applications” quickly multiplied the different technical “layers” needed to bring a web product to market:
- Information structure and markup (HTML and then XML, XSLT)
- Presentation and interface design (CSS, graphics, Flash)
- Application behaviours (JavaScript, DOM scripting, AJAX techniques and Flash)
- Dynamic content and server-side programming (Perl, Java, PHP, SQL, Ruby)
To prepare a product for release and to develop it beyond the first version it became necessary to enforce a “separation of concerns” between the app’s different layers. Sometimes a single person is responsible for multiple layers, sometimes an entire team is tasked with just one. Regardless, any web site aimed at interactivity or application-ness will quickly hit a point where layered development is the only way forward. Changing the company logo is a basic example of how necessary this evolution was. By centralising presentational code and linking it out to an array of documents, CSS dramatically reduced the time taken to effect site-wide changes.
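The logo example can be sketched as follows (the file names and selector are invented for illustration):

```css
/* style.css -- linked from every page on the site with:
   <link rel="stylesheet" type="text/css" href="style.css" />

   Swapping the company logo site-wide means editing this one
   rule rather than every individual HTML document. */
#logo {
  background: url(images/logo.gif) no-repeat;
  width: 180px;  /* placeholder dimensions */
  height: 60px;
}
```

Point the `url()` at a new image file and every page on the site picks up the change the next time it loads.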
It is no longer possible to build a successful web app without observing at least some kind of minimal separation of concerns. Old-style HTML that uses tables for layout (i.e. anti-web-standards markup) can still turn up in a “Web 2.0” web application (e.g. a Google results page somewhat inexplicably delivers content in HTML tables styled with font tags), but using these techniques alone for layout it would be impossible to provide the user experience that many web apps are striving for these days. The Google example is actually a hybrid design that makes liberal use of CSS, though not for layout (the font tags have me truly stumped, though).
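For contrast, here is a minimal sketch (the ids are invented for illustration) of a two-column layout done both ways:

```html
<!-- Table-based layout: structural markup abused for presentation -->
<table width="100%">
  <tr>
    <td width="25%">navigation</td>
    <td>content</td>
  </tr>
</table>

<!-- CSS-based layout: the markup stays structural -->
<div id="nav">navigation</div>
<div id="content">content</div>
<style type="text/css">
  #nav     { float: left; width: 25%; }
  #content { margin-left: 27%; }
</style>
```

In the second version the columns can be restyled, reordered or dropped entirely for print and mobile without touching the markup, which is exactly what table layouts can’t do.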
Web Standards Manage Complexity
Due to the complexity of modern web sites and the increased attention to building better interfaces, a web standards approach has been subsumed into the development process in many cases. It’s simply much harder to create an application-like interface without using CSS, and changing designs has become as commonplace as seasonal fashion updates. A case in point is the enormously popular WordPress publishing system (used on this site), built in PHP with MySQL databases and producing semantically valid XHTML out of the box. The fact that I can change or write my own WordPress theme is only possible because of CSS. The same goes for the program’s application-like administration interface, which shows an increasing amount of Web 2.0-ishness with each revision. Check out the K2 front-end template for an extreme example.
Observing web standards also has a number of beneficial effects such as better SEO (Search Engine Optimization), faster load times, better printable pages, and accessibility to non-typical web browsers (known generally as User Agents) such as screen readers, mobile devices and text browsers. The web standards approach has also introduced the concept of graceful degradation: the idea that the web experience should be sensitive to the capabilities of the client (e.g. understands CSS, has Flash, has JavaScript enabled), rather than put users at the mercy of whatever methods were used to build the site. Graceful degradation aims to avoid the either/or proposition of gaining access or not based on your choice of browser.
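The decision at the heart of graceful degradation can be sketched in a few lines of JavaScript. This is an illustration, not code from any real site, and the capability names are invented: serve the richest experience the user agent actually supports, and let everyone else fall back to plain, working HTML.

```javascript
// A minimal sketch of the graceful-degradation decision.
// "agent" describes what the client can do; the property
// names here are illustrative only.
function chooseExperience(agent) {
  if (agent.javascript && agent.xmlHttpRequest) {
    return "rich";   // full application-style interface
  }
  if (agent.css) {
    return "styled"; // styled pages, no in-page updating
  }
  return "basic";    // text browsers, screen readers, old mobiles
}
```

The point is that “basic” is still a complete, usable experience rather than a locked door, which is exactly the either/or proposition the approach is meant to avoid.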
The standards-based approach in markup and presentation is not the only development that has enabled the Web 2.0 vision to take root. Web service APIs (Application Programming Interfaces) built on approaches like REST and SOAP have been vital to the development of more complex server interfaces. JavaScript has matured substantially and is central to improving the user experience through behaviours that update the page without reloading the browser window. RSS (Rich Site Summary or Really Simple Syndication, depending on who you ask) has provided a much richer format for the description of content and the transmission of notifications about it than was previously possible. RDF and microformats have similarly made web documents more usable as machine-readable resources, allowing objects such as address book cards, calendar events and citation information to be embedded in them.
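Microformats are worth a quick illustration. An hCard, for instance, marks up ordinary XHTML with agreed class names (`vcard`, `fn`, `org`, `url` and so on) so that software can lift a contact card straight out of the page. The person and company below are invented:

```html
<!-- hCard: a machine-readable address book entry hiding
     in plain, styleable XHTML -->
<div class="vcard">
  <span class="fn">Jane Example</span> of
  <span class="org">Example Widgets Ltd.</span>,
  <a class="url" href="http://example.com/">example.com</a>
</div>
```

To a browser it’s just text and a link; to an hCard-aware tool it’s a contact that can be imported into an address book.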
For me Web 2.0 means, more than anything, the possibility of a foundation for the kinds of experiences that are promised in extravagant visions for the web’s future. This foundation does not really exist, but a set of complementary technologies has emerged that suggests it might be achievable to standardise on some of its most useful ideas. The web standards approach is one such technology that has made huge inroads in dealing with the challenge of increased complexity. My biggest disappointment in Web 2.0 is with the name: “2.0” is a misleading way of characterising today’s development, because as a metaphor for software versioning it conflates too many orders of magnitude. The web itself is nowhere near commensurate with “one big application.” I like Tim Berners-Lee’s concept of the Semantic Web and “one big formula” much better, but I can’t talk about it intelligently so I guess I’m done.
How’d I do? Come on, share your thoughts. (Comments are back *ducks*.)