Flash is no longer an essential plugin on the desktop/laptop Web. It is non-existent on mobile. It is no longer the default way to consume video on the Web.
It didn't use to be that way, and I'm surprised by how quickly Flash has fallen into irrelevance. For a long time, Flash had better support in browsers than CSS. SVG paled in comparison even in the realm of vector graphics. Flash's control of the Web looked impregnable.
How do you take on an immovable behemoth? It starts with a single blog post rant.
The rise of mobile in the form of smartphones and tablets came with the declaration that Flash was not a welcome participant. Coinciding with the launch of the Apple iPad, Apple announced in a rare Steve Jobs blog post that Flash would not be allowed on iOS. The fallout from that decision destroyed Flash's incontestable ubiquity.
This was a business decision about the long-term viability of Flash on platforms where performance and battery life are primary concerns. Adobe had pushed hard to make Flash all things to all people, and it went too far, every step making it less viable on the fast-approaching lower-spec mobile platforms. In effect, this was a declaration of no confidence in Flash, and in Adobe.
Within a year there was no recognisable trace of Flash left in Android either. The days when Flash support was a deal-breaker feature were over.
Similar to Java retreating from the browser and emerging as a viable server-side platform, Flash tried recasting itself into the platform of choice for developing native mobile apps. But today Flash is fading into obscurity from even that avenue.
In the desktop/laptop Web Flash has lost the prominence it once had. The binary Flash plugin is no longer an essential part of the Web browsing experience.
Flash grew as a sandboxed environment inside the browser page, delivering rich content, engaging interactivity and cutting-edge user interfaces. It prided itself on being a platform for building Web applications that couldn't be built with HTML and CSS. Scrollable and zoomable maps, video clips, animated vector graphics, games, streaming content: Flash enabled all that.
But it was video-on-demand that became Flash's killer feature. The single page app didn't take off (unfortunately loading screens did, much to the disgust of visitor advocates).
Over time, Flash gained a bad reputation for security issues and as a performance hog. Apple was critical of Flash and took the step of pushing the Flash runtime into a separate process, allowing its Safari browser to keep serving up web pages and remain responsive. Separating the processes also made Flash's performance issues more visible and measurable.
Google Chrome dealt Adobe a blow by developing PepperFlash as a replacement for the Adobe Flash runtime. It is not as fully-featured as Adobe's flagship product, but on today's Web, which is mostly about video-on-demand and live streaming, Chrome's alternative is perfectly adequate.
Even SVG now has better out-of-the-box support than Flash in browsers. Flash once trounced SVG in developer mindshare, and yet it is SVG that survives and flourishes in the age of improved user experience on the Web. SVG has the lasting power Flash could only dream of, tackling the user-experience polish Flash was primarily designed to enable.
Adobe turned Flash into the ubiquitous runtime for cross-platform video-on-demand. The MPAA's (Motion Picture Association of America) insistence on DRM in principle, without mandating any particular implementation, created a headache that Flash cured. Flash supported the mainstream DRM systems and thus became indispensable for video playback in the browser.
Everyone used Flash to build their video players, because everyone knew Adobe did the thankless task of supporting the various DRM formats in the Flash runtime. (Everyone except Netflix.)
With security vulnerabilities regularly found, exploited and patched, it became a running joke that every week yet another Flash upgrade was required. In the days before seamless browser upgrades, Flash triggered its update-required dialogue far too often.
Probably in reaction to Flash's poor security reputation, the movie industry decided it would only allow its content to be streamed using Silverlight. This decision may have merely rubber-stamped what Netflix were already doing; Amazon Instant Video also switched from Flash to Silverlight.
YouTube dealt Flash a second serious blow by developing, supporting, and eventually defaulting to an HTML5 video player. Flash's leading role in Web video was over. Adobe showed no appetite to fight the transition; perhaps it had already surrendered.
Today websites that use their own Flash-backed video player instead of an HTML5 video player or YouTube are losing engagement as they degrade back into grey rectangles of missing plugins.
The movie industry decision to rubber-stamp Silverlight as the preferred technology platform for video-on-demand didn't take long to unravel. Not long after Amazon Instant Video switched to a Silverlight player, Microsoft announced that Silverlight was a deprecated platform. Silverlight had no future.
So Netflix switched to HTML5. And everyone else is following their lead.
A mass of Flash developers opted to implement whole websites in Flash, user interface and all. I recall that's the first time I encountered the hover-to-scroll UI idiocy. You learned about these sorts of sites from other people, since you couldn't find them directly through Google. They are the forerunners of Single Page Apps: un-indexable, un-bookmarkable blobs of binary data referenced by an empty embed element.
The reimplementation of common UI elements presumably led Adobe to offer Flex, a framework for building single page applications. As well as creating a rift in the Flash community, the Flex folk believed Flex was also the replacement for Java.
But these applications never gained a foothold on the Web. Maybe on intranets, and as CMS editing interfaces, but nothing substantial. Adobe AIR is probably the most prevalent platform for Flex-based Flash applications.
Flash was used to patch gaps in HTML, like Flickr's uploader widget, which vastly improved on the default HTML file-upload widget. WordPress followed and adopted Flickr's approach.
Flex had potential, but never came close to fulfilling it. The head conference in 2008 is the only good example of Flex approaching that potential: it was Twitch for online conferences, streaming presentation sessions alongside real-time chat.
The story of Flash is a fascinating but unsurprising one. An unassailable, dominant platform was toppled by business decisions from Apple, Google and Microsoft (the last proxied through the Motion Picture Association of America in the form of a platform mandate).
The symptoms were always there: Flash is a closed, proprietary platform. Like Microsoft, Adobe appears open, but not open enough to allow a feature-complete open-source competitor to flourish. The closed system naturally supports the security-through-obscurity of DRM, which is essential to appease the movie licence holders. Adobe gave us a black-box solution for playing any video on the Web.
Apple and Google replaced Flash with native apps on mobile. Google, Apple and Microsoft replaced Flash with HTML5 on the desktop. Netflix and Google replaced Flash (and Silverlight) with HTML5 for Web video consumption.
Thankfully Flash isn't being replaced by one thing. HTML5 is the big winner, and even though its direction is largely dictated by Apple and Google as browser vendors, it is still a technology based on the principle of openness.
The progress of HTML5 has effectively replaced Flash as a technology stack. SVG (in conjunction with JavaScript) has evolved from a pale imitation of Flash to ubiquity across browsers and operating systems. CSS has taken a leap forward. And JavaScript has grown from an inline scripting language embedded in HTML documents into a stack for building fully-fledged applications.
The struggle now turns to native apps versus non-native apps on mobile platforms. It is similar to Flash's original battleground: the argument that the Web technology stack is not suitable for building applications with a polished user experience.
The other debate aims right at the heart of the Web: whether it is a connected web of documents, or a collection of walled-garden web applications.
The Flash plugin and the technology platform are dead, but the ideas and ambition of replacing the Web haven't disappeared.
SproutCore doesn't just ignore progressive enhancement - it hacks it into tiny little pieces, urinates all over them and then mails them back to you one by one
(Note that these frameworks have a track record of ignoring established web development best practices, and then attempting to bolt them back in later. Meteor is heading down the same cobbled path: it added Google crawlability as a feature in later releases.)
The list of Web frameworks that perpetuate the empty-document technique grows; Sencha Touch and Meteor are two garnering attention today. John Allsopp notes the approach replaces a declarative set of technologies with an imperative model, so the idioms and patterns are quite different.
The core of the argument is that apps built using web technologies are being left behind by app-specific development toolkits, and that web technologies need to improve to better those toolkits. Trying to express this succinctly runs headlong into absurdity, though perhaps not through reductio ad absurdum:
Apps built with app-specific technology are better apps than those built with non-app-specific technology.
Obvious? But more pertinent is the question: so what? The Web's declarative model is what keeps the barrier of building websites as low as possible. (And web apps are just websites, just with a lot more JavaScript.)
And yet, James Pearce's talk at FullFrontal 2012 kept plugging in this direction, resulting in this particular example of HTML minimisation:
<script src="app.js"></script>
That is the entire HTML document for a web application. It's hard to tell whether this is a tongue-in-cheek or a foot-in-mouth argument. Claiming it is just an extreme example of a web app, not meant for real-world use, is disingenuous; people seem compelled to use these odd curiosities in real-world products.
Take the "let's reimplement the browser in Canvas from scratch" noodling of Bespin. As the HTML5 Editor (at the time) Ian Hickson noted:
Bespin is a great demo, but it's a horrible misuse of <canvas>.
Or Henri Sivonen's less succinct, yet still brutally accurate:
I think Bespin can be taken as an example of using <canvas> being "doing it wrong" and the solution being not using <canvas>.
Despite this regular feedback, it still took the Bespin/SkyWriter developers a few years of fighting performance and usability issues before they moved away from canvas. In that time, not least because of the initial attention Bespin received in tech circles, the "canvas as the browser" approach started to gain adoption as an acceptable way to build web applications. Yet Bespin was no more than a proof of concept, never intended to be used in a production setting (according to its developers).
Of course, modern web developers using new-fangled content-less HTML aren't making the same mistakes as Bespin and SproutCore. Their conceptions reflect web development best practice: separation of concerns (for stability), progressive enhancement (for wide device compatibility), graceful degradation (for robustness), accessibility (as an extreme usability utility). Right?
In the far-too-recent past, web app developers jumped on HashBang URLs as the technique exemplifying the applification of the Web. Despite it being contrary to web development best practice, single-page webapp developers persisted with the technique in the name of better performance and more robust code.
Yet Twitter backtracked to progressive enhancement (in the name of better performance, reducing the time to first tweet), and Gawker Media quietly reverted its hashbang-dependent implementation in the name of customer experience and robustness.
Both companies recognised that the problems with Hashbang URLs weren't the URLs themselves, but the complete dependence on a JavaScript bootstrap for the content experience. Instead of one document loading before content appears, it now takes one document, plus a chunk of JavaScript that simulates a browser, which must load, initialise itself, call for its required content and template assets, and then render them in the content window.
Where HashBang-URL web apps fell down is that the overhead before a user experiences the first chunk of content (Twitter's "time to first tweet" metric) is too high for a responsive web experience. Yes, a single page website is fractionally more responsive to customer interactions once the core application infrastructure has finally loaded and run; but the time to first visible content turns out to be the more important metric.
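The mechanics are easy to sketch. Everything after the `#!` in a hashbang URL never reaches the server, so the client-side bootstrap must parse the route and fetch the content itself before anything can appear. A minimal illustration (the function name and behaviour are my own, not from any particular framework):

```javascript
// A hashbang URL like https://example.com/#!/photos/123 sends only "/"
// to the server; the route after "#!" exists solely on the client.
function parseHashbang(url) {
  var hashIndex = url.indexOf("#!");
  if (hashIndex === -1) return null;     // no client-side route present
  var route = url.slice(hashIndex + 2);  // e.g. "/photos/123"
  return route.split("/").filter(function (s) { return s.length > 0; });
}

// The server never saw "photos/123", so rendering must wait for the
// JavaScript bootstrap to parse, fetch and template the content.
console.log(parseHashbang("https://example.com/#!/photos/123")); // ["photos", "123"]
console.log(parseHashbang("https://example.com/photos/123"));    // null
```

Every step in that chain, parse, fetch, template, render, happens after the page load, which is exactly the overhead the "time to first tweet" metric measures.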
It's the first impression, folks. Whatever the empirical evidence on the importance of first impressions, customers tend towards experiences that get them to their content and utility quicker. We've known this since the beginning of the Web: a two-page form converts better than a three-page form, despite requesting the same information.
It's probably an urban legend, but it makes a good story nevertheless, that Twitter brought in the high priest of High Performance JavaScript (Steve Souders) to advise them on how to make their web application faster, and he replied with: "Have you thought about putting the Tweets in the HTML instead of loading them through JavaScript?"
It turns out progressive enhancement isn't dead. It's still the primary technique for getting content to the customer fast. It's just continually ignored, and web apps eventually arrive back at it once they run out of non-best-practice techniques to throw at the bootstrap-time problem.
(It does make me chuckle when web developers claim progressive enhancement is too hard as a reason for skipping it, and then a little later, after the diminishing returns of that short-sightedness have worn off, undertake the far more difficult job of bolting progressive enhancement back in because the alternatives are even harder. All high-quality and efficient web development paths go through the forest of progressive enhancement eventually.)
Building applications using web technologies isn't new. We've been doing it for at least a decade. Sometimes you don't really notice.
Firefox and Thunderbird are two quintessential examples of applications built with web technologies. The entire user interface of each product is a combination of markup, CSS and JavaScript displayed by the Gecko rendering engine. Firefox is thus the inception point: a web application that runs other web applications.
We had a steady stream of applications built on Mozilla's XUL framework beyond mere proofs-of-concept (and Twitter clients): Songbird, the Komodo Edit IDE, the Cyclone3 CMS, Flickr Uploader, the ChatZilla IRC client, the Instantbird IM client, the BlueGriffon ePub editor, StreamBase.
XUL is an XML vocabulary, but it works with HTML so cleanly that at times it can be mistaken for just being extra tags bolted on top of HTML. Even the extension mechanism XBL allows you to create extra tags that can be used as first class elements in your structural (or declarative) documents.
The Mozilla approach to connecting the web surface to the computer's interfaces is to create a whole series of APIs that expose the inner workings of the computer and make them available through JavaScript. The developer can then choose to surface that right up to an actual custom element.
We've sunk countless hours working within the XUL framework, some of the best developer tools came through that route. For example Joe Hewitt's Firebug, a tool that effectively brought web development out of the Stone Age, and its little cousin Chris Pederick's Web Developer Toolbar.
The second still-growing framework for web-technology-based apps is Adobe AIR. I still regularly see new software built with it and sold to small and medium-sized businesses, as a simple way of encapsulating expert knowledge into a handy tool. I think I've bought at least three Adobe AIR based applications (not including Twitter clients) this year alone. For example, Keyword Blaze helps small businesses explore and find online niches and assists in keyword research, lowering the bar for entrepreneurs to handle and manage their own SEO strategy.
While webapp ninjas complain about their tools and environment, entrepreneurs create these applications with the web stack.
As an aside: there was a movement running in parallel to Mozilla's effort, focused on the idea of Rich Internet Applications (largely driven by the Open XUL Alliance), where developers collaborated on building declarative interface bindings for their pet languages. For a brief while that's where web application development sparked, producing toolkits like Luxor-XUL as one notable example.
The typical counter-argument to these approaches is visual look and feel. Clearly none of the platforms above look exactly like custom iPhone apps on the iPhone; they don't feel like native iPhone apps.
Developers who find this galling go to extraordinary lengths to duplicate the feel of the iPhone interface inside their web apps. This has the side-effect of the app feeling ridiculous on an Android phone of the same technical specification: you get an Android experience on the outside, and a sub-par emulated iPhone experience on the inside of the app. It's common to end up with two back buttons, each doing something different.
The crux here isn't that web technologies don't make good enough application platforms; it's that they don't match perfectly with native look and feel. Every native platform is different, whether in its design, in the idioms and metaphors that are at its heart, or just in subtle differences of definitions and interfaces.
The Web is platform-agnostic; its success isn't chained to the continued success of any specific platform. So web applications do not conform to individual platform expectations. Much like their cousins, the cross-platform application toolkits (Java, Qt, Lazarus), they don't exactly match the operating system's native widgets because there isn't a clean mapping across the range of platforms they support.
This platform independence is a feature of the Web, not a shortcoming. The web stack isn't meant to emulate operating-system graphical widgets; that's the browser's job (or the operating system's), not the web stack's.
And you know what, it doesn't matter. Applications written for a specific platform also don't always look the same as other apps on that platform. WinAmp, QuickTime, Twitterrific, Chrome and StickyNotes are just five applications I'm running right now that don't look like the default visual standard of the operating systems they run on.
If an application's success is primarily based on looking and feeling like a native application for that platform, you really have to be an idiot to build that application using something other than a toolkit or framework designed with that goal as the primary endpoint.
Gmail continues to be a pioneering success for web applications. It remains steadfastly popular for its features and information management, not because it looks like a native iPhone app on the iPhone.
The Web's independence from the hardware and software platform people use is a feature. It's better than cross-platform frameworks which are constantly criticised for not producing exact native-feeling apps on the multitude of platforms they run on. The Web is above that pettiness.
The Web isn't an application platform. It is really a data platform (or more precisely, a content platform) with a very light visual and behavioural layer available: Cascading Style Sheets and JavaScript.
Take a typical brand-new iPhone. Make sure it doesn't touch the web in any way: place it in an environment where it cannot make an HTTP request or receive an HTTP response.
How useful are your native applications? Oh that's right, the only applications you can get to are the ones installed on the phone by default. The other applications are just data sitting on the Web waiting for you to request them.
And the default applications you do use because of the value they provide are probably going to offer you less value without that lifeline to the Web. Yes, the value of the application isn't because it's a native application, but because of the data it uses, and that data is sitting on the Web.
Smartphones are useful because they participate on the Web of content, as equal citizens to desktop browsers and tablets. Applications are just a shell that offers interaction with the data. Without the data, the interactions are worthless.
Native apps need the Web, the Web doesn't need native apps.
If your primary requirement is a seamless native app experience, then you need to build a native app for each platform you want to support.
If you are content to abstract away the nativeness of a platform to a wrapper (like a browser), then a web application is perfectly adequate.
But there's also a third alternative: a hybrid of both approaches, since the Web isn't (just) an application platform, it's the primary globally available data platform for your application.
We who cannot remember the past...
It's taken a couple of days of hacking around, trashing Ubuntu and re-installing it, but I've gotten to the bottom of the issue. Most of the 5-minute boot sequence is the laptop just sitting there waiting for something to happen; there's only a tiny amount of disk activity going on.
The first step is to figure out why the boot process is so slow. So in a Terminal, run dmesg, which displays a timestamped log of each subsystem initialising. The timestamp counts the number of seconds from power on. Looking through this list I saw a huge leap in seconds (about 350 seconds), and that line said:

[ 352.885250] ADDRCONF(NETDEV_UP): eth0: link is not ready

It turns out Ubuntu can't quite deal with the Ethernet card (possibly a Realtek network module). Opening the "Connection Information" in the Network menu (the top-right icon of up-and-down arrows) identifies the network card driver as r8169.
The solution turns out to be delaying the initialisation of the Ethernet card: removing it from the boot sequence and adding it to the rc.d initialisation instead. Here's how to do that:
First we blacklist the card from the early boot initialisation steps (all on one line):

echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-ethernet.conf
The second step is to initialise the card later, by editing /etc/rc.local and adding modprobe r8169 just before the exit 0 line:

modprobe r8169
exit 0
Thirdly we need to rebuild the boot image to take into account the newly added blacklist item, so run the following:
sudo dpkg-reconfigure linux-image-$(uname -r)
Once that is done, reboot the laptop.
For me that reduced the boot time to 38 seconds, a reduction of about 90%. There are still a couple more seconds to be saved by disabling IPv6 and parallel-port support, but that's for a rainy day.
Most interesting talk was Jeremy Ashkenas' CoffeeScript Design Decisions - which surprised me somewhat, considering my opinion on the impracticability of CoffeeScript in a real-world web development team. I was also pleasantly surprised by Rik Arends' talk about the Cloud9 IDE. Phil Hawksworth's talk was eminently listenable, but unfortunately impossible to live-blog. Eloquent JavaScript author Marijn Haverbeke was incredibly interesting, but perhaps too technical for a conference talk. The final two talks, from Brendan Dawes and Marcin Wichary, were both entertaining and (more importantly) inspiring. Glenn Jones provided a very useful look at almost-ready browser features for sharing data. And Zakas rolled out a two-year-old talk.
I covered Ashkenas on CoffeeScript Design Decisions in a separate blog post. It was that interesting and bloggable.
Marijn Haverbeke talks about code editors in a browser; basically glorified text areas. Text areas themselves are too primitive for building a code editor on; indentation is unworkable, and good luck finding which line of your code you are currently on.
The code editor in a browser received a massive boost from Bespin, a code editor written entirely in canvas that basically recreated every pixel of its UI. As a demo Bespin was interesting, but using canvas is an inappropriate solution. Thankfully this abysmal mess of inaccessibility is now mostly abandoned, and the code editors that arose from the work are well on track to being more conducive to the web environment, and more perceptive of the DOM. One significant limitation of canvas is its portability.
So far there are only 3 serious implementations of in-browser code editors:
All three are open source projects, and to date there isn't a single commercial implementation of a browser-based code editor.
Marijn is the sole developer behind CodeMirror. The first major version of CodeMirror used content-editable (or design mode), a browser-supported mechanism for inline editing. But Marijn found the feature underspecified and the browser support buggy or unsuitable for building a code editor, with issues ranging from Internet Explorer inserting paragraphs whenever it could to the general unavailability of useful events.
The original version of CodeMirror grew out of Marijn's previous project, an online JavaScript book called Eloquent JavaScript. The book contains exercises and demonstrations of JavaScript - code that can be written and run inline on the page. For the book, Marijn's solution of a Firebug-like textarea console and output window worked. He later added colour syntax highlighting by overlaying the textarea with coloured text DOM nodes, but this turned out to be too slow with pure JavaScript DOM changes. This gave rise to the first version of CodeMirror.
After content-editable proved insufficient (code folding was impossible, for example), Marijn started version 2.0 of CodeMirror, based on the pure DOM. He limited the size of the DOM by creating a viewport onto the document, so only the visible part needs to be rendered on the page, speeding up overall rendering. Marijn calls this a fake editor: just lots of changing DOM nodes. The cursor is a lie, yet the text looks editable, with copy and paste and moving text all working. Marijn crafted his own text-selection algorithms, which works well since CodeMirror has full control over the document model.
CodeMirror's API approach allows undo/redo, code folding, and an extensible mechanism for supporting multiple languages with both syntax highlighting and code completion. Building a language extension is about implementing one method, token(), which is called, SAX-like, for each token within the edited document. There are hooks for managing state through the document. The typical features of a good editor, such as search-in-place and text replacement, end up being scripts on top of the CodeMirror API.
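The shape of that API is easy to illustrate. The sketch below is not CodeMirror's actual code, just a toy mode in the same SAX-like style: token() is handed a stream positioned at the next unconsumed character and returns a style name, with a tiny stream object standing in for CodeMirror's real StringStream:

```javascript
// A toy stream standing in for CodeMirror's StringStream: it tracks a
// position within one line and lets token() consume characters from it.
function makeStream(line) {
  return {
    pos: 0,
    eol: function () { return this.pos >= line.length; },
    next: function () { return line.charAt(this.pos++); },
    eatWhile: function (re) {
      var start = this.pos;
      while (this.pos < line.length && re.test(line.charAt(this.pos))) this.pos++;
      return this.pos > start;
    }
  };
}

// A minimal mode: token() consumes one token and returns its style.
function token(stream) {
  if (stream.eatWhile(/[0-9]/)) return "number";
  if (stream.eatWhile(/[a-zA-Z]/)) return "word";
  stream.next();   // consume one unrecognised character
  return null;     // no styling for it
}

// Drive token() over a line the way a mode runner would.
function tokenise(line) {
  var stream = makeStream(line), styles = [];
  while (!stream.eol()) styles.push(token(stream));
  return styles;
}

console.log(tokenise("x1 = 42")); // ["word", "number", null, null, null, "number"]
```

A real mode also receives a state object alongside the stream, which is how multi-line constructs (strings, comments) are tracked across calls.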
CodeMirror supports the identification of local variables, helping the developer spot potential bugs through its ability to distinguish the scope of variables as the code is being edited. This support isn't limited to JavaScript; XML-based languages have mismatched-tag detection, for example.
CodeMirror can also handle syntax highlighting of multiple languages in the same document (composed modes). Thus in an HTML page containing CSS and JavaScript - and even PHP - each piece can be independently supported. I guess this is done on a line-by-line basis, so perhaps an inline piece of JavaScript surrounded by non-JavaScript on the same line might not be highlighted the same way as a block of JavaScript.
Internally, the document being edited is stored as a doubly-indexed B-tree, as this allows two-way mapping between the actual line of code and the line currently displayed on screen. These two can differ because of code folding, long lines being wrapped, and the displayed region being offset from the top of the document.
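The need for that two-way mapping is easy to see with code folding alone. Here is my own linear-scan illustration of one direction of the query; it is not CodeMirror's data structure (the point of the doubly-indexed B-tree is that both directions of this lookup are logarithmic rather than linear):

```javascript
// Map a logical (document) line number to its visual (on-screen) line,
// given a list of folded ranges. Each fold hides lines [from, to).
// Illustrative linear scan; CodeMirror's B-tree answers the same query
// (and its inverse) in logarithmic time.
function visualLine(logicalLine, folds) {
  var hidden = 0;
  for (var i = 0; i < folds.length; i++) {
    var fold = folds[i];
    if (fold.to <= logicalLine) hidden += fold.to - fold.from;
  }
  return logicalLine - hidden;
}

// With lines 2-4 folded away, logical line 10 is drawn as visual line 7.
console.log(visualLine(10, [{ from: 2, to: 5 }])); // 7
console.log(visualLine(1, [{ from: 2, to: 5 }]));  // 1
```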
The editor listens for scroll events, and avoids start-up freezes by rendering DOM nodes once there's sufficient information available to render the currently visible region. So scrolling and loading large documents doesn't slow down the interface.
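At its core the viewport trick is just arithmetic: from the scroll position, work out which lines intersect the visible region and give only those DOM nodes. A minimal sketch (the names and the extra "overscan" rows are my own illustrative choices, not CodeMirror's):

```javascript
// Compute the range of lines that need DOM nodes for the current scroll
// position. Everything outside this range can stay unrendered.
function visibleLines(scrollTop, viewportHeight, lineHeight, totalLines) {
  var overscan = 2; // render a couple of extra rows to smooth scrolling
  var first = Math.max(0, Math.floor(scrollTop / lineHeight) - overscan);
  var last = Math.min(totalLines - 1,
                      Math.ceil((scrollTop + viewportHeight) / lineHeight) + overscan);
  return { first: first, last: last };
}

// A 10,000-line document, 16px lines, a 480px viewport scrolled to 8,000px:
// only ~35 lines need DOM nodes, regardless of document size.
console.log(visibleLines(8000, 480, 16, 10000)); // { first: 498, last: 532 }
```

On each scroll event the editor recomputes this range and adds or drops line nodes at the edges, which is why loading a huge document doesn't freeze the interface.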
Copy and paste is a clever hack: CodeMirror detects a right-click and inserts a hidden textarea underneath the cursor, so the context menu at that point offers copy and paste options, because those are native to text fields.
With demos of the above features and a walk through how they were implemented, Marijn delivered an interesting, tech-heavy talk.
Rik Arends introduces Cloud9 IDE as the easiest way to work with node.js. Cloud9 IDE is an online IDE and runs on node.js.
Though, why build an IDE with JavaScript? JavaScript developers lack tooling. Every main language has an IDE geared to it: Java has Eclipse, C++ has Visual Studio, while JavaScript is added to IDEs and editors as an afterthought, always treated as a second-class language. By using JavaScript, developers will already know how to extend the IDE they use.
We evangelise the Web, but we don't develop on it. Rik accepts the challenge: if so many applications can be hosted on the web, why not a developer IDE? Surely web developers should be able to work in a cloud-based IDE. If the web can self-host its entire toolchain, isn't this, he asks, the best way of pushing the web forward?
IDEs don't need to be ugly or clunky, so Cloud9 takes a design-led approach. A year ago it looked a little like Eclipse; today it will look very familiar to TextMate users (and indeed it takes some useful cues from the Sublime Text editor). They are even currently importing TextMate themes.
Cloud9 is betting the company on building a cloud-based IDE. It is currently funded by Accel and Atlassian. And Cloud9 IDE is developed using the Cloud9 IDE. The IDE is open source, and downloadable from GitHub to run on your own server. (I caught up with Rik later in the day and he confirmed it can be up and running on a VPS as long as node.js has at least 128MB of RAM available, so a cleanly configured 256MB VPS should be a useful starting point.)
Rik demos the Cloud9 IDE: projects are brought in via GitHub (after OAuthing with GitHub). The main editing interface uses the ACE code-editing component, which renders a viewport onto the current code as DOM fragments (similar to CodeMirror 2, I gather). This means rendering is lightning fast, and syntax highlighting is easy to extend for different languages and markup schemas. One recently added editor feature is a mini-map of the code, giving a good visual overview; this is based on the finding that developers recognise code blocks by how they look, rather than by the actual code.
Cloud9 IDE has a console for running git commands. It's actually an emulation of a server shell, with a very limited range of Unix commands. One feature coming soon is a link to a virtual machine, which will allow a real Unix console right in the IDE.
Projects can have node.js servers configured with them (essentially on the fly, managed through the Cloud9 IDE interface). This allows inline node.js debugging, so the developer has the whole gamut of breakpoints and means of stepping through code, peeking at stack traces and in-scope variables (much like the Firebug debugger for browser-based JavaScript debugging).
The IDE has support for Heroku-based deploys (as git commit hooks, I think). The console has code completion (for example, for git commands and options).
Cloud9 IDE takes advantage of a number of publicly available libraries and toolkits, including:
Rik lists some useful pointers in building applications on node.js:
From a systems point of view:
Rik's excellent talk covers useful ground in building and architecting node.js applications. It's also a great example of a supposedly impossible-to-build application implemented in a highly usable, flexible and powerful way. I guess this is the first serious node.js app I've seen under the covers of, and it's a great demonstration of what's possible with node.js.
We live in a globally connected world. Conferences happen regularly; they are recorded on video and posted online for the world to watch. The JavaScript community is small and closely knit, so we keep up with what's going on beyond the UK borders. With that in mind, I find the practice of paid, top-billed conference speakers (both local speakers and those flown in from other countries) giving the same talk multiple times at different events deplorable and unacceptable.
Nicholas Zakas' talk "Scalable JavaScript Application Architecture" is unchanged from the original talk he gave at the Bayjax conference hosted at Yahoo, Sunnyvale in September 2009. Two years ago. I've already watched the video of this talk (several times), so I was unimpressed to see him wheel it out again verbatim.
This repetition might make sense for a last-minute stand-in for another speaker, or for a speaker added to make up the numbers. But not for a top-billed speaker flown in from the USA, whom we as attendees were paying for.
One-page web applications are appearing all over the place, and as Glenn accurately notes they are silos, but they don't need to be. Glenn shows a couple of demos he has been working on that allow data from one web application to be communicated to another.
His central application is a people store, storing vcards (or the microformat equivalent hcards). He shows how drag and drop between two separate browsers can be used to copy across the vcard information purely in the browser, meaning that this transfer can be done on any web-based application hosted anywhere.
Glenn shows further capabilities, of dragging and dropping vcards from the file system to the browser, and saving vcards from the browser back to the file system.
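A sketch of how such a transfer might work with the HTML5 drag-and-drop API: the contact is serialised under several MIME types at dragstart, and the drop target picks whichever it understands. The field names, element id and sample contact below are my own assumptions for illustration, not Glenn's actual code.

```javascript
// Sketch of moving a contact between pages with HTML5 drag and drop,
// roughly in the spirit of Glenn's demo. The field names and the
// "dragcard" element id are assumptions, not his real implementation.

// Pure helpers: serialise a contact to a vCard string and back, so the
// receiving page can reconstruct the structured data.
function toVCard(card) {
  return ["BEGIN:VCARD", "VERSION:3.0",
          "FN:" + card.fn, "EMAIL:" + card.email,
          "END:VCARD"].join("\r\n");
}

function fromVCard(text) {
  const card = {};
  for (const line of text.split(/\r?\n/)) {
    if (line.startsWith("FN:")) card.fn = line.slice(3);
    if (line.startsWith("EMAIL:")) card.email = line.slice(6);
  }
  return card;
}

// Browser wiring (guarded, so the helpers above run anywhere).
if (typeof document !== "undefined") {
  const el = document.getElementById("dragcard"); // hypothetical element
  el.addEventListener("dragstart", (e) => {
    const card = { fn: "Glenn Jones", email: "glenn@example.com" };
    // Offer the data under several types; the drop target takes its pick.
    e.dataTransfer.setData("text/x-vcard", toVCard(card));
    e.dataTransfer.setData("application/json", JSON.stringify(card));
    e.dataTransfer.setData("text/plain", card.fn);
  });
  document.addEventListener("drop", (e) => {
    e.preventDefault();
    const card = fromVCard(e.dataTransfer.getData("text/x-vcard"));
    console.log("Received contact:", card.fn, card.email);
  });
}
```

Because the payload travels inside the drag event itself, nothing server-side is involved: any page, hosted anywhere, can receive the contact.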
This shows that the drag-and-drop API and the filesystem-based APIs (including uncompressing zip files in pure JavaScript) are getting to the point of being useful to the developer. Glenn demos the applications first, and then goes back over each one showing how the demo is done, along with the APIs used and the current set of browsers in which these APIs work.
What impressed me was that the data wasn't just text, or an image, but a set of structured data plus an image. The drag and drop looks very useful, but Glenn notes that the HTML5 specification for drag-and-drop is a disaster, concurring with PPK's viewpoint. Currently, advances in drag-and-drop seem to be driven by Gmail.
Another clever demo was copying in a chunk of HTML (using the clipboard). Glenn shows that under the covers he is using contenteditable / design mode to receive the HTML, and then running a standard parser over it to extract the structured information. Certainly a sensible use of a microformats parser.
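The paste-and-parse step might look roughly like this: the clipboard HTML lands in a contenteditable element, and a parser then pulls the hCard class values back out. A real microformats parser walks the DOM properly; the regex below is a deliberately tiny stand-in, and the element id is assumed.

```javascript
// Sketch of receiving pasted HTML via contenteditable and extracting
// hCard fields. A real microformats parser traverses the DOM; this
// regex-based version is a toy stand-in for illustration only.

function parseHCard(html) {
  const pick = (cls) => {
    const m = html.match(new RegExp('class="' + cls + '"[^>]*>([^<]*)<'));
    return m ? m[1] : null;
  };
  return { fn: pick("fn"), email: pick("email") };
}

if (typeof document !== "undefined") {
  // Hypothetical contenteditable receiver; the id is an assumption.
  const sink = document.getElementById("paste-sink");
  sink.addEventListener("paste", () => {
    // Let the browser drop the clipboard HTML into the element first,
    // then read it back out and parse it.
    setTimeout(() => console.log(parseHCard(sink.innerHTML)), 0);
  });
}
```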
As an additional extra Glenn is using the Google Social API which, given an email, returns a list of social network or XFN type information about the person represented by that email address.
The last part of Glenn's talk covered Web Intents. These are used to map common actions with services, such as posting a status, bookmarking a link, editing an image, picking a profile. Glenn shows the two main ways Web Intents improves the flow of information:
Many blog posts have rows of social bookmarking links. Instead, Glenn shows that by replacing these with a single "Bookmark this link" action, identified as such to the browser, the browser can route the action to the service the user actually uses. So clicking that link opens the Delicious bookmarking dialogue for one user, and a Pinboard dialogue for another.
The other direction is for filling in forms, like street addresses. He demos a web intent that maps a "Fill in this address using a profile" link to his own people store. That way he can select the profile in his contacts system appropriate to the form, such as his personal profile for a home address, or his business profile for his office address.
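For context, the client side of such a web intent might be sketched like this, following the 2011 webintents.org draft (the `Intent` constructor and `navigator.startActivity`). The action verb, data type and helper function are my own assumptions, and the draft API may differ from anything that actually shipped.

```javascript
// Sketch of the (experimental, at the time) Web Intents flow: a service
// page registers for an action, a client page fires an intent, and the
// browser matches the two. API names follow the 2011 webintents.org
// draft; everything here is an illustrative assumption.

// Pure helper: describe a "fill in this address" request.
function addressIntent(formFields) {
  return {
    action: "http://webintents.org/pick", // draft action verb
    type: "text/x-vcard",                 // data type the client accepts
    fields: formFields
  };
}

if (typeof navigator !== "undefined" && typeof Intent !== "undefined") {
  // Client side: ask the user's chosen profile service for an address.
  const req = addressIntent(["street", "city", "postcode"]);
  const intent = new Intent(req.action, req.type);
  navigator.startActivity(intent, (vcard) => {
    console.log("Profile returned by the user's service:", vcard);
  });
}

// Service side (e.g. a people store) would register with markup like:
//   <intent action="http://webintents.org/pick"
//           type="text/x-vcard"
//           href="/pick-profile.html"
//           title="My People Store"></intent>
```

The key point is the indirection: the page never names a specific service; the browser maps the action to whatever the user has registered.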
A very informative talk about the features that are on the way to being ready for use in web sites, features that improve the user experience, and thus help reduce the risk of customers disengaging with sites.
Brendan Dawes is a designer and a geek, and he delights in the design of items that escape the rest of us, such as Japanese and Brazilian paperclips (and pencils). And he lets his creativity loose. When a high-profile architecture company, seeing in Brendan that same boundless creativity, hires his company to design a new experience for their portfolio of buildings, amazing things happen.
Brendan's narrative draws you in. He shows that sometimes it's good to build things that are not user-friendly, because that way you get experiences that are memorable, enriching and deeply rewarding, as well as expressing a brand identity. One side-effect of a rich exploratory interface is that it opens the visitor up to serendipitous discovery, one of the most wonderful experiences for curious people.
Brendan's talk is about visualisation of data into an engaging experience. He talks about the ideas that worked, how the idea evolved, how the customer asked to make the logo smaller, about what it's like creating with no limits in place.
Yet he also shows that the idea is still grounded in technology, it's built in HTML5 because it's something Brendan doesn't know (Brendan is a firm believer in moving on to different technologies when you have mastered the current one).
An utterly enjoyable, and inspiring talk, and a nice counter-balance with the other technical-focused talks.
Marcin works for Google, and his 20% project is helping the Google Doodles team build their next doodle. He's been involved in several of the new interactive doodles that have appeared on the Google homepage over the last year: from the faithful implementation of Pac-Man, to the ??? machine, to the magnifying glass doodle, to the Jules Verne underwater exploration doodle, and the ??? animation.
After a quick summary of the history of the Google Doodle (from the first Burning Man doodle, right to when Marcin starts to get involved), Marcin dives into the interesting stories behind the interactive ones, covering the usability tests and technical challenges of the doodle. We are shown several versions of the Jules Verne doodle and see the changes made and how they considered IE support.
Marcin talks about how each of the doodles evolved to its end result: the user-experience factors, the technical limitations, and the reinvention of animation techniques. Marcin has a healthy interest in retro-game programming techniques, and it's eye-opening to see how some of those techniques have been adapted in these famous doodles. The Martha Graham dancing-woman doodle is a magnificent example of his Crushinator technique, which reminded me of the bit-blitting techniques of the Amiga.
There are animation tricks like a thick lens holder that hides the coarseness of the rectangular clip regions, saving performance while still producing a jaw-dropping result; similar to the Dark Sceptre approach to masking large animated characters.
Marcin shows some of the easter eggs present in the doodles, like the Jules Verne one being controllable by the accelerometer built into MacBook Pros. Many people first learnt about that particular hardware feature because of the Jules Verne doodle. Marcin notes some of the bugs they uncovered, for example the same generation of MacBook Pro having different accelerometer chips, which resulted in the movements being completely reversed.
A fascinating exploration of animation, interactivity and rich user experience on the world's most trafficked page. It also shows how old animation techniques are repurposed on the ever-evolving web.
Full Frontal was an enjoyable day spent watching masters of JavaScript showing their efforts and taking us behind the scenes. I like Remy & Julie's choice of speakers - the people who actually ply their trade and craftsmanship, the ones who don't seek out the limelight and attention, and generally just do a wonderful job and are passionate about what they do.
Full Frontal delivers in a way no other conference does: a small audience in a homey venue, free from marketing and hype, delivering great value, inspiration, and a focus on the web developer. It is my favourite event.