Last week: CSS WG meeting at Opera, Chrome Labs and the year 275759

Published on in CSS, Google Chrome, HTML, Last Week, Standards, tech, WebKit. Version: Chrome 7

Last week the CSS Working Group met at Opera’s office in Oslo, Norway, for a face-to-face meeting. Following a tight schedule, the members met three days in a row, discussing topics ranging from the open CSS 2.1 issues and various CSS 3 modules to other subjects such as hit testing. Some of the results are clear: all open CSS 2.1 issues have been resolved and a range of specifications will have their priority increased (such as CSS Transitions and Transforms).

Furthermore, CSS 2.1 is expected to become a Proposed Recommendation by the end of the year. This would mean that the specification could be a W3C Recommendation early next year, allowing the working group to focus its attention on CSS 3 and beyond. During the meeting Mozilla’s David Baron also mentioned that Firefox will be implementing 3D Transforms, already available in Safari and Google Chrome.

As for Chromium and WebKit, a combined total of 1282 commits landed in their repositories. While that is fewer commits than last week, there’s a lot more news to share about the projects. I’ll highlight some notable items from last week, and briefly list other interesting changes.

Firstly, it’s becoming more and more obvious to the Chrome team that their browser is lacking important features for the enterprise market. One area Google can tackle is policies. Policies are a way of defining browser settings through the Windows registry, Microsoft’s Administrative Template files or the so far unannounced ChromeOS Enterprise Daemon. Other policy and preference stores may be added in the future.

Policies allow companies to easily define common settings such as the proxy server to use, account synchronization and whether JavaScript should be enabled for websites. Unfortunately this also enables companies to block Chrome updates, but I’m sure the Chrome team will be looking at options to prevent people from doing this. Last week support for three new policies was added.

Another large update is the initial inclusion of the Google Chrome Labs page. Most other Google products, as well as Google itself, include a page with experimental features. Considering Chrome supports about 320 command-line flags, it won’t surprise you that such a page makes certain experiments a lot more accessible. Google’s Nico Weber committed the initial version just over four days ago. You can try it out yourself by downloading a recent nightly and visiting about:labs.

The WebKit team has invested a lot of time in improving their support for various standards. Adam Barth and Eric Seidel enabled the last part of the new HTML5 Tree Builder: fragment parsing. Furthermore, support for HTML5-compliant doctype switching was added, symbolic CSS3 list-style-types are now supported and file inputs now respect HTML5’s fake path. Finally, thanks to another addition, you can now use HTML5’s date input types to start making plans for your birthday in the year 275759.

Now that the new Tree Builder has been completed (except for a lot of fine-tuning, of course), thousands of lines of code were up for deletion. The old Tree Builder itself was removed on the 24th of August. Further cleanups followed with the removal of WebKit’s implementation of Mozilla’s XML Binding Language (XBL). It hadn’t been maintained in years, so the decision was made to remove it entirely.

Further updates last week

Starting next Thursday I will be in Brighton, United Kingdom. Together with Krijn, Anne and Matijs I’ll be attending dConstruct 2010. Perhaps I’ll be seeing you there? 🙂

Read more (3 comments) »

Last week in.. WebKit and Chromium!

Published on in Browser Vendors, Google Chrome, Last Week, tech, WebKit. Version: Chrome 7

It’s hard to keep track of huge open source projects which receive hundreds of updates per week. In the case of WebKit and Chromium, a total of 1113 changes landed in the past seven days alone, including lots of new features, enhancements and of course tons of bugfixes. Inspired by Paul Irish and Divya Manian, I’m going to experiment and see whether it’s doable to regularly write (smaller) updates like these.

In the past seven days WebKit has seen 396 commits from about 80 authors. A decent number of them came from Google engineers working on storage-related systems. Firstly, there is the File API specification; Chromium has been supporting various asynchronous FileReader functions for a few months now.
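
To give an idea of what those asynchronous functions look like, here is a minimal sketch of reading a user-selected file with the File API’s FileReader. The input element and file name are purely illustrative; only the FileReader calls themselves come from the specification.

// Grab a File object from an <input type="file"> element on the page.
var input = document.querySelector('input[type=file]');
input.onchange = function () {
  var file = input.files[0];
  var reader = new FileReader();

  // The read happens asynchronously; the result arrives in the load event.
  reader.onload = function () {
    console.log('Read ' + reader.result.length + ' characters from ' + file.name);
  };
  reader.onerror = function () {
    console.log('Reading failed, error code ' + reader.error.code);
  };

  reader.readAsText(file);
};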

Last Thursday Eric Uhrhane committed the first part of the File Writer spec. Even though it’s only a placeholder, it shows that Google’s actively working on implementing the features. Official word on synchronous methods is still pending.

The other storage system they’re working on is a specification I wasn’t aware of myself: a Directories and System extension to the File API. The initial bits of the implementation were committed by Kinuko Yasuda on Monday. Since it’s built entirely on top of the File API, the main use-case for the implementation will likely be Chromium OS. Regardless, most of the use-cases would be useful in current browsers as well.
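
The implementation is only just starting to land, but based on the draft the JavaScript usage could end up looking roughly like the sketch below. All names here (requestFileSystem, the TEMPORARY constant, getFile) are taken from the draft as assumptions; they may well change or end up behind a vendor prefix.

// Request a sandboxed, temporary file system of five megabytes and create
// a file in its root directory.
window.requestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function (fs) {
  fs.root.getFile('notes.txt', { create: true }, function (fileEntry) {
    console.log('Created ' + fileEntry.fullPath + ' inside the sandbox.');
  }, onError);
}, onError);

function onError(error) {
  console.log('File system error, code ' + error.code);
}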

Folks at Apple have been busy improving the quality of the WebKit2 interface. Windowless plugins can now paint and receive mouse events, which means that the Vimeo Flash player can be used again in Windows builds. A number of improvements to media playback have been added as well, such as better detection of the “application/octet-stream” content type and restoring the intrinsic size of a video after loading its poster. Simon Fraser solved a number of random crashes which became more obvious now that accelerated compositing has been implemented.

More exciting news: even though it has been working for a while already, support for inline MathML has now been announced for Safari nightlies. MathML is a way of rendering complex math straight in your browser, pretty much like SVG is for graphics. MathML can, just like SVG, be included in any HTML5 page. Henri Sivonen has created a nice example demonstrating both technologies.

Within the Chromium team, work is well underway to seamlessly integrate ANGLE into the browser. DirectX libraries will be distributed with the Windows versions and a public experiment has started to gather statistics about GPU capabilities. The browser also received per-plugin content settings, although the feature is still protected by a runtime flag.

The version of Chromium’s trunk has been bumped to 500, which certainly is a milestone. An early implementation of the remote WebDriver API has landed, allowing basic remote control of your browser. Finally, the V8 JavaScript engine has been updated to version 2.3.9 (changelog).

Other recent changes

Of course, with a total of 1113 commits in both repositories, there’s a lot which hasn’t been mentioned yet!

  • Eric and Tony have solved some more issues around the HTML5 Tree Builder.
  • The Qt port now supports touch events in WebKit2, courtesy of Juha Savolainen.
  • Chromium’s accelerated compositing rendering logic has been refactored.
  • <style> elements within <noscript> are now ignored if JavaScript is enabled.
  • Kenneth Russell is now a WebKit reviewer (congratulations!).
  • Some SVG Pattern fixes were landed by Nikolas Zimmermann.
  • Pushed SPDY streams in Chromium now get closed automatically as well.
  • Accelerated Compositing for <canvas> will be compiled in by default.
  • Chromium can now use the Windows 7 Location Provider for Geolocation.

Even though it’s just a week, an incredible amount of work happens within these two huge open source projects. In order to include other browsers (Firefox, Opera and Internet Explorer) and specifications, I’ll have to cut back on the details quite a bit. This week the CSS Working Group is meeting face-to-face in Oslo; I’m sure that’ll be interesting to include next week. :)

Read more (13 comments) »

Synthesizing and processing audio through JavaScript: the Audio API

Published on in Gecko, Standards, tech, WebKit.

In the past few years a whole range of visual effects have been standardized. Future websites can render pretty much anything using bitmap canvases, display 3D content using CSS 3D Transforms or WebGL and even implement entire key-frame-based animations using nothing but CSS. Combined with specifications like the Application Cache and Local Storage, “HTML5” enables a whole new range of web-based applications.

Unfortunately, now that almost everything can be visualized on your monitor, the inability to synthesize, process, and analyse audio streams is becoming more and more obvious. While Flash provides fairly extensive APIs for working with sound, having a native (and preferably more extensive) API available to synthesize, process, and analyse any audio source is much more convenient. That’s why the W3C Audio Incubator Group was founded!

Don’t get too excited just yet: while an initial draft has been published by Google’s Chris Rogers, you shouldn’t expect the API to be finished within the year. The initial version received lots of input from six Apple engineers: Maciej Stachowiak, Eric Carlson, Chris Marrin, Jer Noble, Sam Weinig and Simon Fraser, and now frequently gets updated based on feedback received via the mailing list. The draft specifies various features for the API: spatialized audio, a convolution engine, real-time frequency analysis, biquad filters and sample-accurate scheduled sound playback. Wait, spatialized what?

The reason why it doesn’t exist already

The complexity involved with synthesizing, processing, and analysing audio is one of the key reasons why it doesn’t exist already. Most audio today has a sampling rate of just over 44 thousand samples per second; tracks on DVDs and Blu-ray discs can go as high as 192 thousand samples per second. Multiply that by the number of sound channels (a stereo stream at 44.1 kHz already means over 88,000 samples every second), add the decoding required to make sense of the file, and you can imagine the amount of work that goes into translating that MP3 file into waves our ears can interpret.

Of course, part of this process is handled by hardware, like converting the digital stream to an analog signal. However, applying effects to an audio stream happens entirely in software where each sample gets processed. In situations where effects are applied and the processed sound is played back almost simultaneously, you can imagine how critical things like buffering and timing are.

Another problem is JavaScript performance. While scripting engines have become way more powerful in the last few years, they can be on the order of twenty times slower than well-optimized native code. Compared to native code that uses one of the SSE instruction sets, which give your processor highly optimized abilities for audio-related math, today’s scripting engines still have a long way to go.

Native processing to the rescue: just create an API

Performance can be improved by moving most of the processing away from JavaScript. With the Application Programming Interface (API) the Audio Incubator Group will likely be proposing, your script gains the ability to “describe” what you want to happen, rather than doing the work itself. Right now, however, work is being done to add an interface to the API that allows direct JavaScript processing as well. Such an interface could be used to prototype audio processing algorithms and create educational demos, something which was already possible using Adobe Flash and Mozilla’s Audio Data API.

The idea is simple: the “base” is an AudioContext interface which manages connections between the different audio nodes. The context contains a Destination Node by default, which represents the output device on your computer. This could be your speakers, your headphones or, perhaps in the future, even a file on your hard drive.
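
As a rough sketch of how this could look in JavaScript, based on the current draft (the AudioContext constructor and the createGainNode factory are draft names that may change, and WebKit builds could expose a prefixed variant instead):

// Create the context; it owns the routing graph and the destination node.
var context = new AudioContext();

// Nodes are wired together with connect(); the destination represents the
// default audio output device of the computer.
var gain = context.createGainNode();
gain.gain.value = 0.5;              // play at half volume
gain.connect(context.destination);  // gain node -> speakers

// Any source node (see below) would then be connected to 'gain'.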

Of course, there have to be audio sources as well. There are various kinds of sources: MediaElementAudioSourceNode for <audio> and <video> tags and AudioBufferSourceNode for other kinds of input, like MP3 files requested via XHR. Other types are yet to be defined, but source nodes like DeviceElementSourceNode aren’t unthinkable, which could be used to process microphone input via the <device> element.

Between audio sources and destinations, there can be other types of nodes to perform various kinds of manipulations. The specification currently defines the following interfaces:

  • AudioGainNode Changes the volume of the audio.
  • AudioPannerNode Positions and spatializes audio in 3D space.
  • BiquadFilterNode Adds lowpass, highpass and other common filters to the audio.
  • ChorusNode Adds a chorus effect to the audio.
  • ConvolverNode Adds effects to audio, such as imitating the sound of a concert hall.
  • DelayNode Applies dynamically adjustable delays to an AudioNode.
  • DynamicsProcessorNode Adds dynamics-shaping (compressing/expanding) effects.
  • WaveShaperNode Adds non-linear waveshaping effects, like distortion.

These nodes form the foundation of many of the features currently available in audio systems, but the specification is still far from finished and more types of nodes may be added. For analysis you could use a RealtimeAnalyserNode, which allows you to analyse the audio passing through it in real time. This could be used, for example, to display the tones output by a stream.

An example: dynamically changing the language of a video

Currently there is no clean way to switch between alternative audio streams for an HTML5 <video> element. The Audio API is ideal for such a purpose. When you keep a number of things in mind, like fragmenting the audio into smaller files to speed up the (initial) loading, it won’t be hard to create a language switcher:

  1. Create an AudioContext,
  2. Get the audio sources from the <video> element using a MediaElementAudioSourceNode,
  3. Decrease the volume of the video using an AudioGainNode,
  4. Get the new audio stream by requesting the MP3 via XHR and putting it in an AudioBufferSourceNode,
  5. Combine the two using the Dynamics Compressor (DynamicsProcessorNode),
  6. Play the audio stream.
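
As a sketch of what those steps might translate to in script, assuming the interface names from the current draft (createMediaElementSource, createGainNode, createBufferSource, noteOn and a factory for the DynamicsProcessorNode are all assumptions that may change before the API ships, the file name is illustrative, and the snippet assumes an XMLHttpRequest that can return its response as an ArrayBuffer):

var context = new AudioContext();
var video = document.querySelector('video');

// Steps 1-3: route the video's own audio through a gain node and turn it down.
var original = context.createMediaElementSource(video);
var originalGain = context.createGainNode();
originalGain.gain.value = 0;
original.connect(originalGain);

// Step 5 (wiring): both streams are combined in the dynamics processor,
// which is connected to the destination. The factory name is assumed.
var processor = context.createDynamicsProcessor();
originalGain.connect(processor);
processor.connect(context.destination);

// Step 4: fetch the alternative language track and put it in a buffer source.
var request = new XMLHttpRequest();
request.open('GET', 'audio-nl.mp3', true);
request.responseType = 'arraybuffer';
request.onload = function () {
  var translation = context.createBufferSource();
  translation.buffer = context.createBuffer(request.response, false);
  translation.connect(processor);

  // Step 6: start the new track and the video together.
  translation.noteOn(0);
  video.play();
};
request.send();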


These same techniques could be used to dynamically control background sounds for clips and create timed effects for games using an arbitrary number of output channels (which could be 2 for stereo, 5.1 for surround or even more!). Of course, more normal use-cases can be thought of as well: a beep when you click on a button, messages when interactive validation in a form fails or a music player featuring cross-over effects.

A number of examples demonstrating the capabilities of the Web Audio API are available as well, but keep in mind that you have to build WebKit yourself to run them. They do show the JavaScript code involved, however!

I’m really interested in the progress of the Audio Incubator Group and can see quite some benefits in being able to synthesize, process, and analyse audio through JavaScript. I’ve signed up to their mailing list and am following the prototypes in Gecko and WebKit. Are you interested too? Consider following @AudioXG on Twitter or subscribing to the public-xg-audio mailing list at the W3C: lots of cool things are yet to be invented!

Thanks and credits to Chris Rogers and Koen ten Berg for their technical input and feedback!

Read more (3 comments) »

Overview of vendor-prefixed CSS properties

Published on in CSS.

For some vendor-specific CSS research I’m doing, I’ve created an overview of vendor-prefixed CSS properties that are currently supported in the four most popular rendering engines. The overview also contains references to the specifications they’re part of (if any), non-prefixed versions in other browsers and some notes on special conditions of a property.

One thing I noticed when creating the overview is the number of non-standard properties in WebKit. Looks like they’ve got some catching up to do in terms of documenting their features. If you have any feedback or corrections, feel free to post a comment or poke me on Twitter!

Go to the overview »

Read more (11 comments) »

Thank you, Microsoft; HTML5 Canvas is a go!

Published on in HTML, Microsoft Internet Explorer, tech, Trident.

Today, exactly 217 days after the first Internet Explorer 9 announcement, Microsoft has released the third Developer Preview of the latest version of their browser. One of the most anticipated, and previously unannounced, features this preview brings is the addition of the HTML5 Canvas element. Defined in section 4.8.11 of the HTML5 specification, already implemented in all other major browsers, and now upcoming for Microsoft’s Internet Explorer 9: 2D Canvas is a go!
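
For those who have never touched the element: drawing on it only takes a context and a few calls. A minimal sketch using the standard 2D context (the sizes and colors are arbitrary):

// Create a canvas and get hold of its 2D rendering context.
var canvas = document.createElement('canvas');
canvas.width = 200;
canvas.height = 100;
document.body.appendChild(canvas);

var context = canvas.getContext('2d');
context.fillStyle = 'rgba(0, 128, 255, 0.8)';
context.fillRect(10, 10, 120, 60);     // a semi-transparent rectangle
context.strokeStyle = '#333';
context.strokeRect(10, 10, 120, 60);   // with a dark outline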

The history of <canvas>

The first signs of the (then still proprietary) element were committed to the WebKit source tree by Richard Williamson on the 25th of May, 2004. Apple’s idea came down to exposing Mac OS X’s Quartz drawing system to JavaScript and HTML in order to make it easier to write graphical widgets for the Apple Dashboard. Consequently, as both products share the rendering engine, the element became available in the Safari browser as well.

In July of that year Dave Hyatt announced the new element on the Surfin’ Safari blog. This immediately stirred up a lot of controversy, of which Eric Meyer’s post is a clear example: “What the bleeding hell?!?” In defense, Hyatt elaborated on Apple’s rationale for including the proprietary features and promised to submit a proposal to the WHATWG lists; however, it never came. Ian Hickson therefore, despite his opinion on how Apple handled the new elements, reverse-engineered a draft based on the available source code.

Mozilla Firefox

A few years earlier, in late October 2001, Joe Hewitt opened bug 102285 in Mozilla’s bug tracker. Sharing the same name and rationale, his proposal was to add a custom painting control to Mozilla’s XML User Interface Language. Interestingly enough, Brendan Eich, creator of the JavaScript language, tore down the idea as something for rendering fanboys. The patches were never used, and inclusion in official builds was unlikely, as Eric Murphy stated in the discussion.

On the first day of April in 2005, Mozilla’s Vladimir Vukicevic uploaded a patch featuring basic canvas functionality, which paved the way for further work in Firefox. While this first implementation only worked on Linux, due to different color formats on Windows and Mac OS X, the release of their “Deer Park” project in late November, known as Firefox 1.5, featured a cross-platform implementation of canvas.

Opera introduced the <canvas> element in mid-2006 with their Opera 9 release, in quite a humble way (can you see it without searching?). This meant that all major browsers, with the exception of Internet Explorer, implemented the element natively. That didn’t render the element unusable, however, as Google’s ExCanvas and Mozilla’s IECanvas projects brought limited support for the element to Microsoft’s browser.

The long and legal path to standardization

The path to proper standardization wasn’t very smooth. This began with the lack of a proper proposal coming from Apple’s side, resulting in the initial specification being based on reverse-engineering work by Ian “Hixie” Hickson, editor of the HTML5 specification. In 2005, Jayant Sai brought up an initial idea for drawing text on a canvas, which later got formalized into a decent proposal by Stefan Haustein.

However, not everything went smoothly. After Mozilla Firefox and Opera had implemented the element, Apple’s Senior Patent Counsel Helene Plotka Workman sent a message to the WHATWG and Ian Hickson stating that Apple believed it held intellectual property rights over the canvas element, and would only consider releasing those rights if the Web Applications draft became a formal draft standard at the W3C.

While the rationale behind Apple’s message was unclear, its timing was interesting: it was sent exactly one week before the W3C re-launched the HTML Working Group. Less than half a year later, in February 2008, the first draft of the HTML5 specification was published as a W3C Working Draft. On the 18th of June that year, Apple disclosed patent 11/144384 for use by the HTML5 specification. The same patent has been disclosed in six other jurisdictions, enabling the WHATWG to continue including <canvas>.

Going 3-dimensional with WebGL

More recently, on December 10 last year, Mozilla’s Arun Ranganathan announced the first draft of the WebGL specification. While you would expect the specification to be hosted by either the WHATWG or the W3C, because it defines a context for the HTML5 Canvas element, WebGL is actually hosted at Khronos. This can be explained by the fact that the specification was originally intended as a simple binding of OpenGL ES 2.0 to JavaScript, and the Khronos consortium already hosted the OpenGL ES specs.

WebGL is the second context that can be used with the <canvas> element. As said before, it is based on the OpenGL ES 2.0 specification and provides a JavaScript interface for 3D graphics. The specification evolved out of an experiment by Mozilla’s Vladimir Vukicevic. He first demoed the possibilities in his “Web Graphics: Canvas, SVG, and more” talk at XTech 2006, and later announced it as the “moz-glweb20” context. Opera published their opera-3d context in late 2007, but decided to add abstraction in order to leave the door open for other implementations based on, for example, Direct3D.
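
Getting hold of that second context works just like the 2D one, although the context name has varied between early builds. A small sketch which tries a few of them; the fallback names here are assumptions and may not match every nightly:

function getWebGLContext(canvas) {
  // "experimental-webgl" is the name the draft converged on; older nightlies
  // shipped vendor-specific names, so try those as a fallback.
  var names = ['experimental-webgl', 'moz-webgl', 'webkit-3d'];
  for (var i = 0; i < names.length; i++) {
    try {
      var gl = canvas.getContext(names[i]);
      if (gl)
        return gl;
    } catch (e) {
      // Some builds throw on unknown context names; just try the next one.
    }
  }
  return null;
}

var gl = getWebGLContext(document.createElement('canvas'));
if (gl) {
  gl.clearColor(0, 0, 0, 1);          // opaque black
  gl.clear(gl.COLOR_BUFFER_BIT);
}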

WebGL is a specification in which all browser vendors, with the exception of Microsoft, participated. This can be clearly seen by the fact that nightly builds of Firefox, Google Chrome and Safari contained implementations of WebGL. While Opera actively participated in discussions, they have yet to release a public build containing the 3D context. Nokia has announced WebGL support in a new firmware version for their Nokia N900 phone.

Of course, Google hasn’t been silent either. In March they announced the ANGLE project, which basically translates OpenGL calls to their DirectX equivalents. Two weeks later, on April Fools’ Day this year, Googler Chris Ramsdale announced a WebGL port of the Quake II game engine.

No really, Thank You, Microsoft!

Even before the Internet Explorer 9 announcement, Microsoft tried to move the Canvas 2D context to its own module. There was no word about <canvas> in the first two Developer Previews, and the company’s position could best be described as vague. In May, Microsoft evangelist Giorgio Sardo publicly stated that he would like the element to be included, but also added that the company was in no way committed to Canvas.

More recently, at the SVG Working Group meeting earlier this month, Internet Explorer’s General Manager Dean Hachamovitch stated that the team wouldn’t be talking about implementing Canvas at that point. However, he added the following: “all your graphics needs will be taken care of, and I’m smiling broadly.” Today they finally confirmed the suspicion a lot of web developers have had for months: the 2D canvas has been included in Internet Explorer 9.

Thank you, Microsoft, you’ve just made us smile broadly as well.

Read more (7 comments) »

Chromium moving towards the GPU: CSS 3D Transforms

Published on in Browser Vendors, Google Chrome, Rendering Engines, tech, WebKit.

Google’s Vangelis Kokkevis, former lead developer of the O3D project, enabled support for the “--enable-accelerated-compositing” flag in Chromium nightlies. By supplying it to the browser, the so-called “fast path” for rendering gets enabled in WebKit. This path is responsible for accelerating a number of performance-sensitive features in the engine, such as CSS 3D Transforms, video decoding and various components of the WebGL canvas. While the software implementation landed back in March, this change allows you to use it as well.

Milestone 6, builds of which frequently get pushed to the dev channel, already mentioned plans for supporting CSS 3D Transforms. These transformations were introduced by Apple about a year ago and can now be found in their own W3C Working Draft. About a month ago the Qt WebKit port announced support for the draft, and the nightly Chromium builds introduced it yesterday. Mozilla’s stance on the specification has yet to be defined, and there is no word from Internet Explorer and Opera either.
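
To get a feel for what the fast path actually accelerates, here is a small sketch that applies a 3D transform from script, using the WebKit-prefixed properties current builds expose (the element and the exact values are illustrative):

var card = document.getElementById('card');   // any block-level element

// Perspective on the parent determines how "deep" the 3D effect looks.
card.parentNode.style.webkitPerspective = '800px';

// Rotate the element around its vertical axis and push it back a bit; with
// accelerated compositing enabled this work is handed off to the GPU.
card.style.webkitTransition = '-webkit-transform 0.5s ease-in-out';
card.style.webkitTransform = 'rotateY(35deg) translateZ(-100px)';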

Why would I want to use 3D in my webpages?

You don’t. No really, I truthfully hope we’re not going to see entire websites created in 3D for a while to come. A large share of today’s websites is horrible in terms of usability; perspectives and animations really aren’t going to improve that. The real use-cases can be found in examples such as Charles Ying’s Snow Stack: eye-candy and graphics are becoming more important in applications, and going 3D is a logical next step.

WebKit’s Poster Circle example and the “Transform this article” used in Chromium.

Is the implementation complete?

No, far from it. Currently the only supported platform is Windows, using OpenGL drivers. In the following weeks broader Windows support will be added by including Google’s ANGLE project, which was announced in March. Simply put, ANGLE is a bridge between OpenGL and DirectX, enabling a much larger share of Windows users to benefit from GPU acceleration. Support for Linux and Mac OS X is on its way, but isn’t stable enough yet to be included. Finally, when you enable accelerated compositing, video and WebGL will be disabled.

As for the current implementation, it’s quite rough. When putting a perspective on the <body> tag the renderer crashes; on any other element my scrollbar turns blue, and artifacts aren’t rare either. Still, the results look great and smooth, and barely take any CPU time. Compositing gets triggered by various animation effects, such as transparency and transforms, by usage of 3D Transforms, and by iframes under certain conditions. Some of the most important things still to come include Safe Browsing, which will make sure that effects such as 3D Transforms and WebGL do not lock up your browser, fast paths for accelerated canvas and video elements, and support for Linux and Mac OS X.

Why doesn’t Chrome render a full page using the GPU?

Google’s position on full-page GPU rendering, of the kind Internet Explorer 9 and Mozilla Firefox are capable of, isn’t entirely clear yet. Keep in mind that GPU rendering isn’t everything: Opera comes incredibly close to Microsoft’s performance in the Flying Images demonstration without any GPU acceleration at all, and a large part of Chrome’s performance on that page can be attributed to its high-quality image scaling algorithms.

As Pete LePage, an Internet Explorer Program Manager, already noted: browser speeds aren’t all about JavaScript. The same can be said about hardware acceleration: while it can provide significant performance improvements, other components such as the DOM, styling and images need to be available before they can be rendered in the first place.

Read more (9 comments) »

Thoughts on the HTML5 hidden attribute

Published on in HTML, Standards.

Last Friday Maciej Stachowiak submitted a patch for supporting the HTML5 hidden attribute in WebKit. While supporting new features from the HTML5 draft is usually a good thing, I have quite some doubts about the added value of this new attribute. Shelly Powers submitted an issue to the HTML Working Group with quite an extensive change proposal, which has led to a decent amount of discussion on the subject.

Something which undoubtedly has been affecting the discussion is the set of other issues opened by Powers, which propose the removal of <progress>, <meter>, <aside>, <figure> and various attributes. While her rationales are often decent and complete, it’s more or less becoming a vendetta: getting anything removed from HTML5 which does not directly improve semantics or accessibility. While I disagree with removing tags like <meter>, I do believe removing the hidden attribute would be a good idea.

What exactly is the hidden attribute?

It’s one of the clearest and most straightforward definitions of an attribute available. Defined in section 8.1 of the HTML5 specification, any element which has the attribute should be considered irrelevant and thus shouldn’t be rendered by the browser. However, the specification also defines that the attribute should not be used for content that could legitimately be shown in another presentation, like tabs or collapsible menus.

Using the attribute is really simple; take the following snippet as an example:

<section hidden>
  <h1>The HTML5 hidden-attribute</h1>
  <p>
    This text should not be displayed by a browser.
  </p>
</section>

While the example could contain any element, as the hidden attribute can be applied to all HTML elements (including <html>), I think it’s quite clear. The section, including all its contents, should not be rendered by the browser, even though it will be available in the DOM and to search engines.
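
That last point is easy to verify from script: even though nothing is rendered, the content is still part of the document tree. A minimal sketch, assuming the snippet above is present in the page:

// The section isn't rendered, but it is still in the DOM...
var section = document.querySelector('section[hidden]');
console.log(section.querySelector('h1').textContent);
// logs "The HTML5 hidden-attribute"

// ...and the attribute can be toggled like any other boolean attribute.
section.removeAttribute('hidden');   // the section becomes visible again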

The primary problems I see with the current definition of the hidden attribute are the following:

  • Its purpose is going to be misunderstood — it will be abused.
    When I’m thinking about use-cases for the hidden attribute, one of the first things that comes to mind is using it for tabs: easily hide any tab which isn’t active at that moment, without having to use a CSS class like “hidden-tab”. This is, however, wrong. Tabs are a form of presentation, and presentation has specifically been excluded by the specification as a use-case.
  • There are two identical alternatives available, one of which is widely used already.
    The first alternative is quite clear: use a CSS class or inline style which sets the “display” property to none. The second alternative would be using the “aria-hidden” attribute, which would be identical after adding a CSS attribute selector hiding the element. Laura L. Carlson of the Accessibility Taskforce has already replied to the proposal stating that the accessibility gains are negligible and that, given “aria-hidden” takes precedence over the normal display property, using both could create a contradiction. Browsers could automatically assign the “aria-hidden” attribute/role to hidden elements if accessibility were really an issue here.
  • It’s a clear layering violation.
    Now that a fair number of presentational elements have been removed from HTML5, such as <u>, <font> and <center>, layering problems are becoming more visible. Layering means using HTML strictly for the content, CSS for the presentation and any JavaScript code for the behavior of the page, and the hidden attribute violates that. Considering all the attribute does is set the display property to none, it is barely different from tags like <u> and <i>.

The added semantic value of @hidden

Of course, since the attribute has been accepted as a feature of HTML5, it isn’t all bad. The attribute was initially named “irrelevant” but was renamed to “hidden”, mostly because the former is unclear. Having native attributes which can be used to address accessibility is one of the more important use-cases for the attribute. Web developers are much more likely to include a “hidden” attribute than they are to type aria-hidden="true". In fact, most developers won’t even know what ARIA is in the first place.

On top of that, it could be beneficial for search engines. Even today the more popular spiders, such as GoogleBot and Yahoo Slurp, do not download or parse the CSS code used on a page. Consequently, all hidden content on a page will be included in the search results, which leads to less accurate results: the text you’re searching for is not by definition visible on the page itself, as it might very well be (legitimately) hidden using JavaScript or CSS.

Keep the attribute in the spec, or remove it?

I personally support Shelly’s proposal to remove the hidden attribute. Even though there is a semantic improvement, mostly for applications which are not capable of parsing CSS, a possible improvement in terms of accessibility and the ease of simply hiding an element, I believe the attribute is too prone to abuse, imposes a layering violation, already has two valid and widely used alternatives and does not add sufficient value to be included in the specification.

Read more (7 comments) »

The graphical side of Microsoft Internet Explorer 9

Published on in Browser Vendors, Microsoft Internet Explorer, tech, Trident.

One of the subjects Microsoft is giving a lot of attention to in Internet Explorer 9 is graphics: extensive SVG implementation plans, various CSS specifications related to graphics, and GPU acceleration available by default. This isn’t entirely surprising: since Microsoft owns the operating system and the graphics libraries which have to be used for the acceleration, and only has to support a limited number of configurations, implementation is a lot easier than creating cross-platform implementations based on third-party software.

Nevertheless, they are doing quite a good job at implementing the specifications, and certainly with a keen eye for the finer details. A good example of this can be found in their rendering of borders. One of the new CSS properties that Internet Explorer 9 introduces is border-radius, as defined in the Backgrounds and Borders specification. While the specification still isn’t entirely clear on how to render mixed border-style connections, Microsoft’s latest implementation looks smooth and adapts well.

Scalable Vector Graphics

SVG is becoming a widely implemented specification for scalable graphics. With a file format based on XML, confirmed support for in-line HTML rendering in three major browsers, a mature specification and lots of support from professional graphic editing software such as Adobe Illustrator, it’s destined to strongly gain in popularity in the near future.

Internet Explorer 9 will support the entire SVG 1.1 specification, with the exception of Fonts, Filters and SMIL. Microsoft believes that web fonts have a decent future as it is, mainly because the W3C Fonts group is making vast progress in standardizing a common format. Filters will not be included because Microsoft isn’t convinced they will be used often: Internet Explorer has supported various filters since version 4, yet barely anyone has used them. Finally, SMIL will be omitted because the company believes there would be too many different ways of handling animation. A fair reason, as CSS 3 introduces two additional ways of adding animation to your webpage.

Per-pixel rendering using <canvas>

While the Internet Explorer team is making a lot of progress with the development of Internet Explorer 9, not only on the technical side but also through a development process which is more open than was the case with previous releases, their position on the HTML5 canvas element is simply vague. In early May, Giorgio Sardo, an Internet Explorer evangelist, already indicated that he would like to have support for the element in IE9, but the company has neither confirmed nor denied inclusion of the element in the new browser. Just yesterday, however, Dean Hachamovitch hinted that Microsoft still has some tricks up its sleeve: “We’re not talking at this point about whether we’re supporting canvas or not, but I’m smiling broadly. All your graphics needs will be taken care of, and I’m smiling broadly.”

My guess, something I have been saying ever since the release of the first Internet Explorer 9 Platform Preview, is that they certainly will be implementing canvas. I mostly base this on their attention to the graphical aspects of IE9, as well as on the comments made by various Microsoft employees. As Mozilla’s Brendan Eich already said about the subject: “Canvas is pretty small. It’s like your postscript level to 2D graphics.” I’m assuming that Microsoft knows their own APIs by heart, so implementing the 2D canvas standard shouldn’t be a problem at all. In fact, utilizing the power of the Direct3D API, I’d say even WebGL is a real possibility. Microsoft is smiling broadly; let’s hope they allow us to do the same.

Read more (2 comments) »

Going full-frame with the Nikon D700

Published on in Photography.

In my last blog post I talked about my near-future camera plans, specifically about the plan to upgrade my camera. Right now I own the Nikon D60: it’s a nice entry-level camera with a 10 megapixel CCD sensor, pretty good image quality with the more professional lenses and cheap SD memory cards for storage. While it’s more than sufficient for most photography, there were a number of things I didn’t like about the body. It easily takes three or four seconds before a picture appears on the LCD display, and focusing isn’t intuitive at all.

Because of that, I had been planning to upgrade to a D300s since late last year. I already had some experience using my dad’s D300, and considering the D300s is newer and cheaper, it wasn’t really a hard decision. Other models, like the cheaper D90, older models such as the D2 and the D200, and the full-frame D700, also crossed my mind, but since two of my three lenses were made for cropped DX cameras they weren’t interesting. On top of that, the D300(s) simply is an excellent camera.

Read more (no comments) »

Trying out my new Nikon 24-70mm f/2.8 lens

Published on in Photography.

Photography is one of my hobbies. Right now I own a Nikon D60 camera with a bunch of lenses, including an 18-200mm f/3.5-5.6 all-round lens which is mounted on the camera most of the time. While I’m quite happy with the quality and possibilities of the combination, it wasn’t entirely what I was searching for. This is mostly caused by the Nikon D300 camera my dad owns: it focuses much more intuitively, responds quicker than my D60 does and doesn’t fill pictures with noise when I use any ISO higher than 400. This is why I decided to upgrade to a D300s body, together with a new, sharp and fast lens, the AF-S Nikkor 24-70mm f/2.8G ED.

A Prunus cerasifera tree in the back garden and some remaining Easter decorations.

Read more (3 comments) »