Is serving high dpi images better for paint performance?

Whilst looking at paint times of various elements recently, I noticed that images were sometimes causing long paint times when in view. After digging around, I found it was happening on certain sites which aren’t optimized for high-DPI screens like the one in my retina MacBook. I then noticed the paint times were reduced when swapping the image out for a 2x one, so that the image is displayed 1:1 and there is no upscaling.

I experimented with an isolated example of an image forced to a 600×300 virtual pixel size through CSS. I then used a 600×300 image and a 1200×600 image (so, 2x), and measured the paint time using Chrome Dev Tools.

600×300 image displayed at 1200×600 physical pixels – 10.8ms paint time

1200×600 image displayed at 1200×600 physical pixels – 4.3ms paint time

It’s clear to see that the scaling causes a 6ms to 16ms overhead on paint, which is unfortunate given that for 60fps we need the entire viewport to paint within 16ms, and I’m sure most sites have more than a single image for the browser to render.

I think what this shows, though, is that any image scaling is going to cause paint overhead. Unfortunately, many developers, including ourselves at Condé Nast, are using percentage-based widths for responsive designs, and are therefore downscaling for most users in lieu of a better responsive image solution. A further test (displaying an 1800×1200 image at 600×300 virtual pixels, i.e. 1200×600 physical pixels) shows that downscaling can be even more costly. I would like to take the time to put together a table with a more comprehensive set of test results at different sizes and pixel densities.

1800×1200 image displayed at 1200×600 physical pixels – 23.5ms paint time
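
For reference, a quick way to spot this kind of scaling on a live page is to compare each image’s natural size against the physical pixels it is being painted into. Here is a rough console sketch using standard DOM properties (naturalWidth, clientWidth and devicePixelRatio):

//log any image whose natural size doesn't match the physical pixels it's painted into
var dpr = window.devicePixelRatio || 1;
var imgs = document.getElementsByTagName('img');
for (var i = 0; i < imgs.length; i++) {
  var img = imgs[i];
  var physicalWidth = Math.round(img.clientWidth * dpr);
  var physicalHeight = Math.round(img.clientHeight * dpr);
  if (img.naturalWidth !== physicalWidth || img.naturalHeight !== physicalHeight) {
    console.log(img.src + ': ' + img.naturalWidth + 'x' + img.naturalHeight +
      ' painted at ' + physicalWidth + 'x' + physicalHeight);
  }
}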

Responsive Images – Thoughts before Edge Conf, and the Element Size Problem

Ahead of my contribution to the Responsive Images panel at Edge Conference in NY next week, I wanted to get down my thoughts on the topic, if only to see whether they differ after the conference. Well, my thoughts, along with those from many discussions and real-world implementations with my team at Condé Nast Digital UK.

Some of the most popular currently proposed solutions are:

  1. srcset extensions to the img element
  2. <picture> element, including multiple <source> definition elements
  3. A compromise between 1. and 2.
  4. Client Hints

All of these proposals seem to agree on one thing: that developers should be able to define different image sources based on the size of the viewport, or the pixel density of the screen. Pixel density, because we don’t want to upscale images by displaying them at their replaced element’s virtual pixel size; viewport size, because we want art direction over different-sized crops of an image.

This isn’t enough

I’d go so far as to say that this may be a destructive place to start. In the case of the <picture> element, developers would actually be defining rules for which sources to use at which screen widths, independent of the stylesheets for the page. This is problematic because the disconnect flies in the face of semantic separation, and means we will have a difficult time defining one place where layout is controlled. Consider the scenario where the image has one or both of width and height set to ‘auto’ (as they both are by default). By defining a different image source, with a different pixel size, at different viewport sizes, the image displayed in the replaced element will also change size to match. I recognise that this is *already* a problem with the <img> element as it is today (and some may say a welcome feature), even without multiple source/media-query combinations, but in my opinion this is a different level of problem because of the explicit definition of media queries to define rules within the HTML.

The srcset solution is better in that it’s still adhering to the current principles of <img> and simply providing a way for us to give (what the browser hopes to be) the same image at a larger size.

Responding to Element Size

But can this be taken one step further? Through CSS, we can alter the user experience of our pages – layout, appearance, transformation, animation, etc. All this can change the positioning and size of our <img> elements. To reiterate – it’s in the CSS that we define the rules that govern where and how big our images are. The browser should choose an appropriate image source depending on that location. In order to do that, we should give the browser an appropriate list of image sources we have for that element, and explicitly define their width and height. Something like:

<img alt="Barak Obama stands to deliver his speech at the White House">
    <source width="300" height="150" src="" />
    <source width="600" height="300" src="" />
    <source width="1920" height="1280" src="" />
</img>

This still adheres to HTML’s purpose (which srcset does too) of merely defining data in HTML. This has to be the solution to responsive images; in fact, it is similar to the solution we use at Condé Nast UK for vogue.co.uk, GQ, Wired and others, albeit a JavaScript one (codenamed ‘srcTwizzle.js’ at version 1). Consider the following example, where we have a page at 530px and 730px viewport width:

730px viewport width

530px viewport width

The CSS is defined to stop floating the list of ‘latest’ articles by 530px (they are on the right at 730px) so that it no longer forms a second column, and the featured images are set to fill the space of their containers (100% width). So, at a smaller screen size, we actually have a bigger image. Not a problem for the <picture> element, one could say: we just define a smaller image for viewports of 730px width than we do for screens of 530px width…

…But consider the scenario that we offer users the ability to remove the ‘latest articles’ list. And that we have the following css rules:

#FeaturedArticles { width:100%; }
#LatestArticles, #LatestArticles ~ #FeaturedArticles { width:50%; }

When the #LatestArticles list is removed from the DOM, the #FeaturedArticles list expands to fill the full width. Now we have a problem using the <picture> element, where the image will be upscaled. However, if the browser makes the choice based on the element size, then since we’ve given a width rule to the image element, the most appropriate image source for the size of the element will be chosen.
In my experience, we achieve a responsive design by using percentage-based sizing and some media query adjustments. It works well. We should keep doing that, without adding more rules to make specific changes at specific viewport sizes.
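
As a rough illustration of that idea, here is a minimal sketch of the selection logic in JavaScript. This is not the actual srcTwizzle.js code, and the data-sources attribute and function names are purely illustrative: the point is that the choice is made from the size the CSS actually gave the element, multiplied by the pixel density, rather than from breakpoints duplicated in the markup.

//illustrative only: pick the smallest listed source that covers the element's physical size
function chooseSource(img) {
  var targetWidth = img.clientWidth * (window.devicePixelRatio || 1);

  //assume each candidate is described in a data attribute, e.g.
  //<img data-sources='[{"width":300,"src":"small.jpg"},{"width":600,"src":"medium.jpg"}]'>
  var sources = JSON.parse(img.getAttribute('data-sources') || '[]')
    .sort(function(a, b) { return a.width - b.width; });

  for (var i = 0; i < sources.length; i++) {
    if (sources[i].width >= targetWidth) { return sources[i].src; }
  }
  //nothing big enough, so fall back to the largest available
  return sources.length ? sources[sources.length - 1].src : img.getAttribute('src');
}

//run on load and again whenever layout may have changed (resize, content removed, etc.)
function updateImages() {
  var imgs = document.querySelectorAll('img[data-sources]');
  for (var i = 0; i < imgs.length; i++) {
    var src = chooseSource(imgs[i]);
    if (src && imgs[i].getAttribute('src') !== src) { imgs[i].src = src; }
  }
}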

Problems

  • You have to know the width and height of each image at HTML generation time.
    Though for the srcset and <picture> solutions you kind of need this too: you’re just assuming what width and height the image will be at given screen sizes, and making assumptions about the location / margin / padding, which is worse.
  • Each source has to be the same aspect ratio.
    I think we could get around this by defining a spec for how differences in aspect ratio should behave: an optional aspect-ratio-group element, for example.
  • Images will be scaled.
    It’s true that with the <picture> proposal, we could define different images for different screen sizes, and each image would be displayed at its actual pixel size. Our page layout would then adjust accordingly. However, if we are to encourage percentage-based sizing more and more, then unless we’re defining fixed widths at different breakpoints, scaling is here to stay.

Isn’t this the same as Element Queries?

Element Queries aim to solve much the same issue. The principle guiding the call for Element Queries is that our layouts should be fluid by design. When a layout is fluid, it’s the viewport size which affects the box size and position of an element, along with all the CSS rules applied to all the elements in the document. And what a lot of developers and designers really want is to alter the behaviour of an element (or the content of an element) when the browser has given it a certain size because of those rules. The reason they want this is that it becomes cumbersome and messy to keep track of everything changing when using mostly percentage-based sizing, and so designers and developers are pushed towards a strategy of defining various breakpoint widths, and consolidating many fixed-width rules into those breakpoints instead. In our team, we still like to think about each element’s purpose individually and apply responsive behaviour to it in a somewhat componentised fashion, whilst considering the layout as a whole. Helpers like SASS get us a long way here, but there are still a lot of real-world scenarios where having the browser make decisions based on viewport size as a whole forces us into JavaScript.

Comments welcome, especially to tell me why this is not the correct approach – I’ll probably have a response for you!

MonoTouch.Dialog.DialogViewController and UINavigationController: missing back button

Ran into an issue where I was pushing several DialogViewControllers into a NavigationController. The DialogViewController instances would asynchronously fetch data, and add the dialog elements when the data was retrieved. Whilst this was working quite well, at any given point in the navigation the back button would only appear on the topmost view (the current view). Navigate back to any previous view and the back button would not be on the NavigationController.

I knew that the NavigationController is reliant upon the title of each UIView, so I made sure I was setting that, but it didn’t help.

The solution was to make sure not to reinstantiate the Root element of the DialogViewController. Doing so must mess with the Title of the UIView, and even if the title content stays the same, it looks like it causes the NavigationController to forget what it’s called and prevents a back button from showing. Instead, instantiate the RootElement when the class is created, and add items to that object when the data has been retrieved.

Bad:

//data has been retrieved, replace dialog contents
this.Root = new RootElement(myTitle)
{
    new Section("Folders") {
        Elements = myData.Select(obj => (Element)new StringElement(obj.Name)).ToList()
    }
};

Good:

//data has been retrieved, replace dialog contents
this.Root.Clear();
this.Root.Add(new Section[]
{
    new Section("Folders") {
        Elements = myData.Select(obj => (Element)new StringElement(obj.Name)).ToList()
    }
});

Measuring Viewport size with Google Analytics

Google Analytics recently updated their API so that it’s possible to track non-interaction events without reducing the bounce rate to 0% when automatically tracking events on many or every pageview. They did this by including a boolean parameter on the _trackEvent method call which, when set to True, indicates that the event was not based on user-interaction.

Now we can send extra information to Google Analytics and not have it interpret that information as user interaction – and one piece of information we’ve always wanted to track in Google Analytics is viewport size. It’s great that GA already tracks screen resolution, but that doesn’t tell us what size the viewable area within the browser window actually is, so let’s track the initial size, and then any time the user resizes the window:

//send viewport size to GA on initial load and when resized, as non-interaction events
$(function() {

  //track initial viewport dimensions
  var viewportWidth = $(window).width();
  _gaq.push(['_trackEvent', 'Viewport Dimensions', 'Viewport Dimensions Initial', viewportWidth + 'x' + $(window).height(), viewportWidth, true]);

  //track viewport dimensions being changed by resize (throttled)
  var gaResizeCompleteHl;
  $(window).resize(function() {
    clearTimeout(gaResizeCompleteHl);
    gaResizeCompleteHl = setTimeout(function() {
      var viewportWidth = $(window).width();
      _gaq.push(['_trackEvent', 'Viewport Dimensions', 'Viewport Dimensions Resized', viewportWidth + 'x' + $(window).height(), viewportWidth, true]);
    }, 500);
  });
});

(Paste this after your Google Analytics code – and sorry for the laziness, my code requires jQuery.) Since events allow a numeric value to go with the action, I chose to send the width, which GA can use for powerful filtering during segmentation.
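
If you’d rather not depend on jQuery, here is a roughly equivalent sketch in plain JavaScript (document.documentElement.clientWidth is the closest native match to $(window).width(), and the standard async _gaq snippet is assumed to already be on the page):

//same idea without jQuery: report viewport size on load and after resizes (throttled)
(function() {
  function reportViewport(action) {
    var w = document.documentElement.clientWidth;
    var h = document.documentElement.clientHeight;
    _gaq.push(['_trackEvent', 'Viewport Dimensions', action, w + 'x' + h, w, true]);
  }

  var resizeTimer;
  window.addEventListener('load', function() { reportViewport('Viewport Dimensions Initial'); });
  window.addEventListener('resize', function() {
    clearTimeout(resizeTimer);
    resizeTimer = setTimeout(function() { reportViewport('Viewport Dimensions Resized'); }, 500);
  });
})();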

One of the biggest but most exciting challenges at the moment for web development is producing sites which respond beautifully to different screens, different methods of interaction and different amounts and types of data. In order to build those sites to the users’ needs, we need as much data as possible about how people are using our products. Using Google Analytics and the above, we can segment the existing GA data and run queries using the new viewport information to answer questions such as:

  • What percentage of people run their browsers at full screen width?
  • How many tablet (or phone) users are browsing in portrait versus landscape?
  • When or why do people resize their browsers?

On a similar note, when thinking about designing for larger screens, whilst I’m completely driven that we should create experiences which make use of the whole screen, I have mixed feelings towards Mac OS. First, I panic because of the pre-Lion versions’ tendency to run browsers at very reduced widths compared to the screen width. Then I feel excited about Lion’s full-screen mode, and how much users embrace and love it. Seeing that, and the browsing experience on Windows 8, makes the direction of travel clear, and our task is to make those full-screen experiences as usable and beautiful as possible.

You can’t put your privacy concerns on Facebook anymore

After facebook’s announcements last week around the new version of OpenGraph (version 3), many have been scared of an increased invasion of privacy – of facebook recording and displaying, for the world to see, your activity as you browse around the web. The fear was sparked by certain media outlets releasing features where, as you browse their websites, each action is relayed to your facebook timeline.

Facebook OpenGraph Timeline music items and news items

There are two myths which seem to be perpetuating this hysteria:

Myth #1: Facebook is implementing this functionality

Actually, Facebook have only created an API for apps and sites to post actions to your history, and have done so in a very controlled way. The actions must contain an approved verb (e.g. read / listened-to / played) and an approved type (article / TV show / song), and cannot be anything outside of those bounds. On top of that, the new actions API puts a sufficiently descriptive permission model in front of users of each app. It is the apps/sites you should direct your frustration at if you don’t want them to send every interaction you make to facebook. Indeed, I think these sites should offer more control – a compromise between fully automatic sending of actions, and a button similar to the like button.
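
To make that concrete, publishing one of these actions from a site’s JavaScript looks roughly like this with the Facebook JS SDK. The ‘news.reads’ action and ‘article’ object here are illustrative – an app can only publish verb/object pairs Facebook has approved for it, and only for users who have granted it the relevant permission:

//illustrative sketch: publish a "read" action for the article the user is viewing
FB.api('/me/news.reads', 'post', { article: 'http://example.com/some-article' }, function(response) {
  if (!response || response.error) {
    //the user hasn't granted permission, or the action/object isn't approved for this app
    console.log('Could not publish action', response && response.error);
  } else {
    console.log('Published action with id ' + response.id);
  }
});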

Facebook OpenGraph action creation

In fact, facebook could have done so much worse. Due to the success of the facebook like button, facebook could have had the ability to record literally every article you visit on every site which implements the like button (lots) without asking for any permission at all, or changing any code on those sites. They had the power to know, without permission, everywhere a logged-in facebook user visits, but they chose to make this a push mechanism for content publishers. If you don’t like the fact that once you connect with Yahoo News or Spotify, every article you read or every song you listen to will be recorded forever on your Timeline, then you have to blame Yahoo or Spotify. Genius.

Myth #2: Anything I do on the web now will be recorded on my Timeline

Again, this depends on how media publishers intend to use facebook’s new API. Sometimes it will make sense to post actions without user interaction each time (listening to a song for example), and sometimes it won’t (imdb doesn’t know if you actually watched a movie, until you press their new watch button [this doesn't exist, I made it up]), but it’s all in the control of the publisher.

This is the evolution of the like button, the old profile Apps, and the facebook beacon combined

Facebook apps – the ones which created the mess of boxes and videos on people’s profiles, and which quickly got removed a year or two ago – allowed more identity to creep through, but in an extremely uncontrolled manner (developers could post pretty much any HTML in any layout they wanted). Now, developers don’t write any HTML on your profile – the actions your app sends to facebook are sent as data in a very specific manner, and how that is displayed and used on people’s Timelines is presented to you as four facebook-controlled options.

Facebook OpenGraph timeline display configuration

Like button – as I mentioned already, the like button was a precursor to all this: one simple button which likes a URL on the web. The like button could be reimplemented within the new OpenGraph API, but likely won’t be yet, because you don’t need to give a website permission to have a like button which knows about you on it.

Facebook beacon – actually, facebook have been trying to achieve what they’ve done this week for a while, starting with facebook beacon. Through that experience, it’s obvious Facebook learnt a lot about the direction they would need to take in order to reach their destination of becoming the central point for digital identity, sharing and discovery – and that direction is one where they share the responsibility with content providers.

Like it or not, this is the direction the web is going if you want to take part in its social features and, personally, I love it.

It’s definitely going to be an interesting time as we see how different content publishers embrace the new OpenGraph features – will wordpress.com implement an integration for their x hosted blogs? What about content publishers who have tried to do similar social/content mixing before – will iTunes give up on Ping and embrace facebook, or Microsoft connect Zune to facebook (their music and movie streaming is good but their social never took off)?

If Timeline is something to embrace as a cloud storage of my life events, and it’s to be a complete picture – then I’d like to see all the above, plus some interesting ways to push content to timeline from other mediums – TV shows and cinema, sports events (RunKeeper/MapMyRun), and trips for example, but I’d also like to see great tools for me to curate that information and powerful ways for facebook to amalgamate all my data, analyse it, and show interesting angles on my Timeline.

Cloud Terminal: Remote Desktop to EC2 instances

By nature, when working with connectable resources in the cloud, the number and IP location of those resources can change at any point. A pain point is often managing the addresses with which to connect to these instances, so I spent a short amount of time doing something about it by creating a program in WPF which automatically retrieves the list of instances for an Amazon EC2 account and allows connections over Remote Desktop Protocol (RDP).

CloudTerminal v0.2

After quickly realising that there are many additional features which would also be useful in this area, I open sourced the project at http://cloudterminal.codeplex.com. Special thanks to James Tenniswood for already contributing a beautiful logo.

To prevent over-engineering the tool and never coming up with a version I can use myself, let alone release, I decided to apply some old skool agile methodology to the project and prioritise the features by how essential they are for each release. This roadmap is then published on CodePlex. Development of the features in 0.2 is complete and a working copy can be installed via ClickOnce.

0.1
- Retrieve and display list of connections from EC2
- Connect and disconnect via RDP to any instance in list

0.2
- Show instance CPU history
- Store account keys in local configuration
- Optimise UX

0.3
- Allow multiple AWS accounts
- SSH connectivity, including private key storage
- Overlay instance details / commands on list select
- Add grid of instances, shown when no connections are active

0.4
- Allow Azure accounts with more appropriate list view of instances/services

0.5
- Test TCP connectivity before connecting. Offer option to open relevant remote cloud firewall port to client IP address.
- Allow instance / image / service specific credential saving for connections.