Building AppyLinks with Xamarin.Forms


This was a weekend project as my entry for the Xamarin Forms Evolve conference competition. …and I won first prize!


The new Xamarin.Forms release is pretty exciting. I spent a day exploring it and decided to build an app that I’ve personally seen a need for – a quick way to get a list of links I may be working on onto multiple devices. Day to day, my team and I have to preview a lot of different (and long) URLs on devices, both when we’re building and when we’re presenting, so an app that can display a list of those links across devices would be quite handy! I decided to use a user’s GitHub gist as the data store, since it allows for easy and continuous editing, and therefore to authenticate the app using GitHub.

In the past, I’ve tried to use Xamarin Studio to create basic tools like this for both iOS and Android. Aside from some professionally-released apps, I’ve always stopped before having something coherent, because I just could not get enough accomplished in a few hours – there was too much overhead, and too much time spent getting to know the intricacies of each platform. The fact that I did get this done in a day, and that it did (for the most part!) ‘just work’ on both iOS and Android, is proof that the new Xamarin.Forms is going to be incredibly useful. And this is only their first release…

AppyLinks lets you authenticate with GitHub…

Logging in via GitHub

from where it will grab a gist named ‘AppyLinks’ and parse the basic list of links it expects…

List of links retrieved from a GitHub gist called ‘AppyLinks’

…Selecting any of the links will open a view with a browser navigated to the destination, and allow you to navigate back to the list.

Opening a link in a WebView

Source code is public and available on GitHub.

The power of the new Xamarin.Forms

A typical view, which will render with native controls on iOS, Android and Windows Phone:

     <StackLayout Orientation="Vertical">
         <ActivityIndicator x:Name="listFetchingActivity" IsRunning="false" IsVisible="false" />
         <ListView x:Name="urlView">
             <ListView.ItemTemplate>
                 <DataTemplate><TextCell Text="{Binding Title}" /></DataTemplate>
             </ListView.ItemTemplate>
         </ListView>
     </StackLayout>

An interface to device-specific local-storage…

public interface IUserSettingsStore
{
    string GithubAuthorizationToken { get; set; }
}

…the implementation for android…

public class UserSettingsStore : IUserSettingsStore
{
    const string SETTINGSKEY_GITHUBAUTHTOKEN = "GithubAuthToken";

    //static holder for the platform preferences wrapper
    static AndroidSettingsStore preferencesInstance = null;

    public static void Init (Context settingsContext)
    {
        //initialise preference store from the app context
        preferencesInstance = new AndroidSettingsStore (
            PreferenceManager.GetDefaultSharedPreferences (settingsContext));
    }

    //interface implementation
    public string GithubAuthorizationToken {
        get {
            return preferencesInstance.androidPreferences.GetString (SETTINGSKEY_GITHUBAUTHTOKEN, null);
        }
        set {
            var editor = preferencesInstance.androidPreferences.Edit ();
            editor.PutString (SETTINGSKEY_GITHUBAUTHTOKEN, value);
            editor.Apply ();
        }
    }

    public UserSettingsStore ()
    {
    }
}

…and for iOS…

public class UserSettingsStore : IUserSettingsStore
{
    const string SETTINGSKEY_GITHUBAUTHTOKEN = "GithubAuthToken";

    #region IUserSettingsStore implementation
    public string GithubAuthorizationToken {
        get {
            return NSUserDefaults.StandardUserDefaults.StringForKey (SETTINGSKEY_GITHUBAUTHTOKEN);
        }
        set {
            NSUserDefaults.StandardUserDefaults.SetString (value, SETTINGSKEY_GITHUBAUTHTOKEN);
        }
    }
    #endregion

    public UserSettingsStore () {}
}

Great things about Xamarin.Forms

  • You can make custom renderer implementations for each platform – for whole controls, or just to customise a small part of functionality – see the custom NavigationPageRenderer for iOS in the AppyLinks source code.
  • You can provide interfaces and make an implementation on each platform, when you need platform-specific behaviour.
  • Each platform still has a separate project which can be completely customised or configured – Xamarin.Forms doesn’t enforce some kind of purist approach.

Challenges with v1

  • No designer, even though there’s both XAML support here and the nice new iOS designer from Xamarin. Because of the spaghetti C# view code, I daren’t add any (even basic) styling to these views with any speed, and that’s a shame (especially on Android).
  • Xamarin Studio is having problems with IntelliSense and syntax highlighting for objects defined in XAML.
  • It’s an early release, so documentation is pretty bare, and also quite ambiguous at times.
  • There are currently some issues with System.Net.Http (probably affecting all Mono users – there’s a bug report from 2013 I commented on here).
  • I couldn’t get the XAML <FileImageSource /> element working, and had to fall back to the Image.Source = ImageSource.FromFile pattern, which worked fine once the image file was imported as a resource into each platform-specific project.

With this new ability mixed with Portable Class Libraries, you really can get to the stage where projects are 95% shared code, and the 5% platform-specific code is finessing the details, rather than re-implementing the same logic!

Awesome logo designed by the lovely Kat Windley!

AppyLinks logo

AppyLinks Icon

Is serving high dpi images better for paint performance?

Whilst looking at paint times of various elements recently, I noticed that sometimes images are causing some long paint times when in view. After digging around I found that it was on certain sites which aren’t optimized for high dpi screens like the one in my retina MacBook. I then noticed the paint times are reduced when swapping the image out for a 2x one, so that the image is displayed 1:1 and there is no upscaling.

I experimented with an isolated example of an image forced to a 600×300 virtual pixel size through CSS. I then used a 600×300 image and a 1200×600 image (so, 2x), and measured the paint time using Chrome Dev Tools.
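The isolated test can be reconstructed as a minimal page like the one below – filenames here are invented for illustration; swap the 600×300 file for the 1200×600 one between runs and compare paint times in Chrome Dev Tools:

```html
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* force the replaced element to 600x300 virtual pixels,
         i.e. 1200x600 physical pixels on a 2x (retina) screen */
      img { width: 600px; height: 300px; }
    </style>
  </head>
  <body>
    <!-- replace with image-1200x600.jpg for the 1:1 (no upscaling) run -->
    <img src="image-600x300.jpg" alt="test image">
  </body>
</html>
```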

600×300 image displayed at 1200×600 physical pixels – 10.8ms paint time

1200×600 image displayed at 1200×600 physical pixels – 4.3ms paint time

It’s clear to see that the upscaling adds over 6ms to the paint time (10.8ms versus 4.3ms), which is unfortunate given that for 60fps we need the entire viewport to paint in under 16ms – and I’m sure most sites have more than a single image for the browser to render.

I think what this shows, though, is that any image scaling is going to cause paint overhead. Unfortunately, many developers, including ourselves at Condé Nast, are using percentage-based widths for responsive designs, and therefore downscaling for most users in lieu of a better responsive image solution. A further test (displaying an 1800×1200 image at 600×300 virtual pixels, i.e. 1200×600 physical pixels) shows that downscaling can be even more costly. I would like to take the time to put together a table with a more comprehensive set of test results at different sizes and pixel densities.

1800×1200 image displayed at 1200×600 physical pixels – 23.5ms paint time

Responsive Images – Thoughts before Edge Conf, and the Element Size Problem

Ahead of my contribution to the Responsive Images panel at Edge Conference in NY next week, I wanted to get down my thoughts on the topic, if only to see if it differs after the conference. Well, my thoughts along with many discussions and real-world implementations with my team at Condé Nast Digital UK.

Some of the most popular currently proposed solutions are:

  1. srcset extensions to the img element
  2. <picture> element, including multiple <source> definition elements
  3. A compromise between 1. and 2.
  4. Client Hints

All of these proposals seem to agree on one thing: that there should be an ability for developers to define different image sources based on the size of the viewport, or the pixel density of the screen. Pixel density, because we don’t want to upscale images by displaying them at their replaced element’s virtual pixel size; viewport size, because we want art direction over differently sized crops of an image.
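For concreteness, the first two of those proposals looked roughly like this at the time – URLs and breakpoints here are invented for illustration:

```html
<!-- 1. srcset: offer a higher-density source for high-dpi screens -->
<img src="photo-600.jpg" srcset="photo-1200.jpg 2x" alt="A photo">

<!-- 2. <picture>: art-directed crops selected by viewport media queries -->
<picture>
  <source media="(min-width: 45em)" src="wide-crop.jpg">
  <source media="(min-width: 18em)" src="square-crop.jpg">
  <img src="fallback.jpg" alt="A photo">
</picture>
```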

This isn’t enough

I’d go so far as to say that this may be a destructive place to start. In the case of the <picture> element, developers would actually be defining rules for which sources to use at which screen widths, independently of the stylesheets for the page. This is problematic because that disconnect flies in the face of semantic separation, and means we will have a difficult time defining one place where layout is controlled. Consider the scenario where the image has one or both of width and height set to ‘auto’ (as they both are by default). By defining a different image source at different viewport sizes, one with a different pixel size, the image displayed in the replaced element will also change size to match. I recognise that this is *already* a problem with the <img> element as it is today (and some may say a welcome feature), even without multiple source/media-query combinations, but in my opinion this is a different level of problem because of the explicit definition of media queries within the HTML.

The srcset solution is better in that it’s still adhering to the current principles of <img> and simply providing a way for us to give (what the browser hopes to be) the same image at a larger size.

Responding to Element Size

But can this be taken one step further? Through CSS, we can alter the user experience of our pages – layout, appearance, transformation, animation, etc. All this can change the positioning and size of our <img> elements. To reiterate – it’s in the CSS that we define the rules that govern where and how big our images are. The browser should choose an appropriate image source depending on that location. In order to do that, we should give the browser an appropriate list of image sources we have for that element, and explicitly define their width and height. Something like:

<img alt="Barack Obama stands to deliver his speech at the White House">
    <source width="300" height="150" src="" />
    <source width="600" height="300" src="" />
    <source width="1920" height="1280" src="" />

This still adheres to HTML’s purpose (which srcset does too) of merely defining data in HTML. This has to be the solution to responsive images – in fact, it is similar to the solution we use at Condé Nast UK for GQ, Wired and others, albeit a JavaScript solution (codenamed ‘srcTwizzle.js’ at version 1). Consider the following example, where we have a page at 530px and 730px viewport width:

730px viewport width

530px viewport width

The CSS is defined to stop floating the list of ‘latest’ articles below 530px (they are on the right at 730px) so that it no longer forms a second column, and the featured images are set to fill the space of their containers (100% width). So, at a smaller screen size, we actually have a bigger image. Not a problem for the <picture> element, one could say – we just define a smaller image for viewports of 730px width than we do for screens of 530px width….

…But consider the scenario that we offer users the ability to remove the ‘latest articles’ list. And that we have the following css rules:

#FeaturedArticles { width:100%; }
#LatestArticles { width:50%; }
#LatestArticles ~ #FeaturedArticles { width:50%; }

When the #LatestArticles list is removed from the DOM, the #FeaturedArticles list expands to fill the full width. Now we have a problem using the <picture> element, where the image will be upscaled. However, if the browser makes the choice based on the element size, then since we’ve given a width rule to the image element, the most appropriate image source for the size of the element will be chosen.
In my experience, we achieve a responsive design by using percentage-based sizing and some media query adjustments. It works well. We should keep doing that, without adding in more rules to make specific changes at specific viewport sizes.
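The selection logic behind this element-size approach can be sketched roughly as follows – function and field names are invented for illustration, and this is not the actual srcTwizzle.js code:

```javascript
// Pick the best source for an element: the smallest candidate that still
// covers the element's physical pixel width (no upscaling), falling back
// to the largest available when nothing is big enough.
// (Hypothetical sketch – names invented for illustration.)
function chooseSource(sources, elementWidth, pixelRatio) {
  var target = elementWidth * pixelRatio; // physical pixels needed for 1:1
  var sorted = sources.slice().sort(function (a, b) { return a.width - b.width; });
  for (var i = 0; i < sorted.length; i++) {
    if (sorted[i].width >= target) return sorted[i];
  }
  return sorted[sorted.length - 1]; // accept some upscaling as a last resort
}
```

In the browser, `elementWidth` would come from the rendered element (e.g. its clientWidth) and `pixelRatio` from `window.devicePixelRatio` – so the choice automatically tracks whatever size the CSS gives the element.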


There are drawbacks to this approach:

  • You have to know the width and height of each image at HTML generation time.
    Though for the srcset and <picture> solutions you kind of need this too – you’re just assuming what width and height the image will be at given screen sizes, and making assumptions about the location / margin / padding, which is worse.
  • Each source has to be the same aspect ratio.
    I think we could get around this by defining a spec for how aspect-ratio differences behave: an optional aspect-ratio-group element, for example.
  • Images will be scaled.
    It’s true that with the <picture> proposal we could define different images for different screen sizes, and the image would be displayed at its actual pixel size; our page layout would then adjust accordingly. However, if we are to encourage percentage-based sizing more and more, then unless we’re defining fixed widths at different breakpoints, scaling is here to stay.

Isn’t this the same as Element Queries?

Element Queries aim to solve much the same issue. The principle guiding the call for Element Queries is that our layouts should be fluid by design. When a layout is fluid, it’s the viewport size which affects the box size and position of an element, along with all the CSS rules applied to all the elements in the document. And what a lot of developers and designers really want is to alter the behaviour of an element (or the content of an element) when the browser has given it a certain size because of those rules. They want this because it becomes cumbersome and messy to keep track of everything changing when using mostly percentage-based sizing, and so designers and developers are pushed towards a strategy of defining various breakpoint widths and consolidating many fixed-width rules into those breakpoints instead. In our team, we still like to think about each element’s purpose individually and apply responsive behaviour to it in a somewhat componentised fashion, whilst considering the layout as a whole. Helpers like SASS get us a long way here, but there are still a lot of real-world scenarios where having the browser make decisions based on viewport size as a whole forces us into JavaScript.

Comments welcome, especially to tell me why this is not the correct approach – I’ll probably have a response for you!

MonoTouch.Dialog.DialogViewController and UINavigationController: missing back button

Ran into an issue where I was pushing several DialogViewControllers onto a NavigationController. The DialogViewController instances would asynchronously fetch data, and add the dialog elements when the data was retrieved. Whilst this was working quite well, at any given point in the navigation the back button would only appear on the topmost view (the current view). Navigate back to any previous view, and the back button would not be on the NavigationController.

I knew that the NavigationController is reliant upon the title of each UIView, so I made sure I was setting that, but it didn’t help.

The solution was to make sure not to reinstantiate the Root element of the DialogViewController. Doing so must mess with the Title of the UIView – even if the title content stays the same, it looks like it causes the NavigationController to forget what the view is called and prevents a back button from showing. Instead, instantiate the RootElement when the class is created, and add items to that object when the data has been retrieved.


//problematic: reinstantiating the Root when data is retrieved loses the back button
this.Root = new RootElement (myTitle) {
    new Section ("Folders") {
        Elements = myData.Select (obj => (Element)new StringElement (obj.Name)).ToList ()
    }
};

//better: data has been retrieved, add to the existing Root instead
this.Root.Add (new Section[] {
    new Section ("Folders") {
        Elements = myData.Select (obj => (Element)new StringElement (obj.Name)).ToList ()
    }
});

Measuring Viewport size with Google Analytics

Google Analytics recently updated their API so that it’s possible to track non-interaction events – that is, to automatically track events on many or every pageview without reducing the bounce rate to 0%. They did this by adding a boolean parameter to the _trackEvent method call which, when set to true, indicates that the event was not based on user interaction.

Now we can send extra information to Google Analytics without having it interpreted as user interaction – and one piece of information we’ve always wanted to track in Google Analytics is viewport size. It’s great that GA already tracks screen resolution, but that doesn’t tell us what size the viewable area within the browser window actually is. So let’s track the initial size, and then any time the user resizes the window:

//send viewport size to GA on initial load and when resized, as non-interaction events
$(function() {

  //track initial viewport dimensions
  var viewportWidth = $(window).width();
  _gaq.push(['_trackEvent', 'Viewport Dimensions', 'Viewport Dimensions Initial', viewportWidth + 'x' + $(window).height(), viewportWidth, true]);

  //track viewport dimensions being changed by resize (throttled)
  var gaResizeCompleteHl;
  $(window).resize(function() {
    clearTimeout(gaResizeCompleteHl);
    gaResizeCompleteHl = setTimeout(function() {
      var viewportWidth = $(window).width();
      _gaq.push(['_trackEvent', 'Viewport Dimensions', 'Viewport Dimensions Resized', viewportWidth + 'x' + $(window).height(), viewportWidth, true]);
    }, 500);
  });
});

(Paste this after your Google Analytics code – and sorry for the laziness, my code requires jQuery.) Since events allow for a numeric value to go with the action, I chose to send the width, which GA can use to do powerful filtering during segmentation.

One of the biggest but most exciting challenges at the moment for web development is producing sites which respond beautifully to different screens, different methods of interaction and different amounts and types of data. In order to build those sites to the users’ needs, we need as much data as possible about how people are using our products. Using Google Analytics and the above, we can segment the existing GA data and run queries using the new viewport information to answer questions such as:

  • What percentage of people run their browsers at full screen width?
  • How many tablet (or phone) users are browsing in portrait versus landscape?
  • When or why do people resize their browsers?

On a similar note: when thinking about designing for larger screens, whilst I’m completely driven that we create experiences which make use of the whole screen, I have mixed feelings towards Mac OS. First, I panic because of pre-Lion versions’ tendency to run browsers at very reduced widths compared to screen widths. Then I feel excited about Lion’s full-screen mode, and how much users embrace and love it. Seeing that, and the browsing experience on Windows 8, makes the direction of travel clear – our task is to make those full-screen experiences as usable and beautiful as possible.

You can’t put your privacy concerns on Facebook anymore

After Facebook’s announcements last week around the new version of OpenGraph (version 3), many have been scared of an increased invasion of privacy – Facebook recording and displaying, for the world to see, your activity as you browse around the web. This was sparked by certain media outlets releasing features where, as you browse their websites, each action is relayed to your Facebook Timeline.
Facebook OpenGraph Timeline Music items

Facebook OpenGraph News items

There are two myths which seem to be perpetuating this hysteria:

Myth #1: Facebook is implementing this functionality

Actually, Facebook have only created an API for apps and sites to post actions to your history, and have done so in a very controlled way. The actions must contain an approved verb (e.g. read / listened to / played) and an approved type (article / TV show / song), and cannot be anything outside of those bounds. In front of that, the new actions API puts a sufficiently descriptive permission model for users of each app. It is the apps and sites you should direct your frustration at if you don’t want them to send your every interaction to Facebook. Indeed, I think these sites should offer more control – a compromise between fully automatic sending of actions and a button similar to the Like button.

Facebook OpenGraph action creation

In fact, Facebook could have done so much worse. Due to the success of the Like button, Facebook could have recorded literally every article you visit on every site which implements the Like button (lots), without asking for any permission at all or changing any code on those sites. They had the power to know, without permission, everywhere a logged-in Facebook user visits – but they chose to make this a push mechanism for content publishers. If you don’t like the fact that once you connect with Yahoo News or Spotify, every article you read or every song you listen to will be recorded forever on your Timeline, then you have to blame Yahoo or Spotify. Genius.

Myth #2: Anything I do on the web now will be recorded on my Timeline

Again, this depends on how media publishers intend to use facebook’s new API. Sometimes it will make sense to post actions without user interaction each time (listening to a song for example), and sometimes it won’t (imdb doesn’t know if you actually watched a movie, until you press their new watch button [this doesn't exist, I made it up]), but it’s all in the control of the publisher.

This is the evolution of the Like button, the old profile apps, and Facebook Beacon combined

Facebook apps – the ones which created the mess of boxes and videos on people’s profiles, and which got removed a year or two ago – allowed more identity to creep through, but in an extremely uncontrolled manner (developers could post pretty much any HTML in any layout they wanted). Now, developers don’t write any HTML on your profile – the actions your app sends to Facebook are sent as data in a very specific manner, and how that is displayed and used on people’s Timelines is presented to you as four Facebook-controlled options.

Facebook OpenGraph timeline display configuration

Like button – as I mentioned already, the Like button was a precursor to all this: one simple button which likes a URL on the web. The Like button could be reimplemented within the new OpenGraph API, but likely won’t be yet, because you don’t need to grant a website permission for it to show a Like button which knows about you.

Facebook Beacon – actually, Facebook have been trying to achieve what they’ve done this week for a while, starting with Facebook Beacon. Through that experience, it’s obvious Facebook learnt a lot about the direction they would need to take to reach their destination of becoming the central point for digital identity, sharing and discovery – and that direction is one where they share the responsibility with content providers.

Like it or not, this is the direction the web is going if you want to take part in its social features and, personally, I love it.

It’s definitely going to be an interesting time as we see how different content publishers embrace the new OpenGraph features – will implement an integration for their x hosted blogs? What about content publishers who have tried to do similar social/content mixing before – will iTunes give up on Ping and embrace facebook, or Microsoft connect Zune to facebook (their music and movie streaming is good but their social never took off)?

If Timeline is something to embrace as a cloud storage of my life events, and it’s to be a complete picture – then I’d like to see all the above, plus some interesting ways to push content to timeline from other mediums – TV shows and cinema, sports events (RunKeeper/MapMyRun), and trips for example, but I’d also like to see great tools for me to curate that information and powerful ways for facebook to amalgamate all my data, analyse it, and show interesting angles on my Timeline.