Posts Tagged ‘Carnival’

Masabists: Illustrating Gartner’s Q3 2010 Global Handset Shipment Report

// November 15th, 2010 // No Comments » // Mobile

This post originally appeared on the Masabists blog.

I created a quick infographic this weekend to illustrate the trends shown in Gartner’s recent Q3 2010 handset shipments report:

Gartner 2010 Q3 global handset shipments

The bluer a company is, the more its market share is growing; the redder a company is, the more its share is being eroded (even if its handset shipments themselves are up). This nicely illustrates the slow decline of the old guard as a more diverse mix of companies invades the handset space. Fragmentation is here to stay.

Please comment on the original post

Carnival of the Mobilists 186

// August 13th, 2009 // Comments Off // Mobile

My Orange post was featured on this week’s Carnival over at All About iPhone – check it out for the week’s best mobile writing!

Masabists: Transcoders round 2

// February 13th, 2009 // Comments Off // Mobile

Note: this post was originally written for the Masabists blog, and was nominated Post of the Week on Carnival of the Mobilists #161.

It has been incredibly busy pre-MWC and so I have not yet pulled together our research into mobile browser handling of SSL certificates (as promised when I described transcoders). However one piece of news inspired me to write an addendum – Novarra, a transcoder vendor, have now started offering a version of their transcoder for laptops using 3G dongles. Bill Ray isn’t certain about all the details, and Novarra still haven’t replied to my previous request for information so it is difficult to confirm anything, but I’ll restate the problem now and draw some possible conclusions from that.

The problem as it stands: transcoder vendors want to rewrite content to improve performance and accessibility of sites accessed through “mobile” connections. To do this for secure HTTPS sites, they insert themselves into the transaction quietly, breaking end-to-end encryption and creating a potential security hole – though if they are doing their job correctly it should be hard to exploit. More details here.

The transcoder vendors are asking the W3C to approve a new best practices guide which states that they will be allowed to insert themselves into an HTTPS connection, but if the server has set a special HTTP header they will immediately stop and allow end-to-end encryption to resume. If you’re interested in the politics WapReview have an excellent post covering them.

On the face of things the proposal sounds eminently reasonable. It isn’t.

One year ago Netcraft reported that there were 2,451,780 sites on the web responding to SSL (HTTPS) requests, 794,008 of which had certificates verified by trusted third parties such as Verisign. Growth was nearly 40% year-on-year at that point, so we can assume there are at least 3m sites today, 1m of which are serious about their security.

There are approximately 5-15 transcoder vendors (that’s an educated guess – there are more than two and I can’t imagine there are as many as 20).

The vendors believe that the best way to guarantee security is for them to break the existing standard, and for all 1m (or 3m) sites on the web to change their server configuration if they want to continue being secure. This agreement will be made by a committee of a few dozen people at the W3C – no word yet on how widely it will be publicised, but the people publicising the process so far are largely outsiders who don't like the sound of it.
Many might not care, because they don't support mobile handsets directly and don't plan to in the near future. The move to broaden transcoding to laptop connections may make those people think twice – especially since WiMax and LTE will eventually offer true broadband over the air, changing the nature of the internet connection industry and making it far more wireless.

It seems common sense to a layman: the 3+ million e-commerce sites, corporate intranets etc. should be allowed to remain secure without having to change all of their configuration; the dozen or so transcoder vendors should honour the existing system. If it is necessary to transcode HTTPS connections, this should be an opt-in service decided by each and every site, which should be given a full explanation of the (minimal?) risks – because they have the burden of care to their users, adherence to banking/privacy regulations, etc.

Let's hope the W3C makes the right decision – but if you are concerned about this, it may be worth publicising the issue widely, just in case.

Please comment on the original post.

Masabists: NFC – One Day It’ll Be Great

// October 15th, 2008 // Comments Off // Mobile

Note: this post was originally written for the Masabists blog, and featured on Carnival of the Mobilists #146.

Currently there are two potential candidate technologies for scanning an electronic ticket held on a mobile phone – 2D barcodes (visual scanning) and NFC (wireless scanning). From Masabi’s point of view they are interchangeable – just connection technologies for our ticketing services, which can support either – but MoMo London’s NFC event this week provided some very interesting confirmation that barcodes will be ruling the roost for some time.

The event had a great mix of speakers, covering the full breadth of the technology from potential use cases right through to a speaker from O2 covering their recently concluded London-based trial. Telefonica/O2 have been the most aggressive European operator backers of the technology, and their trial stats were very positive on the surface, but they didn't stand up to much scrutiny.

The headline 80+% approval ratings all came from Londoners who had been given free handsets that they could use as Oyster cards (for travel on the Tube), with free credit to get them started – I’d certainly be very happy with that! The majority of the UK population – and the majority of the European and world populations – haven’t been obliged to use a widespread NFC-based network every day for several years, so it is unlikely that these interest levels could be replicated elsewhere.

The most telling thing about Claire from O2's talk, though, was that the trial was so successful that O2 now had enough data for… directing another trial, at some point in the future. Moreover, she stated that O2 had decided they were not interested in taking any revenue share from NFC payments, whilst acknowledging that they as the operator would bear the brunt of user support costs for on-phone NFC payments. Does that setup make O2 likely to spend money rolling out a service which will then cost them more money to run, with no revenue upside?

Just to confirm this pessimistic view, Claire declared that O2 would only consider a commercial launch in partnership with all operators, offering a wide range of NFC-enabled devices. However consider:

  • Currently only Nokia have an NFC handset on the market (a niche mid-range feature phone);
  • That handset isn’t widely available through operators on contract;
  • No other major manufacturer has announced an upcoming NFC handset;
  • Handset manufacturers will only increase the cost of a device by putting NFC into it if the operators ask them to – clearly, they aren’t.

Claire’s tone of voice seemed… cautious when she suggested that analysts reckoned it might get big in 2012. Even if we ignore the impact of potential world recession on the technology industry, that seems optimistic:

  • New handsets take time to design and new features require new skills to integrate into hardware and firmware;
  • There is a clear lag between handsets first arriving in the shops and those same handsets becoming widespread, as users usually only upgrade at the end of a contract;
  • Handset upgrade cycles are slowing down as operators try and encourage customers onto 18 and 24 month contracts, instead of the more usual 12 (Orange recently offered me a massive discount for just such a move when I upgraded to Nokia’s flagship N96, which contains every feature under the sun – except NFC).

Going mainstream in 2012 suggests a lot of new NFC handsets will be commissioned, designed, manufactured and shipped out in volume by the start of 2011, just over two years from now… possible, but unlikely.

So is NFC living on perpetual limited-scale-trial life support? That may be the case for on-handset NFC, but James from Mastercard hinted at the real future of the technology – already hundreds of thousands of normal credit and debit cards have shipped out NFC-enabled, even if their owners don’t have any idea what that means.

It is not a huge leap of the imagination to see NFC as being “cheap enough” to just implement in next generation card processing terminals, so the natural upgrade cycle of those terminals leads to a widespread “stealth” deployment of NFC processing capability – without requiring retailers to actually pay out for a massive infrastructure upgrade so soon after Chip and PIN was introduced. Once the technology is out there, it may start to be understood and used, and once it starts to be used the added advantages of NFC on handsets may be compelling enough – and cheap enough – for a mass market roll out.

Until then, we’ll stick to a two pronged approach: 2D barcodes for our mainstream ticketing solutions, and limited scale NFC deployments where the handsets and infrastructure are available.

YourRail ticket sales app

National Express East Coast ticketing app

Please comment on the original post.

Masabists: The Mobile Web and Fragmentation

// September 24th, 2008 // Comments Off // Mobile

Note: this post was originally written for the Masabists blog, and was Post of the Week on Carnival of the Mobilists #143.

Last week I read some interesting comments that came out of the last MoMo London event on whether the mobile web is the only way to deliver mobile services, and whether it will fragment like every other mobile development platform. I thought it was worth republishing my comments in an extended form here on our official blog as this is an area in which a number of people have an opinion, and there is a real divide between the optimists and the realists.

Firstly, the choice of platform for a mobile service is not clear cut, and we strongly believe at Masabi that no single platform is the 'winner'. Some types of service are best driven by SMS, some as standalone apps, some purely web-based, some as a hybrid, and some will simply never work in the real world. At Masabi we focus on areas where standalone apps deliver the best experience, tying in SMS in various innovative ways and offering web fallback where viable – but we always assess every project on its own requirements.

Ultimately, mobile phone users want easy-to-use services that make their lives better and are context-relevant. It is difficult to see evidence of many users clamouring for 'One Web' and access to desktop-optimised web sites on their small phone screens; the success (and clearly superior usability) of, for example, the iPhone-optimised GMail and Facebook sites shows that 'one web' is often not actually in the user's interests. The even greater improvements achievable with dedicated apps – look at, for example, Google Maps on the iPhone – further suggest this is wishful thinking, and one size does not fit all.

The fragmentation issue is a thorny one. People mean many different things by fragmentation, and for some reason Java is a particular target for misinformed comment. Mobile fragmentation comes from a huge range of issues, from bugs and implementation inconsistencies, different screen sizes (a problem for any application that wants to look better than a 1996 web page), different interaction mechanisms (touchscreens, qwerty and numeric keypads are not alike), different hardware features (only some phones have GPS, camera, Bluetooth) etc. These issues apply equally to browsers and standalone applications.

The mobile web is still relatively young by many counts (despite the legacy of Wap), and it is only recently that users have been presented with browsers which are not a punishment to use. They are improving rapidly, as they must, but they are subject to the same Quality Assurance constraints that beleaguer all mobile phone firmwares – the lifecycle of a mobile phone is driven by the marketing department much more than the software development department, and TV advert deadlines are less amenable to change under pressure than bug lists.

The mobile browser world is extremely fragmented already when viewed as a whole – and even if you ignore 85% of users and look only at smartphones, it is still fragmented. There are many bugs even in rendering, and they don’t get fixed because updating firmware is still non-trivial. Once you start looking at scripting the picture gets worse:

  1. There are no standards for things like camera access, and all mobile history suggests that will lead to manufacturers adding proprietary extensions – some standards are being suggested, but none has yet signed up Nokia, Samsung, Apple et al.
  2. There are and will continue to be differences in performance and implementation specifics for all of these new APIs even after standardisation, just like desktop browsers but with more variables – even when the underlying rendering engine is the same, e.g. WebKit.
  3. Offline scripting engines like Google Gears really make things worse. A chat with one of the Gears guys at the previous MoMo London was revealing – Apple will never have Gears on the iPhone, which means instant fragmentation even if some parts of the API are copied over. They only offer it for Windows Mobile now with Symbian support hinted at soon – but Google will NOT pursue a strategy of bundling on devices, which means users must perform a complicated native app installation to get Gears. How many users will do this? Few, I think – and it leaves feature phone users, vastly outnumbering their smartphone brethren, out in the cold.

There are two ways to work around these handset differences – downloading a single big script which contains variants for everything (the current desktop way) or running servers which auto-switch based on the HTTP User-Agent header and a database like the WURFL. The former approach means more data shifted (which is still not free for most people, and certainly isn’t instant) with a greater memory and performance overhead when it arrives. The latter will risk becoming a rat’s nest of script conflicts – scripting is lovely for quick development but a nightmare for maintenance of multiple different library versions. Desktop browsers still require browser-specific workarounds in the script as they are still not all compatible, despite years of Javascript development and auto-updates over broadband – it is inconceivable that mobile, with more platforms, more browsers, more device variation and fewer updates, will not suffer similarly.
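The second approach above – server-side switching on the HTTP User-Agent header – can be sketched roughly as follows. The device table here is a made-up stand-in for a real device database such as the WURFL, and the script filenames are invented for illustration; a production system would query the real database instead.

```python
# Illustrative sketch of server-side content switching on the User-Agent
# header. The device table is a toy stand-in for a database like the WURFL,
# and the script variant names are invented for illustration.
DEVICE_PROFILES = [
    # (substring to match in the User-Agent header, script variant to serve)
    ("iPhone", "touch.js"),
    ("SymbianOS", "s60.js"),
    ("MIDP", "featurephone.js"),
]

def script_for(user_agent):
    """Pick the script variant for a handset, falling back to a safe default."""
    for needle, variant in DEVICE_PROFILES:
        if needle in user_agent:
            return variant
    return "generic.js"
```

The maintenance risk described above lives in that table: every new handset, browser release or firmware bug adds another row and, potentially, another script variant to keep in sync.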

Even ignoring all of this, the kind of AJAX webapps people use today are feasible because PCs have effectively free always-on high-bandwidth network connections, effectively free mains electricity supplies or laptop-style battery expectations, fast processors tied to loads of memory and even more storage for caching, and a single consistent UI model with mice, full keyboards and big screens. XML may carry a huge overhead over the network, and scripting may consume a vast number of clock cycles, but it doesn't matter because both are plentiful. A drag-and-drop UI model, like Flickr's Organizr, can work on Safari on the Mac and also IE on a PC with only a few code tweaks. Does this sound like a mobile device? Some of these areas are improving, some aren't, but none of that avoids the fact that AJAX webapps are not a natural fit for handsets.

The browser is a natural fit for a huge number of mobile services, and improvements in browsers are certainly very welcome (and overdue). The browser is not the be-all and end-all of the mobile user experience though, and developers who believe it is are to some extent working for the world they wish existed, not the one that does exist.

Despite immensely rapid technical innovation in the mobile world, actual adoption rates of new phone software like browsers are slow, with upgrade periods lengthening and a very large installed base of users on legacy phones. Masabi recognises this, and our applications support the very widest array of handsets possible, because for many real-world services you can never be sure the users who would benefit from the service are the same as the users who always upgrade to the latest handset.
(Disclaimer/plug: I’m one of the MoMo Estonia chapter founders, but I keep referring to MoMo London events simply because they are extremely good and not because of any MoMo bias!)

Please comment on the original post.

Masabists: The Truth About Mobile Fragmentation

// January 14th, 2008 // Comments Off // Mobile

Note: this was first published on the Masabists blog, and was featured on Carnival of the Mobilists #107 and #108 (not sure why it reached both!).

My most recent task has been architecting the Playtech mobile product, which currently encompasses 10 games running on over 600 devices in 8 languages, each heavily customised for multiple licensees. Furthermore, the system was designed to continue scaling across more than a hundred licensees, with dozens of games and languages, supporting all mass market devices, all managed by a small team of inter-disciplinary experts. This has given me some interesting insights into the problems of fragmentation which I would like to share.

The dominant UI model on the desktop has been the Windows, Icons, Menus and Pointers system that became popular when Apple copied it from Xerox, and has stayed in the mainstream since Microsoft copied it from Apple.

No such standardisation is present in the mobile handset world. This will remain the case for the foreseeable future, as the market segments – offering “Swiss Army Knife” phones for power users, as well as specialised phones for people who value music, games, photography or fashion most.

Some will have keyboards, some trackballs, some touchscreens; some will have high resolution displays, some low; some will be landscape oriented, some portrait; some will have powerful processors and lots of memory, some won’t; and all of this will change radically over time, with some users upgrading every six months and some every four years.

Factor in the tight commercial deadlines that phone releases are subjected to, which never leave enough time for firmware QA, and you see why any mobile development platform that is large enough to be mainstream will be fragmented for a long time to come.

Anyone who has ever attempted to develop software for mobile phones knows that there are many hidden pitfalls along the way. However, mobile development need not be fraught with pain. A little experience immediately allows you to identify the key problems before you write a line of code, and design around them; a few rules of thumb can work wonders on development schedules and keep the QA team happy.

Pick Your Platform

I’ll try not to repeat my earlier post here, and will just briefly describe the three main types of mobile application platform – SMS, browser-based and installed application. Each has its own strengths and weaknesses, and they can often be used together to build a stronger product.

SMS is excellent for short, simple, immediate interactions across any handset – but it breaks down once complex instructions are required.

Browser-based applications are growing hugely on the fixed web now that client-side scripting (such as AJAX) allows slick interaction, benefiting from automatic updates for every user – because everything is fetched from the server on demand. AJAX on mobile is more theory than reality right now, however, able to reach only a tiny fraction of the market with incompatible implementations. Browser rendering is also hit or miss, and data costs can rapidly rack up: XHTML markup is heavy, prepay customers are regularly overcharged, and flat-rate data, when available, is often full of loopholes and obscure clauses – billing per Kb is still very much the norm outside of the US.

Browser fragmentation will get worse long before it gets better – for example, radically different hardware like the iPhone introduces special case UI and design rules, and then 3rd party browsers like Opera or Mozilla further multiply these special cases whenever they launch for that hardware.

The other option is an installed client-side application, offering immediate and offline access, far greater presentation opportunities, and the potential for reduced data costs – but it is harder to update over time, and is best suited to services which are intended to be used regularly.

The Traditional View of Fragmentation

The most common thing people will say when discussing mobile applications, particularly those which are installed on the client-side, and Java apps in particular, is that fragmentation is a huge problem. This is generally repeated as a fact even by those who have never coded Java, which is why it is so often misunderstood. Figures like the 25,000 builds Glu required for Transformers are repeated with disbelief (which I share); I’d love to know how many of those included the tiny changes that operators demand to feature content on their portals, but I’d wager it’s a huge factor because I know Glu aren’t incompetent. As a comparison, we normally run to between 30 and 60 builds per language for an app or game.

Some history may explain how this feeling became “common wisdom” in the industry. In the bad old early days of MIDP 1 (the first version of mobile Java), applications were quite restricted in what they could do. MIDP 1 did not define a way to play sounds, vibrate the handset, or provide a mechanism to allow the game to take over the entire screen, for example.

Most major handset manufacturers saw this as an opportunity to differentiate their products, and dived in with proprietary APIs to do lots of extra things which arguably the MIDP 1 spec should have included in the first place. These could all be abstracted away by developers behind a common internal API and a single build, but it would have required a little effort to set up and was hardly ideal. This led to much of the initial bad press for mobile Java, which is still heard to this day. Buggy handset firmware is of course the other huge factor, but it is hardly restricted to the development environments – whole handsets regularly suffer.
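The "common internal API" abstraction described above can be sketched as follows. All the class and method names here are invented for illustration (the real code would have been JavaME calling manufacturer-specific extensions); the point is that one build detects the platform at runtime and routes through a single internal interface.

```python
# Illustrative sketch of hiding proprietary per-manufacturer extensions
# (as MIDP 1-era developers did) behind one internal interface. All API
# names here are invented; the real code would have been JavaME.
class VibrationBackend:
    def vibrate(self, millis):
        raise NotImplementedError

class NokiaBackend(VibrationBackend):
    """Stand-in for a call into a proprietary Nokia UI extension."""
    def vibrate(self, millis):
        return "nokia-vibra:%d" % millis

class NullBackend(VibrationBackend):
    """Handset with no vibration support: degrade gracefully to a no-op."""
    def vibrate(self, millis):
        return "no-op"

def detect_backend(platform_name):
    """One build, runtime detection: pick the backend this handset supports."""
    if platform_name.startswith("Nokia"):
        return NokiaBackend()
    return NullBackend()
```

The application code only ever calls `vibrate()`; the per-manufacturer mess stays in one file.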

Sun (the makers of Java) eventually got their act together and started to introduce extended functionality through the JSR process: first MMAPI for sound, then WMA for text messaging, and then MIDP 2. Rapidly, the old manufacturer APIs dropped away; some can still be found, but they are really only objects of curiosity these days. Right now, MIDP 2 is available on over three quarters of actively used Java-enabled handsets, with most popular devices also featuring lots of the common extensions as well.

The next most popular platform, Flash Lite, is now available on commonly used handsets in versions 1.0, 1.1, 2.0, 2.1 and 3.0. All of these offer different capabilities and implementations run in one of two different ways: either embedded on a web page (like a traditional Flash movie) or running as a standalone app (like JavaME content), each of which brings critical differences to the experience. Japanese handsets (where Flash Lite is arguably most popular) tend to embed Flash in the browser, preventing the Up and Down keys from being used in the Flash content. The main reason Flash Lite has fewer variations than JavaME is that it offers less power and access to the device’s features – and it still has its fair share of the standard cross-device issues and bugs.

After Flash come native Symbian OS apps – offering plenty of APIs (between the three major and many minor versions of Nokia's Series 60 and the keypad and multiple touchscreen variants of UIQ). Recent versions of Symbian have repeatedly broken binary compatibility, and the range of APIs always expands alongside device capabilities. So it goes on.

Mobile development platform graph

It is inevitable that any new feature absorbed into a phone – from cameras and music playing to GPS location tracking, OpenGL 3D and operator micropayment integration – will require a dedicated API before developers can access it. Features are being absorbed into phones at a very rapid pace, so any living development platform, be it browser or fully fledged open OS, will see API growth over time. There is simply no avoiding it.

Loosely drafted specifications with optional features and optional supported file formats should be avoided, but where they exist they simply require management – and this is not rocket science. If you know that some devices support WAV sound but others only handle AMR, it's not a particularly difficult problem to set up builds for the former to get WAVs and builds for the latter to get AMRs. We now have to generate platform-specific builds for a single application – we have certainly lost the "Write Once, Run Anywhere" bonus we were promised with Java. However this is simply unavoidable in the mobile space whatever platform you use, once you move beyond text-only messaging.
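The WAV-versus-AMR build selection just described is the kind of thing a build script handles in a few lines. A rough sketch, with invented device-group names and asset filenames (any real build system would read these from its device database):

```python
# Illustrative build-script sketch: selecting the right sound-asset format
# per device group, so each binary ships only what that handset supports.
# Device-group names and filenames are invented examples.
SOUND_SUPPORT = {
    "series40_v2": "wav",   # these handsets play WAV
    "series60_v2": "amr",   # these only handle AMR
    "motorola_mib": "amr",
}

def assets_for(device_group, base_names):
    """Return the asset filenames to bundle into this group's build."""
    fmt = SOUND_SUPPORT.get(device_group, "amr")  # conservative default
    return ["%s.%s" % (name, fmt) for name in base_names]
```

Multiply this by sounds, graphics, API stubs and screen groups and you get the 30–60 builds per language mentioned earlier – tedious, but entirely automatable.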

For what it is worth, my personal feeling is that the core Java functionality is becoming more reliable over time – we haven’t had anything as bad as a 7650 or a 6600 for some time, and even some of the more notorious platforms like JBed are getting more stable. The more obscure APIs have teething problems and no platform is bug free, but most everyday features can be used easily enough once you know the basic tricks.

Mine’s Bigger than Yours

I’ve just listed one reason we may wish to split a single application into multiple binary files: incompatible file formats between devices. There are many more, and most apply to any development platform you care to mention.

The single factor which leads to the most extra work on any mobile app is the size of the device's screen. The smallest screen the Playtech mobile product supports is 96×65 pixels for Nokia's still popular low-end devices; the current largest is 352×416, used on some of Nokia's highest end handsets; I have no doubt we will expand to support VGA and larger screens soon enough. In between, there are currently eight other screen size and orientation groups, all requiring individually redesigned graphics, to cover a few dozen actual screen resolutions. For example, we consider a 176×196 pixel usable screen area to be equivalent to 176×220, and design the graphics and code to be happy with any height between 196 and 220px.
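The grouping logic described above amounts to a lookup from a real resolution to a design group. A minimal sketch – the group names and all ranges except the 176×196-to-176×220 example are invented for illustration:

```python
# Illustrative sketch of collapsing many real screen resolutions into a
# handful of design groups. Only the 176x196..176x220 group comes from the
# text; the other names and ranges are invented examples.
# Each entry: (group name, min_width, min_height, max_width, max_height).
SCREEN_GROUPS = [
    ("small",      96,  65, 101,  80),
    ("mid",       128, 128, 132, 176),
    ("large",     176, 196, 176, 220),   # graphics tolerate any height 196-220px
    ("smartphone", 240, 260, 352, 416),
]

def group_for(width, height):
    """Map an actual usable screen area to its design group, or None."""
    for name, min_w, min_h, max_w, max_h in SCREEN_GROUPS:
        if min_w <= width <= max_w and min_h <= height <= max_h:
            return name
    return None
```

Each group gets one redesigned set of graphics, and the code stretches or pads within the group's tolerances.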

Note that often, the phone will let applications take over most of the screen, but keeps a header and/or footer for showing battery and signal information, the time etc.

Mobile apps and games almost exclusively use bitmap graphics – which have a fixed size and resolution – and not scalable vectors, like a traditional Flash movie, that would be capable of adapting somewhat to different screens sizes from only one binary file. Even Adobe admits that Flash Lite will need different binaries for different devices, as whilst Flash Lite theoretically offers the same vector scaling as its big brother, in reality this is too slow on current devices for fast complex graphics. Experts recommend the use of bitmaps for any non-trivial application, just like Java.

Managing this screen-size complexity is a challenge without huge graphics production timelines (and suicidal designers), but one that can be met with scripting and templates. In Playtech's case, games can feature amazing levels of variation, yet the graphics require less than a day to customise for a new licensee, ready to drop into an automated build.


Fragmentation for All!

Assuming that intelligent code, good tools/scripting and carefully planned graphics can cope with screen size variations and differences in processor speed, what else do we have to worry about? Principally, the very first point mentioned in this post – the UI model of the handset, and the input mechanisms provided to make use of it.

The iPhone was rightly lauded for its touch user interface, a radical departure from even the conventional stylus-powered PDA platforms like UIQ and Windows Mobile, and thumb-powered interfaces as featured on the LG Prada. Windows Mobile itself is now changing to a new model based on gestures, and Nokia’s Series 60 is getting touch screens as well. Great news for consumer choice, but a fragmentation headache for developers – each of these systems requires its own dedicated interface solution to best meet the user’s expectations.

Even on more conventional phones, there are plenty of other differences – to enter some text into a field, most provide simple numeric keypads, but some handsets provide full keyboards and some provide their own proprietary solutions, such as RIM’s Suretype system. Even with a simple numeric keypad, some devices have a dedicated Clear/Delete key and some require a soft key for this function. In theory, you could just create a Java text field and let it worry about this detail, but that would tie you down to very ugly screens with extremely limited control. Most quality applications will implement their own UI components with full branding, but this means extra effort to support all types of character entry.

Even an app with no need for text entry should optimise the rest of the interface for whatever keys are available – allowing users to jump through lists by typing, for example, or handling motion control in games on devices which have no dedicated Up/Down/Left/Right keys. The optimal UI for a device with an Up/Down scrollwheel may be very poor on a device with a trackball, etc.
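The per-device key handling described in the last two paragraphs boils down to remapping raw key events onto logical actions per device class. A rough sketch – the key codes and device-class names are invented for illustration, and the real implementation would be in the app's JavaME input layer:

```python
# Illustrative sketch of remapping raw key events to logical actions per
# device class, so one codebase serves keypad, soft-key-only and other
# handsets. Key codes and device-class names are invented for illustration.
KEYMAPS = {
    # handset with a dedicated Clear/Delete key
    "keypad":  {"2": "up", "8": "down", "5": "select", "clear": "delete"},
    # handset with no Clear key: a soft key doubles as delete
    "softkey": {"2": "up", "8": "down", "5": "select", "soft_right": "delete"},
}

def action_for(device_class, raw_key):
    """Translate a raw key event into a logical UI action (or None)."""
    keymap = KEYMAPS.get(device_class, KEYMAPS["keypad"])
    return keymap.get(raw_key)
```

The UI code then deals only in logical actions ("up", "delete"), and each new device class is one more table, not a fork of the interface code.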

For a trivial app, these differences do not matter – and in a webapp, you simply can’t do these optimisations for your user experience. For a professional installed app that will see frequent and prolonged use, however, these optimisations are 100% critical to the project’s success. The interface must react exactly how the user expects it to, looking and feeling familiar whilst also presenting an attractive, professional and branded appearance.

OS Wars

Many years after Symbian was formed to be the phone OS of the future by some of the key mobile manufacturers of the day, the marketplace is still very fragmented. Motorola currently supports MIB, AJAR, MAGX (a Linux variant), Windows Mobile and UIQ (a Symbian variant); Samsung relies on a number of internal platforms alongside Windows and S60 (the other open Symbian variant); Nokia, the truly dominant force in mobile with between 30% and 40% of the global market at any given time, has managed to rationalise its OS portfolio down to the low-mid range Series 40 and the S60 platform for smartphones. Some operators are attempting to limit the number of platforms they support, but this is starting to look like trying to hold a finger in a leaking dyke.

If Nokia, by some metrics the most successful consumer electronics company the world has ever seen, finds it very hard to raise market share beyond 40%, then it would seem unlikely that anyone else can achieve more. Slow handset upgrade cycles mean that even if they did tomorrow, today’s fragmented handsets would still be in regular use in 2010. Current marketplace trends see increased diversification, both in form factor and in the types of software running on it – and with more software development platforms to work with. Fragmentation may be annoying, but it is here to stay.


Mobile development will always involve hitting multiple moving targets, and the only applications which succeed will be those that embrace this complexity and manage it sensibly. It is more important to ask “how many people can a platform reach?” than “how few targeted binary files will I have to generate?”

The only serious platforms that can give a satisfying answer to this question today are clearly WAP2 or Java, depending on the nature of the application. One day, it will hopefully become possible to write scriptable mobile web applications that can be updated day to day from the server end, but that simply is not possible for the mass market today, and won’t be for some years.

As CTO of a company which is in the business of knowing the ins and outs of every handset and every mobile platform, it is obviously to my advantage to state that only developers who can manage this complexity will be able to survive and thrive in the mobile space. Hopefully, the reasons why this is true, and the reasons why we as a company have taken the sometimes painful direction that we have, are clear. This is a very exciting space to be in, but when you try to work on the cutting edge you must take precautions not to get cut.

Please leave comments on the original post.