There have been two approaches to bringing touch computing to the masses. The first, largely championed by Microsoft, has been to add touch to historically cursor-driven desktop operating systems. The second, pioneered by Apple, has been to build touch-first user experiences. My main criticism of added-on touch in Windows is that user experiences built around tiny cursors can’t elegantly support touch. Microsoft started adding touch support nearly eight years ago, and Windows is still littered with tiny, cursor-sized controls (buttons, menus, etc.). Even as Microsoft has advanced touch in its OS, third-party programs still lag behind. Sure, you technically can use touch everywhere on Windows, but there are only some places where you actually want to use touch.
Added-on touch does have one benefit, though. When the ergonomics of a given task or setup don’t lend themselves well to touch, those using Windows 10 can simply fall back to the same keyboard and cursor conventions that have always been there, while those on iPadOS were stuck trying to awkwardly complete that same task using only touch. Regardless of how well it actually worked, added-on touch was the uncontested winner for supporting multiple pointing methods simply because there was no alternative. While adding cursor support to a touch interface seemed much simpler than adding touch support to a cursor interface, Apple resisted[1] for the better part of a decade. It wasn’t until last September, under the newly named iPadOS, that Apple finally provided a rudimentary cursor hidden away in the Accessibility settings. Honestly, that simple cursor is where I thought Apple was more or less going to stop. I was wrong.
From Apple’s updated iPadOS webpage:
> The reimagined cursor experience has been designed specifically to work in a touch‑first environment.
That sentence says it all. Cursor support added on nearly a decade later to a historically touch-first OS already feels more like a first-class pointing method than touch does on Windows 10. I think this is for two reasons. First, I think it’s evidence that adding a cursor to a larger, touch-based interface is much easier than adding touch to a cursor-based interface. Second, I don’t think you conceptually get to this new cursor by starting with a desktop, where there is little impetus for, and near-insurmountable resistance[2] to, fundamental change.
There have been two approaches to bringing touch computing to the masses: added-on touch and touch-first. Now there are two approaches to bringing multiple-pointer computing to the masses. One, long offered by Microsoft, is a cursor-first experience with added-on touch that is necessarily second-class because of the practical limitations of cursor-based user interfaces. The other, just released by Apple, is a touch-first user experience that’s now also cursor-first.
[1] This is somewhat reminiscent of the omission of arrow keys from the original Macintosh’s keyboard. The key difference here is that Mac users didn’t have to wait nearly a decade to get arrow keys.

[2] Remember when some asshole tried to get rid of the Apple Menu? How’d that work out?