Paradigm Soup

I am not the only one who is apprehensive about Stage Manager. Steve Troughton-Smith elaborated on his concerns in a thread on Twitter, and even echoed my consternation around the lack of names.

I don’t think Apple has figured out the nouns & verbs of the important elements of multi-window multitasking — what is an ‘app’, ‘window’, where does it live? Where does it go when I put it away? iPadOS is all just a soup of recent tasks that may be running, cached, or ghosts

The lack of names suggests to me that Stage Manager is not conceptually complete, and incomplete concepts are how iPadOS multitasking got to where it is today. Here’s what I wrote about iPad multitasking back in January:

A UI needs to fundamentally answer three questions: Where am I now? Where is the thing I want? How do I get there? Before multitasking and in iOS today, this was straightforward. There was a home screen and apps. How do I get to Safari? Hit the home button and tap on the Safari app. With split views and multitasking, there doesn’t even seem to be a canonical lexicon to describe where things are. What is the noun for the thing that is being split? Is it a view? Maybe it’s a window? Okay, so how do I get to that Safari “window”? Maybe it’s on that “screen”, but that’s also not really a term Apple uses as far as I am aware.

  • Where am I now?
  • Where is the thing I want?
  • How do I get there?

iPadOS couldn’t answer those questions back in January. That it seemingly still can’t with Stage Manager is worrisome, to say the least.

Shrinkydink

According to Chris Hynes, it turns out Stage Manager is a rehash of a project he worked on in 2006 called “shrinkydink”. On his blog, Tech Reflect, he described shrinkydink as…

…a radical new way to manage apps and windows and effectively made the existing Exposé irrelevant as well as the Dock as a way of managing running apps and windows.

And he then went on to describe Stage Manager as…

…a radical new way to manage windows and likely makes much of Exposé and the Dock functionality irrelevant… In addition to being a macOS feature, it’s also a much more elegant way to do multitasking on the iPad

Like Chris, I see Stage Manager as something that should obviate the Dock and/or Exposé1 in macOS, as well as the existing multitasking paradigm in iPadOS, but right now it exists almost in competition with them2. It’s one of the major reasons behind my initial pessimistic take. Operating environments should feel like a singular experience, not a bunch of apps cobbled together. Right now, Stage Manager looks to me more like a separate app than a connected part of a whole.

Chris had another take that I wholeheartedly agree with.

Stage Manager appears to be positioned as a power-user feature which I think is a shame. I’d much prefer to see it as something you pick in Setup Assistant or choose in Settings rather than hidden in a menu somewhere. I think this is something that would be especially appealing to a new Mac user.

You know who is comfortable with having a ton of windows open? Power users. Stage Manager should have been positioned as a less daunting alternative for normal people3.

It’s early days yet and there is plenty of time for Stage Manager to improve, though I would be surprised if there are significant changes this summer. If Apple is serious about Stage Manager, it will iterate and improve on the feature over multiple major releases. If that happens, I could easily see Stage Manager being an integral component to Apple’s user experience. For right now though, we have something that at the outset seems a bit tacked-on to me.


  1. And also Spaces. ↩︎

  2. Stage Manager is also functionally different than Exposé. Exposé is a window switcher along the lines of command+tab. No one works while in Exposé. It’s transitional and doesn’t really compete with other UI mechanisms. Stage Manager is a persistent mode that is expected to coexist with Spaces, the Dock, etc… ↩︎

  3. Apple releasing Stage Manager simultaneously for both macOS and iPadOS puts them in a bit of a marketing corner. As much as I think Stage Manager isn’t a power-user feature on macOS, it definitely is a power-user feature on iPadOS. Part of me wonders if Apple should have first released it only on iPadOS, where it was inarguably most needed. ↩︎

Stage Manager Notes

Below is a rough collection of thoughts I’ve had on Apple’s new Stage Manager. Take into consideration that Stage Manager is in early beta and that I haven’t tried it out yet, so these opinions and reckons are likely to change.

Names

Names are important for two reasons:

  1. They demand and enforce a paradigm, because names suggest relationships. These relationships make computers easier to use. For example: “Apps” can live in the “Dock”.
  2. They are necessary for basic communication.

Where is the stage?

I understand “desktop manager” is generic, but “stage” is a term Apple already uses. “Stage” in “Center Stage” refers to the space that is in view of the camera outside the computer, but “stage” in “Stage Manager” refers to windows inside the computer.

What am I toggling between?

Are these collections of windows “sets”, “tasks”, “workspaces”, or something else? If I were backseat driving, how would I tell the person at the computer where that Safari window is?

Does the left-hand dingus even have a name?

So there’s a new dingus on the left-hand side. What’s it called? If it can be separately hidden or disabled, then how is that functionality labelled?

Complexity

One problem Apple is trying to solve with Stage Manager is the complexity caused by too many windows and applications being open at once. Microsoft’s solution to this problem is tiling, which goes hand-in-hand with its implementation of multiple desktops. Stage Manager seems like an added alternative “mode” that is entirely incongruous with Apple’s existing window management tools. In my experience, adding mechanisms in lieu of replacing, integrating with, or evolving existing ones stems from either laziness or cowardice (or both).

Why are we adding another dingus?

Both macOS and iPadOS have docks. I get that those docks “don’t work that way”, but they shouldn’t be hallowed ground. There is an opportunity to replace them with something way better.

I mean really, how does this work with Spaces and fullscreen apps?

Spaces is Apple’s branding for multiple desktops. I would argue that fullscreen and split view apps in both macOS and iPadOS are effectively spaces that are dedicated to one or two apps. Stage Manager seems like a competing alternative to that paradigm.

Is there a keyboard shortcut for switching between whatever these things are called?

Does Stage Manager even have keyboard shortcuts? If so, how do they work with existing command+tab and command+` switching?

Window Management

A long time ago, I heard or saw the comment that Microsoft’s naming convention circa Windows XP was misguided because it used the word “my” when it should have used the word “your”. In this person’s mind, “my” meant the computer, so “My Documents” read as “the computer’s documents”. Whereas that Windows issue was limited to labeling, I worry that Stage Manager will actually take ownership of your windows.

Will I be allowed to be messy?

People are messy. User interfaces with overlapping windows lean into that nature by letting the desktop be messy. It looks awful, but it works surprisingly well. People can put windows exactly where they need them to be, and the act of arranging windows makes it easier for people to find them later. Stage Manager seems to want to arrange windows for you. While that will certainly look cleaner, I worry that it will ultimately be less useful.

Final Thoughts

I am happy to see iPadOS get some form of windowing, but I would argue what iPadOS really needs is a holistic paradigm. It’s the first beta and I am sure there will be improvements, but at the moment Stage Manager seems like a concept more than anything else. I don’t see how this concept becomes the paradigm iPadOS needs, let alone how it coexists with the nearly 40-year-old desktop paradigm found in macOS.

Stepping Stones

Over at Ars Technica, Jeremy Reimer put together a nice retrospective of Apple’s Newton handheld in celebration of its 30th anniversary. The Newton turned out to be the first in a slew of PDAs released by a variety of companies throughout the 90s. While these companies correctly predicted a future where handheld computers would be ubiquitous, they all made two wrong assumptions: that said future was just around the corner and that the ubiquitous handheld computer was going to be a PDA. Despite some impressive innovation and some truly good products, most PDAs of the 90s failed entirely, and those that did succeed had limited appeal. The technology wasn’t quite there yet, and more importantly, those who made them hadn’t figured out how most people would even use a handheld computer.

Virtual reality today feels to me a lot like PDAs did in the 90s. There has clearly been a ton of innovation. VR felt like science fiction as recently as a decade ago, and yet today there are a number of good VR headsets made by a variety of manufacturers. That said, and like PDAs of old, the technology doesn’t seem to be quite there yet and everyone is still trying to figure out how most people will even use VR outside of gaming.

That’s not to dismiss VR. Millions of people love and use their VR headsets just like millions of people in the 90s loved and used their PalmPilots, but using a PDA in the 90s required work. Users had to regularly sync their device with a personal computer. They had to frequently replace non-rechargeable batteries. They even had to learn and practice how to properly input text. The work involved with using a PDA daily in the 90s meant there were no casual users. Owning a PDA either led to becoming an enthusiast or forgetting about it in some junk drawer.

VR arguably requires less work, but has the added hurdle of convincing people, many of whom were skeptical of AirPods, to wear goggles. As with early PDAs, these barriers create the same self-selection where the only people who buy and evangelize VR headsets are those who are willing to accept these trade-offs. Without an insanely compelling use case, these barriers will continue to limit VR to enthusiasts with no clear path toward ubiquity.

Being a product for enthusiasts is not a bad thing. There are many large non-ubiquitous markets that do just fine and even most ubiquitous products started out as something for enthusiasts. Personal computers started in the late 70s and weren’t truly ubiquitous until the internet became mainstream in the 90s. Video games were considered just for children until those children grew up and kept playing them. I would even argue the Mac was mostly an enthusiast market for a period of time.

PDAs never achieved ubiquity. Instead, they were a necessary stepping stone toward smartphones, which themselves weren’t ubiquitous until iPhone and Android1. I think VR has a bright future, but I don’t see VR as it exists today being the thing that becomes the next smartphone. Rather, I see VR as another necessary and very exciting stepping stone toward something ubiquitous that is yet to come2.


  1. Here’s an interesting tidbit. We are roughly the same distance in time (15 years) from the iPhone being released as the iPhone was to the Newton’s release. ↩︎

  2. Unlike PDAs, which were subsumed by smartphones, I don’t see VR being immediately subsumed by AR. Even if AR and VR ultimately converge, I think differing priorities will long necessitate having two kinds of devices. ↩︎

Best Overall Shortcut

Yesterday, to my great surprise, I won “Best Overall Shortcut” in The MacStories Automation April Shortcuts Contest. I don’t usually participate in contests, I suspect because most of them don’t involve things that I am both passionate about and skilled at. MacStories’s contest might have been the first one I was actually excited to enter. The contest was announced on April 4th and each contestant was allowed to submit two Shortcuts. I submitted Join Zoom Meeting that same week and What’s on KUTX less than a week later.

Realistically, I thought the most likely outcome was either nothing or an honorable mention at best. In my wildest dreams, I thought that maybe, just maybe, I might be able to win a subcategory like “Best Everyday Shortcut” or “Best Media Shortcut”. When the winners were finally announced yesterday, I immediately started reading. By the end of the subcategories, I figured it wasn’t my year. I wasn’t even all that disappointed. How could I be? Every category had an amazing Shortcut that deserved to win. They were all so good that they collectively made me excited to know what awesome idea won “Best Overall Shortcut”.

Then I read the words “Best Overall Shortcut” followed by the words “Join Zoom Meeting”.

“That can’t be right.”

What came next were some of the nicest words ever written about my work, courtesy of Federico Viticci:

When John and I started planning Automation April last November, something was immediately clear to us: when it’d eventually be time to pick the shortcuts we consider the “best” ones, we’d have to choose those that strike an ideal balance of unique features, intuitive design based on native Shortcuts actions, ease of use, and integration with Apple’s operating systems. Nowhere are these qualities as evident as in the Best Overall Shortcut for Automation April 2022: Join Zoom Meeting, created by Jack Wellborn.

“Jack Wellborn”

It was real.

In a daze, I immediately went upstairs to tell my wife. It wasn’t until after my head cleared enough for me to send out a thank you Tweet that I continued reading the other incredibly kind and validating words from Federico.

My reading eventually took me right into the honorable mentions section, where I found even more amazing Shortcuts. Because I had skipped over the table of contents in my haste to see who’d won, it wasn’t until I got to the last honorable mention that I read “What’s on KUTX” followed by even more kind words, this time written by John Voorhees.

What’s on KUTX by Jack Wellborn is a great example of a way to use Shortcuts for music discovery.

So I sent out another thank you Tweet.

Normally I would feel like a fool for missing the honorable mention, but instead I am so honored and elated to be recognized by one of the most venerable sites in the Apple community for my efforts in an area that I am deeply passionate about.

In addition to Shortcuts, I have long been building automations using JavaScript, AppleScript, and even a little Automator. I truly believe that automation is a key piece to fulfilling one of the founding promises of personal computers: freeing people from the boring, repetitive tasks that get in the way of doing creative work. The sad reality is that most people, even those who are technically inclined, don’t use automation, and so they are stuck manually repeating their boring tasks. Despite my criticisms, I think Shortcuts has the best chance to actually change that.

About My Submissions1

At the start of the competition, I gave myself three rules:

  1. Minimal or no code — The idea of wrapping some blob of code with one or two actions felt disingenuous and not in the spirit of the competition.
  2. Suitability — Related, I wanted my submission to be well suited to the capabilities of Shortcuts.
  3. Support for iOS, iPadOS, and macOS — I wanted to embrace what I see as Shortcuts’ biggest benefit, which is that they work across Apple’s platforms.

Both of my submissions were works in progress when the contest started. I selected Join Zoom Meeting and What’s on KUTX because they were both automations I was already frequently using in some form or another that could potentially adhere to these rules.

Join Zoom Meeting

Join Zoom Meeting is a Shortcut that joins an upcoming Zoom meeting directly in the Zoom app. It can also open any Zoom link provided via the Share Sheet on iOS and iPadOS, or a Quick Action/Service on macOS.

Us working stiffs have a lot of meetings. These meetings increasingly happen remotely over Zoom, which typically involves multiple steps across multiple apps.

  1. Click the meeting notification to open the meeting invite in Calendar.
  2. In the calendar event pop-over, either click the invite’s “Join” button (if available) or find the Zoom link buried in the notes. This will open a new browser tab.
  3. Once prompted in the browser tab, click “allow” to open the Zoom app.

Not only does this cumbersome process disrupt flow, it also leaves behind clutter in the form of Zoombie tabs. Zoombie tabs are my name for the tabs that solely exist to prompt users to start the Zoom app. They can of course be closed almost immediately, but going back to the browser to close a tab is not always top of mind when joining a meeting so these tabs frequently become lingering vestiges of Zoom meetings long since dead.

I had attempted to join Zoom meetings using Automator before Shortcuts came to the Mac. That early attempt resulted in a Quick Action/Service that could only convert and open selected Zoom links. It had no calendar integration because, as far as I can tell, finding Calendar events using AppleScript is insanely complicated, if not entirely impossible.

Join Zoom Meeting is a much better solution because of Shortcuts’s excellent Calendar actions. Like Automator, Shortcuts on the Mac can also create Quick Actions/Services, but in addition to converting selected Zoom links, Join Zoom Meeting can also just open the current meeting using a keyboard shortcut. Now I just press control+z when I see a meeting notification and Join Zoom Meeting puts me right in Zoom. No more clicking through pop-overs and prompts across different apps, and no more Zoombie tabs.
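For the curious, the heart of the link conversion can be sketched in a few lines of Python. The zoommtg:// URL scheme is Zoom’s own way of launching its app directly; the regular expression and function name here are my rough approximation for illustration, not the Shortcut’s actual logic.

```python
import re


def to_zoommtg(url):
    """Convert a Zoom web link into a zoommtg:// link that opens
    the Zoom app directly, skipping the browser prompt entirely."""
    m = re.match(r"https://([\w.-]*zoom\.us)/j/(\d+)(?:\?pwd=([\w.-]+))?", url)
    if m is None:
        return None  # not a recognizable Zoom meeting link
    host, meeting_id, pwd = m.groups()
    link = f"zoommtg://{host}/join?confno={meeting_id}"
    if pwd:
        # Carry the meeting password through so Zoom can join unprompted.
        link += f"&pwd={pwd}"
    return link
```

Feed it a link like https://zoom.us/j/123456789 and you get back zoommtg://zoom.us/join?confno=123456789, which the OS hands straight to the Zoom app.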

What’s on KUTX

What’s on KUTX is a shortcut that adds the song currently playing on Austin’s public music station (KUTX) to your Apple Music Library, or notes that song if it can’t be added.

I love music, but discovering new music is a challenge. Luckily, I live in Austin, Texas. While I’m too much of a homebody to comment on whether it is truly the “Live Music Capital of the World”, I can say without a doubt that our public music station is top notch. KUTX has human DJs that play a variety of music from across genres. I get most of my new music from KUTX, but therein lies the problem. How do I add a song to my music library from broadcast radio?

Shortcuts, obviously.

Months ago, I discovered the API that KUTX uses for its “On Now” widget, which returns JSON that can easily be parsed into a Shortcuts dictionary. My first Shortcut only added songs to a note that I could periodically review later. For this competition, I decided to revisit this Shortcut and was able to do something truly magical: take a song playing on the radio and instantly add it to my music library.

The JSON includes a “buy.iTunes” field with a link to NPR. To my surprise, that link doesn’t actually go to NPR. It’s a redirect to Apple Music. Even more surprising, it turns out any redirect can be resolved using the “Expand URL” action. Finally, and most surprising of all, the “Add to Playlist” action accepts Apple Music links.
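In conventional code, that whole pipeline is only a few lines. Here is a rough Python sketch; note that the field path into the JSON is my guess based on the description above, not KUTX’s documented payload.

```python
import urllib.request


def itunes_link(on_now):
    """Pull the "buy.iTunes" link out of the parsed "On Now" JSON.

    The field path here is a guess for illustration; KUTX's actual
    payload may nest things differently."""
    return on_now["song"]["buy"]["iTunes"]


def expand_url(url):
    """Follow redirects to the final URL -- the equivalent of the
    Shortcuts "Expand URL" action."""
    with urllib.request.urlopen(url) as resp:
        return resp.geturl()
```

The resolved Apple Music URL from expand_url is the piece that an action like “Add to Playlist” can then consume directly.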

Given this simplicity, you might be wondering how this Shortcut involves 89 actions. Many of them exist to better ensure a successful Apple Music redirect, while others are there to give appropriate feedback when a song already exists or when it can’t be found in Apple Music. Simply put, many actions were added to make this Shortcut nice to use.

Thank You

A day later, and I’m still over the moon. Thanks to MacStories for putting together this contest. Thanks (again) to Federico Viticci, John Voorhees, and the other judges2. Thanks to my family who suffered through a week of me debugging by periodically shouting “Hey Siri, what’s on KUTX?” Thanks to everyone who sent nice words after seeing the news. Thanks to everyone who encouraged and helped me on my journey to becoming a better automator. I will never forget this joy and recognition.


  1. These are paraphrased from my submissions’ descriptions. ↩︎

  2. Special thanks to Jason Snell, who went to bat for the “poor developer of the public radio shortcut”. ↩︎

Apple’s Convergence

During much of this pandemic, I’ve been using Fitness+ for yoga and strength training. Those workouts typically involve my iPad, Apple Watch, and usually my AirPods. While this use-case may seem run of the mill to anyone who’s experienced Fitness+, I think it’s an example of a subtle yet significant shift in how Apple is changing the way we interface with our personal computers.

When you think of the term “personal computer”, it’s easy to picture a device that is directly in front of someone. The logo used for macOS’s Finder exemplifies this mental model. AlbenFaris, the company behind the logo, summarized its design as follows:

The interplay between the face of the computer and the profile of the user reinforces the idea of a special relationship, a partnership, between people and their computers.

The symbolism behind Finder’s icon works just as well with Apple’s more recent platforms. Interfacing with iPhones, iPads, and even Apple Watches requires face-to-face interaction just as much as Macs do, and for a long time, interfacing with one of these personal computers inherently precluded interfacing with any other. For example, let’s say it’s 2010 and you just received a text message on your iPhone while writing a document on your Mac. Not only was each device completely unaware of the other, but interfacing with the iPhone to address that message meant temporarily pausing interaction with the Mac.

Compare this to my modern day Fitness+ experience where interfacing with a given workout happens across three1 different computers that are each completely aware of the others. The Apple Watch sends health data to the iPad to display, the iPad sends audio to the AirPods, and any of these computers can pause or resume the workout.

Many people assume that Apple’s platforms, the iPad and Mac in particular, will converge solely by inevitably becoming more like one another — that Macs will get touchscreens and that iPads will get proper windows. Experiences like Fitness+ betray an entirely different kind of convergence Apple has undoubtedly been pursuing for years, one that lets Apple platforms lean into their strengths while also being individual pieces in a singular experience amongst increasingly interconnected devices.

One way Apple is doing this is through Continuity. “Continuity” is the umbrella term Apple uses for features that involve multiple devices. Here’s how Apple describes it2:

When you use a Mac, iPad, iPhone, or Apple Watch, you’re able to do incredible things. And when you use them together, you can do so much more. Make and receive phone calls without picking up your iPhone. Use your iPad to extend the workspace of your Mac. Automatically unlock your Mac when you’re wearing your Apple Watch. It’s like they were all made for each other. Because they were.

Take Universal Control for example, which lets a user share mouse and keyboard input from a Mac with other Macs and/or iPads. I think Universal Control exemplifies Apple’s thinking with Continuity. Rather than try to have a single device that compromises to satisfy both the ergonomic and user interface requirements of a touch-based tablet and a traditional desktop environment, have two purpose-built devices that work seamlessly together when needed3. While Universal Control only became publicly available last month, it’s just the latest in a series of interconnectivity features that started when Continuity was first announced in macOS Yosemite, almost 8 years ago.

Apple’s other major effort to converge platforms through interconnectivity is Siri. While it has some platform-specific differences, I get the sense that Apple ideally wants Siri to be device agnostic, in that interactions that happen with a HomePod shouldn’t be any different than ones that happen with an iPhone or a Mac. This is further evidenced by Siri Shortcuts, Apple’s app for building home and personal automations. Like Siri, only a few of Shortcuts’s built-in actions are platform-specific. A majority of them work across all of Apple’s devices. For example, my rather sophisticated shortcut that finds and downloads songs playing on a local radio station works on my Mac, iPhone, iPad, and even my many HomePods.

For decades and across form factors, the idea of interfacing with a personal computer involved a person face-to-face with a computer. Apple’s “personal computer” is no longer just the device in front of you, but the series of increasingly interconnected Apple devices that are all around you.


  1. Four, if you count each AirPod. ↩︎

  2. That an up-to-date product marketing page even exists for something that was announced almost 8 years ago is evidence of how important Apple considers Continuity. ↩︎

  3. Business-wise, I think it’s worth mentioning that a touch screen Mac, regardless of how good of an idea it is in practice, would eat into iPad sales. Compare that to this kind of interconnected convergence, which incentivizes buying multiple Apple devices. ↩︎

Peek Course Correction

As someone who will replace my 27″ iMac sometime this year, I really think Apple prioritizing the separate 27″ Studio Display makes a ton of sense. When I bought this iMac 7 years ago, three things were true about Intel MacBook Pros:

  1. They were noisy (compared to the iMac)
  2. They weren’t great in clamshell mode
  3. External I/O couldn’t even support 5K displays

While I certainly welcomed less fan noise, I bought this iMac primarily for the display and wager most of those who bought it did so for that same reason. That said, most customers don’t buy iMacs. They buy laptops.

I think Apple’s Mac hardware releases over the last three years could be fairly described as a course correction. These Apple Studio Displays are undoubtedly part of that course correction. Apple’s mistake was not selling this display in 2016, when Thunderbolt 3 MacBook Pros were released. They left money on the table for 6 years(!) by limiting arguably the best high-end1 non-gaming display to a relatively niche desktop line. These new Studio Displays connect to every Mac that Apple sells, including laptops. It’s a peripheral as well suited to those who only need the power of a $999 MacBook Air as it is to those who need a maxed-out $8000 Mac Studio. Furthermore, thanks to Apple Silicon, MacBooks connected to this display will be whisper quiet and work just as well as if the display were integrated, regardless of whether they are in clamshell mode or not.

Even before yesterday’s announcement, I had decided to replace this iMac with a laptop. Now that decision is a no brainer.


  1. I’m not counting even more niche insanely expensive reference monitors. ↩︎

A Sweet Solution

A few weeks ago, I Tweeted how web developers should appreciate WebKit/Safari as it’s really the only browser that can keep “Chromium’s dominance in check”. In hindsight, and as is the case with many Tweets, my wording could have been clearer. What I meant by “in check” here was that Safari on iOS is really the only browser that prevents Chromium from unilaterally dictating web standards. Not surprisingly, some people took umbrage at this sentiment, but not in a way that I expected. I expected pushback that Safari was a laggard in standards adoption, an argument that maybe had more merit a few years ago. Instead, I saw an argument about how Apple bans all competition from iOS and deliberately cripples Safari to hold the web back. While I get the argument that Safari benefits from default status in iOS, the notion that Apple is deliberately making Safari worse is absolute nonsense. Honestly, I thought the conflation of iOS being a closed platform with the merits of Safari was a bit of a reach until I privately discussed the topic with John Gruber, who quickly and smartly pointed out that it was really about progressive web apps (PWAs).

Progressive web apps are “write once” web applications written in HTML, JavaScript, and CSS that can be “installed” on any mobile or desktop computer via the browser. They are an alternative to “native” applications that are written for a specific platform using that platform’s languages and frameworks. As far as I can tell, the biggest boosters behind progressive web apps are Google and, more recently, Microsoft.

Progressive web apps are an obvious fit for Google, a web company with many popular web services and a web based OS. But why Microsoft? At first glance, Microsoft seems like an unlikely bedfellow for progressive web apps. Why would the maker of Office make it easier for its customers to use something like Google Docs? What I believe is that Microsoft realized they couldn’t stop Google Docs or progressive web apps given the popularity of Chrome. Chrome is not just a browser. It’s also an alternative platform that, unlike Java and Flash before it, is beloved. Web-based developers love its features and tools. IT departments love not having to install and troubleshoot software that runs through a browser. Users love not having to update their software. So long as Chrome is ubiquitous, Google is an existential threat to Microsoft. So what is Microsoft doing? To paint a picture, here’s an incomplete timeline of events that I’ve noticed over the past few years.

By going all in on JavaScript-based cross-platform development, Microsoft has clearly decided to become Google before Google becomes Microsoft1.

So why doesn’t Apple want to support progressive web apps? People assume it’s just because progressive web apps would hurt App Store revenue. While I am sure that’s a factor, I suspect the App Store is the least of Apple’s concerns. Like Microsoft, I suspect Apple sees progressive web apps as an existential threat. Unlike Microsoft, however, Apple can’t address this threat by completely embracing progressive web apps. At the end of the day, Microsoft can become Google because they are both software and services companies. Their bottom lines go up when their stuff gets installed on more devices and the cost of supporting those devices goes down. By comparison, Apple is a hardware company. Apple makes money selling premium devices by marrying premium hardware with best-in-class user experiences. Progressive web apps would invariably dilute both.

“Write once” solutions necessarily prioritize consistency across platforms, which invariably leads to lowest common denominator software designed to support lowest common denominator hardware. Not only does this handicap the advantages of premium hardware, it also creates a barrier to releasing competitive hardware features. A feature like Touch ID only works when software APIs can be immediately updated to support new hardware capabilities. Apple could not unilaterally release something like Touch ID in a world where it doesn’t have direct control over its APIs. Progressive web apps are an existential threat to Apple, not because they could be an end around to the App Store, but because they could lead to a loss of platform control. Apple wouldn’t be able to justify $1000 phones when most of the software works just as poorly as it does on $300 phones.

All that said, I think Apple should consider supporting progressive web apps.

Part of my argument on Twitter was that Apple is a company that, like any company, acts in its own interest. Up until recently, I think not supporting progressive web apps was in Apple’s interest for all of the reasons I listed above, but the situation has changed. Apple has been facing increased scrutiny over its App Store. Regulators are increasingly pushing to force Apple to allow alternative payment processors and even to allow apps from sources other than the App Store.2 To reiterate, Apple’s nightmare scenario is not losing App Store exclusivity, it’s losing control of its platforms. In my mind, the quickest way that happens is as follows:

  1. Regulators force Apple to allow apps to be installed outside of the App Store.
  2. Other large companies such as Microsoft, Google and Epic use this as an opportunity, not just to install apps, but also to install runtimes for their own app platforms.
  3. Apple can no longer easily push changes to its own frameworks as doing so means going through these competing runtimes.

Apple loses on two levels in this scenario. Not only could native iOS apps be installed from anywhere, some of those apps would invariably be runtimes for competing platforms that Apple has zero control over. Chromium progressive web apps already exist and could easily be the first, if not the biggest, of these competing runtimes.

Embracing progressive web apps in Safari might be able to kill two birds with one stone. Regulation-wise, Apple could point to PWA support as a viable “open” and “standard” alternative to its closed App Store. Control-wise, Apple could still have a heavy hand in how progressive web apps work on its platforms. Safari-based progressive web apps could be limited or extended in ways that best serve Apple and its customers. Apple could further use the inherent limitations of progressive web apps to distinguish native experiences only available in the App Store.

Apple’s choice is obvious when the only options on iOS currently are progressive web apps or no progressive web apps. I would think that choice would be equally obvious if and when those options became Chromium progressive web apps or Safari progressive web apps, because while Chromium being able to unilaterally dictate web standards would be bad, it being able to unilaterally dictate app standards on iOS would be much worse.


  1. Microsoft’s current pivot toward JavaScript and the web is eerily reminiscent of the one kicked off by Bill Gates in the 1990s. This may sound crazy, but I might be a little worried if I were Google. ↩︎

  2. I think it goes without saying that Apple’s hubris with its App Store has made matters worse for the company. I don’t think we’d see this kind of heavy-handed regulation being discussed in some alternate reality where Apple still had good developer relations. ↩︎

A Literal Apple Tax

Manton Reece recently speculated that Apple might support third party payment processors via some sort of API. Great minds think alike. Manton suspects Apple could use such an API to collect a service charge for the App Store. Not only do I agree, I think it bolsters my argument that some sort of API/entitlement-based solution is probably the best and safest route for Apple to support third party payment processors. Here’s what I wrote in 2020:

The way I am thinking about it, using a non-Apple payment processor would involve two entitlements. One to be a payment processor and another to use it. Two entitlements would allow Apple to stop 3rd Party payments from a single abusive app or from any app using an abusive payment processor. Furthermore, revoking this entitlement wouldn’t mean having to remove an offending app from customers’ phones. People would still get to play their games and keep their in-app purchases already made through the third party, but new purchases would either be impossible or go through Apple until the entitlement was reinstated.

I regrettably hadn’t considered using API/entitlements for service charges at the time, but I think it only bolsters my overarching point that using them gives Apple what it ultimately wants: control. Where Manton and I differ is that he seems to think that collecting a service charge would be untenable, even if Apple knows the price involved in a given transaction.

Here’s what he wrote:

In this new world Apple imagines, developers will be collecting all of the sales into their own bank account, and then paying Apple the 11% or whatever Apple ends up demanding. There is a huge psychological difference between these approaches, just as there’s a difference between getting taxes taken out of your paycheck automatically and having to write a big check to the government.

While I agree that getting developers to manually pay Apple some service charge would be untenable, I don’t think that’s how Apple would necessarily have to operate. Apple could use the same thing they seem to be already using to justify taking that service charge in the first place: the App Store. The App Store already has users’ credit card information. Apple could design this hypothetical API in such a way that it deducts Apple’s service charge before sending the remainder to the third party. For example, given an 11% service charge and a $1 in-app purchase, Apple could first charge 11¢ and send the remaining 89¢ to the third party payment processor to charge separately. Alternatively, Apple could present a price with their service charge already included. Given the same 11% service charge and a $1 in-app purchase, the API would present the user with a sum total of $1.11.
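To make those two approaches concrete, here is a toy sketch of the arithmetic, assuming the hypothetical 11% rate above. The function names and the work-in-integer-cents representation are my own illustration, not anything Apple has described.

```python
def split_purchase(price_cents: int, fee_rate: float = 0.11) -> tuple[int, int]:
    """Approach 1: charge Apple's service fee first, then send the
    remainder to the third-party payment processor to charge separately.
    Works in integer cents to sidestep floating-point rounding issues."""
    apple_fee = round(price_cents * fee_rate)
    return apple_fee, price_cents - apple_fee

def gross_up(price_cents: int, fee_rate: float = 0.11) -> int:
    """Approach 2: present the user one total with the service charge
    already added on top of the developer's price."""
    return price_cents + round(price_cents * fee_rate)

# A $1 in-app purchase at an 11% service charge:
print(split_purchase(100))  # (11, 89): 11 cents to Apple, 89 to the processor
print(gross_up(100))        # 111: the user sees a $1.11 total
```

Either way, the point is the same: because Apple sits in the middle of the transaction via the API, it never has to trust the developer to write it a check.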

The best alternative I’ve heard to using some sort of API is to simply allow developers to link out to a web page for payment. This still makes a lot of sense to me in that it lets Apple wash its hands of any mess created by those developers. However, that simple solution won’t work if Apple is planning on collecting a service charge, because there would be no way for Apple to get the sales data necessary to do so. An API could let developers offer different payment processors, but Apple would still get their cut and (I would argue more importantly) retain the relationship with the customer.

What’s an iPad?

This past summer I was faced with a dilemma. I needed to replace my aging 9.7″ iPad Pro. There were two options: get another iPad Pro or a MacBook Air. Both had Apple silicon and cost roughly the same after adding accessories. Not surprisingly, I am more a Mac user than an iPad user, and so part of me wanted to ditch the iPad entirely. I didn’t though, for three1 reasons:

  1. Center Stage seemed like a killer feature, especially to this father who attends weekly FaceTime and Zoom calls with grandparents.
  2. The iPad’s simplicity made it ideal as a secondary device that I could use for personal stuff while at work.
  3. While I had my frustrations with the platform, my old iPad Pro had been terribly slow for months. I wasn’t sure how much of my frustration stemmed from that lack of performance. Similarly, the 9.7″ screen always felt cramped to me and I had long yearned to try the larger 12.9″ model.

People love their iPads, and I wanted to give this next iPad the best shot at winning my affection, so I ordered a new 12.9″ iPad Pro with the M1 processor.

Almost six months in, I can happily report that all my reasoning was sound. Center Stage is a killer feature, the iPad remains an ideal secondary device, and many of my frustrations did have to do with performance or screen size. All in all, I am happy with this new iPad… but I still don’t love it.

As with my prior iPad Pro, my remaining issues lie with the software. Last year I wrote about the iPad’s poor multitasking, calling it a broken paradigm. Since then, iPadOS 15 has improved multitasking by doing the most obvious things: window controls, floating windows, rudimentary window management2. I don’t mean “obvious” in the complimentary “in hindsight…” kind of way, but in the “how in the hell did it take Apple this long to add things they’ve been doing for almost 40 years” kind of way.

I’ve heard others speculate that there’s been a debate within Apple between whether to keep the iPad simple or make it more complex. I think there’s something to that, but I think behind that hypothetical debate is a fundamental question: Is the iPad a computer?

A spiritual difference between Mac and iOS is that the former was designed with a mindset that the computer was part of the solution whereas the latter was designed with a mindset that the computer was part of the problem. Macs are proudly computers, and they’re great. iPhones are computers that don’t feel like computers, and they’re great. The original iPad was another computer that didn’t feel like a computer, and it truly was great until people wanted to do computer-y things with it.

For a long time, it seemed Apple tried to support those computer-y things while also keeping the iPad from feeling like a computer. This explains why the platform has advanced at such a sluggish pace. Anything deemed too computer-y was kiboshed or replaced with something worse before reluctantly being added later. Why was there no way to manage files? Managing files was too computer-y. Why was multitasking weirdly gesture based for so long? Window controls were too computer-y. Why is there a menu bar tucked away in a command key that most iPad users don’t even have? Menu bars are too computer-y.

Finally adding window controls, some basic windowing, and a hidden menu bar is why multitasking in iPadOS 15 was so well received, but I would argue that any celebration wasn’t because of those UI elements alone. They really aren’t that great, and I’ll probably trade in this iPad for a MacBook Air if they’re still all we have in iPadOS 16. What people are celebrating, and why I might stick with this iPad a bit longer, is that these features signal that Apple recognizes something we’ve known for a long time: the iPad is a computer.


  1. Another reason was that the MacBook Air was due for a refresh in the next 6-12 months. While I could replace whatever I bought with this new model, I assumed this newly designed Air would make the current ones feel like chopped liver and tank their resale value. ↩︎

  2. The iPadOS UI paradigm is still broken. A UI needs to fundamentally answer three questions: Where am I now? Where is the thing I want? How do I get there? Before multitasking and in iOS today, this was straight forward. There was a home screen and apps. How do I get to Safari? Hit the home button and tap on the Safari app. With split views and multitasking, there doesn’t even seem to be a canonical lexicon to describe where things are. What is the noun for the thing that is being split? Is it a view? Maybe it’s a window? Okay, so how do I get to that Safari “window”? Maybe it’s on that “screen”, but that’s also not really a term Apple uses as far as I am aware. ↩︎