In Search of the Automatic Platform for Personal Computers

Most people love their smartphones more than they do their personal computers. The biggest reason by far is that smartphones bring the internet to them everywhere, but another less obvious reason is that using smartphones is like driving an automatic1. They are way less demanding than Macs or PCs running Windows, and I don’t just mean maintenance. Certainly running updates and installing software is more manual in macOS and Windows, but that occasional maintenance pales in comparison to the constant manual user intervention involved with merely using those platforms. Where should this file live? Which of them sync? How should my windows be arranged? When should I close them? What should I even do with the desktop? Which apps actually need full disk access?

By contrast, the default experiences of iOS and Android don’t really demand any manual intervention or decision making from their users. There are no windows to deal with or files to manage. Any need for user intervention is optional and progressively disclosed behind apps and/or features. Photos, notes, music, and other documents are typically managed within their respective apps. Both iOS and Android have any number of settings that can be tinkered with. People can intervene more with their smartphones, but they don’t have to.

Personal computers do have a more automatic platform of sorts, the web. Like with smartphones, many people prefer web apps over traditional native ones. Web-based apps are so automatic that they are effectively on demand. No one has to install Gmail, Google Docs, or Slack. They just go to some website where they always get the latest version of the app whenever they need it. Web apps also don’t require users to deal with windows or files. Their emails, flow charts, conversations, and other documents are all self-contained within one browser window. Like on smartphones, users can customize their experience and manually manage their documents, but again, they don’t have to.

Given most people prefer automatic platforms, why hasn’t Android, iOS, or the web completely overtaken macOS and Windows? Why aren’t offices filled with Chromebooks and iPads? I think most people who prefer automatic platforms end up using manual ones on personal computers for three reasons: familiarity, apps, and cross-app productivity. They want something familiar relative to what they already use, they need specific apps, and those apps need to work cohesively with one another.

Google’s ChromeOS found on Chromebooks is very familiar to many people who are already using web apps like Gmail, Google Docs, and Slack. ChromeOS has even visually converged with Windows. The problem is ChromeOS has limited software support outside of web apps, and while web apps do support a growing number of use cases, gaps remain. This is why, I presume, Google is merging ChromeOS into its non-web-based Android platform.

iPadOS has much better app support, but is unfamiliar and oftentimes cumbersome to someone coming from Windows or macOS. In other words, iPads are a little weird for those who already have an expectation of what a computer is. This is best illustrated by the fact that Apple still doesn’t sell an iPad with a keyboard and trackpad included. They are optional add-ons that need to be purchased separately. Part of me wonders how many companies would jump at the chance to buy their teams a laptop running iPadOS in lieu of similarly priced MacBooks or even cheaper PC laptops.

That said, I think many professionals would still cling to macOS and Windows even in a world where ChromeOS was merged with Android or Apple did release a laptop running iPadOS, because neither would inherently address or improve those platforms’ limitations when it comes to cross-app productivity. Furthermore, I suspect most avenues of improving cross-app productivity on these platforms would be in tension with what makes them so automatic to begin with2. ChromeOS, Android, and iOS (including iPadOS) are automatic largely because they defer the complexity of manual intervention to apps that mostly exist in isolation from one another. This simplifies the platform, but makes working across apps much more cumbersome. I wrote about this when addressing Catalyst apps’ lack of cohesion in macOS.

The more complicated Mac builds ease almost entirely through cohesion. Wherever possible, Mac applications are expected to share the same shortcuts, controls, windowing behavior, etc… so users can immediately find their bearings regardless of the application. This also means that several applications existing in the same space largely share the same visual and UX language. Having Finder, Safari, BBEdit and Transmit open on the same desktop looks and feels natural.

By comparison, the bulk of iOS’s simplicity stems from a single app paradigm. Tap an icon on the home screen to enter an app that takes over the entire user experience until exited. Cohesion exists and is still important, but its surface area is much smaller because most iOS users only ever see and use a single app at a time. For better and worse, the single app paradigm allows for more diverse conventions within apps. Having different conventions for doing the same thing across multiple full screen apps is not an issue because users only ever have to deal with one of those conventions at a time. That innocuous diversity becomes incongruous once those same apps have to live side-by-side.

I do think personal computers will become more automatic, either through the evolution of macOS and/or Windows, or the advent of some other platform. Apple once thought that “some other platform” was going to be i(Pad)OS and Google seemingly still believes it’s going to be some amalgam of ChromeOS and Android, but I don’t think either can overtake today’s manual incumbents. They’ve achieved being more automatic largely by only supporting one app at a time. That is perfectly suitable for smartphone and web apps, but for multiple apps running side-by-side on personal computers, people need an automatic platform that won’t slow them down.


  1. I first came up with the manual/automatic analogy when reviewing Apple’s Stage Manager, but I think it’s suitable beyond just window management. 
  2. Nothing better illustrates this tension than Apple’s struggles to bring cross-app multitasking to iPadOS. Apple has made several attempts to bring basic multitasking to the iPad, and every time it has gotten pushback, both from those who think any multitasking needlessly complicates the iPad and from those who argue it hasn’t gone far enough. 
“I Don’t Think About You At All”

Justin Long is back again, this time for Qualcomm. Tom Warren, reporting for The Verge:

Apple’s former “I’m a Mac” actor Justin Long defected to Intel a few years ago, and now he’s looking to switch to a Qualcomm-powered Windows PC.

Which is worse? Being the butt of a joke that is only effective because lots of folks remember your ads from over a decade ago, or being spared because no one remembers your ads from just three years ago? Back when Intel featured Long in their ads, I observed that the campaign ultimately showed how commoditized they were. I can’t think of a better illustration of that point than an actual competitor of Intel also using Long to promote PCs, and oh by the way, no one cares because most people don’t even remember that Intel ran those ads.

macOS on iPads Would Be More Like Boot Camp than Classic Mode

In my last post, I likened hypothetical macOS virtualization in iPadOS to Mac OS X’s classic mode. The more I’ve thought about it, however, the more I think classic mode isn’t the right analogy. Classic mode was for anyone migrating to Mac OS X, which was practically every Mac user at the time. I don’t see virtualized macOS as something for every iPad user. A better analogy would be Boot Camp.

Boot Camp was (and seemingly still is) an optional feature that made it possible for technically inclined users to install Microsoft Windows on Intel-based Macs. It wasn’t for everyone. I would wager almost everyone who seriously used Boot Camp strongly preferred macOS, but needed access to Windows to perform some function of their work. This is exactly the type of audience I imagine virtualized macOS would support: technically inclined users who prefer the iPad, including iPadOS, but still need to occasionally access macOS to do some bit of work. The only difference is that Boot Camp installed Windows as a parallel OS that users had to reboot into. That was fine for 2006, but obviously doesn’t make sense in an era where virtualized environments can be just as performant while effectively running in an app.

Virtualized macOS on iPads? Yes, and…

With the most recent round of “holy shit these new iPad Pros are really powerful” naturally came “I just really wish iPadOS weren’t so limited”. One idea currently making the rounds for making iPadOS less limited is some sort of virtualized Mac mode, a la Mac OS X’s Classic Environment, wherein higher-end iPads could run macOS when connected to a trackpad and keyboard. I am not opposed to the idea and agree that virtualized macOS would serve as an “escape hatch” of sorts. Instead of physically fleeing to Mac hardware at the first sight of a complicated task, users could merely flee to macOS while using the same iPad hardware. I also think virtualized macOS is a way better idea than using macOS as a tablet OS because it would be a distinct mode where touchability isn’t expected.

That being said, I think supporting virtualized macOS on iPads would only serve power users, who are not necessarily pro users. While the two aren’t mutually exclusive — there are undoubtedly countless pro users on the Mac using things like Homebrew, AppleScript, and all sorts of other utilities — I would wager most pro users aren’t power users. To them, the computer is merely a conduit to the apps required to do their job. To non-power users, pro or otherwise, virtualized macOS on iPad would be messy. How do updates work? Can the two environments access each other? What happens to Mac mode when you yank the iPad out of the Magic Keyboard? Would virtualized macOS be allowed to run in the background? I think many power users would recognize and be fine with the trade-offs involved, but how would anyone, including Apple, go about explaining them to vanilla iPad users? “Well you can use this other class of apps, but only if you are connected to this $300 accessory. Oh and by the way, these apps are different from the ones you’ve been using on your iPad, even the ones with the same names. No, they don’t talk to each other.”

Virtualized macOS would serve Apple power users rather than raise the limits for professionals using iPadOS.

One of the main arguments for supporting virtualized macOS on iPads is that it would take some of the pressure off of iPadOS to do all the things a Mac does. That’s a fair argument, but I think there are other ways to take the pressure off of iPadOS without introducing a second operating system just for power users. While many of iPadOS’s numerous limitations can and should be addressed across the line — a better Files app, sound from multiple apps at once, etc… — there is one foundational limitation that Apple can’t address in software alone, and that is the physical screen size. This is an area where I think Apple could relieve the pressure on iPadOS by changing its constraints.

In an ideal world, iPadOS would somehow deliver professionals and power users an experience that satisfies three requirements:

  1. Is information rich enough to support a handful of apps on one screen
  2. Remains touch friendly
  3. Does the above on an 11-inch screen

If satisfying all three is impossible, and right now it sure seems impossible, what I think Apple should do is try to satisfy just two of those requirements. Virtualized macOS does this inherently. macOS can be information rich on 11 and 13-inch screens specifically because it doesn’t support touch. In theory, iPadOS could also become information rich at the expense of touch friendliness whenever a trackpad and keyboard are connected. Modern iPads already offer display scaling and it’s easy to imagine a future where this sort of scaling could change based on peripherals, orientation, and/or whether Stage Manager is enabled. While I don’t like the idea of diminishing touch in iPadOS, it would still be way better than running an entirely separate OS. Merely toggling scale modes when disconnecting an iPad would be way more elegant than suspending macOS running in a virtual machine.

That being said, what I have been arguing is for Apple to sacrifice number three. Have a “multi-app” mode available only when a Thunderbolt-enabled iPad is connected to a large external display and offer what would be an absurdly expensive Studio Display Touch1. Plugging in an iPad wouldn’t have to switch scaling because the screen would be large enough to be information rich and have touch friendly controls. Power and pro users could work across multiple apps on the same screen or have a single luxuriously large app. Tying multi-app mode to having a display connected naturally lets an unconnected iPad just be an iPad. Apps that are windowed on an external display would merely go back to being full screen when the iPad is disconnected, and a vast majority of people who love their iPad for what it is would likely never see this multi-app mode.

All this isn’t to say I think Apple shouldn’t support virtualized macOS on iPads. My position is more of a “yes, and”. Let high-end iPads run virtualized macOS and still address the limitations that exist in iPadOS. Virtualized macOS would delight the minority of power users while going largely unnoticed by vanilla iPad users, pro or otherwise. It would be helpful to some and harmless to most. That said, I also think Apple still has to raise the limits of iPadOS, even if that means revisiting the trade-offs inherent in an 11-inch touch-first device.


  1. In my mind, a Studio Display Touch would be the best option, but multi-app mode would still work with other displays. Even with today’s non-touch displays, it’s easy to imagine a designer illustrating on an iPad Pro that is flat on their desk with other apps open on a connected display. 
The Long Games of Mixed Reality Headsets

If you told people that Meta would start licensing their headset platform less than three months after Apple released the Vision Pro, most would assume that Apple’s headset must have sold like gangbusters. Not only has that not happened, it couldn’t have happened given supply chain constraints. So why license now? I have two thoughts: that Apple still disrupted Meta and that both companies are necessarily in it for the long game.

Meta’s first gambit was to dominate the market by being the first mover. After buying Oculus, Meta quickly established itself as the preeminent company in the very nascent headset market. At this time last year, they were touting a line of consumer and high-end hardware that was coupled with their OS. Without any meaningful competition, Meta could play the long game merely by continually releasing products over time, because they would undoubtedly be the default people chose when headset sales eventually took off. The Apple Vision Pro didn’t need a bunch of sales to disrupt this strategy. As crazy as it sounds, it did so by merely existing. Meta couldn’t just idly wait for headsets to take off anymore because the longer that took, the more time Apple would have to bring its compelling and tightly integrated offering down market.

Meta responded to Apple Vision Pro in two ways. They effectively shitcanned their high-end line and, more importantly, lowered prices on their consumer line. Undercutting the competition to dominate an industry is a classic and often very successful strategy, but only if there is an industry to dominate. Meta’s problem is that no one is buying headsets in meaningful numbers. The whole point of selling hardware on the cheap is that everyone comes to your ecosystem instead of the other guys’, but that doesn’t work when no one is buying. By cutting prices, Meta was trying to drive headset adoption now so they could maximize their first mover advantage. That apparently didn’t happen, so the company is necessarily pivoting.

By licensing their platform, Meta is embracing the long game. They are still hoping headsets will catch on this or next year, but their goal isn’t to be the default headset choice. Now it’s to be the default headset platform. Meta is betting that if headsets do take four or five years to take off, whatever Apple does won’t matter when every other headset on the market runs their Horizon OS. It worked for Microsoft. It worked for Google. It’ll probably work for Meta too.

Sounding Right with Apple’s Computerized Speakers

I recently connected my AirPods Pro to our Apple TV, something I usually do on the occasion that my wife wants to sleep rather than unwind with some show. The resulting sound is always exceptionally good, even when it’s not immersive. In this particular case, the AirPods didn’t try to envelop me in sound because I was watching an older show. Instead, the stereo audio was augmented in such a way as to suggest the sound was more or less coming from the TV. This is actually what I want most of the time. Some home theater enthusiasts may balk at reading that, but I don’t want to be enveloped in sound while trying to wind down. All I want is clear dialog and not to be jarred by suddenly loud musical scores and sound effects. In other words, I just want my TV to sound right.

The CRT televisions of my youth didn’t sound good, but they sounded right. Despite only having one or two laughably small speakers, I rarely had any issue interpreting the sound of whatever movie or show I was watching. By comparison, the flat screen televisions of my adult lifetime never sounded right. Their audio quality was better, sure, but they all sounded wrong. Dialog was quiet and often obscured by other audio tracks. Increasing the volume helped, but at the expense of jarringly loud moments.

Edward Vega did an excellent YouTube video for Vox explaining this phenomenon, and specifically why dialog in particular has gotten increasingly hard to hear. He lays out several reasons, but the parts I think are most germane to televisions not sounding right are dynamic range…

[A] big thing that [filmmakers] want to preserve is a concept called dynamic range. The range between your quietest sound and your loudest sound. If you have your dialog, that’s going to be at the same volume as an explosion that immediately follows it. The explosion is not going to feel as big. You need that contrast in volume in order to give your ear a sense of scale. But the thing is, you can only make something so loud before it gets distorted. So if you want to create that wide dynamic range you have no choice but to push those quieter sounds lower instead of pushing the louder sounds louder. So explosions go up and dialog comes down.

…and the large number of tracks necessary for modern surround sound.

The content that we watch [on our televisions and smartphones] is not mixed for us, primarily. Rerecording mixers mix for the widest surround sound format that is available, typically like big release films. That is Dolby Atmos, which has true 3D sound up to 128 channels. The thing is, if you’re not at a movie theater that can showcase the best sound Hollywood has to offer, [then] you can’t experience all of those channels. So after the movie is mixed for the 128 Atmos tracks, somebody has to create a separate version of the film’s audio where all those same sounds live on one or two or five tracks.

Basically, those who make movies and shows produce audio for multi-speaker theater setups, home or actual, and do so at the expense of more typical setups that involve just the TV’s built-in speakers1.
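To make that concrete, here is a toy sketch of my own (not from Edward’s video) showing why a wide-dynamic-range mix gets even harder on a TV’s speakers once center-channel dialog is folded down. The levels are made up, and the 0.707 center coefficient is simply the common -3 dB attenuation used in typical 5.1-to-stereo downmixes; the point is only to illustrate the gap the quote describes.

```python
import math

def to_db(amplitude: float) -> float:
    """Convert a linear amplitude (0..1) to decibels relative to full scale."""
    return 20 * math.log10(amplitude)

# Hypothetical peak levels from a theatrical mix with wide dynamic range:
# dialog is kept quiet so the explosion that follows can feel huge.
dialog_center = 0.10   # dialog on the center channel
explosion_fl = 0.90    # explosion on the front-left channel

# Naive fold-down of those two channels into a stereo left channel,
# with the center channel attenuated by the usual -3 dB (x 0.707).
left = explosion_fl + 0.707 * dialog_center

print(f"combined left-channel peak:  {left:.3f} (of 1.0 full scale)")
print(f"dialog in the downmix:      {to_db(0.707 * dialog_center):6.1f} dBFS")
print(f"explosion in the downmix:   {to_db(explosion_fl):6.1f} dBFS")
print(f"gap between them:           {to_db(explosion_fl) - to_db(0.707 * dialog_center):6.1f} dB")
```

With these made-up numbers the dialog sits roughly 22 dB below the explosion, a spread a theater can handle comfortably but a TV’s built-in speakers, or my ears at a late-night volume, cannot.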

Edward concludes his video by saying the issue is intractable and gives the audience three options:

So the solutions we have are:

  1. Buy better speakers and only go to theaters that have impeccable sound.
  2. Take a chill pill and try to just worry a little bit less about picking up every single word that gets said.
  3. Just keep the subtitles on.

Suggestions two and three are absurd, as is the notion of exclusively going to theaters. The only real suggestion is to buy speakers, but that raises the question: what kind of speakers does one need just to make the TV sound right? Those producing content would seemingly have everyone buy and install a home theater with capabilities as close to an actual theater as possible. My problem, beyond cost and wires, is that I don’t always want a theater-like experience for home viewing, just like I don’t want a concert-like experience for home listening. Most of the time, I don’t want an experience at all. Again, I just want my TV to sound right.

“Sounding right” seems like table stakes for any home theater system, but I’ve found it to be elusive. My previous receiver with two decent bookshelf speakers never sounded right, even after I added a center channel. In hindsight and given the aforementioned complexity and priorities of audio in modern movies and shows, I now don’t see how those three dumb speakers ever could sound right. In fact, it seems the only way to make shows and movies sound right with dumb speakers is to use an ever increasing number of them. That’s fine, even exciting, if you are a home theater enthusiast, but a bunch of dumb speakers is not the answer for me.

The answer for me, and I would wager most people, is computerized speakers2.

“Computerized speakers” is the term I am using to mean speakers with built-in computerized audio processing. This is in contrast to what I am calling “dumb speakers”, which merely reproduce already processed audio being sent from another component, typically a receiver. Having the smarts built in makes computerized speakers more flexible and less finicky than their dumb counterparts. With dumb speakers, more is necessary3. With computerized speakers, more is merely preference. You can have a single soundbar or fill the room with a bunch of computerized speakers for better immersion, and everything can sound right regardless. I’d wager most people don’t buy any speakers for their TV and that a majority of those who do just buy a soundbar.

Soundbars range in both quality and price. Some are cheap and probably sound like crap while others are expensive and reportedly sound quite good. Within this range, I think Apple is making some of the best computerized speakers4 for home theater on the market for their price. The HomePods that we typically use when watching TV also just sound right: all the dialog is clear, and I still hear everything without being jarred by some overly loud sound effect or music cue.

While the HomePods can only kinda sorta fake surround sound, the AirPods Pro reproduce a remarkably spatial surround sound experience. “Spatial” is the word that Apple uses and one that I think is very apt. Like with stereo, surround sound with AirPods Pro can feel like it’s at a distance, where the audio is set further back in the soundscape. Unlike stereo, this distance varies depending on what’s happening in the video. A close up sounds close while a medium shot sounds further away. Apple’s deliberate use of distance is best illustrated by a third example of its exceptional computerized speakers.

I don’t own a Vision Pro, but a friend let me try his a few weeks ago. Top of my list was to sample some of Apple’s immersive video. Immersive video is not the same as 3D video. 3D video still comes from a rectangle and is therefore still directional. You look toward the rectangle to watch the 3D video. Immersive video has no rectangle. The video is all encompassing, and you watch by looking all around. With non-immersive video, the Vision Pro’s audio pods kept some sound at a distance just like when I watch my Apple TV with AirPods. With immersive video, however, the sound is more often right there, which makes sense because you are right there.

Time was you could just buy the TV and have everything sound right. You could optionally add two decent dumb speakers and have everything sound good too, because everything you were watching was stereo or mono. The sound in modern movies and shows has gotten too complicated to sound good on two dumb speakers. Getting decent home theater sound with only one or two speakers requires good computerized speakers, and right now Apple is making some of the best computerized speakers for the buck, especially if you just want everything to sound right.


  1. Supporting the top end makes sense, but doing so at the expense of how the majority watches shows and movies seems folly to me. It would be like if the makers of Cheers insisted on filming and presenting their sitcom in letterbox despite it being viewed solely on standard aspect ratio televisions. 
  2. The obvious term to contrast with “dumb speaker” is “smart speaker”, but alas, that term is already taken. 
  3. It might be possible for a home theater system consisting of a receiver and two dumb speakers to sound right, but I am doubtful for two reasons. First off is the sheer number of drivers in a given speaker. Good dumb speakers tend to have one or two, maybe three drivers. Good computerized speakers, on the other hand, tend to have half a dozen or more. The second reason is that the market for home theater receivers has gotten more complex, not less, in part because they cater specifically to home theater enthusiasts. 
  4. It’s stuff like this that makes me feel like Apple should have kept “Computer” in its name. 
Shoebox Averted

Someone tried to pull a “shoebox” scam on me while I was traveling in NYC this week for work. By “shoebox” scam, I am referring to Charlotte Cowles’s The Day I Put $50,000 in a Shoe Box and Handed It to a Stranger. If you haven’t done so already, I implore you to read it.

I don’t typically answer many calls from unknown numbers, but one came in while I was at work that was very possibly from our kiddo’s spring camp. Upon answering, the person on the other end of the line identified himself as a deputy with the county sheriff. After confirming the name and address that this person already had, I was told that I had missed a summons, the implication being that there was some warrant out for my arrest. Already suspicious, I politely told the person that I would call him back at the number listed on the sheriff’s website. He seemed bothered by this, and my suspicions were confirmed when he instead demanded to call me back from that number himself. This experience was exactly like one Charlotte had described in her piece.

He told me his name was Michael Sarano and that he worked for the CIA on cases involving the FTC. He gave me his badge number. “I’m going to need more than that,” I said. “I have no reason to believe that any of what you’re saying is real.”

“I completely understand,” he said calmly. He told me to go to the FTC home page and look up the main phone number. “Now hang up the phone, and I will call you from that number right now.” I did as he said. The FTC number flashed on my screen, and I picked up. “How do I know you’re not just spoofing this?” I asked.

“It’s a government number,” he said, almost indignant. “It cannot be spoofed.”

Caller ID can be spoofed, even government numbers. Sure enough, before I could get my call out, another call came in from the number on the website. I could have ignored it, but answered out of curiosity. The person on the other end was not the same person I had been talking to before, but a different man. This man had a Southern-sounding accent, apt for Texas, and was immediately more intense. When I reiterated that I would call the number back, the intense Southern-sounding man became threatening. I don’t remember his exact wording, but it was something along the lines of “if you hang up this call, I am going to send a squad car right now and have you arrested.” His threat didn’t really worry me, not because I was in New York at the time, but because it was clearly a gambit to keep me from calling the number. Spoofing Caller ID is just that: it makes a phone call from one number look like it came from another number. In my case, the number being spoofed at this point was the one for the actual sheriff’s department, so any call back would be received by the actual sheriff’s department and not the scammers. My adversary surely knew this, so his only move was to try and bully me into staying on the call he had started.

I hung up on him and called the actual sheriff’s department using the number listed on their website. Eventually a nice operator sent me to non-emergency dispatch, who told me they were aware of the scam and asked if I had given any money to the perpetrators. I told him I hadn’t and asked, for ultimate peace of mind, if there was any way to actually confirm whether there was some sort of summons I had missed. The dispatcher wasn’t really sure, but didn’t seem too worried about it. I then called my wife to let her know what happened, lest the scammers try her next. Alas no one called her, I assume because they correctly figured that I would warn her immediately.

I had been privately skeptical of Charlotte’s account, not because I thought these scams didn’t exist, but because there are many parts of her story that require her to fall for some truly unbelievable claims. The most notable of these for me was that the FTC/CIA was going to let her take cash out before freezing her accounts, or as redditor Creative_Instinct put it:

CIA: We’re probably freezing your assets because you’re under investigation or under arrest. I don’t even know anymore. But I like you. So. Go take all your money out. This is standard protocol. We warn you, THEN freeze your assets.

That said, I find her more believable having now experienced this sort of scam firsthand. I was definitely a little rattled by the experience despite having identified the scam early on and can’t imagine what I might have done had I not. Furthermore, that I did suss it out was at least partially due to having read Charlotte’s piece in recent months. I don’t know if I could have been rattled to the extent of putting $50,000 into a shoebox, but having her account top of mind certainly helped regardless.

My First Macintosh

The Mac turns 40 today, so I figured I’d use the occasion to write about my first Macintosh. Before he retired, my dad was a consultant at a firm. What he did was niche, but he was top three in his field. While I was in elementary school, my dad threatened to go independent and went so far as to set up a home office. We’d already had an Apple //c in the house, so he bought another Apple for his new business, a Macintosh LC II. I was immediately enamored with it. I loved the Apple //c, but this was so much better. The problem was I wasn’t allowed to touch it.

Fast forward three more years.

My dad’s employer had long convinced him to stay, I was in middle school, and my older brother had just gone off to college. It was Christmas, and all I wanted was a CD player like the one my older brother had. We opened presents. No CD player. That’s when my mother said the following.

“We can get you a CD player or you can have Dad’s computer, and we’ll get you a CD drive and some speakers.”

So I got my dad’s Macintosh LC II. The LC II wasn’t a great computer to begin with and certainly wasn’t anything to write home about in 1995, but that Mac started my passion for technology and opened up my world in incalculable ways.

Apple Vision: The Best Way to Multitask iPad Apps

Since June, I have been thinking of Vision as being more Mac-like, in part because both were built for multitasking1. The more I think about it, however, the more I think that Vision is really a rethinking of how multitasking iPad apps should work.

Almost a year ago, I argued that multitasking on iPad suffered because you can’t have touchability, productivity, and portability. An 11” iPad Pro can’t have the information density of a Mac while retaining both its portable size and its touch friendliness. In that piece, I used the term “information density” to describe how pointer driven interfaces can display more information because their controls (buttons, menus, etc…) don’t require nearly as much affordance as those needed to do touch interfaces well.

“Density” is really only a means to being “information rich”. My conclusion was that the only way Apple could deliver an information rich multitasking experience with iPads Pro would be better support for large screens. An iPad Pro connected to a hypothetical “Studio Display Touch” could be significantly more information rich. The trade-off, of course, would be portability.

My thinking at the time was that portability had to be dictated by screen size, because historically that was the case. A device could only be as small as its screen, regardless of its OS or user experience. Headsets are the very recent exception. They can provide portability without sacrificing screen size. Even without any modifications, windowed apps that feel cramped in Stage Manager on iPad will suddenly feel much more natural on Vision because they can be maximally information rich without having to be information dense.

Throughout the 2010s, there was always a question of whether the iPad could supplant the Mac. How that would actually happen always seemed like the underpants gnomes’ plan, with no clear line from A to B, and step two perpetually filled with question marks. There is a clear line to how Vision could supplant another Apple product line, but the closest target isn’t the Mac; it’s the iPad Pro.


  1. While the original Macintosh didn’t support multiple apps running at once, the user interface that it came with conceptually did, in that System 1 didn’t look or behave significantly differently from subsequent versions that did support multiple apps. 
Penny Foolish

Products and features need to resonate with an audience in order to succeed. People feel, sometimes immediately, when a feature resonates with them, and it’s that feeling that gives them a sense of whether a related product is revolutionary and not the other way around. The iPhone was almost immediately and universally recognized as a revolutionary device because its features resonated with anyone who hated their existing mobile phone, which was practically everyone. Conversely, the original Macintosh was also truly revolutionary, but struggled initially because the graphical user interface only resonated with a relatively small audience.

Sometimes a widely resonant feature requires a revolutionary product, but I would wager that most features that resonate with a wide audience don’t come via revolution, but through iteration. In the 2000s, Apple excelled at adding features that resonated with buyers to existing products. MagSafe, the pulsing sleep indicator, and hidden LEDs to show battery charge are just a few examples of widely resonant features added iteratively to Apple’s laptops that buyers immediately understood and desired. Apple was so good at finding these sorts of features that many in the tech world painted the company as some sort of Pied Piper, one that used gimmicks to trick buyers into paying more for computers that didn’t even run Windows.

One of my criticisms of Apple during the 2010s is how often the company would chase “revolutionary” rather than embrace iteration and emphasize features that would resonate most with customers. A good example of this is the now defunct 3D Touch. In my mind, the killer feature of 3D Touch was “trackpad mode”, wherein pressing anywhere on iOS’s keyboard would turn it into a trackpad for precisely moving the insertion point and selecting text1. Most people, even Apple enthusiasts, probably don’t remember this version of the feature and I wouldn’t be surprised if many didn’t even know it existed in the first place. That’s because Apple itself didn’t mention it when 3D Touch was announced.2

Apple instead chose to pitch 3D Touch as the “revolutionary” follow-up to multitouch, and primarily promoted Quick Actions and Peek and Pop3. While still useful, I would argue both were “nice to have” features, each looking for a problem to solve. A quicker way to take selfies or preview a message was nice, but no one was really stymied by taking selfies or browsing email before 3D Touch. It was obvious from the start that 3D Touch was not revolutionary, and these features wouldn’t have resonated with enough iPhone users even if it had been. Precise, intuitive text editing, on the other hand, would have resonated with anyone who has ever become frustrated while trying to edit text on a smartphone, which I would wager is most smartphone users4. By artificially inflating 3D Touch to “revolutionary”, Apple steered the messaging away from its most resonant feature.

In more recent years, Apple has made progress in delighting users by bringing back features that resonate. MacBooks have MagSafe and iMacs have color. That being said, I still think today’s Apple often struggles to identify which features resonate with their customers. A good example of this is the Apple TV.

My sense is that the company doesn’t quite know how to pitch the Apple TV. Apple TV is not a revolutionary product, especially in 2024, and on paper doesn’t offer much more than what is already included with modern day smart TVs. Apple’s inability to pitch the Apple TV means the conversation around it is dominated by its price, which is foolishly oversimplistic. Sure, in relative pricing, an Apple TV at any price will always be infinitely more expensive than whatever crap software comes free with a smart TV. Macs are more expensive than Chromebooks too. In absolute pricing, however, an Apple TV costs $129. Adjusted for inflation, that’s cheaper than the cheapest iPod Apple ever sold. Apple TV is a steal if you care about the experience of watching TV, and has great features that I think would resonate with many buyers if Apple actually promoted them. The best example of this in my mind is audio.

I have been pairing my Apple TVs with HomePods to get Home Theater Audio for a few years now. Despite the name, the feature I think would resonate with most people is not immersive audio, because Home Theater Audio isn’t truly immersive. The resonant feature is really good audio that makes it possible to understand what the hell characters are saying without having to install a complicated five-plus speaker system for surround sound. Apple has even already done the ad for that second part.

AirPods support is another Apple TV audio feature that I think would resonate with a lot of people. Being able to connect multiple AirPods to an Apple TV was a godsend when my wife and I were sleep training our kid and has since proven useful when one of us wants to watch something while the other sleeps. That may not resonate with everyone, but I would wager there are more people interested in watching a show without disturbing a sleeping family member than those interested in playing iOS games on their TV.

Another feature that would obviously resonate with Apple’s customers, and one that the company seems to be outright avoiding, is emoji Tapbacks. Messages currently only lets you “Tapback” with six emoji-like and subtly animated glyphs. People want and have become accustomed to using any emoji to react to a given message. To my knowledge, Apple never claimed Tapbacks were revolutionary, but its insistence on excluding emoji is actively dissonant with what its customers want. While the most recent version of Messages does support using emoji as stickers, the implementation spitefully obscures text. Even with a better implementation, the feature would still be dissonant, because stickers aren’t what its customers expect.

Revolutionary products necessarily have features that resonate with a wide audience, but most resonant features happen through iteration. Eschewing iteration in the pursuit of “revolutionary” risks increasingly forgoing features that resonate with customers. Always chasing features in lieu of a revolutionary approach is indeed penny wise and pound foolish, but repeatedly doing the opposite might just be worse.


  1. Today, the feature is invoked by touching and holding the spacebar. 
  2. There was an “oh by the way” two sentence text blurb on 3D Touch’s overview page, but it wasn’t really promoted. 
  3. Both Quick Actions and Peek and Pop also exist today without 3D Touch. 
  4. The loupe, while serviceable, was (and still is) clumsy in many scenarios.