Minimum Viable Pixels

Meta’s Quest 2 supports a resolution of 1832×1920 pixels per eye and costs $300. Sony’s PlayStation VR2 supports a resolution of 2000×2040 pixels per eye and costs $550, plus another $500 for the PlayStation 5 needed to drive it. Apple’s upcoming Vision Pro supports more pixels than a 4K TV for each eye and will cost $3500. There is a lot more to headsets than just pixels, but I think the Vision Pro’s significantly higher resolution is worth specific examination, especially considering the displays involved are reportedly the product’s most expensive and most supply-limited component. Apple clearly believes this display, with its 23 million pixels delivering incredibly clear text and graphics, is the only one that meets the threshold necessary to do spatial computing right.

Requiring high resolution in an immensely anticipated platform is not new to Apple. They’ve done it at least once before, and I’m not talking about the iPhone. While the iPhone was without a doubt revolutionary and had a significantly larger screen, that screen’s pixel density was actually in the ballpark of other small-screen phones of the day. Those other phones sucked by comparison for other reasons. No, the product I am referring to is the Macintosh.

The original Macintosh, famously released in 1984, sported a 512×342 display that delivered a whopping 72 square pixels per inch. This meant the Macintosh could deliver the incredibly clear text and graphics necessary to do a desktop graphical user interface (GUI) right. It’s why System 1.0’s design looks modern compared to other GUIs of that era and has held up for almost four decades now.

Despite its high resolution and being the first mass market personal computer with a graphical user interface, the Macintosh did not bring the graphical user interface to the masses. The reasons for this have been discussed ad nauseam. Macs weren’t compatible, were more expensive, and suffered from increasingly boneheaded leadership.

But what about Windows?

Windows was released in 1985, just a year after the Macintosh. It was compatible with millions of already-sold PCs, supported cheaper hardware, and benefited from incredibly competent leadership at Microsoft. Windows was the platform that ultimately brought desktop GUIs to the masses, but not in the 80s. In fact, fewer people used Windows in the 80s than owned Macintoshes. A good illustration of this is SimCity, a very popular game released in 1989 for both the Macintosh and MS-DOS. The first Windows version didn’t appear until 1992 because no one really used Windows until the 90s. A major reason why is also illustrated by SimCity, whose 1989 MS-DOS version initially supported only the following graphics modes:

EGA color graphics in both low-resolution 320×200 and high-resolution 640×350, as well as monochrome EGA 640×350, CGA 640×200, and Hercules 720×348.

Some unfamiliar with the era might look at some of those resolutions and wonder how they never noticed that PC monitors from the 80s had widescreen aspect ratios. That’s because PC monitors of the era weren’t widescreen. They had a display aspect ratio of 4:3, which was very much standard at the time. These resolutions worked not by making the displays wide, but by making the pixels tall. Most PCs sold in the 80s primarily measured their displays in lines and columns. Pixels obviously existed, but only in the context of rendering command line interfaces.

Arguably, Windows’ biggest advantage in the 80s was that it made multitasking between different programs much easier. Even though Microsoft managed to pull this off in software, the hardware limitations of low-resolution CGA and EGA graphics with chonky rectangular pixels made multitasking clunky to the point of being unusable. I don’t have the statistics to prove this, but I would bet good money that the rise of Windows 3.x correlates very heavily with the adoption of VGA’s 640×480 graphics mode and SVGA. SVGA in particular delivered 800×600, which amounts to roughly 72 square pixels per inch on a 14″ display, a size I recall being fairly common in the early-to-mid-90s. Not only that, I would also wager heavily that Windows 95, which started development in 1992 and included an overhauled GUI that just so happened to drop support for anything less than VGA, was a massive success because it was the first version of Windows that was really designed for the 72 PPI threshold.
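(Rough math for that figure: a 4:3 display with a nominal 14″ diagonal is about 11.2 inches wide, and 800 ÷ 11.2 ≈ 71 pixels per inch. CRTs of the era had viewable areas a bit smaller than their nominal size, which nudges the number slightly higher, but either way it lands right around the Macintosh’s 72.)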

The Macintosh illustrated the importance of 72 PPI because of how its GUI didn’t need to change. Windows illustrated the importance of 72 PPI because of how much it did change.

Many people have instinctively compared, and will continue to compare, Apple Vision to the iPhone, iPad, and Apple Watch, but those were mass market devices from day one and priced accordingly. The Vision Pro is not a mass market device. The closest thing I can compare it to is the Macintosh, and the only thing I can compare spatial computing to is desktop computing. Desktop computing took a decade to come to the masses. It may not take a decade for spatial computing, and I am sure Meta and Sony will continue to sell many headsets in the intervening time. I am also confident that every headset will inevitably exceed this new threshold set by Apple, just as PCs eventually exceeded 72 PPI, but I would bet that any headset that isn’t just a video game console will offer an experience closer to visionOS than anything else on the market today.

Samsung’s New 5K Display Costs As Much As The Studio Display

From Chris Welch, at The Verge:

Let’s get right to it: the 5K display, which is being positioned as a prosumer option meant to rival monitors from LG and Apple, will cost $1,599.99 and you’ll be able to purchase it from Samsung and other retailers in August.

$1,599 is the same starting MSRP as Apple’s Studio Display — also a 27-inch 5K monitor.

The first 5K iMac was released in 2014, and Apple at the time touted the engineering required to make 5K possible. That it’s almost a decade later and Samsung’s entry is only the third truly1 5K display on the market really highlights how far ahead of the curve Apple got with the integrated iMac. Furthermore, while the original was priced at $2499, 2015 brought models priced as low as $1799. That’s only two hundred dollars more than the Studio Display and this new Samsung 5K display.

Speaking of price, I do expect this Samsung display will get discounted more frequently and more heavily than the Studio Display2. It also comes with a matte finish and a height-adjustable stand out of the box3, two features Apple charges extra for. Furthermore, the Samsung entry features DisplayPort in addition to Thunderbolt 4, which should make it ideal for anyone who uses both a Mac and a gaming PC.

Regardless, I hope these Samsung displays sell like gangbusters. 5K is way better than 4K at 27″, even on Windows, and I would love for this display to be the first of many new 5K options.


  1. I am not counting non-Retina oddly shaped 5K displays, nor am I counting the seemingly elusive Dell entry from 2016 since I am not convinced it ever sold in meaningful numbers. 
  2. The Studio Display does go on discount though, at least on Amazon. I got mine for $200 off. 
  3. That said, I am curious to hear about the quality of the stand. 
Mimestream 1.0 Released

Neil Jhaveri, on Mimestream’s blog:

Mimestream combines the power of macOS with Gmail’s advanced features for a new kind of email client that lets you move through your email effortlessly. Unlike other email clients that use the decades-old IMAP protocol, Mimestream uses the Gmail API for a new kind of lightning-fast experience that’s full of features. Built using the latest technologies from Apple, using Mimestream is a breath of fresh air that you’ll see and feel.

I feel like many, if not most, third party email clients that have come out in recent years took the approach of trying to reinvent email. They never appealed to me because I didn’t want to reinvent email. I just wanted a great email client. Mimestream is a great email client. If you and/or your work use Gmail, Mimestream is a no-brainer and an absolute steal at $30 a year.

A Notch for Your Face

A large part of the consternation surrounding Apple’s purported AR/VR headset is that it falls short of the real killer product, AR glasses. While writing my last post, it dawned on me that this consternation is reminiscent of that surrounding Apple’s various notches. Tech enthusiasts want Apple to deliver a true edge-to-edge experience with the camera, Face ID, and other sensors somehow behind the display. I am using the present tense “want” because it’s been over five years since the notch was introduced with the iPhone X and no one, including Apple, has been able to meaningfully deliver a true edge-to-edge experience.

Calling an AR/VR headset “a notch for your face” sounds like snark, and while it does make me chuckle, I think it’s incredibly apt in that I suspect the realities of AR glasses today are the same as the realities of true edge-to-edge displays in 2017. The capabilities just aren’t here, and it’s reasonable to assume they aren’t going to get here for years to come. The choice for Apple isn’t “do a headset or do AR glasses” just like the choice wasn’t “do the notch or do edge-to-edge.” In both cases, it’s “do something incremental or do nothing.”

Ubiquitous or Bust

I think people have become so married to the idea that AR/VR needs to be ubiquitous to be considered a success that it has distorted our whole thinking about the product category. This is why their ideal headset needs to be priced in the mid-hundreds, because that’s the price where ubiquity is possible. By that narrow line of thinking, a $3000 headset, no matter how good, can’t become ubiquitous and has therefore already lost before even entering the race.

This “ubiquitous or bust” bar of success applies doubly to Apple because of the iPhone. For many, everything Apple does is framed by the success of the iPhone. They don’t want to hear that the iPhone’s ubiquity is largely because the need for a pocket computer turned out to be ubiquitous. They singularly focus on whether such-and-such Apple product is as ubiquitous as the iPhone, then write it off when it isn’t. Folks who still overlook the Apple Watch, a device that topped 10 million in sales and dominates in market share, are going to be apoplectic over a headset that doesn’t hit a million sales in its first year.

Along the lines of “Apple never releases two new products the same way”, I don’t think we have a model for how a premium AR/VR headset will perform in the market. As revolutionary as the iPhone was, its momentum was driven by a very clear model for success when it was announced. Handheld devices like cell phones and iPods had already been selling like gangbusters for years, so everyone knew a device that served both roles just as well or better was also going to sell like gangbusters. The closest Apple product launches I can think of are the original Macintosh and the Newton. People didn’t quite know how they were going to use a personal computer, and they didn’t know how they were going to use a PDA. Personal computers didn’t become ubiquitous until the mid-90s. PDAs never achieved true ubiquity and ended up being a stepping stone toward smartphones.

I don’t think AR/VR headsets are ever going to become ubiquitous, even if they are priced in the mid-hundreds, because I can’t imagine most people (many of whom were hesitant about AirPods) will get comfortable strapping goggles over their face. That said, I don’t think ubiquity is the only measure of success. The Mac has never been ubiquitous. A headset that sells at low volumes but becomes the de facto standard for professionals is still successful. A headset that over time gives Apple the chops to become the de facto standard in AR glasses, something that I think does have a high chance of becoming ubiquitous, is a no-brainer.

NFL Sunday Ticket Prices on YouTube

From David Pierce, at The Verge:

Football fans, start saving those pennies. YouTube just announced pricing for its new Sunday Ticket package, and access to this season’s games will cost anywhere from $249 to $489.

When Apple didn’t get NFL Sunday Ticket, my guess was that it was because they wanted to do something markedly different with the pricing and/or the offering itself, and that the NFL just wanted to keep doing the same thing. We’ll probably never know for sure, but these YouTube prices and packages sure look a lot like what DirecTV was previously offering.

Making Computers Personal

Personal computers, including smartphones, are really general purpose computers in that they and their software are built for, well, everyone. Their operating systems and most of their apps are designed to maximally satisfy the most users. The challenge is that everyone has different needs, so how can one piece of software accommodate them all? The answer is usually to add more features. Large apps and operating systems are chock full of so many features that how to organize them is subject to criticism and debate.

The most common and sensible way of managing loads of features is to implement some form of progressive disclosure. Rather than attempt to show every feature to everyone all at once, designers put some features behind some sort of control, like a dropdown menu or button. Implementing progressive disclosure requires software makers to prioritize which features should be most easily accessed, and a natural way to rank them is usage. Sensible user experiences only present the features their makers believe will benefit the most users. The most prominent features in Apple Music, for example, are arguably the back, play/pause, and next buttons. This makes sense because everyone listening to music will necessarily have to play and pause, and is very likely to want to skip ahead or go back to replay a song at some point.

Being a large app with everyone in mind, Apple Music also has a ton of features and employs progressive disclosure to tuck the less commonly used ones away behind various bits of UI. One less commonly used feature in Apple Music that I frequently use is star ratings, which Apple Music hides behind hover states and menus. While certainly reasonable design, it’s an example where an app’s progressive disclosure doesn’t align with my personal needs.

My situation is neither unique nor limited to only advanced users. I’d wager millions of users needlessly and repeatedly perform some additional click or command to get past some bit of progressive disclosure in order to reach some feature they frequently need. Heck, given that some apps have hundreds of millions of users, it’s easy to imagine a single “less common” feature that is frequently used by millions. You might think this is bad design, but in many cases it’s unavoidable. Obscuring a feature used by millions sounds bad until you consider the other features that are frequently used by tens of millions.

This is where personal automation comes in. Personal automation gives individuals the ability to choose which of their features should be most easily accessed. I used Shortcuts and AppleScript to elevate star ratings using dedicated keys on my Stream Deck. Now I can rate songs regardless of what app I am currently using, and in one step instead of five. Using personal automation, I have also changed how Time Machine works, streamlined pasting links from Safari, and made joining Zoom meetings practically effortless.

Personal automation doesn’t need to involve expensive third party hardware or require scripting. It can be something as simple as customizing keyboard shortcuts1 or defining text replacement macros. Apple’s Shortcuts app is completely drag-and-drop and makes building personal automation easy enough for even basic users. On top of the many automation apps and features included with Apple’s platforms, there are also a slew of great third party apps that unlock even more possibilities.

Personal computers have become easier and more accessible, to the point where they and their apps are necessarily built for everyone. That’s truly great, but being easy and more accessible doesn’t make them personal. It makes them general purpose. Personal automation gives individuals the power to make their general purpose computer that was built for everyone actually personal.


  1. I just recently had the pleasure of building a Keynote presentation, and I mean that genuinely. Say what you will about Apple’s other iWork apps, Keynote is top notch. That said, I kept being bugged by the keyboard shortcut to “Play Slideshow”. The default in Keynote is Command+Option+P, which seems reasonable enough. My problem was the years of using Command+Enter to effectively do the same thing in PowerPoint, Google Slides, and even Adobe Flash. It was a habit I couldn’t shake. Changing the keyboard shortcut made me way more productive. 
Touch Support Versus A Touch First Experience

Matt Birchler wrote what I can’t help but feel was a rebuttal to my piece arguing that you can’t have a maximally productive touch UI on a portable screen. Here’s what I wrote:

It’s foolishly optimistic to think that Microsoft or even Apple can make pointer interfaces as touch friendly as iPadOS without also destroying the very thing that makes them more productive than iPadOS — information density. Smaller controls mean these platforms can disclose more information and interactivity to their users at once. That’s why a bunch of windows on even an 11″ MacBook Air feels natural while only four windows on a “large” 13″ iPad feels ungainly.

And here’s what Birchler wrote, after comparing UI density on a 14″ MacBook Pro and an iPad Mini:

There’s a narrative out there that touch is just so incompatible with macOS and that in order to make it work, the macOS UI would have to get blown up to comical proportions, but I don’t think that’s the case. Changes will be made, but I think macOS is more touch-friendly today than many people give it credit for.

While I certainly wouldn’t hold up an iPad Mini as an ideal of touch friendliness, I actually agree that there are many aspects of macOS today that lots of people wouldn’t have any issues interacting with via touch. I also agree with Matt that changes can be made to macOS to better facilitate touch input without blowing it up to comical proportions. That agreement may come as a surprise to anyone who read these sentences from my last piece.

Don’t try to make macOS touch friendly. Add touch and pencil support, but leave macOS’s interface unchanged.

Agreeing with Matt today after writing these sentences two weeks ago doesn’t mean my opinion has suddenly changed in the intervening time; rather, it means that these two sentences require more elaboration than an off-the-cuff remark in a bullet point.

When it comes to adding touch to macOS, I think there are three camps:

  1. Apple shouldn’t add touch to macOS.
  2. Apple should add basic touch support to macOS for things like gestures and tapping on buttons to meet the expectation of a population that increasingly assumes touch is supported.
  3. Apple should reinvent macOS to create a touch first experience on par with iPadOS that is also somehow as productive as macOS is today.

I have been firmly in camp 2 since 2018, and I really do think there are areas of macOS that either already are or can be made as touch friendly as an iPad Mini without meaningfully sacrificing information density. That said, implicit in “don’t try to make macOS touch friendly” is “don’t try to make macOS as touch friendly as iPadOS”. Adding a smattering of touch friendly controls won’t make macOS nearly as touch friendly as iPadOS.

The reason my last piece started with a series of quotes from an increasingly frustrated Federico Viticci is that I get the sense, perhaps incorrectly, that a driving force behind camp 3 is disgruntled iPad Pro users who are still dissatisfied with how limited their productivity is on their preferred platform, and who now think that some hypothetical new version of macOS designed for touch could be the platform of their dreams. This is because Macs today already solve many of their needs. They are more powerful, have better support for third party peripherals and software, and have the same great battery life and lower temperatures thanks to the same Apple Silicon found in their iPads. The only thing keeping the Mac from being the platform of their dreams is lack of touch support, but not just any touch support. What I suspect iPad Pro users ultimately want is the same touch first experience found on their beloved iPads today. The crux of my argument is that there is a tension between touch friendliness and information density that makes what camp 3 wants impossible on any platform.

The lesson to take from this half decade of disappointing iPadOS and iPad Pro updates is not that the iPad platform is fatally flawed and that Apple needs to jump ship to macOS for its pro tablet OS. It’s that Apple’s been trying to solve what increasingly seems like an impossible set of constraints — touchability, productivity, and portability. It’s foolish to think Apple or anyone can move those same constraints and demands to a different platform and assume a better outcome. I am all for adding touch support to Macs, but that won’t satisfy the dreams of iPad Pro users who want the same touch first experience found on their iPads today. Furthermore, overhauling macOS1 to create a touch first experience would only introduce the same problems found on iPadOS today, and if that’s the case, then what’s the point?


  1. Even if Apple could somehow magically create an iPad-like touch experience in macOS’s first party apps and frameworks, they would still have zero control over the thousands of third party apps, frameworks, and web apps that have all been built for mouse pointers. 
Touchability, Productivity, and Portability — Pick Two

The venerable Federico Viticci has done it again! He has eloquently captured the iPad’s simplistic beauty…

The iPadOS UI, particularly in tablet mode, feels nicer than any other tablet I’ve tried to date.

…while simultaneously lamenting its low productivity ceiling.

The problem is that an iPad, at least for people like me, isn’t supposed to be a companion to work that happens somewhere else. It is the work.

Along the way, Viticci doesn’t pull any punches with his fundamental dislike of Apple’s Stage Manager.

I fundamentally dislike Stage Manager

There’s more.

I feel this every time Stage Manager doesn’t let me place windows where I want on an external display; every time I can’t place more than four windows in a workspace…

And finally…

The more I explore other platforms, the more I believe that iPadOS looks and feels nicer, but it’s also getting in the way of me being able to get my work done. Maybe this has been true for a while and Stage Manager was the proverbial straw that broke the camel’s back.

In Federico’s mind, Stage Manager is the poster child for iPadOS’s low productivity ceiling when compared to other platforms. Not only do I agree, but his words also stoked a very strong reckon I’ve been feeling with regard to the tension between information density and touch friendliness.

Before the Apple community was waxing poetic about art in software, we were all in a tizzy about Mark Gurman’s report that Apple was exploring Macs with touchscreens. Some might see a touchscreen MacBook as the device Federico and others have been wanting for over a decade — a great tablet that doesn’t sacrifice productivity. While I’ve been on the “Macs should have touchscreens” train since 2018, I don’t think they can solve the problems found on the iPad.

What makes a great tablet first and foremost is touch friendliness. The iPad is extremely touch friendly. The Apple Pencil exists and is great, but you don’t need one to use iPadOS. An app that had buttons or other controls small enough to require a stylus would feel broken to any iPad user. Compare this to the other major tablet platform, Windows. Windows does not make for a great tablet OS in large part because it isn’t entirely touch friendly. PC makers include a stylus, not just out of the goodness of their hearts, but because they have to. Windows apps new and old are littered with controls that require input roughly the size of a mouse pointer because Windows itself was built for a mouse.

The Mac was also built for a mouse, and while I would argue macOS is more usable than Windows, there is no getting around the fact that controls optimized for pointers are inherently unfriendly to touch input. It’s foolishly optimistic to think that Microsoft or even Apple can make pointer interfaces as touch friendly as iPadOS without also destroying the very thing that makes them more productive than iPadOS — information density. Smaller controls mean these platforms can disclose more information and interactivity to their users at once. That’s why a bunch of windows on even an 11″ MacBook Air feels natural while only four windows on a “large” 13″ iPad feels ungainly.

Conversely, it’s impossible to make iPadOS more information dense without sacrificing the very thing that makes it the best tablet OS — touch friendliness. iPad users want more information on screen because that will help them be more productive, but the only way to present more information in iPadOS without sacrificing touch friendliness is a larger display. Not only is a larger display not portable, iPadOS’s support for larger displays still sucks. There’s nothing Apple can do about large displays not being portable, but better support for larger displays? That’s a problem Apple can solve.

Given macOS can’t be made touch friendly without sacrificing information density and iPadOS can’t add information density without sacrificing touch friendliness, here’s what I think Apple should do:

  1. Don’t try to make macOS touch friendly. Add touch and pencil support, but leave macOS’s interface unchanged.
  2. Shitcan Stage Manager on iPad screens. It was a stupid idea to begin with.
  3. Make iPadOS windowing awesome whenever an iPad Pro is connected to a large screen.
  4. Release a “Studio Display Touch” that works with both iPads Pro and Macs, and can be ergonomically adjusted for touch and Apple Pencil input.
  5. Bump the specs on iPads Pro to their Mac equivalent. Don’t artificially restrict how many apps can be open on a large display.
  6. Leave non-Pro iPads as just tablets.

If someone walks into an Apple Store wondering what to buy, they would have three great options given the above. If they want a portable device for maximum productivity, they could buy a MacBook. If they want a portable, touch friendly device, they could buy a non-Pro iPad. If they want a portable, touch friendly device that can also do more given a large screen, they could buy an iPad Pro.

What you don’t want is that same person being presented with a MacBook and an iPad that each have distinct terrible experiences that separately fail to solve the impossible task of being simultaneously touch friendly and maximally productive while still being portable.

Apple Studio Display and Gaming PC Thunderbolt Weirdness

A question about whether it was possible to connect a gaming PC desktop to Apple’s Studio Display came up in a recent discussion. Using Apple’s premium display with a gaming PC is a weird setup. Not only does the Studio Display fall short in ways that matter for gaming, it can also only be connected via Thunderbolt. This makes sense for Macs, which have included some version of Thunderbolt since 2011, but PC adoption of Thunderbolt has historically been spotty. In that sense, using an Apple Studio Display with a PC is weird largely because Thunderbolt on PCs has been weird. Despite this weirdness and history, I wanted to help. I have both a Studio Display and a gaming PC, and therefore could explore if and how the two might connect. I also already had some encouraging evidence that it was indeed possible.

A few months ago, my wife had to do some work on her colleague’s Dell laptop. She usually works on a MacBook that her employer provided along with an LG UltraFine 5K, a display also designed for Macs and thus one that only accepts Thunderbolt input. She connected the Dell and, to our surprise, everything just worked. It seems that in more recent years, PC makers have started to include Thunderbolt on their laptops. A quick search reveals a plethora of Thunderbolt 4 equipped PC laptops.

I believe there are two very good reasons for this trend:

  1. USB and Thunderbolt have converged to the point where they are largely compatible and even share the same USB-C connector. This compatibility means PC makers don’t have to sacrifice a precious USB port in order to include Thunderbolt.
  2. Like Apple’s MacBooks, PC laptops have become increasingly thinner over the past decade. The versatility and reliability1 of Thunderbolt make it possible for these ever-shrinking laptops to continue to offer many of the same capabilities as their larger forebears using a few small USB-C ports.

As surprised as I was that Thunderbolt has seemingly gained traction in PC laptops, it makes sense. Thunderbolt on PC desktops, however, is a completely different story.

Only a few PC motherboards even have Thunderbolt, and those that do can’t use it for display output while also leveraging a discrete graphics card. Furthermore, most discrete PC graphics cards don’t even offer Thunderbolt-compatible ports, and those that do are either designed for the Mac Pro and/or are absurdly expensive. Instead, almost all of them come with some combination of HDMI and DisplayPort. My guess is that the very modular nature of gaming PCs runs counter to the integrated paradigm found on laptops, which is what Thunderbolt was likely built for.

That said, it is possible to use a Thunderbolt display on a gaming PC with a discrete graphics card, but not in the way I (or I suspect anyone) would have ever imagined. As far as I can tell, the best way to connect a PC Desktop with a discrete graphics card to an Apple Studio Display using Thunderbolt is as follows:

  1. Buy a motherboard with Thunderbolt support and a PCIe card2 with Thunderbolt and DisplayPort In ports.
  2. Connect the DisplayPort Out from the graphics card directly to the DisplayPort In on the PCIe card.
  3. Connect the Apple Studio Display to the associated Thunderbolt port on the PCIe card.

This means there will be a DisplayPort cable coming out of one part of the PC and going right back into another part of that same PC. I don’t know the details, but my assumption is that the PCIe card combines the DisplayPort stuff from the external cable with the USB and Thunderbolt stuff from the motherboard to create a single Thunderbolt transmission.

This gap is not limited to Apple Studio Displays; it’s also true of the variety of displays that support USB-C input. These displays work with gaming PCs because they typically offer HDMI and/or DisplayPort inputs as well, but the result is that USB-C and Thunderbolt are effectively relegated to being “laptop ports” in the PC world3. Here’s the thing though: laptops make up the vast majority of computers sold. Using an Apple Studio Display with a PC is weird in large part because Apple has been the odd one out with regard to Thunderbolt, but will that still be the case a few years from now if PC laptops continue to adopt Thunderbolt? Thunderbolt could easily become the de facto standard for connectivity, and if that happens, it won’t be the Studio Display that’s weird, but the gaming desktop that can’t connect to it4.


  1. Thunderbolt is functionally more reliable than USB in this regard because it has a more stringent list of requirements. This means plugging a Thunderbolt x cable into a Thunderbolt x port is way more likely to work as expected. 
  2. The card might be avoidable if you can find a motherboard with Thunderbolt and DisplayPort In. 
  3. I am comparing PC laptops versus desktops, but the fundamental difference is actually integrated versus modular. Almost all laptops are necessarily integrated, but I suspect you can just as easily use a Thunderbolt or USB-C monitor with an integrated PC desktop.
  4. I suspect discrete graphics cards will figure out a way to support Thunderbolt if this actually comes to pass.