Short-Term Trade-Offs

Apple introduced Core ML during their most recent WWDC keynote address.

From Apple’s Newsroom:

Core ML makes it easy for developers to create smarter apps with powerful machine learning that predict, learn and become more intelligent.

Unlike the other announcements paraded on and off the McEnery Convention Center stage, Core ML wasn’t presented alongside some cool tech demo or shiny new product. Even features that Core ML seems to power were credited with the more general, layman’s term “machine learning”. No one would’ve thought twice if it had been sequestered entirely in the more technical Platform State of the Union, and yet Core ML was given nearly two minutes of valuable stage time in an already overpacked keynote1.


Apple’s Newsroom continues…

…this new framework for machine learning lets all processing happen locally on-device, using Apple’s custom silicon and tight integration of hardware and software to deliver powerful performance while maintaining user privacy.

I’d wager that Core ML didn’t make the keynote just because of its capabilities, but also because it aligns perfectly with two of Apple’s biggest core values – tight integration and user privacy. Furthermore, the phrasing of that Newsroom sentence is yet another indicator that Apple regards privacy as being every bit as important as tight integration, both as a value and as a competitive advantage.

Whether it’s Amazon, Google or even Microsoft, nearly everyone else offering consumer experiences in this space is claiming that user data is necessary for machine learning to work — that any loss of privacy is a necessary trade-off for whatever cool new capability is being touted. That was certainly true in the past and probably is still true today, but will it always be true?

I’m just old enough to remember when videogame arcades were still relevant in America. Arcades were able to charge anywhere between a quarter to a dollar per play in large part because they could offer games way more powerful than what you could play at home. Arcades faded into obscurity once home computers and consoles were even close to being comparably capable.

The main difference, of course, is that awesome-looking games require only compute power, whereas machine learning also needs data, and while we’ve been able to increase the power of our machines, the amount of data any single device has to learn from has remained relatively unchanged.

I don’t think Apple can build a smarter iPhone based solely on the data from that iPhone, but I don’t think they have to in order to maintain privacy. Here’s my guess as to how – Apple is accumulating data from hundreds of millions of powerful devices, anonymizing it using differential privacy, then sending learnings back to devices which can refine their model to run against the entire user’s data locally.
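
Sketched as toy Python, the “anonymize” step of that guessed-at pipeline could resemble classic randomized response, the textbook local differential privacy mechanism. This is purely illustrative; the mechanism and the `epsilon` value here are my assumptions, not anything Apple has documented in this detail.

```python
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    # Each device flips its one-bit report with some probability, so no
    # single report reveals that device's true value.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_truth else not truth

def estimate_rate(reports, epsilon: float) -> float:
    # The server only ever sees noisy bits, but across many devices the
    # noise cancels and the population-level rate is recoverable.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

# Simulate 100,000 devices, 30% of which actually have some trait.
random.seed(42)
reports = [randomized_response(random.random() < 0.30, epsilon=1.0)
           for _ in range(100_000)]
estimate = estimate_rate(reports, epsilon=1.0)  # lands close to 0.30
```

Each device lies often enough to be individually deniable, yet the server still learns the aggregate it needs to improve the shared model it sends back down.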

Sure, this is pure speculation, but it sounds plausible to this non-expert, and if I am anywhere close to right, then how long will it take private machine learning to catch up to increasingly creepy machine learning? It took decades for PCs and consoles to make the trade-off of going to and paying an arcade not worth it for most video game enthusiasts. If I am right and if Apple can pull it off, I give private machine learning 5 years or less to catch up in the ways that most consumers care about.

I really hope I’m right.

  1. Apple’s 2017 WWDC Keynote — Core ML is announced at around 1:23:42 ↩︎

Desktops, Laptops, and Tablets

This week brought two companies espousing two platforms with two very different approaches for bringing touch to traditional personal computers. Microsoft has long advocated that all input — mouse, pen, and touch — can coexist in the same Windows UX. Apple on the other hand strongly believes that simply replacing the display of a Mac with a touchscreen would result in terrible ergonomics and UX. Instead, Apple has opted to bring touch input to Macs by other means, leaving touchscreens to devices running iOS.

The recent announcements by both companies are extensions of these long-held positions. Among other updates, Microsoft announced a new all-in-one desktop featuring a giant 28″ touchscreen that is mounted in a way that easily lowers it into a drafting-table position. Apple announced new laptops that also have touchscreens, but rather than replacing the displays, these touchscreens replace the function keys just above the keyboard. While both tightly integrate hardware and software to provide pricey solutions geared toward professionals, the similarities stop there.

The Surface Studio

I have long been skeptical of touchscreen PCs for two reasons, hardware and software.

Hardware-wise, most touch-enabled PCs have either made no form factor changes or offered convoluted bastardizations of laptops.

Adding a touchscreen to a laptop form factor is particularly challenging simply because of the many conflicting design priorities. Laptops need to be light, touchscreens need to be firm. Touchscreens are most ergonomic when horizontal, but the reason you get a laptop as opposed to a tablet is for the keyboard. Windows PC makers, including Microsoft, have tried to solve this by offering laptops that can in one way or another transform into a tablet. The problem I see with this approach is that each transformed state is effectively a different mode that is better for some tasks, but worse for others. Say you need to respond to an email while using the device as a tablet to edit a photo. Your best option is to transform the device back into a laptop to write and send the email, then transform it back into a tablet to resume your edits. Microsoft touts this transformation as a feature, but I see it as maximum disruption.

The Surface Studio already avoids much of this mess simply by being a desktop. Desktop computers are allowed to be heavy enough to offer a firm experience for touch input. They also have detached keyboards that can co-exist with a horizontal display, which brings me to what I see as the Surface Studio’s most appealing feature as a touchscreen device. The way the Studio easily lowers into a drafting-table position is exactly how I pictured a large touchscreen would work ergonomically. I am honestly a little surprised it took PC makers this long. That same person editing their photo on the touchscreen in drafting mode can easily respond to an email because the keyboard is still accessible, and even switching the Studio back to vertical seems much less disruptive, if not elegant.

While I am still skeptical of touchscreen laptops, including Microsoft’s Surface Book, I am not skeptical of the Surface Studio… at least hardware-wise.

Software-wise, I remain unconvinced. As good as the Surface Studio hardware looks, it still runs the same Windows as every other PC, and for as long as Microsoft has been promoting pen and touch input, they’ve been promoting the mouse even longer, since Windows’ inception. Even in this week’s impressive demos, I couldn’t help but notice all of the tiny UX controls (buttons, scrollbars, etc.) that remain optimally sized for a cursor. This alone will make for a disjointed UX. Imagine again you are editing a photo with touch, but need to access some feature in a toolbar or menu. You will either have to carefully hit the tiny target or reach for a more precise input device. This leads me to believe that touch input in Windows is not just a second-class citizen, but third behind mouse and pen. This doesn’t mean I think folks won’t use touch; they will, for convenient gestures such as pinch-to-zoom and swiping, but they will always have a mouse or pen nearby.

I suspect this won’t be a showstopper for many professionals who already use custom inputs and are perfectly happy to get major new capabilities that enhance their specific apps and workflow, even if it comes at the cost of a disjointed experience across the rest of their OS. Microsoft just delivered what many creative professionals have been looking for and I see Apple losing customers as a result.

The 2016 MacBook Pro

While Microsoft probably delivered the best touchscreen on a traditional computer that will be great for certain creative professionals, Apple may have delivered the best touch input on a laptop for everyone else.

The already great multitouch trackpad is up to 2X larger than previous models, which looks to bring it closer to my favorite input device of all time, the Apple Magic Trackpad 2. I suspect a lot of people, professionals or otherwise, underestimate the usefulness of Apple’s trackpads. Basic gestures such as pinch-to-zoom, scrolling, and swiping are all incredibly intuitive across apps. Advanced gestures such as switching spaces, revealing the desktop, and exposing windows are equally rock solid. Apple’s multitouch trackpad already delivers many of the benefits of a touchscreen.

Apple also introduced the Touch Bar, a touchscreen strip that exists above the keyboard where the function keys used to live. Like many power users, I first gasped at the idea of losing my precious function keys. But when I thought about it, here is how I actually use those keys (in rough order of frequency):

  • Volume
  • Playback
  • Screen brightness
  • Key-mapped script
  • Windows stuff in VMware by holding the fn key.

Those first three items are already available in the Control Strip section of the Touch Bar, I am guessing I can figure out how to add my nerdy key-mapped script via customization, and VMware will still be supported by holding the fn key. Keys that I rarely or never use, such as Launchpad, Mission Control, and keyboard brightness, no longer take up any space. In their place, applications will be able to insert their own custom functions and controls. While that may sound like a gimmick, and though I am sure some developers will treat it that way, I think the usability potential of the Touch Bar is huge, especially given these controls can be multitouch and are reportedly very responsive. Going back to the photo-editing example, one of my biggest annoyances when editing is losing both my focus and cursor position in order to access some tool. The Touch Bar will provide access to at least some of those tools without losing my cursor position.

This brings me to the biggest difference in Apple’s solution, the implicit understanding that macOS is fundamentally designed for the mouse, which makes the cursor perhaps the most important user interface element in the entire platform. Bolting on a touchscreen display inherently affects and sometimes contradicts the cursor. Tap the close button on a window, and the cursor has to either disable or mirror that input. Anyone who has played with a pen- or touch-enabled Windows PC has seen this behavior. With the multitouch trackpad and now the Touch Bar, Apple doesn’t try to replace any of the cursor’s core functionality. Instead they’ve simply added complementary touch input wherever possible. For example, if I two-finger swipe to scroll, the cursor is unaffected. Adding touch while leaving the cursor alone also means apps don’t have to be updated to get the benefit of touch gestures or redesigned to account for larger touch targets.

Apple’s solution is less obvious and certainly won’t lure back anyone getting a Surface just for the power of a traditional computer with a touchscreen, but I think it will ultimately be less disruptive while offering a more holistic and integrated experience in a way that feels natural.

Who is Using What

As I mentioned earlier and on Twitter, Microsoft’s solution seems to be a better fit for desktops while Apple continues to focus on laptops. I think Microsoft is definitely getting the attention of Apple’s pro desktop users who were once again left in the cold this week. That said, I think Microsoft might have a problem with portables. The Surface Books (and other touchscreen Windows laptops) still seem very awkward and Windows’ touch input is not good enough for tablets. Apple is dominating with touchscreen tablets and has a touch input solution that makes more sense to me for laptops. I think Microsoft is going to do well with the Surface Studio, but already more people use laptops and tablets, and those numbers are only going to go up.

In Search of a Good Villain

Looks like I am in good company with my take on how the media treats Hillary more harshly than Trump. This past Labor Day weekend, Paul Krugman wrote an excellent piece, aptly comparing this year’s media coverage to that of 2000’s election between George W. Bush and Al Gore. In it, Krugman observes:

Yet throughout [the 2000] campaign most media coverage gave the impression that Mr. Bush was a bluff, straightforward guy, while portraying Al Gore — whose policy proposals added up, and whose critiques of the Bush plan were completely accurate — as slippery and dishonest.

Sound familiar?

Krugman astutely highlights and dismantles the absurd suggestion that Hillary is somehow the more crooked candidate, but like Matthew Yglesias, he seems to suggest that individual journalists can’t resist publishing an interesting hypothesis, even when said hypothesis is unsupported by the facts.

So I would urge journalists to ask whether they are reporting facts or simply engaging in innuendo…

Like I wrote in my earlier piece, I suspect publication bias is only a small part of the problem. While journalists are certainly writing pieces that simply engage in innuendo, why are those pieces deemed “fit to print” without the facts? My argument is that it’s not just the writers, but also the editors, the publications, and the entire news industry that are in the business of promoting narratives rather than facts.

I’m not alone. Shortly after Krugman’s piece was published, Craig Mazin countered with a series of tweets pointing out this election’s overarching narrative bias.

The problem journalists face is one of narrative dependence. They need narrative to tell news stories. Not facts, mind you. Stories.

Mazin goes on to describe what I think is a compelling theory that Hillary makes a better storybook villain than the obviously crooked Trump.

The Boring Villain doesn’t require you to uncover anything, or make a shocking discovery.

Whatever you choose to believe – that journalists can’t help but write salaciously or that a larger narrative bias is at play – you should be aware that the media is not covering these candidates fairly, and also (as Krugman puts it):

…focus on the facts. America and the world can’t afford another election tipped by innuendo.

The Company Who Cried Product

Nick Heer over at Pixel Envy made a similar observation to mine about Google’s penchant for treating concepts the same as products.

There’s a press-related angle to all of this, too, that I find particularly fascinating. Google’s PR strategy frequently seems to involve inviting journalists to preview their research experiments. But instead of framing them as pie-in-the-sky ideas, some journalists cover them like working, fully-functional products that you will soon be able to buy.

My theory is Google wants their concepts to be covered as real products, because there is really no downside for them if/when they fail to deliver, unlike most other established companies. Can you imagine how the press might react if Toyota, GM, or even Tesla gave a ship date for some new car, only to completely can it months later?

Google reasonably benefits from a history of whimsy, but at what point will the press stop treating their flights of fancy with the same gravitas as real products?

Goodbye Ara

The Verge’s take on Reuters’ report:

Although Project Ara has always seemed a dubious commercial prospect, the news is surprising if only because Google made a renewed effort to push the modular concept at its I/O conference earlier this year, promising a developer version for fall and a consumer release for 2017.

I/O to me is as much an auto show as it is a developer conference. Ara is like a concept car. Sure they’re both kind of neat, but I’m much more interested in products I can actually buy.1

  1. It’s not that I think Google should stop featuring concepts, rather I wonder if they should separate them out more clearly from what is foreseeably being released. Focus Google I/O solely on what’s coming. As for the neat concepts still in the hopper… how about a Google Fair? ↩︎

Google First

According to Nick Statt at the Verge, Google is moving on from the Nexus brand in favor of Google branding:

Google is dropping the Nexus branding with its two upcoming, HTC-made smartphones. Instead, the company is expected to market the devices under a different name and to lean heavily on the Google brand in the process.

Not only that, but Google’s next phone may not even ship with stock Android.

The report states Google will load the devices with a special version of Android Nougat, as opposed to the standard “vanilla” version of the operating system that’s shipped on past and current Nexus devices. We don’t know for sure what these changes or additions will be. But Google CEO Sundar Pichai said as much back in June, when he mentioned the company would be more “opinionated” about Nexus design. “You’ll see us hopefully add more features on top of Android on Nexus phones,” he said at the Recode Code Conference.

I had always assumed the purpose of the Nexus line was primarily to provide OEMs with a reference design for the ideal Android experience, but what I found instead while looking for confirmation was this 2010 post on Google’s blog announcing the Nexus S (emphasis mine):

As part of the Nexus brand, Nexus S delivers what we call a “pure Google” experience: unlocked, unfiltered access to the best Google mobile services and the latest and greatest Android releases and updates.

While Nexus owners have long benefited from the latest and purest Android experience, it’s clear to me now that Nexus was always more about Google than it ever was about Android (or open source). I’ve already written about Google’s habit of replacing open source Android capabilities with closed source counterparts from Play. Now it looks like they are taking the next step and adding even more proprietary functionality with their own proprietary version of Android1.

Because the Truth is Less Striking

When I asked if the U.S. media had a political bias, a friend of mine suggested that the media’s bias for narrative is bigger than anything to do with politics. Nearly a decade later, I find this observation holds true and is particularly noticeable during this year’s election between the former Senator and Secretary of State, and the ruthless businessman.

Take the last decade as an example. After the largest terrorist attack committed on US soil, the former spent her time in the Senate largely working to help those devastated, then as Secretary of State, she helped oversee the assassination of its mastermind while maintaining a delicate relationship with the strategic ally in whose territory he was found. Meanwhile the latter became a reality TV personality and likely committed fraud.

By any measure of ethics or qualification, the former blows the latter away, and yet the US media has largely underplayed this clear contrast to instead emphasize and exaggerate whatever similarity they can find. They elevate a crook to a contender and agonize over whether or not the only one actually qualified is a crook.

Take this excellent piece by Matthew Yglesias, criticizing an AP exposé that suggests the qualified candidate used her position and influence as US Secretary of State to fund her family’s charity. Matt doesn’t challenge that there was a conflict of interest, rather the AP’s suggestion that it resulted in unethical behavior.

For example, the AP story leads with:

More than half the people outside the government who met with Hillary Clinton while she was secretary of state gave money — either personally or through companies or groups — to the Clinton Foundation. It’s an extraordinary proportion indicating her possible ethics challenges if elected president.

To which Matt points out:

To generate the 154 figure, the AP excluded from the denominator all employees of any government, whether US or foreign. Then when designing social media collateral, it just left out that part, because the truth is less striking and shareable.

Matt goes further, deconstructing the AP’s specific examples as banal coincidences, ultimately adding:

The AP put a lot of work into this project. And it couldn’t come up with anything that looks worse than helping a Nobel Prize winner, raising money to finance AIDS education, and doing an introduction for the chair of the Kennedy Center. It’s kind of surprising.

So why did the highly regarded AP go to such lengths to muddy this election’s most qualified candidate as unethical even after their exhaustive investigation found nothing to back that claim? Matt suggests it’s publication bias – that the exciting hypothesis of an unethical frontrunner got attention while the boring lack of evidence didn’t. While there might be an element of that, I think the AP and other news organizations are more deliberately promoting an ongoing narrative they can continue to derive headlines from throughout the election.

“Well regarded and highly qualified candidate still beating the snot out of crooked TV personality” does not make for interesting ongoing coverage, so instead we’re getting “qualified candidate struggles with allegations while known crook tells it like it is.”

Voice Recognition Beats Humans at Typing

From Aarti Shahani at NPR:

Researchers set up a competition, pitting a Baidu program called Deep Speech 2 against 32 humans, ages 19 to 32. The humans took turns saying and then typing short phrases into an iPhone — like “buckle up for safety” and “wear a crown with many jewels” and “this person is a disaster.” They found the voice recognition software was three times faster…

…”People probably play with Siri and find oh, it didn’t give them the right answer. So they don’t think to use speech as a way to do their text messaging or their email or what not,” [Stanford computer scientist James Landay] says. “Using speech for those things is now working really well.”

This reflects my own experience where I often find myself in situations where using Siri’s speech-to-text is the best way to quickly reply to a message, and while Siri’s still hit or miss with personal assistance tasks, the misses are almost never because he/she failed to capture what I said.

The Death of Car Ownership?

Commenting on a post on Daring Fireball where John Gruber asserted that design will matter even with self driving cars, Brian Fagioli on Twitter argued that:

self driving cars will lead to death of car ownership. Outward appearance won’t matter. Just comfort and amenities.

John responded with the following update to his original post.

If you disagree — if you think the outward appearance of a self-driving car doesn’t matter, only the comfort and amenities of the interior — I think you’re being shortsighted. If all self-driving cars are ungainly-looking, they’ll still sell. Uber is already buying ungainly-looking self-driving cars. But what happens when a company starts selling good-looking self-driving cars? Cars are status symbols — even cars you don’t own. What else explains the existence of black town cars? A lot of people used to argue that the exterior design of personal computers didn’t matter, either — only the functionality. No one argues that anymore.

Looking at airlines and hotels, I can’t see people investing the same amount of status in cars they don’t actually own. Sure, first class is way better than coach and people enjoy the status of being in first class, but I don’t think anyone is tying their ego to a slightly wider pleather seat. Also, black town cars are only nice compared to taxis, which is a pretty low bar.

That said, I will challenge Brian’s other assumption – that car ownership will die. Sure it might decline, but so long as there are suburbs, there will be rush hour commuters. I don’t see rush hour easily being taken over by autonomous Uber-like services for two reasons:

  1. It will be difficult to justify a fleet large enough to serve the mere 1–3 hours a day relevant to commuters.
  2. Uber and their ilk want to get rid of the human element, so who’s going to clean these cars, and who or what decides when cars should be cleaned? Folks may be willing to risk the occasional gross experience to get home safely or go to the airport, but I doubt they would be willing to take that risk twice daily.
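
The first point is really an arithmetic claim about utilization, so here is a back-of-envelope sketch. Every number below is a made-up illustrative assumption, not data:

```python
# Toy commuter-fleet math: a rush-hour fleet sits idle most of the day,
# which is exactly what makes it hard to justify economically.
commuters = 1_000_000           # hypothetical metro-area commuters
peak_hours = 3                  # length of each rush-hour window
trips_per_car_per_hour = 1      # long suburban runs limit re-use

# Cars needed so every commuter gets a ride within the morning peak.
cars_needed = commuters // (peak_hours * trips_per_car_per_hour)

# Each car earns fares only during the two daily peaks.
utilization = (2 * peak_hours) / 24   # fraction of the day in service
```

Under these assumptions the fleet needs hundreds of thousands of cars that earn money only a quarter of the day, while a personally owned car answers to exactly one rider’s schedule.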

My prediction is commuters will buy or lease cars even after they become autonomous, and just like now the cars they buy will continue to be not just a status symbol, but also an intimate reflection of their personality and taste.

And who knows, if autonomous cars make commuting suck less, even more people might move out to the burbs and we could see an increase in car ownership.

Tell Me if You Heard This One Before

From Mark Gurman, at Bloomberg:

Apple Inc. has hit roadblocks in making major changes that would connect its Watch to cellular networks and make it less dependent on the iPhone, according to people with knowledge of the matter. The company still plans to announce new watch models this fall boasting improvements to health tracking.

I suspect Gurman is spot on with his prediction that the next Apple Watch won’t have cellular, but did this tone that inspired such words as “roadblock” and “delay” also come from his sources? I’m doubtful given Apple’s long1 history2 of prioritizing battery life over cellular capabilities. My bet is the decision to not include cellular capabilities in this year’s model happened long before this latest batch of rumors.

  1. What Engadget had to say about the original iPhone in 2007: “The fact is, there’s only a very short list of properly groundbreaking technologies in the iPhone (multi-touch input), and a very long list of things users are already upset about not having in a $600 cellphone (3G, GPS, A2DP, MMS, physical keyboard, etc.).” ↩︎

  2. What The Verge had to say about the iPhone 4s in 2011: “The lack of LTE, a larger display, or a new design may put off some buyers…” ↩︎