Apple introduced Core ML during their most recent WWDC keynote address.
Core ML makes it easy for developers to create smarter apps with powerful machine learning that predicts, learns and becomes more intelligent.
Unlike the other announcements paraded on and off the McEnery Convention Center stage, Core ML wasn’t presented alongside some cool tech demo or shiny new product. Even the features that Core ML seems to power were credited with the more general layman’s term “machine learning”. No one would’ve thought twice if it had been sequestered entirely to the more technical Platforms State of the Union, and yet Core ML was given nearly two minutes of valuable stage time in an already overpacked keynote.
Apple’s Newsroom continues…
…this new framework for machine learning lets all processing happen locally on-device, using Apple’s custom silicon and tight integration of hardware and software to deliver powerful performance while maintaining user privacy.
I’d wager that Core ML didn’t make the keynote just because of its capabilities, but also because it aligns perfectly with two of Apple’s biggest core values: tight integration and user privacy. Furthermore, the phrasing of that Newsroom sentence is yet another indicator that Apple regards privacy as every bit as important as tight integration, both as a value and as a competitive advantage.
Whether it’s Amazon, Google or even Microsoft, nearly everyone else offering consumer experiences in this space claims that user data is necessary for machine learning to work, and that any loss of privacy is a necessary trade-off for whatever cool new capability is being touted. That was certainly true in the past and probably is still true today, but will it always be true?
I’m just old enough to remember when videogame arcades were still relevant in America. Arcades were able to charge anywhere from a quarter to a dollar per play in large part because they could offer games far more powerful than anything you could play at home. Arcades faded into obscurity once home computers and consoles came even close to being comparably capable.
The main difference, of course, is that awesome-looking games require only compute power, whereas machine learning also needs data. And while we’ve been able to keep increasing the power of our machines, the amount of data that any one machine would have to learn from has remained relatively unchanged.
I don’t think Apple can build a smarter iPhone based solely on the data from that iPhone, but I don’t think they have to in order to maintain privacy. Here’s my guess as to how: Apple is accumulating data from hundreds of millions of powerful devices, anonymizing it using differential privacy, then sending the learnings back to each device, which can refine its model to run locally against the entire user’s data.
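To give a feel for the anonymizing step, here’s a toy sketch of randomized response, one of the classic techniques behind local differential privacy. This is purely illustrative and not Apple’s actual mechanism: each device flips its answer often enough that no single report reveals the truth, yet the server can still de-bias the aggregate.

```python
import random

def randomize(bit):
    """Device-side: report the true bit only half the time;
    otherwise report a uniformly random bit. Any individual
    report is therefore plausibly deniable."""
    if random.random() < 0.5:
        return bit
    return random.random() < 0.5

def estimate_true_rate(reports):
    """Server-side: observed rate = 0.5 * true_rate + 0.25,
    so invert to recover true_rate = 2 * observed - 0.5."""
    observed = sum(reports) / len(reports)
    return 2 * observed - 0.5

# Simulate 100,000 devices, 30% of which actually have some trait.
random.seed(0)
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomize(b) for b in truths]
print(estimate_true_rate(reports))  # an estimate close to 0.30
```

No device ever sends its raw bit, yet the population-level learning (roughly 30% have the trait) survives, which is the bargain the paragraph above describes.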
Sure, this is pure speculation, but it sounds plausible to this non-expert, and if I’m anywhere close to right, then how long will it take private machine learning to catch up to increasingly creepy machine learning? It took decades for PCs and consoles to make the trade-off of traveling to and paying an arcade no longer worth it for most video game enthusiasts. If I’m right, and if Apple can pull it off, I give private machine learning five years or less to catch up in the ways most consumers care about.
I really hope I’m right.