Monday, December 23, 2024

Apple built a new AI system that knows everything and nothing about you


Apple has gone to great lengths to avoid showing any interest in you. It might not always seem like it, given that your iPhone probably stores data on your entire life, but the company has set a clear and central policy of minimising the amount of data that it holds on its users. That means, for instance, that it avoids storing information on its servers, and when it does, it encrypts it so that it can’t see that data. It simply doesn’t want to know – out of practicality, because if it doesn’t hold data then it isn’t at risk of misusing or losing it, but also as a point of principle, because it believes privacy is a human right.

But this year it announced Apple Intelligence, a whole set of new AI features aimed at enhancing the iPhone. They were first revealed in June and will arrive in the coming months, as part of the rollout of the new iPhone 16. As with all AI, Apple Intelligence is about crunching data; Apple’s marketing materials are clear that the advantage of the system is that it knows your “personal context” and so can prove genuinely useful. Apple might not want to know about you, but Apple Intelligence needs to.

It looks like a big problem, perhaps an insurmountable one. But Apple – and its software chief Craig Federighi, who speaks to The Independent hours after the launch of the iPhone this week – thinks that it has built a solution.

Apple’s response was to take the work it has done on on-device processing – keeping data from itself by ensuring that the computing is done on your iPhone, rather than on its own servers – and apply it to AI. “What if you were crazy enough to think you could extend that [on-device processing] to the cloud? What would that mean? And it meant a whole bunch of technical invention,” says Federighi.


That invention has led to what security experts have called “cutting edge computer science” that allows for “insanely great privacy and security”. Much of that is made possible by what Apple calls “Private Cloud Compute”, a new technology that Apple claims means that users don’t have to choose between using their personal data in helpful ways and also keeping that data secure.

“We’ve built up this privacy model around on-device computation, that we’ve really highlighted to our customers, that our customers have come to understand and trust as the gold standard for protecting their data, knowing that if the data stays under their control and is not made accessible to anyone else, that’s the ultimate bar for privacy,” says Federighi. “And so the single biggest decision around how we approached privacy in the cloud: can we make that same kind of promise? Can we extend that same model of on-device protection to more powerful servers that are running in the cloud?

“And I’d say that was an unusual and bold decision, one that is atypical and I think unique in the industry, and to live up to that then took us down solving a whole stack of very, very hard problems – but, fortunately, ones that have been so fundamental to what we’ve been building toward for so many years.”

So Apple got to work on building new servers out of its own silicon, which Federighi notes is not something the company is in the business of doing. And it had to build a new operating system to power those servers, with Apple’s commitment to privacy built into the core of how it worked. But it also had to solve more theoretical issues: how do you build huge distributed systems to help understand people and their needs, and also ensure their privacy?

The first answer is that Apple aims to use Private Cloud Compute as little as it can. A large part of the company’s privacy work has been built around the idea that it is best to do as much as possible on the device, and Apple’s work in AI so far has relied on that. So instead of handing off photos to identify the people in them, for instance, the pictures stay on the iPhone, which scans them overnight while its owner sleeps.

The problem with Apple Intelligence, however, is that the increased complexity of the AI tools underpinning it cannot always run quickly on the device; doing so on Apple’s servers means that it can run more complex models on more data. It is made for “times when a user needs models that are larger than what can fit in their pocket today”, Apple said in its announcement. But Apple’s aim is still to ensure that most of the work can be done inside their pocket.

When the iPhone receives a request – “Is my brother’s flight delayed?”, for instance – its first job is to analyse whether it can process it itself. When it can’t, it hands the job off to Private Cloud Compute. But the first of the privacy safeguards is that it hands off only that job: Apple’s servers receive only the data that is relevant to the task, and that data is not associated with the iPhone or its owner.
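That on-device-first routing can be pictured in code. The sketch below is purely illustrative: `IntelligenceRequest`, `canHandleOnDevice` and both handler stubs are names invented for this article, not Apple’s actual Private Cloud Compute API.

```swift
import Foundation

// Illustrative sketch of the on-device-first routing described above.
// Every name here is invented; this is not Apple's real API.
struct IntelligenceRequest {
    let task: String        // e.g. "Is my brother's flight delayed?"
    let relevantData: Data  // only the data needed for this one task
}

enum IntelligenceRouter {
    /// Stand-in for the real capability check: can the on-device model
    /// answer this request within its memory and latency budget?
    static func canHandleOnDevice(_ request: IntelligenceRequest) -> Bool {
        request.relevantData.count < 512 * 1024  // illustrative threshold
    }

    static func handle(_ request: IntelligenceRequest) async throws -> String {
        if canHandleOnDevice(request) {
            // Preferred path: everything stays in the user's pocket.
            return try await runOnDeviceModel(request)
        }
        // Fallback: hand off only the task-relevant data, carrying no
        // identifier that ties it to the iPhone or its owner.
        return try await sendToPrivateCloudCompute(request)
    }

    // Stubs so the sketch is self-contained; real inference goes here.
    static func runOnDeviceModel(_ request: IntelligenceRequest) async throws -> String {
        "on-device answer"
    }

    static func sendToPrivateCloudCompute(_ request: IntelligenceRequest) async throws -> String {
        "cloud answer"
    }
}
```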

That data is kept fenced off from Apple and anyone else while the servers do their work on it and hand the answer back. Once the request is answered, the server deletes that data. The data no longer exists – and even while it did, it was inaccessible to Apple.
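On the server side, that amounts to stateless, request-scoped processing: nothing is written to durable storage, and the working data disappears the moment the reply is sent. A sketch of the idea, again with invented names, reusing the hypothetical `IntelligenceRequest` type from the sketch above:

```swift
import Foundation

// Illustrative only: the request lives entirely in memory for the
// duration of one call - no log, no copy, no user profile.
func serve(_ request: IntelligenceRequest) async throws -> String {
    var workingCopy = request.relevantData  // exists only for this call
    defer {
        // Zero the buffer before it is released, so the data is gone
        // as soon as the answer has been handed back.
        workingCopy.resetBytes(in: 0..<workingCopy.count)
    }
    return try await runLargeModel(on: workingCopy, task: request.task)
}

// Stub so the sketch is self-contained; real model inference goes here.
func runLargeModel(on data: Data, task: String) async throws -> String {
    "answer"
}
```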

All of that is made possible by a vast array of largely new systems that includes servers that use chips Apple has built itself to ensure they conform to the highest standards of security. Those chips include technologies that ensure they cannot be abused: staff in the server centres are given no privileged access to the systems, even in the case of an outage. The iPhone will only talk to servers that it can securely verify itself, and experts are able to inspect the code too.

Unlike code on the iPhone, this code runs somewhere else. As such, it could be “changed out from underneath the users and put back” – with a new security hole that would allow an attacker to steal data, for instance. So it was key to ensure that the phone will only talk to servers running software that has been published externally, so that security researchers can check it.
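That check is, in effect, remote attestation against a public transparency log: the phone refuses to talk to a server unless the server can prove it is running a software image that has been published for researchers to inspect. A simplified sketch of the client-side decision, with invented types rather than the real protocol:

```swift
import Foundation
import CryptoKit

// Illustrative sketch: trust a server only if the software image it
// reports is one that has been published externally for audit.
struct ServerAttestation {
    // Measurement of the running software, as reported by the server's
    // secure hardware.
    let softwareImageHash: SHA256Digest
}

struct TransparencyLog {
    // Hashes of every externally published server build.
    let publishedImageHashes: Set<Data>

    func contains(_ digest: SHA256Digest) -> Bool {
        publishedImageHashes.contains(Data(digest))
    }
}

func shouldTrust(_ attestation: ServerAttestation, against log: TransparencyLog) -> Bool {
    // An image that was never published could have been "changed out
    // from underneath the users", so the phone refuses it.
    log.contains(attestation.softwareImageHash)
}
```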

Apple’s approach to privacy has long been blamed for its perceived difficulties in AI, because artificial intelligence depends on having a lot of data, and Apple is fundamentally opposed to gathering up that data. “For many years – prior to the latest breakthroughs in generative AI – there was a lot of discussion of: ‘how can you possibly play in the world of machine learning, if machine learning is data based and if you at Apple are not going to be trying to fuel your machine learning on the backs of your customers’ privacy data?’” says Federighi. “We made the argument back then that holds today – that it may be harder, but there are principled ways to do it.

“You can learn to recognise photos of trees without having taken your customers’ personal photos of trees. To do it, you may have to go commission data, acquire public data from other sources, to train your intelligence. But that’s possible. We never want to take a customer’s personal data – the kind of data they store in their photo libraries, their messages and their emails and their notes – and think that’s ours to be used for our purposes. But the good news is we don’t need to.”

All of this might have looked like Apple being overly cautious – had the need for caution not been proven almost immediately. Only moments after Apple Intelligence had been announced, at its Worldwide Developers Conference in June, Elon Musk tweeted that he was so concerned about the technology that he would ban iPhones from his companies.


“Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river,” he wrote in one tweet. “If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation,” he wrote in another.

The posts were both based on, and furthered, a misunderstanding – whether intentional or not – of how the new features worked. But they were also an important reminder of exactly how much risk Apple is taking as it gets involved in artificial intelligence, and of the questions of privacy that surround the nascent technology.

Mr Musk’s criticism related specifically to the integration of OpenAI’s ChatGPT that will arrive as part of Apple Intelligence. While Apple does almost all of the new AI features itself, it has also integrated that chatbot so that users can send their requests to ChatGPT as well, and get a response back. It is only a tiny sliver of the Apple Intelligence offering – almost all of the new features use Apple’s own models, run on Apple’s own hardware – but it was one that immediately brought controversial headlines.

The OpenAI integration comes with a whole host of privacy protections: users have to explicitly opt in to sharing any requests with OpenAI, companies can switch it off on phones they manage, and only the content of the request is shared, with OpenAI promising not to store it or associate it with any particular user. But just one of Elon Musk’s tweets on the issue gathered 80,000 retweets, and many people may now be worried without any understanding of the work that Apple has done to avoid those risks. Federighi, however, suggests that the incident only showed how important all this work was.
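Those protections amount to layered consent gates that must all pass before anything leaves the device for OpenAI. A hypothetical sketch: these flags and names are illustrative, not Apple’s real configuration keys.

```swift
// Hypothetical sketch of the layered checks described above.
struct ChatGPTHandoffPolicy {
    let userHasOptedIn: Bool             // explicit, user-level opt-in
    let allowedByDeviceManagement: Bool  // managed phones can switch it off
    let userConfirmedThisRequest: Bool   // approval for this specific hand-off

    var maySend: Bool {
        userHasOptedIn && allowedByDeviceManagement && userConfirmedThisRequest
    }
}

/// Returns only the content of the request - no account identifier or
/// device data - and nothing at all unless every gate is open.
func contentToShareWithChatGPT(_ requestContent: String,
                               under policy: ChatGPTHandoffPolicy) -> String? {
    guard policy.maySend else { return nil }
    return requestContent
}
```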


“There was some, I’ll argue, unnecessary muddying of the waters that may have occurred there,” he says. “But what was incredibly positive coming out of that event was how clearly we were hearing exactly how we felt ourselves – which is that privacy was incredibly important to a lot of people, to many, many, many of our customers, especially when it came to AI.

“Because all we were hearing was how concerned they were that, you know, can Apple really do that? Does that really work? And as we’ve continued to talk to them, what we’re hearing is it’s important.

“Some of them look into what we’re proposing and say, ‘This is amazing. You know, I feel so much better’. Others are saying, ‘This sounds like magic. How could they possibly be doing that? I’m not sure I believe it yet’.

“But honestly, I am thrilled with that reaction, because the truth is on our side, right? We know that we are doing it, and as we continue to get the message out, people will understand it, and we’ve gotten reactions from those in the security community who have had the opportunity and the years of background it takes to delve deeply into these issues, to look at what we’ve been doing, and they’re saying this is breakthrough work.

“And I think the average consumer ultimately relies on experts who can bring the depth of analysis to this to tell them: yes, what Apple’s saying is true. They’ve delivered something unlike anything else we’ve ever seen in the industry, and you can trust it. And as that message is getting out, I think that’s reassuring our customers, and I think right now the general public needs to be reassured, because if this kind of powerful, personalised artificial intelligence is ever going to be able to really impact people’s lives, it’s only going to be if people can trust it.

“And I think what we’re hearing is people want to take advantage of these capabilities, but have been, up to now, scared of how they would feel about revealing their data to these kinds of services. And we’re excited to finally give them what I think they’ve all been looking for.”

Apple may be doing hard and lauded work on privacy and artificial intelligence, but the industry more broadly has already been associated with a host of privacy worries and other controversies. Is Federighi worried that even if Apple is doing the right thing, the reputation of AI more broadly could be mixed up with so many outrages that users will be concerned about using it at all?

Customers are going to “fall on a spectrum” of excitement and concern about the technology, he says. “But I think that at Apple we have worked very hard to earn our customers’ trust, and I think our customers understand that when we put a capability in front of them on their device, that is something they can trust, and I think many of them will.


“I think as long as we continue delivering on that, that they’ll continue to want to take advantage of it, because the capabilities are tremendously helpful and empowering. And, you know, we’re just getting started with the ways this technology can enrich our customers’ experience with our products – and they need to do so knowing that their privacy is being protected every step of the way.”

There is a whole separate privacy question around some of Apple’s upcoming tools. Among the large suite of features coming as part of Apple Intelligence is Image Playgrounds, a system that lets users pick prompts or images and have them turned into new AI pictures. Those images can be pictures from their photo library – which means that a person’s likeness could be run through AI without their knowledge. Is that a privacy concern?

“I think one of the things that we were very conscious of in considering how we approach imagery is wanting it always to be clear what is real. Because I think there’s an opportunity for certain misinformation or misunderstanding, if someone were to take an image of any one of us or a public figure and modify it in a way that distorts the truth. And so with Image Playgrounds, we’ve really been focused on fun, cartoon-style or sketch-style imagery for the purpose of communication. The kind of thing that I could draw – if I were far more talented than I am: a cartoon image of you.

“I wouldn’t think in that sense that I’ve appropriated your image in any kind of unusual way. But no one’s going to think that a picture I took of you doing something meant that you’d really done it. And so making sure that the provenance of these images and separating generative output from what could be a photograph was a really important consideration for us.”

There are big questions hovering over that feature and the rest of the tools that will arrive as part of Apple Intelligence: will people want them, and will people trust them? Many of the new features will not arrive for months, but those questions will soon be answered. Apple might want to know as little as possible about its users, but it is about to find out.
