Wednesday, December 25, 2024

Why Mark Zuckerberg thinks AR glasses will replace your phone

We have a very special episode of Decoder today. It’s become a tradition every fall to have Verge deputy editor Alex Heath interview Mark Zuckerberg on the show for Meta Connect. 

There’s a lot to talk about this year: on Wednesday, the company announced new developments in VR, AI, and the fast-growing world of consumer smart glasses, including a new pair of AR glasses the company is calling Orion. Before we start, Alex and I talked a little about the Orion demo he experienced at Meta’s headquarters, some of the context around the company’s big AR efforts of late, and how Mark is approaching his reputation as a leader and the public perception of Meta as a whole.

Nilay Patel: Alex, it’s good to have you. 

Alex Heath: Thanks for having me. It’s good to be back. 

NP: You had the opportunity to try on some prototype AR glasses, and you also sat down with Zuckerberg. Tell us what’s going on here.

AH: So the big headline this year out of Connect is Orion, which are AR glasses that Meta has been building for a really long time. Some important context up front is right before we started this interview, we had just demoed Orion together. I think I’m the first journalist, the first outsider, to do that with Zuckerberg on camera. That’s on The Verge’s YouTube channel. 

We had just come fresh off that demo, walked into the podcast studio, sat down, and hit record. It was fresh in our minds, and that’s where we started. Orion is very much the story of AR as a category. It’s something that Meta hoped would be a consumer product and decided toward the end of its development that it wouldn’t be because of how expensive it is to make. So instead, they’ve turned it into a fancy demo that people like me are getting around Connect this year. 

It’s really meant to signify that, “Hey, we have been building something the whole time. We finally have something that works. It’s just not something that we can ship at commercial scale.”

NP: The first thing that struck me listening to the interview was that Zuckerberg feels like he has control of the next platform shift, that platform shift is going to be glasses, and that he can actually take the fight to Apple and Google in a way that he probably couldn’t when Meta was a younger company, when it was just Facebook.

AH: Yeah, and they’re seeing a lot of early traction with the Meta Ray-Bans. We talked a lot about that, their expanded partnership with EssilorLuxottica, and why he thinks this really storied eyewear conglomerate out of Europe could do for smart glasses what Samsung did for smartphones in Korea. He sees this as becoming a huge millions-of-units-a-year market.

I think everyone here at The Verge can see that the Ray-Bans are an early hit and that Meta has tapped into something here that may end up being pretty big in the long run: not overpacking tech into glasses, so they still look good while doing a handful of things really well. And Meta is expanding on that rapidly this year with some other AI features that we also talked about.

NP: You got into that in depth, but the other thing that really struck me about this interview is that Zuck just seems loose. He seems confident. He seems almost defiant, in a way. 

AH: Yeah, he’s done a lot of self-reflection. In the back half of this interview, we get into a lot of the brand stuff around Meta, how he’s worked through the last few years, and where he sees the company going now, which is, in his own words, “nonpartisan.” He even admits that he may be naive in thinking that a company like Meta can be nonpartisan, but he’s going to try to play a back seat role to all of the discourse that has really engulfed the company for the last 10 years.

And we get into all of the dicey stuff. We get into the link between social media and teen mental health. We get into Cambridge Analytica and how, in hindsight, he thinks the company was unfairly blamed for it. I would say this is a new Zuckerberg, and it was fascinating to hear him talk about all of this in retrospect.

NP: The one thing I’ll say is he was in a very talkative mood with you, and you let him talk. There are some answers in there, particularly around the harms to teens from social media, where he says the data isn’t there, and I’m very curious how parents are going to react to his comments.

NP: All right, let’s get into it. Here’s Verge deputy editor Alex Heath interviewing Meta CEO Mark Zuckerberg.

The Orion smart glasses have been in the works for almost a decade, but Zuckerberg thinks they aren’t quite ready for the mainstream.
Photo by Vjeran Pavic / The Verge

This transcript has been lightly edited for length and clarity. 

Alex Heath: Mark, we just tried Orion together.

Mark Zuckerberg: Yeah. What did you think?

We’re fresh off of it. It feels like true AR glasses are finally getting closer. Orion is a product that you have been working on for five-plus years.

Take me back to the beginning when you started the project. When it started in research, what were you thinking about? What was the goal for it?

A lot of it goes all the way back to our relationship with mobile platforms. We have lived through one major platform transition already because we started on the web, not on mobile. Mobile phones and smartphones got started around the same time as Facebook and early social media, so we didn’t really get to play any role in that platform transition. 

But going through it, where we weren’t born on mobile, we had this awareness that, okay, web was a thing; mobile is a thing that is different. There are strengths and weaknesses of it. There’s this continuum of computing where, now, you have a mobile device that you can take with you all the time, and that’s amazing. But it’s small, and it kind of pulls you away from other interactions. Those things are not great. 

There was this recognition that, just like there was the transition from computers to mobile, mobile was not going to be the end of the line. As soon as we started becoming a more stable company, once we found our footing on mobile and we weren’t clearly going to go out of business or something like that, I was like, “Okay, let’s start planting some seeds for what we think could be the future.” Mobile is already getting defined. By 2012, 2014, it was generally too late to really shape that platform in a meaningful way. I mean, we had some experiments, but they didn’t succeed or go anywhere. 

Pretty quickly, I was like, “Okay, we should focus on the future because, just like there was the shift from desktop to mobile, new things are going to be possible in the future. So what is that?” I think the simplest version of it is basically what you started seeing with Orion. The vision is a normal pair of glasses that can do two really fundamental things. One is to put holograms in the world to deliver this realistic sense of presence, like you were there with another person or in another place, or maybe you’re physically with a person, but just like we did, you can pull up a virtual Pong game or whatever. You can work on things together. You can sit at a coffee shop and pull up your whole workstation of different monitors. You can be on a flight or in the back seat of a car and pull up a full-screen movie theater. There’s great computing and a full sense of presence, like you’re there with people no matter where they are.

Thing two is that it’s the ideal device for AI. The reason for that is because glasses are uniquely positioned to let an AI see what you see and hear what you hear. They give you very subtle feedback, where they can speak in your ear or show silent input on the glasses that other people can’t see and that doesn’t take you away from the world around you. I think that is all going to be really profound. Now, when we got started, I had thought that the hologram part of this was going to be possible before AI. It’s an interesting twist of fate that the AI part is actually possible before the holograms are really able to be mass-produced at an affordable price.

But that was the vision. I think that it’s pretty easy to wrap your head around [the idea that] there are already 1 to 2 billion people who wear glasses on a daily basis. Just like everyone who upgraded to smartphones, I think everyone who has glasses is pretty quickly going to upgrade to smart glasses over the next decade. And then I think it’s going to start being really valuable, and a lot of other people who aren’t wearing glasses today are going to end up wearing them, too. 

That’s the simple version. Then, as we’ve developed this out, there are more nuanced directions that have emerged. While that was the full version of what we wanted to build, there are all these things where we said, “Okay, maybe it’s really hard to build normal-looking glasses that can do holograms at an affordable price point. So what parts of that can we take on?” And that’s where we did the partnership with EssilorLuxottica.

So it’s like, “Okay, before you have a display, you can get normal-looking glasses that can stream video and capture content and have a camera, a microphone, and great audio.” But the most important feature at this point is the ability to access Meta AI and just have a full AI there, and it’s multimodal because it has a camera. That product is starting at $300. Initially, I thought, “Hey, this is on the technology path to building full holographic glasses.” At this point, I actually just think both are going to exist long term. I think there are going to be people who want the full holographic glasses, and I think there are going to be people who prefer the superior form factor or lower price of a device where they are primarily optimizing for getting AI. I also think there’s going to be a range of things in between. 

So there’s the full field of view that you just saw, where it’s 70 degrees, a really wide field of view for glasses. But I think that there are other products in between that, too. There’s a heads-up display version, which, for that, you probably just need 20 or 30 degrees. You can’t do full-world holograms where you’re interacting with things. You’re not going to play ping-pong in a 30-degree field of view, but you can communicate with AI. You can text your friends, you can get directions, and you can see the content that you’re capturing.

I think that there’s a lot there that’s going to be compelling. At each step along this continuum, from display-less to small display to full holographic, you’re packing more technology in. Each step up is going to be a little more expensive and is going to have more constraints on the form factor. Even though I think we’ll get them all to be attractive, the simpler ones will always fit in much smaller form factors. And then, of course, there are the mixed reality headsets, which kind of took a different direction while going toward the same vision. On that, we said, “Okay, well, we’re not going to try to fit into a glasses form factor. We’re going to really go for all the compute we want, and this is going to be more of a headset or goggles form factor.”

My guess is that that’s going to be a long-term thing, too, because there are a bunch of uses where people want the full immersion. And if you’re sitting at your desk and working for a long period of time, you might want the increase in computing power you’re going to be able to get. But I think there’s no doubt that what you saw with Orion is the quintessential vision of what I thought and continue to think is going to be the next major multibillion-person computing platform. And then all these other things are going to get built out around it.

It’s my understanding that you originally hoped Orion would be a consumer product when you first set out to build it.

Yeah. Orion was meant to be our first consumer product, and we weren’t sure if we were going to be able to pull it off. In general, it’s probably turned out significantly better than our 50-50 estimates of what it would be, but we didn’t get there on everything that we wanted to. We still want it to be a little smaller, a little brighter, a little bit higher resolution, and a lot more affordable before we put it out there as a product. And look, we have a line of sight to all those things. I think we’ll probably have the thing that was going to be the version two end up being the consumer product, and we’re going to use Orion with developers to basically cultivate the software experience so that by the time we’re ready to ship something, it’s going to be much more dialed in.

But to be clear, you’re not selling Orion at all. What I’m wondering is, when you made the call, I think it was around 2022, to say Orion is going to be an internal dev kit, how did you feel about that? Was there any part of you that was like, “I really wish this could have just been the consumer product we had built for years”?

I always want to ship stuff quickly, but I think it was the right thing. On this product, there’s a pretty clear set of constraints that you want to hit, especially around the form factor. It is very helpful for us that chunkier glasses are kind of ascendant in the fashion world because that allows us to build glasses that are going to be fashionable but also tech-forward. Even so, I’d say these are unmistakably glasses. They’re reasonably comfortable. They’re under 100 grams.

I wore them for two hours and I couldn’t really tell.

I think we aspire to build things that look really good, and I think these are good glasses, but I want it to be a little smaller so it can fit within what’s really fashionable. When people see the Ray-Bans, there’s no compromise on fashion. Part of why I think people like them is you get all this functionality, but even when you’re not using it, they’re great glasses. For the future version of Orion, that’s the target, too.

Most of the time you’re going through your day, you’re not computing, or maybe something is happening in the background. It needs to be good in order for you to want to keep it on your face. I feel like we’re almost there. We’ve made more progress than anyone else in the world that I’m aware of, but we didn’t quite hit my bar. Similarly, on price, these are going to be more expensive than the Ray-Bans. There’s just a lot more tech that’s going in them, but we do want it to be within a consumer price point, and this was outside of that range, so I wanted to wait until we could get to that range before shipping something.

Are you imagining that the first commercial version — whenever it’s ready in the next couple of years — will be a developer-focused product that you’re selling publicly? Or do you want it to be consumer-ready? 

That’s why I’m asking about the strategy, because Apple, Snap, and others have decided to do developer-focused plays and get the hardware going with developers early. But are you saying you’re skipping that and just going straight to consumer?

We are using this as a developer kit, but just primarily internally and maybe with a handful of partners. At this point, Meta is by far the premier developer of augmented reality and virtual and mixed reality software and hardware in the world. So you can think about it as a developer kit, but we have a lot of that talent in-house and then we also have well-developed partnerships with a lot of folks externally who we can go to and work with as well. 

I don’t think we need to announce a dev kit that arbitrary developers can go buy to get access to the talent that we need to go build out the platform. We’re in a place where we can work with partners and do that, but that’s absolutely what we’re going to do over the next few years. We’re going to hone the experience and figure out what we need to do to really nail it when it’s ready to ship.

A lot has been written about how much you’re spending on Reality Labs. You probably can’t give an exact number, but if you were to guess the cost of building Orion over the last 10 years, are we talking $5 billion-plus, or was it more than that?

Yeah, probably. But overall for Reality Labs, for a while, a lot of people thought all of that budget was going toward virtual and mixed reality. I actually think we’ve said publicly that our glasses programs are a bigger budget than our virtual and mixed reality programs, but that goes across all of them. So that’s the full AR, that’s the display-less glasses, all the work we’re going to do on Ray-Ban, and we just announced the expanded partnership with EssilorLuxottica. They’re a great company. We’ve had a great experience working with them. They’ve designed so many great glasses, and working with them to do even more is going to be really exciting. There’s a lot more to do there on all of these things.

How does this partnership work, and this renewal that you just did with them, how is it structured? What does this deal look like?

I think it was a kind of commitment from both companies that we’re feeling pretty good about how this is going, and we’re going to build a lot more glasses together. Rather than doing one generation and then designing the next generation, a longer-term partnership allows the teams to not just worry about one thing at a time: “Okay, is this one going to be good? And then how do we build on that for the next one?”

Now, we can start a multiyear roadmap of many different devices, knowing that we’re going to be working together for a long time. I’m optimistic about that. That’s sort of how we work internally. Sometimes, when you’re early on, you definitely want to learn from each device launch, but when there are things that you’re committed to, I don’t think you want the team to feel like, “Okay, if we don’t get the short-term milestone, then we’re going to cancel the whole thing.”

Are you buying a stake in EssilorLuxottica?

Yeah, I think we’ve talked about investing in them. It’s not going to be a major thing. I’d say it’s more of a symbolic thing. We want this to be a long-term partnership, and as part of that, I thought this would be a nice gesture. I fundamentally believe in them a lot. I think that they’re going to go from being the premier glasses company in the world to one of the major technology companies in the world. My vision for them is like how Samsung made it so that Korea became one of the main hubs of building phones in the world. I think this is probably one of the best shots for Europe, and Italy in particular, to become a major hub for manufacturing, building, and designing the next major category of computing platforms overall.

They’re kind of all in on that now, and it’s been this interesting question because they have such a good business and such deep competence in the areas. I’ve gotten more of an appreciation of how strong of a technology company they are in their own way: designing lenses, designing the materials that you need to make fashionable glasses that can be light enough but also feel good. They bring a huge amount that people in our world, the tech world, probably don’t necessarily see, but I think that they’re really well set up for the future. So I believe in the partnership. I’m really excited about the work that we’re doing together, and fundamentally, I think that that’s just going to be a massively successful company in the future.

Is it set up in a way where they control the designs and you provide the tech stack, or do you collaborate on the design? 

I think we collaborate on everything. Part of working together is that you build a joint culture over time, and there are a lot of really sharp people over there. I think it took maybe a couple of versions for each of us to gain an appreciation for how the other approaches things. They really think about things from this “fashion, manufacturing, lenses, selling optical devices” perspective. And we obviously come at it from a consumer electronics, AI, and software perspective. But I think, over time, we’ve just come to appreciate each other’s perspectives a lot more.

I’m constantly talking to them to get their ideas on different things. You know partnerships are working well when you reach out to them to get their opinion on things that are not actually currently in the scope of what you’re working on together. I do that frequently with Rocco [Basilico], who runs their wearables, and Francesco [Milleri], who’s their CEO, and our team does that with a large part of the working group over there. It’s a good crew. They share good values. They’re really sharp. And like I said, I believe in them, and I think it’s going to be a very successful partnership and company.

How many Ray-Ban Metas have you sold so far?

I don’t know if we’ve given a number on that. 

I know. That’s why I’m asking.

It’s going very well. One of the things that I think is interesting is we underestimated demand. One thing that is very different about consumer electronics versus software is that there are fewer supply constraints in software. There are some. I mean, some of the stuff that we’re rolling out, like the voice on Meta AI, we need to meter as we’re rolling it out because we need to make sure we have enough inference capacity to handle it, but fundamentally, we’ll resolve that in weeks.

But for manufacturing, you make these concrete decisions like, “Okay, are we setting up four manufacturing lines or six?” And each one is a big upfront [capital expenditure] investment, and you’re basically deciding upfront the velocity at which you’re going to be able to generate supply before you know what the demand is. On this one, we thought that Ray-Ban Meta was probably going to sell three or five times more than the first version did. And we just dramatically underestimated it. 

Now, we’re in this position where it’s actually been somewhat hard for us to gauge what the real demand is because they’re sold out. You can’t get them. So, if you can’t get them, how do you know where the actual curve is? We’re basically getting to the point where that’s resolved. Now, we kind of adjusted, and we made the decision to build more manufacturing lines. It took some time to do it. They’re online now. It’s not just about being able to make them; you need to get them into all the stores and get the distribution right. We feel like that’s in a pretty good place now. 

Over the rest of this year, we’re going to start getting a real sense of the demand, but while that’s going on, the glasses keep getting better because of over-the-air AI updates. So, even though we keep shipping new frames and adding more Transitions lenses because people want to wear them indoors, the hardware doesn’t necessarily change. And that’s an interesting thing because sunglasses are a little more discretionary, so I think a lot more people early on were thinking, “Hey, I’ll experiment with this with sunglasses. I’m not going to make these my primary glasses.” Now, we’re seeing a lot more people say, “Hey, this is actually really useful. I want to be able to wear them inside. I want them to be my primary glasses.”

So, whether that’s working with them through the optical channel or the Transitions lenses, that’s an important part, but the AI part of this also just keeps getting better. We talked about it at Connect: the ability to have, over the next few months when we roll this out, real-time translations. You’re traveling abroad, someone’s speaking Spanish to you, and you just get it translated into English in your ear. We’re starting with a few languages, and it will roll out to more and more languages over time.

I tried that. Well, actually, I didn’t try real-time translation, but I tried looking at a menu in French, and it translated it into English. And then, at the end, I was like, “What is the euro [price] in USD?” And it did that, too. I’m also starting to see the continuum of this to Orion in the sense of the utility aspects. You could say, “Look at this and remind me about it at 8PM tonight,” and then it syncs with the companion app. 

Yeah, Reminders are a new thing.

It’s not replacing the phone, but it’s augmenting what I would do with my phone. And I’m wondering if the [AI] app is a place for more of that kind of interaction as well. How are these glasses going to be more deeply tied to Meta AI over time? It seems like they’re getting closer and closer all the time.

Well, I think Meta AI is becoming a more and more prominent feature of the glasses, and there’s more stuff that you can do. You just mentioned Reminders, which is another example. Now, that is just going to work, and now your glasses can remind you of things. 

Or you can look at a phone number and say, “Call this phone number,” and then it calls on the phone.

Yeah, we’ll add more capabilities over time, and some of those are model updates. Okay, now it has Llama 3.2, but some of it is software development around it. Reminders you don’t get for free just because we updated the model. We have this big software development effort, and we’re adding features continuously and developing the ecosystem, so you get more apps like Spotify, and all these different things can work more natively.

So the glasses just get more and more useful, which I think is also going to increase demand over time. And how does it interact with phones? Like you said, I don’t think people are getting rid of phones anytime soon. The way I think about this is that when phones became the primary computing platform, we didn’t get rid of computers. We just kind of shifted. I don’t know if you had this experience, but at some point in the early 2010s, I noticed that I’d be sitting at my desk in front of my computer, and I’d just pull out my phone to do things.

It’s not like we’re going to throw away our phones, but I think what’s going to happen is that, slowly, we’re just going to start doing more things with our glasses and leaving our phones in our pockets more. It’s not like we’re done with our computers, and I don’t think we’re going to be done with our phones for a while, but there’s a pretty clear path where you’re just going to use your glasses for more and more things. Over time, I think the glasses are also going to be able to be powered by wrist-based wearables or other wearables. 

So, you’re going to wake up one day 10 years from now, and you’re not even going to need to bring your phone with you. Now, you’re still going to have a phone, but I think more of the time, people are going to leave it in their pocket or leave it in their bag, or eventually, some of the time, leave it at home. I think there will be this gradual shift to glasses becoming the main way we do computing.

It’s interesting that we’re talking about this right now, because I feel like phones are becoming kind of boring and stale. I was just looking at the new iPhone, and it’s basically the same as the year before. People are doing foldables, but it feels like people have run out of ideas on phones and that they’re kind of at their natural end state. When you see something like the Ray-Bans and how people have gravitated to them in a way that’s surprised you, and I think surprised all of us, I wonder if it’s also just that people want to interact with technology in different ways now.

Like you said at the beginning, the way that AI has intersected with this is kind of an “aha” thing for people. Honestly, I didn’t expect it to click as quickly as it did. But when I got whitelisted for the AI, I was walking around in my backyard and using it, and I was like, “Oh, it’s obvious now where this is going.” It feels like things are finally in a place where you can see where it’s going, whereas before, it was a lot of R&D and talking about it. The Ray-Bans are kind of a signifier of that, and I’m wondering if you agree.

I agree. I still think it’s early. You really want to be able to not only ask the AI questions but also ask it to do things and know that it’s going to reliably go do it. We’re starting with simple things, so voice control of your glasses, although you can do that on phones, too, and things like reminders, although you can generally do that on phones, too. But as the model capabilities grow over the next couple of generations and you get more of what people call these agentic capabilities, it’s going to start to get pretty exciting.

For what it’s worth, I also think that all the AI work is going to make phones a lot more exciting. The most exciting thing that has happened to our family of apps roadmap in a long time is all the different AI things that we’re building. If I were at any of the other companies trying to design what the next few versions of iPhone or Google’s phones should be, I think that there’s a long and interesting roadmap of things that they can do with AI that, as an app developer, we can’t. That’s a pretty exciting and interesting thing for them to do, which I assume they will.

On the AI social media piece, one of the wilder things that your team told me you’re going to start doing is showing people AI-generated imagery personalized to them, in feed. I think it’s starting as an experiment, but if you’re a photographer, you would see Meta AI generating content that’s personalized for you, alongside content from the people you follow.

It’s this idea that I’ve been thinking about, of AI invading social media, so to speak — maybe you don’t like the word “invading,” but you know what I mean — and what that does to how we relate to each other as humans. In your view, how much AI stuff and AI-generated stuff is going to be filling feeds in the near future?

Here’s how I come at this: in the history of running the company — and we’ve been building these apps for 20 years — every three to five years, there’s some new major format that comes along that is typically additive to the experience. So, initially, people updated their profiles; then they were able to post statuses that were text; then links; then photos early on; then videos; then mobile. Basically, Snap invented stories, the first version of that, and it became a pretty widely used format. Short-form video, I think, is still an ascendant format.

You keep on making the system richer by having more types of content that people can share and different ways to express themselves. When you look out over the next 10 years of, “This trend seems to happen where every three to five years, there are new formats,” I think you’d bet that that continues or accelerates given the pace of change in the tech industry. And I think you’d bet that probably most of the new formats are going to be AI-connected in some way given that that’s the driving theme for the industry at this point.

Given that set of assumptions, we’re trying to understand what things are most useful to people within that. There’s one vein of this, which is helping people and creators make better content using AI. So that is going to be pretty clear. Just make it super easy for aspiring creators or advanced creators to make much better stuff than they would be able to otherwise. That can take the format of like, “All right, my daughter is writing a book and she wants it illustrated, and we sit down together and work with Meta AI and Imagine to help her come up with images to illustrate it.” That’s a thing that’s like, she didn’t have the capability to do that before. She’s not a graphic designer, but now she has that ability. I think that that’s going to be pretty cool. 

Then there’s a version where you have this great diversity of AI agents that are part of this system. And this, I think, is a big difference between our vision of AI and most of the other companies’. Yeah, we’re building Meta AI as the main assistant that you can use. That’s sort of equivalent to the singular assistant that Google or OpenAI or different folks are building, but it’s not really the main thing that we’re doing. Our main vision is that we think there are going to be a lot of these. Every business, all the hundreds of millions of small businesses, just like they have a website and an email address and a social media account today, I think they’re all going to have an AI that helps them interact with their customers in the future, one that does some combination of sales and customer support and all of that.

I think all the creators are basically going to want some version of this that basically helps them interact with their community when they’re just limited by not having enough hours in the day to interact with all the messages that are coming in, and they want to make sure that they can show some love to people in their community. Those are just the two most obvious ones that even if we just did those, that’s many hundreds of millions, but then there’s going to be all this more creative [user-generated content] that people create that are kind of wilder use cases. And our view is, “Okay, these are all going to live across these social networks and beyond.” I don’t think that they should be constrained to waiting until someone messages them.

I think that they’re going to have their own profiles. They’re going to be creating content. People will be able to follow them if they want. You’ll be able to comment on their stuff. They may be able to comment on your stuff if you’re connected with them, and there will obviously be different logic and rules, but that’s one way that there’s going to be a lot more AI participants in the broader social construct. Then you get to the test that you mentioned, which is maybe the most abstract, which is just having the central Meta AI system directly generate content for you based on what we think is going to be interesting to you and putting that in your feed. 

On that, I think there’s been this trend over time where the feeds started off as primarily and exclusively content for people you followed, your friends. I guess it was friends early on, then it kind of broadened out to, “Okay, you followed a set of friends and creators.” And then it got to a point where the algorithm was good enough where we’re actually showing you a lot of stuff that you’re not following directly because, in some ways, that’s a better way to show you more interesting stuff than only constraining it to things that you’ve chosen to follow. 

I think the next logical jump on that is like, "Okay, we're showing you content from your friends and creators that you're following and creators that you're not following that are generating interesting things." And you just add on to that a layer of, "Okay, and we're also going to show you content that's generated by an AI system that might be something that you're interested in." Now, how big do any of these segments get? I think it's really hard to know until you build them out over time, but it feels like it is a category in the world that's going to exist, and how big it gets is kind of dependent on the execution and how good it is.

Why do you think it needs to exist as a new category? I'm still wrestling with why people want this. I get the companionship stuff that Character.AI and some startups have already shown there's a market for. And you've talked about how Meta AI is already being used for roleplaying. But until now, the big idea has been that AI intermediates the feeds through which humans reach each other. Now, all of a sudden, AIs are going to be in feeds with us, and that feels big.

But in a lot of ways, the big change already happened, which is people getting content that they weren’t following. And the definition of feeds and social interaction has changed very fundamentally in the last 10 years. Now, in social systems, most of the direct interaction is happening in more private forums, in messaging or groups. 

One of the reasons we were initially late with Reels in competing with TikTok is that we hadn't made this mental shift, where we kind of felt like, "No, the feed is where you interact with people." Actually, increasingly, the feed is becoming a place where you discover content that you then take to your private forums and interact with people there. It's like, I'll still have the thing where a friend will post something and I'll comment on it and engage directly in feed. Again, this is additive. You're adding more over time. But the main way that you engage with Reels isn't necessarily that you go into the Reels comments and talk to people you don't know. It's that you see something funny and you send it to friends in a group chat.

I think that paradigm will absolutely continue with AI and all kinds of interesting content. So it is facilitating connections with people, but already, we’re in this mode where our connections through social media are shifting to more private places, and the role of the feed in the ecosystem is more of what I’d call a discovery engine of content: icebreakers or interesting topic starters for the conversations that you’re having across this broader spectrum of places where you’re interacting.

Do you worry that interacting with AIs like this will make people less likely to talk to other people, that it will reduce the engagement that we have with humans?

The sociology that I've seen on this is that most people have way fewer friends physically than they would like to have. People cherish the human connections that they have, and the more we can do to make that feel more real and give you more reasons to connect, whether it's through something funny that shows up so you can message someone or a pair of glasses that lets your sister show up as a hologram in your living room when she lives across the country and you wouldn't be able to see her otherwise, that has always been our bread and butter, the main thing that we're doing.

But in addition to that, the average person, maybe they’d like to have 10 friends, and there’s the stat that — it’s sort of sad — the average American feels like they have fewer than three real close friends. So does this take away from that? My guess is no. I think that what’s going to happen is it’s going to help give people more of the support that they need and give people more reasons and the ability to connect with either a broader range of people or more deeply with the people they care about.

How are you feeling about how Threads is doing these days?

Threads is on fire. It’s great. There’s only so quickly that something can get to 1 billion people, so we’ll keep pushing on it. 

I’ve heard it’s still using Instagram a lot for growth. I’m wondering, when do you see it getting to a standalone growth driver on its own?

I think that these things all connect to each other. Threads helps Instagram, and Instagram helps Threads. I don't know that we have some strategic goal of making Threads completely disconnected from Instagram or Facebook. I actually think we're going in the other direction. It started off just connected to Instagram, and now we also connected it so that the content can show up [elsewhere].

Taking a step back, we just talked about how most people are interacting in more private forums. If you’re a creator, what you want to do is have your content show up everywhere because you’re trying to build the biggest community that you can in these different places. So it’s this huge value for people if they can generate a reel or a video or some text-based content. Now, you can post it on Threads, Instagram, Facebook, and more places over time. The direction there is generally more flow, not less, and more interoperability. And that’s why I’ve been pushing on that as a theme over time. 

I’m not even sure what X is anymore, but I think what it used to be, what Twitter used to be, was a place where you went when news was happening. I know you, and the company, seem to be distancing yourself from recommending news. But with Threads, it feels like that’s what people want and what people thought Threads might be, but it seems like you are intentionally saying, “We don’t want Threads to be that.”

There are different ways to look at this. I always looked at Twitter not as primarily about real-time news but as a shortform, primarily text discussion-oriented app. To me, the fundamental defining aspect of that format is that when you make a post, the comments aren’t subordinate to the post. The comments are kind of at a peer level.

That is a very different architecture than every other type of social network that’s out there. And it’s a subtle difference, but within these systems, these subtle differences lead to very different emerging behaviors. Because of that, people can take and fork discussions, and it makes it a very good discussion-oriented platform. News is one thing that people like discussing, but it’s not the only thing.

I always looked at Twitter, and I was like, “Hey, this is such a wasted opportunity. This is clearly a billion-person app.” Maybe in the modern day, when you have many billions of people using social apps, it should be multiple billions of people. There were a lot of things that have been complicated about Twitter and the corporate structure and all of that, but for whatever reason, they just weren’t quite getting there. Eventually, I thought, “Hey, I think we can do this. I think we can get this, build out the discussion platform in a way that can get to a billion people and be more of a ubiquitous social platform that I think achieves its full potential.” But our version of this is that we want it to be a kinder place. We don’t want it to start with the direct head-to-head combat of news, and especially politics.

Do you feel like that constrains the growth of the product at all?

I think we’ll see. We’ll run the experiment.

That needs to exist in the world. Because I feel like with X’s seeming implosion, it doesn’t really exist anymore. Maybe I’m biased as someone in the media, but I do think when something big happens in the world, people want an app that they can go to and see everyone that they follow talking about it immediately. There’s not an immediacy [on Threads].

Well, we're not the only company. There are a ton of different competitors and different companies doing things. I think that there's a talented team over at X, so I wouldn't write them off. And then obviously, there are all these other folks, and there are a lot of startups that are doing stuff. So I don't feel like we have to go after that first. I think that maybe we get there over time, or maybe we decide that it's enough of a zero-sum trade, or maybe even a negative-sum trade, where that use case should exist somewhere but it prevents a lot more usage and a lot more value in other places because it makes it a somewhat less friendly place. I don't think we know the answer to that yet. But I do think the last 8–10 years of our experience have shown that political discourse is tricky.

On the one hand, it’s obviously a very important thing in society. On the other hand, I don’t think it leaves people feeling good. I’m torn between these two values. I think people should be able to have this kind of open discourse, and that’s good. But I don’t want to design a product that makes people angry. There’s an informational lens for looking at this, and then there’s “you’re designing a product, and what’s the feel of the product?” I think anyone who’s designing a product cares a lot about how the thing feels.

But you recognize the importance of that discussion happening. 

I think it’s useful. And look, we don’t block it. We just make it so that for the content where you’re following people, if you want to talk to your friends about it, if you want to talk to them about it in messaging, there can be groups about it. If you follow people, it can show up in your feed, but we don’t go out of our way to recommend that content when you are not following it. I think that has been a healthy balance for us and for getting our products to generally feel the way that we want. 

And culture changes over time. Maybe the stuff will be a little bit less polarized and anger-inducing at some point, and maybe it’ll be possible to have more of that while also, at the same time, having a product where we’re proud of how it feels. Until then, I think we want to design a product where people can get the things that they want, but fundamentally, I care a lot about how people feel coming away from the product.

Do you see this decision to downrank political content for people who aren’t being followed in feed as a political decision? Because you’re also, at the same time, not really saying much about the US presidential election this year. You’re not donating. You’ve said you want to stay out of it now.

And I see the way the company’s acting, and it reflects your personal way you’re operating right now. I’m wondering how much more of it is also what you and the company have gone through and the political environment, and not necessarily just what users are telling you.

Is there a throughline there?

I’m sure it’s all connected. In this case, it wasn’t a tradeoff between those two things because this actually was what our community was telling us. And people were saying, “Generally, we don’t want so much politics. We don’t feel good. We want more stuff from our friends and family. We want more stuff from our interests.” That was kind of the primary driver. But it’s definitely the case that our corporate experience on this shaped this. 

I think there’s a big difference between something being political and being partisan. And the main thing that I care about is making sure that we can be seen as nonpartisan and be a trusted institution by as many people as possible, as much as something can be in the world in 2024. I think that the partisan politics is so tough in the world right now that I’ve made the decision that, for me and for the company, the best thing to do is to try to be as nonpartisan and neutral as possible in all of this and distance ourselves from it as much as possible. It’s not just the substance. I also think perception matters. Maybe it doesn’t matter on our platforms, whether I endorse a candidate or not, but I don’t want to go anywhere near that.

Sure, you could say that’s a political strategy, but for where we are in the world today, it’s very hard. Almost every institution has become partisan in some way, and we are just trying to resist that. And maybe I’m too naive, and maybe that’s impossible, but we’re going to try to do that.

On the Acquired podcast recently, you said that the political miscalculation was a 20-year mistake.

Yeah, from a brand perspective. 

And you said it was going to take another 10 years or so for you to fully work through that cycle. What makes you think it’s such a lasting thing? Because you look at how you personally have evolved over the last couple of years, and I think perception of the company has evolved. I’m wondering what you meant by saying it’s going to take another 10 years.

I'm just talking about where our brand and our reputation are compared to where I think they would've been. Sure, maybe things have improved somewhat over the last few years. You can feel the trend, but it's still significantly worse than it was in 2016, when the internet industry overall, and I think our company in particular, were seen way more positively.

Look, there were real issues. I think it’s always very difficult to talk about this stuff in a nuanced way because, to some degree, before 2016, everyone was sort of too rosy about the internet overall and didn’t talk enough about the issues. Then the pendulum swung and people only talked about the issues and didn’t talk about the stuff that was positive, and it was all there the whole time. When I talk about this, I don’t mean to come across as simplistic or—

Or that you guys didn’t do anything wrong or anything.

Or that there weren't issues with the internet or things like that. Obviously, every year, whether it's politics or other things, there are always things that you look back on and think, "Hey, if I were playing this perfectly, I would've done these things differently." But I do think it's the case that I didn't really know how to react to as big a shift in the world as what happened, and it took me a while to find my footing. It's tricky when you're caught up in these big debates and you're not experienced or sophisticated in engaging with them. I think you can make some big missteps. And with some of the things that we were accused of over time, it's been pretty clear at this point, now that all the investigations have been done, that they weren't true.

You’re talking about Cambridge Analytica and all that. 

I think Cambridge Analytica is a good example of something that people thought that all this data had been taken and that it had been used in this campaign. 

It turns out, it wasn’t used.

Yeah, it's all this stuff, and the data wasn't even accessible to the developer, and we'd fixed the issue five years earlier. But in the moment, it was really hard for us to have a rational discussion about that. Part of the challenge is that, for the general population, a lot of people read the initial headlines and don't necessarily read [the rest of the story]. Frankly, I don't think a lot of the media was as loud when all of the investigations concluded that a lot of the initial allegations were just completely wrong. I think that's a real thing.

You take these hits, and I didn’t really know how to push back on that. And maybe some of it, you can’t, but I’d like to think that we could have played some of this stuff differently. I do think it was certainly the case that when you take responsibility for things that are not your fault, you become a weak target for people who are looking for a source of blame for other things. It’s somewhat related to this, but when you think about litigation strategy for the company, one of the reasons I hate settling lawsuits is that it basically sends a signal to people that, “Hey, this is a company that settles lawsuits, so maybe we can sue them and they’ll settle lawsuits.”

You wouldn’t write a blank check to the government like Google did for its antitrust case.

No, I think the right way to approach this is when you believe in something, you fight really hard for it. I think this is a repeat game. It’s not like there’s a single issue. We’re going to be around for a long time, and I think it’s really important that people know that we’re a company that has conviction and that we believe in what we’re doing and we’re going to back that up and defend ourselves. I think that sets the right tone.

Now, over the next 10 years, I think we’re digging ourselves back to neutral on this, but I’d like to think that if we hadn’t had a lot of these issues, we would’ve made progress over the last 10 years, too. I give it this timeframe. Maybe 20 years is too long. Maybe it’s 15. But it’s hard to know with politics.

It feels like mental health and youth mental health may be the next wave of this.

That, I think, is the next big fight. And on that, I think a lot of the data on this is just not where the narrative is.

Yeah, I think a lot of people take it as if it’s an assumed thing that there is some link. I think the majority of the high-quality research out there suggests that there’s no causal connection at a broad scale between these things. 

Now, look, I think that’s different from saying, in any given issue, was someone bullied? Should we try to stop bullying? Yeah, of course. But overall, this is one where there are a bunch of these cases. I think that there will be a lot of litigation around them.

The academic research shows something that I think, to me, fits more with what I’ve seen of how the platforms operate. But it’s counter to what a lot of people think, and I think that’s going to be a reckoning that we’ll have to have. Basically, as the majority of the high-quality academic research comes out, okay, can people accept this? I think that’s going to be a really important set of debates over the next few years.

At the same time, you have acknowledged there are affordances in the product, like the teen [safety] rollout with Instagram recently, that you can make to make the product a better experience for young people.

Yeah, this is an interesting part of the balance. You can play a role in trying to make something better even if the thing wasn’t caused by you in the first place. There’s no doubt that being a parent is really hard. And there’s a big question of, in this internet age where we have phones, what are the right tools that parents need in order to be able to raise their kids? I think that we can play a role in giving people parental controls over the apps. I think that parental controls are also really important because parents have different ways that they want to raise their kids. Just like schooling and education, people have very significantly different local preferences for how they want to raise their kids. I don’t think that most people want some internet company setting all the rules for this, either.

Obviously, when there are laws passed, we’ll follow the government’s direction and the laws on that, but I actually think the right approach for us is to primarily align with parents to give them the tools that they want to be able to raise their kids in the way that they want. Some people are going to think that more technology use is good. That’s how my parents raised me growing up. I think it worked pretty well. Some people are going to want to limit it more, and we want to give them the tools to be able to do that. But I don’t think this is primarily or only a social media thing, even the parts of this that are technology.

I think the phone platforms have a huge part in this. There's this big question of how you do age verification. I can tell you what the easiest way is: every time you make a payment on your phone, there already is basically child age verification. From my perspective, it's not very excusable that Apple and, to some extent, Google don't want to just extend the age verification that they already have on their phones to be a parental control, so parents can basically say what apps their kids can use.

It’s hard for me to not see the logic in it, either. I don’t really understand.

Well, I think they don’t want to take responsibility.

But maybe that’s on Congress then to pass [a law determining] who has to take responsibility.

Yeah, and we’re going to do our part, and we’re going to build the tools that we can for parents and for teens. And look, I’m not saying it’s all the phone’s fault, either, although I would say that the ability to get push notifications and get distracted, from my perspective, seems like a much greater contributor to mental health issues than a lot of the specific apps. But there are things that I think everyone should try to improve and work on. That’s my view on all of that.

On the regulation piece as it relates to AI, you’ve been very vocal about what’s happening in the EU. You recently signed an open letter. I believe it was basically saying that you don’t have clarity on consent for training and how it’s supposed to work. I’m wondering what you think needs to happen for things to move forward. Because, right now, Meta AI is not available in Europe. New Llama models are not available. Is that something you see getting resolved? What would it take?

I don’t know. It’s a little hard for me to parse European politics. I have a hard enough time with American politics, and I’m American. But in theory, my understanding of the way this is supposed to work is they passed this GDPR regulation, and you’re supposed to have this idea of a one-stop shop home regulator who can basically, on behalf of the whole EU, interpret and enforce the rules. We have our European headquarters, and we work with that regulator. They’re pretty tough on us and pretty firm. But at least when you’re working with one regulator, you can understand how they are thinking about things and you can make progress.

The thing that has been tricky is there has been, from my perspective, a little bit of a backslide where now you get all these other [data protection authorities] across the continent also intervening and trying to do things. It seems like more of an internal EU political thing, which is like, “Okay, do they want to have this one-stop shop and have clarity for companies so companies can execute? Or do they just want it to be this very complicated regulatory system?”

I think that’s for them to sort out. But there’s no doubt that when you have dozens of different regulators that can ask you the same questions about different things, it makes it a much more difficult environment to build things. I don’t think that’s just us. I think that’s all the companies.

But do you understand the concern people and creators have about training data and how it’s used — this idea that their data is being used for these models but they’re not getting compensated and the models are creating a lot of value? I know you’re giving away Llama, but you’ve got Meta AI. I understand the frustration that people have. I think it’s a naturally bad feeling to be like, “Oh, my data is now being used in a new way that I have no control or compensation over.” Do you sympathize with that?

Yeah. I think that in any new medium in technology, there are the concepts around fair use and where the boundary is between what you have control over. When you put something out in the world, to what degree do you still get to control it and own it and license it? I think that all these things are basically going to need to get relitigated and rediscussed in the AI era. I get it. These are important questions. I think this is not a completely novel thing to AI, in the grand scheme of things. There were questions about it with the internet overall, too, and with different technologies over time. But getting to clarity on that is going to be important, so that way, the things that society wants people to build, they can go build. 

What does clarity look like to you there?

I think it starts with having some framework of, “Okay, what’s the process going to be if we’re working through that?”

But you don't see a scenario where creators get directly compensated for the use of their content in models?

I think there are a lot of different possibilities for how stuff goes in the future. Now, I do think that there’s this issue. While, psychologically, I understand what you’re saying, I think individual creators or publishers tend to overestimate the value of their specific content in the grand scheme of this.

We have this set of challenges with news publishers around the world, which is that a lot of folks are constantly asking to be paid for the content. And on the other hand, we have our community, which is asking us to show less news because it makes them feel bad. We talked about that. There’s this issue, which is, “Okay, we’re showing some amount of the news that we’re showing because we think it’s socially important against what our community wants. If we were actually just following what our community wants, we’d show even less than we’re showing.”

And you see that in the data, that people just don’t like to engage with the stuff?

Yeah. We've had these issues where sometimes publishers say, "Okay, if you're not going to pay us, then pull our content down." It's just like, "Yeah, sure, fine. We'll pull your content down." That sucks. I'd rather people be able to share it. But to some degree, some of these questions are negotiations, and they have to get tested by people walking away. Then, at the end, once people walk, you figure out where the value really is.

If it really is the case that news was a big thing that the community wanted then… Look, we’re a big company. We pay for content when it’s valuable to people. We’re just not going to pay for content when it’s not valuable to people. I think that you’ll probably see a similar dynamic with AI, which my guess is that there are going to be certain partnerships that get made when content is really important and valuable. I’d guess that there are probably a lot of people who have a concern about the feel of it, like you’re saying. But then, when push comes to shove, if they demanded that we don’t use their content, then we just wouldn’t use their content. It’s not like that’s going to change the outcome of this stuff that much.

To bring this full circle, given what you’ve learned from the societal implications of the stuff you’ve built over the last decade, how are you thinking about this as it relates to building augmented reality glasses at scale? You’re literally going to be augmenting reality, which is a responsibility. 

I think that’s going to be another platform, too, and you’re going to have a lot of these questions. The interesting thing about holograms and augmented reality is it’s going to be this intermingling of the physical and digital much more than we’ve had in other platforms. On your phone it’s like, “Okay, yeah, we live in a primarily physical world,” but then you have this small window into this digital world.

I think we’re going to basically have this world in the future that is increasingly, call it half physical, half digital — or I don’t know, 60 percent physical, 40 percent digital. And it’s going to be blended together. I think there are going to be a lot of interesting governance questions around that in terms of, is all of the digital stuff that’s overlaid physically going to fit within a physical national regulation perspective, or is it actually coming from a different world or something?

These will all be very interesting questions that we will have a perspective on. I’m sure we’re not going to be right about every single thing. I think the world will need to sort out where it wants to land. Different countries will have different values and take somewhat different approaches. I think that’s part of the interesting process of this. The tapestry of how it all gets built is something you need to work through so that it ends up being positive for as many of the stakeholders as possible.

There’s more to come. 
