Google’s James Manyika: ‘The productivity gains from AI are not guaranteed’

On a panel last November, OpenAI co-founder Sam Altman predicted there would be an artificial intelligence breakthrough in 2024 that no one saw coming. Google executive James Manyika agreed: “Plus one to that.”

Has the past year lived up to expectations? The summer sell-off in tech stocks reflected a sense that the adoption of AI would take longer than anticipated.

Manyika points to what has been achieved. Transformers — the technology underpinning large language models — have allowed Google Translate to more than double the number of languages it supports to 243. Google’s Gemini chatbot is able to switch seamlessly (at least in certain contexts) between text, photos and videos. It also allows users to enter ever more complex queries.

For example, while commuting from San Francisco to Silicon Valley, Manyika hoped to be able to listen to a summary of recent research in his field: he wanted to put 100 technical papers into Gemini, then hear two AI voices discussing them. “I’m now able to do that. That’s an example of a big breakthrough.”

But many users view LLMs such as Gemini as clever curiosities, not business-critical technology. Does Manyika actually spend his commute listening to AI voices discuss technical papers? The answer seems to be that he still prefers human podcasts.

As Google’s senior vice-president for research, technology and society, Manyika must perform a balancing act. He must spell out the transformational potential of AI, while also convincing policymakers and the public that Google is not pursuing it recklessly.

Last year the “godfather of AI” Geoffrey Hinton resigned from Google, citing the technology’s unmanaged risks. Shortly afterwards, Manyika promised the company would be “responsible from the start”. Born and educated in Zimbabwe before gaining a robotics doctorate from Oxford, he is a spokesman for ensuring the technology’s benefits reach the developing world.

Manyika, 59, spent his career at McKinsey before joining Google in 2022. I am interested in how he sees the real-world application of AI tools.

“Right now, everyone from my old colleagues at McKinsey Global Institute to Goldman Sachs are putting out these extraordinary economic potential numbers — in the trillions — [but] it’s going to take a whole bunch of actions, innovations, investments, even enabling policy . . . The productivity gains are not guaranteed. They’re going to take a lot of work.”

In 1987 economist Robert Solow remarked that the computer age was visible everywhere except in the productivity statistics. “We could have a version of that — where we see this technology everywhere, on our phones, in all these chatbots, but it’s done nothing to transform the economy in that real fundamental way.”

The use of generative AI to draft software code is not enough. “In the US, the tech sector is about 4 per cent of the labour force. Even if the entire tech sector adopted it 100 per cent, it doesn’t matter from a labour productivity standpoint.” If a sector employing 4 per cent of workers became, say, 25 per cent more productive, aggregate labour productivity would rise by only about 1 per cent. Instead the answer lies with “very large sectors” such as healthcare and retail.

Former British prime minister Sir Tony Blair has said that people “will have an AI nurse, probably an AI doctor, just as you’ll have an AI tutor”. Manyika is less dramatic: “In most of those cases, those professions will be assisted by AI. I don’t think any of those occupations are going to be replaced by AI, not in any conceivable future.”

The precedents are not encouraging. While at McKinsey, Manyika predicted that the pandemic would allow companies to pursue digital transformation: he concedes many did so “in a cost-cutting direction”. Now Manyika agrees executives are incentivised to replace workers with AI, rather than deploy the technology to assist them.

How could large language models transform his old industry of management consulting? Manyika emphasises the potential for models such as Gemini to draft and summarise. “In my role, I have teams working on lots of projects: I might say, ‘Give me a status update on project Y,’ and it will give me summaries from all the documents that are in my email and the conversations that we’re having.”

Summarising and drafting are the tasks that, for example, young lawyers perform. Will law firms transform profoundly, as young recruits are no longer required? “Yes, but . . . ” says Manyika, emphasising that his vision is for firms to use AI to increase their top line, not just cut costs.

“You don’t win by cutting costs. You win by creating more valuable outputs. So I would hope that those law firms think about, ‘OK, now we have this new productive capacity, what additional value-added activities do we need to be doing to capitalise on what is now possible?’ Those are going to be the winning firms.”


Google’s search engine dominates the web — in August a US judge found it amounted to an illegal monopoly. But many publishers worry AI could make things worse.

Google now answers some search queries with AI summaries. Chatbots provide an alternative source of information. In both cases, internet users may find what they need without clicking on any links — cutting off the flow of advertising revenue to the publishers who produced the information in the first place.

Before I met Manyika in July, I asked Gemini: “What is the top news in the Financial Times today?” Gemini responded: “Several top news stories are featured in the Financial Times today (November 28, 2024)” — sic. The response went on to list five headline stories, most of which seem to date from December 2023.

“But it also sends you to the site. We still provide links in Gemini, right?” says Manyika. In fact, although Gemini mentioned the FT website in its answer, it only provided two links — to rival news websites.

Manyika points to an option in Gemini’s response called “show drafts”. This is Google’s attempt to show that the chatbot produces no “singular, definitive answer”: running the same query twice can produce different takes. I hadn’t even noticed the option, and I doubt users will see it as compensation for the chatbot’s unreliability.

Discouraging users from clicking on links would be “a terrible own goal”, given Google’s ad-reliant business model, argues Manyika. He likens publishers’ concern about traffic to fears, when search shifted from desktops to smartphones, that only one link would be visible and the rest would be ignored. “People still went to everything else, in a lot of detail.”

(After the interview, a Google publicist sent me a screenshot of a Gemini answer that did contain a link to the FT website. I tried again, but was still unable to replicate this.)

The broader question is whether Google has been pushed to roll out AI products faster than it would like, so as not to fall behind OpenAI and others. Google’s AI summaries have advised users that, for example, it is healthy to eat rocks.

Only one in 7mn AI summaries has a content policy violation, such as suggesting users eat rocks, Google says. Manyika professes to love Google’s culture of internal debate: “Half the time, people think we shouldn’t release anything. Half the time, people think we’re being too slow.

“We won’t always get those things right, I think that’s OK.” Google has “had lots of things that it’s held back. When I was joining the company, it made a choice not to put out facial recognition technology.” He politely does not mention that, by contrast, OpenAI’s Altman has invested in eyeball-scanning.


Robo-drafting brings other risks. We gain clues about someone’s personality and competence from their writing style. If chatbots take hold, we may lose those clues. Manyika argues that I’m romanticising the present: “When I write you a letter, maybe my assistant has drafted it.”

I give another example: isn’t it unhelpful that, because students use chatbots, many teachers can no longer set written homework? “How many times do you think a teacher who marks 100 essays reads every essay from start to finish?” he replies. This isn’t exactly the point.

Manyika is friends with the singer will.i.am, who “set up a school in Compton, a poor neighbourhood in LA. Those kids have been promised for decades that someone’s going to show up and teach [them] how to code. That person never showed up. Now, when they use an LLM to draft code, is that a good thing or a bad thing?”

The apparent paradox is that, in the very era that Google has brilliantly spread the world’s information, the world appears more susceptible to misinformation. “I don’t know if I’d put that on Google,” says Manyika. Maybe not, but upending the way we get and process information potentially has serious, unforeseen downsides.

Manyika shifts the conversation to quantum computing. “We actually think quantum computing will enable us to build AI differently. Stay tuned — you’re going to see some important milestone news later this year. What hasn’t happened yet . . . is no one has shown a computation that in principle you can’t even do with a classical computer.” The tech believers’ strength is to produce a new trick while the world is still struggling to assess the last one.
