Monday, December 23, 2024

Apple AI Tells Users Luigi Mangione Has Shot Himself

“I am surprised that Apple put their name on such demonstrably half-baked product.”

Claim Denied

Apple’s generative AI should be making headlines. Instead, it’s making them up.

Just days after its launch in the UK, the tech company’s Apple Intelligence model blurted out a totally fabricated headline about Luigi Mangione, the 26-year-old man who was arrested earlier this month for the murder of UnitedHealthcare CEO Brian Thompson.

As the BBC reports, the Apple AI feature incorrectly summarized the BBC’s reporting in an alert sent to iPhone users, making it sound like the suspect had attempted suicide.

“Luigi Mangione shoots himself,” reads the AI’s bogus BBC notification.

It’s yet another high-profile example of AI incorrectly reporting current events — again raising serious questions about the technology’s role as a mediator of information.

Model Behavior

A spokesperson from the BBC said the broadcaster has complained to Apple “to raise this concern and fix the problem.”

Apple has declined to comment publicly on the matter.

“BBC News is the most trusted news media in the world,” the BBC spokesperson said. “It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.”

And this was no fluke. The report also identifies another fib by Apple Intelligence in its three-item news notifications.

When summarizing a report from The New York Times last month about the International Criminal Court issuing an arrest warrant for Israeli Prime Minister Benjamin Netanyahu, the AI sent out a headline claiming: “Netanyahu arrested.”

Game of Telephone

Apple Intelligence, which debuted in the US in October, was finally released in the UK last Wednesday. It’s safe to say that its news feature couldn’t have gotten off to a worse start: Mangione is one of the most talked-about men on the planet right now, and anything he does is newsworthy.

“I can see the pressure getting to the market first, but I am surprised that Apple put their name on such demonstrably half-baked product,” Petros Iosifidis, a professor of media policy at City, University of London, told the BBC. “Yes, potential advantages are there — but the technology is not there yet and there is a real danger of spreading disinformation.”

However, this danger is fundamental to generative AI, and not just Apple’s flavor of it. AI models routinely hallucinate and make up facts. They have no understanding of language, but instead use statistical predictions to generate cogent-sounding text based on the human writing they’ve ingested.

This introduces another confounding factor into reporting the news. Human journalists already make subjective decisions about how events are described; then someone must decide how those events are further condensed into a concise headline.

Now, tech companies want to interpose themselves into this process with a technology that only approximates the correct thing to say — and we’re already seeing the dumb consequences of it.

More on AI: Schools Using AI to Send Police to Students’ Homes
