This $1 billion AI chatbot has been accused of stealing content and lying

A hot potato: Nvidia CEO Jensen Huang’s statement that he uses Perplexity almost every day is certainly a strong endorsement. However, recent allegations against the AI chatbot may cause some people to reconsider its use. Critics accuse it of dishonesty and theft, with one publication even threatening legal action for copyright infringement.

There are numerous AI chatbots now available, allowing users to pick and choose their favorites based on preferences and the perceived advantages of each service. One feature that users of Perplexity, an AI chatbot that markets itself as a conversational search engine, appear to like is that it annotates its answers with links to the articles that provided the information. No harm in that, right? One might also imagine that the links offer some assurance against the hallucinations that AI has been unable to stamp out. Actually, both of those suppositions are wrong, several publications charge.

Earlier this month, Forbes published an article accusing the chatbot of content theft. The article claims that a new tool developed by Perplexity allows it to quickly rewrite Forbes’ articles, using “eerily similar wording and even lifting some fragments entirely.” The post looked and read like a piece of journalism but didn’t mention Forbes at all, the publication stated, “other than a line at the bottom of every few paragraphs that mentioned ‘sources,’ and a very small icon that looked to be the ‘F’ from the Forbes logo – if you squinted.”

Wired followed up with its own article, titled “Perplexity is a Bullshit Machine,” finding that the chatbot is not only surreptitiously scraping content but also “making things up out of thin air.”

It is quite the turnaround for a company that recently raised about $63 million in a new funding round that values it at more than $1 billion, doubling its valuation from three months prior. It has attracted a lot of fans in a short period of time, including Nvidia CEO Jensen Huang, who said he uses the product “almost every day.”

According to Wired, though, Perplexity appears to be breaking a sacrosanct rule of the internet: it generates its results by ignoring a widely accepted web standard known as the Robots Exclusion Protocol and scraping areas of websites that operators do not want accessed by bots. Wired said it observed a machine tied to Perplexity doing this on its site and across other Condé Nast publications.
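
For readers unfamiliar with the term, the Robots Exclusion Protocol is simply a plain-text robots.txt file at the root of a website that lists which paths automated crawlers may and may not fetch. Compliance is voluntary, which is why ignoring it is treated as a breach of etiquette rather than the defeat of a technical barrier. The sketch below shows how a well-behaved crawler consults the file before fetching a page; it uses Python's standard library, and the site address and bot name are hypothetical placeholders, not anything tied to Perplexity or Condé Nast.

    from urllib.robotparser import RobotFileParser

    # Hypothetical site and crawler name, used purely for illustration.
    SITE = "https://www.example.com"
    USER_AGENT = "ExampleBot"

    # Download and parse the site's robots.txt file.
    parser = RobotFileParser()
    parser.set_url(SITE + "/robots.txt")
    parser.read()

    # A compliant crawler asks permission for each URL and skips anything
    # the site operator has disallowed for its user agent.
    url = SITE + "/members-only/article.html"
    if parser.can_fetch(USER_AGENT, url):
        print("Allowed to crawl", url)
    else:
        print("robots.txt disallows", url, "- a compliant bot stops here")

Wired's allegation, in effect, is that Perplexity's scraper skips this check for pages that publishers have explicitly disallowed.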

Wired also accused Perplexity of summarizing stories inaccurately and with minimal attribution, citing one instance in which the text it generated falsely claimed Wired had reported that a specific police officer in California had committed a crime.

Forbes’ complaints about Perplexity’s problems with attribution go even further. After it purloined Forbes’ content, the publication said, it sent the knockoff story to its subscribers via a mobile push notification. It also created an AI-generated podcast on the story without any credit to Forbes, which then became a YouTube video “that outranks all Forbes content on this topic within Google search.”

There are legal and ethical battles underway over AI’s right to use online content, but Perplexity’s alleged actions put its legal risk in a category of its own. Forbes has already sent Perplexity a letter threatening legal action over “willful infringement” of its copyrights.
