
How Apple’s iPhone 16 Pro Could Change Smartphones Forever


Updated September 1 with new details on the impact of California’s proposed Artificial Intelligence Safety Bill.

At Apple’s Glowtime event scheduled for Monday, September 9, Tim Cook and his team will launch the new iPhone 16 and iPhone 16 Pro family of smartphones. In the process, they will reveal their vision of generative artificial intelligence to the public. But what if Apple made the innovative choice to ignore the AI-powered elephant in the room?

The October 2023 launch of the Pixel 8 and Pixel 8 Pro saw Google christen its flagship Android handsets as “the first AI smartphones.” The smartphone market has followed the direction handed down from Mountain View. Every current smartphone launch features generative AI being used to summarise articles, fashion images where there were none, and create new content from whole cloth.

Where Android has led, Apple is following.

Apple following a trend is not new. Its late arrival to augmented reality is the most recent example, but you can also add features such as wireless charging, third-party app installation, or even the addition of cut and paste to text entry. These have always been spun as being implemented “in a way that only Apple can” and typically come with magical branding such as AirPower or Spatial Video. You can add generative AI to the list, with the awkwardly backronymed Apple Intelligence as the magical brand.

Apple’s approach to this new world of artificial intelligence, magical branding aside, looks remarkably like the offerings from Google and its various Android partners: rewriting text in a different style, summarising text and notifications, and generating new images and videos. There will be subtle differences in implementation, most likely in the UI and presentation, but Apple is following a path that its rivals have been on for months.

Yet the dangers of generative AI are becoming more apparent every day, and as smartphones bring the technology into general use and to a broader audience, those dangers will only grow. Researchers are investigating and highlighting real-world issues; the paper “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data” was written with contributors from Google DeepMind, Google.org, and Jigsaw. Its abstract states that “through this analysis, we illuminate key and novel patterns in misuse during this time period, including potential motivations, strategies, and how attackers leverage and abuse system capabilities across modalities (e.g. image, text, audio, video) in the wild.”

The new AI tools in the Pixel 9 family, soon to become widespread across Android, make it easy to turn ideas into weapons of misinformation. The Verge’s report on this subject (“Google’s AI ‘Reimagine’ tool helped us add wrecks, disasters, and corpses to our photos”) shows what can be achieved in practice. Now add maliciousness, bad actors, and the toxicity of various internet cultures.

This is the path that Apple wants to follow?

I’m explicitly talking about generative artificial intelligence here. Other applications in the world of AI are not as creatively problematic. Machine learning is a subset of AI and can be found in several essential areas of iOS, including the processing of Face ID unlocks, the combination of multiple exposures into a single photo from the camera, smart suggestions in the user’s calendar, and the predictive text dictionary in the keyboard.

Notably, all of these keep their data and processing on the device. They help and support actions on the iPhone within clear and defined boundaries, offering well-defined benefits. Machine learning that searches for faces in your photo library is a long way from generative AI that creates brand-new faces in a crowd scene.
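To make that distinction concrete, here is a minimal sketch of the kind of on-device analysis described above, using Apple’s Vision framework to count faces in an image. The function name is hypothetical and this is not how Photos is actually implemented; the point is simply that this class of machine learning runs locally, with no image data leaving the device.

```swift
import Vision
import CoreGraphics

// Illustrative sketch only: on-device face detection with Apple's Vision framework.
// The request is processed locally; no image data is sent off the device.
func countFaces(in image: CGImage) throws -> Int {
    let request = VNDetectFaceRectanglesRequest()                      // local ML model
    let handler = VNImageRequestHandler(cgImage: image, options: [:])  // wraps the image
    try handler.perform([request])                                     // runs on-device
    return request.results?.count ?? 0                                 // number of faces found
}
```

Nothing here invents content; it only recognises what is already in the photo, which is exactly the boundary the generative tools step over.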

Update, Sunday, September 1: California’s Artificial Intelligence Safety Bill (SB 1047) has passed the State Assembly and the Senate and now awaits the signature of Governor Gavin Newsom. He has until the end of September to sign the bill into law or issue a veto. Vox’s Sigal Samuel, Kelsey Piper, and Dylan Matthews reported this weekend on the significant voices in California’s tech industry who oppose the bill and are hoping Governor Newsom will veto it:

“Lined up against SB 1047 is nearly all of the tech industry, including OpenAI, Facebook, the powerful investors Y Combinator and Andreessen Horowitz, and some academic researchers who fear it threatens open source AI models.”

Other Californian companies believe the bill is a net positive for the industry, while acknowledging the changes that have been made to help it pass:

“Anthropic, another AI heavyweight, lobbied to water down the bill. After many of its proposed amendments were adopted in August, the company said the bill’s ‘benefits likely outweigh its costs.’”

The bill includes clauses that protect whistleblowers who report issues to the state’s Attorney General, require companies spending more than $100 million on AI training to have safety plans in place that would allow their AI models to be shut down if required, mandate third-party audits of these safety practices, and more.

The passage or otherwise of SB 1047 will shape the discussions around the regulation of generative artificial intelligence and other AI models in the months and years ahead. Is this seen as an area that requires regulation, or should the tech industry be allowed to invest in technology no matter what impact it could have on society? This is the space that Apple is moving into with the AI software for the new iPhone models.

Apple prides itself on its focus on individual consumers and the wider community. It has wielded that influence in many areas to do what it believes will benefit its customer base… which, in turn, directly impacts Apple’s financial success. Not everyone agrees with that approach, but Tim Cook and his team have shown a willingness to decide which lines should not be crossed.

Apple may be the last company that can take a moment to stop and think about the impact of generative AI, decide that the dangers are not understood clearly enough, and stand up and say, “Hold on a minute, this is a great idea, but we feel society is balanced on a cliff-edge and generative AI could push it over.”

If Apple wants its artificial intelligence plans to stand out, the right move may be not to play the generative game at all. It could use AI as it does now, with neural nets, machine learning, knowledge graphs, and the like. It could use its soft political power to define the AI game as a field of assistive technologies and clear functions, rather than the tulip bubble of generative prediction engines seen across Silicon Valley.

Tim Cook’s Apple has repeatedly shown that it is willing to isolate and block applications and services it believes can harm users until the issues are addressed. Limiting generative AI on the iPhone 16 and iPhone 16 Pro could give the industry time to reconsider its actions before this digital Pandora’s box can no longer be closed.

Now read the latest iPhone, iPad, and MacBook headlines in Forbes’ weekly Apple news digest…
