Thursday, September 19, 2024

The Pixel 9 Pro XL showed me the future of AI photography

I played a new kind of AR Pac-Man game yesterday, guiding a circle on a phone screen toward a series of little dots that seemed to hover around a conference room. In this version, you don’t level up when you finish; instead, you get a panoramic photo.

I got to spend a couple of hours using the Pixel 9 Pro XL on the eve of Google’s annual hardware event. Becca got to play with the new phones, too, and you can check out her impressions in the video below. And while my time was focused on the 9 Pro XL, it shares all of its camera features and hardware with the smaller Pixel 9 Pro. The standard Pixel 9 comes with a lot of the same capabilities — including the new panorama mode — though not all of them.

The updated panorama interface I played with is the result of a complete overhaul of the Pixel camera’s panorama mode. Pixel camera PM Isaac Reynolds says his team “literally deleted every line of code and started at zero.” The new panorama mode incorporates processing advancements like Night Sight that the original mode didn’t offer. And the new UI is a little more delightful than the old one, which always gave me anxiety when it scolded me for moving too fast or tilting the camera too far off-axis.

An updated panorama mode isn't even the highlight among the Pixel 9's noteworthy new photo features; it probably doesn't make the top five. There's the new Add Me camera feature, which helps put the photographer into a group shot by blending two separate photos.

Magic Editor in Google Photos now comes with an option to “reimagine” parts of your photo with nothing more than a tap of the screen and a text prompt. You can also record 4K video, shoot it up to the cloud, and get an 8K clip back courtesy of AI-fueled upscaling. It all makes an updated panorama UI feel like small potatoes.

But potatoes of all sizes matter, especially when it’s the camera you carry every day. Consider, for a moment, sharpening. As Reynolds explains to me, the Pixel camera’s HDR Plus processing pipeline has gone through something of an overhaul this year. The changes center on how sharpening is applied and how the camera handles edges between light and dark elements in the image. This is most evident in faces, he says.

30x zoom on the Pixel 8 Pro (left) versus the Pixel 9 Pro XL (right).

I took a handful of comparison photos between the Pixel 8 Pro and the 9 Pro XL on a short excursion in Palo Alto, and I’ll be honest — I’m having a hard time seeing the difference in most of my shots, faces or no. I’ll do a lot more testing under different conditions when I get more time with the Pixel 9, but the changes in sharpening might be the kind of thing you have to go looking for. On the other hand, I definitely see a difference in how the two cameras handle 30x Super Res Zoom — details are noticeably clearer in the Pixel 9 Pro XL’s shot above.

I also briefly tested Video Boost. It's meant to make videos in low light more appealing, but it was kind of underwhelming the last time I tried it. With this update, Video Boost works while zooming, enhancing detail and smoothing out transitions. Processing is also twice as fast once a clip is uploaded, an improvement Reynolds attributes to using TPUs in the cloud rather than CPUs.

Most importantly, the results look good enough that I’d actually want to use it again. Details in the boosted video I took of that distant tower look much better than in the original footage, and the jerky transitions as I switched lenses are much smoother.

There are no ghosts in the panorama Pac-Man game, but you can see a ghost of yourself when you use Add Me, and boy, is it weird. It uses a similar kind of onscreen UI to guide you through taking a photo, handing off the camera to someone in the shot so you can trade places, and taking another shot by lining up the ghostly image of people in the first frame with the new subject. The resulting image blends the frames together so it looks like everyone was in the photo at the same time.

AI did a pretty seamless job of adding me to this scene, but things get a little wonky around my hair and pants hem.

Add Me is very convincing at first glance; you only notice the signs of AI tinkering when you look closely around the edges of the subject who’s been added to the frame. It’s easy enough to use, and asking your friend to use it is probably less awkward than asking a stranger for a group shot. At the very least, it’s a thoroughly manual process that doesn’t seem like something an evil-doer could easily manipulate to sow chaos in an election year. Probably.

I played around with a few other AI features, including an auto re-framing tool in Magic Editor, which suggested I re-compose an image of a walking path to focus on a solitary mile marker pole. It might be an improvement, honestly, except for the weirdly long shadow generative AI added in the frame. And there’s Reimagine, which lets you use text prompts to replace elements of an image using gen AI. I want to use it a lot more on my own photos before I draw any solid conclusions, but for better or worse, it seems highly capable — a real what-is-a-photo-pocalypse waiting to happen.

There’s a lot going on under that weird little camera oval.
Photo: Allison Johnson / The Verge

That’s the funny thing — the Pixel camera is a powerful tool whose makers take extraordinary care over how sharply it renders foliage and how easy it is to shoot a panorama. And sitting right next to that camera pipeline is a whole new set of AI tools designed to help you recompose, upscale, or prompt your way to an ideal image — not the one you took, but the one you imagined.

Reynolds isn’t bothered by this reality. Most of the generative AI features on the Pixel 9 Pro are editing features in Google Photos, and even as the technology behind them bleeds into the camera app, he thinks offering the right controls matters more than anything. “Whether it’s in camera or in Photos, either way, you get to make the decision.” More than the placement of the feature, how it behaves for the user is the important thing. “Whether it’s sticky or not matters… whether it’s reversible, I think, matters more than whether it’s in camera or in Photos. So I don’t see an enormous difference between the two.”

I’m not sure how ready I am to reimagine the sky in my photo. But as the images we take and the images we see online increasingly lean on AI in some way, it’s going to get a little messy — and there’s no arcade-inspired interface to help us connect the dots.
