I Make AI Fashion Models to Sell Real People Clothes

Last spring, the clothing brand Levi Strauss & Co. announced plans to introduce “customized AI-generated models” into its online shopping platforms. These “body-inclusive avatars” would come in a range of sizes, ages, and skin tones and would help Levi’s create a more “diverse” lineup in a way the company considered “sustainable.” A lot of (real) people were appalled. Why not give those jobs to actual humans of the sizes, ages, and skin tones Levi’s sought? Was “sustainable” just PR-speak for “cheaper”? Levi’s later affirmed its “commitment to support multicultural creatives behind and in front of the camera.” But it didn’t bail on the partnership with the Amsterdam-based company that created the models, Lalaland.ai. (It’s just on pause until Levi’s can formulate internal AI guidelines.)

That controversy put Lalaland on the map—and got more big brands looking to it for generated models, says Duy Vo, Lalaland’s creative director. WIRED sat down with him to find out how you get an algorithm to smile just right—and not sprout extra fingers.

The first step to creating the models is research. I see what kind of models are walking on the catwalk. I follow the latest trends in ecommerce. I find patterns, like what kind of faces are hot this season. In some ways, the work I do now is similar to my old job as a fashion photographer for big magazines such as Vogue and Harper’s Bazaar. I ask clients what kind of collection they want, what kind of model they see. They may say something broad, like they want an aesthetic from a Quentin Tarantino movie. I pull looks and stills and data from that imagery. And then we send that to the machine-learning team and, basically, we create a new persona on demand.

We start by making 3D models of a body. On top of that, we use generative AI to create identities that clients want to showcase, with different ethnicities, hair colors, and looks. You can add freckles, slightly alter smiles, and add nail polish—all the finishing touches on a model. With AI, I’m the photographer, the hair stylist, and the makeup artist all in one. And then I might have to modify the design based on client feedback with more prompting or using Photoshop. Just fixing something like a hairstyle—and making sure it still works in all the poses—can take days.

Then we “dress” the models. Many of our clients already use 3D software to design clothes, so that’s easy: We just import those files and render them onto our models. But not all brands are designing in 3D. In those cases, we collect garments from the brands and we send them off to a partner that can digitize them. They re-create the patterns, the fabrics, texture, and all that.

AI can hallucinate. We’ve seen horror shows: models with three heads, or a head attached at the knee. Hands and feet are still difficult to get right; they show up with too many fingers or too many toes. You have to go back and have the AI try again. My role is to curate and to guide the system to create good-looking people, to filter out all the bad stuff.

The salary for a role like this would be comparable to a tech job in the US at around $100,000 or $120,000. Salaries are a little different here in Amsterdam, which makes it hard to compare to Silicon Valley. You don’t need to know how to code to do this job. You need to know what the technology is able to do, but you also need to understand fashion and fashion history, and have a good instinct. Anyone from the traditional fashion space could transition into this within a few weeks or months.

It’s still hard to make a whole ad campaign with AI. The fashion is so specific, and you need to replicate it exactly. Add in other factors like lighting, and making it all look good is difficult. You will still want traditional image makers creating beautiful photos. AI is more like a tool to create images for commerce. But if you can communicate your message through synthetic images, why wouldn’t you?

— As told to Amanda Hoover
