This Website Wants to Use AI to Make Models Redundant

 

Deep Agency is an AI photo studio and modelling agency founded by a Dutch developer. For $29 per month, you can get high-quality photos of yourself in a variety of settings, as well as images generated by AI models based on a given prompt. “Hire virtual models and create a virtual twin with an avatar that looks just like you. Elevate your photo game and say goodbye to traditional photo shoots,” the site reads. 

 According to the platform's creator, Danny Postma, the site runs on recent text-to-image AI models, implying something similar to DALL-E 2, and is available anywhere in the world. You can personalise your photos on the platform by selecting the model's pose and writing various descriptions of what you want them to do. In short, this is a website whose stated aim is to make models, photographers, and creatives obsolete.

Postma does state on Twitter that the site is "in open beta" and that "things will break," and using it does feel almost silly, like a glorified version of DALL-E 2 but only with female models. The site also reminds us of AI's limitations, showing how AI-generated images are not only stiff and easy to spot, but also biased in a variety of ways.

So far, the prompt requires you to include "sks female" in it for the model to work, meaning the site only generates images of women unless you purchase a paid subscription, which unlocks three other models, one woman and two men, and allows you to upload your own images to create an "AI twin".

To create an image, you type a prompt, select a pose from the site's existing catalogue of images, and choose from a variety of settings such as "time & weather," "camera," "lens & aperture," "shutterspeed," and "lighting." Most generated images appear to be the same brightly lit female portrait, pictured in front of a very blurred background, suggesting that none of those settings are actually taking effect yet.
When you say "sks female," it generates an image of a blonde white woman, even if you choose an image of a woman of a different race or likeness from the catalogue. If you want to change the model's appearance, you must add additional words denoting race, age, and other demographic characteristics.
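The "sks" token is a common convention from DreamBooth-style fine-tuning, where a rare token is bound to a specific subject during training. As a rough sketch of the workflow described above, the snippet below assembles a prompt around a required trigger token, with optional demographic modifiers appended. The function name and prompt structure are illustrative assumptions, not Deep Agency's actual API.

```python
def build_prompt(action, modifiers=None, trigger="sks female"):
    """Compose a text-to-image prompt around a required trigger token.

    Without explicit demographic modifiers, a DreamBooth-style model
    falls back to whatever appearance dominated its fine-tuning data,
    which is why the default output skews toward a single look.
    """
    parts = [trigger, action]
    if modifiers:
        parts.extend(modifiers)
    return ", ".join(parts)


# Default prompt: only the trigger token steers the subject's appearance.
print(build_prompt("walking on a beach"))
# sks female, walking on a beach

# Appearance must be spelled out explicitly to override the default.
print(build_prompt("walking on a beach", ["middle-aged", "South Asian"]))
# sks female, walking on a beach, middle-aged, South Asian
```

This makes the bias mechanism concrete: the default appearance is baked into the fine-tuned weights behind the trigger token, and the user can only steer away from it by adding descriptors on top.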

When Motherboard chose one of the site's pre-existing images and corresponding prompts of a person of colour wearing a religious headscarf to generate an image based on it, the result was a white woman wearing a fashion headscarf. The DALL-E 2 text-to-image generator from OpenAI has already been shown to have biases baked in. When asked to generate an image of "a flight attendant," for example, the generator only produces images of women, whereas when asked to generate an image of "a CEO," it mostly displays images of white men. 

Though examples like these are common, OpenAI has found it difficult to pinpoint the precise origins of the biases and fix them, despite acknowledging that it is working to improve its system. Deploying a photo studio on top of a biased model will inevitably reproduce the same problems.

This AI model generator is being released at a time when the modelling industry is already under pressure to diversify its models. After massive public backlash, what was once an industry with a single body and image standard has become more open to everyday models, including people cast from the street and from platforms like Instagram and TikTok. Though there is still a long way to go in the world of high fashion representation, people have taken to creating their own style-inclusive content on social media, proving that people prefer the more personable, casual "model" in the form of influencers.

Simon Chambers, director at modelling agency Storm Management, told Motherboard in an email that “AI avatars could also be used instead of models but the caveat here is that compelling imagery needs creativity & emotion, so our take, in the near future, is that AI created talent would work best on basic imagery used for simple reference purposes, rather than for marketing or promoting where a relationship with the customer needs to be established.”

“That said, avatars also represent an opportunity as well-known talent will, at some point, be likely to have their own digital twins which operate in the metaverse or different metaverses. An agency such as Storm would expect to manage the commercial activities of both the real talent and their avatar. This is being actively discussed but at present, it feels like the metaverse sphere needs to develop further before it delivers true value to users and brands and becomes a widespread phenomenon,” he added. Chambers also said their use has implications under the GDPR, the European Union’s data protection law. 

It's difficult to predict what Deep Agency's AI-generated models will be used for, given that models cannot be generated to wear specific logos or hold branded products. When Motherboard attempted to generate an image of a woman eating a hotdog, the hotdog appeared on the woman's head, and she had her finger to her lips, looking pensive.

AI models have been in the works for several years. In 2020, model Sinead Bovell wrote in Vogue that she believes artificial intelligence will soon take over her job. She was referring to the rise of CGI models rather than AI-generated ones: models such as Miquela Sousa, also known as Lil Miquela on Instagram, who has nearly 3 million followers. She has her own character story and has collaborated with brands like Prada and Samsung. Bovell stated that AI models that can walk, talk, and act are the next step after CGI models, citing a company called DataGrid, which created a number of models using generative AI in 2019.

Deep Agency's images, on the other hand, are significantly less three-dimensional, bringing us back to the issue of privacy in AI images. In its Terms and Conditions, Deep Agency claims to use an AI system trained on public datasets. As a result, these images are likely to resemble the likenesses of real women in existing photographs. As Motherboard previously reported, the LAION-5B dataset, which was used to train systems such as DALL-E and Stable Diffusion, included many images of real people, ranging from headshots to medical images, without permission.

Lensa A.I., a viral app that used AI to generate images of people on different backgrounds, has since come under fire for a variety of privacy and copyright violations. Many artists pointed to the LAION-5B dataset, where they discovered their work was used without their knowledge or permission and claimed that the app, which used a model trained on LAION-5B, was thus infringing on their copyright. People complained that the app's images included mangled artist signatures and questioned the app's claims that the images were made from scratch. 

Deep Agency appears to be experiencing a similar issue, with muddled white text appearing in the bottom right corner of many of the images generated by Motherboard. The site claims that users can use the generated photos anywhere and for anything, which appears to be part of its value proposition of being an inexpensive way to create realistic images when many photography websites, such as Getty, charge hundreds of dollars for a single photo.

OpenAI CEO Sam Altman has repeatedly warned about the importance of carefully considering what AI is used for. Last month, Altman tweeted that  “although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones. having time to understand what’s happening, how people want to use these tools, and how society can co-evolve is critical.”

In this case, it's interesting to see how an AI tool actually pushes us backwards, toward a more limited set of models.

Deep Agency creator Danny Postma did not respond to Motherboard's request for comment.