Last Monday, January 23, Microsoft announced a new ten-billion-dollar investment in the artificial intelligence (AI) developer OpenAI. In January 2021, OpenAI launched DALL-E (since succeeded by DALL-E 2), a text-to-image generator, and AI-generated art has seen a meteoric rise ever since. But, as with any successful company or product, competitors soon arise, bringing heightened competition, adjusted pricing, greater convenience and accessibility, and enhanced features. Large investments, combined with the mass viewership of art driven by social media trends, have undoubtedly boosted interest in creativity and artwork overall, but it remains essential to strike the right balance between technological innovation and commercialization.
One such OpenAI competitor, launched in August 2022, is Stable Diffusion, a deep-learning, text-to-image diffusion model. Many apps and websites build on Stable Diffusion; one example is the photo-editing app Lensa. For $7.99, Lensa “creates” fifty images with different styles, backgrounds, and features once users upload ten photos of themselves. In December, use of Lensa spread like wildfire: the app had over one million active users as of December 8, 2022, and over four million downloads during the first five days of the month. Social media users have been overwhelmingly pleased with the results, with many saying the portraits look like more attractive or futuristic versions of themselves.
However, there are growing concerns about the ethics surrounding Lensa’s use of Stable Diffusion, which generates realistic images from any text input, including concerns about user privacy and the theft of artwork.
The issue is that Stable Diffusion is open source, meaning any platform (like Lensa) can use it without paying any compensation or licensing fee. The model is trained “on billions of image-and-text combinations scraped from the internet,” and it creates individualized photos for each user by drawing on the patterns it has learned from those billions of collected images. Stable Diffusion’s unfiltered ability to scrape the entire Internet for artwork raises copyright concerns and leaves original artists vulnerable to having their signature styles stolen.
Rather than taking the more direct route of commissioning artists to draw, paint, or otherwise create portraits of them, Lensa users contribute to the “stealing” of artwork as well as artistic styles. Brett Karlan of Stanford University describes this process as a type of “algorithmic monoculture,” comparing the AI generator to “a workshop that churns out copies of Thomas Kinkade paintings, if Thomas Kinkade painted… the worst kind of Caravaggio ripoff.”
While the developer of Lensa has pushed back against these claims, stating that the “software learns to create portraits just as a human would – by learning different artistic styles,” the reality is that AI learns and masters art far faster than any human can. Furthermore, researchers at the University of Maryland and New York University point out in a new study that “it is virtually impossible to verify that any particular image generated by Stable Diffusion is novel and not stolen from the training set.”
Smaller artists, along with prominent game designers and story artists like Jon Lam of Riot Games, who created #CreateDontScrape, worry that “a program taking everyone’s art and then generating concept art is already affecting [their] jobs.” These artists neither consented nor opted in to help train the datasets, yet they have no choice, given that this kind of open-source collection is virtually filterless.
Although ethical concerns about AI art have existed for a while, the explosive nature of social media trends has brought them to the forefront. The issues raised resemble those arising from social media companies’ failure to compensate users for the personal data collected about them. Likewise, users are not compensated for the data they give to apps like Lensa, which use it to continually train and improve their algorithms. As Badgamia explains, “You are not only actually paying for the app, but you are also paying with your data to improve the app’s technology.”
Hopefully, further large-scale adoption of AI-powered image systems will set the stage for industry standards and raise awareness of the ethical concerns around “stealing” artistic styles. Google, for instance, announced DreamFusion, a new AI model that transforms text into 3D models. Shortly afterward, Meta announced a new AI system that turns text into video. Meta’s is open source; Google’s is not. Whether challenges or concerns regarding open-source AI imaging will prompt changes in Meta’s system remains to be seen.
As the capabilities of AI imaging continue to expand and big tech companies launch new systems, it is vital that platform creators and tech industry experts confront the ethics of open sourcing and of AI art in general. Otherwise, any benefit from the commodification of data and private images will come at the expense of originality, hard work, and artist consent.