Olu Online

Is using Lensa art theft? | The ethics of AI generated art

Lensa is, in their own words, "an all-in-one image editing app that takes your photos to the next level". I haven't explored any of their other features besides the one that's taking the internet by storm at the moment: their "magic avatar" packs.

It uses Stable Diffusion, an AI image generation model, to produce stylised avatars from 10-20 selfies uploaded by the user, for a fee. I've mostly been seeing the trend on Instagram, where both the adoption and the backlash were large and fast.

It's easy to see the appeal to consumers: for a lot of people the avatars are interesting to look at, cheap, and quick to generate. The backlash mainly focused on the fact that artists did not consent to, were not compensated for, and are not credited for the use of their work: that using Lensa was art theft.

I have a lot to say about the ways in which Lensa's training data must have included few BPOC, few fat people, and few non-sexualised images of people with breasts. But the media has already explored at length how the biases inherent in the training data sets come through in the produced images, and the dangerous and offensive results the app can produce, so I won't explore that here today.

There were also concerns raised about feeding images into these apps. Some said that these images could be used to train better facial recognition AI (particularly in the realm of policing), be used for advertising, or be sold back to the users for a profit.

Stable Diffusion, the AI model doing the heavy lifting, is trained using a data set of over 2 billion images (with a b!) scraped from the internet, along with smaller subsets of that set. No one consented to be in that image set, and anyone who has ever posted anything online could have content in it. It is not possible to request that you, your art, your selfies, or whatever else happens to be in the set, be removed.

The ethical ambiguity of scraping the web for content for small projects has always been presented to me as an issue between the creator of the website (especially if it was a large company) and the developer creating the scraper. Guides online list "ethical scraping" guidelines: make sure you scrape in a way that doesn't accidentally become a DDoS attack; make sure you use an API where possible; make sure what you're trying to do is public on the website; and finally, the thing that was always pressed on me as supremely important: make sure what you're doing doesn't break any existing laws.
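To make that concrete, here's a rough sketch of what a "polite" scraper following those guidelines might look like. The target site (example.com), the paths, and the user agent string are all placeholders, nothing to do with Lensa or Stability:

```python
# A minimal sketch of a "polite" scraper: it checks robots.txt first
# and rate-limits itself so it never resembles a DDoS. The site, paths
# and user agent are placeholders.
import time
import urllib.robotparser
import urllib.request

BASE = "https://example.com"
USER_AGENT = "my-small-research-bot/0.1"

robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()  # fetch and parse the site's robots.txt once, up front

def fetch(path):
    url = BASE + path
    if not robots.can_fetch(USER_AGENT, url):
        return None  # the site has asked crawlers to stay away from this path
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        return response.read()

for path in ["/gallery/1", "/gallery/2"]:
    page = fetch(path)
    time.sleep(2)  # a couple of seconds between requests, not a flood
```

Notice that every single one of those courtesies is voluntary; nothing in the protocol forces a scraper to behave this way.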

Assuming that because something is legal it is right is pretty bullshit in general - many, many, many terrible things were and are legal - but it's particularly bullshit when it comes to internet norms. Law is famously slow, and our moral systems can't simply track the legal system if we want to change those systems, or even "move fast and break things". Zuckerberg and other Big Tech leaders shrug, say YOLO and assume the law will never catch up with them, or that they can shape laws in their favour through lobbying and other pay-to-play methods.

I'm digressing, but my issue with scraping is that it is a pervasive and normal part of web development. Many people don't consider that when they put anything on the internet, be that a simple HTML/CSS site or a post on a big conglomerate's platform, scrapers are scraping and crawlers are crawling that content unless they've specifically configured robots.txt and no-index rules to prevent it.
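For completeness, the site-owner side of that is tiny: a couple of lines at the root of your site that well-behaved crawlers check before fetching anything, and that badly-behaved ones are free to ignore.

```
# robots.txt, served at https://your-site.example/robots.txt
# Ask all crawlers to stay out of everything:
User-agent: *
Disallow: /
```

The per-page equivalent is a no-index rule such as <meta name="robots" content="noindex"> in the page's head. Neither mechanism is enforced by anything other than the scraper's good behaviour.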

Criticism of scraping online centres on public-ish data sets. OkCupid, a dating site, was scraped a while ago for someone's academic research, and that was pretty vigorously frowned upon. I'm unsure if a change in norms is desirable; I think it would be better to have people opt in rather than explicitly opt out of having their data scraped, but I assume the argument is that this would slow the wheels of technological invention and innovative business. My answer to that is: do we value speed over direction? Over where we end up?

To circle back a bit, DeviantArt launched an AI project and opted all users' art into it - retroactively, and in a handful of cases posthumously. They then reversed this decision and opted all users out by default, due to backlash. Even with the U-turn, an opt-out in this form is useless. Under the hood it's a tag on their posts, and it doesn't prevent future scraping unless a developer's scraper both looks for and honours the tag. The DeviantArt AI project is based on Stable Diffusion, the same as Lensa, so the same problems of consent apply to the initial dataset.
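If the tag works roughly like a "noai" robots directive in each page's HTML - an assumption on my part about the exact mechanism, not something I've verified against DeviantArt's implementation - then respecting it requires the scraper's developer to write code that looks for it on purpose, something like:

```python
# A sketch of a scraper voluntarily honouring an opt-out directive.
# Assumes the opt-out is exposed as a "noai"-style robots meta tag;
# the exact mechanism a given site uses may differ.
from html.parser import HTMLParser

class NoAIDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.opted_out = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "robots":
            directives = (attrs.get("content") or "").lower()
            if "noai" in directives or "noimageai" in directives:
                self.opted_out = True

def may_use_for_training(page_html):
    detector = NoAIDetector()
    detector.feed(page_html)
    return not detector.opted_out

# The point: nothing forces a scraper to run this check at all.
```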

On credit

I think credit without consent wouldn't feel good, but it would be better. People could request removal from training sets, or at least trace which sets their work is included in. People happy to share their images could capitalise on their inclusion in a set, or point to their influence on a piece. Sadly, provenance is often pretty difficult to determine; some services, like Midjourney, barely have any information about the source of their training set online at all.

Even in the case of Stable Diffusion, it was a separate company called Spawning that created a tool for searching the training set, aimed at artists. I used it and found two versions of an old selfie. Spawning have partnered with Stability AI, the company behind Stable Diffusion, to give artists "a few weeks" before v3 is released to opt out. Seeing as you opt out one image at a time and the majority of artists won't have heard of this tool, I think it's a pretty weak response. I don't know if lowly selfie creators will be able to opt out using the tool either.

An examination of 12 million of the images used to create the training set was interesting. A large share came from a small number of sites:

Nearly half of the images, about 47%, were sourced from 100 domains, with the largest number of images coming from Pinterest. – Exploring 12 Million of the 2.3 Billion Images Used to Train Stable Diffusion’s Image Generator by Andy Baio

Many methods for creating these models don't (and to be honest can't) attach the name, website and other details of every image and piece of text used to create a new image to every step of the process. Metadata - by which I mean tags and labels potentially including the site the picture came from, who created it, what the picture is of, the date it was collected, and so on - is included with the training data, and more is created about different parts of the process at every step. Preserving it in a way that lets someone make sense of how it affected the finished piece of media is not a consideration of current AI image generation models.
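To make that concrete, a single record in a scraped image-text training set carries roughly this kind of metadata. The field names and values below are illustrative, not the exact schema of any particular set:

```python
# An illustrative (not exact) record from a scraped image-text training set.
training_sample = {
    "url": "https://example.com/artworks/sunset.jpg",  # where the image was found
    "caption": "a watercolour painting of a sunset over the sea",
    "width": 1024,
    "height": 768,
    "source_domain": "example.com",
    "collected_at": "2021-11-03",
    # Conspicuously fragile or absent: the artist's name, the licence,
    # and any link back to them that survives into the trained model.
}
```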

I'm very interested in the idea of explainable AI, which focuses on methods that would allow experts on a system to explain every step an algorithm took along the way, but I recognise that such systems would probably be very different to the ones being created today.

On compensation

No one is compensated for being in the training data, one argument being: how do you pay billions of contributors? I get the AI companies' logic, even if I do not agree with it, after reading more about how AI modelling works. Contrary to some of the popular infographic explainers out there, the AI is not creating a composite of images, but rather using its training to go from noise - no discernible image, fuzz, the TV tuned to nothing - to something it deems to match a series of prompts and themes.
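A very rough sketch of that process is below. The two stub functions are placeholders for the real learned components (a text encoder and a denoising network), so this is the shape of the idea rather than Stable Diffusion's actual code:

```python
# Toy sketch of diffusion-style generation: start from noise and
# repeatedly "denoise" towards something that matches the prompt.
import numpy as np

def encode_prompt(prompt):
    # Stand-in: a real model turns text into a learned embedding.
    rng = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
    return rng.standard_normal(8)

def denoise_step(image, embedding, t):
    # Stand-in: a real model predicts the noise to remove at step t,
    # using weights shaped by its training data.
    target = np.tanh(embedding.mean())
    return image + 0.05 * (target - image)

def generate(prompt, steps=50, size=(64, 64)):
    image = np.random.standard_normal((*size, 3))  # pure noise to begin with
    embedding = encode_prompt(prompt)
    for t in reversed(range(steps)):
        image = denoise_step(image, embedding, t)
    return image  # no source image is ever copied in along the way
```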

If it were possible to deduce how much influence each individual image has on the final outcome (and the owner of each image were known and labelled, which I currently doubt happens), would it be simple to compensate people then? At the scale and in the way the industry currently works, where a training set is created without consent and then thrust upon consumers, I think you'd basically end up with a Spotify of art, where each artist earns a small fraction of a penny with every image generated.
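Some back-of-envelope maths with entirely hypothetical numbers - a few dollars for an avatar pack, split naively and evenly across the roughly 2.3 billion images in the training set - shows just how small that fraction gets:

```python
# Hypothetical, naive even split of one purchase across the training set.
pack_price_usd = 4.00                    # made-up price for an avatar pack
images_in_training_set = 2_300_000_000   # roughly the size of the scraped set

per_image_payout = pack_price_usd / images_in_training_set
print(f"{per_image_payout:.12f} dollars per image, per purchase")
# ~0.0000000017 dollars: an image would need to sit behind millions of
# purchases before it earned its creator a single cent.
```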

Spotify and other streaming platforms like Tidal, Amazon Music and Apple Music pay a tiny amount per individual play of a track, compared to what an artist would receive if they sold a piece of music directly through something like Bandcamp or through physical sales.

I've been racking my brain thinking of other models, and I suppose a more equitable solution could be an upfront payment for inclusion in a training set, as well as royalties on each image created using your image. This all relies on an individual image's usefulness, and the number of times it is used, being easily trackable within a data set, and to the best of my understanding it is not with our current machine learning technology.

On cops

A few people told me not to use Lensa because the image recognition software being developed could actively be used to improve the algorithms behind police surveillance technology. I can't find any direct link beyond the concept of transfer learning, where training an AI for one task can be used to train it for another, and there are cheaper and easier ways to get loads of selfies of people, especially given all the scraping we talked about earlier.

On consuming

It's very telling that the majority of the criticism of this app has focused on what we as consumers can do, and why we should or shouldn't use Lensa because we're giving up our faces or giving the company money. I've seen a lot of focus on the terms of service and the fact that people don't read them. Given that reading the terms of service for all the various services we use every day would be an arduous task (especially if keeping up with all the updates!), I feel it's also a disingenuous criticism. We have to move beyond consumerism being our major lever of change. In areas like fast fashion and physical tech, the wheels will continue to turn and dump things that don't sell into landfill; in the case of software, companies will simply keep looking for a new venue for their hype.

So... What can we do about all this?

Mass litigation of these kinds of companies would be a start towards better laws, and punishing mass scrapers that don't respect consent is key to a better internet and better technology in general. The GitHub Copilot case could make the law less of a soup but will probably take quite a while to move through the courts.

A lot of artists in threatened spaces are pragmatic as well as angry, trying to use lobbying to change existing laws and create new ones or moving into other fields where things like concept art are already being affected. I definitely think that unionised efforts, be it from people in tech or groups of artists, will be what changes things for the better for many people.

Is it art theft?

Despite all this, I don't really feel qualified to answer the question: what is theft? I definitely think that the laws need to improve, the consent norms need to change, and that there's loads we can do to make things fairer for artists.