Ethics and LLMs: Ownership and Intellectual Property

I spend a lot of time in artist and writer communities (because of my side project writing annotated stories to help build readers' English vocabulary /shameless plug), and one of the top concerns in these communities about AI is ownership. There are ethical questions on both sides of this coin:

  • Should people whose writing, art, photographs, and other data were scraped from the web (almost entirely without consent) be compensated for the use of that data to train the models, or be able to opt out if they object?

  • When you use an LLM that was trained on others' data, is what you make truly original, or are you using something that isn't really yours?

Note that in this post, we are discussing this from an ethics perspective, not a legal perspective. The law is starting to settle, but varies by country, and I am not a lawyer. Given that some countries may land on a legal framework that permits scraping and the ownership of AI-generated content, how should we be thinking about it?

The "Original Sin" of AI

Some critics refer to the mass scraping of creative works without permission as the "original sin" of AI: a foundational ethical breach that taints everything that follows.

Many people, especially artists and writers whose work has been scraped without permission, may find any use of generative AI offensive.

I saw this play out recently with a podcast I follow on Instagram. They shared an AI-generated image that was submitted as fan art about an episode. A commenter objected to the post, saying that it was “promoting AI ‘art’” and describing it as “harmful to actual artists.” They used several arguments in their response: “models are trained on art stolen from actual artists” (an ownership concern), “hire a real human artist for this” (invoking a possible replacement concern), and comparing “having tossed the prompt into AI and calling it a day” to the work they are putting in on fan art on the same topic (citing the value of effort and craft).

Although I think the commenter missed the mark in suggesting that this somehow would have otherwise been a paid commission, I am certainly not trying to criticize them. Imagine spending years honing your craft, only to discover that your work has been absorbed into a system that can now approximate your style without your consent or compensation. This example highlights:

  • how upsetting some people find AI art

  • that the “original sin” impacts how people perceive it, and

  • that using AI art in a circumstance where job replacement is not an issue (where hiring someone is not the counterfactual) does not exempt you from criticism online.

Another piece of evidence that ownership and the original sin are at issue, and not just replacement, is the difference in reactions when organizations use AI art compared to free stock photography in similar circumstances. It’s about more than paying artists; it’s about artists having ownership and control over what is done with their work.

Moving from Extraction to Reciprocity

The current AI landscape largely operates on an extractive model—taking creative value without giving back. But mission-driven organizations have an opportunity to pioneer more reciprocal approaches:

1. Choose Your Tools Thoughtfully

When possible, prefer AI systems that:

  • Use training data with clear licensing

  • Have transparent data sourcing policies

  • Include attribution or compensation mechanisms

  • Were developed with creator consent

Some image models train on legally cleared image databases, though the details of that legal clearance vary. Adobe Firefly, for example, trains on public-domain images, images offered under open licenses, and images uploaded to its own stock library, which apparently had consent for training in its end-user license agreement. Another model, Bria.ai, uses only rights-cleared images and videos and offers revenue sharing for creators.

(I am not well-positioned to evaluate their claims, but Generative AI by Getty Images, Shutterstock AI, and Freepik F Lite all make some level of claim to training on images they are allowed to use.)

2. Be Transparent About AI Use

If you use AI-generated content:

  • Clearly label it as AI-assisted or AI-generated

  • Don't claim AI work as human-created

  • Consider explaining why you chose AI over human creators

If you expect pushback when you acknowledge the use of AI for images, consider whether not having an image or video at all is possible. AI images may get less controversial over time, once we sort out the legal side and our collective feelings about the ethics of the different models and use cases, but for now, they may do more harm than good.

3. Reinvest When Possible

If AI saves you money on creative tasks:

  • Consider using some savings to commission human creators for high-value projects: a mural in your lobby or on the side of your building, perhaps, or a donation to develop public art at your local community center

  • Support local creative communities in other ways

  • Advocate for fair compensation models in AI development

Consider working with your city or county’s arts and culture department, the local chapter of an artists’ professional association, or another group of artists attached to the community you serve if you want to make this a cornerstone of your approach.

4. Respect Cultural and Personal Expression

Never use AI to:

  • Replicate specific artists' styles without permission (keep people’s names out of your prompts! A lightweight way to check for this is sketched after this list)

  • Generate content depicting cultural traditions outside your organization's context

  • Create content that could be mistaken for the work of a specific person
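
If your organization scripts its image generation, one lightweight guardrail is to screen prompts against a do-not-imitate list before they reach any model. Here is a minimal sketch in Python; check_prompt and BLOCKED_NAMES are hypothetical names, not part of any real tool, and you would maintain your own list:

```python
import re

# Hypothetical do-not-imitate list: living artists, community members,
# or anyone whose style you have no permission to replicate.
BLOCKED_NAMES = ["Jane Doe", "Hiro Tanaka"]

def check_prompt(prompt: str) -> list[str]:
    """Return any blocked names that appear in the prompt (case-insensitive)."""
    return [
        name for name in BLOCKED_NAMES
        if re.search(re.escape(name), prompt, re.IGNORECASE)
    ]

flagged = check_prompt("a watercolor park scene in the style of Jane Doe")
if flagged:
    print("Revise before generating; prompt names:", ", ".join(flagged))
```

A human review will catch more than a simple string match, of course; the point is to make “no names in prompts” a default practice rather than a memory exercise.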

The Path Forward

There's no perfect answer here. AI can help stretch limited budgets to serve more people. But we can strive to use these tools more ethically than the extractive model they were built on.

The key is intentionality. Before using AI for creative tasks, ask:

  • Are we being transparent about our use of AI?

  • Could this use harm creatives or creative communities?

  • Is there a way to use AI that supports rather than supplants human creators?

  • Do our AI practices align with our organizational values?

The "original sin" of AI may be baked into current systems, but that doesn't mean we have to use extractive practices. By approaching AI use thoughtfully, mission-driven organizations can work toward a future where technology enhances rather than exploits human creativity.

What's your organization doing to navigate these ethical waters? I'd love to hear about approaches that balance mission, budget, and values.

LLM disclosure: I fed some notes I had about this post to Claude Opus 4 within a project that had my current book draft in it. I used this prompt:
“can you help me write a blog post on how Ownership and Copyright plays out with LLMs, targeted to mission-driven organizations? It should mention that some people talk about training LLMs on scraped content without permission as the "original sin" of AI. Here's the draft I have so far. you can see my other posts at drkarenboyd.com/blog to check for tone”

I had forgotten that it could see the current draft, but Claude started its response with “Looking at your draft and the relevant chapter from your book...”, so it clearly used that content, too.

I added a lot to this post, particularly around the “original sin” point. I think I overdid it a little, frankly, but it really helped me think through the value of ownership separately from replacement. I wonder if it helped you, too, or if I am just not done thinking it through yet! If the latter, I’ll be sure to write an update post!
