Ethics and LLMs: New Series!

Large Language Models are technically impressive. They can create brand-new content, functionally instantly. But are they ethical?

In this series, we will review values involved in LLM use and how you can think about them in your work.

Why should you listen to me?

Well, first of all, I am a little scared about AI, and I think you should think twice before trusting the AI opinions of anyone who isn’t!

But I also have some bona fides. I earned a PhD from the School of Information at the University of Maryland, College Park, where I studied values in the development of machine learning systems. I published articles about AI and the future of work, the ethics of building AI, and more. Over 300 other articles have cited my work, so some of my peers think it’s useful :) Since finishing my postdoc at the University of Michigan, where I studied emotion recognition designed for use in the workplace (TL;DR: YIKES!), I’ve been working in non-profits doing research for mission-driven organizations, focusing on workforce development and labor economics.

When I defended my dissertation in 2020, Large Language Models like ChatGPT, Gemini, and Claude were nothing like they are today. The chatbots would wander off in the middle of their answers, and we still called the image generation models “machine dreaming”: their outputs were full of repetitions and strange artifacts. They were not very good at following directions yet. “Trippy” was a good way to describe the entire experience. But we were riveted anyway. I still remember putting one of the early DALL-E papers up on the TV and marveling at it.

An example query and its results from an early version of DALL-E that blew our minds (even though it wasn’t very good). It comes from the paper “A very preliminary analysis of DALL-E 2” by Marcus, Davis, and Aaronson (2022).

Today, LLMs are everywhere. They have us looking a little longer at social media posts, deciding whether the author is a bot. They’ve passed the Turing Test. They are writing our emails, drafting grant applications, and making deep-fake porn.

The quick rise and proliferation make some people more than a little bit uncomfortable. Me, too.

So for as long as it takes, I’ll be writing blog posts on how different values are related to LLM use so that we can think through our choices together. I hope that you will reflect on these posts and come out the other side a little more confident about how you do (and don’t) choose to use LLMs for your work.

I plan to share my thoughts about:

  • Privacy and security

  • Authenticity

  • Ownership and intellectual property

  • Replacement and deskilling

  • Quality and errors

  • Environmental impact

  • Bias and bias in errors

  • Transparency and accountability

  • Misuse and safeguarding

  • Accessibility and inclusion

And anything else that YOU want to hear about!

These posts will be drafts of thoughts; I am more than open to discussion. These technologies and their capabilities change weekly right now, and I have decided I would rather be proven wrong than not have this conversation at all.

The first post, about privacy and security for LLMs in mission-driven organizations, arrives Thursday! If you’d like to get this series and future posts directly in your inbox, sign up for the newsletter here!

Thank you for joining me!

LLM disclosure:

Here’s the prompt I used:
“I'd like to write a series of blog posts about ethics implicated in AI and how they should impact whether, when, and how people use AI in their work in mission driven organizations. Can you draft me the introductory post about this series? Here's what I think it should contain: What are LLMs? What are the ethical issues, and what are the implications for using them in your work? - [list of ethical issues for which I can write individual blog posts. ]”

The strategy I tried here was to write, on my own, a list of values/ethical issues that I think are important, but NOT include them in the prompt. This lets me see what the LLM would say without being anchored by my ideas. It did include some ideas that weren’t on my list! Not things that I wasn’t aware of, but things that weren’t on my mind when I wrote my quick draft. This allows me to write a more complete post, more quickly.

There were some ideas that were on my list that it didn’t mention, which is exactly why you should take a shot at writing your own list before running the query: It’s very easy to look at an existing list and say “looks complete to me!”
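If you like to script this kind of thing, here’s a minimal sketch of that workflow in Python. To be clear, this is an illustration, not what I actually ran: the client, model name, prompt, and list items below are placeholders I’m assuming for the example.

```python
# A minimal sketch of the "write your own list first" workflow.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment;
# the model name and prompt are placeholders, not what I actually used.
from openai import OpenAI

# Step 1: write my own list BEFORE querying, so the model can't anchor me.
my_list = {
    "privacy and security", "authenticity", "ownership",
    "replacement and deskilling", "quality and errors",
    "environmental impact", "bias", "transparency and accountability",
    "misuse and safeguarding", "accessibility and inclusion",
}

# Step 2: query the model WITHOUT including my list in the prompt.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "List the ethical issues implicated in using LLMs in "
            "mission-driven organizations, one short phrase per line."
        ),
    }],
)
model_list = {
    line.strip("-•* ").lower()
    for line in response.choices[0].message.content.splitlines()
    if line.strip()
}

# Step 3: compare. Exact string matching is crude, so treat these
# diffs as a starting point for an eyeball comparison, not a verdict.
print("Model raised, I didn't:", sorted(model_list - my_list))
print("I raised, model didn't:", sorted(my_list - model_list))
```

The whole point is the ordering: my_list is fixed before the API call ever happens, so the comparison at the end actually tells you something about what each of you would have missed alone.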

I didn’t end up using any text from the LLM output in this blog post; I even renamed the values it added because I didn’t think they were quite right. But that doesn’t mean the query wasn’t helpful!
