AI: What is it? And should I be using it?
These days, when people say "AI," they're often thinking about machine learning or specifically Large Language Models (LLMs) like ChatGPT and Claude, which have exploded in capability and popularity over the last 5 years. But AI is much more than that!
Artificial Intelligence (AI) refers to technology (built, not bred) that (seems to) learn, think, or act independently. It's a massive field with a rich history—from approaches like "let's teach computers every single thing, one at a time" to ambitious projects aiming to build entire artificial brains from scratch. (I think we’ve been working on a nematode brain since the 80s. Maybe this is the year!) If you're curious about the big picture, Nick Bostrom's "Superintelligence" offers a great overview.
Key to note is that general artificial intelligence, in which machines are as smart as or smarter than us across all or close to all domains (think Skynet or C-3PO), does not currently exist: what we have is narrow AI. However, many people developing narrow AI are attempting to build general AI, including the developers of ChatGPT. The boss over at Anthropic (which makes Claude) recently said he thinks we will get there around 2027.
One method of building AI that has suddenly become very useful in the last 10 or so years is machine learning (ML). It's a method where algorithms find patterns in massive datasets. Unlike traditional coding, where you need specific instructions for every scenario, machine learning picks up patterns more like humans do.
Think about how you learned what a dog is. Nobody gave you a checklist of "four legs + tail + barks = dog." You saw lots of examples of dogs, heard adults nearby refer to them using the word “dog,” and, over time, learned what a dog is (and is not). That's why you can recognize a three-legged dog—who doesn’t match the four-legs-and-a-tail criterion—or even cartoon dogs without getting confused.
Machine learning works similarly, just with millions of examples instead of dozens. These systems can detect incredibly subtle patterns, but they're limited to what they've learned. They can "imagine" by combining familiar elements in new ways, but they can't truly create something completely novel or make predictions about totally unfamiliar situations. And they definitely won't question whether the task you gave them is ethical or optimal.
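If you like seeing ideas in code, here's a minimal toy sketch of that "learn from labeled examples instead of hand-written rules" idea, using the scikit-learn library. The features, data, and dog/not-dog labels are all invented for illustration; real systems learn from millions of examples, not five.

```python
# A toy sketch of learning from examples instead of writing rules.
# Everything here (features, data, labels) is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example: [number of legs, has a tail (1/0), barks (1/0)]
examples = [
    [4, 1, 1],  # typical dog
    [3, 1, 1],  # three-legged dog
    [4, 1, 0],  # cat
    [2, 0, 0],  # human
    [4, 0, 1],  # bobtailed dog
]
labels = ["dog", "dog", "not dog", "not dog", "dog"]

# No "four legs + tail + barks = dog" rule anywhere: the model
# finds its own pattern in the labeled examples.
model = DecisionTreeClassifier()
model.fit(examples, labels)

print(model.predict([[4, 1, 1]]))  # -> ['dog'] in this toy setup
```

Notice that the model's behavior depends entirely on the examples it saw, which is the point of the dog analogy above.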
Large language models (like ChatGPT, Gemini, and Claude) are a relatively recent application of machine learning. Where other types of ML have tasks like “classify [these images as different types of animals]” or “cluster [these accounts into similar customer types],” LLMs predict what the response would be if a human (or, more specifically, the authors of the content on the internet that they were trained on) were replying to your request.
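Here's an equally toy-sized sketch of that "predict what text would come next" idea. This is nothing like how LLMs are actually built (they use neural networks, not simple word counts, and train on vast amounts of text), but it shows the flavor: look at existing text and guess the most likely continuation.

```python
# A drastically oversimplified sketch of "predict the next word from text
# you've seen." The tiny corpus is invented for illustration.
from collections import Counter

tiny_corpus = [
    "thank you for your help",
    "thank you for your time",
    "thank you for your patience",
    "thank you for the update",
]

# Count which word tends to follow "thank you for your" in the text we've seen.
prompt = ["thank", "you", "for", "your"]
next_words = Counter()
for sentence in tiny_corpus:
    words = sentence.split()
    for i in range(len(words) - len(prompt)):
        if words[i:i + len(prompt)] == prompt:
            next_words[words[i + len(prompt)]] += 1

print(next_words.most_common())  # [('help', 1), ('time', 1), ('patience', 1)]
```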
The rise of machine learning systems seems sudden, but the building blocks were in place in the 1940s! My dad (and maybe yours) was building machine learning algorithms on punch cards in the 70s. But the computing hardware has only recently reached a level where ML models can be useful for those of us who don’t have building-sized supercomputers hanging around.
Recent breakthroughs in chatbots, self-driving cars, image recognition, and AI art have been astonishingly quick and disruptive. You might be wondering if it's okay to use AI for work tasks, or whether you should avoid it altogether.
Here's one thing I can tell you right now: completely avoiding AI probably isn't possible anymore.
Do you use spell check? Google search? Netflix recommendations? Social media? Use email with spam filters or text prediction? Congratulations—you're already using machine learning! It's embedded in all these products and countless others. Even if you stuck to paper mail, the US Postal Service uses AI to sort it.
It's also getting harder to compete without AI when everyone else is using it to boost their productivity—whether that's other organizations vying for the same funding or colleagues who can accomplish more in the same workday.
Plus, some AI applications are genuinely helpful, and even beautiful. Apps that help blind people identify objects, systems detecting cancer in medical images, real-time translation breaking down language barriers, and accelerated drug discovery are all meaningful innovations worth celebrating.
But even in these benevolent applications there are some insidious potential problems. What if your cancer detection algorithm is mostly right, but it has a high false negative rate? In other words, when it goes wrong, it sometimes gives an “all clear” to someone who actually has cancer. Now, not only do those users not get the benefit of the cancer detection, but they have a false sense of security and may not go to a doctor to get checked, worsening their outcomes.
And what if the cancer detection app is focused on skin cancer, and it works better for people who have light skin? So not only is there consequential error involved, but the error is disproportionately affecting people based on their skin color. This is not a tortured hypothetical to make a point, by the way: this is a problem drawn from the real world,[1] and it occurs in other domains and for other demographics as well.
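To make those two problems concrete, here is a quick back-of-the-envelope sketch. The numbers are completely invented; the point is only that "mostly right" can still mean real people get missed, and missed unevenly.

```python
# Back-of-the-envelope: invented numbers, just to show how false negatives
# (missed cancers) can pile up and fall unevenly across groups.
groups = {
    "lighter skin": {"people_with_cancer": 100, "false_negative_rate": 0.05},
    "darker skin":  {"people_with_cancer": 100, "false_negative_rate": 0.20},
}

for name, stats in groups.items():
    missed = stats["people_with_cancer"] * stats["false_negative_rate"]
    print(f"{name}: roughly {missed:.0f} of {stats['people_with_cancer']} "
          f"real cancers get a false 'all clear'.")
```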
Having good intentions is admirable but not enough to prevent harmful consequences from AI systems. And harm is particularly difficult to avoid in new, rapidly evolving, and technically complex fields like AI. At scale, accidental harm becomes inevitable.
In future posts, I'll explore how we can approach AI use thoughtfully, identify common problems before they cause damage, and discuss how to make things right when things go wrong. If you want to get started, but aren’t sure how, your free Mission-First AI Starter Kit includes a simple AI Use Policy Template, Safe Start Prompt Pack, AI Transparency Communications Bundle, and more!
What's your experience with AI tools so far? Are you excited about the possibilities or concerned about the risks? Let me know in the comments!
—
LLM disclosure:
I used Claude 3.7 Sonnet. Here’s the prompt I used:
“This is a passage from the introduction to a book I am writing. Can you please reword it with the following goals in mind:
This is for a blog post, not the introduction to a book. So all references the book can be removed or replaced with references to past and future blog posts.
I am not sure that the blog will be focused entirely on LLMs forever, although right now it is.
The tone of the blog is more relaxed than the tone of the book, and I don't want people reading the book to be bored because they read this part of the introduction already!”
With this approach, all the content and ideas and examples are mine, and I could reuse this content without reprinting text from the book draft. I edited it pretty heavily to make sure it was clear and accurate, and to tone down some of the AI-voice. When I wrote the prompt, I wondered whether I would like Claude’s text more than mine and replace some of it in my draft, but that did not happen. If anything, it made me feel more confident in my writing voice, which was a nice bonus!
[1] Wen, D., Khan, S. M., Xu, A. J., Ibrahim, H., Smith, L., Caballero, J., Zepeda, L., Perez, C. de B., Denniston, A. K., Liu, X., & Matin, R. N. (2022). Characteristics of publicly available skin cancer image datasets: A systematic review. The Lancet Digital Health, 4(1), e64–e74. https://doi.org/10.1016/S2589-7500(21)00252-1