Ethics & LLMs: Accessibility
AI is often framed as an accessibility boon. In fact, AI has enabled many useful assistive technologies and made existing ones cheaper, better, and even less stigmatized. It’s lowered language barriers, too. But, as with all of these values, there are some downsides as well.
Stuff AI Can Do Right Now
Real-time communication: Speech-to-text and text-to-speech aren't new, but AI has improved their accuracy and made them more mainstream. When technology goes mainstream, it often gets cheaper, the quality improves faster, and the stigma is softened. Speech-to-text and text-to-speech can help vision- and hearing-impaired people participate in meetings in real time without specialized devices or services.
Instant translation: Instead of only offering services in the 2-3 most common languages in your area, imagine communicating in real time with speakers of Laotian, Swahili, and endangered indigenous languages. It's not perfect (you'll still need professional translators for important stuff), but it can enable your people to create genuine human connection across language barriers.
24/7 access: AI chatbots can provide basic information, referrals, and preliminary eligibility screening around the clock. For people who can't visit during business hours or live in remote areas, something is often better than nothing.
Voice commands: Much-improved voice recognition and natural language processing are now integrated into many smartphones and smart home devices, letting people with limited dexterity or mobility impairments control basic functions of their homes, like lights, music, and other appliances.
Accessibility Problems
AI has incredible potential to break down barriers—but it's not all sunshine and universal access. Sometimes, it creates new problems while trying to solve old ones.
Bias amplification: Remember how facial recognition systems were shown to work well for white men but terribly for Black women? AI systems are only as good as their training data, and if that data doesn't represent everyone, the technology won't serve everyone well. People who speak with accents, in dialects, or with non-standard speech patterns often get left behind by voice recognition systems.
The digital divide: AI requires internet access, smart devices, and digital literacy. If you're already struggling with basic technology, AI adds another layer of complexity that can shut you out even further. The flip side of assistive technology getting mainstreamed is that people are increasingly expected to have it and use it to access critical services.
Privacy vs. accessibility trade-offs: Disabled people often have to give up more personal data to make AI work for them. For example, setting up voice commands might require just a few audio clips for most people, but someone with speech differences might need to provide much more voice data to train the system.
Undermining autonomy: Some AI assistive technologies make decisions FOR users rather than supporting their choices. That's not empowerment.
What This Means for Your Organization
Include disabled people from day one: Don't build something and then ask if it works. Involve the people who would actually use the technology from the earliest brainstorming stages. “Nothing about us without us,” as they say.
Think beyond the obvious: That cool new AI tool might solve one problem while creating another (or three). Ask yourself: Who might this leave behind? What new barriers might we be creating? For example, are we building a system people can’t navigate without a smartphone or high-speed internet?
Don't assume all disabilities are the same: What works great for someone with one disability might be useless or less accessible for someone with a different disability. Giving people the power to choose for themselves is critical.
Quality-check your tools: Test AI systems across different groups of people. If a tool works beautifully for your team but fails for your community members, that's a problem.
Accessibility is another domain in which governments have legal responsibilities that other types of organizations do not. We discussed this in the GiveDirectly case: they’ve made the choice to deliver aid to the largest number of people, preferring efficient delivery over universal access. Charities with limited resources can make that trade-off, but governments have an obligation to expand access within reason. However, as the current administration in the US has pulled back its support for assistance programs, people in need may increasingly rely on private organizations. It may be worthwhile for many organizations to reevaluate the accessibility trade-offs they made in past years and seek funding to bridge the gaps.
The Bottom Line
AI isn't inherently good or bad for accessibility: the key lies in implementation. The organizations that get this right will be the ones that remember inclusion isn't about deploying the latest technology, but about removing barriers and creating genuine opportunities for everyone to participate.
The goal isn't perfect AI (that doesn't exist). The goal is thoughtful AI that actually serves the people it claims to help.
What accessibility challenges is your organization facing? How might AI help—or hurt—in addressing them? How about reviewing your website or other content for accessibility? There’s a prompt template in the Safe Start Prompt Pack in your free Mission-First AI Starter Kit!
—
LLM disclosure:
“Thanks! Can you create a blog post based on this chapter of my book? The tone should be more casual, and it needs to cover the most important points, but be much shorter.”
I was sad to see what was cut, but rest assured, there's much more on this in the book! Available September 2025.
“Can you make a simple 1:1 image for this post with no text on it?”