What a Deepfake Did to a School and What We Can Learn
In January 2024, an audio clip that sounded like high school principal Eric Eiswert spewing racist and antisemitic slurs spread to more than two million listeners. Threats poured in. The district pulled Eiswert from his role, brought in interim leaders, and beefed up security. Local news was on the case. Phone lines jammed with angry calls. Teachers and students wrestled with two fears at once: had a bigot been in charge, and had someone planted secret recording devices on campus?
Eiswert referred press questions to his union. “We believe that it is AI-generated. He did not say that,” the union leader told the Baltimore Banner, adding that Eiswert condemned the words on the tape. District leadership said they couldn’t yet verify the clip and opened an investigation. Coverage of this development lit another fire: critics accused the newsroom of carrying water for a guilty man.
Four months later, the story flipped. Police arrested the school’s athletic director, Dazhon Darien, and charged him with theft, stalking, retaliating against a witness, and disrupting school operations. According to investigators, he had created the fake recording. When officers arrested him on the deepfake warrant, he was also trying to bring a gun through airport security. The FBI reported finding evidence on his devices that led to separate child-exploitation charges as well. Why fake the recording? Earlier that year, Eiswert had warned Darien about performance problems and told him his contract likely wouldn’t be renewed.
Darien entered an Alford plea (maintaining his innocence while conceding the evidence could convict him) for charges tied to the deepfake and received a four-month jail sentence. As of this writing, the child-exploitation case remains open.
Eiswert has since taken another local job and filed suit against Baltimore County Public Schools. He argues the district hired Darien negligently and failed to correct the public record, which prolonged the harassment and damaged his reputation across the country.
What leaders can do
You no longer need expensive tools or even technical skills to create a convincing fake image, video, or audio clip. Disinformation and deepfakes are a risk we all live with. Here’s a short overview of what you can do to combat deepfakes, disinformation, and other synthetic media attacks. There’s more where this came from in “Amplify Good Work.”
Know your risk profile
Some contexts invite attacks. Public advocacy groups working on polarizing issues and public-facing leaders who hold power or control funds are especially exposed. Leaders with abundant audio/video online are easier to clone. Map these risk factors and plan accordingly.
Insider risk belongs on that map. Low morale or unresolved conflict can turn a staffer into the attacker. To surface concerns early, partner with HR to create clear feedback opportunities and stay sensitive to staff morale and conflict.
Reduce your vulnerability before anything happens
Prove what’s real. Technical measures can document that your media is legitimate. Content-provenance practices for official messages (e.g., digital signatures, verifiable archives) deter hoaxes and speed up your public response, because you can show exactly what came from you.
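To make the idea concrete, here is a minimal sketch of signing an official statement so you can later prove it came from you and hasn’t been altered. It assumes Python and the third-party `cryptography` package; the statement text is hypothetical, and a real provenance setup (key management, a published public key, or a standard such as C2PA for images and video) is considerably more involved.

```python
# Minimal sketch: sign official statements so authorship can be verified later.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# One-time setup: generate a keypair. Keep the private key secret;
# publish the public key (e.g., on your website) so anyone can verify.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign each official statement and archive the statement and signature together.
statement = b"Official district statement, 2024-01-20: ..."
signature = private_key.sign(statement)

# Later, anyone holding the public key can confirm the statement is unmodified.
try:
    public_key.verify(signature, statement)
    print("Verified: this statement is authentic and unaltered.")
except InvalidSignature:
    print("Warning: signature does not match; treat as untrusted.")
```

A verifiable archive builds on the same idea: a public log of statements and their signatures that reporters or partners can check against your published key.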
Invest in relationships. Regular, direct communication with staff, partners, families, donors, and volunteers builds trust. That trust helps inoculate your most important audiences against believing lies: you want their first thought after seeing a synthetic media attack to be “that doesn’t sound like them.”
Speak carefully and pay attention. Media training helps spokespeople avoid quotes that can be twisted or taken out of context; media-literacy training helps everyone spot and avoid sharing fakes.
Teach the playbook. Include a deepfakes/disinformation module in staff training so people know how to recognize attacks, follow a clear communication policy so the organization can respond quickly, and avoid amplifying lies.
Build response capacity and protocols
Create clear, graduated protocols that prioritize immediate harm reduction and longer-term prevention. Spell out who to notify, what to do first, and how to contact people affected. When you make a mistake or someone targets you with a fake, reach out directly to those harmed, explain what happened, describe fixes, and offer remedies where appropriate.
Train staff to recognize failure modes, stay skeptical of AI-generated content, and follow those protocols under pressure. You can run digital simulations or tabletop exercises so the steps feel familiar before a real incident.
Manage the recovery window
Rebuilding trust takes time and resources. Expect reputational wounds to drain attention from your mission, and budget for outreach, legal, PR, and SEO support. Plan a steady cadence of updates and, if needed, temporarily pause lower-priority work so core services don’t slip. Offer staff support through office hours, counseling, or time off.
Why this matters beyond PR
Even after you debunk a fake, doubt can linger. That erosion of confidence can hurt donations, volunteer energy, partnerships, and staff morale—which can, in turn, raise insider risk. Treat preparedness as part of governance and stewardship, not just a PR or compliance question.
LLM Disclosure.
I first asked ChatGPT 5 Thinking to write a post based on the case study as it appears in my book draft. That really did not give me what I expected: the telling of the case was dry and thin on detail. So I opened another chat in the same model and pasted in just the storytelling part.

