Is your organization ready for AI?
Before diving into AI implementation, mission-driven organizations need an honest assessment of their current capacity. Rushing into AI without understanding your foundation can lead to failed implementations, wasted resources, and damaged trust.
This systematic approach helps you avoid common pitfalls, identify quick wins, and develop a realistic timeline that aligns with your organizational capacity and staff readiness.
Start with Your Technology Foundation
Your existing infrastructure determines what AI implementations are even possible. The scope of your technology audit depends entirely on your AI goals.
If you plan to use out-of-the-box solutions like ChatGPT or Claude for basic tasks, this step requires less attention. However, if you're considering custom solutions, API integrations, or on-premises implementations, a thorough technology audit is essential. You could defer this step and do it with a vendor, but vendors have incentives that don't align perfectly with yours. Working with (or as) your IT department to understand your infrastructure on your own will pay dividends if it helps you identify a less complex solution or spot a shady vendor.
Hardware and Computing Resources
Custom AI solutions and local implementations demand substantial computing power. Document your current server capabilities, cloud infrastructure, and end-user devices. Many AI applications require more processing power than traditional software, particularly for data analysis or local model deployment.
Network bandwidth matters too—cloud-based AI services often transfer large amounts of data. If your organization frequently experiences connectivity issues, these problems will worsen with AI implementation. Address basic infrastructure problems before adding AI complexity.
Software Integration Points
Catalog your existing software systems, especially those that store data you might want to use with AI tools. Customer relationship management systems, financial databases, program management platforms, and communication tools all represent potential integration opportunities and challenges.
Pay close attention to data export capabilities and APIs. Systems that only export to PDFs or require manual data entry create bottlenecks in AI workflows. Understanding these limitations now prevents frustration later.
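One way to make this catalog actionable is to record each system's export options and flag the ones that can only produce PDFs or require manual re-entry. The sketch below is a minimal illustration of that idea; the system names and export formats are hypothetical, not drawn from any particular vendor.

```python
# Hypothetical catalog of systems and their export capabilities;
# names and formats are illustrative only.
systems = [
    {"name": "CRM", "exports": ["csv", "api"]},
    {"name": "Finance DB", "exports": ["pdf"]},
    {"name": "Program platform", "exports": ["xlsx", "api"]},
]

# Formats an AI workflow can consume without manual re-entry
MACHINE_READABLE = {"csv", "json", "xlsx", "api"}

def find_bottlenecks(catalog):
    """Flag systems whose only export options are not machine-readable."""
    return [s["name"] for s in catalog
            if not MACHINE_READABLE.intersection(s["exports"])]

print(find_bottlenecks(systems))  # → ['Finance DB']
```

Even a spreadsheet version of this catalog gives you a prioritized list of integration gaps before any vendor conversation.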
Evaluate Your Data Readiness
Speaking of data: what do you have? Taking stock of the data you can access, and its quality, clarifies what you could learn from it and exposes gaps in your data collection processes that you'll need to fill to get a complete picture.
Data Inventory and Quality
Create an inventory of all data your organization collects, stores, and maintains. Include information about sources, formats, update frequency, and current uses. Pay particular attention to data quality issues like missing information, inconsistent formats, or outdated records.
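A basic quality audit can be scripted once your data is in a machine-readable export. The sketch below, using only Python's standard library, counts missing values, duplicate records, and mixed date formats in a small inline sample; the field names and records are hypothetical, standing in for whatever your systems actually export.

```python
import csv
import io
from collections import Counter

# Hypothetical CSV export from a program-management system;
# field names and values are illustrative, not from any real system.
SAMPLE = """\
client_id,enrolled_on,zip,annual_income
101,2023-01-15,30301,32000
102,01/22/2023,30312,
103,2023-02-03,30301,41000
103,2023-02-03,30301,41000
"""

def audit(rows):
    """Count missing values, duplicate rows, and mixed date formats."""
    missing = Counter()
    seen, duplicates = set(), 0
    date_formats = Counter()
    for row in rows:
        for field, value in row.items():
            if not value.strip():
                missing[field] += 1
        key = tuple(row.values())
        if key in seen:
            duplicates += 1
        seen.add(key)
        # Crude check: ISO dates contain '-', other styles do not
        date_formats["iso" if "-" in row["enrolled_on"] else "other"] += 1
    return {"missing": dict(missing), "duplicates": duplicates,
            "date_formats": dict(date_formats)}

report = audit(csv.DictReader(io.StringIO(SAMPLE)))
print(report)
# → {'missing': {'annual_income': 1}, 'duplicates': 1,
#    'date_formats': {'iso': 3, 'other': 1}}
```

Even this crude pass surfaces the three most common problems: incomplete records, duplicated entries, and inconsistent formats that will trip up any downstream tool.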
Consider external data sources you regularly access. Census data for identifying high-need areas, partner data feeds, or city-wide calendars can enable insights impossible with internal data alone. For example, you might know the annual income of people you serve, but how does it compare to your county overall?
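The income comparison above can be a one-liner once you have both numbers. In this sketch, the client incomes and the county figure are made up for illustration; in practice the county median would come from an external source such as the Census Bureau's American Community Survey.

```python
from statistics import median

# Hypothetical annual incomes of people your organization serves
client_incomes = [18500, 22000, 31000, 27500, 24000, 19800]

# Hypothetical county-wide median household income, e.g. from the
# Census Bureau's American Community Survey (figure is illustrative)
county_median = 54000

client_median = median(client_incomes)
ratio = client_median / county_median
print(f"Clients' median income is {ratio:.0%} of the county median")
# → Clients' median income is 43% of the county median
```

Pairing even one external benchmark with internal data like this turns a raw number into a statement about need that internal data alone can't support.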
Historical Data Depth
Many valuable AI applications require substantial historical information to identify patterns and make predictions. Organizations with comprehensive, well-maintained historical records have more AI opportunities than those with limited or inconsistent data history.
Consider both quantity and quality. A large dataset with serious quality problems may be less valuable than a smaller, complete and accurate dataset.
Data Governance Review
Strong data governance becomes even more important with AI systems, as these tools can process and analyze data in ways that create new privacy and security risks.
Document current access controls, sharing policies, and retention procedures. Organizations without clear data governance often struggle with AI implementation because they cannot make confident decisions about what data to use and how to use it safely.
The concept of "greased data" becomes particularly relevant here. Data that was practically difficult to access suddenly becomes much more accessible through digital systems, creating new ethical questions even when following existing policies.
Understand Your Team's Perspective
Staff input is crucial for effective implementation, identifying high-impact pilot projects, and designing appropriate training programs.
Leadership often has an incomplete picture of daily work realities. Staff members understand the workarounds, unofficial processes, and hidden inefficiencies that exist in your organization. They can identify which tasks consume disproportionate time and which processes would benefit from AI assistance.
Many workers are already experimenting with AI tools. Understanding these usage patterns helps you build realistic policies and training programs rather than rules that everyone ignores.
Organizations have distinct cultures around technology adoption, risk tolerance, and change management. Some teams embrace new tools quickly; others prefer gradual, structured transitions. Understanding your organization's change culture helps you plan an implementation approach that works with your team's preferences rather than against them.
Staff concerns about AI often reflect deeper organizational issues around job security, decision-making transparency, or workload management. Address these underlying concerns directly through clear communication about your approach to these topics. Creating and communicating a clear policy around staff retention and skills shifts may also help put staff minds at ease.
Conducting Effective Staff Conversations
The gold standard involves in-depth interviews with representative workers, followed by a broader survey. Interviews allow you to discover unknown issues and hear unfiltered opinions, while surveys provide representative data from your entire organization.
When interviewing staff, start by explaining the project, ensuring voluntary participation, and protecting confidentiality. Ask neutral, open-ended questions rather than leading ones. Leave sensitive questions for the end after building trust. A more in-depth guide for how to robustly solicit, gather, and interpret staff perspectives is forthcoming in “Amplify Good Work.”
Focus on understanding:
Current tasks and biggest roadblocks
Existing AI beliefs, concerns, and hopes
Current AI experience and usage patterns
Consider Other Stakeholder Views
Donors, board members, partner organizations, and community members will have varying levels of interest in your AI use. Their perspectives can influence technology selection, policies, and external communication strategies.
Consider surveying or interviewing key stakeholders about:
How they perceive your organization's mission and values
Why they support your work
Their thoughts about AI in general and specific use cases
Whether they think AI benefits outweigh risks for your work
Remember that stakeholder perspectives vary widely. Train staff who interact regularly with these groups to understand individual viewpoints and adapt their approach accordingly. Sometimes small accommodations—like avoiding AI-generated images for a program where a skeptical stakeholder is deeply involved—can maintain important relationships without compromising your overall strategy.
Note that for both staff and other stakeholders, concerns are likely to (appropriately!) heighten when there’s a perceived threat to the organization’s mission or high potential harm to people. Consider the following:
An organization that promotes financial literacy uses AI generated images in an internal slide deck
An organization that promotes financial literacy uses AI to generate the centerpiece painting in their lobby
An arts organization uses AI to generate images in any circumstance
Which do you think will draw the most pushback, and which the least? For more on this, see my blog post on ownership and IP.
Moving Forward Thoughtfully
This assessment process takes time, but it prevents costly mistakes and builds organizational buy-in for your eventual AI implementation. Use your findings to prioritize improvements that will have the biggest impact on your AI readiness.
Remember that readiness is contextual. A small organization planning to use basic AI tools for content creation has very different readiness requirements than a large nonprofit considering a custom predictive analytics tool for program targeting.
The time you invest in this assessment will pay dividends through more successful AI implementations that actually serve your mission rather than creating new problems to solve.