Case Study: GiveDirectly’s AI-Powered Poverty Targeting
Let’s start with the impact: GiveDirectly uses AI to get aid to some hurricane victims six times faster than their previous process and more than 20 times faster than FEMA.
When Hurricanes Fiona and Ian hit Puerto Rico and Florida in 2022, GiveDirectly used an AI system called Delphi, developed with the Google.org Fellows program, to identify high-poverty areas with widespread roof damage. They used before-and-after satellite imagery from the National Oceanic and Atmospheric Administration (NOAA) to identify damage and US Census Bureau data to identify high-poverty areas.
AI targeting allowed GiveDirectly to start distributing funds to low-income households (through Propel, an app used to distribute SNAP benefits) before teams could arrive on the ground. According to their article on the topic, their application takes about a tenth the time of FEMA’s (and the money arrives more than 20x faster!) because the app can automatically confirm eligibility without recipients having to provide additional documentation.
There are some limitations to this approach. Before I get to them, do you want to guess what they might be?
I’ll start with the good bits to give you a moment to think :)
What’s good?
GiveDirectly’s Delphi and its associated application process are a great example of using AI to target interventions while accounting for AI’s technical weaknesses. AI may be imperfect at identifying damage from satellite images; in particular, it may catch only the most extreme cases of damage. But using AI to identify high-need areas instead of high-need homes reduces the risk that a not-sensitive-enough AI undermines the program.
If you are implementing an AI that catches extreme cases but misses others (that is, one with a high false-negative rate), you may still be able to use it to identify high need fairly. By expanding your target from individual positives to a larger group that shares similar circumstances (in this case, geographic ones) and has a high incidence of positives, you still identify the group (or area) of highest need, as long as there is no pattern in the AI’s false negatives.
In this case, an example of a risky pattern would be a type of roof that is both more resistant to damage and more popular in one part of town. If that were true, there could be heavy damage to the rest of the houses in that area that would not be visible in the satellite images.
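To make the aggregation argument concrete, here is a toy simulation (with made-up numbers, not GiveDirectly’s actual model or data): even a detector that misses most damaged roofs still ranks a hard-hit area above a lightly hit one, as long as its misses are random rather than patterned.

```python
import random

random.seed(0)

# Toy model: two hypothetical areas, one hard-hit and one lightly hit.
# These rates are illustrative assumptions, not real damage statistics.
N_HOMES = 1000
TRUE_DAMAGE_RATE = {"hard_hit": 0.60, "lightly_hit": 0.05}
DETECTION_RATE = 0.30  # the detector misses 70% of damaged roofs

def detected_damage_share(area):
    """Fraction of homes in an area flagged by an insensitive detector."""
    flagged = 0
    for _ in range(N_HOMES):
        damaged = random.random() < TRUE_DAMAGE_RATE[area]
        # Misses are uniformly random: no pattern in the false negatives.
        if damaged and random.random() < DETECTION_RATE:
            flagged += 1
    return flagged / N_HOMES

shares = {area: detected_damage_share(area) for area in TRUE_DAMAGE_RATE}

# Home-level flags badly undercount true damage in both areas, but the
# *ranking* of areas is preserved, so targeting the most-flagged area
# still directs aid to the highest-need group.
assert shares["hard_hit"] > shares["lightly_hit"]
print(shares)
```

If you re-ran this with a patterned miss (say, the detector never flags one common roof type concentrated in the hard-hit area), the ranking could flip, which is exactly the risk described above.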
OK— ready to talk about the problems?
Limitations
The AI helps them target high-need areas, and they focus on SNAP-eligible residents, but the distribution method (a smartphone app used by fewer than a quarter of SNAP recipients) leaves a lot of people out.
First, not everyone in need is eligible for SNAP: even among people who are income-eligible, and depending on the state, most full-time college students, certain immigrants (both undocumented and some with legal status), people who have served their time for felony-level drug convictions, and workers on strike are not eligible.
Second, about 20% of people eligible for SNAP do not sign up for benefits and so would not be able to take advantage of this emergency relief.
Third, not everyone has access to a phone and data plan that allow them to use smartphone apps. Generally speaking, requiring a smartphone for eligibility disproportionately excludes older adults. (Smartphone adoption also varies by gender, race, income, and education level.)
GiveDirectly is a charity focused on direct cash transfers to people in need, not a government program, and they don’t have the resources to give to every person. Legally speaking, non-governmental assistance can prioritize timely intervention over perfect fairness. That said, GiveDirectly has acknowledged the limitations of the smartphone-dependent model and indicated that they are seeking funding to expand to a hybrid model.
What’s next?
This effort did get money out more quickly with less burden on recipients, and by that measure, it was a success. GiveDirectly used the process again last year to distribute aid to households hit by Hurricanes Helene and Milton.
This is part of a larger suite of AI implementations at GiveDirectly: according to their Responsible AI/ML framework, they plan to use AI to:
identify people who could benefit most from cash transfers
predict crises to improve response times
monitor socioeconomic changes in areas where they work
detect fraud
deliver information or coaching to cash transfer recipients
Two short notes before we wrap up.
Politics matter. The Trump administration is attempting to cut funding for NOAA, which could compromise the viability of this targeting method. The administration has also planned cuts to FEMA and even promised to eliminate it, which would dramatically increase hurricane victims’ reliance on charity and turn the limitations of donation-funded programs like this into much more serious justice issues.
Sign up for benefits you qualify for. Soap box moment: stigma against benefits helps no one. SNAP is there to help. If you qualify, it’s for you. And if you sign up now, even if you don’t use benefits you qualify for, you won’t be left behind by the increasing number of programs that use SNAP eligibility as a proxy for their own eligibility.
What can you learn from GiveDirectly’s AI implementation? Could you use AI (or plain old boring statistics ;) to target your work to high-need groups or reduce the burden of proving eligibility? Let me know in the comments!
—
LLM disclosure:
I learned about this case study after feeding the current draft of “Amplify Good Work” into Claude 3.7 Sonnet and getting feedback on the structure. It indicated that the case study I started with was helpful, so I asked for some more examples. This gave me the idea for the series in the first place, and this example was one of the ones it suggested I look into.
“This is super helpful. Can you point me to some more nonprofit, government/agency, or b-corp examples that I can include?”
It gave me a list, separated by type of organization, with little blurbs describing the implementation. Interesting to note that even though the example I had was a cautionary tale, all of the examples it gave me were framed as positive examples. I then added a second query:
“Amazing! Thank you! Are there any more cautionary tales of mission-drive [sic] org AI implementation (similar to [the example in the draft])”
PS— I know thanking AI is not necessary, and complimenting it like this (“Super helpful” “amazing”) is even weirder and could be altering my results! I do it automatically, and I just noticed it while writing this LLM disclosure. I will keep thinking about it, and if it ends up being interesting, you can be sure I’ll write about it on this blog!
To get the blog image, I asked ChatGPT o3 “Great! Can you make one with similar elements for this post?” (after the previous one about the series). The result was not great and had “Written by Karen Boyd” taking up a lot of space, so I requested another one with “Can you do another one without the author credit?” I wondered whether “another one” would communicate that I wanted not an edit but a fresh take. I did get a fresh take!