AI could help scale humanitarian responses. But it could also have big downsides
NEW YORK — As the International Rescue Committee copes with dramatic increases in displaced people in recent years, the refugee aid organization has looked for efficiencies wherever it can — including by using artificial intelligence. The Signpost project, which includes many other organizations, has reached 18 million people so far, but the IRC wants to significantly expand its reach with AI tools — if it can do so safely.

The IRC said it has agreed with its tech providers that none of their AI models will be trained on the data generated by the IRC, the local organizations, or the people they serve.

Consulting with displaced people and others whom humanitarian organizations serve may add time and effort to the design of these tools, but skipping their input raises serious safety and ethical problems, said Helen McElhinney, executive director of CDAC Network. People receiving services from humanitarian organizations should be told if an AI model will analyze any information they hand over, she said, even if the intention is to help the organization respond better.