This image contains three realistic-looking "photos" of hungry children, taken from three different ads on Reddit. All three images were created with AI.

The Ethics of Using AI-Generated Images for Charity Fundraising

Listen to an audio version of this article, read by the author

A charity food aid organization in the UK is running AI-generated images of hungry children in its fundraising campaigns. The images are not labeled as AI-generated, and they have sparked an interesting discussion.

There are many issues surrounding “poverty porn”, but many charities feel these types of images are the best way to raise funds. I imagine they see AI-generated imagery as an ethical alternative to actual photos. I’ve reached out to Charity Right for comment but haven’t heard back yet.

edit (May 3rd, 2023): I reached out to Charity Right for comment and was told “we’ll be addressing our usage of AI in a blog post that will be published on our website next week”.

edit (May 26th, 2023): Charity Right has published a blog post about their usage of AI-generated images here. I asked them why they decided to use AI-generated images and whether there was any consideration of tagging these images as “AI-generated”. I feel they addressed both of these questions in their post, so check it out.


For more context, I recommend reading this Toronto Star article on another charity that used AI-generated images in their ad campaigns last year.

An AI generated image that looks like a real photo. A distraught young mom stares out the window in a barren room, holding a baby.

Seriously, go read this article now for some interesting perspectives on this topic:

Should Ads Containing AI-Generated Images Be Labelled?

There are no requirements for companies to clearly label AI-generated content. Adding labels might prevent viewers from feeling tricked by these images, but that is only part of the problem here.

And of course, image generators are already being used by scammers to make their fake charities appear more legitimate. These scammers pretend to be real charities and use AI to create realistic-looking images purportedly from Syria, Ukraine, and other crisis areas.

After a bit of poking around, I was able to find some of these AI-generated ads in the wild. Charity Right appears to be targeting them at Reddit users in the UK. It’s worth noting that Charity Right does not currently use any AI-generated images on its own site; in fact, the photos there tend to strike a more hopeful tone than the AI images in the ads. Here’s a direct link to one of their ads on Reddit.

Amnesty International Makes The Same Mistake

After the negative reaction to Charity Right’s ads, I was very surprised to see another big non-profit make the same mistake. Amnesty International claims it’s using images from Midjourney to protect the identities of people protesting Colombian police brutality, but critics say it undermines Amnesty’s own credibility and discredits true victims of abuse.

A photo-realistic image generated by AI that shows a group of policemen arresting a protestor.

Personally, I think campaigns like this do far more harm than good. By using fake images to promote its fundraising campaigns, Amnesty hands the Colombian government more rhetorical ammunition to claim that any photos or reporting about these protests might be fake.

Note: This blog post originally started out as a post on Mastodon. There is a very similar-looking post on PetaPixel that contains many of the same links, but it was published after my Mastodon post. It’s likely they found the story on their own (via Reddit); I’m just mentioning it here so people don’t think I copied the story from PetaPixel.
