
AI will fix the D and E but not the I in funding for nonprofits

Updated: Nov 22, 2023

You're skeptical of this claim, aren't you? With AI known to be biased, how can it fix bias?

Graphic image of a surprised face
Photo by Shubham Dhage on Unsplash

Since AI is a tool, I recommend everyone in the JEDI or DEI world get very familiar with it, because AI, for all its inaccuracies and biases, can hide the author. Using something like ChatGPT for funding proposals matches the nuanced white, colonial language and meaning hidden in funding criteria.


I have worked for decades as the ‘beard’ for multicultural groups that were never winning the funding. Misunderstanding small exclusionary criteria, combined with a desire to be open and honest, left many groups without the means to operate their supposedly desirable programs. Funders would say ‘We want x’ and, of course, the groups could meet the request. Because they spoke English, could read the criteria, and could write the proposal, they believed they could win the funding. But then their submissions would be rejected, and this caused confusion because the funders (government and foundations) would not be transparent about the reasons for refusal.

But I could see those reasons, and they were almost always about wording. A white, university-educated person could ‘read between the lines’. I saw the exclusionary wording and knew the better term to get around it. I knew that answering certain questions honestly would cause a rejection, but I also knew that other nonprofits in the same circumstance were offering different words and explanations, and they were getting funded. It was clear to me that when a funder asked for proposals to address a social problem, those proposals had to resonate with them, and we all know that homogeneity (the quality or state of being all the same or all of the same kind) rules the world. Without realizing it, the world of ‘white’ had its own language and created or reinforced its own kind. I am of that world, and I speak that language. And so does ChatGPT.


We can expect that generative AI will become de rigueur for proposal writers the world over very soon. Why risk human oversight when an AI-generated proposal will match language, criteria, and focus? It won’t accidentally insert the wrong words; it will only choose the right ones, because the prompt will say ‘Match the response to the following criteria’. And since AI is trained on a world dominated by ‘white’ culture, it will do the exact matching, regardless of the culture, colour, or first language of the authors, so that the proposal will be successful.


So where does that leave funders? When all the submissions use the ‘right’ words, match all the criteria, and resonate easily because of homogeneity, how will they award funds? Finally, it will be based on what they intended to fund in the first place.


We already have the ability, through software, to anonymize and randomize funding proposals so that funders can’t tell which groups have submitted, but we are still left with the hidden bias subtly revealed through the choice and understanding of wording. But that is coming to an end, thanks to AI.


Written by Tina Crouse

Tina Crouse is the CEO of ANSWER.it, a tech4good social enterprise on a mission to strengthen the nonprofit sector. Tina's career has spanned more than two decades in the charitable sector, with a specialty in grant development. She has created a number of ‘firsts’ in Canada and has worked at three tech companies, heading up two social enterprises.
