How Five Awardees Are Paving the Way for a More Equitable AI-Powered Future

USAID launched the Equitable AI Challenge to help identify and address actual and potential gender biases in AI systems across global development contexts.

By Alexander Riabov, Communications Specialist, DAI-Digital Frontiers 

Introducing Winners of USAID's Equitable AI Challenge

Artificial intelligence (AI) tools are a double-edged sword: they promise tremendous benefits for international development, but they have also demonstrated bias and harm, often resulting from inequitable design, use, and impact. Recognizing that AI technologies can produce gender-biased outcomes, USAID launched the Equitable AI Challenge in the fall of 2021 to find innovative and creative approaches to addressing gender-inequitable outcomes. The challenge, implemented through DAI's Digital Frontiers, sought to support approaches that increase the accountability and transparency of AI systems in global development contexts in order to produce more gender-equitable results. Dozens of competitors submitted approaches for preventing, identifying, or monitoring bias and harm against women and gender-nonconforming people—reflecting the larger goals of USAID's Digital Strategy, the National Strategy on Gender Equity and Equality, and the recently launched USAID AI Action Plan.

USAID chose 28 diverse semi-finalists to attend a three-week virtual co-creation event, held between February 14 and March 1, 2022, which brought together select technology firms, startups, small and medium enterprises, civil society organizations, and researchers from around the world. The co-creation focused on the need for close collaboration between the public and private sectors, which allows diverse perspectives, local solutions, and partnerships to form among AI technology developers, investors, donors, and users. To address AI's most critical issues, including bias and inequity within AI systems, participants were encouraged to collaborate on solutions, identify partnerships, and strengthen their proposals—all while forming a larger community of practice.

In October 2022, USAID and Digital Frontiers selected five proposals to receive grants to implement their approaches in alignment with the challenge's objectives. Dive in and learn more about the winners of the Equitable AI Challenge as their work gets underway!

Accion’s Center for Financial Inclusion (CFI): Creating a Due Diligence Model for Investors and Donors

CFI proposes developing a due diligence model that accounts for gender-inequity issues in the design of inclusive finance algorithms. The tool will help impact investors and donors push digital finance companies and product designers to build better algorithm development processes and to treat the user, the user's existing ecosystem, and consumer protection as design requirements. CFI's approach is designed to support investors and donors conducting due diligence, and it brings together the people who create the algorithms and the people who interact with them, including impact investors such as FMO, Accion Venture Lab, and Quona Capital. Through this approach, CFI expects to create a practical and flexible model that can be slotted into impact investors' and donors' due diligence and portfolio support processes. CFI will produce a final report with recommendations and suggested adaptations for deploying the tools with other donors and funders, and will disseminate its findings during Financial Inclusion Week 2023 and other USAID learning events.

Nivi and the University of Lagos (UNILAG): Partnering to Create a Gender-Aware Auditing Tool

In support of digital health interventions in Nigeria, UNILAG and Nivi are partnering to create a gender-aware auditing tool within Nivi's existing health chatbot deployment. The audit tool will evaluate real user interactions with the chatbot, along with the bot's interpretation and response, incorporating user feedback on the adequacy of the bot's response. The human judgments in this audit process serve two purposes: first, they can be aggregated into performance metrics, both globally and segmented by demographic; second, they can be used directly as training data to retrain, tune, and improve the natural language processing (NLP) models behind the chatbot.

UNILAG and Nivi will first incorporate automated translation into English from low-resource languages such as Hausa. This allows scalability, as downstream information extraction and diagnosis models can be leveraged in new languages, such as Yoruba and Igbo, without retraining. UNILAG and Nivi will also apply the tool to improve the existing NLP intent recognition modules of Nivi's health guide chatbot and the new models developed by UNILAG through training and model testing. By building an NLP-based system that is more attuned to the needs of each local population, Nivi's health chatbot and digital health services will be able to reach more women at a lower cost and help them make informed health decisions.
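To make the first purpose of the human judgments concrete, here is a minimal sketch of aggregating audit judgments into performance metrics, both overall and segmented by demographic. The field names, group labels, and records are hypothetical illustrations, not Nivi's actual schema or data:

```python
# Illustrative only: aggregating human audit judgments of chatbot
# responses into accuracy metrics, overall and per demographic group.
from collections import defaultdict

def segmented_accuracy(audit_records):
    """Share of responses judged adequate, overall and by demographic."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for rec in audit_records:
        # Count each record toward the global metric and its group metric.
        for key in ("overall", rec["demographic"]):
            totals[key] += 1
            if rec["human_judgment"] == "adequate":
                correct[key] += 1
    return {k: correct[k] / totals[k] for k in totals}

# Hypothetical audit records: each pairs a bot response with a human judgment.
records = [
    {"demographic": "women_hausa", "human_judgment": "adequate"},
    {"demographic": "women_hausa", "human_judgment": "inadequate"},
    {"demographic": "men_english", "human_judgment": "adequate"},
    {"demographic": "men_english", "human_judgment": "adequate"},
]
print(segmented_accuracy(records))
# {'overall': 0.75, 'women_hausa': 0.5, 'men_english': 1.0}
```

A gap between group metrics like these is exactly the kind of signal the audit tool is meant to surface; the same judged records can then be fed back as training data for the NLP models.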

University of California, Berkeley in partnership with Texas A&M University and RappiCard Mexico: Improving Access to Credit with Gender-differentiated Credit Scoring Algorithms

Traditional credit scoring models tend to pool data from men and women and take a gender-blind approach, which can lead to denial of credit for women or place women at a disadvantage when seeking access to credit. The University of California, Berkeley and Texas A&M University will partner with RappiCard Mexico, the fintech arm of the Rappi delivery platform, to develop a credit-scoring model that incorporates gender differentiation and to assess whether it improves loan approval rates for women and increases fairness and credit-allocation efficiency. This collaboration will combine novel “digital footprint” data with repayment data and machine learning to build gender-differentiated credit-scoring algorithms. The research will also shed light on whether assessing creditworthiness using non-traditional sources of data, such as economic behavior and network interactions, via gender-differentiated credit scoring methods can benefit both borrowers and lenders. It will inform policymakers and practitioners as to whether gender-blind or gender-differentiated algorithms are more effective at expanding formal credit for women and preventing discrimination against women applying for credit. The study’s findings will be shared with RappiCard, which may apply the algorithm in its digital credit products, and with other fintech partners considering gender in credit allocation.
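To illustrate the underlying intuition (this is a toy sketch with invented scores and cutoffs, not the researchers' or RappiCard's actual methodology), consider how a single pooled score threshold can under-approve women relative to group-calibrated thresholds when the score distributions differ by gender:

```python
# Toy illustration (hypothetical data): a pooled, gender-blind cutoff
# versus group-calibrated cutoffs in a simple score-threshold model.
def approval_rate(scores, threshold):
    """Fraction of applicants whose score meets or exceeds the cutoff."""
    return sum(s >= threshold for s in scores) / len(scores)

# Invented score distributions; suppose women's scores run lower on
# average even though their repayment behavior (not shown) is comparable.
scores_women = [0.55, 0.60, 0.62, 0.70]
scores_men = [0.65, 0.70, 0.75, 0.80]

pooled_threshold = 0.68                          # one cutoff for everyone
differentiated = {"women": 0.58, "men": 0.68}    # group-calibrated cutoffs

print(approval_rate(scores_women, pooled_threshold))        # 0.25
print(approval_rate(scores_women, differentiated["women"]))  # 0.75
```

Whether such differentiation actually improves fairness and allocation efficiency on real repayment data is precisely the empirical question this research sets out to answer.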

AidData in partnership with CDD-Ghana: Evaluating Gender Bias in AI Applications using Household Survey Data

Poverty estimates generated by AI models trained on household survey data are increasingly used in research, evaluation, and decision-making. AidData, a research lab at William & Mary, in partnership with the Ghana Center for Democratic Development (CDD-Ghana), aims to evaluate the impact of gender bias on poverty estimates generated using AI and USAID’s Demographic and Health Surveys (DHS) data, in order to inform the AI developers, researchers, development organizations, and decision-makers who produce or use such estimates. The project combines AidData’s expertise in artificial intelligence, geospatial data, and household surveys with CDD-Ghana’s knowledge of the local context to produce a novel public good that will elevate equity discussions surrounding the growing use of AI in development. The grant will produce an assessment of the impact of gender bias on poverty estimates for Ghana as an initial case study, along with an open-source repository of code and methods for replicating the assessment in additional countries. The case study and code will be shared with key organizations in Ghana to explore current uses of AI, poverty estimates, and concerns about gender bias; to encourage deeper consideration of potential bias in the AI models and data used for development applications; and to provide a practical means for others to evaluate bias in their own applications.

Itad in partnership with WinDT, PIT, and Athena Infonomics: Preventing and Mitigating Gender Bias in AI-based Early Alert Systems in Higher Education

Itad, in partnership with Women in Digital Transformation (WinDT), PIT Policy Lab, and Athena Infonomics, will work with the Ministry of Education of the Mexican state of Guanajuato on a pioneering initiative called Educational Paths to identify and mitigate gender bias within a newly created AI-based early alert system. The system, developed in partnership with the World Bank, seeks to identify at-risk students in higher education and provide them with support, thereby improving retention and graduation rates. The Ministry of Education will train the algorithm on government data to generate preliminary findings on its performance. The grant will support the rollout of Educational Paths, identifying and mitigating potential gender-based bias in the databases and in algorithmic performance through:

  • The development of an Ethical Guide and Checklist for decision-makers to ensure responsible and equitable deployment of AI systems; and
  • The use of IBM’s AI Fairness 360 toolkit to detect bias, with the toolkit provided to the Ministry of Education.

The findings will be documented in a case study, with the desired outcome that the Ethical Guide and Checklist become tools the local Ministry of Education uses to inform the next iterations of its AI-based early alert system in higher education. In addition, the learnings from the case study will be presented in a workshop with stakeholders from the Ministry of Education in the state of Tamil Nadu, India, to explore the potential for replicating the tools created and the lessons learned in the Mexico experience. By applying the AI Fairness 360 toolkit in a development context, as opposed to large-scale initiatives in developed nations, the project aims to share learnings on how the toolkit can measure and mitigate bias in datasets specific to low- and middle-income countries (LMICs).
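For readers unfamiliar with the toolkit, the sketch below hand-implements two of the group-fairness metrics that AI Fairness 360 provides — statistical parity difference and disparate impact — on invented early-alert outcomes. It does not use the toolkit's own API, which wraps datasets and offers many more metrics and bias-mitigation algorithms; the data and group labels here are purely illustrative:

```python
# Minimal, hand-rolled versions of two group-fairness metrics that
# IBM's AI Fairness 360 toolkit computes. Values of 1 denote the
# favorable outcome (here, being flagged for academic support).
def favorable_rate(outcomes):
    """Fraction of individuals receiving the favorable outcome."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """Rate gap between groups; 0 means parity, negative favors privileged."""
    return favorable_rate(unprivileged) - favorable_rate(privileged)

def disparate_impact(unprivileged, privileged):
    """Rate ratio between groups; 1 means parity, < 0.8 is a common flag."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Invented early-alert outcomes for two groups of students.
flags_women = [1, 0, 0, 0]   # 25% flagged for support
flags_men = [1, 1, 0, 0]     # 50% flagged for support

print(statistical_parity_difference(flags_women, flags_men))  # -0.25
print(disparate_impact(flags_women, flags_men))               # 0.5
```

Metrics like these make a bias claim auditable: a disparate impact well below 1 would suggest the alert system under-serves women, which is the kind of finding the Ethical Guide and Checklist are meant to help decision-makers act on.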

What’s Next: The Journey Towards Implementation

Through these diverse concepts spanning geographic regions and types of approaches—from improving AI fairness tools and data systems to strengthening the evidence base for AI fairness in development contexts to developing and testing more equitable algorithms—the winners of the Equitable AI Challenge will help USAID and its partners better address and prevent gender biases in AI systems in countries where USAID works. 

Over the next year, these awardees will work with USAID and its partners to implement their approaches and generate new technical knowledge, lessons learned, and tested solutions for addressing gender bias in AI tools. Through this implementation phase, USAID seeks to foster a diverse and more inclusive digital ecosystem where all communities can benefit from emerging technologies like AI, and—most importantly—ensure that no members of these communities are harmed by them. This effort will inform USAID and the development community, providing a greater understanding of AI fairness tools and approaches, what they capture and what they leave out, and what tactics are needed to update, adapt, and socialize these tools for broader use.

Stay tuned as we share the ongoing progress of the challenge winners and build a stronger community of practice to learn together and work towards a more equitable AI-powered future.