How USAID, Local Government, and the Private Sector Mitigated AI Gender Bias in One of Mexico’s Leading Education Pilots


The use of artificial intelligence (AI) systems to search, sort, and analyze data has become increasingly common among governments looking to improve the delivery of financial, health, and education services to their citizens. For the Secretariat of Education of the State of Guanajuato (SEG) in Mexico, a digital approach was particularly urgent: more than 40,000 students drop out of school in the state every year.

In partnership with the World Bank via the Educational Trajectories initiative, the Secretariat created an AI-based early alert system, the Early Action System for School Permanence (SATPE), aimed at improving school retention and graduation rates by identifying and then supporting at-risk students. While SATPE serves as a meaningful use case of how government agencies can use AI to address pressing social challenges, such AI-based tools are vulnerable to entrenching existing biases and producing discriminatory outcomes.

With a grant from USAID’s Equitable AI Challenge, a consortium of partners set out to identify and mitigate gender-inequitable outcomes within this complex, wide-reaching, and influential education tool.

Examining Gender Bias in an AI System

To address potential gender bias in the SATPE system, Itad, in partnership with Women in Digital Transformation, PIT Policy Lab, and Athena Infonomics, stressed the need to incorporate frameworks that guide the Secretariat’s actions beyond their initial focus on privacy and personal data protection. The consortium then strengthened the Secretariat staff’s technical expertise in the ethical, responsible, and inclusive use of AI in the public sector through a series of comprehensive workshops.

While government stakeholders were keen to leverage data and AI to inform their educational policies and practices, Itad and partners worked directly with the Secretariat’s technical team to identify AI data biases that could negatively impact students. By collaborating with SATPE system implementers, Itad and partners first obtained access to anonymized data used to train the system’s AI model. Leveraging IBM’s open-source AI Fairness 360 Toolkit, the consortium identified a critical gender bias that would have prevented the model from accurately identifying up to 4% of girls at risk of interrupting their studies. In short, 4 out of 100 at-risk girls would have missed the help they needed to stay in school. Based on these findings, the consortium team took steps to mitigate this bias within the Secretariat’s databases before the data was fed into the SATPE AI model.
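The consortium’s exact workflow has not been published, but the minimal sketch below shows what a dataset-level bias check and a standard pre-processing mitigation can look like in AI Fairness 360. The file name, the column names (`sex`, `dropped_out`), and the 0/1 group coding are hypothetical stand-ins for the Secretariat’s anonymized records, and reweighing is one common mitigation technique rather than the consortium’s confirmed method.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical anonymized student records; AIF360 expects numeric features.
df = pd.read_csv("anonymized_students.csv")

# Wrap the dataframe so AIF360 knows the label and the protected attribute.
# The "positive" class here is the at-risk label, since being identified
# is what triggers support for the student.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["dropped_out"],        # 1 = student interrupted studies
    protected_attribute_names=["sex"],  # assumed coding: 1 = boys, 0 = girls
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Dataset-level fairness metrics, computed before any model is trained.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
# Ratio of at-risk label rates across groups; far from 1.0 = skew that
# can propagate into the trained model.
print("Disparate impact:", metric.disparate_impact())
# Difference of the same rates; 0.0 = parity.
print("Statistical parity difference:", metric.statistical_parity_difference())

# One standard pre-processing mitigation: reweigh training examples so
# label/group combinations are balanced before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_mitigated = rw.fit_transform(dataset)
```

Reweighing adjusts instance weights rather than altering records, which suits anonymized administrative data. Whatever mitigation is chosen, the point is the one the consortium made: correct the skew in the data before it reaches the model.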

Developing an AI Ethical Guide and Checklist

The potential for AI systems to reinforce existing biases, replicate privacy violations, or simply exclude populations propelled the consortium to develop an Ethical Guide and Checklist to ensure policymakers in Guanajuato understood the risks of AI. The AI Ethics Guide presents a broad overview of what AI is, the ethical concerns it creates, and how those concerns can be addressed at national, sub-national, and municipal levels. To illustrate ethics concerns, the guide presents several case studies and provocative questions that allow decision-makers to reflect on the responsible use of AI in government systems. To support knowledge building, the guide also includes a glossary of AI terminology derived from USAID learning studies and a comprehensive literature review of varied country approaches to using AI for public services.

The Checklist for AI Deployment is a separate yet interconnected tool for policymakers and technical teams preparing to deploy, or already deploying, AI systems. The document seeks to give policymakers starting points for building ethical AI systems and to prompt technical experts to reflect on whether the right ethical guardrails are in place for an AI-based approach. Leading users through six phases, from regulatory foundations to the desired functionality of an AI system, the checklist contains questions on regulations, business processes, data collection and use, system design, and decision-making for ethical AI deployment in different situations and contexts.

From Pilot to Policy Recommendations

Reflecting on SATPE’s implementation, the consortium offered actionable policy recommendations for decision-makers looking to mitigate biases and incorporate gender perspectives into AI systems. These recommendations can be adopted by a diverse range of organizations looking to explore AI and data policies in sectors like education, health, and financial services.

Phase 1 | Self-Assessment and Reflection: Starting from the design stage of AI-based interventions, teams should ask themselves what could go wrong. Both policymakers and design teams should consult stakeholders involved in the AI system’s development and implementation to consider their concerns, develop a risk analysis methodology, document processes and learnings, and weigh institutional capacities to execute such projects.

Phase 2 | Paving the Way: To standardize best practices when working with data and AI, organizations must establish decision criteria that align with ethics and human rights. For example, government agencies can adopt decision criteria in line with their country’s existing principles. Organizations can also create a working group in charge of systematizing and designing the ethical criteria for interventions related to data and AI.

Phase 3 | Involvement and Transparency: Populations affected by AI-based systems must be consulted and included in the design and implementation of AI projects. By creating consultation mechanisms open to civil society and other specific interest groups, including large companies, small and medium enterprises, and professional organizations, diverse stakeholders can understand and provide feedback on AI systems that will impact them.

Phase 4 | Strengthening Existing Instruments: Organizations should coordinate existing efforts and incorporate an ethical perspective on AI and the protection of personal data as needed to build a solid foundation for ethical, responsible, and trustworthy policies powered by AI.

Phase 5 | Broader Sensitivity: The precision of diagnoses and the effectiveness of interventions can only be improved by measuring and evaluating results. For example, organizations should consider data fairness metrics, such as those provided by the AI Fairness 360 toolkit and similar tools, to identify when data biases are present and correct them before feeding data into AI models. Finally, organizations should consider verification mechanisms for AI-based tools that go beyond data and algorithms, keeping a human as the ultimate decision-maker.
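As an illustration of such a verification mechanism, the sketch below extends the earlier AI Fairness 360 example to compare a trained model’s predictions against the true labels. The `model_predictions` array is a hypothetical output of whatever model was trained, and the `dataset`, `privileged`, and `unprivileged` objects are reused from the earlier sketch; the metrics themselves are standard AIF360 checks for exactly the failure mode found in SATPE’s data, where at-risk girls were missed more often.

```python
import numpy as np
from aif360.metrics import ClassificationMetric

# "dataset" holds the true labels; copy it and swap in the model's
# hypothetical 0/1 predictions (AIF360 stores labels as an (n, 1) array).
dataset_pred = dataset.copy(deepcopy=True)
dataset_pred.labels = np.asarray(model_predictions).reshape(-1, 1)

clf_metric = ClassificationMetric(
    dataset,        # ground truth
    dataset_pred,   # model output
    unprivileged_groups=unprivileged,
    privileged_groups=privileged,
)

# A positive false negative rate difference means at-risk girls are missed
# more often than at-risk boys: the failure mode identified in SATPE.
print("False negative rate difference:",
      clf_metric.false_negative_rate_difference())
print("Equal opportunity difference:",
      clf_metric.equal_opportunity_difference())
```

Running such checks on every retrained model, and routing flagged disparities to a human reviewer rather than acting on the model output automatically, is one way to keep a human as the ultimate decision-maker.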

For many of these recommendations, the AI Checklist and AI Ethics Guide can be valuable tools for identifying risks and areas of opportunity, as well as for working with stakeholders.

The Path Forward

With the Secretariat set to adopt SATPE as an essential component for informing Guanajuato’s educational policy in 2023, local officials, along with similar government bodies, must consider the implications of using AI systems to address social issues. Building on the engagement in Mexico, Itad and its partners shared these implications with other government representatives in Latin America. The international consortium then presented critical findings to state government leaders from Uttar Pradesh, Andhra Pradesh, Telangana, and Tamil Nadu in India, encouraging replication of these AI approaches in other regions while sharing lessons learned from previous iterations.

Ultimately, the Preventing and Mitigating Gender Bias in AI-based Early Alert Systems in Education grant, made possible through USAID’s Equitable AI Challenge, produced crucial resources that will allow government bodies to weigh the benefits and risks of using AI to improve the delivery of public services. By using IBM’s AI Fairness 360 toolkit, the AI Ethics Guide, and the AI Checklist in a development context, government bodies, organizations, and technical teams can better mitigate bias in low- and middle-income country datasets while ensuring their AI projects become more equitable, inclusive, and transparent.

As SEG and other government bodies consider expanding this work by pulling higher-risk data, including personal information and security data, open and trusting relationships with diverse groups who impact and are impacted by AI systems will also remain critical to ensuring that these influential partners continue down an equitable path.


Author: Alexander Riabov, Senior Communications Specialist, DAI Digital Frontiers