UNFPA warns of AI ‘accountability gap’, risk to women at India Summit

At the India Impact AI Summit, UNFPA’s Andrea Wojnar warned of a growing ‘accountability gap’ in AI. She said biased systems could deepen inequalities for women and girls, linking online safety to the digital economy’s potential.

Andrea Wojnar, Resident Representative for the United Nations Population Fund (UNFPA) in India, has raised concerns at the India Impact AI Summit 2026 about what she described as a widening “accountability gap” in the age of artificial intelligence, warning that unequal and biased systems risk deepening existing inequalities, particularly for women and girls.


AI’s Impact on Safety and Economic Participation

Speaking on the evolving role of artificial intelligence in society, Wojnar emphasized that while AI presents enormous opportunities, it also reshapes the landscape of risk. “AI is reshaping risks, but possibilities also. AI will influence safety,” she said, underlining the dual nature of rapidly advancing technologies.

When people, especially women and girls, feel unsafe, she added, online participation drops, trust in AI-enabled services erodes and the digital economy falls short of its potential.

Accountability, Trust, and Reputational Risk

According to Wojnar, the accountability gap in AI systems is not neutral. It reflects structural inequalities that can disproportionately affect those already marginalized. She stressed that questions of responsibility — who designs, regulates, deploys and benefits from AI — remain unevenly addressed across sectors and geographies.

A central theme of her remarks focused on trust. Beyond ethics and governance, she framed trust as a core economic issue. “But trust is also an economic issue, and for those of you who attended our session in December with our private sector tech partners, you’ll know that when people, especially women and girls, feel unsafe, online participation drops and the promise of the digital economy narrows,” she said.

Her comments suggest that digital safety is not merely a human rights concern but also a determinant of economic growth. When online spaces feel hostile or unsafe, participation declines. This withdrawal has ripple effects: fewer users, reduced engagement, and ultimately a contraction in the potential of digital markets.

Wojnar further cautioned that mistrust in AI-enabled services can slow technological adoption. “When users don’t trust AI-enabled services, adoption slows, reputational risks grow and the digital economy does not reach its potential,” she said. The implications, she indicated, extend to both public institutions and private sector actors. Companies investing heavily in AI innovation may find that technical sophistication alone does not guarantee uptake. Without safeguards, transparency and accountability, reputational risks can escalate, limiting the very growth the digital economy promises.

A Call for Ethical and Inclusive Transformation

Her remarks align with broader global discussions about ethical AI governance, data protection and inclusive digital transformation. As AI systems become embedded in health care, education, finance and public services, ensuring they operate fairly and safely is increasingly seen as foundational to sustainable development.

For UNFPA, whose mandate centers on reproductive health, gender equality and population dynamics, the intersection of AI, safety and gender equity is particularly significant. Wojnar’s intervention underscores a growing recognition that digital transformation must be accompanied by deliberate efforts to close accountability gaps — or risk reinforcing the inequalities it has the potential to solve. (ANI)

(Except for the headline, this story has not been edited by Asianet Newsable English staff and is published from a syndicated feed.)
