AI for all: Creating a safe haven for women

Reports from Harvard Business Review have highlighted an alarming truth: AI can perpetuate gender biases, reflecting our own societal prejudices.

By Storyboard18 | November 8, 2023, 8:40 am
(Representative image by Steve Johnson via Unsplash)

By Geraldine Wu

In today’s world, artificial intelligence (AI) is everywhere. It’s a tool for work, a source of endless ideas, and a constant presence in our lives.

With remarkable strides in cutting-edge tools and apps built to bolster women’s safety, AI can grant women a sense of security and independence.

AI-powered tools for real-time threat assessment and emergency coordination not only support personal safety, but also give authorities data-driven insights to understand and address issues like harassment and violence.

Yet, there’s a catch.

What kind of gender biases exist in AI?

It is striking how a technological revolution like AI can also bring to light our society’s deep-rooted biases. Reports from Harvard Business Review have highlighted an alarming truth: AI can perpetuate gender biases, reflecting our own societal prejudices.

One notable example highlighted by the Harvard Business Review sheds light on the issue within natural language processing (NLP) used by virtual assistants like Amazon’s Alexa and Apple’s Siri.

These systems often mirror biased associations, such as linking ‘man’ with ‘doctor’ and ‘woman’ with ‘nurse’. It’s a virtual word association game, reflecting outdated and stereotypical views, far from the diverse and progressive society we aspire to build.
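
To make that word-association game concrete, here is a minimal sketch, not taken from the Harvard Business Review reporting, that probes a small pretrained GloVe embedding model for gendered occupation associations. It assumes gensim and its downloadable "glove-wiki-gigaword-50" vectors are available; the word list is purely illustrative.

# Sketch: probing word embeddings for gendered occupation associations.
# Assumes gensim is installed and can download the small GloVe model below.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained GloVe word vectors

for occupation in ["doctor", "nurse", "engineer", "teacher"]:
    sim_man = vectors.similarity("man", occupation)
    sim_woman = vectors.similarity("woman", occupation)
    closer_to = "man" if sim_man > sim_woman else "woman"
    print(f"{occupation}: man={sim_man:.3f}, woman={sim_woman:.3f} -> closer to '{closer_to}'")

If the learned vectors place "nurse" closer to "woman" and "doctor" closer to "man", the stereotype described above is baked into the representation itself.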

But how does this gender bias seep into AI’s veins? The answer lies in the very core of machine learning. Picture this: the data it learns from, if insufficiently diverse or inclusive, leaves gaping holes in its understanding, paving the way for biased errors. The fault isn’t solely in technology but also in the very hands that guide it—humans. Our inherent biases find their way into the algorithms and datasets, tainting the AI’s judgment and responses.

One example would be AI-powered hiring tools favouring male candidates over equally qualified female applicants. This bias is derived from historical hiring data, where some industries or jobs have been male-dominated. AI trained on such data may perpetuate the bias by recommending more male candidates.
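
To illustrate how that historical skew can propagate, here is a hypothetical sketch, not drawn from any real hiring system: it trains a simple screening model on synthetic data in which past decisions favoured men, then asks it to score two equally qualified candidates.

# Hypothetical sketch: a screening model trained on skewed historical hiring data.
# All data and variables are synthetic and illustrative, not from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)          # 0 = female, 1 = male
skill = rng.normal(0, 1, n)             # skill is distributed equally across genders
# Simulate historical decisions that favoured men even at equal skill levels.
hired = (skill + 0.8 * gender + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

test_skill = 0.4                          # identical qualifications for both candidates
print("P(hire | male):  ", model.predict_proba([[1, test_skill]])[0, 1])
print("P(hire | female):", model.predict_proba([[0, test_skill]])[0, 1])

Because the model never sees anything but the biased outcomes, it reproduces them: the male candidate receives a noticeably higher score for the same skill level.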

Image recognition, language models and text-generating AI systems show the same pattern. Labelling a woman in a kitchen as a “cook” while identifying a man in the same setting as a “chef” is one example. These inaccuracies stem from the biased societal norms ingrained in the data these systems learn from.

What drives bias in AI?

Let’s talk about training datasets. When AI models are trained on data in which groups such as women are under-represented, their mistakes are unsurprising. Datasets that fail to adequately represent diverse demographics lead to biased models.

There’s also the ripple effect of human-generated labels finding their way into machine-learning models. These biases, often unintended, sneak into the algorithms, causing inaccurate classifications of gender.

Moreover, the modelling techniques and the inputs also contribute to the bias. For instance, many speech recognition systems underperform on female voices because they are trained primarily on male voices.
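
One early warning sign of this kind of skew is a simple count of how the training data is distributed by speaker gender before any model is trained. The sketch below assumes a CSV manifest with a "speaker_gender" column; the file and column names are illustrative, not from the article.

# Hypothetical sketch: checking gender balance in a speech-training manifest.
import csv
from collections import Counter

counts = Counter()
with open("training_manifest.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["speaker_gender"].strip().lower()] += 1

total = sum(counts.values())
for gender, n in counts.items():
    print(f"{gender}: {n} clips ({n / total:.1%})")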

Fed such biased information, AI systems might treat leadership roles as male-centric and downplay women’s suitability for them. Beyond that, these biased models can overlook issues affecting under-represented genders, such as people in the LGBTQ+ community.

How do we deal with biases in AI?

Diverse datasets

The first step? It’s all about data. To diminish biases in AI, women should be given more representation in AI training, both in the data and among the people who shape it, since they know and understand gender stereotypes first-hand.

Humane approach

Secondly, it’s vital to be mindful of the human touch in crafting these technologies. The people behind AI development need to represent a diverse array of backgrounds, viewpoints, and experiences.

Evaluation

It’s essential to routinely audit these AI systems, challenging, identifying and rectifying any biases that might have seeped in during development.
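
As one concrete example of such an audit, and only as a minimal sketch rather than a full fairness toolkit, the check below compares the rate of positive predictions a model makes for different gender groups; the group labels and data are illustrative.

# Sketch of a simple bias audit: difference in positive-prediction rates
# between two groups (sometimes called the demographic parity gap).
import numpy as np

def selection_rate_gap(predictions, groups, group_a="female", group_b="male"):
    """Positive-prediction rate of group_b minus that of group_a."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rate_a = predictions[groups == group_a].mean()
    rate_b = predictions[groups == group_b].mean()
    return rate_b - rate_a

preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["male", "male", "male", "male", "female", "female", "female", "female"]
print("Selection-rate gap (male - female):", selection_rate_gap(preds, groups))

A gap close to zero does not prove the system is fair, but a large gap is a clear signal that something in the data or the model needs to be challenged and corrected.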

Awareness and transparency

Education plays a pivotal role, too. Raising awareness of gender biases in AI, and teaching those involved in AI development how to find and reduce them, helps build a more inclusive digital landscape.

Transparency in AI systems is also key to handling the issue. Users and developers alike should have clear insight into how these AI models work. It’s about understanding the ‘why’ and ‘how’ behind the AI’s decisions.

Continuous improvement

Lastly, continuous improvement is fundamental to tackling gender biases. It’s about a commitment to constant learning, refining, and striving for an AI landscape that truly serves and represents everyone, leaving behind the shackles of stereotypes.

In our everyday lives, women often adapt to the world as it is, even though that world was built primarily for men. The same behaviour extends into the digital world, shaping technology that may not serve everyone equally. That’s why it’s crucial to raise awareness of such issues. We need to actively include women as equal counterparts from the very inception of AI to ensure that the solutions cater to the needs of all.

By taking these steps, we pave the way for a society that’s more inclusive and welcoming, not just to women but to every single individual. It’s about crafting a world where technology isn’t just neutral but works equitably for everyone, reflecting the diverse blend of our society.

The article is written by Geraldine Wu, Product Lead at coto, a women-exclusive, web3-based social community platform.
