Submitted
Antiracist Technology in the US

KOSA AI

Team Leader
Layla Li
Solution Overview
Solution Name:
KOSA AI
One-line solution summary:
Bias detection and mitigation technology for enterprise AI and ML
Pitch your solution.

Artificial intelligence (AI) constitutes an essential part of today’s economy, but it is riddled with biases that unfairly disadvantage BIPOC individuals, women, and other minority groups. As a result, many people cannot access the credit they need, do not receive the healthcare they qualify for, or are unfairly rejected from jobs. At KOSA AI, we have developed a proprietary algorithmic auditing solution that helps businesses identify and minimise the biases inherent in their AI-powered decision-making models. Our solution is sector-agnostic and has multiple use cases, including fraud detection, insurance, healthcare delivery, credit risk assessment, customer lifetime value, and talent acquisition. By identifying and mitigating biases in AI and ML processes across industries, our solution will reduce the hidden inequalities that affect millions of people today and enable everyone to access the services and jobs they want, irrespective of race, gender, age, or other status.



Film your elevator pitch.
What specific problem are you solving?

AI constitutes an essential part of today’s economy and powers thousands of decision-making tools. However, AI is riddled with biases that disproportionately impact women and BIPOC in applications as varied as healthcare, financial services, policing, and hiring. Facial recognition technology has been shown to detect male faces with 99% accuracy compared to 65% for Black women; similarly, the algorithm behind the Apple Card was reported to extend credit limits to men up to 20x higher than to women with comparable finances. Last year, a healthcare algorithm sold by Optum was found to exhibit drastic racial bias, effectively denying extra care to roughly half of the qualifying Black patients.

Numerous factors cause bias in AI. Bias may arise from historic prejudices that are reflected in the training datasets and fed into the model (built-in bias): when a credit provider uses employment history and prior access to credit as factors to determine creditworthiness, it automatically imports racial biases into its credit risk assessment tool. Bias can also stem from wider configuration issues in the model itself, which are then amplified as the algorithm evolves (AI-induced bias).
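To make the built-in case concrete, the sketch below shows one common way such bias is surfaced in practice: comparing a model’s approval rates across demographic groups against the “four-fifths rule” for disparate impact. This is a generic Python illustration with invented numbers, not KOSA AI’s proprietary method.

```python
# Illustrative sketch only (not KOSA AI's proprietary method): surfacing
# built-in bias by comparing a credit model's approval rates across groups
# against the "four-fifths rule" for disparate impact. All numbers invented.

approvals = {
    # group: (approved applicants, total applicants) -- hypothetical data
    "group_a": (720, 1000),
    "group_b": (450, 1000),
}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
reference = max(rates.values())  # approval rate of the most-favoured group

for group, rate in rates.items():
    ratio = rate / reference
    status = "FLAG: possible disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

Here group_b’s approval rate is only 62% of group_a’s, well under the 0.8 threshold, so a fair-lending review would flag the model even though no protected attribute appears among its input features.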

AI biases could be mitigated with a reliable detection solution, but today the only options available are open-source libraries and expensive consulting services.

What is your solution?

KOSA AI’s algorithmic auditing software enables large enterprises in regulated sectors, such as healthcare, financial services, and government, to understand and mitigate bias risks within their AI systems. We offer customers a fully automated solution that seamlessly integrates into a company’s existing AI infrastructure. Our multi-stakeholder product comprises four steps that provide support across the whole ML lifecycle: (1) it assesses and mitigates the biases in current ML processes; (2) it audits the model to assess the human impact and automate compliance checks (including adherence to the Digital Services Act) before deployment; (3) it explains why the model behaves the way it does; and (4) it builds a model monitor which tracks drift and malfunctions and allows developers to fix vulnerabilities.
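As a rough illustration of what steps (1) and (4) involve, the sketch below implements two standard checks: a demographic-parity gap for bias assessment and the population stability index (PSI) for drift monitoring. The function names, thresholds, and data are our own illustrative assumptions, not KOSA AI’s actual implementation.

```python
# Minimal sketch, assuming standard fairness and drift metrics; not KOSA AI's
# implementation. Step 1: demographic-parity gap. Step 4: PSI drift check.

import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Step 1: largest difference in positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def population_stability_index(expected, actual, bins=10):
    """Step 4: PSI between baseline (training-time) and live score distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
groups = rng.choice(["a", "b"], size=1000)
y_pred = (rng.random(1000) + (groups == "a") * 0.15) > 0.5  # biased toward "a"

# Conventional rules of thumb: gap > 0.1 warrants review; PSI > 0.2 signals drift.
print("parity gap:", demographic_parity_gap(y_pred, groups))
print("PSI:", population_stability_index(rng.normal(0, 1, 1000),
                                         rng.normal(0.3, 1, 1000)))
```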

We realise that building responsible AI is a team effort; this is why we have developed tools for all stakeholders, from the executive team to product managers and developers. The software outputs an evaluation that helps the technical development team identify and understand the bias within their systems, and a quantifiable financial assessment that helps non-technical stakeholders grasp the business opportunity lost to bias.
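To show what such a financial assessment might boil down to, here is a back-of-the-envelope sketch with entirely invented figures (not numbers from KOSA AI): creditworthy applicants a biased model wrongly rejects, multiplied by the expected lifetime value of a customer.

```python
# Hypothetical illustration (all numbers invented) of translating a fairness
# gap into a missed-opportunity figure for non-technical stakeholders.

qualified_rejected_per_year = 1200  # assumed: creditworthy applicants the model denies
avg_lifetime_value = 850.0          # assumed: revenue per approved customer, in USD

missed_revenue = qualified_rejected_per_year * avg_lifetime_value
print(f"Estimated annual missed revenue from bias: ${missed_revenue:,.0f}")
# -> Estimated annual missed revenue from bias: $1,020,000
```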

Who does your solution serve, and in what ways will the solution impact their lives?

KOSA AI’s mission is to make technology inclusive of all people. We work to support SDG 10 (Reduced Inequalities), in particular Target 10.2: by 2030, empower and promote the social, economic and political inclusion of all, irrespective of age, sex, disability, race, ethnicity, origin, religion or economic or other status. As seen above, the wide use of AI decision-making technology in financial services, healthcare, and the public sector fuels inequality for various minority groups. Because of biases inherent in AI-powered decision-making tools, BIPOC are denied access to quality healthcare, insurance and financial services, fair justice and policing, job opportunities, and more. The population we target therefore comprises all individuals who are negatively impacted by biases present in AI and ML, especially BIPOC.

Due to the nature of our solution, we work directly with large enterprises rather than with our target beneficiaries. However, we do everything we can to ensure that our software delivers the impact we aim to achieve. For instance, we conducted a proof of concept in healthcare applications that was shown to increase BIPOC healthcare service utilisation by 30%. In addition, we work with a number of research institutions to increase knowledge and awareness of AI bias mitigation and ethical AI; we want to make sure that AI-driven inequalities are better understood so that solutions can be developed that respond to the needs of those impacted.

Finally, we leverage our distributed model to improve our software and increase our impact. Through our presence in Africa we can (1) access diverse training datasets, which we can share with companies outside the continent that require more representative data; (2) build a data bank of more diverse datasets; and (3) increase the diversity of our team by mobilising African talent.

Which dimension of the Challenge does your solution most closely address?
  • Actively minimize human and algorithmic biases, particularly in healthcare, education, and workplace settings.
Explain how the problem you are addressing, the solution you have designed, and the population you are serving align with the Challenge.

The Challenge seeks solutions that will help end the disparities between BIPOC and white communities in the US. It calls for technology- and innovation-based solutions that use data science for positive change to support and liberate BIPOC communities. Our solution is a technology-based innovation that enables companies to identify and correct the biases in their AI and ML that prevent BIPOC from accessing the services, products, and jobs they need. Through our technology, credit agencies can build fair credit profiles and healthcare providers can deliver adequate healthcare, all while improving their hiring practices to include and support BIPOC.

In what city, town, or region is your solution team headquartered?
Amsterdam, Netherlands
What is your solution’s stage of development?
  • Pilot: An organization deploying a tested product, service, or business model in at least one community.
Explain why you selected this stage of development for your solution.

We launched a minimum viable product (MVP) earlier this year and are currently piloting our solution with 5 partners and customers across healthcare, technology, services, government, and education. We expect to convert at least 2 of our 5 piloting organisations into paying customers and plan to fully launch our product by the end of 2021. We currently have 5 employees at KOSA AI, including the Co-Founders, and plan to grow the team with enterprise sales specialists who will help us expand our client base in the next stage of development.

Who is the Team Lead for your solution?
Layla Li
More About Your Solution
About Your Team
Your Business Model & Partnerships
Partnership & Prize Funding Opportunities
Solution Team:
Layla Li
Co-founder & CEO
Sonali Sanghrajka
Co-Founder & CCO