Solution Overview

Solution Name:

KOSA AI - future-proofing AI

One-line solution summary:

KOSA AI builds multi-stakeholder AI governance software targeted at detecting and mitigating AI bias throughout the ML lifecycle.

Pitch your solution.

Artificial intelligence (AI) constitutes an essential part of today’s economy, but it is riddled with biases that unfairly disadvantage historically underserved groups. In an age where workplace practices are becoming increasingly digital, the proliferation of workforce management systems such as applicant tracking systems (ATS) that scan resumes increases the risk of AI bias in hiring, retention, and promotion practices across organisations.

At KOSA AI, we have developed a proprietary automated responsible AI system (ARAIS) to help businesses identify and minimise biases inherent in their AI-powered decision-making models. Our solution is sector-agnostic and has multiple use cases including ATS. By identifying and mitigating biases in AI and ML processes across industries, our solution will reduce the hidden inequalities that today affect millions of people and will enable equitable access to jobs, irrespective of race, gender, age or other social identifier. 

Film your elevator pitch.

What specific problem are you solving?

In today’s economy, AI powers thousands of decision-making tools, yet its biases against historically underserved groups such as women and other minorities are often misunderstood and underestimated. These biases surface in human resources applications across multiple industries.

KOSA AI is working on detecting and mitigating bias in ATS use cases, where biased hiring algorithms that screen for credentials rather than capabilities are keeping an estimated 27 million people from finding new jobs. 75% of employers rely on AI tools to scan resumes, and as organisations adapt to accelerating technological change, it becomes ever harder for the workforce to keep their skills up to date. This bias is particularly pervasive in hiring algorithms. For instance, Amazon was forced to abandon the development of a proprietary hiring system after it repeatedly discriminated against women: it penalised resumes that included the word “women’s” and downgraded graduates of two all-women’s colleges in the US. This was a result of Amazon’s computer models being trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period, most of which came from white men.

What is your solution?

KOSA AI helps organisations improve hiring, retention, and promotion practices by mitigating the biases present in AI decision-making tools. Our proprietary AI governance software helps organisations recruit and retain a more diverse and inclusive workforce, which has been shown to provide countless benefits to businesses, from identifying missed revenue opportunities through increased customer trust and better product development to reducing costly litigation risks.

Our automated solution seamlessly integrates into an organisation’s existing AI infrastructure. Our product comprises four steps that provide support across the whole ML lifecycle: (1) it assesses and mitigates the biases in current ML processes; (2) it audits the AI model to assess the human impact and automate compliance checks; (3) it explains the model’s behaviours; and (4) it adds a monitoring module that tracks drift and malfunctions and allows developers to fix vulnerabilities. Building responsible AI is a team effort; therefore we have developed tools for all stakeholders, from the executive team to the developers. Our software outputs both an evaluation for the technical development team to understand biases within their systems and a quantifiable financial assessment for non-technical stakeholders to grasp the missed opportunities for the business.
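
As a concrete illustration of step (1), a bias assessment can be thought of as a comparison of outcome rates across demographic groups. The sketch below is a minimal, hypothetical example using the common four-fifths (disparate impact) rule; the group labels, data, and 0.8 threshold are illustrative assumptions, not KOSA AI’s actual algorithm.

```python
# Illustrative only: a minimal disparate-impact check of the kind
# performed in step (1). The data and the 0.8 threshold (the common
# "four-fifths rule") are assumptions, not KOSA AI's actual metrics.

def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. 'advance to interview')."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rates.
    Values below ~0.8 are conventionally flagged as adverse impact."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# 1 = candidate passed the automated resume screen, 0 = rejected
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # privileged group
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # unprivileged group

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("potential adverse impact detected")
```

A real assessment would of course span many protected attributes and fairness definitions at once; this sketch only shows the shape of one such check.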

Who does your solution serve, and in what ways will the solution impact their lives?

KOSA AI’s mission is to make technology accessible and inclusive of all people. We work to support SDG 10 to reduce inequality. By 2030 we strive to empower and promote the social, economic and political inclusion of all, irrespective of age, sex, disability, race, ethnicity, origin, religion and economic or other status. As we target companies using AI at scale irrespective of industry, we aim to serve the global digital workforce and, in particular, traditionally underserved groups who have been denied equitable access to job opportunities.

Due to the nature of our solution, we work directly with enterprises rather than with our target beneficiaries. However, we do everything we can to ensure that our software has the impact we aim to achieve. For instance, in the ATS use case, we conducted a proof of concept by automatically neutralising subjective bias in resume text. We used a Bias Statement Detector (BSD) to detect and quantify the degree of bias within sentences and verified that the model consistently predicts the average bias with greater than 97% accuracy. This confirmed the bias inherent in text and the impact that keywords and language have on the degree of bias within a resume.

Furthermore, we work with a number of research institutions and partners, including the University of Massachusetts Amherst (UMass), the Institute of Electrical and Electronics Engineers (IEEE), Porsche, and the Linux Foundation, to increase knowledge and awareness of AI bias mitigation and AI ethics; we want to make sure that AI-driven inequalities are better understood so that solutions respond to the needs of those impacted.

To speed up the reduction of AI-driven inequality, both with the companies KOSA AI works with directly and with our target beneficiaries, KOSA AI is building an AI Academy. Here we provide course material and conduct workshops to educate and raise awareness on research topics such as ethical AI, AI governance, AI fairness definitions, bias within facial recognition software and other computer vision applications, and more. Through this platform, we aim to give our target population access to improved-quality data, help shape a supportive digital policy environment that enables data sharing while protecting privacy and security, and define use cases where AI has the largest potential for impact. One of the key locations for the AI Academy’s deployment is Africa.

KOSA AI has a prominent presence in Africa, where we leverage our distributed model to improve our software and increase our impact. Our network on the continent enables us to (1) access diverse training datasets that we can share with companies outside the continent that require more representative data; (2) build a data bank of more diverse datasets; and (3) increase the diversity of our team by mobilising African talent. In Africa, we see an opportunity to directly grow KOSA AI’s vision and create tangible impact, especially for rural communities and historically underserved groups. Specifically in Kenya, where the emerging technology sector is rapidly growing into a consequential share of national GDP, AI governance within the tech ecosystem is crucial. KOSA AI’s AI Academy, research partnerships with organisations such as the Technical University of Kenya (TUK), and other business efforts will reduce inequalities in the digital workforce by empowering minority groups and underserved populations through education, awareness and training on responsible AI and AI bias.

Which dimension of the Digital Workforce Challenge does your solution most closely address?

Reduce inequalities in the digital workforce for historically underserved groups through improved hiring and retention practices, skills assessments, training, and employer education and engagement
Explain how the problem you are addressing, the solution you have designed, and the population you are serving align with the Challenge.

The Challenge seeks solutions that will help underserved communities be better prepared for success in the digital workforce. It requests solutions that are technology- or innovation-based and use data science for positive change to support these communities. Our solution is a technology-based innovation that enables companies to identify and correct biases in their AI and ML that prevent minority groups such as women and historically underserved groups from accessing jobs they would otherwise be qualified for. Through our technology, businesses across industries can improve their hiring practices to include and support the underserved. 

In what city, town, or region is your solution team headquartered?

Amsterdam, Netherlands
Is your solution already being implemented in one or more of the following ServiceNow locations (Australia/New Zealand, Canada, France, Germany, India, Ireland, Israel, Japan, the Netherlands, Singapore, the United Kingdom, United States), or are you planning to expand your solution to one or more of these countries?

My solution is already being implemented in one or more of these ServiceNow locations

What is your solution’s stage of development?

Pilot: An organization deploying a tested product, service, or business model in at least one community.
Explain why you selected this stage of development for your solution.

KOSA AI launched a minimum viable product (MVP) earlier this year and has secured Letters of Intent from 5 customers across healthcare, technology, services, government, and education. We expect to convert at least 2 of our 5 piloting organisations into paying customers and plan to fully launch our product by early 2022. In addition, we are working on a community launch with Ai4Gov to deploy govtech solutions designed to improve the business environment in the Philippines, with the common objective of improving, developing, and promoting ethical AI practices. Finally, we have secured partnerships with Porsche, the Linux Foundation, IEEE and TUK to perform extensive research into topics such as ethical AI, AI governance, and AI fairness definitions. The results of this research will contribute to KOSA AI’s architecture, long-term development and subsequent success.

Who is the Team Lead for your solution?

Layla Li

Do you qualify for and would you like to be considered for the ServiceNow US Racial Equity Prize? If you select Yes, explain how you are qualified for the prize in the additional question that appears.

Yes, I wish to apply for this prize

Explain how you are qualified for this prize. How will your team use the ServiceNow Digital Equity Prize to advance your solution?

The ServiceNow Digital Equity Prize seeks solutions that address communities of colour in the US, with a special focus on an equitable digital workforce. By identifying and mitigating biases in AI and ML processes across industries, our solution will reduce the hidden inequalities that today affect millions of people and will enable equitable access to services and jobs they want, irrespective of race.

Though we do not work directly with our target beneficiaries, we enable businesses to improve the quality of the data fed into their decision-making tools, making those tools fairer while increasing revenue; thanks to us, companies can be more profitable by being more equitable. We expect to mainstream AI data fairness across organisations and sectors, catalysing broader positive impact for everyone affected by our customers’ products and services. This includes people of all races and backgrounds.

For example, UnitedHealth’s Optum algorithm, which serves 200 million people, showed drastic racial bias, denying care to 50% of qualifying Black patients. At KOSA AI, we seek to reduce these inequalities by equipping businesses with our AI bias detection and mitigation technology. With support from this prize, we can accelerate the deployment of our technology across services sectors to improve access to financial services, insurance and health care and strengthen economic opportunities for thousands of underserved communities across the US and globally.

More About Your Solution

Which of the following categories best describes your solution?

A new application of an existing technology
What makes your solution innovative?

KOSA AI is the world's first SaaS solution for AI bias auditing and mitigation. Today, most organisations that use AI in decision-making tools don’t conduct algorithmic auditing or monitoring for potential biases, primarily due to three barriers: (1) the misconception that bias auditing is prohibitively expensive, (2) the subjective nature of AI bias as a topic, and (3) the lack of buy-in from non-technical stakeholders. AI managers acknowledge that responsible AI is a priority, but are often held back because they lack the right tools.

Our solution addresses precisely these three barriers. First, it is half the cost and 10x more efficient than current alternatives on the market. Second, the research conducted with our partners ensures the latest ethical definitions and fairness strategies are encapsulated in KOSA AI’s design. Third, it targets every AI and ML stakeholder across the organisation with a set of relevant tools that are didactic and easy to use, increasing understanding of biases in data and detailing the financial benefits of correcting them.

Furthermore, our ties in Africa give us a significant advantage over our competitors. We can access diverse training datasets that we can share with companies outside the continent that require more representative data. Our partnerships with African institutions and our recruitment of African talent, both often neglected by major technology markets, represent a significant opportunity for KOSA AI to play a decisive role in shaping the next AI frontier.

Describe the core technology that powers your solution.

On the backend, our tech stack is Python, Flask, Postman, AWS, and Vue.js. We build a web application that connects to customers’ data warehousing solutions (GCP, AWS, etc.) and AI development platforms (SageMaker, Watson, etc.) through APIs; we then perform data and model evaluations through custom algorithms on the available data and models.

On the frontend, our platform features two dashboards, one for technical users and one for non-technical users. (1) The developer dashboard allows a software engineer to connect KOSA AI’s algorithm to the company’s environment and select the desired fairness definition for evaluating AI bias; they can adjust metrics for the relevant use case and select the desired mitigation strategy. The final output is a fairness matrix comparing the original and mitigated datasets, which developers can choose to migrate directly into their data storage system. (2) The non-developer dashboard enables non-technical stakeholders to select the company’s AI bias mitigation projects and view bias explanations and ROI.
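
To make the fairness matrix concrete, the sketch below shows a minimal, hypothetical comparison of one fairness metric (statistical parity difference) between an original and a mitigated dataset. The metric choice and numbers are illustrative assumptions; the fairness definitions KOSA AI supports are selected by the developer in the dashboard.

```python
# A sketch of the kind of fairness comparison the developer dashboard
# surfaces. Metric and figures are illustrative assumptions only.

def statistical_parity_diff(rates):
    """Difference in positive-outcome rate between two groups;
    closer to 0 means more equal treatment."""
    return rates["privileged"] - rates["unprivileged"]

def fairness_row(label, rates):
    """One row of the fairness matrix."""
    return {"dataset": label,
            "parity_diff": round(statistical_parity_diff(rates), 3)}

original  = {"privileged": 0.75, "unprivileged": 0.38}
mitigated = {"privileged": 0.71, "unprivileged": 0.66}

matrix = [fairness_row("original", original),
          fairness_row("mitigated", mitigated)]
for row in matrix:
    print(row)
```

In this hypothetical example the mitigation shrinks the parity gap from 0.37 to 0.05, which is the kind of before/after comparison the matrix is meant to convey at a glance.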

Provide evidence that this technology works. Please cite your sources.

We conducted a proof of concept using an ATS use case: resume screening. We automatically neutralised subjective bias in resume text, using a Bias Statement Detector (BSD) to detect and quantify the degree of bias within sentences. We incorporated common linguistic and structural cues of biased language, including sentiment analysis, subjectivity analysis, modality (expressed certainty), the use of factive verbs, hedge phrases, and many other features, and accounted for 85.9% of the variance in human judgements of perceived bias in news-like text. The results verified that the model consistently predicts the average bias with greater than 97% accuracy. This confirmed the bias inherent in text and the impact that keywords and language have on the degree of bias within a resume, and it further confirmed the accuracy of our product and its relevance to what the market requires today.

The success of the model in this use case will enable companies to scan resumes and filter out keywords and experience requirements that cause bias, which typically arise from criteria historically defined by the employer but no longer relevant today. The result is a much fairer hiring process in which people are hired based on their capabilities as well as their credentials.
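
To illustrate the kind of linguistic cues described above, the toy sketch below scores a sentence by counting matches against small hedge, factive-verb, and subjectivity lexicons. The word lists and scoring are invented for illustration only; the actual BSD relies on trained models rather than simple lexicon counts.

```python
# Toy illustration of lexicon-based bias cues (hedges, factive verbs,
# subjective intensifiers). All word lists here are invented examples.

HEDGES      = {"arguably", "probably", "perhaps", "somewhat", "apparently"}
FACTIVES    = {"reveal", "realize", "know", "regret"}
SUBJECTIVES = {"outstanding", "terrible", "brilliant", "aggressive"}

def bias_cue_score(sentence):
    """Fraction of tokens that match any bias-cue lexicon."""
    tokens = [w.strip(".,").lower() for w in sentence.split()]
    if not tokens:
        return 0.0
    hits = sum(t in HEDGES | FACTIVES | SUBJECTIVES for t in tokens)
    return hits / len(tokens)

neutral = "Managed a team of five engineers."
loaded  = "Arguably an outstanding, aggressive self-starter."
print(bias_cue_score(neutral))  # 0.0 (no cue words)
print(bias_cue_score(loaded))   # 0.6 (3 of 5 tokens are cue words)
```

A production detector would weight such features alongside sentiment, modality, and context rather than treating each token equally, but the intuition is the same: the more loaded the language, the higher the score.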

Relevant papers and research:

[1] Cornell University. “Automatically Neutralizing Subjective Bias in Text” (Nov 2019)

[2] Georgia Institute of Technology. “Computationally Detecting and Quantifying the Degree of Bias in Sentence-Level Text of News Stories”

[3] Georgia Institute of Technology. “Automatically Neutralizing Subjective Bias in Text” (Dec 2019)

[4] KDnuggets. “Word Embedding Fairness Evaluation” (2020) https://www.kdnuggets.com/2020/08/word-embedding-fairness-evaluation.html


Please select the technologies currently used in your solution:

  • Artificial Intelligence / Machine Learning
  • Big Data
  • Software and Mobile Applications
Does this technology introduce any risks? How are you addressing or mitigating these risks in your solution?

A number of ethical risks accompany this kind of technology. Some companies may not be able to mitigate the risks relevant to specific use cases; for instance, a deployed model may carry inherent bias for which the mitigation strategies available from KOSA AI are not applicable. Therefore, though KOSA AI is sector-agnostic, our go-to-market starts with one sector at a time, beginning with healthcare. This enables more focused product development.

In addition, there are a number of academic debates around process fairness versus outcome fairness; in obtaining one, we can risk the other. In the ATS use case, for example, we want to ensure underserved population groups have equal opportunities in finding a new job; currently, the hiring process focuses on credentials rather than capabilities. By mitigating biased text within resumes (process fairness) to ensure capabilities are also considered, we must not compromise or undervalue the importance of credentials, which could negatively impact groups that have both the credentials and the capabilities to be hired (outcome fairness).

A similar conversation is happening around short-term versus long-term fairness measurement. At this stage, KOSA AI collaborates with Europe- and US-based research institutions to conduct more thorough studies on the topic and to develop and validate the safest strategy, which we intend to share with governments and industry.

Select the key characteristics of your target population.

  • Minorities & Previously Excluded Populations
  • Persons with Disabilities
In which countries do you currently operate?

  • Kenya
  • Netherlands
  • United States
In which countries will you be operating within the next year?

  • Kenya
  • Netherlands
  • United States
How many people does your solution currently serve? How many will it serve in one year? In five years?

As previously mentioned in the application, we work with organisations that offer services to our end beneficiaries. At this stage of development we are therefore actively tracking and forecasting our current customers while estimating the number of beneficiaries impacted.

 

Current customers: 5 customers, serving roughly 0.5 million people

Customers in 1 year (2022): 20 customers, serving roughly 2 million people

Customers in 5 years (2026): 600 customers, serving roughly 50 million people

What are your impact goals for the next year and the next five years, and -- importantly -- how will you achieve them?

We work to achieve SDG 10 (reduced inequalities), especially target 10.2: empowering and promoting the social, economic and political inclusion of all. Our mission is to make technology more inclusive of all races, genders, and ages.

There are three main activities that KOSA AI is investing in to ensure transformational impact on the millions of people that are unfairly underserved. 

  1. We have developed a software solution aimed at reducing gender, racial and ethnicity-based inequalities caused by biases in enterprise AI-powered decision-making tools. We leverage the business opportunity of de-biasing AI and ML for companies so that they can hire and retain a more diverse workforce, driving the deployment of our technology and accelerating the reduction of inequalities for millions of individuals. 

  2. The extensive research carried out with our partners, academic institutions and organisations listed above, on ethical definitions and fairness strategies, is not only encapsulated in KOSA AI’s product and service design but also generates increased awareness around AI bias detection and mitigation.

  3. Our established presence in Africa through the AI Academy, research partnerships with organisations such as TUK, and other business efforts, such as the collection and sharing of more representative datasets, will further our vision of creating equitable, accessible and inclusive AI for the world, specifically impacting minority groups and historically underserved populations. Leveraging our ties in Africa represents a significant future opportunity for KOSA AI to play a decisive role in influencing this next AI frontier.

How are you measuring your progress toward your impact goals?

Currently, the 2021 key objectives and results we are tracking or benchmarking against are:

  • Product: 80% feature adoption to ensure we are building a product that companies will benefit from. 

  • Product: 80% task success rate to ensure the product is easy to use.

  • Product: 90% success rate on bias mitigation in-house beta testing with use case application.

  • Marketing: 10 customer lead generations from our active content marketing and brand awareness.

  • Marketing: 500 active followers on our social media platforms.

  • Marketing: 500 website visits through marketing efforts. 

  • Sales: 80% success rate of customer traction from email reach (cold and warm introductions)

  • Customer: NPS > 6 from customer interviews.

  • Competition: Market differentiation from our competitors to ensure we are building a unique value proposition.

  • Team: eNPS > 7 from employees to ensure we are creating a high performing, sustainable team.

  • Team: Score over 4/5  from employee onboarding experience.

  • Finance: Monthly burn rate, currently USD 20,000. As a software solution, we are lean by definition with low OPEX.

In the long run, we would like to measure the following:

  • Number of people indirectly impacted i.e. our target beneficiaries (we are currently in the process of determining a viable process to do so)

  • Monthly active sales per use case/per industry vertical

  • Economic activity e.g. liquidity ratios

  • Value determined from key partnerships e.g. percentage of useful research that can be (1) incorporated into the product solution and (2) impact driven on target population i.e. minority groups

About Your Team

What type of organization is your solution team?

For-profit, including B-Corp or similar models

How many people work on your solution team?

6 full time

2 part time

How long have you been working on your solution?

Since November 2020

How are you and your team well-positioned to deliver this solution?

KOSA AI was founded by entrepreneurs passionate about reducing the inequalities exacerbated by technology. With multidisciplinary skills across software development, data science, business, and marketing, our full-time team of six has the technical and commercial expertise to deliver our solution.

Both founders have come to understand the granular problems of AI bias not only through their passion to fix them, but through their own diverse backgrounds: both are women with roots in Asia, Africa, and Europe. Moreover, KOSA AI’s team comes from all corners of the world, from Ethiopia to South Korea. All have experienced bias first-hand in their personal and professional lives, and together we are building a platform that addresses each issue around responsible AI.

Key profiles below:

Co-Founder/CEO Layla Li is a Boston University and Harvard University graduate with 7+ years’ experience building technology solutions at companies including Tesla and Philip. Layla built strong expertise in AI bias during her time developing automated decision-making systems for multiple international organisations.

Sonali Sanghrajka, Co-Founder/Chief Commercial Officer, has 10+ years’ experience in the healthcare sector, driving brand and commercial strategies for products worth $500 million. Sonali has worked with patients directly affected by bias in care delivery, and through her consulting services she has seen first-hand the challenges AI companies face in bringing their AI products and solutions to the African continent.

 

What is your approach to building a diverse, equitable, and inclusive leadership team?

At KOSA AI, we embrace diversity; in fact, diversity, equity, and inclusion are both essential values and key pillars of our vision, mission, and strategy. Our Co-Founders are women from diverse backgrounds: Layla is Chinese, Sonali is Indian-Kenyan, and the team itself comes from Korea, India, Ethiopia, Kenya and Greece. We believe this diversity is essential to achieving our mission of reducing technology-fuelled inequalities. In addition, our presence in Africa and our remote working culture empower us to actively recruit employees from every continent, which further increases the diversity of our team and enables holistic and dynamic input and outcomes.

Your Business Model & Partnerships

Do you primarily provide products or services directly to individuals, to other organizations, or to the government?

Organizations (B2B)
Partnership & Prize Funding Opportunities

Why are you applying to the 2021 Digital Workforce Challenge?

We believe Solve’s mission to reduce inequalities in the digital workforce for historically underserved groups directly aligns with our own. We therefore seek Solve’s support in the following areas:

Scaling and partnering: As we launch our MVP in Europe, we are looking for organisations to support and partner with us to pilot our product across the region. We are looking for companies in Healthcare and Life Sciences, BFSI, Public Sector, Technology and Services, and Education. We would also welcome the opportunity to work with academic and research institutions to further increase the reach of our product.

Grant funding: We welcome grants to enable collaborations with academic institutions and fund research projects that advance the field of ethical AI and bias impact. This is specific to the feedback loop to organisations and companies in Europe and the US that will benefit from our efforts in Africa. 

Impact measurement: We are seeking support to develop an impact measurement framework to better understand the needs of our target beneficiaries. We have focused on the development of our product to ensure maximum usability and results for our customers; however, we would like to develop a system that enables us to better study our target beneficiaries and integrate their needs into our product development processes.

Networking and mentorship: We welcome the opportunity to network with institutions and individuals that share the same vision of reducing bias and creating more responsible and trustworthy AI.

In which of the following areas do you most need partners or support?

  • Monitoring & Evaluation (e.g. collecting/using data, measuring impact)
  • Product / Service Distribution (e.g. expanding client base)
Please provide an overview of your current activities in those locations.

The Netherlands is where KOSA AI is headquartered and where our software development and operations reside. Ahead of our MVP launch, we are conducting in-house beta testing of product features against specific use cases, mainly within the healthcare and education sectors. These will be presented to organisations and companies within the Netherlands AI Coalition (NL AIC). The feedback acquired from stakeholders within these organisations will help shape and advance our product and services for the Dutch and EU markets.

Please explain in more detail here.

KOSA AI currently has a number of partners, ranging from research collaborations with academic institutions on ethical AI to training and education deployments in Africa; we thrive on the endless possibilities of expanding our innovation through collaborative work.

Listed here are a few of the partnership goals we have in mind:

  1. Partnering with associations that are engaged in AI regulatory framework and legislation policy implementation. This knowledge transfer will keep KOSA AI up to date with the most relevant legislation in each geographical context, as we plan to expand our frontiers.

  2. Partnering with academic institutions that are endeavouring to better define ethical AI, fairness strategies and AI governance and investing in use case research that is correlated to AI demand within a particular geographical area. Our long-term product roadmap includes expanding our product and services offerings to sectors that require it the most - such as automation. 

  3. Partnering with organisations that can support us in developing an impact measurement framework to better understand the needs of our target beneficiaries (traditionally excluded and underserved communities across regions). We would like to develop a system that enables us to better study our target beneficiaries and integrate their needs into our product development processes.

  4. Partnering with organisations and companies whose global networks can be leveraged to support new market entry. We intend to deploy our product and services within different industries, for example automation, and to expand into new frontiers such as Asia.

What organizations would you like to partner with, and how would you like to partner with them?

We welcome the opportunity to partner with ServiceNow and the MIT Solve community, including the following:

  1. ServiceNow’s credibility and media opportunities can amplify KOSA AI’s efforts to create awareness and educate the public on ethical AI, AI governance, AI fairness definitions and more.

  2. ServiceNow’s resources and diverse range of workflow solutions will support KOSA AI in managing the complexities of our internal operations as the business grows its product and market footprint.

  3. Beyond embracing a diverse team, we understand the need to build a multidisciplinary team with a wide range of skills and expertise, from applying AI frameworks in practice to engineering the product for market relevance. This is especially important when commercialising the product in sectors where companies themselves don’t yet fully understand responsible AI and AI governance. ServiceNow’s skills-based mentorship will be extremely beneficial in helping us navigate this landscape.

  4. We see Oxford Sciences Innovation and its portfolio companies as potential research partners to advance research in ethical AI and bias detection tools, and as implementation partners to pilot our solution across a wide range of healthcare applications.

Solution Team

  • Layla Li Co-founder & CEO, KOSA AI
  • Sonali Sanghrajka Co-Founder & CCO, KOSA AI
 