
Learner//Meets//Future: AI-Enabled Assessments Challenge

How can we use AI to make assessments more personalized, engaging, and predictive for Pre-K to Grade 8 learners in the United States?

Submissions are Closed

FAQs

Table of Contents

Who can apply to the challenge? 

What types of solutions are eligible?

What does Global Access mean for my solution, specifically?

I have code. What am I expected to submit as part of my application?

What does the challenge process involve?

How will my solution be evaluated? 

What is the challenge timeline?

What will winners receive if their solution is selected?

Will the intellectual property rights of applicants, as they pertain to their solution submissions, be protected by MIT Solve?

Who can apply to the challenge?

AI-enabled assessment is a new space for many innovators, and we encourage new players to submit their ideas. Individuals and teams who are well-versed in AI capabilities but less familiar with product development or EdTech, for example, are encouraged to apply.

Solutions can be at any stage, and we very much welcome concept and prototype stage solutions. You might be an individual with code, part of an existing team—for example, an EdTech company or a lab at a university that is developing an AI assessment tool—or something else: all are welcome. You can be located anywhere, but your solution must be relevant for the US Pre-K-8 context, as we are looking for solutions that can be piloted in public Pre-K-8 classrooms in the US. 

We invite submissions from individuals, teams, and/or organizations. Your solution does not need to be a part of a registered organization to participate.

  • Solutions can be for-profit, nonprofit, hybrid, or not formally registered as any organization type

  • Solutions must target learners in Pre-K to Grade 8 (ages 3-14). A solution does not need to serve that entire age range and may target a specific group, for example Pre-K to Grade 2 (approximately ages 3-8) or Grades 3-8 (approximately ages 8-14).

  • Solutions must benefit all Transitional Kindergarten (TK), Pre-K, and K-8 students but should prioritize strategies that support those who face the biggest barriers to opportunity, including Black and Latino learners and all learners experiencing poverty (referenced throughout this page and application as priority learners).

  • Solutions must be enabled by artificial intelligence.

  • Applicants may be based outside of the United States, but the solution must be applicable to the US context. US law prevents MIT Solve from accepting applications from people who are ordinarily resident in Iran, Cuba, Syria, North Korea, Crimea, Russia, and Belarus, or from parties blocked by the US Treasury Department.

What types of solutions are eligible?

Solution applications must be written in English. 

To ensure a positive impact for the intended beneficiaries, all winning solutions must ensure Global Access. Global Access requires that winning solutions be made available and accessible at an affordable price in support of the U.S. educational system. For more information and resources on Global Access, see the foundation’s Global Access Statement and Global Access webpage. Applicants do not need to have Global Access plans in place at the time of application, but should be prepared to demonstrate how they would meet those requirements if selected. See the ‘What does Global Access mean for my solution, specifically?’ and ‘Will the intellectual property rights of applicants, as they pertain to their solution submissions, be protected by MIT Solve?’ FAQs for more details.

The challenge considers solutions at various stages of development. We expect many solutions to be at the concept and prototype stage.

Concept: An idea being explored and researched for its feasibility to build a product, service, or business model, including prototypes under development. Until the solution has a functioning prototype, we would still consider it a Concept. Note: solutions that consist only of code are likely to be considered Concept stage.

Prototype: An initial working version of a solution that may be in the process of getting initial feedback or testing with users (i.e. running a pilot). If for-profit, a solution that has raised little or no investment capital. Until the solution transitions from testing to consistent availability, we would still consider it a Prototype. (Often 0 users/direct beneficiaries)

Pilot: The solution has been launched in at least one community, but is still iterating on design or business model. If for-profit, is generally working to gain traction and may have completed a fundraising round with investment capital. (Often 10+ users/direct beneficiaries)

Growth: An established solution available in one or more communities with a consistent design and approach, ready for further growth in multiple communities or countries. If for-profit, has generally completed at least one formal investment round (Seed stage or later). If nonprofit, has an established set of donors and/or revenue streams.

Scale: A standard solution operating in many communities or multiple countries, prepared to scale significantly by improving efficiency. If for-profit, has likely raised at least a Series A investment round.

What does Global Access mean for my solution, specifically?

As noted above, Global Access requires that the winning solutions be made available and accessible at an affordable price in support of the U.S. educational system. 

If your solution is at the Concept, Prototype, or Pilot stage, it is likely sufficient for you to share learnings from your work publicly in order to meet Global Access requirements.

If your solution is at the Growth or Scale stage, and especially if your solution is a for-profit entity, you’ll need to be clear as to how this work will benefit your target population and create shared knowledge about how they can be served. This may include making code publicly available, sharing learnings widely, or improving an existing product to better serve priority learners. 

Please note that if you are a for-profit improving an existing product, you will need to indicate how you will ensure your product improvement is appropriate for priority learners, affordable to schools that serve them, and available within a package that makes sense for them.


I have code. What am I expected to submit as part of my application?

Access to the code base for your solution will allow our technical vetters to assess the technical feasibility of your solution and is strongly encouraged in your initial application. If selected as a finalist, demonstrating operational code that you have rights to will be required given the need to meet Global Access provisions (see here for details). 

The application includes optional questions asking you to link to a public GitHub repository (or similar), or to send a repository invite to MIT Solve’s GitHub account instead of sharing a public repository. These answers are viewable by Solve staff, foundation staff, technical vetters, and judges only.
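If your repository is private, one way to grant a reviewer read-only access is GitHub’s REST collaborators endpoint, invoked through the GitHub CLI (`gh`). This is a hedged sketch only: the repository path and reviewer account name below are placeholders, not confirmed values — use whatever the application form actually specifies. The script prints the command rather than running it, so you can review it first:

```shell
#!/bin/sh
# Hypothetical sketch: OWNER_REPO and REVIEWER are placeholders --
# substitute your own repository and the account name given in the
# application form. Running the printed command requires an
# authenticated GitHub CLI (`gh auth login`).
OWNER_REPO="your-github-username/your-solution-repo"
REVIEWER="solve-reviews"   # placeholder, not a confirmed account name

# Grant read-only ("pull") access via GitHub's REST collaborators
# endpoint. The leading echo makes this a dry run; remove it to
# actually send the invite.
echo gh api "repos/${OWNER_REPO}/collaborators/${REVIEWER}" \
  -X PUT -f permission=pull
```

The same invite can be sent from the repository’s web interface (Settings → Collaborators) if you prefer not to use the command line.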

What does the challenge process involve?

Sourcing Solutions: Anyone who meets the criteria above can participate in this challenge and submit a solution. Whether you’re working on a concept or scaling your program or product, we’re looking for students, researchers, innovators and entrepreneurs with the most promising solutions that leverage AI to improve the assessment experience for learners and educators in the United States.

Selecting Solutions: Once the submission deadline passes, judging begins. After an initial screening and review by Solve staff and community reviewers, up to 18 solutions will move forward as finalists. These finalists will be invited to pitch their solutions during a virtual interview day with the judges. After final scoring, the judges will select 5-8 winning solutions.

Supporting Solutions: Winning solutions will share prize funding (pool of $500,000) and receive support to further develop and implement their solutions. We intend for the winning solutions to be piloted in classrooms with priority learners within one year of selection. 

How will my solution be evaluated?

The judging panel for this challenge will be composed of leaders and experts with experience in educational assessment and artificial intelligence in the Pre-K-8 context in the United States. After an initial screening by Solve staff and community reviewers, the judges will score the screened solutions based on the following criteria. 

Alignment: The solution addresses at least one of the key dimensions of the challenge. The solution is applicable to US TK, Pre-K, and K-8 priority learners.

  • A solution would score lower on Alignment if it does not convincingly explain why it is relevant to the challenge.

  • A solution would score higher on Alignment if it fits one or more of the challenge dimensions, or is clearly relevant to the overall challenge question.

Potential for Impact: The planned solution has the potential to improve the efficiency and utility of Pre-K-8 assessments for learners and educators.

  • A solution would score lower on Potential for Impact if the theory of how it could change lives does not make logical sense, or if there is existing evidence that it will not work.

  • A solution would score higher on Potential for Impact if the theory of how it could change the lives of the intended population makes sense and the applicant provides evidence that it is likely to have the intended impact (either from evaluations of the solution itself or from an existing body of evidence about similar interventions).

Feasibility & Readiness: The team has a realistic and practical plan for implementing the solution, and it is feasible in the given context. If not already piloted, the solution has the potential to be ready for piloting within the next year.

  • A solution would score lower on Feasibility & Readiness if the team does not have a realistic plan for implementation, or if the plan is unlikely to succeed (even if funding is acquired).

  • A solution would score higher on Feasibility & Readiness if the team has a realistic plan for implementation and piloting that accounts for the political, economic, geographic, and cultural context, and the team has the necessary skills to implement that plan.

Inclusive Human-Centered Design: The solution is designed with and for priority learners and their educators in the United States. The solution team demonstrates proximity to the community and both embodies and addresses diversity, equity, and inclusion throughout the design, implementation and internal operations of the solution.

  • A solution would score lower on Inclusive Human-Centered Design if the solution is not designed with and for priority learners, and if the team and its leadership are not well-placed to deliver the solution because they are unable to demonstrate proximity to the population and/or how they prioritize DEI.

  • A solution would score higher on Inclusive Human-Centered Design if the solution, team, and leadership clearly demonstrate a focus on and proximity to priority learners; have clearly designed the solution for and with those populations; and articulate a clear plan for continuing to keep DEI at the center of their work. 

Scalability: The solution can be scaled to affect and improve the universal experience of learners and educators. Note: only solutions selected as finalists will be assessed on this criterion.

  • A solution would score lower on Scalability if it solves a problem that does not affect other places or populations, if it would not be possible for it to grow in size, or if there is no path to financial viability.

  • A solution would score higher on Scalability if it has the potential to grow to affect the lives of millions and has a viable plan for achieving financial sustainability.

Technical Feasibility: The applicant has provided convincing evidence that the technology has been built and functions as they claim it does. Note: only solutions selected as finalists will be assessed on this criterion.

  • A solution would score lower on Technical Feasibility if the technology underlying the solution would not be possible to create.

  • A solution would score higher on Technical Feasibility if the applicant has provided convincing evidence that the technology underlying the solution has been successfully built and tested.

 

What is the challenge timeline?

  • March 4, 2024: Challenge Opens for Submissions
  • March 27, 2024: Challenge Information Session
  • May 7, 2024: Challenge Closes for Submissions
  • May 8 - 28, 2024: Screening & Reviews
  • By May 31, 2024: Finalist Selection
  • June 12, 2024: Finalist Technical Vetting Interviews
  • Late June, 2024: Finalist Pitches & Interviews
  • Mid-July, 2024: Winner Selection

While we aim to follow the schedule above, these dates are subject to change. All applicants will be notified if changes occur.

What will winners receive if their solution is selected?

A pool of $500,000 in funding is available for up to eight winners of the Learner//Meets//Future: AI-Enabled Assessments Challenge. Additional funding may be available, and winning solutions will receive support from Solve and the foundation to move forward on their development, piloting, and/or scaling journeys. More details on specific support activities will be provided at a later date.

Will the intellectual property rights of applicants, as they pertain to their solution submissions, be protected by MIT Solve?

Your contributions are yours. Those who post information or materials on this website (the “Materials”) retain rights to their own work while giving us the right to distribute their work, and others the right to use the work with appropriate citation under the CC-BY-NC-SA license.

Others’ work is not yours. You agree not to upload Materials to this website that you do not own or are not specifically authorized to use. You also agree to appropriately attribute references to works and ideas created by third parties, including other users of this website.

In order to upload content on this website, you must grant the Massachusetts Institute of Technology (“MIT”) a non-exclusive right to use the Materials. Unless specifically noted, all Materials on the website will be made available to third parties under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License. You can review Solve’s full Terms of Service here.

For this challenge, MIT Solve adheres to the Bill & Melinda Gates Foundation’s Global Access provisions, which are intended to promote broad availability of winning solutions to priority populations, not to restrict innovators from commercializing their work in other ways. Winning solutions are also required to meet Global Access provisions as noted above. This will not involve a specific license to MIT, but solutions that make use of third-party code should consider the ownership and licensing of those tools when applying. For example, solutions that act as a front-end to ChatGPT or another third-party model may be limited in their ability to guarantee continued operation at an affordable price, based on decisions made by the model’s owner.
