Submitted
2025 Global Learning Challenge

Transforming Student Literacy

Team Leader
Chad Vignola
What is the name of your organization?
Literacy Design Collaborative
What is the name of your solution?
Transforming Student Literacy
Provide a one-line summary or tagline for your solution.
LDC Deeper Learning: Scaling Formative Assessment Feedback Systems to Transform K-12 Student Literacy and 21st C. Skills through Generative AI
In what city, town, or region is your solution team headquartered?
New York, NY, USA
In what country is your solution team headquartered?
USA
What type of organization is your solution team?
Nonprofit
Film your elevator pitch.
What specific problem are you solving?
Global productivity depends on rigorous literacy and higher-order executive function, strategic thinking, and problem-solving skills. Teachers have long struggled to deliver the level of instruction necessary for all students to master these 21st Century skills, especially in high-needs, marginalized communities. The 2024 NAEP scores reveal the worst-ever student literacy results in America, and comparable declines appear in many other countries (see the EF Ranking of Countries by English Skills). Twenty years of research demonstrates that reliable curriculum-embedded formative assessment systems have the biggest impact on improving student conceptual understanding (Hattie) outside of highly effective teachers, who cannot be scaled. Existing GenAI tools accelerate formative assessment only on simplistic elements of student writing; none are tied to innovative assessment focused on deeper learning. LDC’s independently proven, statistically significant doubling of student disciplinary literacy outcomes is held back from scaling nationally and internationally by its reliance on non-calibrated teacher use of Stanford-validated analytic rubric scores, discursive feedback, and ad hoc teacher identification of each student’s next proximal learning steps toward grade-level mastery. Moreover, most teachers do not have the time or capacity to provide feedback quickly, or, perhaps most importantly, accurately, for 100+ secondary students, much less identify individualized, differentiated next learning steps.
What is your solution?
LDC’s classroom prototype brings to life a first-of-its-kind, measurable, scalable, curriculum-embedded, AI-enabled assessment system that guarantees students’ 21st Century skills (e.g., critical thinking and problem solving) and grade-level mastery by generating detailed data that (1) directs teachers to validated, personalized next instruction (“What do I do next?”); (2) empowers student learning self-agency; and (3) instantly provides an objective assessment of teacher instructional efficacy. Process: students write in response to a rigorous LDC SCALE-validated disciplinary prompt and a complex disciplinary text (science, social studies, or ELA). The AI uses SCALE’s analytic rubric to score the writing, provide narrative feedback, and then direct teachers, and students, to specified next instructional steps that accelerate proximal learning toward grade-level disciplinary literacy and higher-order thinking mastery. The teacher can mediate the instantaneous AI output for hundreds of students. The solution works because it is premised on LDC’s statistically proven framework that guarantees Tier 1 disciplinary literacy instruction to drive deeper student learning (Wang) and on research showing that curriculum-embedded formative assessment drives more student improvement than any other scalable resource (Hattie). It replaces twice-a-year teacher feedback on extended writing with unbounded student practice and calibrated AI feedback, while SCALE analytic rubrics demonstrably mitigate potential AI bias through combined agentic LLM interactions.
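For illustration only, the minimal sketch below (Python) shows one way such a rubric-scoring loop could be wired: a language model is prompted once per analytic-rubric dimension and asked to return a score, student-facing narrative feedback, and a suggested next instructional step, all of which the teacher reviews before anything reaches students. The names (call_llm, DimensionScore, score_essay), the 1-4 score scale, and the prompt format are assumptions made for explanation; they are not LDC’s or SCALE’s actual implementation.

```python
# Hypothetical sketch of a per-dimension rubric-scoring loop.
# All names and formats are illustrative assumptions, not LDC's system.
from dataclasses import dataclass

@dataclass
class DimensionScore:
    dimension: str   # e.g., a rubric dimension such as "Use of Evidence"
    score: int       # analytic rubric level (assumed 1-4 here)
    feedback: str    # student-facing narrative feedback
    next_step: str   # suggested next instructional move for the teacher

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM (assumption)."""
    raise NotImplementedError

def score_essay(essay: str, task_prompt: str, rubric: dict[str, str]) -> list[DimensionScore]:
    """Score one student response against each analytic-rubric dimension."""
    results = []
    for dimension, descriptor in rubric.items():
        llm_prompt = (
            f"Writing task: {task_prompt}\n"
            f"Rubric dimension: {dimension}\n"
            f"Level descriptors: {descriptor}\n"
            f"Student writing:\n{essay}\n\n"
            "Reply with: score (1-4) | two sentences of student-facing feedback | "
            "one suggested next instructional step."
        )
        score, feedback, next_step = call_llm(llm_prompt).split("|", 2)
        results.append(DimensionScore(dimension, int(score.strip()),
                                      feedback.strip(), next_step.strip()))
    # The teacher reviews, and can override, every score and next step
    # before it informs instruction or is shared with students.
    return results
```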
Who does your solution serve, and in what ways will the solution impact their lives?
LDC will initially seek to support the approximately 25,000 K-12 high-need rural and urban public-school students across current LDC implementations (NYC, LAUSD, urban and rural NC, KY, CO, and NYS: 75%+ FRL, 75%+ non-white, and 12-22% multilingual and SPED learners). Historically marginalized rural and urban students have been underserved in at least six untenable ways, with: (1) the least prepared new teachers (Kane); (2) the lowest-skilled classroom teachers (Id.); (3) teachers least supported by the least skillful administrators (Id.); (4) the least access to high-quality instructional materials (EdTrust/TNTP, The Opportunity Myth); (5) the least access to meaningful, nuanced student data to support deeper student learning, particularly for students below grade level (D. Wiliam); and (6) the least meaningful job-embedded professional learning. Over-the-shoulder, ad hoc individual literacy coaching with no quality measurement, standardization, or demonstrated impact has been the failed choice for 20 years. LDC’s GenAI data-driven, scalable, calibrated, curriculum-embedded deeper learning assessment would quickly transform teaching (Hattie/Wiliam) by providing nationally calibrated student data, deeper-learning feedback, and AI linkage to SCALE-validated next differentiated disciplinary learning content, considerably more effective than “Achieve 3000”-style drill and kill, or even rarely enduring, non-calibrated, non-validated 1:1 tutoring.
Solution Team:
Chad Vignola
Executive Director