News
YCYW Educational Insights
04 May, 2026
16:37
In a single month, three school systems have done something concrete about AI. New York City issued preliminary AI guidance for its 1.1 million students. The US Department of Education made AI a priority for federal grants. South Korea stripped its AI Digital Textbook of legal status, and the publishers behind it are now in constitutional court. The pattern looks like a tug-of-war between adoption and prohibition. The more interesting reading is that schools are finally getting around to writing rules at all. Whether the rules actually help children learn is a different question. The rules themselves cannot answer it.
The largest US school district has set a clear floor. The New York City Department of Education's preliminary AI framework, released in April, lets teachers use generative AI for brainstorming, organising, drafting communications, and lesson planning. It bans AI-driven grading, disciplinary decisions, and biometric or behavioural data collection without strict oversight. Public comment runs through May 8, 2026, and a fuller playbook is expected in June. Coverage of the framework, in K-12 Dive and elsewhere, has stayed close to what is permitted and what is banned.
The more interesting figure is in a parallel survey. 74% of US students now report school-level AI rules in place, up from 51% in 2025. In one year, the share of students learning under some kind of school AI policy has jumped by almost half. School policy has moved faster on this than it usually moves on anything.
Policy is no longer the thing holding schools back. By the end of 2026, almost every serious school will have an AI policy of some kind. What separates the schools we would choose from the ones we would not is what they actually do inside their own rules.
The second signal cuts against any easy optimism about the first.
On April 13, 2026, the US Department of Education finalised a rule prioritising AI in its grant awards, per K-12 Dive. Districts and projects working on AI literacy or ethical use will get preferential treatment. Federal money is now flowing toward AI in classrooms on top of the local rule-making.
The data on actual use is much quieter. A recent RAND Corporation survey of US K-12 teachers found that only 34% say AI is making them more effective. Adoption keeps climbing. Effectiveness, measured by the people in the actual classroom, lags. A separate EdSurge analysis finds 1 in 3 US Pre-K teachers now use generative AI in school. The youngest children in the system are already inside the wave. Their teachers are still working out whether the wave is helping them.
No policy framework, on its own, will close that gap. Writing the rules is one job. Designing the teaching is a different job, and most schools have only really begun the first one.
Seoul shows the limits of expecting policy to do the work that teaching design has to do.
South Korea's AI Digital Textbook, known as AIDT, was meant to be one of the most ambitious public deployments of AI in classrooms anywhere. After the National Assembly stripped AIDT of its legal textbook status, the Korea Textbook Association, together with YBM, Cheonjae, and Dong-A Publishing, formed an emergency response committee and filed a constitutional complaint, per Korea Herald. Approximately USD 850 million in government investment and USD 580 million in private investment now sit in legal limbo. At the same time, Seoul has accelerated a parallel set of moves. The TOPIK Korean-language exam will be fully digitised by 2029. The number of government-dispatched Korean-language teachers at foreign schools will rise from 77 to roughly 100 by 2026. Foreign AI graduates of designated Korean universities will become eligible for permanent residency in three years rather than six.
Even a national-scale deployment with this much money behind it can stall when nobody has answered the underlying question first: what the technology is actually doing inside the teaching.
The three stories are at different stages of the same cycle. Underneath, they are describing the same problem.
Governance and adoption are running on different clocks. Rules get written at the system level, top-down, by people writing carefully. AI gets used at the classroom level, lesson by lesson, by individual teachers improvising as they go. Rules can keep a school from doing obvious harm. They cannot, by themselves, make a school good. The 34% RAND figure is the cleanest evidence we have of this right now. Most teachers using AI do not yet feel it is improving their work. That is not a software problem. It is a teaching design problem. It is a problem about what is happening between an adult and a child in a room when neither of them is sure yet what the third thing in the room is for.
This is happening everywhere. The loudest signals this month are American and Korean, but the same question is landing in classrooms in Hong Kong, mainland China, the United Kingdom, and every other system serving international families. The exact date your local education ministry publishes its framework is a detail. The question waiting underneath, which is the question parents are about to spend ten years living with, is one level deeper than the date on the press release.
The question is not "What is your AI policy?" Almost every school worth the fees will have one within twelve months, and most will be reasonable. The question is what the school's teaching model looked like before AI arrived. That is harder to change in a year, and it determines a lot more of what AI does once it shows up.
A classroom with one adult and twenty-eight students is a different environment, with or without AI, from a classroom with two adults and the same children. A school where one team is responsible for thinking through AI across the curriculum is a different kind of organisation from one where every teacher figures it out alone with whatever tool they happened to download that morning.
AI policy sets a baseline of safety. Teaching design is what decides whether AI actually improves learning. The schools worth choosing have thought carefully about both, and they have an answer ready when you ask.
Our answer to this question is not new. The YCYW Education Network was founded in Hong Kong in 1932 and operates campuses in ten cities across Hong Kong, mainland China, the United States, and the United Kingdom, serving more than 12,000 students and staff. The Co-Teaching Model in our classrooms was not designed with AI in mind. We have been running it for decades. AI has just made the design choice we already made years ago a lot more obvious.
The YCYW Co-Teaching Model puts one Chinese national teacher and one international teacher in the same classroom with equal status, co-teaching every lesson. The two share responsibility for the same group of children. Students develop bilingual academic competence in reading, writing, and speaking, and are immersed in real-time fusion of Eastern and Western cultures every day. The same logic runs at the top: each YCIS campus has both a Chinese national and an international principal, sharing the leadership.
When AI shows up in a classroom like this, it shows up as a third presence, not a stand-in for the second adult. Two qualified humans see the same child all day. Either of them can step in the moment the algorithm starts pointing the child somewhere the child should not go.
The network-level work of getting AI into our classrooms thoughtfully sits with the YCYW EdFutures team. Their job is to drive how AI and other new tools are adopted across the curriculum, and to make sure our students are being taught how to use them well. EdFutures does not write every rule on its own. Subject teachers, curriculum leaders, and campus heads all weigh in. What EdFutures does carry is the responsibility for making sure a YCYW student in Shanghai, Hong Kong, Somerset, or Silicon Valley is being prepared to be a creator with these tools, not just a user, and that the same standard of care follows them as they move between our campuses.
Three questions work for any school you are considering this year, or any year after.
A school's AI policy tells you what the school has banned. Those three questions tell you what the school can actually teach a child to do.
The YCYW Education Network is a global education group founded in Hong Kong in 1932. It operates campuses in ten cities across Hong Kong, mainland China (Beijing, Shanghai, Guangzhou, Chongqing, Qingdao, Yantai, and Tongxiang in Zhejiang), and overseas (Silicon Valley in the United States and Somerset in the United Kingdom), serving more than 12,000 students and staff. The network's offering covers early childhood through postgraduate. It includes Yew Chung International School (YCIS) for international and qualifying students, Yew Wah International Education School (YWIES) for Chinese mainland students, and the Yew Chung College of Early Childhood Education (YCCECE), which is Hong Kong's first private degree-granting institution dedicated to early childhood education.
The YCYW EdFutures team is the network's strategic education innovation and research function. Its role is to drive how AI, programming, immersive technologies, and other emerging tools are adopted across YCIS and YWIES campuses, and to guide students in using these tools well. EdFutures does not act as the sole rule-setter for AI use in any single subject. Subject teachers, curriculum leaders, and campus heads share that responsibility. EdFutures also leads the World Classroom programme, an academically structured immersive learning experience based at the network's UK Somerset campus, and the Future School pilot at the Tongxiang campus in Zhejiang.
YCYW treats AI as a third presence in the classroom, alongside the existing pair of Co-Teachers, never as a substitute for an adult. The YCYW EdFutures team drives network-level adoption of AI in the curriculum and student-facing AI literacy. Subject teachers, curriculum leaders, and campus heads make the day-to-day decisions about when AI is and is not used in a given lesson. The CUGO (Careers and University Guidance Office), the network's university admissions support unit with more than 29 full-time advisors, also offers AI-assisted course planning to support student university preparation.
YCYW's holistic education is the network's century-long core philosophy, structured around three commitments: alignment with technology, alignment with the arts, and alignment with love and charity. It treats character development as equally important as academic outcomes, and weaves bilingual learning, the 12 Values character framework, service learning, and cross-cultural understanding into the daily curriculum. Day to day, the philosophy shows up through the Yew Chung Approach in early childhood, the Yew Chung Curriculum at primary level, and the Co-Teaching Model across the network.
YCYW operates campuses in ten cities: Hong Kong; Beijing, Shanghai, Guangzhou, Chongqing, Qingdao, Yantai, and Tongxiang in mainland China; Silicon Valley in the United States; and Somerset in the United Kingdom. The network also operates the Yew Chung College of Early Childhood Education (YCCECE) in Hong Kong, which in 2024 became the first private institution in Hong Kong authorised to offer postgraduate-level qualifications, breaking the public university monopoly at that level.