Every Tuesday morning, the scene in a large undergraduate STEM course at MIT looks much as it has for decades: rows of students, a professor at the front, a teaching assistant hovering near the back keeping an eye on things. Look closer at what students do when they get stuck on a problem, though, and something has changed. More of them open a chatbot before they even consider raising a hand.
This is MIT in 2026, where one of the most enduring fixtures of a university education, the teaching assistant, is quietly being renegotiated.
Rea Lavi, a researcher in the Department of Aeronautics and Astronautics, has been developing SIDAI, a web-based platform that pairs with a chatbot named Sid, to facilitate active learning in large classrooms. The concept is simple; the problem it targets is not. Active learning, which asks students to analyze, synthesize, and wrestle with material rather than passively absorb it, is the kind of learning that sticks, and it demands personalized feedback that does not scale. One teaching assistant can meaningfully engage maybe fifteen students in an hour. That math does not work in a lecture hall with two hundred people. Sid is designed to bridge the gap, offering real-time, personalized feedback at a scale no human TA could match, while the real person in the room concentrates on the harder conversations.
The division of labor makes sense in theory. Lavi defines success pragmatically: high levels of satisfaction and perceived efficacy from both teachers and students. The platform is being piloted in undergraduate STEM courses through collaborations with Purdue University and Aalborg University in Denmark. Eventually the data will have to answer whether students in those courses feel genuinely supported by an AI tutor or simply find it handy for quick debugging questions.
Other researchers in Cambridge are pushing the concept much further. At the MIT Media Lab, Nataliya Kosmyna and Pattie Maes are testing NeuroChat, a platform that uses wearable EEG headbands to track a student's brain activity and adjust the complexity of its explanations in real time as engagement rises or falls. A student sitting at a desk with electrodes on her head while a chatbot tunes its vocabulary to her cognitive state is either the logical endpoint of personalized education or a prompt for a deeper conversation about what learning is actually for. Most likely both.
| Field | Details |
|---|---|
| Topic | AI integration in MIT classrooms and the evolution of the teaching assistant role |
| Key Initiative | MIT Open Learning / Jameel World Education Lab (J-WEL) |
| Featured AI Tool | SIDAI platform and “Sid” chatbot — developed by Rea Lavi, Dept. of Aeronautics and Astronautics |
| Other AI Projects | NeuroChat (MIT Media Lab), vocabulary tutoring AI (McGovern Institute), social robot for refugee children (MIT Media Lab) |
| Key Researchers | Rea Lavi, Pattie Maes, Nataliya Kosmyna, Janet Rankin, Justin Reich, Mitch Resnick, Daniella DiPaola |
| Supporting Institution | MIT RAISE Initiative — MIT Media Lab, MIT Schwarzman College of Computing, MIT Open Learning |
| Core Shift | AI handles routine tasks (grading, Q&A, feedback); human TAs focus on mentorship and critical thinking |
| Pilot Partners | Aalborg University (Denmark), Purdue University |
| Key Concern | Equity and access; risk of AI widening existing educational disparities |
| Policy Gap | No federal framework yet for AI use in K-12 or higher education |
| Location | MIT campus, Cambridge, Massachusetts |

Janet Rankin, director of the MIT Teaching + Learning Lab, has been careful to frame AI as a tool that comes after pedagogy rather than the other way around. Educators, she argues, should first decide what they want students to be able to do, then work out how AI fits that objective. It sounds straightforward. In practice, though, the technology usually arrives before the framework, and institutions retrofit the philosophy after the tools are in place. Calculators, the internet, and laptops all followed this pattern. There is no particular reason AI will be different.
The equity questions look like the hardest ones to answer. MIT researchers working on AI policy have long raised concerns about what happens when the best AI tools require fast hardware, carry subscription fees, or are built by a small group of developers. Daniella DiPaola, a graduate student who co-wrote a policy brief on AI in K-12 education, put it plainly: AI systems should support teachers, not replace them. In an underfunded school, swapping a human teacher for an AI tutor is not progress; it is a cost-cutting strategy disguised as innovation.
Watching all of this unfold at MIT, I get the impression of a school sincerely trying to think carefully about something most of the world is reacting to in real time. The Sid pilot is still running. The NeuroChat experiments have yet to leave the laboratory. There is still no federal policy framework to direct any of this at scale, and the state-by-state patchwork that has filled the void is inconsistent at best. MIT seems to recognize, perhaps more than most, that the question was never whether AI would enter the classroom. It has always been who gets to decide what it does once it is there.
