Test Agents, not Students
In the era of AI agents, individual problem-solving skill is no longer the paramount competency: AI will increasingly solve well-specified problems for us. Yet higher education remains centered on teaching problem-solving skills, and exams are the prime example of this.
Excelling at solving problems does not necessarily mean one is good at managing others. You have likely heard of cases where an outstanding worker gets promoted to a management role, only to become a terrible manager. Excelling individually and empowering a team to excel are fundamentally different tasks.
One of the most important competencies of a manager is setting the stage. If you design the system and framework, AI agents will independently solve problems within them. What we now need are methods to evaluate those systems and frameworks themselves.
For instance, we could evaluate how well AI agents operate within a system and framework designed by a student. DARPA’s AIxCC competition illustrates this well: teams win by having their bug-finding and patching agents run autonomously for 72 hours to find and fix the most bugs. A student’s ability to harness AI could be evaluated in much the same way.