
The BDN Opinion section operates independently and does not set news policies or contribute to reporting or editing articles elsewhere in the newspaper or on bangordailynews.com.
Abhilasha Kumar is an assistant professor of psychology at Bowdoin College. This column reflects her views and expertise and does not speak on behalf of the college. She is a member of the Maine chapter of the national Scholars Strategy Network, which brings together scholars across the country to address public challenges and their policy implications.
The release of R1, the latest artificial intelligence (AI) model from Chinese firm DeepSeek, has grabbed the attention of politicians and scientists alike. Within a week, DeepSeek overtook ChatGPT as the most downloaded free app, inviting rave reactions from tech enthusiasts about how R1 will fundamentally change the AI race.
As an educator, I find this moment eerily similar to when ChatGPT was the new kid on the block. A mild panic ensued across schools and college campuses about students using AI to complete coursework. Some instructors moved back to in-person testing, while others tried to reinvent their teaching methods and course policies.
Cheating, however, is only part of the puzzle. Regardless of whether students learn with or without AI, the simple fact remains that these models are all around them. Not every student will join the tech workforce, but it is highly likely that their workplaces will use AI in some capacity. What our students need is the confidence to engage with these technologies in a critical yet informed manner. To achieve this goal, AI literacy has to come first, over and above AI use.
The use of AI in educational spaces is troubling right now because both students and educators lack guardrails and a clear understanding of how these models work. I have allowed students to use AI in my courses (with important caveats), which has helped me better understand what they know about AI.
When students are working on a problem that requires refining ideas, such as designing a psychology experiment or writing a paper, AI can serve as an excellent learning tool by offering suggestions and helping them work through minor errors. For deeper conceptual questions, however, I have found that students readily believe the AI’s response instead of evaluating its correctness. These behaviors are informative: they show us that students don’t understand how the models work or what they are best used for.
Language models are trained with the broad goal of mimicking human language. Even though their specific training objectives vary, and they can achieve relatively high performance on select tests, off-the-shelf models are not designed to thoughtfully combine multiple reliable sources of information or to provide nuanced responses to deeper, more open-ended questions. Knowing this is essential to deciding how one should engage with AI.
What we need, however, is a show-not-tell approach. Instead of handing students these tools and leaving them to puzzle over black boxes on their own, we need to show them how the models learn and how and why they fail, with concrete examples, scientific rigor and dedicated coursework. Learning the inner workings of AI models may feel unnecessary, even daunting, but it is critical to helping students develop a degree of warranted skepticism. Our students need to be incisive, not naive, in their use of AI.
This ability to adapt and think critically is also likely to make our students more attractive to prospective employers. Now more than ever, it is important to consider which skills educators can impart that a language model cannot more easily deliver. The answer has to be linked to the fundamental human capacity to be flexible “all-rounders.” Educators and higher education institutions are well positioned to help students become active and competent voices in the AI revolution.
Recent efforts to bring educators together to brainstorm approaches to teaching with AI are a step in the right direction. Teaching AI fundamentals should be a priority in these conversations, for example through expert-led workshops on how AI models learn, what data they are trained on, and the implications of these engineering decisions for the performance and risks of using AI.