
Gen Z and Alpha Are Tech-Savvy, but at What Cost?
Last updated: July 19, 2025



Generations Z and Alpha are impressively quick with technology. But a darker side is emerging: they may trade critical thinking for grades, substituting their own thoughts with algorithmically generated words devoid of soul. Over-reliance on AI threatens their ability to build real knowledge and cultivate independent thinking, especially while their foundational understanding is still forming.
Yet banning AI seems unrealistic in today’s world. The real question is: how can we use it wisely?
That question brings me back to December 2022, shortly after ChatGPT launched and quickly took off among users in the U.S. At the time, Stanford University had no clear policy on its use; each professor decided for themselves. Some banned it outright, while others embraced it as a tool worth experimenting with.
The “Introduction to Programming” class I took, taught by Prof. Nick Parlante, was one such case. He strictly prohibited AI usage, equating it to copying code from the Internet. To enforce this, the course screened submissions with MOSS (Measure of Software Similarity), a plagiarism detector developed at Stanford, to flag copied or suspiciously similar code, including answers generated by AI. The tool is hard to fool: even if students renamed variables, added comments, or tried to disguise the structure, MOSS could still trace a submission back to its source. Prof. Parlante was blunt: “We need clear policies to maintain classroom integrity.” His tough stance wasn’t just about stopping cheating; it underscored the importance of thinking for yourself.
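Tools like MOSS compare the structure of programs rather than their surface text; the production tool fingerprints normalized token streams with a winnowing algorithm. As a rough illustration (my own simplified sketch, not MOSS’s actual implementation), the Python below collapses every identifier to a placeholder before hashing token k-grams, which is why renaming variables or adding comments changes nothing:

```python
import re

# Keywords are kept verbatim; every other identifier collapses to "ID".
PY_KEYWORDS = {"def", "return", "if", "elif", "else", "for", "while",
               "in", "and", "or", "not", "import", "from", "as"}

def normalize(source: str) -> list[str]:
    """Drop comments and map all identifiers to one placeholder, so
    renamed variables yield an identical token stream."""
    source = re.sub(r"#.*", "", source)
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s]", source)
    return [t if t in PY_KEYWORDS or not re.match(r"[A-Za-z_]", t) else "ID"
            for t in tokens]

def fingerprints(tokens: list[str], k: int = 5) -> set[int]:
    """Hash every k-gram of the normalized token stream."""
    return {hash(tuple(tokens[i:i + k])) for i in range(len(tokens) - k + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two programs' fingerprint sets."""
    fa, fb = fingerprints(normalize(a)), fingerprints(normalize(b))
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

# Same algorithm, cosmetically disguised: renamed variables, added comment.
original = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
disguised = "def acc(vals):  # looks different\n    r = 0\n    for v in vals:\n        r += v\n    return r"
print(similarity(original, disguised))  # prints 1.0
```

Only genuinely restructuring the logic lowers the overlap score; cosmetic edits leave the normalized stream, and therefore the fingerprints, untouched.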
Rather than solely imposing restrictions, Stanford also adapted its evaluation methods. For example, computer science students were required to write code by hand during exams, rather than typing on a computer, to reduce AI interference. Grading was based on problem-solving logic and reading comprehension, not flawless syntax. This made me focus more on the thought process than just getting the right answer—a valuable lesson in the essence of learning.
Not all instructors were wary of AI. In our “Construction and Project Management” class, Prof. Martin Fischer—a world expert in virtual design and construction—enthusiastically encouraged AI use. He often shared humorous stories generated by ChatGPT. However, for realistic group simulation projects, students were still required to analyze data, plan strategies, and deliver in-person presentations—skills no AI could replicate. This experience taught me that when used properly, AI can enhance productivity without undermining independent thinking.
In our “Real Estate Tech Seminar,” instead of typical essays, we wrote reflective pieces after each guest speaker session, incorporating personal insights. These assignments demanded critical thinking in response to real-world events—making AI largely irrelevant. In fact, attempting to rely on AI became counterproductive, since its generic responses couldn’t replace personal reflections shaped by individual culture, worldview, or life experience.
Many top universities around the world are proactively developing policies to manage AI use in education and research. Rather than imposing blanket bans, they adopt flexible, controlled approaches—balancing AI's benefits with the need to preserve academic honesty and independent thought.
At Oxford University, students are encouraged to use AI as a learning aid—so long as they disclose their usage, particularly in assessments. Harvard, Stanford, MIT, and Cambridge share a similar stance: AI can assist with brainstorming, grammar checks, or information synthesis—but cannot replace genuine individual effort.
In Asia, the National University of Singapore (NUS) and Tsinghua University have also made it clear that AI use must comply with academic integrity and data privacy rules. Some institutions, such as Peking University, enforce stricter penalties—including the potential revocation of degrees if AI-assisted cheating is discovered.
The common thread in these policies is flexibility and balance. Many universities allow professors to decide how much AI is permitted in each course, trusting their judgment and encouraging transparent dialogue between instructors and students under shared principles.
This flexible approach makes sense for at least three reasons. First, banning AI doesn’t mean students will stop using it. In an era when AI is embedded in most software tools, prohibitions only drive its use underground, making it harder to monitor and depriving students of the chance to learn responsible AI use.
Second, using AI is not inherently cheating. If a student uses AI to analyze texts, ask critical questions, or explore new perspectives, that is part of learning. The issue lies not in the tool but in the intention and application.
Third, the ability to work with AI is an essential future skill. Just as computers were once banned in classrooms and are now indispensable, AI will soon become a natural part of both academia and the workplace. Without proper guidance, students risk falling behind in the job market.
To manage AI in education effectively, we need to move away from rote-memory tests and focus on assignments that require analysis, synthesis, and real-world application. Project-based tasks, rather than traditional exams, help students build teamwork and problem-solving skills.
Handwritten exams, oral assessments, and real-life case studies are useful tools to evaluate independent thinking and reduce cheating. Additionally, ethics training on responsible AI usage should become a regular part of school curricula.
AI itself is not the problem. But without clear guidance, it risks becoming a “crutch” that weakens students’ ability to think for themselves. Instead of bans, education needs flexible solutions. What truly matters is maintaining the boundary between “assisting” and “replacing.”
No matter how far technology advances, core human skills like critical thinking, creativity, and collaboration must still be nurtured from within. The more powerful the technology becomes, the more we must return to the most fundamental form of thinking: asking our own questions, finding our own answers, and growing from those inquiries.
Quan Trinh
