Where Intelligence Ends, Wisdom Begins: Mr. Naresh Miglani on the Paradox of Progress

As technology grows more intelligent, do you think humanity is growing any wiser?

That question touches on the paradox of our age – we’ve never been more capable, and yet perhaps never more uncertain of what to do with that capability. Intelligence today has become measurable, programmable, and endlessly replicable. Wisdom, however, remains stubbornly human – it resists automation because it demands context, empathy, and restraint.

Technology can process, predict, and perform, but it cannot pause. Wisdom lives in that pause – in the hesitation before action, in the awareness that not everything that can be done should be done. We are surrounded by systems that can outthink us in speed, yet none that can match us in conscience. And that gap – between intelligence and intention – is where our responsibility now lies.

I don’t think humanity automatically becomes wiser as its tools become smarter. If anything, progress tempts us to confuse access with understanding, convenience with clarity. The challenge of this century is not keeping up with technology, but staying awake within it – ensuring that our inventions don’t dull the very capacities that made them possible: curiosity, discernment, wonder.

Wisdom demands a kind of humility that technology doesn’t teach. It asks us to remember that intelligence is only power, and power without direction can as easily destroy as it can transform. So perhaps the real question is not whether we are becoming wiser, but whether we are still willing to.

Because if intelligence is about building systems that think, wisdom is about building lives that matter. And that remains, for now, a profoundly human task.

The Human Spark in Programming: Ms. Hema Jain on Creativity Behind the Syntax

If code is just logic made visible, does that mean emotion and a human touch have no place in computer science?

It would be naive to believe that programming is just logic. On the surface, it’s clean, structured, and exact. But behind every program is a person trying to make something work, trying to solve a problem that matters to them. That drive comes from varied emotions – curiosity, frustration, excitement, even obsession.

The beauty of computing isn’t just in precision; it’s in the moments of chaos before clarity, when you’re experimenting, failing, and suddenly finding a solution. Machines can follow rules perfectly, but it’s emotion that makes us question those rules, imagine new ones, or visualise the final result where others see syntax. Even innovation itself often begins with a feeling – the sense that “this could be better” or “this is possible”. Programs are made with a purpose, by human beings, for human beings, which makes the whole process inherently human.

So yes, code might be logic made visible, but emotion is what makes it worth writing. Without that human spark, computer science would just be math – not magic. And maybe that’s the secret difference between a program that merely runs and one that truly creates impact. In the end, logic gives computers their structure, but emotion gives computer scientists their purpose. It’s what keeps us curious, restless, and endlessly willing to try again – until the logic finally feels right.

The Worlds We Build: Mr. Mohitendra Dey on Gaming and the Architecture of Evolving Minds

Do you think gaming is shaping how this generation thinks about logic, competition, and creativity – or is it just reflecting it?

I think gaming reflects this generation far more than it shapes it. Games are, in a sense, mirrors – built from the collective imagination, desires, and frustrations of the people who play them. The logic in modern games mirrors how we think: fast, parallel, always multitasking. That constant flow of information and decision-making feels familiar because it comes from our own lived reality – a reality filled with choices, distractions, and a constant need to adapt quickly. Gaming doesn’t create that rhythm; it translates it into play. The creativity we see in game design – open worlds, nonlinear storytelling, moral choice systems – comes from a generation that questions structure and values freedom. Even competition in gaming feels less about domination and more about self-expression; people now play not only to prove themselves and their skills, but also to find community.

So, I see gaming as a kind of cultural feedback loop. It doesn’t tell us how to think – it reveals how we already do. Every mechanic, every narrative choice, is a reflection of our priorities as a society: our collective and constant pursuit of excellence, which drives our very progress. 

Awareness as a Skill: Ms. Anjana Virmani on Cultivating Conscious Digital Habits

What’s the most important digital habit students should build early on?

I believe the most important digital habit students should build early on is digital mindfulness – the conscious awareness of what they are doing online, why they are doing it, and how their actions affect themselves and others. In today’s fast-paced world, it is easy for students to use technology passively, focusing only on convenience or entertainment. However, true digital literacy is not just about learning to operate devices efficiently, but about learning to use them responsibly and thoughtfully.

Whether they are searching for information, communicating with others, or creating content, students should pause, think carefully, and make informed choices. For example, verifying the credibility of online sources, weighing the risks of sharing personal information, and understanding the tone and reach of their messages are all vital skills. These habits not only ensure safe and ethical internet use but also help students develop critical thinking and empathy in a digital environment.

As the saying goes, “We shape our tools, and thereafter our tools shape us.” This is especially true for young learners, whose early experiences with technology often shape their lifelong digital behavior. By cultivating this awareness from the beginning, we can help students become thoughtful digital citizens who navigate the online world responsibly. Digital mindfulness equips them with skills that go far beyond the classroom and prepares them to contribute to society in a digital age.

The Human Algorithm: Ms. Shalini Harisukh on Teaching Computer Science Beyond Devices

If you could teach computer science without using any computers, what would the class look like?

If I had to teach computer science without computers, I’d focus on building the students’ core computational thinking through hands-on activities. Students would act out sorting algorithms to understand logic and efficiency, and model data structures using physical objects. I would teach Boolean logic through roleplay, with students simulating logic gates and signal flow. Binary encoding could be explained through beadwork or Morse code, making data representation creative and memorable. Programming could become storytelling: students write and debug step-by-step instructions for everyday tasks. To simulate networking, I would pass messages across a classroom “internet,” exploring latency and protocols. Finally, we’d hold discussions on ethics, bias, and the philosophy of computing to reflect on technology’s broader impact. Without screens, the class would become a dynamic, human-centered exploration of how computers think—and how we think through them.
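As a reference for that first activity, the procedure a class might act out – comparing neighbours and swapping places until everyone is in order – corresponds to bubble sort. A minimal sketch in Python (the example list is, of course, arbitrary):

```python
def bubble_sort(values):
    """Sort a list by repeatedly swapping adjacent out-of-order pairs --
    exactly what students act out when they compare with a neighbour
    and swap places."""
    items = list(values)  # work on a copy
    for round_end in range(len(items) - 1, 0, -1):
        for i in range(round_end):
            if items[i] > items[i + 1]:      # neighbours out of order?
                items[i], items[i + 1] = items[i + 1], items[i]  # swap
    return items

print(bubble_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

Each outer pass "bubbles" the largest remaining value to the end of the line – a pattern students notice quickly when they are the values being sorted.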

The New Canvas: Ms. Sarika Kaushal on Technology as a Language of Creativity

Is technology limiting human creativity or becoming the greatest tool for expressing it?

Technology has always had two sides: it can open doors, and it can close them. But in my experience as an educator, technology feels less like a threat and more like a new language of creativity. It is not here to replace human imagination, but to extend its reach. I see technology as a new kind of paintbrush, camera, stage, and notebook all at once — a toolkit that allows ideas to travel farther and take shape in ways that once felt impossible.

In the classroom, I have seen students who rarely spoke or shared during discussions quietly design beautiful digital artwork, compose music using software, or build animations that express emotions they couldn’t put into words. Technology, in these moments, doesn’t just help them create — it gives them confidence. Creativity is no longer limited to those who draw well or write beautifully. Now, any child with a spark of curiosity can build, shape, and express.

At the same time, technology has made collaboration richer. Students learn to co-create across screens, combine ideas, and learn from one another in real time. A poem can become a song, which can become a video, which turns into a shared project. Technology dissolves boundaries — between subjects, between skill levels, and sometimes even between people.

But I also see the other side — the quiet dependency that can creep in. When students rely on quick templates, instant answers, or pre-made effects, creativity becomes surface-level. The joy of struggling, exploring, and discovering something new begins to fade. Sometimes, technology can make things so easy that the imagination doesn’t get the workout it needs to grow strong.

This is where our role becomes meaningful. Creativity does not come from the tools — it comes from the heart and mind of the learner. Technology simply gives it shape. Our task is to help students stay curious, to ask “What if?” and “Why not?”, to stretch their ideas rather than settle for the first result the computer offers.

So, is technology limiting creativity or enhancing it?
The truth is gentle and simple: technology becomes what we teach it to be.

Used with intention, it lets students dream bigger, see beyond what is visible, and express what is deeply personal. Used without reflection, it reduces creativity to convenience.

In the end, technology should not replace human imagination — it should celebrate it. And our job is to help students use these tools not to avoid thinking, but to deepen thinking, to build, to wonder, and to create something that feels unmistakably theirs.

Engineering Equality: Mr. Ajithkumar on Empowering Girls in Robotics

Our school has had female presidents in Roboknights. What challenges do you think girls face in the robotics field, and how have you helped them overcome these challenges?

We have made significant progress in integrating women into STEM fields, particularly robotics, but this journey has not been without its challenges. There is still a lack of female role models in the field whom young women can look up to for motivation and inspiration. Additionally, due to gender biases and stereotypes, many women experience imposter syndrome, even when they possess better skills than their male counterparts. These social barriers often restrict women to specific roles, such as design positions, even when they are capable of so much more.

Roboknights, as a club, is built on the philosophy of gender equality and the belief that hard work and determination should be the only parameters of judgment, not gender or other superficial characteristics. As a teacher, I have always encouraged my students to relentlessly pursue their passion and to block out negativity and societal limitations. I ensure that every girl participating in Roboknights receives an equal playing field and that her enthusiasm for robotics continues to thrive.

From Lavanya Jose, Manjusha Roy Chaudhary, and Richa Sharma in 2007 to Jasnoor Kaur and Swarnika Bhardwaj as office bearers of the Roboknights club in 2023, we have ensured that girls take up leadership roles in this club. Today, we have female members who actively contribute and bring glory to the name of the club. At Roboknights, we take pride in nurturing the female role models of tomorrow – young women who will inspire the next generation and help revolutionise the field of robotics by breaking gender biases.

Shaping Thinkers for the Future: Mr. Shekhar on Building a Future-Focused Skillset

With AI transforming so many aspects of daily life, what skills do you think today’s students should develop to stay future-ready and stand out?

There is no doubt that Artificial Intelligence is rapidly reshaping every sphere of human life, and students must cultivate a balanced set of technical and human-centric skills to stay future-ready and stand out. Instead of distancing themselves from technology, students must learn to adapt to these changes. They need a strong base in digital literacy and computational thinking: a clear understanding of how algorithms function, how data is processed, and how AI systems operate will empower them to engage with technology intelligently rather than passively.

Beyond technical skills, critical thinking and problem-solving abilities are extremely important. At the end of the day, Artificial Intelligence is just that – artificial. It can provide information and solve problems using data, but it is the human mind that must frame meaningful questions, interpret outcomes, and apply them judiciously to real-world situations. Additionally, communication, collaboration, and ethical awareness must not be overlooked. AI lacks empathy and moral reasoning; therefore, students who can articulate ideas effectively, work harmoniously in teams, and evaluate the societal impact of technology will be invaluable contributors to any field.

And of course, the qualities of curiosity and resilience will ensure that students perceive AI not as a threat, but as a partner in progress. Those who approach technology with an open, creative mind and the courage to experiment will not only adapt to change but also help shape its direction. They will be able to use AI as a tool to extend their abilities, rather than replace them. As Sundar Pichai rightly remarked, “The future of AI is not about replacing humans, it’s about augmenting human capabilities.”

The Rise of Open-Source LLMs

In every era of innovation, there comes a moment when our machines start to reflect our ethics as much as our engineering. Just as early software like antivirus tools was built to protect and empower the public, today we are entering a new phase, one defined by open-source large language models (LLMs).

LLMs have quickly become one of the defining technologies of our time. These systems are trained on vast datasets, enabling them to understand and generate natural language. Rather than being manually programmed with hardcoded rules, they learn patterns from data and use that knowledge to generate meaningful responses.
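The idea of learning patterns from data rather than hardcoding rules can be illustrated at toy scale. The sketch below is a deliberately simplified stand-in for a real LLM (which uses neural networks and billions of words, not word counts on one sentence): it counts which word tends to follow which in a tiny corpus, then predicts the most likely next word.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- real LLMs learn from billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the pattern: for each word, count what follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" most often here)
```

Nothing here was programmed to know English; the prediction emerges entirely from the statistics of the data – the same principle, scaled up enormously, behind modern LLMs.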

But behind the rapid progress lies a philosophical divide: open-source versus proprietary AI. Open-source software represents transparency and collective responsibility. Its publicly available code enables collaboration, independent auditing, customization, and often higher-quality systems born from community oversight. In the context of LLMs, openness also means better control over data and greater user confidence that models aren’t trained on harmful or invasive information.

Most widely used LLMs today, such as Google’s Gemini or Anthropic’s Claude, are closed-source, with their training data kept private. However, the rise of open alternatives has been swift. Early efforts like EleutherAI’s GPT-NeoX paved the way, and Meta’s release of the LLaMA family marked a turning point by offering high-performing models under a permissive license. The shift is reminiscent of Apple’s “1984” moment, when the company positioned itself as the rebel in a world dominated by a single corporate vision. In a similar way, open-source LLMs provide users with choice, transparency, and freedom without sacrificing quality.

Ultimately, the rise of open-source LLMs is more than a technological trend; it is a cultural movement. As global communities build, test, critique, and refine these models, we return to a core philosophy of the early internet: knowledge grows fastest when shared. Open-source LLMs embody that spirit by letting researchers inspect what happens inside, giving developers the freedom to experiment, and empowering users to shape the tools they rely on.

Proprietary systems may still dominate in polish and scale, but open-source innovation is pushing the field in ways closed models cannot. The future of AI will not be defined by who owns the biggest model; it will be shaped by who enables the most people to understand, build, and benefit from these technologies. If the momentum continues, the next era of AI will not belong to a single company. It will belong to all of us.

– Govind Narayan

From Cats and Pigeons to AlphaGo

In 1905, American psychologist Edward Thorndike proposed the Law of Effect through his puzzle box experiments, in which he placed cats inside boxes that could only be opened by performing certain actions, like pressing a lever. Thorndike observed that when the cats were rewarded with food after escaping, they repeated the behaviours that led to success. From this, he concluded that actions followed by satisfying outcomes are more likely to recur. 

Decades later, Burrhus Frederic Skinner expanded on Thorndike’s work. During World War II, he launched the Pigeon Project, an attempt to guide missiles using pigeons trained to peck at enemy targets. Skinner rewarded the birds with food when they pecked correctly, shaping their behaviour through reinforcement. The plan was to place the pigeons into the nose of a warhead, where they would steer it by pecking at a moving image of the target.

Though the project was never deployed, its legacy lasted far longer than the war. Skinner’s experiments helped define the theory of Operant Conditioning – the idea that behaviour is strengthened or weakened by its consequences. It became one of the most powerful explanations of how humans and animals learn, and, unknowingly, a blueprint for how machines would one day do the same.

In reinforcement learning, a key branch of Artificial Intelligence, the same principle of trial and error resurfaces in digital form. An AI agent interacts with its environment, performs actions, and receives feedback in the form of rewards or penalties. This feedback can take the form of positive or negative reinforcement (much like a pet receiving, or not receiving, a treat after completing a task) or punishment. Over time, the agent learns which strategies maximize its long-term rewards.
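That loop of action, feedback, and updated behaviour can be sketched in a few lines. Below is a minimal, illustrative Q-learning agent (one standard reinforcement-learning algorithm; the corridor environment, reward values, and parameters are invented for the example, not taken from any real system). The agent starts knowing nothing and learns, purely from reward feedback, that stepping right leads to the treat.

```python
import random

random.seed(0)  # reproducible exploration

# A corridor of 5 states; reaching state 4 earns a reward of +1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                  # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q[state][action]: the agent's estimate of long-term reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Trial, feedback, improvement: nudge the estimate toward the outcome.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy in every state is to step right.
print([("left", "right")[q[1] > q[0]] for q in Q[:GOAL]])
```

Like Skinner’s pigeons, the agent is never told the rule; repeated reward simply makes the successful behaviour more likely, which is exactly the Law of Effect in numerical form.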

In the words of Richard S. Sutton and Andrew G. Barto, “Reinforcement learning problems involve learning what to do—how to map situations to actions—so as to maximize a numerical reward signal.”

This approach reached a milestone with Google DeepMind’s AlphaGo, the program that famously defeated 18-time world champion Lee Sedol in 2016. Go, an ancient Chinese board game, has an estimated 10¹⁷⁰ possible board configurations. AlphaGo first studied a vast library of games played by humans, then improved through round after round of play against itself – a digital echo of Skinner’s training cycles.

In the match, AlphaGo played with startling creativity. It made a move, known as Move 37, so unconventional that experts estimated only a 1 in 10,000 chance a human would choose it. That single move marked a turning point: AI was no longer just imitating human intelligence, it was demonstrating a surprising level of creativity.

Today, the same reinforcement principles guide self-driving cars, robotic systems, and adaptive algorithms across industries; such models can make decisions even in unpredictable environments. And yet, their foundation remains unchanged: trial, feedback, and improvement.

So, the next time you see a pigeon pecking at crumbs, remember that its behaviour has inspired the mechanics of modern intelligence. From Thorndike’s cats to Skinner’s pigeons to AI models like AlphaGo, DeepSeek’s R1, OpenAI’s o1 and Anthropic’s Claude Opus 4, the thread is clear: learning, whether human, animal, or artificial, always begins the same way – by trying, failing, and trying again until you succeed.

– Nikunj Kohli