All posts by Nabhay Khanna

The New Canvas: Ms. Sarika Kaushal on Technology as a Language of Creativity

Is technology limiting human creativity or becoming the greatest tool for expressing it?

Technology has always had two sides: it can open doors, and it can close them. But in my experience as an educator, technology feels less like a threat and more like a new language of creativity. It is not here to replace human imagination, but to extend its reach. I see technology as a new kind of paintbrush, camera, stage, and notebook all at once — a toolkit that allows ideas to travel farther and take shape in ways that once felt impossible.

In the classroom, I have seen students who rarely spoke or shared during discussions quietly design beautiful digital artwork, compose music using software, or build animations that express emotions they couldn’t put into words. Technology, in these moments, doesn’t just help them create — it gives them confidence. Creativity is no longer limited to those who draw well or write beautifully. Now, any child with a spark of curiosity can build, shape, and express.

At the same time, technology has made collaboration richer. Students learn to co-create across screens, combine ideas, and learn from one another in real time. A poem can become a song, which can become a video, which turns into a shared project. Technology dissolves boundaries — between subjects, between skill levels, and sometimes even between people.

But I also see the other side — the quiet dependency that can creep in. When students rely on quick templates, instant answers, or pre-made effects, creativity becomes surface-level. The joy of struggling, exploring, and discovering something new begins to fade. Sometimes, technology can make things so easy that the imagination doesn’t get the workout it needs to grow strong.

This is where our role becomes meaningful. Creativity does not come from the tools — it comes from the heart and mind of the learner. Technology simply gives it shape. Our task is to help students stay curious, to ask “What if?” and “Why not?”, to stretch their ideas rather than settle for the first result the computer offers.

So, is technology limiting creativity or enhancing it?
The truth is gentle and simple: technology becomes what we teach it to be.

Used with intention, it lets students dream bigger, see beyond what is visible, and express what is deeply personal. Used without reflection, it reduces creativity to convenience.

In the end, technology should not replace human imagination — it should celebrate it. And our job is to help students use these tools not to avoid thinking, but to deepen thinking, to build, to wonder, and to create something that feels unmistakably theirs.

Engineering Equality: Mr. Ajithkumar Sir on Empowering Girls in Robotics

Our school has had female presidents in Roboknights. What challenges do you think girls face in the robotics field, and how have you helped them overcome these challenges?

We have made significant progress in integrating women into STEM fields, particularly robotics, but the journey has not been without its challenges. There is still a shortage of female role models in the field whom young women can look up to for motivation and inspiration. Moreover, because of gender biases and stereotypes, many women experience imposter syndrome even when they are more skilled than their male counterparts. These social barriers often confine women to specific roles, such as design positions, even when they are capable of far more.

Roboknights, as a club, is built on the philosophy of gender equality and the belief that hard work and determination should be the only parameters of judgement, not gender or other superficial characteristics. As a teacher, I have always encouraged my students to pursue their passion relentlessly and to block out negativity and societal limitations. I ensure that every girl participating in Roboknights gets a level playing field and that her enthusiasm for robotics continues to thrive.

From Lavanya Jose, Manjusha Roy Chaudhary, and Richa Sharma in 2007 to Jasnoor Kaur and Swarnika Bhardwaj as office bearers of the “Roboknights” club in 2023, we have made sure that girls take up leadership roles in this club. Today, our female members actively contribute and bring glory to the club’s name. At Roboknights, we take pride in nurturing the female role models of tomorrow: young women who will inspire the next generation and help revolutionise the field of robotics by breaking gender biases.

Shaping Thinkers for the Future: Mr. Shekhar on Building a Future-Focused Skillset

With AI transforming so many aspects of daily life, what skills do you think today’s students should develop to stay future-ready and stand out?

There is no doubt that Artificial Intelligence is rapidly reshaping every sphere of human life, and to stay ready for this era of technology and stand out, students must cultivate a balanced set of technical and human-centric skills. Instead of distancing themselves from technology, they must learn to adapt to these changes. That begins with a strong base in digital literacy and computational thinking: a clear understanding of how algorithms function, how data is processed, and how AI systems operate will empower students to engage with technology intelligently rather than passively.

Beyond technical skills, critical thinking and problem-solving abilities are extremely important. At the end of the day, Artificial Intelligence is just that: artificial. It can provide information and solve problems using data, but it is the human mind that must frame meaningful questions, interpret outcomes, and apply them judiciously to real-world situations. Communication, collaboration, and ethical awareness must not be overlooked either. AI lacks empathy and moral reasoning; students who can articulate ideas effectively, work harmoniously in teams, and evaluate the societal impact of technology will therefore be invaluable contributors to any field.

And of course, the qualities of curiosity and resilience will ensure that students perceive AI not as a threat, but as a partner in progress. Those who approach technology with an open, creative mind and the courage to experiment will not only adapt to change but also help shape its direction. They will be able to use AI as a tool to extend their abilities, rather than replace them. As Sundar Pichai rightly remarked, “The future of AI is not about replacing humans, it’s about augmenting human capabilities.”

The Rise of Open-Source LLMs

In every era of innovation, there comes a moment when our machines start to reflect our ethics as much as our engineering. Just as early software like antivirus tools was built to protect and empower the public, today we are entering a new phase, one defined by open-source large language models (LLMs).

Large language models have quickly become one of the defining technologies of our time. These systems are trained on vast datasets, enabling them to understand and generate natural language. Rather than being manually programmed with hardcoded rules, they learn patterns from data and use that knowledge to generate meaningful responses.
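The idea that models learn patterns from data rather than following hand-written rules can be shown in miniature. The toy bigram model below is my own illustrative sketch, orders of magnitude simpler than a real LLM: it learns which word tends to follow which from a three-sentence corpus, then generates text from those learned statistics.

```python
import random
from collections import defaultdict

# A toy "language model": nobody writes grammar rules for it. Instead, it
# learns word-to-word patterns (bigrams) from a tiny training corpus.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Training: count which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Generate text by repeatedly sampling a learned continuation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Real LLMs replace the bigram table with a neural network over billions of parameters, but the principle is the same: behaviour emerges from data, which is exactly why questions about what data a model was trained on matter so much.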

But behind the rapid progress lies a philosophical divide: open-source versus proprietary AI. Open-source software represents transparency and collective responsibility. Its publicly available code enables collaboration, independent auditing, customization, and often higher-quality systems born from community oversight. In the context of LLMs, openness also means better control over data and greater user confidence that models aren’t trained on harmful or invasive information.

Most widely used LLMs today, such as Google’s Gemini or Anthropic’s Claude, are closed-source, with their training data kept private. However, the rise of open alternatives has been swift. Early efforts like EleutherAI’s GPT-NeoX paved the way, and Meta’s release of the LLaMA family marked a turning point by offering high-performing models under a permissive license. The shift is reminiscent of Apple’s “1984” moment, when it positioned itself as the rebel in a world dominated by a single corporate vision. In a similar way, open-source LLMs give users choice, transparency, and freedom without sacrificing quality.

Ultimately, the rise of open-source LLMs is more than a technological trend; it is a cultural movement. As global communities build, test, critique, and refine these models, we return to a core philosophy of the early internet: knowledge grows fastest when shared. Open-source LLMs embody that spirit by letting researchers inspect what happens inside, giving developers the freedom to experiment, and empowering users to shape the tools they rely on.

Proprietary systems may still dominate in polish and scale, but open-source innovation is pushing the field in ways closed models cannot. The future of AI will not be defined by who owns the biggest model; it will be shaped by who enables the most people to understand, build, and benefit from these technologies. If the momentum continues, the next era of AI will not belong to a single company. It will belong to all of us.

– Govind Narayan

From Cats and Pigeons to AlphaGo

In 1905, American psychologist Edward Thorndike proposed the Law of Effect, drawing on his puzzle box experiments in which he placed cats inside boxes that could only be opened by performing certain actions, like pressing a lever. Thorndike observed that when the cats were rewarded with food after escaping, they repeated the behaviours that led to success. From this, he concluded that actions followed by satisfying outcomes are more likely to recur.

Decades later, Burrhus Frederic Skinner expanded on Thorndike’s work. During World War II, he launched Project Pigeon, an attempt to guide missiles using pigeons trained to peck at enemy targets. Skinner rewarded the birds with food when they pecked correctly, shaping their behaviour through reinforcement. The plan was to place the pigeons into the nose of a warhead, where they would steer it by pecking at a moving image of the target.

Though the project was never deployed, its legacy lasted far longer than the war. Skinner’s experiments helped define the theory of Operant Conditioning: the idea that behaviour is strengthened or weakened by its consequences. It became one of the most powerful explanations of how humans and animals learn, and, unknowingly, a blueprint for how machines would one day do the same.

In reinforcement learning, a key branch of Artificial Intelligence, the same principle of trial and error resurfaces in digital form. An AI agent interacts with its environment, performs actions, and receives feedback in the form of rewards or penalties (much as a pet does or does not receive a treat after completing a task). Over time, it learns which strategies maximize its long-term rewards.
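This try-reward-adjust loop can be sketched in a few lines of Q-learning, a standard reinforcement learning algorithm. The tiny “corridor” world, its reward values, and the training settings below are invented purely for illustration.

```python
import random

# A tiny corridor world: states 0..4, start at 0, reward +1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Environment: apply the action, return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-table: the agent's estimate of long-term reward for each (state, action).
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

random.seed(0)

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit what has worked so far.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        # The Law of Effect in one line: nudge the estimate toward the reward
        # received plus the best value reachable from the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy steps right in every non-goal state.
policy = {s: greedy(s) for s in range(GOAL)}
print(policy)
```

Like Skinner’s pigeons, the agent starts out wandering at random; the reward for reaching the goal gradually propagates back through the Q-table until every state points toward success.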

In the words of Richard S. Sutton and Andrew G. Barto, “Reinforcement learning problems involve learning what to do—how to map situations to actions—so as to maximize a numerical reward signal.”

This approach reached a milestone with Google DeepMind’s AlphaGo, the program that famously defeated 18-time world champion Lee Sedol in 2016. Go, an ancient Chinese board game, has an estimated 10¹⁷⁰ possible board configurations. AlphaGo first studied thousands of Go games played by humans, then improved by playing against itself thousands of times, similar to Skinner’s training cycles.

In the match, AlphaGo played with startling creativity. It made a move, known as Move 37, so unconventional that experts estimated only a 1 in 10,000 chance a human would choose it. That single move marked a turning point: AI was no longer just imitating human intelligence, it was demonstrating a surprising level of creativity.

Today, the same reinforcement principles guide self-driving cars, robotic systems, and adaptive algorithms across industries, enabling such models to make decisions even in unpredictable environments. And yet their foundation remains unchanged: trial, feedback, and improvement.

So, the next time you see a pigeon pecking at crumbs, remember that its behaviour has inspired the mechanics of modern intelligence. From Thorndike’s cats to Skinner’s pigeons to AI models like AlphaGo, Deepseek’s R1, OpenAI’s o1, and Anthropic’s Claude Opus 4, the thread is clear. Learning, whether human, animal, or artificial, always begins the same way: by trying, failing, and trying again until you succeed.

– Nikunj Kohli

The Reinforcement Gap

AI is remarkable. It can outplay humans in chess, solve complex mathematical equations, and generate working code. But it does not develop or demonstrate all skills at the same rate: some skills take flight while others stall after many iterations. In AI research, this phenomenon is called the “reinforcement gap.”

The reinforcement gap arises because AI performs well at tasks that are scalable and have clearly structured definitions of success. At the heart of how modern AI learns is reinforcement learning, which, put simply, is super-charged trial-and-error: the AI tries something, receives feedback in the form of a reward or a punishment, changes its approach, and tries again. In domains such as coding, riddles, and solving mathematical equations, the feedback received during each iteration of learning is explicitly defined and arrives in real time. As a result, the AI system learns quickly and soon outpaces the human learner.

When tasks involve subjective, fuzzy, or context-specific factors, however, the AI’s development and performance slow dramatically. In creative writing, emotional reasoning and judgement, and high-stakes decision making, there are no definitive “right or wrong” answers. Without a sharp benchmark or an explicitly measurable reward, it becomes much harder for an AI system to evaluate its own performance and improve incrementally, and in some cases progress plateaus.
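The gap between verifiable and fuzzy feedback can be made concrete with two toy reward functions. Everything below (the function names, the addition task, the word-count proxy) is invented for illustration; it is not how any particular training system works.

```python
def reward_for_code(candidate_fn, test_cases):
    """Verifiable domain: the reward is crisp and automatic.
    Each test case either passes or fails; no human judgement needed."""
    passed = sum(1 for args, expected in test_cases if candidate_fn(*args) == expected)
    return passed / len(test_cases)

def reward_for_essay(text):
    """Fuzzy domain: there is no oracle for 'good writing'.
    Proxies such as word count are measurable, but they are not what
    quality means, so optimizing them yields long essays, not good ones."""
    return min(len(text.split()) / 100, 1.0)  # a weak stand-in, not a true reward

# A toy task: learn addition. The test suite defines success exactly.
tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
good = lambda a, b: a + b
buggy = lambda a, b: a - b

print(reward_for_code(good, tests))   # the signal cleanly rewards the correct version
print(reward_for_code(buggy, tests))  # and penalizes every detected failure
```

For the coding task, millions of such automatic checks can drive improvement; for the essay, the measurable proxy and the real goal come apart, which is exactly where the reinforcement gap opens.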


This gap in reinforcement points to a basic truth about AI: it excels in circumstances where success can be measured and quantified but falters in contexts that require judgement, intuition, or creativity. Even the most advanced systems need human insight to make sense of ambiguous, complex, or socially loaded situations.

This distinction matters for students and aspiring technologists. While AI brings speed and efficiency to well-understood problems, humans retain a crucial advantage in areas that demand subtlety, empathy, and scepticism.


In a nutshell, while AI will likely outperform human capacity in some quantifiable areas, human judgement cannot be replaced where imagination is required. Recognizing the reinforcement gap not only helps make sense of why AI develops at such an uneven pace but also clarifies how valuable human judgement will continue to be in a more automated world.

– Divyansh Jai Purohit

Inside the Neuroverse: Building Worlds with Pure Thoughts

For decades, we have interacted with machines through keyboards, screens, and touch. However, scientists and technologists are now exploring something new and innovative — where our brains serve as the interface, and imagination, emotion, and memory can be translated into digital form.

The Neuroverse builds on Brain-Computer Interfaces (BCIs), systems that detect neural signals and turn them into computer commands. Companies like Neuralink and Emotiv already allow users to type or play games using only brain activity. In the Neuroverse, you wouldn’t use fingers or voice; you would simply think what you want, and the technology would execute it. You would wear a device, such as a headband or an implant, that detects your brain’s electrical activity — much like an ECG measures your heartbeat — which advanced AI then analyzes.

Behind this vision lies complex neuroscience. Every thought, image, and emotion corresponds to specific neural activity — unique signatures that advanced AI could soon decode. By mapping these brainwave patterns, AI models might reconstruct visual or emotional experiences in real time.

With the Neuroverse, your imagination could become a creative operating system. Once the AI understands your brain patterns, it could reconstruct what you’re imagining, showing it on a screen or even turning it into a 3D model in virtual reality. Artists could paint with their minds, architects could design cities in seconds, and creativity would no longer depend on tools — only on thought.

In the Neuroverse, a game is no longer played; it’s designed in real time. A player’s fear might make a monster stronger; their concentration might unlock a hidden pathway. The Neuroverse offers unparalleled tools for mental health. Therapists could guide patients to directly confront trauma by safely and subtly redesigning the environments associated with their fear or anxiety, allowing the mind to heal in a customized, non-threatening digital space. You could compose a symphony through your emotional state and internal rhythm — one that no human composer could predict! Unlike today’s internet — fast, flat, and often emotionally numb — the Neuroverse could evolve into something alive and deeply human. It’s like an emotional internet, where communication transcends words.

Yet this power comes at a cost. Mapping the human mind means exposing our most private thoughts. Who owns this data — the user or the corporation? If controlled by companies, it risks turning consciousness into commercial code. As beautiful as the Neuroverse sounds, it also flirts with dystopia — a realm where imagination is both freedom and vulnerability. Thinking isn’t just confined to biology; it’s a tool of construction. Human consciousness could finally merge with technology — not as a servant to machines, but as a co-architect. As this technology moves from the lab toward commercial reality, the time to define the ethical boundaries of thought-responsive reality is now. Otherwise, we risk allowing our deepest imaginations to become the next frontier for corporate control.

For now, the Neuroverse exists only in the minds of futurists, dreamers, and researchers — but so did the internet once. If realized ethically, it could democratize creativity: no coding, no screens, no limits. Just the human mind — raw, vivid, unfiltered. And maybe that’s the most extraordinary idea of all — that the next universe humanity explores might not lie among the stars, but within ourselves.

– Arshia Barsain

Fintechs: The Future of Money

If you were to ask your grandmother about the good old days, when “banking” meant standing in a queue that moved slower than Wi-Fi on a bad day, all you would get is a bunch of groans and sighs. Those were simpler and sadder times. Then along came fintechs: money became mobile, payments got faster, and wallets became apps. Fintech, short for financial technology, takes finance, adds some coding and a splash of innovation, and makes money fun again. Fintechs have turned our everyday transactions into smooth swipes and taps. Need to split the dinner bill? One Paytm transfer and your friendship is saved. Need a loan? There’s an app that decides your creditworthiness faster than Usain Bolt runs the 100m.

Across the globe, fintechs have redefined convenience. From tiny startups in Nairobi powering mobile banking for farmers to billion-dollar apps in Silicon Valley helping you invest in space tourism, fintechs have transformed finance like never before. No more boring branch visits, no endless paperwork, just your phone and a simple app. Even traditional banks are now trying to “keep up with the times”: they’ve started launching apps, chatbots, and digital wallets to adapt to the fintech environment. Fintechs have also fostered a culture of financial literacy, making people track their credit scores, crypto, and cashback offers with utmost devotion.

But of course, it’s not all smooth sailing. With great tech come great hackers. Security breaches and scams remind us that fintech isn’t foolproof; fintechs still have miles to go before they can sleep. Still, these innovations have made finance faster and fairer. So yes, fintechs have dramatically changed the world, and anyway, who knew that one day your phone would be your bank, your broker, and your best financial therapist? Welcome to the age of fintechs, where money moves faster than your excuses for not completing your homework!

– Kabir Sahni