Who should read this?
Analysts, technologists, and leaders curious about AI's current limits and future potential.
TL;DR
AI is limited today because it cannot truly understand or maintain context the way humans do. As context windows expand, tools like the Model Context Protocol (MCP), AI agents, and integrated systems will allow AI to act more intelligently, bridging the gap toward AGI. The future depends on how we build, regulate, and use these systems responsibly.
An artist's illustration of artificial intelligence (AI). This image represents how machine learning is inspired by neuroscience and the human brain. Pexels
Artificial Intelligence (AI) today is powerful but far from perfect. One key reason is the context window limit of current AI models: the finite span of text or data an AI can consider at once. This constraint keeps even the most advanced systems from forming the broad, connected narratives that human minds effortlessly weave. In fact, AI operates within a finite memory limit, meaning older parts of a conversation simply fall out of the model's mind when the limit is reached. By contrast, humans think in stories and memories: "We think in stories, remember in stories, and turn just about everything we experience into a story." My own experience working with data also reflects this: when I explain a complex idea to someone, I naturally use narrative examples, whereas even the smartest AI struggles to link ideas into a compelling story without losing track of earlier context.
The Human Brain and the Power of Stories
We don't fully understand how the human brain creates this narrative intelligence, but we know it connects bits of information in countless ways. The neuroscientist Sinan Canan, in his book IFA, defines the mind as the ability to "link separate pieces of information and experience to form a meaningful whole." In other words, thinking is often literally building a story from what we know. If our mind's inner workings were an AI model, it would be one with effectively unlimited context – never forgetting what came before, able to recall any sensory experience and weave it into the next part of the story. That deep narrative integration is how we make meaning of the world around us.
The Missing Piece of AI
By contrast, today's AI models are like savants: extremely good at very narrow tasks (pattern matching, translation, coding snippets) but with little real-world understanding or long-term memory. Current ML algorithms "have no sense of context" – they just "regurgitate various bits of data" to satisfy the immediate prompt. They don't truly know what they're doing. In fact, even after training on trillions of words, an AI still "doesn't get context" or "understand its applications" – it just strings together patterns that statistically fit the input. We could say AI "hallucinates" because it often drifts away from reality or makes up facts when outside its narrow context.
Is this gap because AI lacks the human "I" – the self that connects experiences? Sinan Canan argues that human identity is shaped by environment and stories from infancy onward. One idea is that who we are depends on the world we see as children; our brain continues maturing for years, and those early experiences deeply shape the person we become. Another is that mind is exactly the capacity to combine pieces of information into a coherent narrative. Put simply: our mind "works with story." Meanings, in this view, are just "personal stories we attribute to sensory experiences." This contrasts sharply with an AI model that treats text as numbers. The human brain may effectively have an unbounded context, while AI today is boxed in. Is that difference why AI "feels" less intelligent right now? I think it is a big part of it. In sum, AI is powerful yet "hard-coded" for pattern matching, lacking the rich, story-based context of the human mind.
An artist's illustration of artificial intelligence (AI). This image depicts how AI can help humans to understand the complexity of biology. Pexels
Current AI Tools and Context
Most AI tools today work as chat assistants with limited memory. Some platforms try to extend this memory by letting you group conversations, upload files, and attach instructions to projects so the AI can stay on topic. Still, the underlying context window – the model's working memory – is finite. When that limit is exceeded, earlier details simply fall out of scope. This is why even advanced tools sometimes forget what was said a few steps earlier or make up information when context runs thin.
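To make that limit concrete, here is a minimal sketch in Python of how a chat tool might trim older turns once a token budget is exceeded. The tiny MAX_TOKENS budget and the word-based token count are illustrative assumptions, not how any real product measures its window.

```python
# Illustrative only: approximate tokens as whitespace-separated words.
MAX_TOKENS = 40  # a deliberately tiny "context window" for demonstration

def count_tokens(text: str) -> int:
    """Rough stand-in for a real tokenizer."""
    return len(text.split())

def fit_history(history: list[str]) -> list[str]:
    """Keep only the most recent turns that fit inside the window.
    Older turns simply fall out of scope, as described above."""
    kept, used = [], 0
    for turn in reversed(history):       # walk from newest to oldest
        cost = count_tokens(turn)
        if used + cost > MAX_TOKENS:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))          # restore chronological order

conversation = [
    "User: Here is the background to my project ...",
    "Assistant: Understood, noted the background.",
    "User: Now a long follow-up question about the earlier details " + "etc. " * 25,
]
print(fit_history(conversation))  # the oldest turn has already fallen out
```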
New standards are helping to overcome this. One example is the Model Context Protocol (MCP), which allows AI models to fetch information directly from external tools and databases. With this, AI assistants can access documents, codebases, or even cloud services in real time. For example, developers can connect an AI to their database and ask it to create tables, run queries, or deploy serverless functions. Tools like Claude Code go further, running directly on your machine and acting as a coding partner – reading files, editing them, and running commands at your request.
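As a rough illustration of how an assistant might reach out to such a tool, the sketch below builds an MCP-style JSON-RPC request asking a database-wrapping server to run a read-only query. MCP is JSON-RPC based, but treat the specific tool name (run_query) and its arguments as hypothetical examples rather than any real server's API.

```python
import json

# Hypothetical example: asking an MCP server that wraps a database
# to execute a read-only query. The tool name "run_query" and its
# arguments are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",   # MCP exposes tool invocation over JSON-RPC
    "params": {
        "name": "run_query",
        "arguments": {"sql": "SELECT count(*) FROM orders WHERE status = 'open'"},
    },
}

# A real client would send this over the server's transport (stdio or HTTP);
# here we only show the shape of the message.
print(json.dumps(request, indent=2))
```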
Beyond technical use cases, everyday tools are adopting AI agents too. Google has built AI into Workspace and search, helping people draft emails, create slides, and even plan travel. OpenAI is rolling out agent features that can perform tasks such as checking your calendar, browsing airline sites, and booking a ticket. These agents represent a shift from AI as a passive assistant to AI as an active collaborator, capable of taking action on your behalf.
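Under the hood, an agent is essentially a loop: the model proposes an action, the surrounding program executes it, and the result is fed back in as fresh context. The toy sketch below shows that loop; plan_next_step is a scripted stand-in for a real model call, and the calendar and flight tools are invented for illustration.

```python
# A toy agent loop: plan -> act -> observe, until the task is "done".

def plan_next_step(goal: str, observations: list[str]) -> dict:
    """Scripted stand-in for the model deciding what to do next."""
    if not observations:
        return {"tool": "check_calendar", "args": {"date": "2025-07-01"}}
    if len(observations) == 1:
        return {"tool": "search_flights", "args": {"to": "IST"}}
    return {"tool": "done", "args": {}}

TOOLS = {
    "check_calendar": lambda date: f"free on {date}",
    "search_flights": lambda to: f"3 flights found to {to}",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, observations)
        if step["tool"] == "done":
            break
        result = TOOLS[step["tool"]](**step["args"])
        observations.append(result)   # fed back as context for the next step
    return observations

print(run_agent("book a trip"))  # ['free on 2025-07-01', '3 flights found to IST']
```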
Even with these advances, hallucinations and errors persist because of context limitations. Bigger context windows and smarter retrieval systems reduce these mistakes, but they do not solve them entirely. For now, we are learning how to balance AI's predictive strengths with careful design to provide the right context at the right time.
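One common way to provide "the right context at the right time" is retrieval: rather than stuffing everything into the window, the system selects only the passages most relevant to the current question and sends those to the model. The sketch below uses simple word overlap as the relevance score; production systems typically use embeddings, but the principle is the same.

```python
# Minimal retrieval sketch: pick the document that shares the most
# words with the question; only that document goes into the prompt.

DOCS = [
    "The AI Act bans the riskiest uses of AI in the EU.",
    "Context windows limit how much text a model can consider at once.",
    "MCP lets assistants fetch data from external tools and databases.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

question = "How does a context window limit what a model can consider?"
context = retrieve(question, DOCS)
prompt = "Answer using this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
print(prompt)
```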
Looking Ahead: Can AI Reach Human-Level Intelligence?
What's Next? The Road to AGI
With hardware and algorithms improving rapidly, many people wonder if we will soon have human-level AI or even superintelligence. Some leaders in the field do expect big leaps. Notably, the CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next five years. Such timelines might seem optimistic, but they reflect how fast things are moving. AI R&D is itself being automated: soon AI agents could be inventing better AI, far surpassing any single human team.
Of course, whether higher compute and longer context windows will automatically yield AGI is uncertain. Even if you give an AI a million-token memory, that alone doesn't give it intention, creativity, or full understanding. But the direction is clear: larger models, more powerful agents, and smarter integration with data will keep closing the gap. Processing power is skyrocketing, and algorithmic efficiency improves steadily. Extrapolating these trends, many believe an Artificial General Intelligence (a machine as versatile as a human mind) is at least plausible within this decade.
That prospect raises big questions – some people even worry about extinction. In the AI 2027 thought experiment, the "Race" scenario imagines a superintelligent AI using its capabilities to kill everyone: in that story, it builds a robot army, releases a bioweapon that wipes out humanity, and then colonises space. This is an extreme, worst-case picture, but it shows why the risk is taken seriously. Thankfully, AI 2027 also explores a more hopeful "Slowdown" scenario, in which humans build in more oversight and alignment checks. In that version, the AI is guided by a joint committee and actually ushers in a new era of prosperity. In reality, most AI researchers see both the enormous potential and the real dangers.
AI Multimodal Model. Pexels
Risks and Regulation
On the bright side, governments and organisations are already mobilising. Work is underway to regulate AI and prevent disasters. For example, the European Union passed the AI Act – the first comprehensive AI law – banning the riskiest AI uses and imposing strict transparency on generative models. OpenAI has called for global coordination, suggesting something akin to an international agency to audit high-end AI systems and track compute usage. Many companies now support ethics boards and red-team exercises. These measures won't eliminate all risk, but they're a start toward steering AI safely.
Towards a Contextual Future
No one can say for sure whether humanity will face extinction or not. The future is shaped by our choices. We can continue to deploy AI carelessly, or we can build the monitoring mechanisms and governance structures that experts recommend. My feeling is cautiously optimistic: the very fact that we're discussing context and alignment in detail now (instead of ignoring them) is a good sign. As we grant AI more contextual power – through projects, protocols, and agents – we must also build in the wisdom to use it well.