Last weekend, the spotlight was on Manus, a Chinese AI super-agent reportedly capable of handling up to fifty complex tasks simultaneously. Videos of it seamlessly managing transactions while conducting deep research, launching websites, and even planning vacations went viral. This development isn’t just a wake-up call for AI labs like OpenAI—it also raises unsettling questions for us humans. Are we already being outpaced by AI?
Parmy Olson recently wrote about a Microsoft-CMU study examining how 319 knowledge workers interacted with AI. A surprising takeaway was that as they placed more trust in AI for skills like writing, analysis, and evaluation, they practiced those skills less themselves. Over time, they became passive recipients of AI output, rarely questioning its accuracy or refining their own judgment. This fuels a deeper anxiety—if AI keeps advancing, will we gradually lose not just our jobs but also our ability to think critically and creatively?
Fear of technological displacement is nothing new. Socrates worried that writing would weaken human memory. When calculators emerged, people feared they would erode arithmetic skills. Computers were once predicted to render millions of knowledge workers obsolete. Yet, despite these fears, technology has always integrated into our lives, shifting how we work rather than eliminating our purpose. Now, with frontier AI labs racing toward AGI—Artificial General Intelligence—the stakes feel higher. Futurist Ray Kurzweil famously predicted that AGI would arrive by 2029 and the Singularity by 2045. The acceleration of generative AI has prompted some to compress those timelines even further, suggesting AGI could arrive as soon as 2026, or within the next five years.
In reality, we may not need to wait until 2045—or even 2026. AI is already reshaping our world, not in a sudden, dramatic takeover but through a slow, incremental process. Imagine the proverbial frog in a pot of gradually warming water, unaware of the rising temperature until it’s too late. That’s us with AI.
Fifteen years ago, we memorized phone numbers; now, our phones do it for us. We used to navigate from place to place using memory or maps; now, we blindly follow GPS directions. My phone reminds me when to leave for my physiotherapy sessions, and soon, an autonomous car might pick me up without my intervention. As smart homes evolve, AI-powered appliances will anticipate and fulfill our needs—our blender preparing our protein shake, our microwave and fridge coordinating meals. The convenience is undeniable, but the trade-off is clear: we risk losing basic skills and autonomy in the process.
So, what can we do as AI becomes increasingly powerful, independent, and intelligent? Three things:
- Contain AI: We must establish rules and safeguards to ensure AI remains beneficial and aligned with human values. Mustafa Suleyman, Microsoft’s AI chief, makes this case in his book The Coming Wave: traffic, for example, would be chaotic and deadly without signals and driving laws. Similarly, AI needs regulatory frameworks to prevent unintended consequences.
- Become AI-literate: AI is here to stay, and we must learn to navigate it. Just as previous generations adapted to new languages and mathematical systems, we must integrate AI fluency into our skillset. This means understanding how AI works, using generative AI tools effectively, and collaborating with AI as we would with human colleagues.
- Emphasize human strengths: As AI takes over more tasks, our uniquely human abilities become even more critical. Skills like curiosity, compassion, language, logic, intuition, and collaboration will define our relevance. We must learn to curate and interact with AI thoughtfully—asking the right questions, applying judgment, and refining AI-driven solutions. Our ability to work with others, exercise ethical reasoning, and explore creative alternatives will be indispensable. The more AI advances, the more human we must become.
The rise of AI doesn’t have to mean the decline of human agency. But if we want to remain active participants in our future, we must consciously shape our relationship with AI rather than passively letting it shape us.