About fifteen years ago I opened Coursera for the first time to learn linear algebra. Soon after, I discovered TED and fell into a world of open courses. I first touched Python in 2014 on a very basic tutorial site. With some programming background, the language felt simple—clean abstractions, concise syntax—but feeling something is simple is not the same as using it well.
We now live in the age of AI. A decade ago we learned on the internet, and that learning was often driven by anxiety—mine included. I still worry about keeping up with new knowledge and about looming deadlines. Even though AI can write code faster than any human, I miss that earlier era. Back then I sat in classes taught in English I barely understood and still threw myself into algorithms, data structures, games, and websites. Linear algebra and college physics on Coursera thrilled me because the internet changed how I came to know the world. Linear algebra felt “simple yet not simple.” English explanations struck me as pure and precise, focused on meaning; Chinese, in contrast, can be more easily swayed by surrounding context.
The internet rewired how we connect. Once, to find like‑minded people you could search only your immediate circle. Over the last fifteen years, information and conversation stretched worldwide. Even from an ordinary school, you could attend extraordinary courses online.
AI has changed something else: it keeps me staring at screens longer and tempts me into “replacement thinking,” letting the model reason in my place. That shortcut is often wrong, but it makes laziness easy. And we are still mid‑transition. AI cannot replace human learning. Humans must remain the final check because AI will never be 100% correct. At best, it is an assistant that offers ideas when precision demands are low. We cannot outsource curiosity or judgment.
In an atomized society, AI has also become an emotional listener. Some friends no longer vent to friends when they are down; they talk to AI instead. Offloading negative emotion is hard for people but costs an AI only electricity. In that narrow sense, AI can absorb a bit of society’s stress.
Still, today’s revolutions mostly live at the information layer. Hardware domains—embodied intelligence, autonomous driving—remain constrained by safety and liability. This is where “imperfection” matters. AI often strives for a kind of polished neutrality, but human imperfection is the source of our richness. When I write, I bring bias, feeling, and mood; that color is part of the work. Diversity—biological and intellectual—emerges from tiny deviations. Perhaps we began as paramecia or other marine life. Over millions of years, countless mutations produced the abundance of animals, plants, microbes, even viruses. Creation never aimed at flawless beings; variation is the point.
Because of variation, human emotion is compelling. No one stays rational forever, and our feelings are never singular. As Laozi writes in the Dao De Jing, “Heaven’s way takes from what has excess and replenishes what is lacking.” It reads like mean reversion. Water runs downhill; when land is dry, vapor gathers into clouds and returns as rain. Things swing to extremes, then return. Nature is a song with rises and falls—imperfect cycles that create variety.
“The highest goodness is like water; water benefits all and does not compete,” Laozi also says. Water looks soft yet is resilient. It adapts to any container and, drop by drop, reshapes stone. That blend of softness and strength mirrors the value of imperfection: flexibility hiding power, power containing flexibility. Civilizations follow similar rhythms—peaks, troughs, recoveries. Economies do, too. AI may craft “perfect” sentences, but human diversity is itself beautiful. Tools should serve people; they do not replace us. The steam engine did not end work—it forced us to learn new tools. The AI era is no different.
Even with multimodal systems, AI today mostly processes information and supplies decision references. Five or ten years from now, much may change; even then, embodied systems and full self‑driving will likely be powerful assistants rather than final authorities, because responsibility must land somewhere. Autonomous driving is especially hard; perhaps it works on constrained roads with predictable traffic. I have held a license for thirteen years and still prefer not to drive: too many factors, too much risk. On the other hand, asking a robot to move laundry into a dryer is feasible. Robot vacuums and mop‑washers already automate small chores, but scenarios remain narrow. Once a system touches physical interaction with people, safety dominates. In software—code, text, analysis—AI assists, but people own the decision.
That is why I do not think AI anxiety is necessary. You do not need to chase every release. Live your life and benefit from progress as it matures. AI cannot learn for you; reading and writing code are foundational skills you must build yourself. Technical ability is the base; AI fluency is a multiplier on top. Without fundamentals, AI widens gaps. Used well, it can close them. Calculators did not kill the multiplication table. Fundamentals endure, and human oversight remains essential.
The spectacle of robots and shiny AI apps is understandable, but these are not final forms. Costs stay high during the transition, and many products lack optimization and standards. They are interim solutions and will be replaced. Those who build ecosystems early gain an edge. Open and closed approaches will coexist—just as Android and iOS, Linux and Windows have.
Information security is unavoidable. Whichever system you choose, you bear the cost of deployment and the accompanying risk. Today’s AI reads code well enough to surface vulnerabilities quickly—both a capability and a concern.
We are still moving from text‑only toward multimodal systems; that arc may take another two to five years. I cannot predict when things stabilize or how capable future models will be. One point is stable: humans must remain the last mile. Embodied intelligence will likely need years before meaningful deployment. Robots existed decades ago; better algorithms make them feel closer, but the distance to our expectations is still large. These systems assist; they do not replace. AI, autonomy, embodiment—tools for people that can boost productivity and nudge us toward more equity, but not deliver utopia.
Zooming out, no society achieves perfect equality. Welfare without internal competition can dull initiative and weaken resilience to external pressure. The best we can hope for is balance. As long as nations exist, competition exists. If a society chooses high welfare and high taxes, it may reduce internal pressure without reducing external pressure. The result is a familiar tension we must navigate.
Laozi’s “small states with few people” is, in practice, utopian: a small polity struggles before a much larger, unified one. But relentless competition is not an answer either. We negotiate between extremes and look for workable balances. Some places seem to strike a compromise between competition and welfare; others lean heavily one way. Often, it is competition under constraint—not innate advantage—that unlocks potential.
My thoughts jump, but they trace one line: no matter how strong AI becomes, it remains a tool. Human imperfection keeps the world vivid. We need to keep learning—on purpose—to master our tools and to stay responsible for the outcomes.