Sherry's Blog



Why Most People Don't Need a Personal Knowledge Graph

Posted on 2026-04-11

Every few months a new wave of personal knowledge management tools sweeps through my timeline. Obsidian vaults with thousands of backlinks. Logseq graphs that look like neural tissue under a microscope. Zettelkasten disciples explaining how their “second brain” has finally freed them from forgetting. I find it all fascinating, and I am quietly convinced most people do not actually need any of it.

The pitch is seductive: capture everything, link everything, and your mind will become a searchable, navigable web. The trouble is that the human brain does not store memory the way a graph database does. A knowledge graph is a clean structure of nodes and edges, each one precise and typed. Memory is messier, warmer, and far more personal. It is shaped by who you are, what you felt, and what you happened to be looking at when the thought arrived.

Memory is not one thing. Some people think in propositions and crisp definitions. Some remember through sound and rhythm. Some rehearse by talking out loud. I am a scene‑based rememberer. What sticks for me is a picture—where I was sitting, what the light looked like, the texture of a particular afternoon. When I try to recall a concept, I do not search an index; I replay an episode. A conversation at a café, a paragraph on a specific page, the smell of rain during a lecture. The idea arrives wrapped in context, and the context is half the meaning.

Forcing that kind of memory into a tidy graph feels like pinning butterflies. You get a specimen, but the flight is gone. The graph can show me that Concept A links to Concept B, but it cannot reproduce the reason the link mattered to me in the first place—the small, private jolt of recognition that made me care. Strip away the scene and I am left with trivia I no longer have a reason to remember.

This is why I never got along with Obsidian. I tried, more than once. It is a beautiful piece of software and clearly a labor of love. But every time I sat down to “maintain” my vault, I felt like I was doing the paperwork of thinking instead of the thinking itself. Tagging, linking, renaming, reorganizing—the overhead kept eating the thing it was supposed to support. For a while I mistook the busyness for progress. Eventually I noticed that my best ideas were still coming from long walks, half‑finished drafts, and conversations I had not indexed anywhere.

There is also a quieter argument against the tools, and I think it is the more important one. The act of organizing by hand—slowly, with a pen or in a plain text file—is not a bottleneck to get rid of. It is the work. When I sit down and try to write out what I learned this week, I am forced to decide what actually matters. I drop the parts I cannot justify. I notice which ideas keep showing up next to each other. I simplify, rephrase, and sometimes discover that two things I thought were separate are really the same thing in different clothes. The friction is where the understanding happens. Automate it away and you get a larger archive and a smaller mind.

This is the part of the conversation where I have to talk about Karpathy, because a few days ago he posted about his “LLM Wiki” setup and the whole timeline tilted in response. The idea is elegant in the way his ideas usually are: you keep a folder of raw sources the model is never allowed to edit, you let an LLM agent maintain a second folder of markdown articles on top of those sources, and you give it a schema file—a CLAUDE.md or the equivalent—that tells it how to ingest new material, cross-link entities, and periodically lint the whole thing for contradictions and orphan pages. He mentioned that one of his research wikis has grown to around a hundred articles and four hundred thousand words, and that he rarely edits it by hand anymore. The day after, he dropped a gist meant to be copy-pasted into your own agent as a starting point. Within hours people were forking it, wiring it into Obsidian vaults, writing Medium posts with titles like “Karpathy just 10x’d everyone’s Claude setup,” and shipping GitHub repos called things like second-brain and llm-wiki. It has become a small genre.
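The lint pass is the part of the setup that is easiest to picture in miniature. As a sketch only, here is what an orphan-page check over a folder of markdown articles could look like; the flat folder layout and the `[[Page]]` wikilink syntax are my assumptions for illustration, not Karpathy's actual schema:

```python
import re
from pathlib import Path

# Assumed link syntax: [[Page]] or [[Page|display label]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(wiki_dir):
    """Return article stems that no other article links to."""
    wiki = Path(wiki_dir)
    pages = {p.stem for p in wiki.glob("*.md")}
    linked = set()
    for p in wiki.glob("*.md"):
        for target in WIKILINK.findall(p.read_text(encoding="utf-8")):
            linked.add(target.strip())
    return sorted(pages - linked)
```

A real agent-maintained wiki would presumably do much more (contradiction checks, entity normalization), but the point stands: this is bookkeeping, and bookkeeping is exactly what machines do not get bored of.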

I want to be honest about what is interesting here, because I do not think Karpathy is wrong in the way the loudest Obsidian evangelists are wrong. His framing actually concedes my main point: the tedious part of knowledge management is not reading, it is bookkeeping, and bookkeeping grows faster than the value it produces. That is exactly why I gave up on Obsidian. Where we part ways is on what to do about it. His answer is to hand the bookkeeping to a machine that does not get bored. Mine is to notice that the bookkeeping was mostly make-work in the first place, and to stop doing it. If the cross-references only existed because a human could not hold the material in their head, automating them does not make the material more yours—it just makes the scaffolding cheaper to maintain. You end up with a four-hundred-thousand-word wiki that an agent wrote on your behalf, and a vague feeling that you have read it, when in fact the model has.

There is a specific failure mode I want to name. When the wiki is maintained by an LLM, the canonical version of what you “know” lives outside your head, in prose you did not write. Querying it feels like remembering, but it is not—it is retrieval from a system whose compression choices you never made. For a scene-based rememberer this is especially bad, because the scenes never get encoded at all. There was no afternoon at the café. There was a diff in a markdown file. And the next time you want to recall the idea, there is nothing to replay, only something to look up. The copy-pasted gist is a beautiful piece of engineering and I understand why it went viral, but I think a lot of people are going to build one, feel productive for a month, and then quietly notice that their sense of having learned anything has gotten thinner, not thicker.

The thing worth stealing from Karpathy’s pattern, I think, is much smaller than the pattern itself. The useful primitive is the raw-sources folder: a flat pile of things you actually read, kept immutable, and occasionally grepped. That part costs nothing and preserves the context you might someday want to replay. Everything on top of it—the generated articles, the backlinks, the lint passes, the schema document—is optional, and for most people optional means “not worth it.” You can get eighty percent of the benefit by keeping the sources and writing the occasional essay in your own words when something refuses to settle.
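The "flat pile of sources, occasionally grepped" really is as small as it sounds. A minimal sketch, with the file layout and function name being purely illustrative assumptions of mine:

```python
from pathlib import Path

def grep_sources(sources_dir, needle):
    """Case-insensitive search across a flat folder of raw text sources.
    Returns (filename, line_number, line) tuples; never modifies anything."""
    hits = []
    for path in sorted(Path(sources_dir).glob("*.txt")):
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, 1):
            if needle.lower() in line.lower():
                hits.append((path.name, lineno, line))
    return hits
```

The sources stay immutable by convention, not enforcement: the function only reads. Everything fancier is layered on top, and can be skipped.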

Herbert Simon once said that a wealth of information creates a poverty of attention. Knowledge graphs, for all their elegance, tend to push in exactly the wrong direction. They make capture cheap and retrieval plausible, which encourages you to save more and decide less. But the scarce resource in adult life is not storage. It is the willingness to sit with a handful of ideas long enough to know what you think about them.

None of this is an argument against tools. If you are a researcher stitching together citations across a decade, a graph is probably indispensable. If your work genuinely has the shape of a network—legal precedents, drug interactions, an investigation with hundreds of named entities—then by all means, build the thing. What I am skeptical of is the default assumption that every knowledge worker needs a personal ontology, and that forgetting is a bug to be engineered out.

Forgetting is not a bug. It is a feature of a mind that has decided what matters. The things you cannot let go of are, almost by definition, the things worth keeping. A plain notebook, a few running documents, and the stubborn practice of writing in full sentences will take most people further than any graph ever will. Your brain already knows how it wants to remember. The job is to listen to it, not to overwrite it with somebody else’s schema.

So I will keep my notes simple. A handful of markdown files. A few drafts that grow in public. Long walks when something refuses to come clear. If that counts as a knowledge system, it is one with exactly one user, and the maintenance cost is the thinking itself—which, it turns out, is the only part I wanted in the first place.

Anxiety and Imperfection in the Age of AI

Posted on 2026-04-11

About fifteen years ago I opened Coursera for the first time to learn linear algebra. Soon after, I discovered TED and fell into a world of open courses. I first touched Python in 2014 on a very basic tutorial site. With some programming background, the language felt simple—clean abstractions, concise syntax—but feeling something is simple is not the same as using it well.

We now live in the age of AI. A decade ago we learned on the internet, and that learning was often driven by anxiety—mine included: I still worry about knowledge I have not yet mastered and deadlines I have not yet met. Even though AI can write code faster than any human, I still miss that earlier era. Back then I sat in classes taught in English I barely understood and still threw myself into algorithms, data structures, games, and websites. Linear algebra and college physics on Coursera thrilled me because the internet was changing how I came to know the world. Linear algebra felt “simple yet not simple.” English explanations struck me as pure and precise, focused on meaning; Chinese, by contrast, is more easily colored by surrounding context.

The internet rewired how we connect. Once, to find like‑minded people you searched your immediate circle. Over the last fifteen years, information and conversation stretched worldwide. Even from an ordinary school, you could attend extraordinary courses online.

AI has changed something else: it keeps me staring at screens longer and tempts me into “replacement thinking,” letting the model think in my place. That shortcut is often wrong, but it makes laziness easy. And we are still mid‑transition. AI cannot replace human learning; humans must remain the final check, because AI will never be 100% correct. At best, it is an assistant that offers ideas when the demand for precision is low. We cannot outsource curiosity or judgment.

In an atomized society, AI has also become an emotional listener. Some friends no longer vent to friends when they are down; they talk to AI instead. Offloading negative emotion is hard for people but costs an AI only electricity. In that narrow sense, AI can absorb a bit of society’s stress.

Still, today’s revolutions mostly live at the information layer. Hardware domains—embodied intelligence, autonomous driving—remain constrained by safety and liability. This is where “imperfection” matters. AI often strives for a kind of polished neutrality, but human imperfection is the source of our richness. When I write, I bring bias, feeling, and mood; that color is part of the work. Diversity—biological and intellectual—emerges from tiny deviations. Perhaps we began as paramecia or other marine life. Over millions of years, countless mutations produced the abundance of animals, plants, microbes, even viruses. Creation never aimed at flawless beings; variation is the point.

Because of variation, human emotion is compelling. No one stays rational forever, and our feelings are never singular. As Laozi writes in the Dao De Jing, “Heaven’s way takes from what has excess and replenishes what is lacking.” It reads like mean reversion. Water runs downhill; when land is dry, vapor gathers into clouds and returns as rain. Things swing to extremes, then return. Nature is a song with rises and falls—imperfect cycles that create variety.

“The highest goodness is like water; water benefits all and does not compete,” Laozi also says. Water looks soft yet is resilient. It adapts to any container and, drop by drop, reshapes stone. That blend of softness and strength mirrors the value of imperfection: flexibility hiding power, power containing flexibility. Civilizations follow similar rhythms—peaks, troughs, recoveries. Economies do, too. AI may craft “perfect” sentences, but human diversity is itself beautiful. Tools should serve people; they do not replace us. The steam engine did not end work—it forced us to learn new tools. The AI era is no different.

Even with multimodal systems, AI today mostly processes information and supplies decision references. Five or ten years from now, much may change; even then, embodied systems and full self‑driving will likely be powerful assistants rather than final authorities, because responsibility must land somewhere. Autonomous driving is especially hard; perhaps it works on constrained roads with predictable traffic. I have held a license for thirteen years and still prefer not to drive: too many factors, too much risk. On the other hand, asking a robot to move laundry into a dryer is feasible. Robot vacuums and mop‑washers already automate small chores, but scenarios remain narrow. Once a system touches physical interaction with people, safety dominates. In software—code, text, analysis—AI assists, but people own the decision.

That is why I do not think AI anxiety is necessary. You do not need to chase every release. Live your life and benefit from progress as it matures. AI cannot do your learning for you; reading and writing code are foundational skills you must build yourself. Technical ability is the base; AI fluency is a multiplier on top. Without fundamentals, AI widens gaps. Used well, it can close them. Calculators did not kill the multiplication table. Fundamentals endure, and human oversight remains essential.

The spectacle of robots and shiny AI apps is understandable, but these are not final forms. Costs stay high in transition, and many products lack optimization and standards. They are interim solutions and will be replaced. Those who build ecosystems early gain an edge. Open and closed approaches will coexist—just as Android and iOS, Linux and Windows have.

Information security is unavoidable. Whatever you choose, you pay to deploy and you carry risk. Today’s AI reads code well enough to surface vulnerabilities quickly—both a capability and a concern.

We are still moving from text‑only toward multimodal systems; that arc may take another two to five years. I cannot predict when things stabilize or how capable future models will be. One point is stable: humans must remain the last mile. Embodied intelligence will likely need years before meaningful deployment. Robots existed decades ago; better algorithms make them feel closer, but the distance to our expectations is still large. These systems assist; they do not replace. AI, autonomy, embodiment—tools for people that can boost productivity and nudge us toward more equity, but not deliver utopia.

Zooming out, no society achieves perfect equality. Welfare without internal competition can dull initiative and weaken resilience to external pressure. The best we can hope for is balance. As long as nations exist, competition exists. If a society chooses high welfare and high taxes, it may reduce internal pressure without reducing external pressure. The result is a familiar tension we must navigate.

Laozi’s “small states with few people” is, in practice, utopian: a small polity struggles before a much larger, unified one. But relentless competition is not an answer either. We negotiate between extremes and look for workable balances. Some places seem to strike a compromise between competition and welfare; others lean heavily one way. Often, it is competition under constraint—not innate advantage—that unlocks potential.

My thoughts jump, but they trace one line: no matter how strong AI becomes, it remains a tool. Human imperfection keeps the world vivid. We need to keep learning—on purpose—to master our tools and to stay responsible for the outcomes.


49 posts
2 categories
28 tags
© 2026 Sherry
Powered by Hexo
Theme — NexT.Muse v5.1.4