Grifters and Grandfathered Intelligence
February 22, 2026
Remember 2021? What a year. COVID, January 6th, and my favorite: NFTs.
NFTs were the hot new thing. Among a certain corner of the internet, there were speculative thinkpieces about how they would change the world. Your weird cousin had one. Some of them sold for millions of dollars.
But what were NFTs, exactly? Honestly, who knows and who cares. What interests me about NFTs isn't the technology — it's what they revealed about the internet. NFTs commanded attention. Massive, frenzied, irrational attention. (Full disclosure, I rode the wave: did a couple of NFT-related projects, made some money, and lost some money.)
And wherever attention pools on the internet, grifters are never far behind. The economics of NFTs were inextricably linked to grift. Hype cycles, pump-and-dumps, celebrity endorsements, and all the rest. The success of a given token depended on its hype, and once an opportunistic trader acquired a token, they had a direct incentive to "shill" that token to others (so its value would increase). And so NFTs were perfectly emblematic of the internet's ability to attract grifters.
The playbook was obvious with NFTs, but it plays out the same way across different mediums: find something new enough that most people don't understand it, complicated enough that explaining it is difficult, and exciting enough that people get FOMO if they don't participate.
Other recent trends that I've seen include: 1) the "online course" guru, wherein a mid-level influencer flexes their wealth and promises to provide their followers the playbook to acquire such wealth; 2) the rise of Solana-based memecoins on pump.fun; 3) young men shilling Polymarket and Kalshi, both in their purest forms (regular "events trading") and more sophisticated alternatives (like arbitrage bots that promise to make you money by exploiting price discrepancies).
And I think we're about to witness the same thing happen with AI.
To be fair, the AI moment is different in one important respect: the underlying technology is genuinely remarkable. Throughout the first few months of 2026, model capabilities have made substantial leaps. In software specifically, these models can now generate 90–95% of the code for a given project. As a practitioner in that field, I can say with confidence that AI has produced a meaningful shift in how software gets built. I imagine other industries will soon experience the same.
But those recent developments have also produced a new class of operators who have found a new thing most people don't understand and are very happy to charge you to explain it. AI — beyond just ChatGPT — has finally reached a broader, less technical audience. And so the normies have arrived.
The OpenClaw Moment
This has been chiefly heralded by—and is best exemplified by—OpenClaw. OpenClaw is an open-source AI agent built by Peter Steinberger, and its pitch is simple: rather than opening a browser tab to talk to an AI, OpenClaw runs locally on your machine (or your Brand New Mac Mini™) and meets you wherever you already are — WhatsApp, Telegram, Discord, or Slack. It connects to your files, your calendar, your email, and remembers everything across sessions.
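To make that pitch concrete, here is a minimal sketch of the pattern a tool like OpenClaw embodies: a local process that receives a chat message, consults memory persisted across sessions, and replies via a model. Everything here is hypothetical (the function names, the JSON memory file, the stubbed `call_model`); the real project's internals differ. The point is only the shape: local execution, persistent memory, and a chat-shaped interface.

```python
# Hypothetical sketch of a local chat agent with persistent memory.
# The model call is stubbed; a real setup would wire in an actual LLM
# API and a real chat transport (WhatsApp, Telegram, etc.).
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # assumed location, for illustration

def load_memory() -> list[dict]:
    """Read conversation history persisted from previous sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(history: list[dict]) -> None:
    """Write history back to disk so it survives across sessions."""
    MEMORY_FILE.write_text(json.dumps(history))

def call_model(history: list[dict], message: str) -> str:
    # Stub: a real agent would call an LLM here, passing history as context.
    return f"(model reply to {message!r}, with {len(history)} prior turns)"

def handle_message(message: str) -> str:
    """Handle one incoming chat message: load memory, reply, persist."""
    history = load_memory()
    reply = call_model(history, message)
    history.append({"user": message, "agent": reply})
    save_memory(history)
    return reply
```

Because memory lives on disk rather than in a browser tab, the second message "remembers" the first even if the process restarted in between. That, roughly, is the difference between a chat tab and an agent.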
And so now, predictably, we see an entire industry of people — many of whom have never written a line of code in their lives — offering "OpenClaw" setup services:
- https://x.com/joshdgavin/status/2025323485292106231
- https://x.com/tipbtdennis/status/2023903516415185166
In the above examples, you see non-technical people aiming to "print money" by charging others to set up OpenClaw.
Why didn't we see this before? I think it's a combination of models getting better and better, and popular knowledge of AI becoming more widespread. The former matters because it allows for more sophisticated applications of AI; the latter because it exposes more people to AI in the first place.
So OpenClaw is just the beginning. It represents a moment that not many people will recognize, but I think is significant. Until models reached their current level of capability and popularity, AI was reserved for those who were highly interested or highly technical. Now, it is accessible to many more people. And that opens the floodgates for grifters.
We are about to see an explosion of non-technical people claiming to harness the benefits of AI, without any understanding of what they are doing. These folks are simply glorified translators between ChatGPT and their customers — and for a while, that might be enough. The bar is low. Their customers' exposure to technology consists of the Microsoft Office Suite. Early wins will be easy to manufacture. We're going to relive the NFT moment in a new domain.
Grandfathered Intelligence
So, what does the future hold? I think it holds a lot of grift, a lot of junk, and a lot of people who are pretending to be experts. If you are a technical person, I think you're in a unique position, mostly because the value of genuine technical knowledge is only going to increase (at least in the short term).
Consider what these tools actually require to use well. Not the chat interface—anyone can use ChatGPT. I mean really harness them.
I work with AI in a frontend engineering context—in my experience, AI does an amazing job at getting started, and a terrible job at finishing. AI can write my initial code, and can do nothing to fix the arcane Safari-specific overflow bug that I only know about because of years of experience. Real value beyond AI's initial outputs is usually powered by pattern-matching abilities built on a foundation of having broken a lot of things and figured out why.
To make matters worse, there is no "learning" of any kind when using these models in a purely transactional, regurgitational way. You're not actually learning anything by vibe coding. Whereas earlier generations learned technology deeply because they had to learn it from first principles, new "coders" merely play a game of telephone with their favorite models.
The people who built that foundation before AI arrived—who learned to read error messages, who understand what a filesystem actually is, who have some mental model of how software works beneath the surface—are the ones who can extract the full value from these tools. Everyone else is operating at the level of interface. And the interface will only take you so far.
This is grandfathered intelligence. Knowledge you didn't acquire for AI, but that turns out to be exactly what AI amplifies. The software engineer who spent years debugging now has a collaborator that can generate infrastructure code at will—but he's the one who knows whether that code is actually correct. The researcher who spent a decade learning how to ask precise questions now has a tool that can synthesize literature at scale—but she's the one who knows when the synthesis is subtly wrong. The non-technical grifter who learned to "use Claude" last month has neither of these things. They have the interface. They have the output. They do not have the judgment.
The ultimate lingering question is whether this matters, and if so, for how long. Eventually, models will probably be so good that all of the above is moot. But in the interim, we're likely to see a divergence between people whose entire exposure to technology is the chat interface and people who have a deeper understanding of how technology works. Will the outputs of their work be distinguishable? I think that depends a little on the intellect of the creator, and a lot on the tools and models they are using.