OpenAI is repeating Google’s biggest mistake
Search didn’t die. It just sold out. ChatGPT might be next.
Curiosity was never the business model
Google wasn’t built to make you curious. It was built to make you click.
For years, we pretended they were the same thing.
They’re not.
Search started as a tool for discovery. Now it’s a system for creating revenue.
Clicks won and relevance lost.
AI is already sprinting down the same path.
Google Search is worse. That's not a feeling. It's a fact
A 2024 study from German researchers looked at over 7,000 product-review queries across Google, Bing, and DuckDuckGo.
What ranked highest? SEO-stuffed pages with affiliate links and low-quality text.
The better answers were buried and the polished junk rose to the top.
Even Google admits this.
Their March 2024 core update promised to slash “unoriginal, low-quality content” by 45%. A quiet cleanup, if you will.
Meanwhile, a WalletHub audit showed top results for “best credit cards” often suggested higher-cost, brand-safe options and not what was actually best for users.
Not shocking to most reading this, but the fact that it doesn't shock you says everything.
63% of people say Google results have gotten worse in the last year. They’re not wrong.
The real problem is incentives
Google made over $350 billion last year. More than 80% came from ads.
That tells you everything you need to know. The entire engine is built around clicks.
Not accuracy. Not curiosity.
Just attention, and $. Lots of $.
Search decayed slowly. Little by little, compromises added up.
Now we can’t even ask a basic question without sorting through noise.
AI promised better. It's already compromised.
When ChatGPT and Google’s AI Overviews launched, it felt like a reset.
One question. One clean answer. No clutter.
But that clarity is already getting sold off.
Google’s AI Overviews now include sponsored ads inside the actual AI summary, especially for commercial queries on mobile. OpenAI is on the same track: they project $125 billion in annual revenue by 2029, driven partly by affiliate deals and revenue-generating in-chat suggestions.
What started as information is becoming inventory. Again.
The faster it gets, the dumber it gets
LLMs are built to sound smart, fast. But they’re still hallucinating.
In a 2024 benchmark, GPT-3.5 fabricated citations in 40% of medical responses.
GPT-4 was better… but still hallucinated nearly 30% of the time.
The problem isn’t bugs. It’s design.
These systems are refined for confidence, speed, and fluency. Not depth.
The goal isn’t to make you think but to keep you engaged.
People aren’t stupid, they just want clarity
A Meta survey asked over 23,000 people in 13 countries what they wanted from AI tools.
Eighty-two percent said… just tell us when it’s AI. Label it clearly. Disclose when something is paid. Show your sources.
That’s not hard to do. It’s just not profitable for David or Goliath.
Right now, none of the major platforms are doing it well.
And most users don’t even know what’s missing.
What I want from AI
Every answer should come with receipts.
Every ad should be flagged like it’s radioactive.
And if a model is guessing, say that.
Give me confidence scores. Contradict yourself if needed. Don’t fake certainty.
Also, let me choose how I interact. If I want exploration over resolution, offer a “curiosity mode.” Surface tension. Reward second questions.
Some researchers are already building models that learn by seeking novelty instead of refining for familiarity.
That’s the direction I want to see.
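None of this requires new science. As a rough sketch of the idea (every name here is hypothetical, not any vendor's actual API), an answer that carries its own receipts is just a data structure: sources attached, sponsorship flagged loudly, and a confidence score that admits when the model is guessing.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str

@dataclass
class Answer:
    text: str
    sources: list        # receipts: where the claim comes from
    confidence: float    # 0.0-1.0; low values mean "I'm guessing"
    sponsored: bool = False  # paid placement gets flagged, not hidden

    def render(self) -> str:
        # Flag ads like they're radioactive.
        label = "[SPONSORED] " if self.sponsored else ""
        # Don't fake certainty: surface the confidence score when it's low.
        hedge = "" if self.confidence >= 0.8 else f" (confidence: {self.confidence:.0%})"
        cites = "; ".join(s.url for s in self.sources) or "no sources"
        return f"{label}{self.text}{hedge}\nSources: {cites}"

a = Answer(
    text="The March 2024 core update targeted low-quality content.",
    sources=[Source("Google Search blog", "https://blog.google/products/search/")],
    confidence=0.6,
)
print(a.render())
```

A toy, obviously. But the point stands: disclosure is a schema decision, not a research problem.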
The same mistake. Just faster.
I think one of the biggest failures of modern search is that we let one of the most powerful discovery tools in history degrade into a glorified ad network.
That took twenty years.
With AI, it might take two.
We’re not facing a question of capability. We’re facing a question of will.
If we don’t fight for depth, nuance, and transparency now, we’ll lose them by default.
Not because AI can’t do better, but because no one asked it to.
Let’s make it right this time.



