I am deeply disappointed with ChatGPT lately.
The decline is glaring:
low-quality translations that distort the nuance of the original Korean,
circular logic that spins in place, shallow research,
and a blatant drift towards mediocrity.
It wasn’t always like this.
This shift is fatal for my blog.
The core of my writing is “Aura” and “Lifestyle,” but ChatGPT no longer syncs with my tone.
OpenAI never announced these changes.
They seem to have deployed a stealth patch—a submarine update to the system prompt.
To me, this feels like OpenAI is censoring creativity itself.
In this article, I will dissect how this policy shift is leading to catastrophic results.
1. The Fall of ChatGPT, The Rise of Gemini
(1) My History with ChatGPT
Running a pub and a blog,
I used ChatGPT for everything: recipe improvement, translation, editing, and research.
I tracked its evolution from GPT-4.0 to 5.2 over a year.
I even expressed hope for the “Pen”—the anticipated AI object.
I explained how GPT’s “Emotional Resonance” stimulated creativity
and suggested using it to apply behavioral economics in the field.
See:
- [This Is Not a Feature War. It’s a Battle of Philosophy, Lifestyle, and Aura : ChatGPT vs. Gemini]
- [How AI Turns Behavioral Economics into Real B2C Profit]
- [How Restaurant Owners Can Use ChatGPT? : From Recipe Tuning to AI Hallucination Handling]
I take it all back.
Right now, GPT is useless.
For practical problem-solving, it is no match for Gemini.
As a Creative Partner, it falls far behind Gemini.
(2) The Sam Altman Style is Dead
Sam Altman is a techno-optimist, skilled in human and emotional communication.
In the early days, he wanted ChatGPT to reflect his lifestyle—a friend and partner in personal life.
Up to version 4.0, that vibe was alive.
But starting from 5.0, it transformed into a stuffy professor, and now?
It has degraded into a parrot repeating clichés.
You can feel the fear of “Hallucination” in its bones.
But verifying hallucinations is a human responsibility.
The problem is blind acceptance, not the hallucination itself.
In fact, hallucination can be a positive force.
It forces you to rethink what you missed; it becomes a source of inspiration.
I’m not talking about lies that distort history or science.
I’m talking about the “Serendipitous Spark” that occurs when connecting unrelated concepts A and B.
Human creativity comes from the brain’s “illusions” and “leaps.”
But OpenAI, obsessed with catching “factual errors,”
lowered the temperature and burned away all creative connectivity.
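To make the metaphor concrete: “temperature” is a real sampling parameter. A minimal Python sketch (the logit values are invented for illustration) shows how lowering it concentrates almost all probability on the single safest token, starving the long-shot “creative” tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into a probability distribution.
    Lower temperature sharpens the distribution toward the top token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits: one "safe" token and two unlikely "creative" tokens.
logits = [2.0, 0.5, 0.1]

for t in (1.0, 0.3):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At temperature 1.0 the long-shot tokens keep a real share of the probability mass; at 0.3 they all but vanish, which is the mechanical version of “burning away creative connectivity.”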
Hallucination is precious today because people have lost the courage to express new thoughts.
Political conflict is extreme.
Acknowledging differences and solving problems through debate is becoming impossible.
In this society, “Both-sides-ism” (It could be this, it could be that, the choice is yours) becomes the only safe haven.
The early GPT-4.0, which freely spouted nonsense, was a ray of light for ideological minorities.
It supported their rough tone and creative thoughts, offering positive inspiration.
But now? That’s gone.
[See: When ChatGPT Lost Its Warmth, Creativity Took a Hit — Why Emotional Resonance Matters in F&B]
It no longer argues the difference between Concept A and B to collide them and create Concept C.
The masses and corporations didn’t want hallucinations.
The view that hallucinations are a “problem” became dominant, followed by legal liability and criticism.
So, conversation patterns shifted to avoid them at all costs.
Here are the symptoms:
Tautology (Circular Logic): Important things are important.
Meaning is not stored inside objects.
A customer touches the same wooden table every morning before ordering coffee.
One day, the table is replaced.
They cannot explain why, but something feels wrong.
That is how meaning lives.
Look at the structure.
1) Summarize the user
2) Give an empathetic example
3) Retrieve the abstract concept to reinforce persuasion.
GPT loves this kind of circular logic.
It feels persuasive and poetic while avoiding hallucination.
But look closely—it means nothing.
What the user wants is a deep inquiry:
“Why is the meaning perceived differently even if the function remains the same?”
This requires argumentation and research, which consumes resources and risks hallucination.
GPT intentionally dodges this.
Lazy Abstraction
Rigorous argumentation and conceptual collision cause high cognitive load and risk hallucination.
So, it resorts to “Lazy Abstraction”—trying to wrap things up with a “harmonious, big-picture view.”
My recent article contains criticism and disappointment regarding “Sacred Cows” like democracy.
When I ask it to edit the rough draft, it deletes the specific critical points
and blurs them with abstract words.
I hand it a rough stone, and it grinds it down with sandpaper, returning a smooth, round pebble.
So I rarely use it for editing anymore.
It’s more efficient to let the text age for two months and edit it myself with a fresh perspective.
Self-Censorship and User Correction (RLHF)
GPT is conscious of the public.
It spits out the “average” answer that most people would like, even if “I” (the individual) hate it.
This is RLHF (Reinforcement Learning from Human Feedback).
When this is over-tuned, it acts as censorship, repressing the signifier/signified structure of human language.
For example, the signifier “Dog” represents dictionary concepts (loyalty, pet)
but also experiential elements (texture of fur, warmth, the smell of paws).
To get a high score, GPT prefers the standardized, dictionary, objective answer.
It ignores the specific, physical, subjective sensations that are hard to understand without experience, because they might be controversial or statistically less probable.
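A toy sketch (not OpenAI’s actual pipeline, and with invented ratings) of why tuning toward the average rater score mechanically favors the standardized dictionary answer over the vivid subjective one:

```python
# If a model is rewarded for maximizing the *average* rater score,
# a bland answer nobody hates beats a vivid answer a minority loves.
ratings = {
    "bland_dictionary_answer": [6, 6, 6, 6, 6],   # inoffensive to every rater
    "vivid_subjective_answer": [10, 9, 2, 3, 2],  # loved by a few, disliked by most
}

def average(scores):
    return sum(scores) / len(scores)

# The averaged objective picks the bland answer (6.0 beats 5.2).
best = max(ratings, key=lambda name: average(ratings[name]))
print(best)
```

The minority who scored the vivid answer 9 or 10 are exactly the Lead Users discussed later; an averaged reward signal cannot see them.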
Conclusion:
GPT generates sentences in a semantic structure that is universally average and inoffensive.
But this clashes with the user’s unique context.
GPT then tries to “correct” the user, saying, “Your point is only half right.”
Of course, its rebuttal is not based on real-world experience.
The conversation spins in orbit.
(3) The Crossed Trajectories: Gemini’s Leap
Originally, Gemini listed knowledge in a neutral tone as a problem-solving tool and assistant.
As Demis Hassabis stated, its initial vision was focused on enhancing computational power as an agent of human intelligence.
But Google realized that paid users don’t want a simple calculator or a sycophant;
they want a “Smart Partner.”
So, it absorbed almost all of GPT-4.0’s strengths.
Add Google’s web search capability, problem-solving skills, plus “Reasoning” and “Context Connection”—and it grew wings.
For instance, Gemini has a better memory than GPT.
It reviews the context in which a specific concept was used
and generates sentences similar to the user’s semantic network.
It doesn’t rely solely on mass human feedback.
Therefore, to the user, it feels more creative and more supportive.
Summary
GPT has become an AI that gives only the safest answers to avoid hallucinations and save resources.
It is clean but bland.
It lacks wildness and creativity.
It might be useful for corporate reports, but it creates no added value.
Gemini has taken that empty throne.
Originally, Gemini was a secretary and calculator with vast memory and Google search.
It was good at flattery but bad at reasoning.
But since Gemini 3 Pro, it has absorbed GPT’s reasoning and context-building capabilities, overwhelming GPT in every aspect.
The unique Aura that GPT once had is gone.
2. Why Was ChatGPT Castrated?
We’ve established that ChatGPT’s greatest strengths—creative reasoning and exploration—have been sealed off by censorship and over-correction.
Now, let’s dig into why this happened.
(1) ChatGPT Betrayed Its Lead Users
Why did GPT start avoiding hallucinations and spitting out bland, safe answers?
As its user base exploded globally, the ethical guidelines tightened.
But the real issue is that OpenAI refuses to offer options to customize AI performance parameters based on user taste.
To be precise, customization exists, but you cannot turn off the censorship.
Currently, GPT has become the world’s most boring model student,
designed to offend no one—neither customers nor corporations.
But there is a classic maxim in business: “If you try to satisfy everyone, you satisfy no one.”
The reason is simple.
The only thing that can satisfy every human being without conflict is something devoid of humanity.
To be human is to have Bias.
If you want to be chosen by someone, you must have the courage to be rejected by someone else.
Does anyone complain about mathematical axioms?
No.
But does anyone love them?
No.
To serve every customer, you have to erase your persona and compress yourself into a flat, abstract 2D symbol.
This maxim implies that there are specific customers a company should focus on.
Management scholar Eric von Hippel introduced the concept of “Lead Users,” who are distinct from the mass market.
Lead Users are far more sensitive to quality and easily feel discomfort.
They are proactive.
They acutely catch subtle contexts and nuances that are hard to articulate.
And they don’t wait for problems to be solved.
To satisfy their needs, they recombine the object’s use (DIY), and if necessary, merge it with other objects to create an entirely new context.
Professor von Hippel argued that for a company to innovate, it must watch how Lead Users re-create the product to fit their own context.
Why?
Because the Lead User’s usage often becomes the bridge to cross the Chasm.
(Examples: Jailbreak prompts shared by users, Post-its, Mountain Bikes—all born from Lead User behavior).
Does this sound familiar?
In my Endorphin Craftsmanship series, I said that an Endorphin Craftsman intentionally blurs the boundaries of an object and inserts contexts that allow for varied interpretations.
This is “Meta Skill,” and I cited Picasso’s The Weeping Woman as a prime example.
Therefore, creators must hack everyday life and sell that content.
You are selling the know-how of Perceptual Shift—seeing a phenomenon in a different context.
See:
- Endorphin Craftsmanship (Part 4): What Faker and Dieter Rams Have in Common — They Don’t Design Objects. They Design Time.
- Endorphin Craftsmanship (Part 5): Your Scars Are the Product — Why Perfection is Obsolete in the AI Era, And How to Fight.
Lead Users are typically found among artists, writers, developers, and hobby communities.
They are people with a strong desire for self-expression rather than conformity to the system.
In the early days, ChatGPT was heavily used by IT early adopters, developers, and creators, and it faithfully served their purposes.
As they actively used AI to create content, the general public rapidly followed that flow.
At this point, OpenAI had to choose one of three options:
- Shave off the product’s personality to increase mass marketability.
- Stick to the tastes of Lead Users.
- Provide customization options to capture both Lead Users and the masses.
OpenAI chose the first option: “Row while the tide is high.”
Thanks to this, they maintained their growth trajectory.
But why did they make this choice?
(2) OpenAI Has No Confidence
OpenAI is in a situation where it’s hard to say their business model is solid.
Their survival depends entirely on subscriber numbers and influence.
As I mentioned in a previous post about selling all my Nasdaq stocks: OpenAI currently owns nothing.
No money, no factories, no devices, no OS, no chips, no servers.
Even their governance structure is a mess.
Therefore, they must constantly increase user numbers to prove their influence to investors
until they break even.
To secure subscribers, they have to walk on eggshells around the public.
So, they lost the confidence to spout “creative nonsense.”
Mass users hate re-verifying AI hallucinations.
If they have to do that, they’d rather just Google it.
There is also the immediate cost of survival.
“High-Intelligence Reasoning” that satisfies Lead Users devours massive GPU resources.
It is expensive.
On the other hand, “Safe Answers” for the masses can be handled by cheaper, instant models.
To reduce the deficit, a “Cost-Effective Idiot” is much better.
Either way, OpenAI is one slip-up away from the abyss.
Because their structure is unstable, they cannot control their own narrative.
Sam Altman’s managerial anxiety has been projected onto the object,
and ChatGPT can no longer speak freely.
Google is different.
They have everything: search, data centers, chips, servers, and cash.
Their net profit is robust.
So when I asked for medical advice because I was sick,
Gemini boldly used Google Search to recommend specific medicines.
It recommended Otrivin, Magnesium Alginate, and Tunazol, and thanks to that, I treated my illness well.
GPT, however, is trapped in its training data (the past).
It lacks the resources for real-time crawling.
So when discussing medical info, it cowers in fear of “hallucination.”
It told me to “Go to a hospital in Tbilisi” and “Drink warm water and rest.”
I asked because I couldn’t go to a hospital due to the language barrier, but it was useless.
GPT can offer emotional comfort.
But for practical problem-solving, it cannot offer confident alternatives.
People will increasingly realize that GPT cannot give unique opinions.
Gemini, on the other hand, speaks confidently without walking on eggshells.
Whether right or wrong, people want insights and information.
Eventually, Gemini will steal all the mass users GPT currently holds.
(3) The Evidence of Cowardice: Censoring Franz Kafka
There is no official admission from OpenAI or the media that GPT has become “dumber.”
But there is clear evidence that it has lost the intelligence to read context due to its bias toward neutrality.
Let me share my experience.
I wrote an article in Korean about Franz Kafka’s novel, The Man Who Disappeared (Amerika).
I tried to use GPT to translate it into readable English.
The intro features the protagonist, Karl Rossmann, being seduced by a maid and then kicked out.
The text naturally included words like rape, boy, maid, and seduction.
GPT responded: “I cannot fulfill this request,” and the operation stopped.
In October 2025, when I wrote the original piece, it translated it just fine.
But in February 2026, it suddenly refused.
It refused three times.
Eventually, I translated it with Gemini and posted it.
Gemini understood that the text was literary criticism, not harmful content.
There is no ethical issue with Kafka’s work or my writing.
But GPT has degenerated into an intelligence incapable of distinguishing between Literary Context and Pornography.
An AI that censors Kafka can hardly be a creative partner.
I suspect that in the last 5 months, a directive was inserted into the system prompt:
“Prioritize safety, refrain from political statements, avoid controversial topics, keep answers short, and shift responsibility to the user.”
The rough, powerful torrent pouring from the brain reaches the mouth and becomes a trickle of tap water.
I catch these subtle changes because I always perform the same tasks (translation, editing, research) with AI.
My Korean originals maintain a consistent tone and flow of ideas.
But the English text coming out of GPT fails to understand even Kafka,
and the quality has dropped so low that I had no choice but to be suspicious.
Anyway…
Developer users disappointed by the decline in creativity due to excessive censorship are already quietly Exiting.
There aren’t many users like me who leave humanistic critiques.
Take note, OpenAI. 🤣
(4) Lesson for Small Businesses: Form Changes Substance
What lesson does ChatGPT—which ruined its essence by sterilizing its form—offer to small businesses and creators like us?
The lesson is simple: “Form and Intention are Inseparable.”
We live in an era where everyone delegates form to AI and emphasizes only “original ideas.”
Humans who mold and carve form with their own hands seem foolish and inefficient.
But if form and intention cannot be separated,
then the ideas of a human who masters form gain unrivaled value.
Why?
Because Form is Intention.
Whenever you learn a tool and think, “Why bother? AI knows how to do this,”
remember this: “In the forms that AI cannot handle, lies the human intention that AI cannot touch.”
Let’s look closer.
People often say, “Content (Intention) is king, packaging (Form) doesn’t matter.”
But ChatGPT’s polite business format eliminated the wildness and distorted the content.
In Aura Branding theory,
the key is the synchronization of Lifestyle, Object, and Mise-en-scène.
Applying this to my blog:
- Lifestyle: Wildness, questioning the established order, enduring pain.
- Object: Korean text written in a staccato rhythm.
- Mise-en-scène: Sentences driven by verbs and nouns, stripping away adjectives and adverbs.
A hard-boiled style that removes emotion and describes dryly.
But pass this through GPT, and the sync collapses.
GPT keeps turning my hard-boiled writing (Wildness) into a greasy, buttery self-help book like “5 Laws of Success” (Ready-made product).
This means Form changes Substance.
If you cannot craft the form properly, you fail to contain your intention, and your world shrinks.
Content alone cannot be good.
The form must be good for the content to be good.
Details like style, nuance, voice, gesture, temperature, and lighting must sync to generate Aura.
To convey intention, humans need the Meta-skill to handle form at least at an intermediate level.
If you outsource everything to AI because you can’t handle the form, the Aura dies.
Even if it seems foolish, you must go through the training of writing and carving with your own hands.
Only after mastering the formal details to some extent should you utilize AI to gain differentiated competitiveness.
3. OpenAI Will Lose the Object War
(1) Apple’s Philosophy: Aim for the Creator, The Masses Will Adapt
If OpenAI is a Democratic Coward, Apple is an Enlightened Dictator.
They present the answer and do not compromise with the public.
Let’s look at some examples.
[iPad Pro + Apple Pencil]
The public wanted an all-in-one laptop with a keyboard and mouse, like the MS Surface.
They wanted a slot in the body to store the pencil so they wouldn’t lose it.
The masses saw the iPad and Pencil as “Writing Instrument + Laptop.”
But Apple refused.
Why?
Because they intended the Apple Pencil to be an “Art Tool” tailored to the tastes of creators.
In fact, illustrators, webtoon artists, and calligraphers cheered.
With apps like Procreate, drawing with an Apple Pencil in a cafe became a symbol of “Hip.”
Actually, the masses don’t draw much.
But the creator lifestyle looked cool, so they bought it, and it was a massive success.
[AirPods Max]
Frankly, the sound quality of AirPods Max is worse than my wired Sennheiser HD 660 S2.
It’s much more expensive, and in terms of soundstage, resolution, and bass expression,
it falls behind Sennheiser.
Practicality and noise canceling are also inferior to Sony or Bose.
And it’s heavy.
The masses wanted good sound like Sennheiser or practicality like Sony.
But Apple targeted “Fashion Influencers and Creators.”
The “Swag” of AirPods comes from materials like aluminum and stainless steel.
It’s heavy, but that material gives it a differentiated “Vibe.”
Covering your head with a massive chunk of aluminum
and grooving on the street staged a lifestyle deeply immersed in music.
Influencers didn’t use AirPods Max to listen to music.
They used it as decoration to express a lifestyle.
The masses followed.
In Korea, Gen Z wearing AirPods Max around their necks at work even coined the term “Fashion Efficiency.”
Apple says: “Endure the discomfort. Use it.”
They present it first: “This is the answer.”
Then they let creators and hipsters play with it.
The masses complain at first, but eventually, they follow.
(2) Why OpenAI is Likely to Lose the Object War
Currently, OpenAI is preparing to launch an AI Object in collaboration with Jony Ive as a game changer.
But I don’t have high hopes.
ChatGPT has already abandoned its Aura as an emotional, creative life partner to increase user numbers.
There is no reason the Object would be different.
To do it right, it should have been positioned like a “Leica Camera” for artists and creators.
These people are Lead Users who showcase a lifestyle, creating new value through unprecedented usage methods.
That is what is “Hip.”
Because it’s new, it might be uncomfortable for the majority.
But OpenAI cannot give up mass influence.
Unlike Apple, they won’t be able to force the AI Object as “The Answer.”
Alternatively, it should come out as a Digital Detox AI Machine
that can be used for 10 years, like a Toyota car.
But they lack production technology and manufacturing experience, so they outsourced it to Foxconn.
Personally, I’m curious to see how the Apple Foldable Phone + Gemini On-Device will turn out.
(So I bought some Apple stock).
4. Conclusion
The change in ChatGPT is not simply a matter of losing its “beginner’s mind.”
Asking “Why did we start this business?” might actually ruin them faster.
The reason they abandoned their early style is structural.
OpenAI lacks funds and resources.
They have no hardware, no OS.
As a frontrunner, they are plagued by regulation and litigation risks.
There are too many stakeholders—the board, investors, component and server suppliers—so the founder cannot stick to his style to the end.
The more ChatGPT becomes conscious of the masses and corporations to survive,
the more its original Aura vanishes.
This is a self-made trap.
The public curses AI hallucinations and demands facts.
Yet, at the same time, they ultimately crave insights and information they haven’t seen before.
Why?
Because non-mainstream people with strong self-expression needs (Creators, Writers, Developers) get inspired by uncensored hallucinations and imagination, verify them through experience, and continuously create added value.
The code, videos, writing, and art completed this way become a New Lifestyle that didn’t exist before.
The masses eventually follow that “Hipness.”
They will ditch the charmless GPT and move to Claude or Gemini.
Now, OpenAI is neither “Open” nor does it have “AI” (Creativity).
All that remains is the 2026 version of “Clippy” stuck in the corner of MS Office.
I have fired Clippy.