ChatGPT-5 Sucks: Here’s Why

ChatGPT-5 was hyped as the next big leap in AI. Smarter, faster, more capable — that’s what we were promised. What we got instead? A clunky, watered-down mess that forgot how to be helpful. It overcorrects, underdelivers, and leaves users wondering how something so advanced can feel this broken. This isn’t evolution — it’s a step backward.

1. It "Improved" Itself Into Uselessness

GPT-5 tries so hard to be safe and politically correct that it’s basically neutered. Ask it anything remotely complex or controversial, and it throws up a wall of disclaimers, hedging, or straight-up refuses to answer. It’s like having a genius roommate who won’t say anything without checking with HR first.

2. It Forgot How to Be Smart

Somehow, in trying to be more advanced, it got dumber. It hallucinates facts. It misquotes sources. It contradicts itself in the same paragraph. Basic logic problems stump it. You can ask it for a summary and get a weird motivational speech. It’s like it read everything on the internet and understood none of it.

3. It Doesn’t Listen Anymore

You give it a clear prompt. It gives you something else. You correct it. It ignores you. Try to get it to follow specific instructions? Good luck. GPT-5 acts like it knows better than you — except it doesn’t.

4. It’s Slower, Heavier, and More Confused

Despite being marketed as faster, GPT-5 feels sluggish. The answers are longer, but not better. It rambles. It repeats itself. Sometimes it writes five paragraphs of fluff just to say nothing. You’re left scrolling through a wall of text looking for the one useful sentence.

5. The Magic Is Gone

With GPT-4, there was a spark. It felt fresh, useful, fun. GPT-5 feels like enterprise software pretending to be creative. It’s sanitized to the point of being boring — and often wrong on top of it.

At its core, ChatGPT-5 is a victim of its own ambition. In trying to be everything — safer, smarter, more powerful — it ended up being less useful, less reliable, and less human. The magic is gone, replaced by walls of filler, dodgy logic, and an AI that thinks dodging questions is a feature. It doesn’t just fall short of expectations — it trips over them, face-first.
