Why GPT-4 is much better than GPT-4o

GPT-4 vs GPT-4o

I could write a lengthy explanation of why I prefer answers from GPT-4 over GPT-4o, but that would be subjective. Instead, in this post I'll document why GPT-4o's answers are just plain bad, wrong, incomplete, or harmful, whereas GPT-4's answers are spot on or definitively better.

Unfortunately, I'm afraid that for 90% of use cases no one will notice, and soon, after more than a year, the era of GPT-4 in the ChatGPT interface will be over, as OpenAI is burying this model deeper and deeper in its settings (it's already labeled as Legacy). It simply costs them far more to run.

GPT-4 vs GPT-4o Examples

What follows is a series of question-and-answer pairs comparing and explaining the difference in answer quality. It's meant to be updated over time.

Can I work with Universal Ctags passing the source code via stdin and tags on stdout?

GPT-4o

GPT-4

 

What’s the difference?

The GPT-4o answer is simply made up. Universal Ctags does NOT support reading source code from standard input, as it requires a filename to properly output tags information.

This one cost me at least an hour of my life, as I didn't catch it early on; see below.

Bonus: GPT-4o-with-canvas

This is the actual conversation that fooled me. As you can see, the model persisted.

There is no --_stdin option. There is, however, --_interactive (added in 6.0.0), which accepts JSON commands via standard input, but not the source code itself, so it does something different and does not serve my purpose.
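For the record, here is a sketch of what Universal Ctags actually supports: tags can be written to standard output with -f -, but the input must still be a file path, and the experimental --_interactive JSON mode likewise takes a filename in its commands rather than source text. (The exact commands below are my illustration, assuming a recent Universal Ctags is installed as ctags.)

```shell
# Create a trivial source file to index.
printf 'int main(void) { return 0; }\n' > hello.c

# Tags can go to stdout with "-f -", but the source must be a real file path:
ctags -f - hello.c

# The experimental JSON interactive mode also takes a filename, not source text:
echo '{"command": "generate-tags", "filename": "hello.c"}' | ctags --_interactive
```

So neither option lets you pipe the source code itself through standard input, which is what the original question asked for.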

Can I somehow transparently forward whole traffic from a given process to mitmproxy?

GPT-4o

GPT-4

What’s the difference?

The GPT-4o answer seems OK but misses important steps; it simply will not work. The GPT-4 answer, on the other hand, is flawless: it does exactly what it was expected to do.
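For reference, the usual recipe on Linux goes along these lines (a hedged sketch of the general technique, not a transcript of either model's answer; the user name proxied and port 8080 are my assumptions): run the target process as a dedicated user, redirect that user's outbound TCP traffic into mitmproxy with an owner-matching iptables rule, and run mitmproxy in transparent mode.

```shell
# Start mitmproxy in transparent mode (recent versions; listens on 8080 by default).
mitmproxy --mode transparent --showhost &

# Redirect HTTP/HTTPS traffic of the dedicated user "proxied" into mitmproxy.
# Matching on the owner UID avoids looping mitmproxy's own traffic back into itself.
sudo iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner proxied \
    --dport 80  -j REDIRECT --to-port 8080
sudo iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner proxied \
    --dport 443 -j REDIRECT --to-port 8080

# Run the process whose traffic should be intercepted as that user.
sudo -u proxied curl http://example.com/
```

For HTTPS traffic to be decrypted rather than merely relayed, the mitmproxy CA certificate must additionally be trusted by the intercepted process, which is one of the steps that is easy to omit.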
