From Commands to Collaboration

With older LLMs, prompts were like precise commands to a stubborn child.

But modern models, like the new GPT-4o, behave more like collaborators.

They work better when you give them:

The why (background)
The how (examples)
The goal (framing)
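As an illustration, those three ingredients can be assembled programmatically. This is a minimal Python sketch; the function and field names here are my own for illustration, not a standard API:

```python
def build_prompt(background: str, examples: list[str], goal: str) -> str:
    """Assemble the why (background), how (examples), and goal (framing)
    into a single context-rich prompt."""
    example_text = "\n".join(f"- {ex}" for ex in examples)
    return (
        f"Background:\n{background}\n\n"
        f"Examples:\n{example_text}\n\n"
        f"Goal:\n{goal}"
    )

# Hypothetical usage: a log-triage task framed with all three ingredients.
prompt = build_prompt(
    background="Our API logs arrive as JSON Lines, one event per line.",
    examples=['{"level": "ERROR", "msg": "timeout"} -> alert'],
    goal="Classify each log line as 'alert' or 'ignore'.",
)
print(prompt)
```

The point is not the helper itself but the habit: every prompt carries background, examples, and an explicit goal instead of a bare instruction.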

Why the shift?

The expanded context window (128K+ tokens) allows deeper understanding.

In short, before crafting your query, ask yourself two questions:

Context engineering asks, "What information does the model need to solve this problem autonomously?"
Prompt engineering asks, "How should I phrase this so the model stays focused within the problem’s context?"

Then, and only then, proceed.

Because LLMs aren't genies from Aladdin’s magic lamp.

And that's exactly why we are moving toward MCP (Model Context Protocol).

Originally shared on LinkedIn
