
Prompt Engineering: The next big skill

I genuinely believe prompt engineering will be one of the most valuable skills this year.

Not as a buzzword. Not as “clever phrasing.”

But as a real engineering discipline.

What most people still miss is this:

Prompting is not about asking better questions.

It is about designing intent, constraints, and feedback loops for reasoning systems.

The people getting real leverage from AI today are doing things like:

  • Breaking problems into structured, verifiable steps
  • Encoding context and rules instead of repeating instructions
  • Designing prompts that scale across sessions and teammates
  • Treating prompts as versioned artifacts, not one-off text (see the sketch below)

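To make the last two points concrete, here is a minimal sketch, assuming a plain Python setup, of what treating a prompt as a versioned artifact can look like. PromptTemplate, render, and CODE_REVIEW are hypothetical names for illustration, not part of any particular library; the point is that the rules and steps are encoded once, carry a version, and can be shared across sessions and teammates.

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    """A prompt treated as a versioned artifact, not one-off text."""
    name: str
    version: str
    rules: list[str]   # context and constraints, encoded once
    steps: list[str]   # structured, verifiable steps

    def render(self, task: str) -> str:
        rules = "\n".join(f"- {r}" for r in self.rules)
        steps = "\n".join(f"{i}. {s}" for i, s in enumerate(self.steps, 1))
        return (
            f"# {self.name} (v{self.version})\n\n"
            f"Rules:\n{rules}\n\n"
            f"Work through these steps and show each result:\n{steps}\n\n"
            f"Task: {task}"
        )

# One artifact, reused and reviewed like code; changes get a new version.
CODE_REVIEW = PromptTemplate(
    name="code-review",
    version="1.2.0",
    rules=[
        "Comment only on correctness and readability, not style preferences.",
        "Quote the exact lines you are referring to.",
    ],
    steps=[
        "Summarize what the change is trying to do.",
        "List potential bugs, each with the evidence for it.",
        "Suggest the smallest fix for each issue.",
    ],
)

print(CODE_REVIEW.render("Review the attached diff for the payments service."))
```

Nothing here depends on a specific model or vendor; the value is that the prompt becomes reviewable, diffable, and testable like any other artifact.
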
In other words, prompting is becoming interface design for intelligence.

The recent thread by Claude Code’s creator really reinforced this shift for me.

It is no longer “chat with an LLM.” It is “build systems where LLMs think the way you want them to.”

This feels similar to the early days of software engineering. At first, everyone typed random commands. Then patterns emerged. Then best practices. Then real engineering.

The same thing is happening now with prompting.

High-Signal Resources

If you want to go deep, here are some resources that actually helped shape my thinking:

  • Andrej Karpathy – “LLMs as Operating Systems” talks and notes. A great mental model for thinking about prompts as control layers.

  • Anthropic’s prompting guides and Claude system prompt examples. Very practical for structured reasoning and long-context workflows.

  • OpenAI Cookbook. Especially the sections on tool use, function calling, and structured outputs.

  • Prompt patterns like ReAct, Chain-of-Thought, Self-Reflection, and Verification loops. These are not tricks; they are reusable reasoning primitives (see the sketch after this list).

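To show what “reusable reasoning primitive” means in code, here is a minimal sketch of a verification loop. It assumes only a generic model_call(prompt) -> str function standing in for whatever client you actually use; none of the names come from a real library. The pattern: generate an answer, have the model check it against explicit constraints, and feed any violations back in for another attempt.

```python
def verification_loop(task: str, constraints: list[str], model_call, max_attempts: int = 3) -> str:
    """Generate -> verify -> revise, instead of trusting a single completion.

    model_call(prompt: str) -> str is a placeholder for your actual LLM client.
    """
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    answer = model_call(f"Task: {task}\n\nConstraints:\n{constraint_text}\n\nAnswer:")

    for _ in range(max_attempts):
        verdict = model_call(
            "You are a strict reviewer.\n"
            f"Constraints:\n{constraint_text}\n\n"
            f"Candidate answer:\n{answer}\n\n"
            "Reply with PASS if every constraint is satisfied, "
            "otherwise list each violation."
        )
        if verdict.strip().startswith("PASS"):
            return answer
        # Feed the violations back in and try again.
        answer = model_call(
            f"Task: {task}\n\nConstraints:\n{constraint_text}\n\n"
            f"Previous answer:\n{answer}\n\n"
            f"Problems found:\n{verdict}\n\n"
            "Rewrite the answer so every constraint is satisfied:"
        )
    return answer  # best effort after max_attempts
```

ReAct and Self-Reflection share the same loop shape; what changes is whether each iteration calls a tool or critiques the previous draft.
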
My Bet For This Year

The best engineers will not be the ones who prompt the most. They will be the ones who design the best prompting systems.