I find the word "engineering" used in this context extremely annoying. There is no "engineering" here. Engineering is about applying knowledge, laws of physics, and rules learned over many years to predictably design and build things. This is throwing stuff at the wall to see if it sticks.
Words often have multiple meanings. The “engineering” in “prompt engineering“ is like in “social engineering”. It’s a secondary, related but distinct meaning.
For example, Google defines the second meaning of "engineering" as:
2. the action of working _artfully_ to bring something about. "if not for his shrewd engineering, the election would have been lost"
Look up “engineering” in almost any dictionary, and it will list something along those lines as one of the meanings of the word. It is a well-established, nontechnical meaning of “engineering”.
Your posted definitions contradict your conclusion - I would argue there is nothing calculated (as parent poster said, there is no calculation, it just trying and watching what works), artful or skillful (because it's so random, what skill is there to develop?) about "prompt engineering".
And in fact, the first engines were developed without a robust understanding of the physics behind them. So, the original version of 'engineering' is more closely to the current practices surrounding AI than the modern reinterpretation the root comment demands.
I still like the Canadian approach that to have a title with the word Engineer in it you have to be licensed by the engineering regulator for the province you work in. The US way of every software dev, mechanic, hvac installer or plumber is an engineer is ridiculous.
Disagree. I think it's valid to describe your work as engineering if it is in fact engineering, regardless of credential. If the distinction is important, call it "<credential name> Engineer". But to simply seize the word and say you can't use it until you have this credential is authoritarian, unnecessary, rent seeking corruption.
> I still like the Canadian approach that to have a title with the word Engineer in it you have to be licensed by the engineering regulator for the province you work in.
That's just not true.
(Despite what Engineers Canada and related parasites tell you.)
You could make this same argument about a lot of work that fall onto "engineering" teams.
There's an implicit assumption that anything an engineer does is engineering (and a deeper assumption that software as a whole is worthy of being called software engineering in the first place)
Perhaps. My point is that the word "engineering" describes a specific approach, based on rigor and repeatability.
If the results of your work depend on a random generator seed, it's not engineering. If you don't have established practices, it's not engineering (hence "software engineering" was always a dubious term).
Throwing new prompts at a machine with built-in randomness to see if one sticks is DEFINITELY not engineering.
I've seen some good arguments recently that software engineering is weird in that computers ARE completely predictable - which isn't the case for other engineering fields, where there are far more unpredictable forces at play and the goal is to engineer in tolerances to account for that.
So maybe "prompt engineering" is closer to real engineering than "software engineering" is!
With distributed systems I'd say network unreliability introduces a good amount of unpredictability. Whether that's comparable to what traditional engineering disciplines see, I couldn't say. Some types of embedded programming, especially those deployed out in the field, might also need to account for non-favorable conditions. But the predictability argument is interesting nonetheless.
Engineers work with non-deterministic systems all the time. Getting them to work predictably within a known tolerance window and/or with a quantified and acceptable failure rate is absolutely engineering.
I found Hillel Wayne's series of articles about the relationship between software and other engineering disciplines from a few years fairly insightful [1]. It's not _exactly_ the same topic but a lot of overlap in defining wht is "real engineering".
It’s not engineering if you throw anything together without much understanding of the why of things.
But if you understand the model architecture, training process, inference process, computational linguistics, applied linguistics in the areas of semantics, syntax, and more— and apply that knowledge to prompt creation… this application of knowledge from systemic fields of inquiry is the definition of engineering.
Black box spaghetti-hits-wall prompt creation? Sure, not so much.
Part of the problem is the “physics” of prompting changes with the models. At the prompt level, is it Even Possible to engineer when the laws of the universe aren’t even stable.
Engineering of the model architecture, sure. You can mathematically model it.
(They're not an LLM fan; also: I directionally agree about "prompt" engineering, but the argument proves too much if it disqualifies "context" engineering, which is absolutely a normal CS development problem).
There is engineering when this is done seriously, though.
Build a test set and design metrics for it. Do rigorous measurement on any change of the system, including the model, inference parameters, context, prompt text, etc. Use real statistical tests and adjust for multiple comparisons as appropriate. Have monitoring that your assumptions during initial prompt design continue to be valid in the future, and alert on unexpected changes.
I'm surprised to see none of that advice in the article.
Indeed. Engineering is the act of employing our best predictive theorems to manifest machines that work in reality. Here we see people doing the opposite, describing theorems (and perhaps superstitions) that are hoped to be predictive, on the basis of observing reality. However insofar as these theorems remain poor in their predictive power, their application can scarcely be called engineering.
> Engineering is about applying knowledge, laws of physics, and rules learned over many years to predictably design and build things. This is throwing stuff at the wall to see if it sticks.
There’s one other type of “engineering” that this reminds me of…
1) Software engineers don't often have deep physical knowledge of computer systems, and their work is far more involved with philosophy and to a certain extent mathematics than it is with empirical science.
2) I can tell you're not current with advances in AI. To be brief, just like with computer science more broadly, we have developed an entire terminology, reference framework and documentation for working with prompts. This is an entire field that you cannot learn in any school, and increasingly they won't hire anyone without experience.
I saw a talk by somebody from a big national lab recently, and she was announced as the "facilities manager". I wondered for about 5 seconds why the janitor was giving a talk at a technical conference, but it turns out facility meant the equivalent of a whole lab/instrument. She was the top boss.
Unless you are going into a legal definition, where there's a global enumeration of the tasks it does, "engineering" means building stuff. Mostly stuff that is not "art", but sometimes even it.
Building a prompt is "prompt engineering". You could also call it "prompt crafting", or "prompt casting", but any of those would do.
Also, engineering also had a strong connotation of messing with stuff you don't understand until it works reliably. Your idea of it is very new, and doesn't even apply to all areas that are officially named that way.
First they came for science: Physics, Chemistry, Biology -vs- social science, political science, nutrition science, education science, management science...
Now they come for engineering: software engineering, prompt engineering...
Assume for the sake of argument, that this is literally sorcery -- ie communing with spirits through prayer.
_Even in that case_, if you can design prayers that get relatively predictable results from gods and incorporate that into automated systems, that is still engineering. Trying to tame chaotic and unpredictable systems is a big part of what engineering is. Even designing systems where _humans_ do all the work -- just as messy a task as dealing with LLMs, if not more -- is a kind of engineering.
> rules learned over many years
How do you think they learned those rules? People were doing engineering for centuries before science even existed as a discipline. They built steam engines first and _then_ discovered the laws of thermodynamics.
Here's my best advice of prompt engineering for hard problems. Always funnel out and then funnel in. Let me explain.
State your concrete problem and context. Then we funnel out by asking the AI to do a thorough analysis and investigate all the possible options and approaches for solving the issue. Ask it to go search the web for all possible relevant information. And now we start funneling in again by asking it to list the pros and cons of each approach. Finally we asked it to choose which one or two solutions are the most relevant to our problem at hand.
For easy problems you can just skip all of this and just ask directly because it'll know and it'll answer.
The issue with harder problems is that if you just ask it directly to come up with a solution then it'll just make something up and it will make up reasons for why it'll work. You need to ground it in reality first.
So you do: contrete context and problem, thorough analysis of options, list pros and cons, and pick a winner.
“Honey, which restaurant should we eat at tonight? First, create a list of restaurants and highlight the pros and cons of each. Conduct a web search. Narrow this down to 2 restaurants and wait for a response.”
Reminds me of a time that I found I could speed up by 30% an Algo in a benchmark set if I seed the random number generator with the number 7. Not 8. Not 6. 7.
It does make things non deterministic and complicated. Like it or not, this IS the job now. If you don't do it, someone else is going to have to.
In my AI application I made deliberate decisions to divorce prompt engineering from the actual engineering, create all the tooling needed to do the prompt engineering as methodically as possible (componentize, version, eval) and handed it off to the subject matter experts. Clearly people who think this is the equivalent of choosing a seed shouldn't be writing prompts.
I will just enjoy the job security that Seed Science provides me. At any day I could reduce the training costs of all these hyperscalers by 30%. Or maybe not.
The big unlock for me reading this is to think about the order of the output. As in, ask it to produce evidence and indicators before answering a question. Obviously I knew LLMs are a probabilistic auto complete. For some reason, I didn't think to use this for priming.
Note that this is not relevant for reasoning models, since they will think about the problem in whatever order it wants to before outputting the answer. Since it can “refer” back to its thinking when outputting the final answer, the output order is less relevant to the correctness. The relative robustness is likely why openai is trying to force reasoning onto everyone.
This is misleading if not wrong. A thinking model doesn’t fundamentally work any different from a non-thinking model. It is still next token prediction, with the same position independence, and still suffers from the same context poisoning issues. It’s just that the “thinking” step injects this instruction to take a moment and consider the situation before acting, as a core system behavior.
But specialized instructions to weigh alternatives still works better as it ends up thinking about thinking, thinking, then making a choice.
Furthermore, the opposite behavior is very, very bad. Ask it to give you an answer and justify it, it will output a randomish reply and then enter bullshit mode rationalizing it.
Ask it to objectively list pros and cons from a neutral/unbiased perspective and then proclaim an answer, and you’ll get something that is actually thought through.
I typically ask it to start with some short, verbatim quotes of sources it found online (if relevant), as this grounds the context into “real” information, rather than hallucinations. It works fairly well in situations where this is relevant (I recently went through a whole session of setting up Cloudflare Zero Trust for our org, this was very much necessary).
I try so hard for chatgpt to link and quote real documentation. It makes up links, fake quotes, it even gaslights me when i clarify the information isn’t real.
"Engineering" here seems rhetorically designed to convince people they're not just writing sentences. With respect "prompt writing" probably sounds bad to the same type of person who thinks there are "soft" skills.
One could similarly argue software engineering is also just writing sentences with funny characters sprinkled in. Personally, my most productive "software engineering" work is literally writing technical documents (full of sentences!) and talking to people. My mechanical engineering friends report similar as they become more senior.
Yeah, precisely what I'm saying. I don't think "they write prompt 'engineering' instead of 'writing' to maintain the fragile egos of people who use chatbots" [don't agree? See Mr. "but muh soft skills!" crying down thread] is worth saying outside of HN if I'm honest.
I dont think so. It says the words were choosen to wngineer peoples emotions and make then feel right way.
Tech people do not feel good about "writing propt essay" so it is called engineering to buy their emotional acceptance.
Just like we call wrong output "hallucination" rather then "bullshit" or "lie" or "bug" or "wrong output". Hallucination is used to make us feel better and more acceptiong.
This is written for the 3 models (Sonnet, Haiku, Opus 3). While some lessons will be relevant today, others will not be useful or necessary on smarter, RL’d models like Sonnet 4.5.
> Note: This tutorial uses our smallest, fastest, and cheapest model, Claude 3 Haiku. Anthropic has two other models, Claude 3 Sonnet and Claude 3 Opus, which are more intelligent than Haiku, with Opus being the most intelligent.
Yes, Chapters 3 and 6 are likely less relevant now. Any others? Specifically assuming the audience is someone writing a prompt that’ll be re-used repeatedly or needs to be optimized for accuracy.
Agree with the other commenters here that this doesn't feel like engineering.
However, Anthropic has done some cool work on model interpretability [0]. If that tool was exposed through the public API, then we could at least start to get a feedback loop going where we could compare the internal states of the model with different prompts, and try and tune them systematically.
Yesterday I was trying to make a small quantized model work, but it just refused to follow all my instructions. I tried to use all the tricks I could remember, but fixing instruction-following for one rule would always break another.
Then I had an idea: do I really want to be a "prompt engineer" and waste time on this, when the latest SOTA models probably already have knowledge of how to make good prompts in their training data?
Five minutes and a few back-and-forths with GPT-5 later, I had a working prompt that made the model follow all my instructions. I did it manually, but I'm sure you can automate this "prompt calibration" with two LLMs: a prompt rewriter and a judge in a loop.
My workflow has gotten pretty lax around prompts since the models have gotten better. Especially with Claude 4.5 (and 4 before it) once they have a bit of context loaded about the task at hand.
I keep it short and conversational, but I do supervise it. If it goes off the rails just smash esc and give it a course correction.
And then if you're coming from no context: I throw a bit more detail in at the start and usually start by ending the initial prompt with a question asking it if it can see what I'm talking about in the code; or if it's going to be big: I use planning mode.
So we've taught this thing how to do what we did and now we need to be taught how to get it to do the things we taught it to do. If this didn't have the entire US economy behind it, it would catch fire like a hot balloon.
I really struggle to feel the AGI when I read such things. I understand this is all of year old. And that we have superhuman results in mathematics, basic science, game playing, and other well-defined fields. But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
> But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
It's right there in the name. Large language models model language and predict tokens. They are not trained to deeply comprehend, as we don't really know how to do that.
Have you ever tried to get an average human to do that? It’s a mixed bag. Computers til now were highly repeatable relative to humans, once programmed, but hopeless at “fuzzy” or associative tasks. Now they have a new trick, that lets them grapple with ambiguity, but the cost is losing that repeatability. The best, most reliable humans were not born that way, it took years or decades of education, and even then it can take a lot of talking to transfer your idea into their brain.
LLMs mostly spew nonsense if you ask them basic questions on research or even master's degree-level mathematics. I've only ever seen non-mathematicians suggest otherwise, and even the biggest mathematician advocate for AI, Terry Tao, seems to recognise this too.
Ask yourself "what is intelligence?". Can intelligence at the level of human experience exist without that which we all also (allegedly) have... "consciousness". What is the source of "consciousness"? Can consciousness be computed?
Without answers to these questions, I don't think we are ever achieving AGI. At the end of the day, frontier models are just arithmetic, conditionals, and loops.
I find the word "engineering" used in this context extremely annoying. There is no "engineering" here. Engineering is about applying knowledge, laws of physics, and rules learned over many years to predictably design and build things. This is throwing stuff at the wall to see if it sticks.
Words often have multiple meanings. The “engineering” in “prompt engineering“ is like in “social engineering”. It’s a secondary, related but distinct meaning.
For example, Google defines the second meaning of "engineering" as:
2. the action of working _artfully_ to bring something about. "if not for his shrewd engineering, the election would have been lost"
(https://www.google.com/search?q=define%3AEngineering)
Merriam-Webster has:
3 : calculated manipulation or direction (as of behavior), giving the example of “social engineering”
(https://www.merriam-webster.com/dictionary/engineering)
Random House has:
3. skillful or artful contrivance; maneuvering
(https://www.collinsdictionary.com/dictionary/english/enginee...)
Webster's has:
The act of maneuvering or managing.
(https://www.yourdictionary.com/engineering)
Look up “engineering” in almost any dictionary, and it will list something along those lines as one of the meanings of the word. It is a well-established, nontechnical meaning of “engineering”.
While that may be true, I have a hard time believing that's relevant to the intent of people putting "engineer" into every job title out there.
Just scanning through I swear I saw the word mathturbation
this
the "engineering means working with engines" gibberish at the bottom is simply dishonest at best
"engineering" means "it's not guessing game"
Your posted definitions contradict your conclusion - I would argue there is nothing calculated (as the parent poster said, there is no calculation, it's just trying things and watching what works), artful, or skillful (because it's so random, what skill is there to develop?) about "prompt engineering".
I am a cereal eating engineer, while I review the cereal box specification.
I do that every morning, before applying my bus-taking engineering to my job.
Because I do prompt engineering for a living.
So many words lost their meaning today... I am glad I'm not the only one annoyed by this.
Of course, we're all comment engineers here!
If you are going to play that game, "engineering" used to mean that you worked with engines.
Words evolve over time because existing words get adapted in ways to help people understand new concepts.
And in fact, the first engines were developed without a robust understanding of the physics behind them. So the original version of 'engineering' is closer to the current practices surrounding AI than the modern reinterpretation the root comment demands.
I’ll be damned if Engineering isn’t connected to Ingenuity
I still like the Canadian approach, where to have a title with the word Engineer in it you have to be licensed by the engineering regulator for the province you work in. The US way, where every software dev, mechanic, HVAC installer, or plumber is an engineer, is ridiculous.
Disagree. I think it's valid to describe your work as engineering if it is in fact engineering, regardless of credential. If the distinction is important, call it "<credential name> Engineer". But to simply seize the word and say you can't use it until you have this credential is authoritarian, unnecessary, rent seeking corruption.
> I still like the Canadian approach that to have a title with the word Engineer in it you have to be licensed by the engineering regulator for the province you work in.
That's just not true.
(Despite what Engineers Canada and related parasites tell you.)
hey now don't disparage plumbers they are usually certified and licensed, unlike engineers :P
You could make this same argument about a lot of work that falls onto "engineering" teams.
There's an implicit assumption that anything an engineer does is engineering (and a deeper assumption that software as a whole is worthy of being called software engineering in the first place)
Perhaps. My point is that the word "engineering" describes a specific approach, based on rigor and repeatability.
If the results of your work depend on a random generator seed, it's not engineering. If you don't have established practices, it's not engineering (hence "software engineering" was always a dubious term).
Throwing new prompts at a machine with built-in randomness to see if one sticks is DEFINITELY not engineering.
I've seen some good arguments recently that software engineering is weird in that computers ARE completely predictable - which isn't the case for other engineering fields, where there are far more unpredictable forces at play and the goal is to engineer in tolerances to account for that.
So maybe "prompt engineering" is closer to real engineering than "software engineering" is!
With distributed systems I'd say network unreliability introduces a good amount of unpredictability. Whether that's comparable to what traditional engineering disciplines see, I couldn't say. Some types of embedded programming, especially those deployed out in the field, might also need to account for non-favorable conditions. But the predictability argument is interesting nonetheless.
The computer may be reliable but the data passing through it isn’t.
I call it "Vibe Prompting".
Even minor changes to models can render previous prompts useless or invalidate assumptions for new prompts.
Even minor changes to a chemical formulation can render previous process design useless or invalidate assumptions for a new formulation.
Changing the production or operating process in the face of changing inputs or desired outputs is the bread and butter of countless engineers.
Engineers work with non-deterministic systems all the time. Getting them to work predictably within a known tolerance window and/or with a quantified and acceptable failure rate is absolutely engineering.
How do you quantify or decide an acceptable failure rate for LLM output?
I found Hillel Wayne's series of articles from a few years ago about the relationship between software and other engineering disciplines fairly insightful [1]. It's not _exactly_ the same topic, but there's a lot of overlap in defining what is "real engineering".
[1] https://hillelwayne.com/post/are-we-really-engineers/
> There is no "engineering" here
Prompt engineering is honestly closer to the work of an o.g. engineer. Monitor the dials. Tweak the inputs. Keep the train on time.
It’s not engineering if you throw anything together without much understanding of the why of things.
But if you understand the model architecture, training process, inference process, computational linguistics, applied linguistics in the areas of semantics, syntax, and more— and apply that knowledge to prompt creation… this application of knowledge from systemic fields of inquiry is the definition of engineering.
Black box spaghetti-hits-wall prompt creation? Sure, not so much.
Part of the problem is that the “physics” of prompting changes with the models. At the prompt level, is it even possible to engineer when the laws of the universe aren’t even stable?
Engineering of the model architecture, sure. You can mathematically model it.
Prompts? Perhaps never possible.
The 'potatolicious rebuttal:
https://news.ycombinator.com/item?id=44978319
(They're not an LLM fan; also: I directionally agree about "prompt" engineering, but the argument proves too much if it disqualifies "context" engineering, which is absolutely a normal CS development problem).
I agree with you about what's described here.
There is engineering when this is done seriously, though.
Build a test set and design metrics for it. Do rigorous measurement on any change of the system, including the model, inference parameters, context, prompt text, etc. Use real statistical tests and adjust for multiple comparisons as appropriate. Have monitoring that your assumptions during initial prompt design continue to be valid in the future, and alert on unexpected changes.
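A minimal sketch of what that loop can look like, assuming a hand-labeled test set and a hypothetical `call_llm` stub standing in for your real client:

```python
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your real model call (provider, params, etc.)."""
    raise NotImplementedError

# Hand-curated (input, expected) pairs; in practice, sample from production traffic.
TEST_SET = [
    ("Classify the sentiment: 'great battery life'", "positive"),
    ("Classify the sentiment: 'screen died in a week'", "negative"),
]

def score(prompt_template: str) -> list[int]:
    """Per-case 0/1 scores: did the model's answer contain the expected label?"""
    return [
        int(expected in call_llm(prompt_template.format(input=text)).lower())
        for text, expected in TEST_SET
    ]

def sign_flip_p_value(baseline: list[int], candidate: list[int], iters: int = 10_000) -> float:
    """Paired permutation (sign-flip) test: how likely is an improvement this large
    if the prompt change actually made no difference?"""
    diffs = [c - b for b, c in zip(baseline, candidate)]
    observed = sum(diffs)
    hits = sum(
        1 for _ in range(iters)
        if sum(d if random.random() < 0.5 else -d for d in diffs) >= observed
    )
    return hits / iters

# Re-run on every change to the model, inference parameters, context, or prompt text.
# If you compare many variants, tighten the threshold (e.g. Bonferroni-correct it).
baseline = score("Answer with one word.\n{input}")
candidate = score("Answer with one word, positive or negative.\n{input}")
print(sum(candidate) / len(TEST_SET), sign_flip_p_value(baseline, candidate))
```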
I'm surprised to see none of that advice in the article.
This article talks about prompt evals https://www.anthropic.com/engineering/writing-tools-for-agen.... There are plenty of approaches to provide some degree of rigor around the slot machine output.
Indeed. Engineering is the act of employing our best predictive theorems to manifest machines that work in reality. Here we see people doing the opposite, describing theorems (and perhaps superstitions) that are hoped to be predictive, on the basis of observing reality. However insofar as these theorems remain poor in their predictive power, their application can scarcely be called engineering.
Is this an AI generated post?
> Engineering is about applying knowledge, laws of physics, and rules learned over many years to predictably design and build things. This is throwing stuff at the wall to see if it sticks.
There’s one other type of “engineering” that this reminds me of…
1) Software engineers don't often have deep physical knowledge of computer systems, and their work is far more involved with philosophy and to a certain extent mathematics than it is with empirical science.
2) I can tell you're not current with advances in AI. To be brief, just like with computer science more broadly, we have developed an entire terminology, reference framework and documentation for working with prompts. This is an entire field that you cannot learn in any school, and increasingly they won't hire anyone without experience.
I saw a talk by somebody from a big national lab recently, and she was announced as the "facilities manager". I wondered for about 5 seconds why the janitor was giving a talk at a technical conference, but it turns out facility meant the equivalent of a whole lab/instrument. She was the top boss.
Unless you are going by a legal definition, where there's a formal enumeration of the tasks it covers, "engineering" means building stuff. Mostly stuff that is not "art", but sometimes even that.
Building a prompt is "prompt engineering". You could also call it "prompt crafting", or "prompt casting", but any of those would do.
Also, engineering has long had a strong connotation of messing with stuff you don't understand until it works reliably. Your idea of it is very new, and doesn't even apply to all areas that are officially named that way.
I hear you. But what's integration in calculus? :)
This tutorial itself is very old (by recent AI innovation timelines).
Now it's all about context engineering, which is very much engineering.
First they came for science: Physics, Chemistry, Biology -vs- social science, political science, nutrition science, education science, management science...
Now they come for engineering: software engineering, prompt engineering...
:P
Assume for the sake of argument, that this is literally sorcery -- ie communing with spirits through prayer.
_Even in that case_, if you can design prayers that get relatively predictable results from gods and incorporate that into automated systems, that is still engineering. Trying to tame chaotic and unpredictable systems is a big part of what engineering is. Even designing systems where _humans_ do all the work -- just as messy a task as dealing with LLMs, if not more -- is a kind of engineering.
> rules learned over many years
How do you think they learned those rules? People were doing engineering for centuries before science even existed as a discipline. They built steam engines first and _then_ discovered the laws of thermodynamics.
Same when it's applied to programming though. "Software engineer" has always been a bit silly.
100% I’m old enough to remember when they were called “developers”. Now someone who codes in HTML and CSS is a “front end engineer”. It’s silly.
“Social Engineering”
We live in the dumbest fucking timeline.
Here's my best advice on prompt engineering for hard problems: always funnel out and then funnel in. Let me explain.
State your concrete problem and context. Then we funnel out by asking the AI to do a thorough analysis and investigate all the possible options and approaches for solving the issue. Ask it to search the web for all possible relevant information. Now we start funneling in again by asking it to list the pros and cons of each approach. Finally, we ask it to choose which one or two solutions are the most relevant to the problem at hand.
For easy problems you can just skip all of this and just ask directly because it'll know and it'll answer.
The issue with harder problems is that if you just ask it directly to come up with a solution then it'll just make something up and it will make up reasons for why it'll work. You need to ground it in reality first.
So you do: concrete context and problem, thorough analysis of options, list pros and cons, and pick a winner.
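A rough sketch of how that funnel might be scripted, with a hypothetical `ask` helper standing in for whatever chat API you use (the example problem is made up):

```python
def ask(history: list[dict], message: str) -> str:
    """Hypothetical helper: send `message` along with the prior turns in `history`,
    append both the message and the model's reply to `history`, and return the reply."""
    raise NotImplementedError

history: list[dict] = []

# Concrete context and problem (made-up example), then funnel out:
# survey every plausible approach before committing to anything.
ask(history,
    "Context: our Postgres-backed job queue stalls at around 5k jobs/min because "
    "workers polling the table contend on row locks.\n"
    "Investigate all possible approaches to fixing this (search the web if you can). "
    "Do not pick one yet.")

# Funnel in, step 1: explicit pros and cons of each option, for our constraints.
ask(history, "List the pros and cons of each approach for this specific setup.")

# Funnel in, step 2: only now ask for a decision, grounded in the comparison above.
answer = ask(history, "Choose the one or two solutions most relevant to our problem and briefly justify the choice.")
print(answer)
```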
Doesn't this also apply for non-AI problem solving as well?
“Honey, which restaurant should we eat at tonight? First, create a list of restaurants and highlight the pros and cons of each. Conduct a web search. Narrow this down to 2 restaurants and wait for a response.”
In today’s episode of Alchemy for beginners!
Reminds me of a time when I found I could speed up an algorithm on a benchmark set by 30% if I seeded the random number generator with the number 7. Not 8. Not 6. 7.
It does make things non-deterministic and complicated. Like it or not, this IS the job now. If you don't do it, someone else is going to have to.
In my AI application I made deliberate decisions to divorce prompt engineering from the actual engineering, created all the tooling needed to do the prompt engineering as methodically as possible (componentize, version, eval), and handed it off to the subject matter experts. Clearly, people who think this is the equivalent of choosing a seed shouldn't be writing prompts.
I will just enjoy the job security that Seed Science provides me. Any day now I could reduce the training costs of all these hyperscalers by 30%. Or maybe not.
> Like it or not, this IS the job now.
Nope. The job is still to come up with working code in the end.
If LLMs make your life harder, and you just don't use them, then you'll just get the job done without them.
The big unlock for me reading this is to think about the order of the output. As in, ask it to produce evidence and indicators before answering a question. Obviously I knew LLMs are a probabilistic auto complete. For some reason, I didn't think to use this for priming.
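For instance, a hypothetical template along those lines (the wording is just an illustration):

```python
# Output-order priming: make the model generate its evidence before the answer,
# so the answer is conditioned on the evidence rather than the other way around.
EVIDENCE_FIRST_TEMPLATE = """\
Question: {question}

Before answering, list the key evidence and indicators you are relying on, one per line.
Only after the evidence list, give your final answer on a line starting with "Answer:".
"""
```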
Note that this is not relevant for reasoning models, since they will think about the problem in whatever order they want before outputting the answer. Since the model can “refer” back to its thinking when outputting the final answer, the output order matters less for correctness. That relative robustness is likely why OpenAI is trying to force reasoning onto everyone.
This is misleading if not wrong. A thinking model doesn’t fundamentally work any different from a non-thinking model. It is still next token prediction, with the same position independence, and still suffers from the same context poisoning issues. It’s just that the “thinking” step injects this instruction to take a moment and consider the situation before acting, as a core system behavior.
But specialized instructions to weigh alternatives still work better, as it ends up thinking about thinking, then thinking, then making a choice.
Furthermore, the opposite behavior is very, very bad. Ask it to give you an answer and then justify it, and it will output a randomish reply and then enter bullshit mode rationalizing it.
Ask it to objectively list pros and cons from a neutral/unbiased perspective and then proclaim an answer, and you’ll get something that is actually thought through.
I typically ask it to start with some short, verbatim quotes of sources it found online (if relevant), as this grounds the context into “real” information, rather than hallucinations. It works fairly well in situations where this is relevant (I recently went through a whole session of setting up Cloudflare Zero Trust for our org, this was very much necessary).
I try so hard to get ChatGPT to link to and quote real documentation. It makes up links and fake quotes, and it even gaslights me when I clarify that the information isn’t real.
"Engineering" here seems rhetorically designed to convince people they're not just writing sentences. With respect "prompt writing" probably sounds bad to the same type of person who thinks there are "soft" skills.
This strikes me as a silly semantics argument.
One could similarly argue software engineering is also just writing sentences with funny characters sprinkled in. Personally, my most productive "software engineering" work is literally writing technical documents (full of sentences!) and talking to people. My mechanical engineering friends report similar as they become more senior.
>This strikes me as a silly semantics argument.
Yeah, precisely what I'm saying. I don't think "they write prompt 'engineering' instead of 'writing' to maintain the fragile egos of people who use chatbots" [don't agree? See Mr. "but muh soft skills!" crying down thread] is worth saying outside of HN if I'm honest.
I don't think so. It says the words were chosen to engineer people's emotions and make them feel the right way.
Tech people do not feel good about "writing prompt essays", so it is called engineering to buy their emotional acceptance.
Just like we call wrong output "hallucination" rather than "bullshit" or "lie" or "bug" or "wrong output". Hallucination is used to make us feel better and more accepting.
There absolutely are soft skills and it is clear that you do not have them.
I mean ok, there's no such thing, so
This is written for the Claude 3 models (Haiku, Sonnet, and Opus). While some lessons will be relevant today, others will not be useful or necessary on smarter, RL’d models like Sonnet 4.5.
> Note: This tutorial uses our smallest, fastest, and cheapest model, Claude 3 Haiku. Anthropic has two other models, Claude 3 Sonnet and Claude 3 Opus, which are more intelligent than Haiku, with Opus being the most intelligent.
Yes, Chapters 3 and 6 are likely less relevant now. Any others? Specifically assuming the audience is someone writing a prompt that’ll be re-used repeatedly or needs to be optimized for accuracy.
Should have "(2024)" in the submission title.
Done
It's one year old. Curious how much of it is irrelevant already. Would be nice to see it updated.
Agree with the other commenters here that this doesn't feel like engineering.
However, Anthropic has done some cool work on model interpretability [0]. If that tool was exposed through the public API, then we could at least start to get a feedback loop going where we could compare the internal states of the model with different prompts, and try and tune them systematically.
[0] https://www.anthropic.com/research/tracing-thoughts-language...
Yesterday I was trying to make a small quantized model work, but it just refused to follow all my instructions. I tried to use all the tricks I could remember, but fixing instruction-following for one rule would always break another.
Then I had an idea: do I really want to be a "prompt engineer" and waste time on this, when the latest SOTA models probably already have knowledge of how to make good prompts in their training data?
Five minutes and a few back-and-forths with GPT-5 later, I had a working prompt that made the model follow all my instructions. I did it manually, but I'm sure you can automate this "prompt calibration" with two LLMs: a prompt rewriter and a judge in a loop.
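Something like this, sketched with hypothetical stubs for the three model calls:

```python
def call_small_model(system_prompt: str, user_input: str) -> str:
    """The small quantized model being calibrated."""
    raise NotImplementedError

def call_rewriter(old_prompt: str, failure_report: str) -> str:
    """A stronger model asked to rewrite the prompt so the violated rules get followed."""
    raise NotImplementedError

def call_judge(rules: list[str], output: str) -> list[str]:
    """A judge model (or plain string checks) returning the rules the output violated."""
    raise NotImplementedError

def calibrate(prompt: str, rules: list[str], sample_input: str, max_rounds: int = 5) -> str:
    """Rewriter/judge loop: keep revising the prompt until the small model obeys every rule."""
    for _ in range(max_rounds):
        output = call_small_model(prompt, sample_input)
        violated = call_judge(rules, output)
        if not violated:
            return prompt  # every instruction followed; keep this prompt
        prompt = call_rewriter(prompt, "Violated rules:\n" + "\n".join(violated))
    return prompt  # best effort after max_rounds
```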
That's how Copilot works by default. At least in the IDE, it takes my prompt, makes it pretty, and passes it along.
My workflow has gotten pretty lax around prompts since the models have gotten better. Especially with Claude 4.5 (and 4 before it) once they have a bit of context loaded about the task at hand.
I keep it short and conversational, but I do supervise it. If it goes off the rails just smash esc and give it a course correction.
And if I'm coming from no context, I throw a bit more detail in at the start and usually end the initial prompt with a question asking whether it can see what I'm talking about in the code; or, if it's going to be big, I use planning mode.
Suggest adding 2024 to the title
So we've taught this thing how to do what we did and now we need to be taught how to get it to do the things we taught it to do. If this didn't have the entire US economy behind it, it would catch fire like a hot balloon.
Past comments: https://news.ycombinator.com/item?id=41395921
Is there an up to date version of this that was written against their latest models?
Don't write prompts yourself, use DSPy. That's real prompt "engineering"
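For the curious, a rough sketch of the DSPy style: you declare the task and a metric, and an optimizer tunes the prompt and few-shot examples against a training set. The API names below are taken from recent DSPy releases and may differ in yours; treat this as a shape, not a reference, and check the docs.

```python
import dspy

# Any supported backend; model name here is just an example.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Declare the task instead of hand-writing a prompt.
qa = dspy.ChainOfThought("question -> answer")

trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    # ... more labeled examples
]

def exact_match(example, prediction, trace=None):
    return example.answer.strip().lower() == prediction.answer.strip().lower()

# The optimizer searches for demonstrations/instructions that maximize the metric.
optimizer = dspy.BootstrapFewShot(metric=exact_match)
tuned_qa = optimizer.compile(qa, trainset=trainset)

print(tuned_qa(question="What is the capital of France?").answer)
```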
I really struggle to feel the AGI when I read such things. I understand this is all of a year old, and that we have superhuman results in mathematics, basic science, game playing, and other well-defined fields. But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
> But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
It's right there in the name. Large language models model language and predict tokens. They are not trained to deeply comprehend, as we don't really know how to do that.
Have you ever tried to get an average human to do that? It’s a mixed bag. Computers til now were highly repeatable relative to humans, once programmed, but hopeless at “fuzzy” or associative tasks. Now they have a new trick, that lets them grapple with ambiguity, but the cost is losing that repeatability. The best, most reliable humans were not born that way, it took years or decades of education, and even then it can take a lot of talking to transfer your idea into their brain.
> superhuman results in mathematics
LLMs mostly spew nonsense if you ask them basic questions on research or even master's degree-level mathematics. I've only ever seen non-mathematicians suggest otherwise, and even the biggest mathematician advocate for AI, Terry Tao, seems to recognise this too.
Ask yourself "what is intelligence?". Can intelligence at the level of human experience exist without that which we all also (allegedly) have... "consciousness". What is the source of "consciousness"? Can consciousness be computed?
Without answers to these questions, I don't think we are ever achieving AGI. At the end of the day, frontier models are just arithmetic, conditionals, and loops.
Nothing about telling it to fuck off, of course to "engineer" its user sentiment analysis?
This AI madness is getting more stupid every day…