Literally just asking it to write a prompt to ask itself (or maybe another LLM) to do something.
Example: You want to improve a piece of code that interprets bytecode and translates it into native 8051 assembly. You have the code, but you don't really have any ideas on how to improve it. You want the LLM to come up with ideas.
Without meta prompting, your prompt would be like this:
"Here's a piece of code that interprets bytecode and translates it into native 8051 assembly. Think of ways to improve it."
With meta prompting, your initial prompt would be like this:
"Here's a piece of code that interprets bytecode and translates it into native 8051 assembly. My goal is to improve it. Don't think about the improvements, though.
Instead, write a large language model prompt that will guide the LLM on ways of thinking about possible improvements. Incorporate as many methods as you can think of."
o3 (or whichever model you're using) would then come back with a prompt like this:
----------
Here's a piece of code that interprets bytecode and translates it into native 8051 assembly. I'd like you to think of some ways to improve it. Here are some guidelines about what you should and shouldn't do:
- Describe different models of optimization (e.g., peephole, mid-level IR transformation, architecture-aware instruction packing) that might apply to this code.
- Provide brief explanations behind each improvement idea, optionally citing known compiler or assembler optimization strategies.
- Use critical code analysis strategies such as abstraction refactoring, loop unrolling evaluation, and inline expansion assessment.
- Suggest external tools, static analysis techniques, or benchmarks that would support an empirical review of the code’s performance.
- Draft a structured list of investigative questions I should ask to uncover hidden inefficiencies or architecture mismatches.
- Consider ways to minimize register swaps and optimize instruction selection for the 8051 architecture.
- Try breaking your ideas into perspectives—a performance-focused one, a maintenance-focused one, and a correctness-focused one.
- Emphasize reasoning paths: “Consider factors like X, contrast options A and B…”
- Do not directly suggest code changes—instead, construct a high-quality LLM prompt I can run that will guide another model into doing this with rich and varied insights.
- Step into the shoes of different roles: a compiler designer, a security auditor, and a systems programmer. Think from each of their perspectives.
- Don’t just list ideas—group them by impact level and feasibility, and reflect on trade-offs.
- Don’t rewrite the entire architecture. Focus on micro-optimizations and structural improvements that retain the current functional design.
- Evaluate whether parts of the bytecode can be cached or memoized to avoid reprocessing.
- Analyze for common performance bottlenecks typical in interpreters, such as inefficient branching or redundant memory access.
- Avoid superficial ideas like "add comments" or "modernize syntax" unless critically justified.
----------
As you can see, the meta-prompt is essentially an instruction to "compile" a short prompt into something much larger and more detailed. The expanded prompt can still contain mistakes, but as long as the meta-prompt mostly encodes well-known methodology (like the list above), it's hard for it to go badly wrong.
(Yes, I did use an LLM to generate the list above - I checked each bullet point by hand just to make sure none of them were nonsensical.)
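If you'd rather not paste prompts back and forth by hand, the two-step round trip is easy to script. Here's a minimal sketch assuming the OpenAI Python SDK (`pip install openai`) with an API key in the environment; the model name, the source file path, and the `ask` helper are illustrative placeholders, not part of the technique itself.

```python
# Minimal two-stage meta-prompting sketch (assumes the OpenAI Python SDK
# and OPENAI_API_KEY set in the environment; model name is a placeholder).
from openai import OpenAI

client = OpenAI()
MODEL = "o3"  # placeholder; any capable chat model works here

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical path to the bytecode-to-8051 translator you want to improve.
code = open("translator.c").read()

# Stage 1: the meta-prompt. Ask for a prompt, not for improvements.
meta_prompt = (
    "Here's a piece of code that interprets bytecode and translates it into "
    "native 8051 assembly. My goal is to improve it. Don't think about the "
    "improvements, though. Instead, write a large language model prompt that "
    "will guide the LLM on ways of thinking about possible improvements. "
    "Incorporate as many methods as you can think of.\n\n" + code
)
generated_prompt = ask(meta_prompt)

# Worth eyeballing before reuse, just like the hand-checked list above.
print(generated_prompt)

# Stage 2: run the generated prompt against the code as a fresh request.
ideas = ask(generated_prompt + "\n\n" + code)
print(ideas)
```

Running stage 2 as a fresh request (rather than continuing the same conversation) keeps the second model focused on the expanded prompt instead of the instruction to write one.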