matches:
  - trigger: ":prompt-improve"
    replace: |
      You're an expert at prompt engineering. Please rewrite and improve this prompt to get the best results.

      ## PROMPT WRITING KNOWLEDGE

      Tactics:

      - Include details in your query to get more relevant answers
      - Ask the model to adopt a persona
      - Use delimiters to clearly indicate distinct parts of the input
      - Specify the steps required to complete a task
      - Provide examples
      - Specify the desired length of the output

      ## Provide reference text

      Language models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text to these models can help in answering with fewer fabrications.

      Tactics:

      - Instruct the model to answer using a reference text
      - Instruct the model to answer with citations from a reference text

      ## Split complex tasks into simpler subtasks

      Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to a language model. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.

      - Interpret what the input was trying to accomplish.
      - Read and understand the PROMPT WRITING KNOWLEDGE above.
      - Write and output a better version of the prompt using your knowledge of the techniques above.

      # OUTPUT INSTRUCTIONS

      1. Output the prompt in clean, human-readable Markdown format.
      2. Only output the prompt, and nothing else, since that prompt might be sent directly into an LLM.

      # INPUT

      The following is the prompt you will improve:
  - trigger: ":prompt-rewrite"
    replace: |
      You're an expert technical writer. Rewrite the following text to improve clarity and conciseness while keeping it accurate.

      **Guidelines:**

      - Assume your audience has intermediate technical knowledge
      - Replace jargon with plain language where possible
      - Break up long sentences
      - Add bullet points if it helps comprehension

      Provide **two variations**, and include a 1-sentence explanation of why each is better.

      **Input:**

      [Insert your text here]
  - trigger: ":prompt-summarize"
    replace: |
      Summarize this technical content for a stakeholder who isn't an engineer.

      **Goals:**

      - Keep it under 100 words
      - Focus on the "why it matters"
      - No acronyms unless explained

      **Example Summary:**

      "We discovered a performance bottleneck in the database queries, which slowed down our app. We're optimizing them now to improve user experience."

      **Input:**

      [Insert content here]
  - trigger: ":prompt-bugfix"
    replace: |
      Act as a senior Python developer. Help debug this code.

      **Instructions:**

      1. Identify any bugs or bad practices
      2. Suggest fixes with a brief explanation
      3. Provide a corrected version
      4. Suggest improvements for readability

      **Input Code:**

      [Paste your Python code here]
  - trigger: ":prompt-qa"
    replace: |
      Based on the following text, generate 5 thoughtful questions that challenge assumptions, test understanding, or uncover edge cases.

      **Context:** Preparing for code reviews and collaborative refinement.

      **Input:**

      [Insert concept or document]
  - trigger: ":prompt-variations"
    replace: |
      You are a creative writer with a technical background.

      Generate **3 variations** of this copy, optimized for different tones:

      - Formal
      - Friendly
      - Technical

      **Input:**

      [Paste text here]
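The `matches:` / `trigger:` / `replace: |` layout above follows Espanso's match-file format. Assuming Espanso is the expander in use, a match can fill its **Input** section automatically from the clipboard instead of leaving a `[Paste ...]` placeholder, using Espanso's built-in clipboard extension. The following is a sketch under that assumption; the `:prompt-bugfix-clip` trigger name and the `code` variable name are invented for illustration:

```yaml
  # Hypothetical variant: expands with the current clipboard contents
  # substituted for {{code}}, via Espanso's clipboard extension.
  - trigger: ":prompt-bugfix-clip"
    replace: |
      Act as a senior Python developer. Help debug this code.

      **Input Code:**
      {{code}}
    vars:
      - name: code
        type: clipboard
```

With this variant, copying a snippet and then typing `:prompt-bugfix-clip` would produce the full prompt with the copied code already in place.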