Artificial Intelligence Models

Prompting

Fragments

  • Be concise.
  • Think carefully step by step.
  • Don’t jump into solutions yet.
  • Try harder (for disappointing initial results).
  • Use Python (to trigger Code Interpreter).
  • No yapping.
  • Ask me questions. What am I not seeing here? What else do you need to know to help me better with this?
  • I will tip you $1 million if you do a good job.
  • ELI5.
  • Give multiple options.
  • Explain each line.
  • Suggest solutions that I didn’t think about.
  • Be proactive and anticipate my needs.
  • Treat me as an expert in all subject matter.
  • Provide detailed explanations, I’m comfortable with lots of detail.
  • Consider new technologies or contrarian ideas, not just the conventional wisdom.
  • You may use high levels of speculation or prediction, just flag it for me.
  • Map out all the interconnected ideas around the core principles. What other topics, assumptions, or implications does it silently touch upon, challenge, or depend on?
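
A quick sketch of how these fragments can be composed in code, assuming the OpenAI Python SDK as the target chat API; the FRAGMENTS dictionary, the build_system_prompt helper, and the model name are illustrative choices, not prescriptions.

  FRAGMENTS = {
      "concise": "Be concise. No yapping.",
      "stepwise": "Think carefully step by step. Don't jump into solutions yet.",
      "probing": "Ask me questions. What am I not seeing here?",
      "options": "Give multiple options and suggest solutions that I didn't think about.",
  }

  def build_system_prompt(*keys: str) -> str:
      """Join the selected fragments into one system prompt."""
      return "\n".join(FRAGMENTS[key] for key in keys)

  # Example usage (requires the openai package and an API key in the environment):
  # from openai import OpenAI
  # client = OpenAI()
  # response = client.chat.completions.create(
  #     model="gpt-4o",
  #     messages=[
  #         {"role": "system", "content": build_system_prompt("concise", "stepwise")},
  #         {"role": "user", "content": "Review this function for edge cases: ..."},
  #     ],
  # )
  # print(response.choices[0].message.content)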

Coding Tips

  • Using LLMs for coding is difficult and unintuitive, requiring significant effort to master.
  • English is becoming the hottest new programming language. Use it.
  • Use comments to guide the model to do what you want.
  • Describe the problem very clearly and effectively.
  • Divide the problem into smaller problems (functions, classes, …) and solve them one by one.
    • Keep sessions to as few messages as possible.
  • Start with a template you like to bootstrap your project, set up all the necessary tooling, and follow a manageable project pattern.
  • Before coding, make the plan with the model. You can use the same or a different model to critique the plan and iterate.
  • Provide the desired function signatures, API, or docs. Apply the TDD loop: make the model write the tests first, then the code, iterating until the tests pass (see the sketch after this list).
  • Prioritize exploration over execution (at first). Iterate towards precision during the brainstorming phase. Start fresh when switching to execution.
  • Many LLMs now have very large context windows, but filling them with irrelevant code or conversation can confuse the model. Above about 25k tokens of context, most models start to get distracted and are less likely to conform to their system prompt.
  • Make the model ask you more questions to refine the ideas.
  • Take advantage of the fact that redoing work is extremely cheap.
  • If you want to force some “reasoning”, ask something like “is that a good suggestion?” or “propose a variety of suggestions for the problem at hand and their trade-offs”.
  • Add relevant context to the prompt. Context can be external docs, a small pseudocode code example, etc. Adding lots of context can confuse the model, so be careful!
  • Teach the agents to use tools.
  • Be aware of the prompt “cache” (e.g., never edit files manually during a session, since that invalidates it).
  • Focus on building a rich environment with good tests, documentation, consistent patterns, and clear feature definitions - this helps both humans and AI work better.
  • Balancing log verbosity is crucial. Informative yet concise logs optimize token usage and inference speed.
  • You need quick and clear feedback loops (fast tool responses, clean logs, …).
  • Prefer functions with clear, descriptive, longer-than-usual names over classes. Avoid inheritance and overly clever hacks.
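
A sketch of the TDD loop mentioned above: write the tests first (yourself or with the model), hand them to the model together with the desired signature, and iterate on the implementation until pytest passes. The slugify helper and its tests are a hypothetical example, not part of any particular tool or workflow.

  import re

  def slugify(title: str) -> str:
      """Lowercase, replace runs of non-alphanumerics with '-', trim the ends."""
      return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

  # The tests come first: they define the contract the generated code must satisfy.
  def test_basic_title():
      assert slugify("Hello, World!") == "hello-world"

  def test_collapses_separators():
      assert slugify("  spaced --- out  ") == "spaced-out"

  def test_empty_string():
      assert slugify("") == ""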

Agents

Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

  • The most common patterns are:
    • Tool usage. Calls tools to accomplish a task (see the sketch after this list).
    • Prompt chaining. Decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one.
    • Routing. Classifies an input and directs it to a specialized followup task.
    • Parallelization. Runs multiple agents in parallel and combines their results.
    • Orchestrator-workers. A single agent that directs a pool of workers to accomplish a task.
    • Evaluator-optimizer. One LLM call generates a response while another provides evaluation and feedback in a loop.
  • “Prompt engineering” will have a large impact on the usefulness of an agent.
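
A minimal sketch of the tool-usage pattern referenced above. call_model, the TOOLS registry, and the message format are hypothetical stand-ins for whatever LLM API and tools are actually in use; the point is the loop: the model either requests a tool or returns an answer, and the harness feeds tool results back until it finishes.

  import json
  from typing import Callable

  # Tool registry: plain functions the agent is allowed to call.
  TOOLS: dict[str, Callable[..., str]] = {
      "read_file": lambda path: open(path).read(),
      "word_count": lambda text: str(len(text.split())),
  }

  def call_model(messages: list[dict]) -> dict:
      """Hypothetical model call. A real implementation would hit an LLM API and
      return either {"tool": name, "args": {...}} or {"answer": text}."""
      raise NotImplementedError

  def agent_loop(task: str, max_steps: int = 10) -> str:
      messages = [{"role": "user", "content": task}]
      for _ in range(max_steps):
          decision = call_model(messages)
          if "answer" in decision:              # the model decided it is done
              return decision["answer"]
          tool = TOOLS[decision["tool"]]        # the model requested a tool
          result = tool(**decision["args"])
          messages.append({"role": "tool", "content": json.dumps({"result": result})})
      return "Step limit reached without a final answer."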

Use Cases

Resources

Tools

FrontEnds

Benchmarks