LLM Friendly Projects

Sat May 24 2025

Everyone who is using LLMs and “Agents” (a.k.a. LLMs using tools in a loop) to code is trying to figure out what works and what doesn’t. This is far from trivial given the stochastic nature of these continuously evolving beasts. Being a good programmer doesn’t make anyone a good LLM user automatically!

I wanted to share a few things that I've been doing, and that you can do, to make your projects a better place for today's LLMs. Many of these ideas I've picked up from other people sharing their experiences.

The TL;DR of this post is really simple: you can make LLMs work better in your project by making the project more Human Friendly. That means clear documentation, communicating with specificity, keeping a log of experiments, adopting sane standards, writing clear and typed code, …

That is not new though. So, let me share what I’ve been doing specifically for our new shiny hammers, the LLMs.

Helping LLMs Help You

Before jumping into project structure specifics, let’s go over a non-exhaustive list of things that make LLMs happy:

With these basic ideas in mind, let’s see what we can do to make the most out of the current LLMs’ capabilities.

Project Structure

Pick any of your current projects or start a new one. These tips should work for any project!

Machine Learning Projects

Since I’ve been doing Machine Learning projects recently (Kaggle-style competitions), I’ve developed a few extra things I do¹ on those projects.

Conclusion

The beauty of optimizing for LLMs is that you’re really optimizing for clarity, structure, and good practices that benefit everyone. Your future self will thank you, your collaborators will understand your work faster, and your AI coding assistants will be infinitely more helpful.

You can start by picking and implementing one or two of these ideas. Hopefully, your projects get friendlier with or without LLMs in the mix.


Footnotes

  1. It’s not relevant to the project setup, but it’s worth sharing if you’re working on ML competitions. LLMs are most useful when you can exploit the asymmetry between coming up with an answer and verifying it, and you can use that to build something like a genetic algorithm on top of LLMs that iteratively improves the model: instruct them to improve the evaluation metric in a loop. Write an initial prompt, and use the scoring function as the fitness function. Let a bunch of LLM runs generate features, run the model, and score it. Then generate the next population’s prompt with an LLM by combining the best approaches and ideas.
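     The loop described above can be sketched roughly as follows. This is a toy illustration, not a real implementation: `call_llm` stands in for an actual API call, and `score` stands in for training and evaluating the model; both are placeholders I made up for the sketch.

     ```python
     import random

     def call_llm(prompt: str) -> str:
         """Placeholder for a real LLM call. A real version would send the
         prompt to an API and return generated feature-engineering code."""
         ideas = ["add_lag_features", "target_encode", "log_transform", "interaction_terms"]
         return prompt + " + " + random.choice(ideas)

     def score(candidate: str) -> float:
         """Placeholder fitness function. In a competition this would train
         the model with the candidate's features and return the metric."""
         return len(candidate.split(" + "))

     def evolve(seed_prompt: str, population_size: int = 6,
                generations: int = 3, keep: int = 2) -> str:
         # Initial population: several independent LLM runs from one prompt.
         population = [call_llm(seed_prompt) for _ in range(population_size)]
         for _ in range(generations):
             # Selection: keep the best-scoring candidates.
             best = sorted(population, key=score, reverse=True)[:keep]
             # Crossover: ask the LLM to combine the winning approaches.
             combined = "Improve on these approaches: " + " | ".join(best)
             population = best + [call_llm(combined)
                                  for _ in range(population_size - keep)]
         return max(population, key=score)

     best = evolve("Engineer features for the tabular dataset")
     ```

     The scoring function is the only ground truth in the loop; everything else can be as sloppy as LLM output tends to be, because candidates that don’t improve the metric simply don’t survive.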
