Using Aider AI for code generation

A step-by-step guide to using Aider AI for LLM-powered code generation

This post covers the use of Aider (a generative LLM-powered software engineering tool) to generate software code.

LLM tools for software development

Many developers use large language models (LLMs) in software development to boost their productivity. AI can accelerate coding through code completion and generation, review and testing, transpiling code, and generating documentation.

💡 Interested in the best LLM tools to improve software development efficiency?

Read our post: AI tools for software development

What is Aider?

Aider is a free, open-source AI-powered pair programmer that you can use to edit code in your local Git repository. It’s a command line chat tool that enables you to write and edit code using the large language model of your choice. Aider also offers an experimental UI so that you can access it in your browser.
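
For example, recent versions of Aider let you launch the experimental browser UI with a command-line flag; the flag below reflects current documentation, so check aider --help if your version differs:

aider --browser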

The tool creates a “map” (essentially a collection of signatures) of your entire repository to give the LLM an understanding of your codebase so that it can provide context-aware suggestions.

Aider works with most popular languages including Python, JavaScript, TypeScript, Java, PHP, HTML, CSS, Ruby, Rust, and more. Find the full range of supported languages here.

What LLMs does Aider support?

The tool connects to almost any LLM, but its developer claims it works best with GPT-4o and Claude 3.5 Sonnet. Whichever model you use, make sure you track the number of LLM tokens consumed, as this greatly influences the costs (see more on this below).

If you’re looking to save costs, the tool also works with a range of free AI providers including Google’s Gemini 1.5 Pro and Llama 3 70B on Groq. Cohere’s Command-R+ model works for simple coding tasks.

You can also use local models through Ollama, or any other model that can be queried via an OpenAI-compatible API. Aider maintains its own LLM leaderboards so you can check which models currently work best with Aider.

Features: how Aider can help you code

Essentially, Aider enables real-time collaboration with GPT (or other LLMs) in your CLI. It can explain your code, add new features, find and fix bugs, refactor code, add test cases, and update the documentation - and all of that across multiple files if necessary.

You can paste web links to documentation or GitHub issues into the Aider chat, and it will automatically scrape that web page and use it as an additional knowledge resource for the LLM.
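
For example, you can paste a URL straight into the chat, or use the /web command available in recent Aider versions (the URL below is just a placeholder):

/web https://example.com/library/api-docs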

One neat thing about Aider is that it applies edits directly to your source files and automatically creates Git commits with sensible commit messages, so you can easily undo any changes it has made (this feature can be turned off).

Prompt caching (for models that support it, including Anthropic’s Claude models) can be very helpful in controlling LLM costs when working with Aider. It lets you provide the model with a large amount of context once, at the start of a session, and then simply refer back to that cached input in subsequent prompts. Anthropic claims this can reduce costs by up to 90%!
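
In recent Aider versions, prompt caching can be enabled with a command-line flag; the sketch below assumes an Anthropic model, so confirm the flag and the exact model name with aider --help and your provider’s docs:

aider --cache-prompts --model claude-3-5-sonnet-20240620 main.py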

How to start using Aider

Installing Aider

Installing Aider is super easy:

pip install aider-chat

If you don’t want to do a local install, Aider can be run in a Docker container:

docker pull paulgauthier/aider
docker run -it --volume $(pwd):/app paulgauthier/aider --openai-api-key $OPENAI_API_KEY [...other aider args...]

You can also use Aider in GitHub Codespaces.

Creating a configuration file and configuring an LLM

Aider can be configured using command-line arguments, a YAML config file, or variables set in your shell environment (or in a .env file). Not all configuration options are available via the YAML file, so we recommend starting out with a .env file if you want a single location containing your entire setup.

There you can configure general Aider options and store API keys and other settings for the models you’re using. To get started, you should at least provide a valid API key and specify an LLM model. Place the .env file in your home directory, the root of your Git repository, or the current directory, or use the following parameter to specify a location:

--env-file <filename>

You’ll find a sample .env file in Aider’s documentation.
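
As a rough sketch, a minimal .env might look like the following (the API key is a placeholder, and the AIDER_-prefixed variable should mirror the corresponding command-line option, so double-check the names against the sample file):

# LLM provider API key (placeholder value)
ANTHROPIC_API_KEY=sk-ant-your-key-here
# default model Aider should use
AIDER_MODEL=claude-3-5-sonnet-20240620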

Using Aider to write software code

Run the aider command with the source files you want to edit, or name files that don’t exist yet to have Aider create them. Pro tip: only add the files that actually need editing to the chat session, not any extras! Aider will automatically pull in content from related files to give the LLM the context it needs to help with your coding.
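
For example, to start a session on two existing files and have Aider create a new test file (the file names are purely illustrative):

aider main.py utils.py tests/test_main.py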

From here on, you can just use natural language prompts to ask Aider for code changes in your files. There’s a wide range of commands you can use, including:

  • /add: Add files to the chat so it can edit them or review them in detail
  • /drop: Remove files from the chat session to free up context space
  • /commit: Commit edits to the repo made outside the chat (commit message optional)
  • /diff: Display the diff of the last aider commit
  • /lint: Lint and fix provided files or in-chat files if none provided
  • /model: Switch to a new LLM
  • /tokens: Report on the number of tokens used by the current chat context
  • /undo: Undo the last git commit if it was done by aider
  • /voice: Record and transcribe voice input

Source: https://aider.chat/docs/usage/commands.html

The last command is especially interesting as it allows you to code using your voice. You can also add images to the chat, and use a range of keybindings to make working with Aider faster. Additionally, you can point Aider to a CONVENTIONS.md file that defines your coding conventions in Markdown, and Aider will pass these conventions on to the LLM currently in use.
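
For instance, assuming your conventions live in a CONVENTIONS.md at the repository root, you could load them as read-only context like this (check the conventions page in Aider’s docs if the --read flag differs in your version):

aider --read CONVENTIONS.md main.py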

You can also configure Aider to automatically run a linter or a test suite on the changes made by the LLM to verify everything still works. Aider will even send any identified errors back to the LLM so it gets a chance to fix them.
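
As a sketch, assuming a Python project that uses pytest, such a setup might look like this (flag names per recent Aider versions; confirm with aider --help):

aider --auto-test --test-cmd "pytest -q" main.py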

You’ll find a variety of tutorial videos in Aider’s documentation to help you get started.

Controlling costs

We’d say the biggest drawback of Aider is that you have to track and optimize the costs yourself. There is no flat subscription fee (as there is with GitHub Copilot, for example); instead, it’s your responsibility to select which LLM to use and to use the LLM’s context sparingly.

To reduce the costs during programming sessions, you should:

  • Select an LLM with a good cost-performance ratio (i.e. select the cheapest model that is just “good enough” for the complexity of your code). Our own DevQualityEval benchmark can help you make this decision.
  • Make sure to use a model that supports a diff-like editing format (basically the model responds only with the parts of the code that it wants to change, and not the whole file). Check out the Aider leaderboards for a table of which LLMs work best with which edit format.
  • Make frequent use of the Aider chat commands /drop <files…> and /clear to prune the context by removing files you know are no longer needed. You can use /tokens to check how much context the chat is currently consuming (see the example below).
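
A typical mid-session cleanup in the Aider chat might look like this (the file name is illustrative):

/drop tests/legacy_test.py
/clear
/tokens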

If you don’t want to use a cloud-based LLM provider (for privacy or cost reasons) but happen to have a decent GPU lying around, you can pair Aider with Ollama to run LLMs entirely locally and create an offline AI coding buddy.
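
A minimal local setup might look like the following, assuming Ollama is already installed and the model fits on your GPU (the model name and port are Ollama defaults; adjust as needed):

ollama pull llama3
export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama/llama3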

How good is Aider at generating software code?

Its developer claims Aider is a top scorer on the SWE-bench Lite benchmark. At the time of writing, Aider (specifically, Aider + GPT-4o & Claude 3 Opus) sits 14th on this leaderboard.

However, on 2 June 2024, Aider’s developer reported that the tool scored 18.9% on the main SWE-bench benchmark and 26.3% on SWE-bench Lite (pass@1). These scores beat the previous leader, the Amazon Q Developer Agent, and were state-of-the-art results at the time.

From our own experience, Aider is a very solid AI pair programmer that can do everything you would expect from an AI coding assistant. It is simple to use yet very powerful and provides some clever features like automatic web scraping, convention specifications, verifying LLM changes with linters or tests, and voice coding.

Using it from the CLI or the web might seem cumbersome at first, but this simplicity can also be beneficial (e.g. if you work mostly in the terminal anyway), and it lets the Aider development community focus on the AI aspect rather than on maintaining countless editor plugins.

Additionally, there are already third-party editor plugins being developed by the community. It is also exciting to follow along with the development process, as new updates with new features and fixes are usually published every week.

While the initial setup is trivial, it pays off to learn more about the “editing format” that Aider uses to communicate changes with the LLM, and to keep an eye on how many files are currently in the LLM’s context. These two factors have a great impact on how many tokens the input and output consume and, in turn, on how much the currently selected LLM costs to use.
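
If you want to experiment with this, the edit format can be overridden on the command line; Aider normally picks a sensible default per model, so treat the following only as a sketch for deliberate experimentation:

aider --edit-format diff main.py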

Summary: Using Aider to generate software code

Overall, Aider is a very solid choice if you’re looking for LLM-based code generation. It is easy to use, offers powerful features, and can be configured to suit your needs. And the option for voice coding rocks 🦾 The only real drawback is that you have to keep an eye on the costs.

💸 Find out how to control LLM costs

Read our post: Managing the costs of LLMs

Aider’s results look promising (see Aider’s SWE-bench scores above), making it a top contender for a genuinely useful AI tool for software development. Naturally, as with all things AI-related, your experience may vary depending on your codebase, programming language, and the complexity of your application.

If you want to give it a shot, check out Aider’s website to get started. Make sure you also check our latest DevQualityEval deep dive to find the best model for generating software code!

2024-08-09