# forge-llm
Generate Pull Request descriptions for Forge using LLM providers through the llm package.
## Overview

`forge-llm` is an Emacs package that integrates Large Language Models (LLMs) with Forge, a Magit interface to GitHub and GitLab forges. This package helps you generate high-quality Pull Request descriptions based on your git diff and repository PR templates.
Main features:

- Automatically finds and uses your repository's PR template
- Generates PR descriptions based on git diffs between branches
- Seamless integration with Forge's PR creation workflow
- Supports any LLM provider supported by the `llm` package
- Streams LLM responses in real time
## Installation

### Using MELPA (Recommended)

The easiest way to install `forge-llm` is via MELPA. Ensure you have MELPA configured in your Emacs setup (it's included by default in many distributions like Doom Emacs and Spacemacs).

```elisp
(use-package forge-llm
  :ensure t
  :after forge
  :config
  (forge-llm-setup))
```
### Using straight.el with use-package

If you use `straight.el` to manage your packages, it can install `forge-llm` directly from MELPA. Ensure MELPA is included in your `straight-recipe-repositories` or `straight-recipe-sources`.

```elisp
(use-package forge-llm
  ;; straight.el will fetch this from MELPA if :ensure t is used
  ;; and straight.el is configured as the handler for use-package.
  :ensure t
  :after forge
  :config
  (forge-llm-setup))
```
### Using Doom Emacs

#### Basic Setup

1. Add the following to your `packages.el` (ensure MELPA is enabled in your Doom configuration, which is usually the default):

   ```elisp
   (package! forge-llm)
   (package! llm) ; Dependency
   ```

2. Add the following somewhere in your `config.el`:

   ```elisp
   ;; Load and set up forge-llm after forge is loaded
   (after! forge
     (require 'forge-llm)
     (forge-llm-setup))

   ;; Configure your LLM provider (example using OpenAI)
   ;; Place this somewhere appropriate in your config.el
   (require 'llm-openai) ; Or your preferred LLM provider
   (setq forge-llm-llm-provider (make-llm-openai :key "YOUR-OPENAI-KEY")) ; Replace with your key/provider setup
   ```

3. Run `doom sync` to install the package.
#### Keybindings

The package automatically sets up Doom Emacs keybindings when Doom is detected:

- `SPC m g` - Generate PR description in a separate buffer
- `SPC m p` - Generate PR description at point
- `SPC m t` - Insert PR template at point

No additional configuration is needed for these keybindings to work.
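If you prefer different keys, the defaults can be overridden with Doom's `map!` macro. Here is a minimal sketch, assuming the commands are bound in `forge-post-mode-map` (check the package source for the exact keymap name):

```elisp
;; Hypothetical rebinding sketch for Doom Emacs; the keymap name and
;; key choices here are assumptions, not package defaults.
(after! forge-llm
  (map! :map forge-post-mode-map
        :localleader
        "G" #'forge-llm-generate-pr-description
        "P" #'forge-llm-generate-pr-description-at-point))
```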
### Manual installation

Clone the repository:

```shell
git clone https://gitlab.com/rogs/forge-llm.git ~/.emacs.d/site-lisp/forge-llm
```

Add to your Emacs configuration:

```elisp
(add-to-list 'load-path "~/.emacs.d/site-lisp/forge-llm")
(require 'forge-llm)
(forge-llm-setup)
```
## Setting up LLM providers

`forge-llm` depends on the `llm` package for LLM integration. You'll need to set up at least one LLM provider. Please refer to the llm documentation for detailed instructions.

Some of the providers supported by the `llm` package include:
- OpenAI
- Anthropic (Claude)
- Google (Gemini, Vertex AI)
- Azure OpenAI
- GitHub Models
- Ollama (for local models like Llama, Mistral, etc.)
- GPT4All (for local models)
- llama.cpp (via OpenAI compatible endpoint)
- Deepseek
- Generic OpenAI-compatible endpoints
See the llm documentation for the complete list and specific setup steps.
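For instance, a local Ollama setup might look like the following. This is a sketch assuming Ollama is running on its default port and that a model such as `llama3` has already been pulled; adjust the model name to whatever you have installed:

```elisp
;; Sketch: local Ollama provider (assumes a running Ollama instance
;; and that the "llama3" model has been pulled locally).
(require 'llm-ollama)
(setq forge-llm-llm-provider
      (make-llm-ollama :chat-model "llama3"))
```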
### Example: OpenAI provider

First, create an OpenAI API key. Then configure the `llm` OpenAI provider:

```elisp
(require 'llm-openai)
(setq forge-llm-llm-provider (make-llm-openai :key "YOUR-OPENAI-KEY"))
```
### Example: Anthropic provider

To use Claude models from Anthropic:

```elisp
(require 'llm-claude)
(setq forge-llm-llm-provider (make-llm-claude :key "YOUR-ANTHROPIC-KEY" :chat-model "claude-3-7-sonnet-20250219"))
```
### Using auth-source for API keys (recommended)

For better security, use Emacs `auth-source` to store your API keys:

```elisp
(use-package llm
  :ensure t
  :config
  (setq llm-warn-on-nonfree nil))

(require 'llm-openai)

(use-package forge-llm
  :ensure t
  :after (forge llm)
  :custom
  (forge-llm-llm-provider
   (make-llm-openai
    :key (auth-source-pick-first-password
          :host "api.openai.com"
          :user "apikey")))
  :config
  (forge-llm-setup))
```

Content of `.authinfo` or `.authinfo.gpg`:

```
machine api.openai.com login apikey password YOUR-API-KEY-HERE
```
## Usage

After setting up `forge-llm`, the following commands will be available specifically within Forge's pull request creation buffer (which runs in `forge-post-mode`):

| Key binding | Command | Description |
|---|---|---|
| `C-c C-l g` | `forge-llm-generate-pr-description` | Generate a PR description (output to separate buffer) |
| `C-c C-l p` | `forge-llm-generate-pr-description-at-point` | Generate a PR description at the current point |
| `C-c C-l t` | `forge-llm-insert-template-at-point` | Insert the PR template at the current point |
| `SPC m g` (Doom Emacs) | `forge-llm-generate-pr-description` | Generate a PR description (output to separate buffer) |
| `SPC m p` (Doom Emacs) | `forge-llm-generate-pr-description-at-point` | Generate a PR description at the current point |
| `SPC m t` (Doom Emacs) | `forge-llm-insert-template-at-point` | Insert the PR template at the current point |
*Demo: Generate PR description in a new buffer*

*Demo: Generate PR description at point*
Workflow:

1. Create a PR using Forge as normal (`forge-create-pullreq`)
2. In the PR creation buffer, position your cursor where you want to insert the PR description
3. Press `C-c C-l p` to generate and insert a PR description based on your changes
4. Edit the description as needed and submit the PR
Canceling Generation:

If you need to cancel an in-progress LLM request:

```
M-x forge-llm-cancel-request
```
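If you find yourself canceling often, you could bind the command to a key yourself. A hypothetical binding (the key choice is an assumption, not one of the package's defaults):

```elisp
;; Hypothetical convenience binding; C-c C-l k is not a package default.
(with-eval-after-load 'forge-llm
  (define-key forge-post-mode-map (kbd "C-c C-l k")
              #'forge-llm-cancel-request))
```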
## Customization

You can customize various aspects of `forge-llm` through the following variables:

### PR Template Configuration

- `forge-llm-pr-template-paths` - List of possible paths for PR/MR templates relative to the repo root:

  ```elisp
  (setq forge-llm-pr-template-paths
        '(".github/PULL_REQUEST_TEMPLATE.md"
          ".github/pull_request_template.md"
          "docs/pull_request_template.md"
          ".gitlab/merge_request_templates/default.md"))
  ```

- `forge-llm-default-pr-template` - Default PR template to use when no template is found in the repository
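As an illustration, a fallback template could be set like this (the template text itself is just a hypothetical example, not the package's built-in default):

```elisp
;; Hypothetical fallback template used when no repo template is found.
(setq forge-llm-default-pr-template
      "## Summary\n\n## Changes\n\n## Testing\n")
```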
### LLM Provider Configuration

- `forge-llm-llm-provider` - LLM provider to use. Can be a provider object or a function that returns a provider object (see the llm package documentation for how to create provider objects):

  ```elisp
  (setq forge-llm-llm-provider (make-llm-openai :key "YOUR-API-KEY"))
  ```

- `forge-llm-temperature` - Temperature for LLM responses (`nil` for the provider default):

  ```elisp
  (setq forge-llm-temperature 0.7)
  ```

- `forge-llm-max-tokens` - Maximum number of tokens for LLM responses (`nil` for the provider default):

  ```elisp
  (setq forge-llm-max-tokens 1024)
  ```

- `forge-llm-max-diff-size` - Maximum size in characters for git diffs sent to the LLM (`nil` for no truncation):

  ```elisp
  ;; Default is 50000
  (setq forge-llm-max-diff-size 100000) ; Increase to 100K characters
  ;; Or disable truncation completely
  (setq forge-llm-max-diff-size nil)
  ```
### Prompt Configuration

- `forge-llm-pr-description-prompt` - Prompt used to generate a PR description with the LLM. This prompt is formatted with the PR template and git diff. You can customize it to match your project's PR description style:

  ```elisp
  (setq forge-llm-pr-description-prompt
        "Generate a PR description for the following changes.\n\nPR template:\n%s\n\nGit diff:\n```\n%s\n```\n\nPlease generate a PR description that follows our team's style.")
  ```
## Troubleshooting

- If you're having issues with the LLM provider, you can enable debug logging for `llm` by setting `llm-log` to `t`.
- Check the `*forge-llm-debug-prompt*` buffer to see the exact prompt being sent to the LLM.
- Check the `*forge-llm-output*` buffer to see the raw output from the LLM.
Common Issues:

- **Error: "No LLM provider configured"**
  - Make sure you've set `forge-llm-llm-provider` to a valid provider object.
  - Ensure your API key is correct.
- **Error: "Failed to generate git diff"**
  - Ensure you're in a repository with valid head and base branches.
  - Check if the current directory is within a git repository.
- **PR generation is too slow**
  - Consider using a faster model (like GPT-3.5-turbo instead of GPT-4).
  - Reduce `forge-llm-max-tokens` to limit the response size.
- **PR template not found**
  - Check if your PR template is in one of the paths listed in `forge-llm-pr-template-paths`.
  - Add your custom template path if needed.
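If your template lives in a non-standard location, the path list can be extended rather than replaced. A sketch (the path shown is just an example):

```elisp
;; Example: teach forge-llm about a custom template location.
(add-to-list 'forge-llm-pr-template-paths
             ".gitlab/merge_request_templates/Default.md")
```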
## TO-DO

- Add more examples and use cases
## Contributing

Contributions are welcome! Please feel free to submit a Merge Request.

### Development Setup

1. Clone the repository:

   ```shell
   git clone https://gitlab.com/rogs/forge-llm.git
   cd forge-llm
   ```

2. Install dependencies for development:
   - Ensure you have the forge and llm packages installed
## Acknowledgments

This project was heavily inspired by magit-gptcommit; check it out! That package works very well alongside forge-llm.

Another huge inspiration was xenodium, with their Emacs package chatgpt-shell.
## License

This project is licensed under the GNU General Public License version 3 - see the LICENSE file for details.