[tutorial] Enhancing Commit Messages with commitollama: A Guide for VSCode and Local LLM Integration
commitollama
commitollama is an alternative to GitHub Copilot's commit message generator, powered by open-source models such as Llama 3, Gemma, Mistral, and Phi-3. For projects where confidentiality is a concern, commitollama allows you to use a local Large Language Model (LLM), so your diffs never leave your machine.
How to use
Thanks to its contributors, commitollama can be directly integrated into VSCode by installing the extension and setting up Ollama.
- Install the extension in VSCode (a command-line sketch follows this list).
- Install Ollama to integrate the LLM.
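If you prefer the terminal, the extension can also be installed with VSCode's `code` CLI. The marketplace identifier below is an assumption; verify it in the Extensions view if the command fails.

```sh
# Install the commitollama extension from the VSCode marketplace.
# NOTE: the extension ID is assumed to be "anjerodev.commitollama";
# check the marketplace listing if this ID does not resolve.
code --install-extension anjerodev.commitollama
```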
Installing Ollama
Run the following command to install Ollama:
```sh
curl -fsSL https://ollama.com/install.sh | sh
```
After installation, you can run Ollama using:
```sh
ollama
```
This will display the usage help, including the list of available commands (`serve`, `pull`, `run`, `list`, `rm`, and so on):

```
Usage:
  ollama [command]
```
Download the Phi-3 model (3.8B parameters) by running:

```sh
ollama pull phi3:3.8b
```
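To confirm the model works before wiring it into VSCode, you can run a one-off prompt against it (the prompt text here is only an illustration):

```sh
# Run a single prompt against the downloaded model; Ollama loads the
# model, prints the completion, and returns to the shell.
ollama run phi3:3.8b "Write a one-line commit message for a typo fix in the README"
```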
Start the Ollama service using:
```sh
ollama serve
```
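By default the server listens on 127.0.0.1:11434, and a plain GET request to the root path answers with a short status string, so you can check it from another terminal:

```sh
# A healthy Ollama instance responds with "Ollama is running".
curl http://127.0.0.1:11434/
```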
If you encounter the error message `Error: listen tcp 127.0.0.1:11434: bind: address already in use`, another Ollama instance is already bound to port 11434; on Linux this is usually the systemd service that the install script registers.
To restart Ollama, stop the current service and relaunch it:

```sh
sudo systemctl stop ollama.service
ollama serve
```
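If you are unsure what is holding the port, a quick inspection helps; both commands below are standard Linux tooling, not part of Ollama itself:

```sh
# Show the state of the background service installed by the script.
systemctl status ollama.service

# List the process currently listening on Ollama's default port.
sudo lsof -i :11434
```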
To prevent a model from being deleted after downloading, refer to the discussion of this behavior in the Ollama repository.
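You can verify at any time which models are still present on disk:

```sh
# List locally available models with their size and modification time.
ollama list
```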
Setting Up VSCode
- After installing the extension, configure it to use the model you pulled; a settings sketch follows this list.
- Press the extension's button in the Source Control view to generate the commit message automatically.
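As a sketch of what the configuration can look like in `settings.json`: the keys below follow the extension's naming as best I can tell, but treat them as assumptions and confirm the exact names in the extension's settings UI.

```jsonc
// Hypothetical settings.json excerpt -- verify the key names against the
// commitollama settings page in VSCode before relying on them.
{
  // Tell the extension to use a custom Ollama model...
  "commitollama.model": "custom",
  // ...and name the model pulled earlier.
  "commitollama.custom.model": "phi3:3.8b",
  // Ollama's default local endpoint.
  "commitollama.custom.endpoint": "http://127.0.0.1:11434"
}
```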