[tutorial] Enhancing Commit Messages with commitollama: A Guide for VSCode and Local LLM Integration
Introduction
This article introduces commitollama, an alternative to GitHub Copilot that generates commit messages with local LLMs, so diffs from confidential projects never leave your machine. It covers installing the commitollama extension in VSCode and the setup steps needed to start using it.
Quick Start
How to use
- Install the extension in VSCode.
- Install Ollama to run the local model.
Installing Ollama
Run the following command to install Ollama:
```shell
curl -fsSL https://ollama.com/install.sh | sh
```
After installation, you can run Ollama using:
```shell
ollama
```
This will display usage information and a list of available commands (output truncated here):

```text
Usage:
```
Download the Phi3 model (3.8b) by running:
```shell
ollama pull phi3:3.8b
```
Start the Ollama service using:
```shell
ollama serve
```
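Once `ollama serve` is running, it exposes a REST API on port 11434, which is what tools like commitollama talk to. The sketch below shows the general idea in Python; the prompt wording and function names are my own, not the extension's actual implementation, and only the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields come from Ollama's API.

```python
import json
import urllib.request


def build_commit_prompt(diff: str) -> str:
    """Turn a git diff into an instruction for the model (illustrative wording)."""
    return (
        "Write a concise commit message for the following git diff. "
        "Respond with the commit message only.\n\n" + diff
    )


def generate_commit_message(diff: str, model: str = "phi3:3.8b",
                            host: str = "http://127.0.0.1:11434") -> str:
    """POST to Ollama's /api/generate with streaming disabled; returns the model's reply."""
    payload = json.dumps({
        "model": model,
        "prompt": build_commit_prompt(diff),
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

With the server running, `generate_commit_message("diff --git a/app.py b/app.py\n...")` would return Phi3's suggested message.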
If you encounter the error `Error: listen tcp 127.0.0.1:11434: bind: address already in use`, another Ollama instance (often the systemd service set up by the install script) is already listening on that port; a solution is discussed here.
To restart Ollama, first stop the service that is already running, then relaunch it:

```shell
sudo systemctl stop ollama.service
ollama serve
```
To prevent the model from being deleted after downloading, refer to this discussion here.
Setting Up VSCode
- After installing the extension, choose the model to use for commit message generation in the extension's settings.
- Click the extension's generate button in the Source Control panel to create the commit message automatically.
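If you pulled a model other than the extension's default, it can be selected in VSCode's `settings.json`. The setting keys below are assumptions for illustration; check the commitollama extension page for the exact names:

```jsonc
{
  // Assumed commitollama settings — verify the key names on the extension's page.
  "commitollama.model": "custom",
  "commitollama.custom.model": "phi3:3.8b"
}
```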
Recap
- commitollama is a privacy-focused alternative to GitHub Copilot for generating commit messages.
- The tool leverages open-source LLMs like Llama, Mistral, and Phi3.
- Easy integration with VSCode through a simple extension installation process.
- Users can easily retrieve models, run services, and generate commit messages efficiently.