This is a simple AI-powered product description generator built with FastAPI and connected to a local LLM via Ollama. It allows users to generate marketing-style product descriptions in different tones using models like Mistral, entirely offline and free.
- Input product details + select tone (e.g., friendly, technical, sales)
- Generates text using a local AI model (see the sketch after this list)
- Works with Ollama: no API tokens, no usage limits
- Built with FastAPI + HTML/CSS
- Includes unit tests
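In practice, generating a description comes down to building a prompt from the product details and the selected tone, then calling Ollama's local HTTP API. The snippet below is a minimal sketch of that flow, not the project's actual code: the helper name, prompt wording, and use of requests are illustrative, while the /api/generate endpoint and default port 11434 are standard Ollama.

```python
import requests

OLLAMA_URL = "http://localhost:11434"   # Ollama's default local address
MODEL_NAME = "mistral"

def generate_description(product: str, details: str, tone: str) -> str:
    """Build a tone-specific prompt and ask the local model (illustrative helper)."""
    prompt = (
        f"Write a {tone} marketing description for the product '{product}'.\n"
        f"Key details: {details}"
    )
    response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": MODEL_NAME, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    # With stream=False, Ollama returns one JSON object with the full text in "response"
    return response.json()["response"]

print(generate_description("Thermo Mug 450 ml", "keeps drinks hot for 6 hours", "friendly"))
```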
- Python 3.10+
- FastAPI + Uvicorn
- Ollama (with the mistral model)
- HTML + CSS
- Jinja2, python-dotenv
- Pytest
- Clone
git clone https://github.com/kat-git-hub/ProductDescriber.git
cd ProductDescriber
- Install dependencies with Poetry
poetry install
- Ollama: start & pull a model
ollama serve # if not running as a service
ollama pull mistral # or: ollama pull llama3.2:3b
ollama list # verify the model is available
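To double-check that the HTTP API the app will talk to is reachable and the model is actually installed (the same information `ollama list` prints, just over the API), here is a quick smoke test; it is a one-off check, not part of the project:

```python
import requests

# Ollama's /api/tags lists locally installed models (same info as `ollama list`)
tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
models = [m["name"] for m in tags.get("models", [])]
print(models)
print("mistral installed:", any(name.startswith("mistral") for name in models))
```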
- Start the FastAPI app
poetry run uvicorn productdescriber.main:app --reload
or, if you use the Makefile:
make runserver
Then open http://127.0.0.1:8000 in your browser.
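If you prefer to check from a script rather than the browser, the root path should answer with the HTML form (assuming the form lives at /, as the URL above suggests):

```python
import requests

# Expect the HTML form at the root path once uvicorn is running
resp = requests.get("http://127.0.0.1:8000/", timeout=5)
print(resp.status_code)                                      # 200 when the app is up
print("text/html" in resp.headers.get("content-type", ""))   # True if the form page is served
```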
- Run tests
poetry run pytest -vv
or:
make test
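The repository ships its own tests; for orientation, a minimal test in the same spirit could look like the sketch below. The import path comes from the uvicorn command above, while the root-path route and the assertions are assumptions, not copied from the suite.

```python
from fastapi.testclient import TestClient

from productdescriber.main import app  # same app target as the uvicorn command

client = TestClient(app)

def test_form_page_loads():
    # Assumes the input form is served at "/", as the browser URL above suggests
    response = client.get("/")
    assert response.status_code == 200
    assert "text/html" in response.headers["content-type"]
```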
- Build the image:
docker build -t productdescriber:latest .
or:
make docker-build
- Run the container (macOS/Windows — Ollama on the host):
docker run --rm --name productdescriber -p 8000:8000 \
-e OLLAMA_URL=http://host.docker.internal:11434 \
-e MODEL_NAME=mistral \
productdescriber:latest
or:
make docker-run
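OLLAMA_URL and MODEL_NAME point the containerized app at the Ollama server running on the host (host.docker.internal resolves to the host on Docker Desktop). With python-dotenv in the stack, the app presumably reads them roughly like this; the defaults here are assumptions, not taken from the repo:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # pick up a local .env file when running outside Docker

# Same variables the docker run command passes with -e; defaults are assumptions
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
MODEL_NAME = os.getenv("MODEL_NAME", "mistral")
```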
- Open: http://localhost:8000
- Stop the container:
docker stop productdescriber
or:
make docker-stop