Run multiple open-source large language models (the same model or different ones), such as Llama 2, Mistral, and Gemma, in parallel, powered by Ollama.
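Under the hood, querying several models at once can be sketched as concurrent requests against Ollama's HTTP API. This is a minimal illustration, not this repository's actual code: `http://localhost:11434/api/generate` is Ollama's default endpoint, and the helper names are made up for the example.

```typescript
// Build the JSON body for Ollama's /api/generate endpoint.
// (Illustrative helper, not part of this repository.)
function buildGenerateRequest(model: string, prompt: string) {
  return { model, prompt, stream: false };
}

// Send the same prompt to several models concurrently and
// collect one response object per model, in order.
async function askModels(models: string[], prompt: string) {
  return Promise.all(
    models.map((model) =>
      fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(buildGenerateRequest(model, prompt)),
      }).then((res) => res.json())
    )
  );
}
```

Because the requests are issued with `Promise.all`, the total latency is roughly that of the slowest model rather than the sum of all of them.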
You need Ollama installed on your computer.
Press `cmd + k` to open the chat prompt (`alt + k` on Windows).
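In the frontend, such a shortcut could be wired up with a small predicate on keyboard events. This is a hypothetical sketch, not the repo's actual handler; `openChatPrompt` is an assumed function name.

```typescript
// Returns true when the chat-prompt shortcut is pressed:
// cmd + k on macOS (metaKey), alt + k on Windows/Linux (altKey).
// (Illustrative sketch, not this repository's actual handler.)
function isChatShortcut(e: { key: string; metaKey: boolean; altKey: boolean }): boolean {
  return e.key.toLowerCase() === "k" && (e.metaKey || e.altKey);
}

// Usage in the browser (openChatPrompt is a hypothetical function):
// window.addEventListener("keydown", (e) => {
//   if (isChatShortcut(e)) {
//     e.preventDefault();
//     openChatPrompt();
//   }
// });
```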
Backend:

```sh
cd backend
bun install
bun run index.ts
```

Frontend:

```sh
cd frontend
bun install
bun run dev
```

Running in Docker containers (frontend + backend + Ollama)
On Windows:

```sh
docker compose -f docker-compose.windows.yml up
```

On Linux/macOS:

```sh
docker compose -f docker-compose.unix.yml up
```

The frontend is available at http://localhost:5173.
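The compose files themselves are not shown here; a minimal sketch of what such a three-service setup might look like follows. Service names, ports, and the image tag are assumptions for illustration, not the repository's actual files.

```yaml
# Hypothetical sketch of a compose file for this stack,
# not the repository's actual docker-compose.*.yml.
services:
  ollama:
    image: ollama/ollama      # serves the models on port 11434
    ports:
      - "11434:11434"
  backend:
    build: ./backend          # the Bun backend
    depends_on:
      - ollama
  frontend:
    build: ./frontend         # the dev server exposed to the host
    ports:
      - "5173:5173"
```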
⚠️ Still a work in progress.