Local AI Setup
Sunflare can use AI for session summaries, terminal assist, and chat — all running locally, nothing sent to the cloud.
Two Options
1. Local AI Server
Run your own models on your Mac or PC using tools like LM Studio or Ollama.
LM Studio example:
- Install LM Studio and load a model
- Start the local server (default: http://localhost:1234)
- In Sunflare: Settings → AI → Provider → OpenAI-Compatible
- URL: http://localhost:1234/v1
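The steps above boil down to a single HTTP endpoint. A minimal sketch of the request any OpenAI-compatible client sends to the LM Studio server, assuming the default port; the `"local-model"` name is a placeholder, since LM Studio answers with whichever model you loaded:

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:1234/v1"):
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# To actually call the server once it is running:
# req = build_chat_request("Summarize my last session")
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

No API key is involved here: the request never leaves your machine.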
2. Any OpenAI-Compatible API
Sunflare works with any API that speaks the OpenAI chat completions protocol:
- Ollama — http://localhost:11434/v1
- vLLM — your deployment URL
- Together AI — https://api.together.xyz/v1 + API key
- OpenAI — https://api.openai.com/v1 + API key
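Because all of these speak the same protocol, only two things vary per provider: the base URL and, for hosted APIs, a bearer token in the `Authorization` header. A sketch of that split; the provider table mirrors the list above but is illustrative, not Sunflare's internal configuration:

```python
# Base URLs from the list above; only hosted APIs need a key.
PROVIDERS = {
    "lmstudio": {"base_url": "http://localhost:1234/v1", "needs_key": False},
    "ollama":   {"base_url": "http://localhost:11434/v1", "needs_key": False},
    "together": {"base_url": "https://api.together.xyz/v1", "needs_key": True},
    "openai":   {"base_url": "https://api.openai.com/v1", "needs_key": True},
}

def headers_for(provider, api_key=None):
    """Headers for an OpenAI-compatible chat completions call."""
    headers = {"Content-Type": "application/json"}
    if PROVIDERS[provider]["needs_key"]:
        if not api_key:
            raise ValueError(f"{provider} requires an API key")
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```

This is why switching Sunflare between a local server and a hosted one is just a settings change rather than a different integration.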
Access Your LLM From Anywhere
Running a local LLM at home but using Sunflare from a different network? Use Tailscale to securely access your home LLM from anywhere.
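With Tailscale, the only change on the Sunflare side is the host in the base URL: swap `localhost` for your machine's tailnet name. A tiny sketch; the hostname and port below are made-up examples, and yours will come from your own tailnet:

```python
def tailnet_url(host, port=1234):
    """Base URL for an LLM server reachable over your tailnet.

    `host` is your machine's Tailscale name (e.g. a MagicDNS name);
    the default port matches LM Studio's default above.
    """
    return f"http://{host}:{port}/v1"

# Example (hypothetical hostname):
# tailnet_url("my-mac.tail1234.ts.net")
```

Paste the resulting URL into Settings → AI in place of the localhost one.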