Feature/localllms #143

Merged
ahmad-ajmal merged 6 commits into dev from feature/localllms on Mar 27, 2026

Feature/localllms#143
ahmad-ajmal merged 6 commits intodevfrom
feature/localllms

Conversation

@zfoong (Collaborator) commented on Mar 27, 2026

What and Why
Previously, users had to set up their own Ollama instance and connect to it through an endpoint. This update adds support for local Ollama auto-setup, detection, and configuration, along with a new flow and interface on the model settings page.

Items/features added

  • Support for local Ollama, with detection, automatic installation, and model configuration
  • Added new LLM providers (MiniMax, DeepSeek, Moonshot)

The 5 states the setup screen handles:

1. Checking (automatic)
When the user arrives at this step, the app immediately and silently checks if Ollama is running on the machine in the background. The user sees a loading spinner — no action required.
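The silent background check could look roughly like this. This is a sketch, not the PR's actual code: the function name is hypothetical, and the assumption is that the app probes Ollama's public HTTP API (`/api/version` is a documented Ollama route).

```python
# Minimal sketch of the silent background check (assumption: the app
# probes Ollama's HTTP API; /api/version is a public Ollama route, and
# the function name here is hypothetical).
import urllib.error
import urllib.request

def is_ollama_running(base_url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/version",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused, timeout, or malformed URL: treat as "not running".
        return False
```

The UI can run this off the main thread and drive the spinner from its result, so the user never has to press anything at this step.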

2. Already running
If Ollama is already running (even if it was started for a completely different project), the app detects it automatically. The user sees a URL field pre-filled with http://localhost:11434 (or whichever port it auto-detects) and a Test Connection button. They click Test, the app verifies it works, and the Next button unlocks. No API key, no manual setup.

3. Installed but not running
If Ollama is installed on the machine but the server is not currently active, the user sees a Start Ollama button. Clicking it starts the Ollama server in the background automatically. The UI then moves to the "running" state above.
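The "Start Ollama" action can be sketched as launching `ollama serve` as a detached subprocess; assuming the binary is discoverable on PATH (the helper name is hypothetical, not the PR's code):

```python
# Sketch of the "Start Ollama" button (assumption: the server is
# launched as a background `ollama serve` subprocess found on PATH).
import shutil
import subprocess
from typing import Optional

def start_ollama_server() -> Optional[subprocess.Popen]:
    """Launch `ollama serve` in the background, or return None if the
    binary is not on PATH (i.e. the UI should show the install state)."""
    exe = shutil.which("ollama")
    if exe is None:
        return None
    return subprocess.Popen([exe, "serve"],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
```

After launching, the app can poll the same health check used in state 1 until the server answers, then switch the UI to the "running" state.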

4. Not installed at all
If Ollama is not installed on the machine, the user sees an Install Ollama button. Clicking it triggers an automatic installation process — the app tries to use the system package manager (winget on Windows, or a script on Mac/Linux). A live scrolling log shows the installation progress in real time. After installation completes, the app automatically starts Ollama and connects.
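The install path could be sketched as below. Assumptions to verify: the winget package id `Ollama.Ollama` and the install-script URL are what Ollama publishes for Windows and Linux respectively; the helper names are hypothetical, and the PR may structure this differently.

```python
# Sketch of the per-OS automatic install with a live scrolling log
# (assumption: winget id "Ollama.Ollama" and the ollama.com install
# script are the published install paths; names here are hypothetical).
import subprocess
from typing import Callable, List

def install_command(platform: str) -> List[str]:
    """Pick the installer invocation for the given sys.platform value."""
    if platform.startswith("win"):
        return ["winget", "install", "--id", "Ollama.Ollama", "-e"]
    # macOS and Linux: Ollama's official install script
    return ["sh", "-c", "curl -fsSL https://ollama.com/install.sh | sh"]

def run_install(cmd: List[str], on_line: Callable[[str], None]) -> int:
    """Run the installer, streaming each output line to the live log UI."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    assert proc.stdout is not None
    for line in proc.stdout:
        on_line(line.rstrip())  # feed the scrolling log in real time
    return proc.wait()
```

Merging stderr into stdout keeps the log a single ordered stream, which is what a "live scrolling log" widget wants to consume.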

5. Custom port support
If the user is running Ollama on a non-default port (e.g. 11435 instead of 11434), they can simply edit the URL in the text field and click Test. Whatever URL they test successfully is what gets saved and used going forward.
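Since whatever URL the user successfully tests is what gets saved, it helps to normalize the input first. A sketch, assuming the app accepts plain http(s) URLs and strips paths and trailing slashes (the function name is hypothetical):

```python
# Sketch of custom-port handling: validate and normalize the URL the
# user typed before testing and persisting it (name is hypothetical).
from urllib.parse import urlparse

def normalize_endpoint(raw: str) -> str:
    """Validate a user-edited Ollama URL; strip paths/trailing slashes."""
    parsed = urlparse(raw.strip())
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"not a valid endpoint: {raw!r}")
    return f"{parsed.scheme}://{parsed.netloc}"
```

The normalized form (e.g. `http://localhost:11435`) is then passed to the same connection test as the default port, so custom ports need no special casing downstream.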
Added the MiniMax, DeepSeek, and Moonshot providers to the UI.
I have made the following updates:

1.	UI fixes on the install/test stage: the flow now re-attempts the automated setup if a step fails during installation.
2.	Model handling improved: automatic model selection, download, and status tracking are now implemented. No need for the Ollama chat box; the CraftBot UI will handle model selection and installation automatically.
3.	Added support for 30+ models and user guidance.
As a fallback, I also added the option for users to choose models manually from the "Model Configuration" section if needed. This gives them the flexibility to select the right model for their needs.
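The automatic download and status tracking can be sketched as parsing the stream Ollama's pull endpoint returns. Assumption: progress arrives as newline-delimited JSON with `status`, `total`, and `completed` fields (Ollama's documented `/api/pull` response shape); the function name is hypothetical.

```python
# Sketch of model download status tracking (assumption: newline-
# delimited JSON from Ollama's /api/pull stream carrying 'status',
# 'total', and 'completed' fields; the function name is hypothetical).
import json

def pull_progress(ndjson_line: str) -> str:
    """Turn one streamed pull-status line into a UI progress string."""
    msg = json.loads(ndjson_line)
    status = msg.get("status", "")
    total, done = msg.get("total"), msg.get("completed")
    if total and done is not None:
        return f"{status}: {100 * done // total}%"
    return status
```

Feeding each streamed line through a formatter like this is enough to drive a per-model progress bar without the user ever opening the Ollama chat box.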
@zfoong force-pushed the feature/localllms branch from 14ad3dd to 1ab3e65 on March 27, 2026 12:24
@ahmad-ajmal merged commit 1af0d42 into dev on Mar 27, 2026
