Merged
The 5 states the setup screen handles:

1. **Checking (automatic).** When the user arrives at this step, the app immediately and silently checks in the background whether Ollama is running on the machine. The user sees a loading spinner; no action is required.
2. **Already running.** If Ollama is already running (even if it was started for a completely different project), the app detects it automatically. The user sees a URL field pre-filled with http://localhost:11434 (or whichever port it auto-detects) and a Test Connection button. They click Test, the app verifies the connection, and the Next button unlocks. No API key, no manual setup.
3. **Installed but not running.** If Ollama is installed on the machine but the server is not currently active, the user sees a Start Ollama button. Clicking it starts the Ollama server in the background, and the UI then moves to the "running" state above.
4. **Not installed at all.** If Ollama is not installed, the user sees an Install Ollama button. Clicking it triggers an automatic installation: the app tries the system package manager (winget on Windows, or an install script on Mac/Linux). A live scrolling log shows installation progress in real time. After installation completes, the app automatically starts Ollama and connects.
5. **Custom port support.** If the user runs Ollama on a non-default port (e.g. 11435 instead of 11434), they can simply edit the URL in the text field and click Test. Whatever URL tests successfully is what gets saved and used going forward.
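The detection logic behind states 1–4 can be sketched as a pure mapping from probe results to a screen state. This is a hypothetical sketch, not CraftBot's actual code: the names `SetupState`, `ProbeResult`, and `resolveSetupState` are illustrative, assuming the app probes the HTTP endpoint and the `ollama` binary separately.

```typescript
// Illustrative sketch: map two background probes (is the server reachable?
// is the binary installed?) to one of the setup-screen states.
type SetupState =
  | "checking"              // probe still in flight: show spinner
  | "running"               // pre-fill URL, enable Test Connection
  | "installed_not_running" // show "Start Ollama" button
  | "not_installed";        // show "Install Ollama" button

interface ProbeResult {
  serverReachable: boolean; // e.g. GET http://localhost:11434 answered
  binaryOnPath: boolean;    // e.g. `ollama --version` exited successfully
}

function resolveSetupState(probe: ProbeResult | null): SetupState {
  if (probe === null) return "checking";
  if (probe.serverReachable) return "running";
  if (probe.binaryOnPath) return "installed_not_running";
  return "not_installed";
}
```

State 5 (custom port) does not need its own branch: re-running the probe against the user-edited URL simply lands back in `"running"` when the test succeeds.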
Added MiniMax, DeepSeek, and Moonshot to the UI.
I have made the following updates:

1. UI fixes on the install/test stage: the app now retries automation if something fails during installation.
2. Improved model handling: automatic model selection, download, and status tracking are now implemented. There is no need for the Ollama chat box; the CraftBot UI handles model selection and installation automatically.
3. Added support for 30+ models and user guidance.
As an extra fallback, I also added the option for users to choose models manually from the "Model Configuration" section if needed. This gives them the flexibility to select and use the right model for their needs.
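Automatic model selection with a manual fallback could look like the sketch below. It is an assumption-laden illustration: `PREFERRED` and `pickModel` are invented names, and the preference list is hypothetical; only the commented endpoints (`GET /api/tags` to list local models, `POST /api/pull` to download one) are real Ollama REST API routes.

```typescript
// Hypothetical preference order; CraftBot's actual 30+ model list is not shown here.
const PREFERRED = ["llama3.2", "mistral", "deepseek-r1"];

// Decide which model to use: the first preferred model already present locally,
// otherwise the top preference, flagged for download.
function pickModel(installed: string[]): { name: string; needsPull: boolean } {
  for (const name of PREFERRED) {
    // Local tags usually carry a variant suffix, e.g. "mistral:latest".
    if (installed.some((m) => m === name || m.startsWith(name + ":"))) {
      return { name, needsPull: false };
    }
  }
  return { name: PREFERRED[0], needsPull: true };
}

// Usage against a live server (not executed here):
//   const tags = await fetch(baseUrl + "/api/tags").then((r) => r.json());
//   const choice = pickModel(tags.models.map((m: { name: string }) => m.name));
//   if (choice.needsPull) {
//     await fetch(baseUrl + "/api/pull", {
//       method: "POST",
//       body: JSON.stringify({ name: choice.name }), // streams status JSON lines
//     });
//   }
```

A manual override from the "Model Configuration" section would simply bypass `pickModel` and pass the user's chosen name to the same pull path.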
What and Why
Previously, users had to set up their own Ollama instance and connect to it through an endpoint. This update adds support for automatic local Ollama setup, checking, and configuration, with a new process and interface in the model settings page.
Items/features added