This is the official implementation of the Open Source Institute-Cognitive System of Machine Intelligent Computing (OpenSI-CoSMIC) v1.0.0.
Before proceeding with the installation, ensure that the following tools are installed on your local machine:
- Docker: Required for containerized environments. You can install it by following the official Docker installation guide.
- Docker Compose: Facilitates defining and running multi-container Docker applications. You can install it by following the official Docker Compose installation guide.
The Docker installation provides the fastest way to get started with OpenSI-CoSMIC:
- Download the `docker-compose.yaml` and `start.sh` files from the official CoSMIC GitHub repository:

  ```shell
  wget https://github.com/TheOpenSI/CoSMIC/raw/production/docker-compose.yaml https://github.com/TheOpenSI/CoSMIC/blob/dev/start.sh
  ```

- Important: If you're running on a machine without an NVIDIA GPU or CUDA support, you need to modify the `docker-compose.yaml` file. Open the file and comment out the GPU resource allocation section:

  ```yaml
  # Comment out these lines if you don't have an NVIDIA GPU
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
  ```

- Open the directory containing the downloaded files in a terminal and run the following command to start the services:

  ```shell
  bash start.sh  # bash start.sh --help for details
  ```

- Install Git on your local machine if it is not already installed. You can follow the official Git installation guide.
- Clone the CoSMIC repository in your work directory:

  ```shell
  # For users using SSH on GitHub
  git clone git@github.com:TheOpenSI/CoSMIC.git

  # For users using HTTPS
  git clone https://github.com/TheOpenSI/CoSMIC.git
  ```

- Clone the Open-WebUI repository in your work directory:

  ```shell
  # For users using SSH on GitHub
  git clone git@github.com:TheOpenSI/OpenWebUI-CoSMIC.git

  # For users using HTTPS
  git clone https://github.com/TheOpenSI/OpenWebUI-CoSMIC.git
  ```

  Note: Ensure that both repositories are cloned into the same directory to maintain compatibility.
- Important: If you're running on a machine without an NVIDIA GPU or CUDA support, you need to modify the `docker-compose.yaml` file. Open the file and comment out the GPU resource allocation section:

  ```yaml
  # Comment out these lines if you don't have an NVIDIA GPU
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
  ```

- Navigate to the CoSMIC repository directory and start the services using Docker Compose:
  ```shell
  cd CoSMIC
  ```

- Now you can build from your local clone using the command below:

  ```shell
  bash start.sh --docker_build
  ```

  This will create the external volume required for PyCapsule and run the `docker compose up` or `docker compose up --build` command. For details, run `bash start.sh --help`.
- Important: During the first run, the system will automatically download the Llama3.1 model, which may take some time depending on your internet connection. You can monitor the progress by checking the Docker logs:

  ```shell
  docker compose logs -f cosmic
  ```

  Wait until you see the message `cosmic | Model Llama3.1 is available in the Ollama container.` before attempting to use the application.
The application will initialize on port 8080. To access it, open a web browser and navigate to http://localhost:8080.
You can integrate OAuth authentication into this application to enhance security and manage user access. For detailed instructions on setting up OAuth, please refer to our OAuth guide.
By default, OpenSI-CoSMIC uses SQLite as its database. However, if you prefer to use Postgres for enhanced scalability and performance, you can configure it by following these steps:
- Open the `.env` file in the root directory of the project and set the following variables:
  - `DATABASE_USER`: the username for the Postgres database.
  - `DATABASE_PASSWORD`: the password for the Postgres database.
  - `PGADMIN_USER`: the username for pgAdmin, e.g. root@root.com.
  - `PGADMIN_PASSWORD`: the password for pgAdmin.
- Once the `.env` file is configured, run the following command to start the services with Postgres:

  ```shell
  docker compose -f docker-compose.postgres.yaml up -d
  ```

  This will initialize the application with Postgres as the database backend.
Note: Configuring the `.env` file is mandatory for the Postgres setup to work correctly. Ensure all variables are properly set before starting the services.
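The variables above can be set in `.env` along these lines. The values shown are placeholders for illustration only; substitute your own credentials:

```shell
# Example .env for the Postgres setup (placeholder values -- choose your own)
DATABASE_USER=cosmic
DATABASE_PASSWORD=change-me
PGADMIN_USER=root@root.com
PGADMIN_PASSWORD=change-me
```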
The system is configured through `config.yaml`. Currently, it has five base services:
- Chess next-move prediction and analysis
- Vector database updates from text-based and document-based information
- Context retrieval through the vector database where applicable
- PyCapsule (Python code generation)
- General question answering and reasoning
Each query will be parsed by an LLM-based analyser to select the most relevant service.
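The routing step can be pictured with a toy dispatcher. Note that the real system prompts an LLM to analyse the query; the keyword matching and service names below are illustrative stand-ins, not CoSMIC's actual identifiers:

```python
# Toy stand-in for the LLM-based query analyser. The real analyser is an LLM;
# these service names and keyword lists are hypothetical, for illustration only.
SERVICE_KEYWORDS = {
    "chess": ["chess", "next move", "fen", "puzzle"],
    "vector_db_update": ["add this document", "update the database"],
    "code_generation": ["write python", "generate code"],
}

def route_query(query: str) -> str:
    """Pick the first service whose keywords appear in the query; default to QA."""
    q = query.lower()
    for service, keywords in SERVICE_KEYWORDS.items():
        if any(k in q for k in keywords):
            return service
    return "question_answering"
```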
Upper-level chess-game services include:
- Puzzle next-move prediction and analysis
- FEN generation given a sequence of moves
- Chain-of-Thought generation for next-move prediction
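As a rough illustration of the FEN-generation idea, the sketch below derives only the piece-placement field of a FEN string from a list of UCI-style moves (e.g. `e2e4`). It is not CoSMIC's actual implementation and ignores castling, en passant, promotion, and the remaining FEN fields:

```python
def apply_uci_moves(moves):
    """Return the FEN piece-placement field after applying simple UCI moves.

    Simplified sketch: handles plain piece relocation and direct captures only.
    """
    # Row 0 is rank 8, as FEN lists ranks from 8 down to 1.
    board = [list(r) for r in [
        "rnbqkbnr", "pppppppp", "........", "........",
        "........", "........", "PPPPPPPP", "RNBQKBNR"]]
    for mv in moves:
        f_file, f_rank = ord(mv[0]) - ord("a"), int(mv[1])
        t_file, t_rank = ord(mv[2]) - ord("a"), int(mv[3])
        board[8 - t_rank][t_file] = board[8 - f_rank][f_file]
        board[8 - f_rank][f_file] = "."
    ranks = []
    for row in board:
        fen_row, empty = "", 0
        for sq in row:
            if sq == ".":
                empty += 1
            else:
                if empty:
                    fen_row += str(empty)
                    empty = 0
                fen_row += sq
        if empty:
            fen_row += str(empty)
        ranks.append(fen_row)
    return "/".join(ranks)
```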
For Chatbot users, user access information, including the user ID, email, visit dates, average token length, and number of queries, is stored monthly.
- For Docker users: `/app/data/cosmic/statistic/[month]-[year].csv` in the cosmic container.
If this repository is useful for you, please cite the paper below.
```bibtex
@misc{adnan2024unleashing,
  title        = {Unleashing Artificial Cognition: Integrating Multiple AI Systems},
  author       = {Muntasir Adnan and Buddhi Gamage and Zhiwei Xu and Damith Herath and Carlos C. N. Kuhn},
  howpublished = {Australasian Conference on Information Systems},
  year         = {2024}
}
```

For technical support, please contact Carlos Kuhn, Muntasir Adnan, or Zohaib Hammad. For project support, please contact Carlos C. N. Kuhn.
We welcome contributions from the community! Whether you’re a researcher, developer, or enthusiast, there are many ways to get involved:
- Report Issues: Found a bug or have a feature request? Open an issue on our GitHub page.
- Submit Pull Requests: Contribute code by submitting pull requests. Please follow our contribution guidelines.
- Make a Donation: Support our project by making a donation here.
This code is distributed under the MIT license. If Mistral 7B v0.1, Mistral 7B Instruct v0.1, Gemma 7B, or Gemma 7B It from Hugging Face is used, please also follow the license of Hugging Face; if the API of GPT-3.5 Turbo or GPT-4o from OpenAI is used, please also follow the license of OpenAI.
This project is funded under the agreement with the ACT Government for Future Jobs Fund with Open Source Institute (OpenSI)-R01553 and NetApp Technology Alliance Agreement with OpenSI-R01657.