This repository contains the final project for the ods.ai NLP course. For details, see the project report.
- Clone the repository:

  ```shell
  git clone https://github.com/TeoSable/llm-mind-maps
  cd llm-mind-maps
  ```

- (Recommended) Create a clean virtual environment for the project:

  ```shell
  python3 -m venv venv
  ```

  Activate the environment:

  Linux/Mac:

  ```shell
  source venv/bin/activate
  ```

  Windows:

  ```shell
  venv\Scripts\Activate.ps1
  ```
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

It is highly recommended to use a GPU, since the project runs LLM inference locally. Keep in mind that the default model for the experiment, Qwen2.5-3B-Instruct, requires at least 8 GB of GPU memory to run smoothly. You can check CUDA availability with:

```shell
python -c "import torch; print(torch.cuda.is_available())"
```

If needed, visit the official PyTorch website for a guide on installing PyTorch with CUDA enabled.
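The 8 GB figure can be sanity-checked with back-of-the-envelope arithmetic: model weights alone take roughly `parameters × bytes-per-parameter`, and activations plus the KV cache add a couple of gigabytes on top. A minimal sketch (these are rough estimates, not measured numbers):

```python
# Rough VRAM needed just to hold the model weights, ignoring
# activations and KV cache, which add further overhead.
def weights_gb(n_params_billion, bytes_per_param):
    # 1e9 params * bytes / 1e9 bytes-per-GB = n_params_billion * bytes_per_param
    return n_params_billion * bytes_per_param

print(weights_gb(3, 2))    # Qwen2.5-3B in fp16/bf16: ~6 GB of weights alone
print(weights_gb(4, 0.5))  # a 4B model quantized to 4-bit: ~2 GB of weights
```

This is also why the Qwen3-4B run below passes `--quantization 4bit`: at 4 bits per parameter the larger model's weights fit in far less memory than the unquantized 3B model's.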
For a quick run on three documents from the development subset:

```shell
python run.py \
    --data-dir data \
    --split dev \
    --model Qwen/Qwen2.5-3B-Instruct \
    --max-files 3
```

For the full test split experiment with 1-shot Qwen2.5-3B-Instruct:
```shell
python run.py \
    --data-dir data \
    --split test \
    --model Qwen/Qwen2.5-3B-Instruct \
    --few-shot-count 1 \
    --output-json outputs/qwen25_3b_test_1shot.json
```

For the full test split experiment with 1-shot Qwen3-4B-Instruct:
```shell
python run.py \
    --data-dir data \
    --split test \
    --model Qwen/Qwen3-4B-Instruct-2507 \
    --quantization 4bit \
    --few-shot-count 1 \
    --output-json outputs/qwen3_4b_test_1shot.json
```

For more information on the command-line arguments of run.py, run:

```shell
python run.py --help
```
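The experiment commands above differ only in their flags, so if you want to script several runs (e.g. a sweep over models or shot counts), the invocation can be assembled programmatically. A minimal sketch — the `build_run_command` helper is hypothetical, and the flags shown are only those from the README examples, not the full `run.py` interface (see `python run.py --help`):

```python
# Hypothetical helper for scripting experiments; flags mirror the
# README examples above, not the complete run.py interface.
def build_run_command(split, model, **extra_flags):
    cmd = ["python", "run.py",
           "--data-dir", "data",
           "--split", split,
           "--model", model]
    for flag, value in extra_flags.items():
        # Python keyword max_files becomes CLI flag --max-files
        cmd += ["--" + flag.replace("_", "-"), str(value)]
    return cmd

# Reproduce the quick dev run from above:
print(" ".join(build_run_command("dev", "Qwen/Qwen2.5-3B-Instruct", max_files=3)))
```

Pass the resulting list to `subprocess.run` to launch each experiment in sequence.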