SysLLMatic: Large Language Models are Software System Optimizers

About

This repository contains the artifacts for the project SysLLMatic: Large Language Models are Software System Optimizers. It includes implementation details, instructions to reproduce results, and experimental data.

Our artifact includes the following:

| Item | Description | Corresponding content in the paper | Path |
| --- | --- | --- | --- |
| Pattern Catalog | The catalog including 43 performance optimization patterns | §4, Figure 2-3, Table 2 | pattern_catalog |
| Implementation | The implementation of SysLLMatic | §5, Figure 4-6 | src |
| Benchmarks | The benchmarks we used in evaluation | §6-B | humaneval, scimark, dacapo |
| Eval | The evaluation scripts and results | §7, Figure 7-15, Table 6-12 | eval |

Environment Requirement

This artifact requires a machine with the following capabilities, so that it can use RAPL (Running Average Power Limit) and read MSRs (Model-Specific Registers). A quick way to verify these requirements is sketched after the list.

  1. Hardware
     • Intel processor: machine with an Intel processor supporting RAPL (Sandy Bridge or newer).
     • MSR support: machine must allow access to MSRs.
  2. Operating System
     • Linux-based OS (e.g., Ubuntu 16.04+).
     • Linux kernel version 3.13+ (required for RAPL support).
     • Root access: MSRs can only be accessed with root/superuser privileges.
  3. Software
     • msr-tools: install for reading MSRs:
       sudo apt-get install msr-tools
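
Before running the pipeline, you can verify RAPL and MSR support directly. This is a minimal sketch, assuming a recent Intel CPU; register 0x611 (MSR_PKG_ENERGY_STATUS) and the powercap path below may differ depending on the processor and kernel configuration.

    # Check the kernel version (3.13+ is required for RAPL)
    uname -r
    # Load the msr kernel module and try reading the package energy counter (requires root)
    sudo modprobe msr
    sudo rdmsr -p 0 0x611
    # Alternatively, check whether the kernel exposes the RAPL powercap interface
    ls /sys/class/powercap/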

Environment Setup

  1. Clone the repository:
    git clone <repository-link>
    cd <project-directory>
  2. Install the required dependencies using the Makefile
    make setup
  3. Create a .env file in the root directory and add the following (a scripted version of this step is sketched after the list):
    API_KEY=your_openai_api_key_here
    USER_PREFIX=$(pwd)
    Then source the environment file with:
    . .env
  4. Compile the performance measurement module. In the MEASURE directory, run:
    make
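
If you prefer to script step 3, the following is one possible sketch. The variable names come from the step above; note that the unquoted heredoc expands $(pwd) at file-creation time, so USER_PREFIX is stored as the current absolute path.

    # Create and load the .env file from the project root (step 3 above)
    cat > .env <<EOF
    API_KEY=your_openai_api_key_here
    USER_PREFIX=$(pwd)
    EOF
    . .env
    # Sanity check that the variables are visible to the pipeline
    echo "$USER_PREFIX"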

Running the Pipeline

  1. Run the main script from the project root (/sysllmatic). A wrapper script that batches all three benchmarks is sketched after this list.
    Run the HumanEval_CPP benchmark:
    python3 src/main.py --benchmark HumanEval --llm gpt-4o --self_optimization_step 2 --num_programs 2
    Run the SciMark benchmark:
    python3 src/main.py --benchmark SciMark --llm gpt-4o --self_optimization_step 2
    Run the DaCapo benchmark: prebuild the target application following the official DaCapoBench instructions, then run:
    python3 src/main.py --benchmark Dacapobench --llm gpt-4.1 --self_optimization_step 2 --application_name biojava
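
To batch the runs, one option is a small wrapper script. This is only a sketch that reuses the exact invocations above; the log file names are illustrative and not produced by the tool itself.

    # run_all.sh (illustrative): run the three benchmarks in sequence from the project root
    set -e
    python3 src/main.py --benchmark HumanEval --llm gpt-4o --self_optimization_step 2 --num_programs 2 2>&1 | tee humaneval.log
    python3 src/main.py --benchmark SciMark --llm gpt-4o --self_optimization_step 2 2>&1 | tee scimark.log
    # DaCapo requires the target application (e.g., biojava) to be prebuilt first
    python3 src/main.py --benchmark Dacapobench --llm gpt-4.1 --self_optimization_step 2 --application_name biojava 2>&1 | tee dacapo.log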
