This repository demonstrates how to integrate Ludii (a general game playing engine) with an LLM-based agent to make gameplay decisions.
The architecture is composed of:
- Ludii: runs the game and enforces rules.
- C-3PO: a custom AI agent implementing Ludii's `AI` interface.
- LangChain4j + Quarkus: Java frameworks used to interact with LLMs, both online (e.g., OpenAI GPT) and offline (e.g., Ollama).
- LLM: a language model used to choose moves based on the current game state.
```mermaid
flowchart LR
    LUDII["Ludii Engine (JAR)"]
    C3PO["C3PO Agent (JAR) includes LangChain4j + Quarkus"]
    LLM["LLM Backend (OpenAI, Ollama, etc.)"]
    LUDII -->|calls initAI and selectAction| C3PO
    C3PO -->|returns chosen move| LUDII
    C3PO -->|sends game information as prompt| LLM
    LLM -->|suggested move in text| C3PO
```
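The "sends game information as prompt" edge above could be sketched as follows. This is a hypothetical illustration: the helper name, prompt wording, and inputs are assumptions, since in the real agent this data would come from Ludii's `Context` and `Game` objects at runtime.

```java
import java.util.List;

// Hypothetical sketch of the kind of prompt C3PO might send to the LLM.
// The game name, board string, and move list are illustrative stand-ins
// for data that would come from Ludii's Context and Game objects.
public class PromptBuilder {

    static String buildPrompt(String gameName, String boardState, List<String> legalMoves) {
        StringBuilder sb = new StringBuilder();
        sb.append("You are playing ").append(gameName).append(".\n");
        sb.append("Current board:\n").append(boardState).append("\n");
        sb.append("Legal moves: ").append(String.join(", ", legalMoves)).append("\n");
        sb.append("Reply with exactly one of the legal moves.");
        return sb.toString();
    }

    public static void main(String[] args) {
        // Example usage with placeholder state.
        String prompt = buildPrompt("Chess", "<board representation here>",
                List.of("E2-E4", "D2-D4"));
        System.out.println(prompt);
    }
}
```

Constraining the model to reply with one of the listed legal moves keeps the response easy to map back to a Ludii `Move`.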
- Ludii starts a match and periodically calls the `selectAction()` method on the C3PO bot.
- C3PO accesses the current game state (via `Context`, `Game`, `Move`, etc.) and builds a textual prompt describing the situation.
- This prompt is passed to LangChain4j, which makes an API request to an LLM.
- The model replies with the move to play (e.g., "move from E2 to E4").
- C3PO translates this response into a valid `Move` object (or the closest match among the pseudo-legal options).
- The move is returned to Ludii.
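The "closest match" step above can be sketched with plain string matching. This is a self-contained, hypothetical example: Ludii's real `Move` objects are replaced by strings, and the edit-distance heuristic is one reasonable choice, not necessarily the one this repository uses.

```java
import java.util.List;

// Hypothetical sketch: match the LLM's free-text reply to the closest
// legal move. Ludii's real Move objects are replaced here by plain
// strings so the example is self-contained.
public class MoveMatcher {

    // Levenshtein edit distance between two strings.
    static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Pick the legal move whose textual form is closest to the reply.
    static String closestMove(String reply, List<String> legalMoves) {
        String best = legalMoves.get(0);
        int bestDist = Integer.MAX_VALUE;
        for (String move : legalMoves) {
            int dist = distance(reply.toLowerCase(), move.toLowerCase());
            if (dist < bestDist) {
                bestDist = dist;
                best = move;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> legal = List.of("E2-E4", "D2-D4", "G1-F3");
        System.out.println(closestMove("move from E2 to E4", legal));
    }
}
```

Falling back to the nearest legal move guarantees the agent always returns something Ludii will accept, even when the LLM's phrasing is loose.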
You can configure the agent to use:
- Remote models: like GPT-4 via OpenAI, Anthropic, Mistral on OpenRouter, etc.
- Local models: like Ollama, running LLMs locally (e.g., `mistral`, `llama3`, `codellama`).
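The choice between a remote and a local backend can be driven by a configuration property. The following is a minimal, self-contained sketch: the property keys mirror the `application.properties` entries in this README, but the real agent would construct a LangChain4j chat model here rather than return a description string.

```java
import java.io.StringReader;
import java.util.Properties;

// Hypothetical sketch: pick an LLM backend from configuration.
// The real agent would build a LangChain4j chat model here instead
// of returning a description string.
public class ProviderSelector {

    static String describeBackend(Properties config) {
        String provider = config.getProperty("llm.provider", "openai");
        switch (provider) {
            case "ollama":
                return "Ollama at " + config.getProperty("llm.ollama.host", "http://localhost:11434")
                        + " with model " + config.getProperty("llm.ollama.model", "llama3");
            case "openai":
                return "OpenAI (remote)";
            default:
                throw new IllegalArgumentException("Unknown provider: " + provider);
        }
    }

    public static void main(String[] args) throws Exception {
        // Example usage with an inline configuration.
        Properties config = new Properties();
        config.load(new StringReader("llm.provider=ollama\nllm.ollama.model=llama3"));
        System.out.println(describeBackend(config));
    }
}
```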
Configure the LangChain4j backend in `application.properties`:

```properties
llm.provider=openai
# or for Ollama
llm.provider=ollama
llm.ollama.model=llama3
llm.ollama.host=http://localhost:11434
```

Coming soon
- [ ]
- [ ]