NeLLCom is a framework that allows researchers to quickly implement multi-agent miniature language learning games.
In such games, agents are first trained to understand or produce predefined languages via Supervised Learning (SL); then pairs of speaking and listening agents communicate with each other while optimizing communication success via Reinforcement Learning (RL).
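The two-phase pipeline can be illustrated with a minimal toy sketch (not the paper's actual setup): single-token messages, linear agents, and illustrative hyperparameters. Phase 1 trains both agents on a predefined language with cross-entropy; phase 2 lets a speaker–listener pair interact, rewarding communication success and updating the speaker with REINFORCE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N = 5  # toy setting: number of meanings == vocabulary size

speaker = nn.Linear(N, N)   # meaning one-hot -> token logits
listener = nn.Linear(N, N)  # token one-hot -> meaning logits

meanings = torch.eye(N)
lang = torch.arange(N)      # predefined language: meaning i <-> token i

opt = torch.optim.Adam(
    list(speaker.parameters()) + list(listener.parameters()), lr=0.1
)

# Phase 1: supervised learning of the predefined language
for _ in range(100):
    opt.zero_grad()
    loss = (F.cross_entropy(speaker(meanings), lang)
            + F.cross_entropy(listener(meanings), lang))
    loss.backward()
    opt.step()

# Phase 2: the pair talks; communication success is the reward (REINFORCE)
for _ in range(100):
    opt.zero_grad()
    dist = torch.distributions.Categorical(logits=speaker(meanings))
    tokens = dist.sample()                                   # speaker utters a token
    guess_logits = listener(F.one_hot(tokens, N).float())    # listener interprets it
    reward = (guess_logits.argmax(-1) == lang).float()       # 1 if meaning recovered
    # mean-baseline REINFORCE for the speaker, cross-entropy for the listener
    speaker_loss = -(dist.log_prob(tokens) * (reward - reward.mean())).mean()
    listener_loss = F.cross_entropy(guess_logits, lang)
    (speaker_loss + listener_loss).backward()
    opt.step()

# Greedy round-trip accuracy after both phases
greedy_tokens = speaker(meanings).argmax(-1)
accuracy = (listener(F.one_hot(greedy_tokens, N).float())
            .argmax(-1) == lang).float().mean()
```

In the actual framework the agents are sequence models and the messages are multi-word utterances, but the SL-pretraining-then-RL-interaction structure is the same.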
The implementation of NeLLCom is partly based on the EGG toolkit.
More details can be found in our TACL paper, "Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off" (arXiv:2301.13083).
- Word-order/Case-marking Trade-off
Speaking Agent
- Encoder: Linear
- Decoder: GRU
Listening Agent
- Encoder: GRU
- Decoder: Linear
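The two architectures above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: all dimensions (meaning size, hidden size, vocabulary, message length) are placeholder values, and decoding is plain greedy generation from a zero BOS token.

```python
import torch
import torch.nn as nn

class Speaker(nn.Module):
    """Speaking agent: Linear encoder maps a meaning vector to the
    initial hidden state of a GRU decoder that emits an utterance."""
    def __init__(self, meaning_dim=8, hidden_dim=16, vocab_size=10, max_len=5):
        super().__init__()
        self.encoder = nn.Linear(meaning_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.max_len = max_len

    def forward(self, meaning):
        batch = meaning.size(0)
        h = torch.tanh(self.encoder(meaning)).unsqueeze(0)   # (1, B, H)
        tok = torch.zeros(batch, dtype=torch.long)           # BOS token id 0 (assumed)
        step_logits = []
        for _ in range(self.max_len):
            emb = self.embed(tok).unsqueeze(1)               # (B, 1, H)
            out, h = self.decoder(emb, h)
            logits = self.out(out.squeeze(1))                # (B, V)
            step_logits.append(logits)
            tok = logits.argmax(-1)                          # greedy decoding
        return torch.stack(step_logits, dim=1)               # (B, L, V)

class Listener(nn.Module):
    """Listening agent: GRU encoder reads the utterance; a Linear
    decoder maps its final hidden state back to a meaning vector."""
    def __init__(self, vocab_size=10, hidden_dim=16, meaning_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, meaning_dim)

    def forward(self, utterance):
        _, h = self.encoder(self.embed(utterance))           # h: (1, B, H)
        return self.decoder(h.squeeze(0))                    # (B, meaning_dim)

speaker, listener = Speaker(), Listener()
meanings = torch.randn(4, 8)
utterance_logits = speaker(meanings)                 # (4, 5, 10)
recovered = listener(utterance_logits.argmax(-1))    # (4, 8)
```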
Generally, we assume that you use PyTorch 1.1.0 or newer and Python 3.6 or newer.
- Install the EGG toolkit.
- Move to the EGG game design folder:

  cd EGG/egg/zoo

- Clone NeLLCom into the EGG game design folder:

  git clone git@github.com:Yuchen-Lian/NeLLCom.git
  cd NeLLCom

- Then run a game, e.g. the Word-order/Case-marking trade-off game:

  python -m egg.zoo.NeLLCom.train --n_epochs=60
- `data/` contains the full dataset of the predefined artificial languages used in the paper.
- `train.py` contains the actual training logic.
- `games_*.py` contain the communication pipeline of the game.
- `archs_*.py` contain the agent structure design.
- `pytorch-seq2seq/` is a git submodule containing a third-party seq2seq framework.
If you find NeLLCom useful in your research, please cite this paper:
@article{lian2023communication,
title={Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off},
author={Lian, Yuchen and Bisazza, Arianna and Verhoef, Tessa},
journal={arXiv preprint arXiv:2301.13083},
year={2023}
}
NeLLCom is licensed under MIT.