This document provides a step-by-step guide for installing Themis, which is based on Atropos and uses QEMU-Nyx and AFL++. It also walks you through the configuration required before fuzzing WordPress plugins.
Caution
The architecture was successfully tested on Ubuntu 22.04.5 and works best there. If possible, use a machine running that version; otherwise, you will run into compatibility issues.
First of all, make sure that you have all necessary tools and dependencies:
# install docker from apt
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
sudo apt-get update
sudo apt install libgtk-3-0
sudo apt install python3-pip
pip install icecream
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -a -G docker $USER
sudo apt-get install zip -y
# install compilers
sudo apt install build-essential
curl https://nim-lang.org/choosenim/init.sh -sSf | sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Do not forget to add export PATH=/home/$USER/.nimble/bin:$PATH to your .bashrc file afterwards. Reload your terminal by running source ~/.bashrc.
You should switch to 1.83.0 for compatibility:
rustup install 1.83.0
rustup default 1.83.0
Done, we can now start with the actual setup.
We strongly recommend using the first method below, as it is by far the easiest way to set up the architecture. If you build AFL++ from scratch on your own, you will most likely encounter issues depending on your operating system and its version, the installed dependencies, and much more.
Important
Make sure to use the LATEST download link for AFLplusplus.zip below. You can find the latest releases here: 10.5281/zenodo.19669019
Use our pre-compiled AFL++ build that already contains a ready VM:
# Download pre-compiled AFL++ build from Zenodo
cd ~/Themis/
wget -O AFLplusplus.zip "https://zenodo.org/records/19669020/files/AFLplusplus.zip?download=1"
unzip AFLplusplus.zip
# Install AFL++ to ~/AFLplusplus
python3 inflate_aflplusplus.py AFLplusplus/ --extract ~/AFLplusplus --clean
# File Permissions
sudo chown -R $USER ~/AFLplusplus
find ~/AFLplusplus -type d -exec chmod 775 {} \; && \
find ~/AFLplusplus -type f -exec chmod 775 {} \; && \
find ~/AFLplusplus -type f \( -name "afl-*" -o -name "*.so*" \) -exec chmod 755 {} \;
After that, update the file paths in /themis/fuzzer/webapp_share/config.ron and /themis/fuzzer/webapp_share/default_config_kernel.ron to match the new AFL++ and Themis installation paths. Use absolute paths here, not relative ones.
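The path updates can be scripted. A minimal sketch, assuming the configs still contain the placeholder home directory /home/user used in the sample configs later in this guide (the helper name update_ron_paths is ours, not part of Themis; GNU sed assumed):

```shell
# Replace the placeholder "/home/user" prefix in the given .ron files with
# your actual home directory (in-place edit via GNU sed -i).
update_ron_paths() {
  local newhome="$1"; shift
  sed -i "s#/home/user#${newhome}#g" "$@"
}

# Example invocation (paths per this guide):
# update_ron_paths "/home/$USER" \
#   ~/Themis/themis/fuzzer/webapp_share/config.ron \
#   ~/Themis/themis/fuzzer/webapp_share/default_config_kernel.ron
```

Double-check the resulting files afterwards; the placeholder prefix is an assumption about your checkout.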
If you run into any issues, try the manual installation instead.
Next, we need to build AFL++ in Nyx mode. Make sure you have checked out version 4.0.0c, then run the following commands (see INSTALL.md and README.md for more details):
sudo apt-get update
sudo apt-get install -y build-essential python3-dev automake cmake git flex bison libglib2.0-dev libpixman-1-dev python3-setuptools cargo libgtk-3-dev
# try to install llvm 14 and install the distro default if that fails
sudo apt-get install -y lld-14 llvm-14 llvm-14-dev clang-14 || sudo apt-get install -y lld llvm llvm-dev clang
sudo apt-get install -y gcc-$(gcc --version|head -n1|sed 's/\..*//'|sed 's/.* //')-plugin-dev libstdc++-$(gcc --version|head -n1|sed 's/\..*//'|sed 's/.* //')-dev
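As an aside, the sed pipeline in the line above extracts gcc's major version; a worked example with an illustrative version string:

```shell
# Illustrative only: how the sed pipeline above derives the gcc major version.
ver_line="gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0"
# First sed cuts everything from the first dot; second keeps the last word.
major=$(printf '%s\n' "$ver_line" | sed 's/\..*//' | sed 's/.* //')
echo "$major"   # prints: 11
```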
sudo apt-get install -y ninja-build # for QEMU mode
sudo apt-get install -y cpio libcapstone-dev # for Nyx mode
sudo apt-get install -y wget curl # for Frida mode
sudo apt-get install -y python3-pip # for Unicorn mode
cd ~/AFLplusplus
make distrib
sudo make install
Now, we should also build the Nyx version of QEMU and all other dependencies. Trigger the whole AFL++ Nyx support build as follows:
sudo apt-get install -y libgtk-3-dev pax-utils python3-msgpack python3-jinja2 libcapstone-dev
cd ./AFLplusplus/nyx_mode/ && sudo ./build_nyx_support.sh
This builds everything we need:
- QEMU-Nyx (modified QEMU)
- Nyx packer tool (tool to build snapshot)
- libnyx (underlying library)
Important
Make sure that NO_PT_NYX is defined in AFLplusplus/nyx_mode/packer/nyx.h to disable PT mode, as explained in the beginning. We do not need the CPU's Intel PT support for our purposes. This should be the case by default, but you should double-check it.
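A quick way to double-check (a sketch; the helper name has_no_pt is ours, and the example path assumes the layout from this guide):

```shell
# Report whether NO_PT_NYX is defined in the given nyx.h header.
has_no_pt() {
  if grep -q "define NO_PT_NYX" "$1"; then
    echo "NO_PT_NYX defined"
  else
    echo "NO_PT_NYX missing"
  fi
}

# Example: has_no_pt ~/AFLplusplus/nyx_mode/packer/nyx.h
```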
/*
This file is part of NYX.
Copyright (c) 2021 Sergej Schumilo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#ifndef KAFL_USER_H
#define KAFL_USER_H
#define NO_PT_NYX // <----- MAKE SURE THIS IS THERE
// ...
Important
Pay close attention to version mismatches between different nyx.h header files. The following header files must match exactly (i.e., show no diff):
AFLplusplus/nyx_mode/packer/nyx.h
themis/fuzzer/nyx.h
themis/php-7.4-patched/nyx.h
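The comparison can be automated with a small sketch (the helper name is ours; adjust the paths to your checkouts):

```shell
# Print a MISMATCH line for every header copy that differs from the reference.
check_nyx_headers() {
  local base="$1"; shift
  local f
  for f in "$@"; do
    if ! diff -q "$base" "$f" >/dev/null 2>&1; then
      echo "MISMATCH: $f"
    fi
  done
}

# Example (paths per this guide):
# check_nyx_headers AFLplusplus/nyx_mode/packer/nyx.h \
#   themis/fuzzer/nyx.h themis/php-7.4-patched/nyx.h
```

No output means all copies are identical.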
With that, build all necessary htool binaries (h*.c) as follows:
cd ./AFLplusplus/nyx_mode/packer/packer/linux_x86_64-userspace && sudo ./compile_64.sh
This compiles the following files into AFLplusplus/nyx_mode/packer/packer/linux_x86_64-userspace/:
- ./bin64/loader
- ./bin64/libnyx.so
- ./bin64/habort
- ./bin64/hcat
- ./bin64/hget
- ./bin64/hget_bulk
- ./bin64/hpush
- ./bin64/habort_no_pt
- ./bin64/hcat_no_pt
- ./bin64/hget_no_pt
- ./bin64/hget_bulk_no_pt
- ./bin64/hpush_no_pt
Copy the h* binaries into your webapp_share folder:
cp AFLplusplus/nyx_mode/packer/packer/linux_x86_64-userspace/bin64/h* Themis/themis/fuzzer/webapp_share/
Download the Ubuntu 18.04 Server amd64 ISO:
wget -P AFLplusplus/nyx_mode/ "https://old-releases.ubuntu.com/releases/18.04.4/ubuntu-18.04.4-server-amd64.iso"
After that, we can start preparing the snapshot that Themis will use for fast VM reloading. The following commands create an Ubuntu 18.04 image with 25 GB of disk and 16 GB of RAM.
cd AFLplusplus/nyx_mode
./packer/qemu_tool.sh create_image ubuntu_docker.img 25000
./packer/qemu_tool.sh install ubuntu_docker.img ubuntu-18.04.4-server-amd64.iso
# --------------------------------------------------------------------------
# keep the terminal open and connect to localhost:5900 via a VNC client and
# perform the full installation, log in and run:
# --------------------------------------------------------------------------
sudo apt-get remove -y docker docker-engine docker.io containerd runc
sudo apt install -y openssh-server python3 python3-pip libtool-bin bison glib2.0-dev automake docker.io libarchive-dev containerd vmtouch gzip
sudo usermod -a -G docker $USER
sudo systemctl unmask docker
# patch docker (optional)
sudo mkdir -p /etc/docker
sudo tee daemon.json >/dev/null <<'EOF'
{
  "data-root": "/dev/shm/docker",
  "storage-driver": "overlay2"
}
EOF
sudo tee docker-setup.service >/dev/null <<'EOF'
[Unit]
Description=Docker Setup Service
Before=docker.service
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/mkdir -p /dev/shm/docker
ExecStart=/bin/chmod 711 /dev/shm/docker
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo mv daemon.json /etc/docker/ && sudo chmod 644 /etc/docker/daemon.json
sudo mv docker-setup.service /etc/systemd/system/ && sudo chmod 777 /etc/systemd/system/docker-setup.service
cd /etc/systemd/system && sudo systemctl enable docker-setup.service
# logout and login again
sudo swapoff -a
shutdown -h now
# --------------------------------------------------------------------------
# back to your host system terminal!
# here, we assume the username of the Ubuntu server to be "user".
# --------------------------------------------------------------------------
./packer/qemu_tool.sh post_install ubuntu_docker.img
scp -P 2222 ./packer/packer/linux_x86_64-userspace/bin64/loader user@localhost:/home/user
ssh -p 2222 user@localhost
# --------------------------------------------------------------------------
# we are now connected to the Ubuntu server via an SSH tunnel.
# from here, run:
# --------------------------------------------------------------------------
sudo swapoff -a
sudo shutdown -h now
# --------------------------------------------------------------------------
# and again, back to your host system terminal!
# --------------------------------------------------------------------------
./packer/qemu_tool.sh create_snapshot ubuntu_docker.img 16384 ubuntu_docker_pre_snapshot
# --------------------------------------------------------------------------
# keep the terminal open again and connect to localhost:5900 via VNC.
# perform the following, final commands:
# --------------------------------------------------------------------------
sudo swapoff -a
sudo mv /home/user/loader /tmp
cd /tmp
sudo ./loader
After that, you should find a file called ubuntu_docker.img and a non-empty folder called ubuntu_docker_pre_snapshot inside the AFLplusplus/nyx_mode/ folder.
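You can sanity-check both artifacts with a sketch like the following (the helper name is ours; the example paths assume this guide's layout):

```shell
# Verify the disk image exists and the pre-snapshot folder is non-empty.
check_snapshot() {
  local img="$1" snap="$2"
  [ -f "$img" ] || { echo "missing image: $img"; return 1; }
  [ -d "$snap" ] && [ -n "$(ls -A "$snap")" ] \
    || { echo "empty or missing snapshot dir: $snap"; return 1; }
  echo "snapshot looks complete"
}

# Example: check_snapshot AFLplusplus/nyx_mode/ubuntu_docker.img \
#   AFLplusplus/nyx_mode/ubuntu_docker_pre_snapshot
```

If this reports an empty snapshot directory, see the troubleshooting note below about re-running the snapshot creation.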
Continue by placing these two paths inside the (default_config_kernel|config).ron files located in themis/fuzzer/webapp_share/. Do not forget to update the .ron file locations as well (alternatively, use the nyx_config_gen.py script found in AFLplusplus/nyx_mode/packer/packer/; we did this manually, however):
// themis/fuzzer/webapp_share/config.ron
#![enable(implicit_some)]
(
// TODO: replace the `include_default_config_path` value with the actual location
include_default_config_path: "/home/user/Themis/themis/fuzzer/webapp_share/default_config_kernel.ron",
runner: QemuSnapshot((
)),
fuzz: (
workdir_path: "/tmp/workdir",
mem_limit: 16384,
cow_primary_size: 9663676416,
seed_path: "",
dict: [
],
ip0: (
a: 0,
b: 0,
),
ip1: (
a: 0,
b: 0,
),
ip2: (
a: 0,
b: 0,
),
ip3: (
a: 0,
b: 0,
),
),
)
// themis/fuzzer/webapp_share/default_config_kernel.ron
#![enable(implicit_some)]
(
runner: QemuSnapshot((
// TODO: replace the next three values with the actual paths for the VM snapshot
qemu_binary: "/home/user/AFLplusplus/nyx_mode/QEMU-Nyx/x86_64-softmmu/qemu-system-x86_64",
hda: "/home/user/AFLplusplus/nyx_mode/ubuntu_docker.img",
presnapshot: "/home/user/AFLplusplus/nyx_mode/ubuntu_docker_pre_snapshot",
snapshot_path: DefaultPath,
debug: false
)),
fuzz: (
workdir_path: "/tmp/workdir",
bitmap_size: 524288,
mem_limit: 16384,
time_limit: (
secs: 0,
nanos: 80000000,
),
threads: 1,
thread_id: 0,
cpu_pin_start_at: 0,
snapshot_placement: none,
seed_path: "",
dict: []
),
)
Now, build the agent by running the following commands:
nim --gcc.exe:musl-gcc --gcc.linkerexe:musl-gcc --passL:-static c --d:release --opt:speed atropos_agent.nim
cp atropos_agent webapp_share/
If you hit any issues, you can try building the afl-fuzz binary in debug mode. This is especially helpful if the binary encounters issues during execution, since it lets us attach gdb to the process and read the debug symbols.
cd AFLplusplus && make DEBUG=1
Attach to a process via gdb:
gdb -q --args /bin/bash /path/to/your/binary
The snapshot folder is empty
In that case, the snapshot creation went wrong. Please run the command procedure above again. Make sure the destination folder is writable and all necessary files exist. Most of the time, a second try does the job.
VNC address already in use
This means the VNC service is still running for some reason, perhaps from an earlier execution. You can solve this by killing the process manually.
# try to find PID of VNC process(es)
ps -ef | grep vnc
# alternatively, try finding it with top
top
# kill process by PID
sudo kill <pid>
That's it!
Run pre-flight setup of Themis.
cd ./themis/fuzzer && sudo ./setup.sh
Note that we do not use the KVM-Nyx modules suggested by the Atropos repository. This is because we are fuzzing a user-land application, where CPU assistance is not needed, i.e., we do not require special hypercalls, etc.
Since we are going to fuzz a new type of target with our new nested execution environment, we need to create a new container instead of the original one used by the Atropos web fuzzer. For this purpose, you will find the ready-made WordPress 6.4.2 target container docker-wp in the themis/fuzzer folder. The provided files contain an example experiment setup that you can play around with.
This container automatically installs the correct WordPress version and builds the instrumented PHP interpreter that provides the feedback mechanism for the custom AFL mutator. It additionally installs all dependencies, creates a database, sets up the WordPress instance and the instrumentation tool, and finally installs our custom harness.
Now build the target:
cd ./themis/fuzzer/ && ./docker_build.sh docker-wp
During this process, you will be asked to finish the setup. Log into the WordPress instance launched at http://127.0.0.1:8000 and copy the information from http://127.0.0.1:8000/browser_data.php into themis/fuzzer/browser_data.json and /dev/shm/browser_data.json. This will be used to clone your browser session later on. Additionally, if required, you may apply further configuration steps to the instance (e.g., altering the landing page). Then finish the manual session by pressing CTRL+C and wait until the build script terminates. This also cuts the container off from the internet, to prevent spam to API endpoints or telemetry services employed by plugin authors.
You may now test if your installation works correctly (see Let's fuzz!).
No internet connection
If your docker container cannot connect to the internet, make sure you have configured docker networking correctly. Thanks to @wisbucky for elaborating on this issue.
sudo nano /etc/docker/daemon.json
// /etc/docker/daemon.json
{
"dns": ["8.8.8.8"]
}
And now, restart your docker service.
sudo systemctl restart docker
If necessary, delete all stopped local containers, networks, dangling images, and unused build cache, then re-build the target container.
# delete everything
docker system prune -a
sudo systemctl restart docker
# re-build target
cd ./themis/fuzzer/ && ./docker_build.sh docker-wp
Unknown error occurring
If errors occur during the build procedure, you can start debugging by attaching to the running docker container and checking what is going on inside.
docker exec -it webapp bash
Or run single commands from outside the container:
docker exec -t webapp bash -ic "whoami"
If the error occurred during image creation, run the following command to launch a container based on that image with a shell:
docker run -i -t webapp-SUFFIX:latest /usr/bin/bash
Replace SUFFIX with the image name's specific suffix, i.e., base, snapshot, etc. (you can list them with docker images).
To run the final snapshot container, which is the one that is actually fuzzed later, run the following command (which also skips the custom entrypoint):
docker run --entrypoint "/usr/bin/bash" -v /tmp:/tmp --network none -it webapp-snapshot:latest
This additionally mounts /tmp, which allows for easy file transfer between the host and the docker container.
For all experiments, we had to configure the machines to enable targeted per-core execution and prevent crashes. Here is how you can do it:
sudo apt install linux-tools-common linux-tools-`uname -r`
sudo sysctl -w kernel.randomize_va_space=0
echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost
sudo cpupower frequency-set -g performance
sudo swapon /swapfile # 300GB swapfile
Finally, you can invoke the fuzzer! The test run script automatically launches our customized target, but it also allows you to override this behavior. The script expects AFL++ to be installed at /home/$USER/AFLplusplus.
# launch single test instance
cd ./Themis/themis/fuzzer && sudo ./setup.sh && ./test.sh [<target>]
Optionally, specify the <target> parameter (default: "docker-wp") with your target docker container. This is only necessary if your target container has a different name.
If everything works fine, you should promptly see a screen as follows:
Place the plugin under test as wrapped target docker-wp in themis/fuzzer and run:
sudo ./setup.sh
./experiment_1.sh
This launches a parallel fuzzing campaign with 40 instances. Please make sure your system fulfills the requirements mentioned in the paper.
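A quick pre-flight sketch for checking that your machine has enough cores for the requested instance count (the helper is ours; nproc from coreutils):

```shell
# Warn if fewer cores are available than fuzzing instances requested.
check_cores() {
  local need="$1" have
  have=$(nproc)
  if [ "$have" -lt "$need" ]; then
    echo "only $have cores available for $need instances"
  else
    echo "ok: $have cores for $need instances"
  fi
}

# Example: check_cores 40
```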
To watch what the fuzzer frontend is doing in real time, you can use the dump function of Themis:
watch -n 30 "python3 dump.py --last 1 /dev/shm/workdir_webapp_share_out/*/queue/"
Here, you can identify all requests made to the application under test. Inspect the sent information and parameters to understand why certain bug locations are or are not reached.
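For a quick numeric overview instead, you can count queue entries per instance (a sketch; the helper name is ours, workdir layout as above):

```shell
# Print "<count><TAB><queue dir>" for every instance under the workdir.
queue_counts() {
  local base="$1" d
  for d in "$base"/*/queue; do
    [ -d "$d" ] || continue
    printf '%s\t%s\n' "$(find "$d" -maxdepth 1 -type f | wc -l)" "$d"
  done
}

# Example: queue_counts /dev/shm/workdir_webapp_share_out
```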
After fuzzing for a while, you may want to see which results (i.e., triggered crashes) have already been found. You can do so by running:
python3 print_crash_logs.py -w /dev/shm/workdir_webapp_share_out/*/crashes/
This gives you a list of all crashes found so far, each with a short description.
To reproduce a crash, you can extract a cURL command that replicates the used parameters/data one-to-one in your browser.
python3 curl.py /dev/shm/workdir_webapp_share_out/[ID]/crashes/[CRASH_NAME]
This experiment was conducted with restricted resources for comparability. We also parallelized the execution of plugins to improve evaluation efficiency. You can place multiple targets as docker-A, docker-B, and so on in themis/fuzzer. Realistically, you will only fit up to two concurrent evaluations on one machine, but feel free to experiment.
sudo ./setup.sh
./experiment_2_3.sh A 1 10
./experiment_2_3.sh B 11 20
This launches two parallel experiments (10 instances each) running against targets A and B in their respective folders docker-A and docker-B. The launch script creates separate folders containing the target slug name, which are necessary for parallel execution.
# get start timestamp
stat --format="%Y" fuzz_start_[SLUG]_[INSTANCE]
# calculate delta time for each finding
find /dev/shm/workdir_webapp_share_[SLUG]_[SLUG]_out/[INSTANCE]/crashes/ -type f -exec stat --format="%Y" {} + | awk '{print $1 - [START_TIMESTAMP]}' | sort -n
As for the second experiment, it was evaluated under the same constraints; the setup works the same.
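The timestamp arithmetic above can be wrapped into a small helper (a sketch; the helper name is ours, GNU stat assumed):

```shell
# Print seconds-from-start for every crash file in a directory, sorted.
crash_deltas() {
  local start="$1" dir="$2"
  find "$dir" -type f -exec stat --format="%Y" {} + \
    | awk -v s="$start" '{print $1 - s}' | sort -n
}

# Example:
# crash_deltas "$(stat --format='%Y' fuzz_start_[SLUG]_[INSTANCE])" \
#   /dev/shm/workdir_webapp_share_[SLUG]_[SLUG]_out/[INSTANCE]/crashes/
```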
If you ran multiple experiments in parallel (recommended), you first need to organize the .cov files into separate folders depending on the core they were pinned to. For example, if you launched plugin A on cores 1 to 10 and B on cores 11 to 20, move /dev/shm/rq/(1-10)/*.cov to e.g. /tmp/cov_A/ and vice versa for B:
# plugin A
for i in $(seq 1 10); do cp -r -p /dev/shm/rq/$i /tmp/cov_A/; done;
# plugin B
for i in $(seq 11 20); do cp -r -p /dev/shm/rq/$i /tmp/cov_B/; done;
Now you can run the following command to obtain the coverage_plot.pdf file:
python3 plot_all_trials.py --plugin-name "Plugin A" --plugin-slug "A" --cov "/tmp/cov_A/"
You may also be interested in the pcov coverage reached over time. You can plot it to your terminal (no graphical UI required) by running:
python3 plot.py -w
Themis crashes when loading docker container
This is most likely due to the docker container's size. Make sure it is < 2 GB in size.
You may use the following to flatten the webapp-snapshot image and produce a new, smaller webapp-snapshot.tar snapshot:
# Dockerfile.flat
FROM webapp-snapshot
COPY --from=webapp-snapshot / /
Now run:
docker build -t webapp-flat -f Dockerfile.flat .
docker save webapp-flat > webapp-snapshot.tar
This produces a new docker container snapshot. Place it inside themis/fuzzer/webapp_share/ to make it available to the fuzzer.
Themis crashes when creating VM snapshot
In this case, your host machine most likely lacks memory. Verify this by running:
sudo dmesg | grep -i "out of memory"
You can tackle this either by switching to a machine with more memory (for single instances, you will need ~32 GB of RAM) or by using swap files:
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Verify your swapfile is being used:
sudo swapon --show
# NAME TYPE SIZE USED PRIO
# /swapfile file 32G 0B -2
sudo free -h
# total used free shared buff/cache available
# Mem: 15Gi 1,1Gi 12Gi 373Mi 1,9Gi 13Gi
# Swap: 31Gi 0B 31Gi
For more detailed instructions, visit this link to get a full tutorial.
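To check whether RAM plus swap reaches the ~32 GB mentioned above, here is a small sketch reading /proc/meminfo (the helper name is ours; Linux only):

```shell
# Sum MemTotal and SwapTotal (both reported in kB) and print whole GiB.
total_mem_gib() {
  awk '/^(MemTotal|SwapTotal):/ {kb += $2} END {printf "%d\n", kb / 1048576}' /proc/meminfo
}

# Example: [ "$(total_mem_gib)" -ge 32 ] || echo "consider adding swap"
```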
