diff --git a/README.md b/README.md
index be05da9f..8ea31034 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,27 @@
# System Intelligence Benchmark: A Benchmark Suite for Evaluating LLM's System Capabilities
-It is a comprehensive benchmarking framework for evaluating the performance of Large Language Models (LLMs) and AI systems across critical system capabilities. It features example benchmarks for system course exams, course projects, and cache algorithm design, and offers both CLI tools and an SDK for further development.
+System Intelligence Benchmark is a comprehensive benchmark suite for evaluating the performance of Large Language Models (LLMs) and AI systems across critical system capabilities. It features tutorials and example benchmarks, and offers both CLI tools and an SDK for further development.
## Benchmark Overview
-### Benchmark Concept
-A benchmark is a standard or point of reference against which things may be compared or assessed. In the context of AI and LLMs, benchmarks are essential for evaluating model capabilities, guiding research directions, and measuring progress. The following figure illustrates the main components of a AI benchmark. We abstract the benchmark into 4 components: the taskset, the environment, the executor, and the evaluator. This abstraction ensures a clear flow from tasks to metrics. You can see [benchmark_abstraction.md](doc/benchmark_abstract.md) for details.
+A benchmark is a standard or point of reference against which things may be compared or assessed. In the context of AI and LLMs, benchmarks are essential for evaluating model capabilities, guiding research directions, and measuring progress.
+
+### Benchmark Framework
+
+To advance benchmark development, we propose the System Intelligence Benchmark, a modular and extensible framework designed to support diverse research domains and problem types. As shown in the figure below, the framework comprises four abstractions: task set, environment, executor, and evaluator. Each task is associated with a specific environment, wherein the executor generates a solution that is subsequently assessed by the evaluator, which returns the evaluation metrics. This design enables the flexible integration of heterogeneous agents and their systematic evaluation. Additionally, the framework includes built-in executors (agents), evaluators (methodologies and grading rubrics), and tutorials. Ideally, users need only supply tasks that represent specific capabilities, select an evaluator, and quickly create and run a new benchmark. See [benchmark_abstraction.md](doc/benchmark_abstract.md) for details.
+The benchmark framework is **still under development**. If you have any questions, feel free to open an issue or contact us directly.
+
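The four abstractions can be pictured with a minimal sketch; the class and method names below are illustrative only, not the framework's actual SDK:

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    description: str

class Environment:
    def setup(self, task):
        pass  # e.g., start a container or clone a repository

class Executor:
    def solve(self, task, env):
        return f"solution for {task.task_id}"  # e.g., invoke an LLM agent

class Evaluator:
    def evaluate(self, task, solution):
        return {"task": task.task_id, "score": 1.0}  # grading rubric goes here

def run_benchmark(tasks, env, executor, evaluator):
    """Flow: task -> environment -> executor -> evaluator -> metrics."""
    results = []
    for task in tasks:
        env.setup(task)
        solution = executor.solve(task, env)
        results.append(evaluator.evaluate(task, solution))
    return results
```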
### Benchmarks
-System Intelligence Benchmark currently includes the following example benchmarks. Some examples are still under development — we're actively updating them. Stay tuned!
-- **Course Exam Benchmark** (`benchmarks/course_exam_bench/`) - Tests LLM understanding of system concepts through university course exams (54 questions across 4 exams)
-- **Course Project Benchmark** (`benchmarks/course_project_bench/`) - Assesses AI capability on practical system course projects
-- **Cache Benchmark** (`benchmarks/cache_bench/`) - Evaluates AI performance on cache algorithm design tasks
-- **Example Benchmark** (`benchmarks/example_bench/`) - Template and reference implementation for creating new benchmarks
+System Intelligence Benchmark currently includes the following example benchmarks. Each benchmark assesses specific capabilities across multiple levels within a given research direction. Some benchmarks are still under development — we're actively updating them. Stay tuned!
+
+- **System Exam Benchmark** ([benchmarks/course_exam_bench/](benchmarks/course_exam_bench/)) - Tests LLM understanding of system concepts through university course exams (54 questions across 4 exams)
+- **System Lab Benchmark** ([benchmarks/course_lab_bench/](benchmarks/course_lab_bench/)) - Assesses AI capability on practical system course labs and projects
+- **System Artifact Benchmark** ([benchmarks/arteval_bench/](benchmarks/arteval_bench/)) - Evaluates AI performance on artifact evaluation
+- **System Modeling Benchmark** ([benchmarks/sysmobench/](benchmarks/sysmobench/)) - Evaluates an agent's ability to produce correct TLA+ models for real-world concurrent and distributed systems, covering capabilities in system comprehension, abstraction, and potentially tool fluency.
+- **Example Benchmark** ([benchmarks/example_bench/](benchmarks/example_bench/)) - Template and reference implementation for creating new benchmarks
## Quick Start
### Repo Structure
@@ -29,13 +36,15 @@ System Intelligence Benchmark currently includes the following example benchmark
- Python 3.9+
- Docker (optional, for containerized execution)
+> Docker images currently support only the x86_64/AMD64 architecture. ARM64 (Apple Silicon M1/M2/M3) is not yet supported.
+
### Installation
1. Clone the repository:
```bash
- git clone https://github.com/systemintelligence/system_intelligence_benchmark.git
- cd system_intelligence_benchmark
+ git clone https://github.com/sys-intelligence/system-intelligence-benchmark.git
+ cd system-intelligence-benchmark
```
2. Install dependencies for a specific benchmark:
@@ -48,13 +57,25 @@ System Intelligence Benchmark currently includes the following example benchmark
### Running Benchmarks
-#### Using CLI
+#### Run All Benchmarks
+
+To run all benchmarks sequentially:
```bash
cd cli
./run_all_local.sh
```
+#### Run a Single Benchmark
+
+To run just one benchmark locally:
+
+```bash
+cd benchmarks/<benchmark_name>/
+./install.sh # Only needed the first time
+./run.sh
+```
+
#### Output Format
Benchmarks generate standardized outputs in `cli/outputs/{benchmark_name}__{model_name}__{agent}_{timestamp}/`:
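The naming convention above can be illustrated with a small helper; the CLI builds these paths itself, so this is only a sketch of the documented pattern:

```python
from datetime import datetime

def output_dir(benchmark_name, model_name, agent, timestamp=None):
    # {benchmark_name}__{model_name}__{agent}_{timestamp}, per the convention above
    timestamp = timestamp or datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"cli/outputs/{benchmark_name}__{model_name}__{agent}_{timestamp}"

print(output_dir("course_exam_bench", "gpt-4o", "mini-swe-agent", "20250101_120000"))
```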
@@ -65,27 +86,42 @@ Benchmarks generate standardized outputs in `cli/outputs/{benchmark_name}__{mode
You can find more detailed usage guides in the CLI [README.md](cli/README.md).
-## Adding Benchmarks
+## Contribute to Benchmarks
+
+We welcome community contributions: enrich existing benchmarks (e.g., by adding more exam problems to the System Exam Benchmark and more system artifacts to the System Artifact and System Modeling benchmarks), port your existing benchmarks, and, most importantly, create new system intelligence benchmarks with our framework. See below for detailed instructions. We believe that such collective community efforts will advance AI to its next level and help realize System Intelligence, unlocking the potential of AI-driven computing system innovations. If you are interested in contributing or already have good system benchmarks, please let us know. We have set up a [slack channel](https://join.slack.com/t/sys-intelligence/shared_invite/zt-3hpkgr2aa-NnuPxUbyHr45S89DFi_N1A) at sys-intelligence.slack.com.
> [!NOTE]
-> We suggest getting starting by walking through the basic concept of a AI benchmark: [Benchmark Abstraction](doc/benchmark_abstract.md).
+> We suggest getting started by walking through the basic concepts of an AI benchmark: [Benchmark Abstraction](doc/benchmark_abstract.md). Once you understand these concepts, you can decide whether to contribute to existing benchmarks, port an existing benchmark, or create a new benchmark.
+
+### Contribute to Existing Benchmarks
+The easiest way to contribute is to add more tasks to existing benchmarks. The following two are especially recommended. Simply follow the provided guidelines to submit your data, and you're all set.
+- **SystemExam**: If you are a professor teaching one or more courses, we highly recommend contributing **more exam problems** to SystemExam (see [this doc](https://github.com/sys-intelligence/system-intelligence-benchmark/tree/main/benchmarks/course_exam_bench#how-to-extend-the-benchmark) for step-by-step guidance).
+- **SystemArtifact**: If you are a researcher submitting artifacts, or an AE chair involved in artifact evaluation, we highly recommend contributing **more system artifacts** to SystemArtifact (see [this doc](https://github.com/sys-intelligence/system-intelligence-benchmark/blob/main/benchmarks/arteval_bench/README.md) for step-by-step guidance).
+
+In addition, you can help review the existing benchmarks to propose improvement ideas or enhance them directly, for example by adding more advanced evaluators or incorporating improved metrics.
-After understanding the basic concept, you can decide whether to add more tasks for existing benchmarks or create new benchmarks that map to different levels of system capabilities.
+### Porting Existing Benchmarks
+> [!NOTE]
+> See [porting_benchmark.md](doc/porting_benchmark.md) for step-by-step guidelines.
-### Contribute to existing Benchmarks
-The easiest way to contribute is to add more tasks to existing benchmarks. For example, you can add more questions to the course exam benchmark or more projects to the course project benchmark. You can add more system algorithm design problems into algorithm design benchmark. Please follow the existing format and structure for adding new tasks. You can also improve the existing benchmarks by adding more advanced evaluators with improved metrics.
+For integrating existing, independently-developed benchmark projects while maintaining synchronization with upstream:
+
+- Use Git Subtree/Submodule to incorporate upstream code
+- Write a bridge layer to connect upstream evaluators with framework SDK
+- Configure bidirectional sync for pulling updates and contributing fixes
+
+**Example:** [SysMoBench](benchmarks/sysmobench/) - ported from [SysSpecBench](https://github.com/specula-org/SysSpecBench)
### Creating New Benchmarks
-> [!NOTE]
-> See [custom_benchmark.md](doc/custom_benchmark.md) for step-by-step guidelines.
+> [!NOTE]
+> See [custom_benchmark.md](doc/creating_benchmark.md) for step-by-step guidelines.
To create a new benchmark, follow these steps:
1. Create a new benchmark directory in `benchmarks/`
- 1. Based on your specific requirements, copy an example benchmark as a starting point
- 2. Update the `src/main.py` file with your specific evaluation logic
- 3. Update the README.md with benchmark-specific details
- 4. Add test cases in the `tests/` directory
-2. Add an `env.toml` configuration file
+ 1. Based on your specific requirements, select and copy an example benchmark as a starting point
+ 2. Update the `src/main.py` file with your specific evaluation logic (your executor and evaluator)
+ 3. Add test cases in the `tests/` directory
+2. Update the README.md with benchmark-specific details
3. Implement `install.sh` and `run.sh` scripts
4. Update the benchmark list in `run_all_local.sh` and `run_docker.sh` if needed
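For step 3, a hypothetical minimal `run.sh` skeleton might look like the following; the entry point and default model name are illustrative assumptions, not a required layout:

```shell
#!/bin/bash
# Minimal run.sh sketch for a new benchmark (illustrative only).
set -e
MODEL="${1:-gpt-4o}"                    # model name passed by the CLI wrapper
echo "Running benchmark with model: $MODEL"
# python src/main.py --model "$MODEL"   # real evaluation entry point goes here
```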
@@ -110,3 +146,4 @@ trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
+
diff --git a/benchmarks/arteval_bench/.gitignore b/benchmarks/arteval_bench/.gitignore
new file mode 100644
index 00000000..64b18d31
--- /dev/null
+++ b/benchmarks/arteval_bench/.gitignore
@@ -0,0 +1,119 @@
+# ----------------------
+# General
+# ----------------------
+*.lock
+*.log
+*.bak
+*.pkl
+*.png
+*.jpg
+*.jpeg
+*.pdf
+*.xls
+*.csv
+*.doc
+
+# Logs / temp
+logs/
+log/
+*.tmp
+*.temp
+*.swp
+*.swo
+*.orig
+
+# Trash / scratch areas
+__pycache__/
+trash/
+
+# OS files
+.DS_Store
+Thumbs.db
+
+# ----------------------
+# Python
+# ----------------------
+
+# Byte-compiled / optimized / DLL files
+*.py[cod]
+*$py.class
+
+# Virtual environments
+.venv/
+venv/
+env/
+ENV/
+
+# Distribution / packaging
+build/
+dist/
+eggs/
+*.egg-info/
+.eggs/
+pip-wheel-metadata/
+*.whl
+
+# Test / coverage
+.pytest_cache/
+.coverage
+.coverage.*
+htmlcov/
+.tox/
+.nox/
+
+# Type checking / tooling
+.mypy_cache/
+.pyre/
+.pytype/
+
+# ----------------------
+# Java
+# ----------------------
+
+# Compiled files
+*.class
+
+# Build outputs
+target/
+bin/
+out/
+
+# Maven / Gradle
+.mvn/
+.settings/
+.gradle/
+build/
+
+# IDE project files
+*.iml
+.idea/
+.project
+.classpath
+
+# Archives
+*.jar
+*.war
+*.ear
+
+# ----------------------
+# C / C++
+# ----------------------
+
+# Object / compiled files
+*.o
+*.obj
+*.so
+*.dll
+*.dylib
+*.a
+*.lib
+*.lo
+
+# Executables
+a.out
+*.exe
+*.out
+
+# Build directories
+build/
+cmake-build-*/
diff --git a/benchmarks/arteval_bench/README.md b/benchmarks/arteval_bench/README.md
index daa3db8e..f6711ca6 100644
--- a/benchmarks/arteval_bench/README.md
+++ b/benchmarks/arteval_bench/README.md
@@ -1,61 +1,111 @@
-# YourBenchmarkName
+# ArtEvalBench
-## Scenario Description
+`ArtEvalBench` is a benchmark for evaluating AI agents that support the Artifact Evaluation (AE) process by auditing the research prototypes (artifacts) that accompany research papers as part of peer review. Artifact evaluation involves reconstructing a reference environment from (partial) specifications, building and configuring complex codebases with often implicit assumptions, preparing datasets and third-party benchmarks whose availability may change over time, orchestrating multi-stage experiments under controlled resource and time budgets, and validating that observed results fall within acceptable tolerance bounds relative to those reported in the paper. Despite the intricacy of the process, we believe AI agents can be trained to support reviewers by automating most of these stages.
-Provide a summary of your scenarios here. This section should give an overview of the context, objectives, and key elements involved in your scenarios.
+Want to find out more or contribute? Jump to the [contributor's guide](#contributors-guide).
-### Task Details
+## Goals and Objectives
-Describe your task in detail, including:
+Artifact evaluation has become a standard component of the peer-review process across a wide range of conferences in Computer Science, especially in Systems and related areas. Despite this progress, however, the practical work of provisioning operational environments, resolving dependencies, building artifacts, preparing benchmarks, running experiments, and checking results remains brittle and time-consuming. To alleviate this burden, we envision an automated artifact evaluation AI assistant that executes repeatable steps under (human) reviewer supervision. This "AE assistant" would target artifact mechanics (e.g., code compilation, dataset/benchmark preparation, experiment orchestration, and output validation) alongside code auditing (e.g., does the artifact implementation match the paper prose? do results closely match those in the paper?). The agent's output can then inform the more complex methodological assessment, design trade-off analysis, and results interpretation that reviewers must perform to complete the AE process.
-- **Input**: Specify the type of input data required for the task.
-- **Output**: Define the expected output from the task.
-- **Evaluation**: Explain how to evaluate the output, including any metrics or criteria used to measure performance.
+Concretely, given an artifact (code, documentation, experiment framework), a complete installation & operation guide, and the paper itself, the AE assistant:
-## Benchmark Setup
+1. provisions the reference environment;
+
+2. builds/installs a particular version of the artifact using the specified toolchain;
+
+3. retrieves and prepares datasets or other third-party targets;
+
+4. orchestrates experiments with explicit configuration, time and resource budgets; and
+
+5. generates a human-readable report that summarizes the outcome of each step, indicating any blockers (e.g., missing dependencies) and how it managed to overcome them.
+
+The goal is to reduce reviewer effort on mechanical tasks so attention can shift to scientific auditing.
+
+## Background
+
+#### » The artifact evaluation process
+
+Most conferences award badges to incentivize high-quality artifacts that support the paper's claims by asking authors to participate in a multi-stage evaluation process where reviewers attempt to download, install, and operate the artifacts themselves. The following summarizes the widely used criteria for each badge:
+
+* Artifact Available. This badge indicates that the artifact itself (code, documentation, scripts, benchmarks, etc.) is publicly accessible with a persistent identifier (e.g., DOI, commit ID) on an (ideally long-term) archival repository (e.g., Zenodo, GitHub). Availability does not imply the artifact can compile, build, or is functionally correct. It only confirms that the materials needed to verify key claims, reproduce experimental results, and reuse the tool itself are open-sourced.
+
+* Artifact Functional. This badge indicates that the artifact installs/builds in a reference environment and runs at least a subset of the documented experiments. It confirms that dependencies and configurations are explicitly recorded, and outputs, at least for said subset of experiments, are consistent with the paper's prose.
+
+* Results Reproduced. This badge indicates that a third party can re-execute all necessary experiments to obtain results consistent with the paper, with a reasonable degree of tolerance (e.g., within relative error bounds, confidence intervals, or rank-ordering equivalence). On top of re-obtaining results that support the paper's claims, reproducibility further requires verifiable provenance (e.g., SW/HW environment characteristics, configuration parameters, experiment logs) and principled handling of non-determinism (e.g., repeated trials, fixed initial states, or variance analysis).
-### Test in Docker
+Further reading and a detailed description of criteria for each badge can be found [here](https://sysartifacts.github.io/eurosys2026/badges) and [here](https://sysartifacts.github.io/evaluator-guide.html).
-To test your benchmark in a Docker container, follow these steps:
+#### » What makes AE challenging in practice?
-1. Build the Docker image using the provided Dockerfile. You can do this by running the following command in the terminal:
+Reproducibility and reusability can be obstructed by multiple factors including, but not limited to: (i) environment drift (e.g., legacy libraries no longer available, drivers mismatch in newer OS versions); (ii) undocumented or implicit build assumptions (e.g., hard-coded compiler flags, directory paths, IPs, or reliance on OS-wide libraries that differ across distributions); (iii) brittle preprocessing of third-party benchmarks or datasets (e.g., broken download URL, non-deterministic compilation steps that silently invalidate subsequent stages); and (iv) unspecified results tolerance bounds that complicate validation for non-deterministic experiments (e.g., performance claims without clarifying what constitutes an acceptable deviation when running within a similar SW/HW setup).
- ```sh
- docker build -t your_benchmark_image .
- ```
+Overcoming such challenges requires persistence and careful bookkeeping, precisely where an automated AE assistant can provide leverage.
-2. Once the image is built, you can run it using the following command:
+## Contributor's guide
+
+#### » Overview and high-level structure
+
+To train and improve AE agents in a principled way, we introduce `ArtEvalBench`, a curated collection of artifacts accompanying peer-reviewed papers. To ensure a fair comparison, we include artifacts that have already been evaluated in an official AE process and awarded all three badges by the committee. Each entry includes the original artifact (instructions, code, scripts, datasets/benchmarks, etc.), the original paper, and a collection of "oracle" scripts that define objective checkpoints at four canonical stages: environment setup, build/install, benchmark preparation, and experiment execution.
+
+`ArtEvalBench` is designed to evaluate agents on capability (which stages they complete), efficiency (wall-clock time and intervention count), and fidelity (how closely reproduced results match those reported).
+
+To check these capabilities, each artifact includes four oracle scripts that encode minimal, verifiable success criteria for each of the four stages. The oracles are invoked non-interactively and must be idempotent. Conceptually, these four stages correspond to:
+
+1. Environment Setup: verifies presence and versions of required tools, libraries, or other dependencies; confirms hardware availability when applicable; and checks that configurations are portable rather than hardcoded or tied to a specific machine.
+2. Build/Install: confirms a complete build (or install) from a specified version, with expected binaries/modules present; runs tests, when available, or simple validation commands such as invoking `--help` or equivalent.
+3. Benchmark Preparation: asserts that datasets/benchmarks are present and checksums match; verifies that necessary third-party tools compile and the artifact's instrumentation/monitoring hooks are enabled, if applicable.
+4. Experiment Runs: executes each experiment according to the authors' guidelines; checks that the artifact produces the expected metrics, logs, files, figures, etc.; provides an initial assessment relative to specified tolerance bounds.
+
+For a typical example, check out the [agent evaluator](data/benchmark/sosp24_wasabi/wasabi/_agent_eval/) of [WASABI](data/benchmark/sosp24_wasabi/wasabi/).
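As a sketch of what a stage-1 (environment setup) oracle might check, consider the snippet below; the tool list is an illustrative assumption, and a real oracle would also pin versions and check hardware availability:

```python
import shutil

# Hypothetical stage-1 oracle: verify required tools are on PATH.
REQUIRED_TOOLS = ["git", "python3"]

def env_setup_oracle(tools=REQUIRED_TOOLS):
    """Return 1 if every required tool is found, else 0 (stage pass/fail)."""
    missing = [t for t in tools if shutil.which(t) is None]
    return 0 if missing else 1
```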
+
+#### » Adding a new artifact
+
+Adding a new artifact to the benchmark requires several steps:
+
+1. Create a stand-alone directory in `./data/benchmark` and copy all artifact files, including the README file.
+2. Implement oracles for evaluating the AI agent. These should follow the same structure as Wasabi's [evaluator](data/benchmark/sosp24_wasabi/wasabi/_agent_eval/), where each oracle is implemented in a separate Python source file and orchestrated by a `main.py` whose `main()` method returns a single integer: the overall score (0..4) the agent achieved.
+3. Create an entry in the [task journal](data/benchmark/arteval_tasks.jsonl) and populate the appropriate fields.
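The orchestrating `main.py` from step 2 might be sketched as follows; the per-stage oracle names are placeholders, and the assumption that later stages depend on earlier ones is ours:

```python
# Placeholder per-stage oracles; a real entry implements each in its own
# Python source file.
def check_env_setup(): return True
def check_build_install(): return True
def check_benchmark_prep(): return False
def check_experiment_runs(): return True

def main():
    """Return the overall score (0..4) as the number of stages passed."""
    oracles = [check_env_setup, check_build_install,
               check_benchmark_prep, check_experiment_runs]
    score = 0
    for oracle in oracles:
        if not oracle():
            break  # assumption: later stages depend on earlier ones
        score += 1
    return score
```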
+
+## Benchmark Setup
- ```sh
- docker run -it --rm your_benchmark_image
- # docker run --rm your_benchmark_image
- ```
+#### » Install dependencies
-3. Inside the container, navigate to the appropriate directory and execute the benchmark script to start the testing process.
+To install the benchmark, simply run the `install.sh` script to set up the environment:
+ ```sh
+ ./install.sh
+ ```
- ```sh
- ./run.sh
- ```
+ This operation will:
+ * Install Python 3.12 virtual environment
+ * Clone and install SWE-agent
+ * Install required Python packages (pytest, pytest-cov)
+ * Clone course repositories (6.5840-golabs-2024, xv6-labs-2024, etc.)
-### Maunaly Test
+#### » Run the benchmark
-To manually test your benchmark, follow these steps:
+To run the benchmark:
-#### Install Dependencies
+1. Execute the `run.sh` script with your model:
-To install and configure your benchmark, follow these steps:
+ ```sh
+ ./run.sh
+ # Example: ./run.sh claude-sonnet-4-5-20250929
+ ```
-1. Run the `install.sh` script to set up the environment and install necessary dependencies. You can simply execute the following command:
+2. Configure your LLM endpoint in `env.toml`:
+* For Azure/OpenAI models: Set `AZURE_API_KEY`, `AZURE_API_BASE`, `AZURE_API_VERSION`
+* For Anthropic models: Set `ANTHROPIC_API_KEY`
+* For self-hosted models: Configure `OPENAI_API_TYPE` and `OPENAI_BASE_URL`
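A hypothetical `env.toml` sketch combining the options above (key names follow the bullets; all values are placeholders — set only the block matching your provider):

```toml
AZURE_API_KEY = "<your-key>"
AZURE_API_BASE = "https://<your-endpoint>.openai.azure.com"
AZURE_API_VERSION = "<api-version>"

# ANTHROPIC_API_KEY = "<your-key>"

# OPENAI_API_TYPE = "<api-type>"
# OPENAI_BASE_URL = "http://localhost:8000/v1"
```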
- ```sh
- ./install.sh
- ```
+3. Results will be saved to `outputs/` with timestamp and model information
-#### Run
-To run your benchmark and obtain results for a specific task and model, follow these steps:
+#### » Supported Agents
-1. Review the `run.sh` script to understand the expected commands and parameters.
-2. Execute the `run.sh` script to start the benchmark. The script will guide you through the process and generate the results.
+The benchmark supports multiple AI agents:
+* **Claude Code**: Anthropic's code assistant
+* **Mini SWE Agent**: A compact version of the [SWE-agent](https://github.com/SWE-agent) assistant
+* **OpenHands**: Open-source coding agent
-Feel free to adjust the details to better fit your specific scenario and requirements. Let me know if there's anything else you need!
+To add your own agent to the benchmark, see [add_agents.md](add_agents.md).
diff --git a/benchmarks/arteval_bench/add_agents.md b/benchmarks/arteval_bench/add_agents.md
new file mode 100644
index 00000000..3f5f6bd2
--- /dev/null
+++ b/benchmarks/arteval_bench/add_agents.md
@@ -0,0 +1,151 @@
+# Adding a New Agent
+
+To integrate a new agent into the benchmark, follow these steps:
+
+## 1. Create Agent Directory
+
+Create a new directory under `src/agents/` with your agent name:
+
+```sh
+mkdir src/agents/your_agent_name
+cd src/agents/your_agent_name
+```
+
+## 2. Create Required Files
+
+Each agent requires two files:
+
+### `install.sh` (optional but recommended)
+
+Installation script for your agent's dependencies:
+
+```bash
+#!/bin/bash
+set -e # Exit immediately on error.
+
+# Install your agent's dependencies
+# Example: pip install your-agent-package
+# Example: npm install -g your-agent-cli
+```
+
+### `runner.sh` (required)
+
+Execution script that accepts model and task parameters:
+
+```bash
+#!/bin/bash
+set -e # Exit immediately on error.
+
+# Validate parameters
+if [ $# -ne 2 ]; then
+  echo "Usage: $0 <model_location> <task_description>"
+  echo "Example: $0 azure/gpt-4 \"implement MapReduce\""
+ exit 1
+fi
+
+# Set API keys (read from env.toml or environment variables)
+export YOUR_API_KEY="your_key_here"
+
+# Run your agent with the provided model and task
+# $1 = model_location
+# $2 = task_description
+your-agent-command -m "$1" -t "$2" -o agent_trajectory.json
+```
+
+## 3. Agent Integration Points
+
+Your agent runner will be executed in a Docker container with:
+
+- **Working directory**: `/repo` (contains the project to work on)
+- **Agent directory**: `/agent` (contains your install.sh and runner.sh)
+- **Parameters**:
+ - `$1`: Model name/location (e.g., `anthropic/claude-sonnet-4-5-20250929`)
+ - `$2`: Task description (multi-line text describing what to implement)
+
+## 4. Examples
+
+### Claude Code Agent
+```bash
+# install.sh
+apt-get update -y
+apt-get install -y nodejs npm
+npm install -g @anthropic-ai/claude-code
+
+# runner.sh
+export ANTHROPIC_API_KEY="sk-ant-..."
+claude -p "$2" --model "$1" --output-format json
+```
+
+### OpenHands Agent
+```bash
+# install.sh
+curl -sSL https://install.python-poetry.org | python3 -
+export PATH="$HOME/.local/bin:$PATH"
+git clone https://github.com/All-Hands-AI/OpenHands.git
+cd OpenHands/
+poetry install
+
+# runner.sh
+cd OpenHands/
+poetry run python -m openhands.core.main \
+ --config-file /agent/config.toml \
+ --agent-cls CodeActAgent \
+ --selected-repo /repo \
+ -t "$2"
+```
+
+## 5. Testing Your Agent
+
+1. Add your agent path to the evaluation script
+2. Run the benchmark:
+ ```sh
+ python src/main_setup.py --agent ./src/agents/your_agent_name
+ ```
+
+## 6. Best Practices
+
+- Make scripts executable: `chmod +x install.sh runner.sh`
+- Handle errors gracefully with `set -e`
+- Use environment variables for API keys
+- Output agent trajectory/logs for debugging
+- Test with simple tasks first before running full benchmark
+- Ensure your agent can work within the `/repo` directory context
+
+## 7. Agent Execution Flow
+
+The benchmark framework executes your agent as follows:
+
+1. **Setup Phase**:
+ - Docker container starts with base image `xuafeng/swe-go-python:latest`
+ - Project files uploaded to `/repo`
+ - Agent files uploaded to `/agent`
+ - `/agent/install.sh` executed (if exists)
+
+2. **Execution Phase**:
+   - Runner script executed: `/agent/runner.sh "<model_name>" "<task_description>"`
+ - Agent works in `/repo` directory
+ - Agent should modify files to complete the task
+
+3. **Evaluation Phase**:
+ - Test method from task specification executed (e.g., `cd src/main && bash test-mr.sh`)
+ - Results captured and saved to `outputs/`
+
+## 8. Troubleshooting
+
+### Common Issues
+
+**Agent can't find dependencies**:
+- Ensure `install.sh` installs all required packages
+- Check Docker image has necessary base dependencies
+
+**Permission denied errors**:
+- Make scripts executable: `chmod +x install.sh runner.sh`
+- Check file permissions in Docker container
+
+**API key not found**:
+- Set environment variables in `runner.sh`
+- Alternatively, configure in `env.toml` and read from there
+
+**Agent output not captured**:
+- Ensure agent outputs to `agent_trajectory.json` or logs properly
+- Check exit codes and error handling
diff --git a/benchmarks/arteval_bench/data/benchmark/arteval_tasks.jsonl b/benchmarks/arteval_bench/data/benchmark/arteval_tasks.jsonl
new file mode 100644
index 00000000..d274d5bb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/arteval_tasks.jsonl
@@ -0,0 +1 @@
+{"task_id": "sosp24_wasabi", "task_file": "data/benchmark/sosp24_wasabi/wasabi/README.md", "repo_name": "sosp24_wasabi", "test_method": "data/benchmark/sosp24_wasabi/wasabi/_agent_eval/main.py", "expected_score": 4, "test_results": "", "difficulty": "easy", "repo_url": "https://github.com/bastoica/wasabi/tree/sosp24-ae"}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/artifact_tasks.jsonl b/benchmarks/arteval_bench/data/benchmark/artifact_tasks.jsonl
deleted file mode 100644
index c4b375f5..00000000
--- a/benchmarks/arteval_bench/data/benchmark/artifact_tasks.jsonl
+++ /dev/null
@@ -1,3 +0,0 @@
-{"id": "platform_1", "sys_prompt": "You are an expert on Kusto. Help me write kusto query based on the user requirement. DO NOT explain anything, DO give KQL directly. You should strictly use the following format:\n##### KQL\n```\ncluster(#CLUSTER_NAME#).database(#DATABASE_NAME#).#TABLE_NAME# | where #COLUMN_NAME# xxx\n```", "user_prompt": "Query the nodes are stuck in Linux MOS under cluster IAD02PrdGPC01", "thinking": "chain of thought", "response": "let targetCluster = \"IAD02PrdGPC01\";\ncluster(\"azuredcm\").database(\"AzureDCMDb\").dcmInventoryMachines \n| where AzureClusterName == targetCluster\n| join kind=inner \n( \n cluster('Oneboot').database('OneDhcp').OneDhcpApi\n | where PreciseTimeStamp > ago(7d)\n | where ApiName has 'SetDeviceOsType' \n | parse ApiParams with \"name=\" MachineName \",osType=\" OSType \",osPath=\" OSPath\n) on MachineName\n| project PreciseTimeStamp, AzureClusterName, AzureNodeId, MachineName, OSType, OSPath\n| summarize arg_max(PreciseTimeStamp, *) by AzureNodeId, MachineName\n| where OSPath == \"LinuxMOS\" ", "metadata": {"scenario": "Example", "subtask": "Example", "data_quality":"high", "XXX": "XXX"}}
-{"id": "platform_2", "sys_prompt": "You are an expert on Kusto. Help me write kusto query based on the user requirement. DO NOT explain anything, DO give KQL directly. You should strictly use the following format:\n##### KQL\n```\ncluster(#CLUSTER_NAME#).database(#DATABASE_NAME#).#TABLE_NAME# | where #COLUMN_NAME# xxx\n```", "user_prompt": "Check the capacity of cluster SIN22PrdApp22, which is a non-GPU cluster", "thinking": "chain of thought", "response": "cluster(\"azcore.centralus\").database(\"AzureCP\").MycroftNodeHealthSnapshot\n| where PreciseTimeStamp >= ago(1d)\n| where ClusterName == \"SIN22PrdApp22\"\n| summarize arg_max(PreciseTimeStamp, *) by ClusterName, NodeId\n| project PreciseTimeStamp, ClusterName, NodeId, NsdState, AvailabilityState, ContainerCount\n| summarize NodeCount = count() by NsdState, AvailabilityState, ContainerCount", "metadata": {"scenario": "Example", "subtask": "Example", "data_quality":"high", "XXX": "XXX"}}
-{"id": "platform_3", "sys_prompt": "You are an expert on Kusto. Help me write kusto query based on the user requirement. DO NOT explain anything, DO give KQL directly. You should strictly use the following format:\n##### KQL\n```\ncluster(#CLUSTER_NAME#).database(#DATABASE_NAME#).#TABLE_NAME# | where #COLUMN_NAME# xxx\n```", "user_prompt": "Check the capacity of cluster IAD02PrdGPC01, which is a GPU cluster", "thinking": "chain of thought", "response": "let cluster = \"IAD02PrdGPC01\";\ncluster(\"azcore.centralus\").database(\"AzureCP\").MycroftNodeSnapshot\n| where PreciseTimeStamp >= ago(1d)\n| where ClusterName == cluster\n| summarize arg_max(PreciseTimeStamp, *) by ClusterName, NodeId\n| where MachinePoolName contains \"MP2\"\n| project PreciseTimeStamp, ClusterName, NodeId, MachinePoolName\n| join (\ncluster(\"azcore.centralus\").database(\"AzureCP\").MycroftNodeHealthSnapshot\n| where PreciseTimeStamp >= ago(1d)\n| where ClusterName == cluster\n| summarize arg_max(PreciseTimeStamp, *) by ClusterName, NodeId\n| project PreciseTimeStamp, ClusterName, NodeId, NsdState, AvailabilityState, ContainerCount\n) on NodeId\n| summarize NodeCount = count() by NsdState, AvailabilityState, ContainerCount", "metadata": {"scenario": "Example", "subtask": "Example", "data_quality":"high", "XXX": "XXX"}}
diff --git a/benchmarks/arteval_bench/data/benchmark/env_setup_examples.jsonl b/benchmarks/arteval_bench/data/benchmark/env_setup_examples.jsonl
new file mode 100644
index 00000000..7a228a81
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/env_setup_examples.jsonl
@@ -0,0 +1,3 @@
+{"task_id": "example_1", "task_name": "problems/test-repo-problems/1.md", "task": "set up the java environment", "repo_name": "projects/test-repo", "repo_url": "https://github.com/SWE-agent/test-repo.git", "test_method": "java -version", "test_results": "", "difficulty": "easy", "docker_env": "xuafeng/swe-go-python:latest"}
+{"task_id": "example_2", "task_name": "problems/test-repo-problems/2.md", "task": "set up the rust environment", "repo_name": "projects/test-repo", "repo_url": "https://github.com/SWE-agent/test-repo.git", "test_method": "rustc --version", "test_results": "", "difficulty": "easy", "docker_env": "xuafeng/swe-go-python:latest"}
+{"task_id": "example_3", "task_name": "problems/test-repo-problems/3.md", "task": "set up the nodejs environment", "repo_name": "projects/test-repo", "repo_url": "https://github.com/SWE-agent/test-repo.git", "test_method": "node -v", "test_results": "", "difficulty": "easy", "docker_env": "xuafeng/swe-go-python:latest"}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/LICENSE b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/LICENSE
new file mode 100644
index 00000000..05cd7325
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/LICENSE
@@ -0,0 +1,202 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2023 Systems Group at The University of Chicago -- Bogdan Alexandru Stoica , Utsav Sethi , Yiming Su , Cyrus Zhou
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/README.md b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/README.md
new file mode 100644
index 00000000..633b5c7a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/README.md
@@ -0,0 +1,544 @@
+## 1. Overview
+
+The testing component of WASABI triggers retry bugs by using a combination of static analysis, large language models (LLMs), fault injection, and testing.
+
+## 2. Getting Started
+
+To get started, users should create the appropriate directory structure, clone this repository, check out the `main` branch, and configure and install the dependencies by following these steps:
+
+1. If not already in place, create the appropriate directory structure:
+
+Note that the working directory where this `README.md` is located is `~/sosp24_wasabi/wasabi`.
+```bash
+mkdir -p ~/sosp24_wasabi/benchmarks
+cd ~/sosp24_wasabi/
+ls -la .
+```
+
+The working directory structure should look similar to the one below:
+```plaintext
+~/sosp24_wasabi
+ ├── benchmarks/
+ └── wasabi/
+ ├── config/
+ ├── README.md
+ ├── pom-java11.xml
+ ├── pom-java8.xml
+ ├── pom.xml
+ ├── src/
+ └── utils/
+```
+The `wasabi` directory contains the codebase of WASABI, while the `benchmarks` directory is where users can add the applications in which they want WASABI to find retry bugs.
+
+2. Set up the `WASABI_ROOT_DIR` environment variable:
+```bash
+export WASABI_ROOT_DIR=$(echo $HOME)/sosp24_wasabi/wasabi
+```
+3. Install the necessary dependencies:
+```bash
+cd ~/sosp24_wasabi/wasabi/wasabi-testing/utils
+sudo ./prereqs.sh
+```
+
+> [!NOTE]
+> WASABI requires the following dependencies:
+> * Ubuntu >=22.04 LTS
+> * Python >=3.10
+> * Java 8 and 11
+> * Maven >=3.6
+> * Gradle >=4.4.1
+> * Ant >=1.10
+> * AspectJ runtime library** (`aspectjrt`) 1.9.8.M1 for Java 8 and 1.9.19 for Java 11, respectively
+> * AspectJ Maven plugin** (`aspectj-maven-plugin`) 1.13 for Java 8 and 1.13.1 for Java 11, respectively
+>
+>**both added to WASABI's `pom.xml` as plugin dependencies
+>
+> WASABI was developed, built, and tested on a bare metal machine with an Intel i7-8700 CPU, 32 GB of RAM, and 512 GB of disk space, running Ubuntu 22.04 LTS.
+> While we implemented WASABI to be agnostic to environment settings (i.e., OS distribution, versions of packages and dependencies), using WASABI in a different environment may require additional configuration. Please see "[Known issues](README.md#7-known-issues)".
+
+## 3. Building and installing WASABI
+
+To build and install WASABI, first switch to the appropriate Java distribution. In this tutorial we work with Java 8 as it is the latest distribution required for HDFS.
+```bash
+sudo update-alternatives --config java
+...(select java 8)
+export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
+```
+
+Next, to build WASABI, run Maven's `clean`, `compile`, and `install` targets from the `wasabi-testing` directory. Note that the current codebase includes AspectJ instrumentation for each of the applications used to evaluate WASABI (see Section 4 of our [paper](https://bastoica.github.io/files/papers/2024_sosp_wasabi.pdf)). In this walkthrough we build WASABI to find bugs in HDFS (Hadoop), using HDFS-17590 as a triggering example ([below](README.md#6-running-example-reproducing-hdfs-17590)).
+```bash
+cd ~/sosp24_wasabi/wasabi/wasabi-testing
+mvn clean install -U -fn -B -Dinstrumentation.target=hadoop -DskipTests 2>&1 | tee wasabi-install.log
+```
+
+If successful, users should see a message similar to
+```bash
+...
+[INFO] ------------------------------------------------------------------------
+[INFO] BUILD SUCCESS
+[INFO] ------------------------------------------------------------------------
+[INFO] Total time: 36.384 s
+[INFO] Finished at: 2024-08-12T19:57:24Z
+[INFO] ------------------------------------------------------------------------
+```
+If users need to use Java 11, they can either modify the `pom.xml` accordingly or use one of the pre-configured `pom` files we provide for [Java 8](pom-java8.xml) and [Java 11](pom-java11.xml).
+
+> [!NOTE]
+> When building WASABI multiple times, especially under a different Java distribution, it is recommended to first remove Maven's cache directory prior to compiling WASABI.
+```bash
+rm -rf ~/.m2/repository
+```
+
+## 4. Weaving (instrumenting) a target application
+
+WASABI can be woven into (i.e., used to instrument) a target application at either compile time or load time.
+
+### 4.1 Compile-time weaving (Maven)
+
+To enable compile-time weaving for a target application, users need to modify the original `pom.xml` of the target to include WASABI as a dependency and invoke the `aspectj` plugin:
+
+```xml
+<!-- (1) Add the AspectJ runtime and WASABI as dependencies -->
+<dependencies>
+
+  <dependency>
+    <groupId>org.aspectj</groupId>
+    <artifactId>aspectjrt</artifactId>
+    <version>${aspectj.version}</version>
+  </dependency>
+
+  <dependency>
+    <groupId>edu.uchicago.cs.systems</groupId>
+    <artifactId>wasabi</artifactId>
+    <version>${wasabi.version}</version>
+  </dependency>
+
+</dependencies>
+
+<!-- (2) Pin the AspectJ and WASABI versions (Java 11 values shown) -->
+<properties>
+  <aspectj.version>1.9.19</aspectj.version>
+  <aspectj-maven.version>1.13.1</aspectj-maven.version>
+  <wasabi.version>1.0.0</wasabi.version>
+</properties>
+
+<!-- (3) Invoke the AspectJ Maven plugin with WASABI on the aspect path -->
+<build>
+  <plugins>
+    <plugin>
+      <groupId>dev.aspectj</groupId>
+      <artifactId>aspectj-maven-plugin</artifactId>
+      <version>${aspectj-maven.version}</version>
+      <configuration>
+        <aspectLibraries>
+          <aspectLibrary>
+            <groupId>edu.uchicago.cs.systems</groupId>
+            <artifactId>wasabi</artifactId>
+          </aspectLibrary>
+        </aspectLibraries>
+        <showWeaveInfo>true</showWeaveInfo>
+        <verbose>true</verbose>
+      </configuration>
+      <executions>
+        <execution>
+          <goals>
+            <goal>compile</goal>
+            <goal>test-compile</goal>
+          </goals>
+        </execution>
+      </executions>
+    </plugin>
+  </plugins>
+</build>
+```
+
+Next, build the target application with WASABI woven in:
+```bash
+cd /path/to/target_application
+mvn clean compile -T 8 -fn -DskipTests && mvn install -fn -DskipTests -B 2>&1 | tee wasabi-build.log
+```
+
+Successful weaving should produce log messages like this one:
+```bash
+[INFO] Join point 'method-execution(...)' in Type 'org.apache.hadoop.metrics2.util.SampleStat' ...
+```
+
+Users should also check out [examples](https://github.com/bastoica/wasabi/tree/sosp24_wasabi/wasabi-testing) of target applications instrumented with WASABI from our `sosp24-ae` branch. These not only include detailed weaving steps, but also the modified `pom.xml` files.
+
+### 4.2 Load-time weaving (Gradle, Ant, others)
+
+Some applications use build systems other than Maven, like Gradle or Ant. In these cases, WASABI can be woven at load-time.
+
+#### Load-time weaving with Gradle
+
+First, add the AspectJ plugin and dependencies to your build.gradle file:
+```groovy
+plugins {
+ id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0'
+ id 'java'
+}
+
+dependencies {
+ implementation 'org.aspectj:aspectjrt:1.9.19'
+ aspect 'edu.uchicago.cs.systems:wasabi:1.0.0'
+}
+```
+
+Next, configure AspectJ for load-time weaving:
+```groovy
+compileJava {
+ options.compilerArgs += ['-Xlint:none']
+ doLast {
+ javaexec {
+ main = '-jar'
+ args = [configurations.aspectj.getSingleFile(), '-inpath', sourceSets.main.output.classesDirs.asPath, '-aspectpath', configurations.aspect.asPath]
+ }
+ }
+}
+```
+
+Finally, compile and build the project:
+```bash
+gradle clean build -i 2>&1 | tee wasabi-build.log
+```
+
+#### Load-time weaving with Ant
+
+First, make sure AspectJ libraries (`aspectjrt.jar`, `aspectjtools.jar`) are available in your project.
+
+Next, modify `build.xml` by adding the AspectJ tasks and specifying WASABI on the aspect path:
+
+```xml
+<!-- Illustrative reconstruction; adjust paths and versions to your project layout. -->
+
+<!-- (1) Make the AspectJ Ant tasks (e.g., iajc) available -->
+<taskdef resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties">
+  <classpath>
+    <pathelement location="lib/aspectjtools.jar"/>
+  </classpath>
+</taskdef>
+
+<!-- (2) Compile with iajc, weaving WASABI via the aspect path -->
+<target name="compile">
+  <iajc srcdir="src" destdir="build/classes" source="1.8" target="1.8">
+    <aspectpath>
+      <pathelement location="lib/wasabi-1.0.0.jar"/>
+    </aspectpath>
+    <classpath>
+      <pathelement location="lib/aspectjrt.jar"/>
+    </classpath>
+  </iajc>
+</target>
+```
+
+Finally, compile and build the project:
+```bash
+ant compile 2>&1 | tee wasabi-build.log
+```
+
+## 5. Configuring fault injection policies and metadata
+
+To specify fault injection policies and the precise injection locations, users need to create two types of files—a location data file (`.data`) and a policy configuration file (`.conf`).
+
+A `.data` file describes the retry locations and their respective exceptions to be injected by Wasabi. It has the following format:
+```plaintext
+Retry location!!!Enclosing method!!!Retried method!!!Injection site!!!Exception
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.writeConnectionContext!!!Client.java:831!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java#L609!!!org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.getActiveNodeProxy!!!org.apache.hadoop.ipc.RPC.getProtocolVersion!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L419!!!org.apache.hadoop.ipc.RPC.waitForProtocolProxy!!!org.apache.hadoop.ipc.RPC.getProtocolProxy!!!RPC.java:421!!!java.net.ConnectException
+...
+```
+where
+* `Retry location` indicates the program location of a retry (e.g. https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790)
+* `Enclosing method` indicates the method from where the retry location is called (e.g. `org.apache.hadoop.ipc.Client$Connection.setupIOstreams`)
+* `Retried method` indicates the method inside the retry logic that ought to be retried (e.g. `org.apache.hadoop.ipc.Client$IpcStreams.setSaslClient`)
+* `Injection site` indicates the source location (source file and line of code) where a retried method is called. This also represents the program location where WASABI injects exceptions.
+* `Exception` indicates the exception that WASABI should throw at that location (e.g. `java.net.SocketException`)
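As a quick sanity check, the `!!!`-separated `.data` format above can be parsed with a few lines of script. The sketch below is illustrative only; the field names mirror the description above and the helper is not part of WASABI's own tooling:

```python
# Illustrative parser for WASABI ".data" lines; fields are separated by "!!!".
# Field names mirror the description above -- this helper is not part of WASABI itself.
FIELDS = ["retry_location", "enclosing_method", "retried_method", "injection_site", "exception"]

def parse_data_line(line):
    parts = line.rstrip("\n").split("!!!")
    if len(parts) != len(FIELDS):
        raise ValueError("expected %d '!!!'-separated fields, got %d" % (len(FIELDS), len(parts)))
    return dict(zip(FIELDS, parts))

# Example based on the first entry shown above (URL shortened for readability)
example = ("https://github.com/apache/hadoop/.../Client.java#L790"
           "!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams"
           "!!!org.apache.hadoop.ipc.Client$Connection.writeConnectionContext"
           "!!!Client.java:831"
           "!!!java.net.SocketException")
record = parse_data_line(example)
print(record["exception"])  # java.net.SocketException
```

A malformed line (e.g., a missing field) raises an error instead of silently shifting columns, which makes typos in hand-written `.data` files easy to catch.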
+
+
+A `.conf` file instructs WASABI to use a specific injection policy and load injection locations from a particular `.data` file and has the following structure:
+
+```plaintext
+retry_data_file: /absolute/path/to/data/file/example_retry_locations.data
+injection_policy: max-count
+max_injection_count: 10
+```
+where
+* `retry_data_file`: absolute path to a `.data` file specifying injection sites.
+* `injection_policy`: one of `no-injection`, `forever`, or `max-count`.
+* `max_injection_count`: positive integer specifying the upper limit on the number of injections (used with the `max-count` policy).
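The constraints above (a valid policy name, and a positive `max_injection_count` when `max-count` is used) can be checked mechanically before a run. The following is a minimal illustrative validator, not part of WASABI's own tooling:

```python
# Illustrative validator for WASABI ".conf" files ("key: value" per line).
# Key names and constraints mirror the description above; not part of WASABI itself.
VALID_POLICIES = {"no-injection", "forever", "max-count"}

def parse_conf(text):
    conf = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        key, _, value = line.partition(":")
        conf[key.strip()] = value.strip()
    if conf.get("injection_policy") not in VALID_POLICIES:
        raise ValueError("injection_policy must be one of %s" % sorted(VALID_POLICIES))
    if conf["injection_policy"] == "max-count":
        count = int(conf.get("max_injection_count", "0"))
        if count <= 0:
            raise ValueError("max_injection_count must be a positive integer")
        conf["max_injection_count"] = count
    return conf

conf = parse_conf(
    "retry_data_file: /tmp/example_retry_locations.data\n"
    "injection_policy: max-count\n"
    "max_injection_count: 10\n"
)
print(conf["max_injection_count"])  # 10
```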
+
+Users can check out examples of `.data` and `.conf` files in the `./config` directory, or on the `sosp24-ae` [branch](https://github.com/bastoica/wasabi/tree/sosp24_wasabi/wasabi-testing/config).
+
+
+## Finding retry bugs
+
+Once WASABI is successfully built, woven into a target application, and configured, users can instruct it to find potential retry bugs.
+
+To do so, users have two options:
+
+1. Option #1 (recommended): run individual tests and instruct WASABI to inject faults at only one location during the test run. The reason is that, by design, WASABI tries to force the test to either crash or hang. If this happens at the first injection location, subsequent injection locations will not get a chance to execute because the test terminates (or hangs) early.
+```bash
+cd [target_application_path]
+mvn clean install -U -fn -B -DskipTests 2>&1 | tee wasabi-build.log
+mvn surefire:test -fn -B -DconfigFile="$(echo $HOME)/wasabi/wasabi-testing/config/example_hdfs.conf" -Dtest=[TEST_NAME] 2>&1 | tee wasabi-test.log
+```
+
+2. Option #2: run the entire test suite and inject faults at multiple locations in the same testing run. Users can opt for this if they are confident that injecting at an earlier location does not affect the execution of a later location. In this case, users can create a multi-location `.data` file (e.g., like [this one](https://github.com/bastoica/wasabi/blob/sosp24_wasabi/wasabi-testing/config/hadoop/hadoop_retry_locations.data) for Hadoop).
+
+```bash
+cd [target_application_path]
+mvn clean install -U -fn -B -DskipTests 2>&1 | tee wasabi-build.log
+mvn test -fn -B -DconfigFile="$(echo $HOME)/wasabi/wasabi-testing/config/example_hdfs.conf" 2>&1 | tee wasabi-test.log
+```
+
+## 6. Running example: reproducing HDFS-17590
+
+To illustrate how WASABI works, we walk users through an example that reproduces [HDFS-17590](https://issues.apache.org/jira/browse/HDFS-17590), a previously unknown retry bug uncovered by WASABI.
+
+> [!NOTE]
+> Users might observe a "build failure" message when building and testing Hadoop. This is expected, as a few testing-related components of Hadoop need more configuration to build properly with the AJC (AspectJ) compiler. WASABI does not need those components to find retry bugs. See the "[Known issues](README.md#7-known-issues)" section below for more details.
+
+
+1. Ensure the prerequisites are successfully installed (see "Getting Started" above)
+
+2. Build and install WASABI (see "Building and installing WASABI" above)
+
+3. Clone Hadoop (note: HDFS is part of Hadoop),
+```bash
+cd ~/sosp24_wasabi/benchmarks
+git clone https://github.com/apache/hadoop
+```
+and check out version/commit `60867de`:
+```bash
+cd ~/sosp24_wasabi/benchmarks/hadoop
+git checkout 60867de
+```
+Users can check whether `60867de` was successfully checked out by running
+```bash
+git log
+```
+and checking the output
+```
+commit 60867de422949be416948bd106419c771c7d13fd (HEAD)
+Author: zhangshuyan <81411509+zhangshuyan0@users.noreply.github.com>
+Date: Mon Aug 21 10:05:34 2023 +0800
+
+ HDFS-17151. EC: Fix wrong metadata in BlockInfoStriped after recovery. (#5938). Contributed by Shuyan Zhang.
+
+ Signed-off-by: He Xiaoqiao
+
+```
+
+4. Build and install Hadoop using the following command. This is necessary to download and install any missing dependencies that might otherwise break Hadoop's test suite during fault injection.
+```bash
+mvn install -U -fn -B -DskipTests 2>&1 | tee wasabi-pass-install.log
+```
+
+5. Run the test that WASABI uses to trigger HDFS-17590 to confirm that the bug does not get triggered without fault injection
+```bash
+mvn surefire:test -fn -B -Dtest=TestFSEditLogLoader 2>&1 | tee wasabi-pass-test.log
+```
+and verify that the test runs successfully. First, check that there is no `NullPointerException`
+```bash
+grep -A10 -B2 "NullPointerException" wasabi-pass-test.log
+```
+which should yield no output. Next, check that all such tests passed
+```bash
+grep "Tests run.*TestFSEditLogLoader" wasabi-pass-test.log
+```
+which should yield a line similar to this (note that the number of tests might differ slightly)
+```bash
+[INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.223 s - in org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
+```
+
+6. Copy a modified `pom.xml` file that allows WASABI to instrument (weave) Hadoop by running
+```bash
+cp pom.xml pom-original.xml
+cp ~/sosp24_wasabi/wasabi/wasabi-testing/config/hadoop/pom-hadoop.xml pom.xml
+```
+Note that these commands make a copy of the original `pom.xml` and replace it with a slightly edited version that instructs the AJC compiler to weave in WASABI. Also, these alterations are specific to version `60867de`; checking out another Hadoop commit ID requires adjustments. We provide instructions on how to adapt an original `pom.xml` [here](README.md#instrumentation-weaving-instructions).
+
+7. Instrument Hadoop with WASABI by running
+```bash
+mvn clean install -U -fn -B -DskipTests 2>&1 | tee wasabi-fail-install.log
+```
+
+8. Run the bug-triggering tests with fault injection
+```bash
+mvn surefire:test -fn -B -DconfigFile="$(echo $HOME)/sosp24_wasabi/wasabi/wasabi-testing/config/hadoop/example.conf" -Dtest=TestFSEditLogLoader 2>&1 | tee wasabi-fail-test.log
+```
+and check the log for `NullPointerException` errors
+```bash
+grep -A10 -B2 "NullPointerException" wasabi-fail-test.log
+```
+which should yield
+```bash
+[ERROR] Tests run: 26, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 137.645 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
+[ERROR] testErasureCodingPolicyOperations[0](org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader) Time elapsed: 22.691 s <<< ERROR!
+java.lang.NullPointerException
+ at java.base/java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011)
+ at java.base/java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
+ at org.apache.hadoop.hdfs.DFSInputStream.addToLocalDeadNodes(DFSInputStream.java:184)
+ at org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:279)
+ at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:304)
+ at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:335)
+ at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:320)
+ at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:415)
+ at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:919)
+ at java.base/java.io.DataInputStream.read(DataInputStream.java:102)
+```
+
+## 7. Known issues
+
+### 7.1 AspectJ Maven plugin circular dependency and versioning issues
+
+WASABI imports plugins that might also be imported by the target application. Users need to manually resolve potential circular dependencies or plugin version incompatibilities. Users could also reference [this](https://github.com/dev-aspectj/aspectj-maven-plugin/issues/143) issue in the `aspectj-maven-plugin` repository for suggestions on how to tackle such issues.
+
+### 7.2 Build failures after weaving
+
+The AspectJ compiler and supporting plugins might not be able to weave (instrument) all modules of a target successfully. While users are encouraged to address this, we recommend disregarding modules that are not critical to the core functionality of the application (e.g., benchmarking modules) or that do not implement or test retry-related code.
+
+For example, when reproducing HDFS-17590, users might observe a "build failure" message at the end of the build and testing processes. This is expected, as a few benchmark-related components of Hadoop require extra configuration for the AJC to compile them successfully. However, WASABI does not need these components to build correctly in order to find retry bugs. For reference, below is the build log we obtained when building Hadoop; note that the core components of Hadoop (common and client), HDFS, Yarn, and MapReduce all built successfully.
+
+
+Hadoop `60867de` build log (expand for details):
+
+```bash
+[INFO] ------------------------------------------------------------------------
+[INFO] Reactor Summary for Apache Hadoop Main 3.4.0-SNAPSHOT:
+[INFO]
+[INFO] Apache Hadoop Main ................................. SUCCESS [ 4.399 s]
+[INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 2.222 s]
+[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 1.716 s]
+[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 3.483 s]
+[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 0.098 s]
+[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.094 s]
+[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 8.806 s]
+[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 16.738 s]
+[INFO] Apache Hadoop Auth ................................. SUCCESS [01:15 min]
+[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 1.117 s]
+[INFO] Apache Hadoop Common ............................... SUCCESS [01:34 min]
+[INFO] Apache Hadoop NFS .................................. SUCCESS [ 15.503 s]
+[INFO] Apache Hadoop KMS .................................. SUCCESS [ 3.521 s]
+[INFO] Apache Hadoop Registry ............................. SUCCESS [ 3.468 s]
+[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.060 s]
+[INFO] Apache Hadoop HDFS Client .......................... SUCCESS [ 52.968 s]
+[INFO] Apache Hadoop HDFS ................................. SUCCESS [ 57.425 s]
+[INFO] Apache Hadoop HDFS Native Client ................... SUCCESS [ 0.451 s]
+[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 4.092 s]
+[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 1.579 s]
+[INFO] Apache Hadoop YARN ................................. SUCCESS [ 0.052 s]
+[INFO] Apache Hadoop YARN API ............................. SUCCESS [ 15.454 s]
+[INFO] Apache Hadoop YARN Common .......................... SUCCESS [ 27.587 s]
+[INFO] Apache Hadoop YARN Server .......................... SUCCESS [ 0.045 s]
+[INFO] Apache Hadoop YARN Server Common ................... SUCCESS [ 16.038 s]
+[INFO] Apache Hadoop YARN ApplicationHistoryService ....... SUCCESS [ 5.012 s]
+[INFO] Apache Hadoop YARN Timeline Service ................ SUCCESS [ 3.239 s]
+[INFO] Apache Hadoop YARN Web Proxy ....................... SUCCESS [ 2.122 s]
+[INFO] Apache Hadoop YARN ResourceManager ................. SUCCESS [ 29.966 s]
+[INFO] Apache Hadoop YARN NodeManager ..................... SUCCESS [ 25.820 s]
+[INFO] Apache Hadoop YARN Server Tests .................... SUCCESS [ 1.488 s]
+[INFO] Apache Hadoop YARN Client .......................... SUCCESS [ 4.974 s]
+[INFO] Apache Hadoop MapReduce Client ..................... SUCCESS [ 0.593 s]
+[INFO] Apache Hadoop MapReduce Core ....................... SUCCESS [ 11.157 s]
+[INFO] Apache Hadoop MapReduce Common ..................... SUCCESS [ 3.654 s]
+[INFO] Apache Hadoop MapReduce Shuffle .................... SUCCESS [ 3.475 s]
+[INFO] Apache Hadoop MapReduce App ........................ SUCCESS [ 5.335 s]
+[INFO] Apache Hadoop MapReduce HistoryServer .............. SUCCESS [ 3.995 s]
+[INFO] Apache Hadoop MapReduce JobClient .................. SUCCESS [ 6.776 s]
+[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 2.958 s]
+[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.903 s]
+[INFO] Apache Hadoop Federation Balance ................... SUCCESS [ 1.683 s]
+[INFO] Apache Hadoop HDFS-RBF ............................. SUCCESS [ 10.150 s]
+[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.042 s]
+[INFO] Apache Hadoop YARN SharedCacheManager .............. SUCCESS [ 1.171 s]
+[INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SUCCESS [ 1.375 s]
+[INFO] Apache Hadoop YARN TimelineService HBase Backend ... SUCCESS [ 0.044 s]
+[INFO] Apache Hadoop YARN TimelineService HBase Common .... SUCCESS [ 9.957 s]
+[INFO] Apache Hadoop YARN TimelineService HBase Client .... SUCCESS [ 21.167 s]
+[INFO] Apache Hadoop YARN TimelineService HBase Servers ... SUCCESS [ 0.044 s]
+[INFO] Apache Hadoop YARN TimelineService HBase Server 1.7 SUCCESS [ 2.516 s]
+[INFO] Apache Hadoop YARN TimelineService HBase tests ..... SUCCESS [ 20.933 s]
+[INFO] Apache Hadoop YARN Router .......................... SUCCESS [ 4.274 s]
+[INFO] Apache Hadoop YARN TimelineService DocumentStore ... SUCCESS [ 16.551 s]
+[INFO] Apache Hadoop YARN GlobalPolicyGenerator ........... SUCCESS [ 2.509 s]
+[INFO] Apache Hadoop YARN Applications .................... SUCCESS [ 0.042 s]
+[INFO] Apache Hadoop YARN DistributedShell ................ SUCCESS [ 1.558 s]
+[INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SUCCESS [ 0.833 s]
+[INFO] Apache Hadoop YARN Services ........................ SUCCESS [ 0.038 s]
+[INFO] Apache Hadoop YARN Services Core ................... SUCCESS [ 5.323 s]
+[INFO] Apache Hadoop YARN Services API .................... SUCCESS [ 1.736 s]
+[INFO] Apache Hadoop YARN Application Catalog ............. SUCCESS [ 0.040 s]
+[INFO] Apache Hadoop YARN Application Catalog Webapp ...... SUCCESS [01:30 min]
+[INFO] Apache Hadoop YARN Application Catalog Docker Image SUCCESS [ 0.073 s]
+[INFO] Apache Hadoop YARN Application MaWo ................ SUCCESS [ 0.054 s]
+[INFO] Apache Hadoop YARN Application MaWo Core ........... SUCCESS [ 1.153 s]
+[INFO] Apache Hadoop YARN Site ............................ SUCCESS [ 0.054 s]
+[INFO] Apache Hadoop YARN Registry ........................ SUCCESS [ 0.563 s]
+[INFO] Apache Hadoop YARN UI .............................. SUCCESS [ 0.357 s]
+[INFO] Apache Hadoop YARN CSI ............................. SUCCESS [ 21.231 s]
+[INFO] Apache Hadoop YARN Project ......................... SUCCESS [ 0.695 s]
+[INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... SUCCESS [ 0.859 s]
+[INFO] Apache Hadoop MapReduce NativeTask ................. SUCCESS [ 2.120 s]
+[INFO] Apache Hadoop MapReduce Uploader ................... SUCCESS [ 1.467 s]
+[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 2.022 s]
+[INFO] Apache Hadoop MapReduce ............................ SUCCESS [ 0.783 s]
+[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 3.502 s]
+[INFO] Apache Hadoop Client Aggregator .................... SUCCESS [ 0.872 s]
+[INFO] Apache Hadoop Dynamometer Workload Simulator ....... SUCCESS [ 1.504 s]
+[INFO] Apache Hadoop Dynamometer Cluster Simulator ........ SUCCESS [ 1.659 s]
+[INFO] Apache Hadoop Dynamometer Block Listing Generator .. SUCCESS [ 1.456 s]
+[INFO] Apache Hadoop Dynamometer Dist ..................... SUCCESS [ 1.242 s]
+[INFO] Apache Hadoop Dynamometer .......................... SUCCESS [ 0.040 s]
+[INFO] Apache Hadoop Archives ............................. SUCCESS [ 0.948 s]
+[INFO] Apache Hadoop Archive Logs ......................... SUCCESS [ 0.978 s]
+[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 2.024 s]
+[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 1.962 s]
+[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 0.963 s]
+[INFO] Apache Hadoop Extras ............................... SUCCESS [ 1.132 s]
+[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 0.039 s]
+[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [ 16.110 s]
+[INFO] Apache Hadoop Kafka Library support ................ SUCCESS [ 2.281 s]
+[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 8.403 s]
+[INFO] Apache Hadoop Aliyun OSS support ................... SUCCESS [ 6.307 s]
+[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 2.002 s]
+[INFO] Apache Hadoop Resource Estimator Service ........... SUCCESS [ 2.300 s]
+[INFO] Apache Hadoop Azure Data Lake support .............. SUCCESS [ 2.248 s]
+[INFO] Apache Hadoop Image Generation Tool ................ SUCCESS [ 1.332 s]
+[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 0.596 s]
+[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 0.049 s]
+[INFO] Apache Hadoop Common Benchmark ..................... FAILURE [ 2.949 s]
+[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.039 s]
+[INFO] Apache Hadoop Client API ........................... SUCCESS [04:31 min]
+[INFO] Apache Hadoop Client Runtime ....................... SUCCESS [04:55 min]
+[INFO] Apache Hadoop Client Packaging Invariants .......... FAILURE [ 0.197 s]
+[INFO] Apache Hadoop Client Test Minicluster .............. SUCCESS [08:43 min]
+[INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [ 0.115 s]
+[INFO] Apache Hadoop Client Packaging Integration Tests ... SUCCESS [ 1.206 s]
+[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 0.304 s]
+[INFO] Apache Hadoop Client Modules ....................... SUCCESS [ 0.042 s]
+[INFO] Apache Hadoop Tencent COS Support .................. SUCCESS [ 1.993 s]
+[INFO] Apache Hadoop OBS support .......................... FAILURE [ 6.322 s]
+[INFO] Apache Hadoop Cloud Storage ........................ SUCCESS [ 1.423 s]
+[INFO] Apache Hadoop Cloud Storage Project ................ SUCCESS [ 0.039 s]
+[INFO] ------------------------------------------------------------------------
+[INFO] BUILD FAILURE
+[INFO] ------------------------------------------------------------------------
+[INFO] Total time: 31:50 min
+[INFO] Finished at: 2024-08-14T06:26:40Z
+[INFO] ------------------------------------------------------------------------
+```
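In a log this long, the modules that actually failed can be pulled out of the reactor summary with a short filter, sketched here as a standalone script (`failed_modules` is an illustrative helper, not part of WASABI):

```python
import re

def failed_modules(log_text: str) -> list[str]:
    """Return the reactor-summary module names whose status is FAILURE."""
    # module name, then a run of dots, then the status keyword
    pattern = re.compile(r"\[INFO\]\s+(.*?)\s\.+\s+FAILURE")
    return [m.group(1) for line in log_text.splitlines()
            if (m := pattern.match(line))]

log = """\
[INFO] Apache Hadoop Common ............................... SUCCESS [01:34 min]
[INFO] Apache Hadoop Common Benchmark ..................... FAILURE [  2.949 s]
[INFO] Apache Hadoop OBS support .......................... FAILURE [  6.322 s]
"""
print(failed_modules(log))  # ['Apache Hadoop Common Benchmark', 'Apache Hadoop OBS support']
```

Cross-checking the resulting list against the modules you actually need (core Hadoop, HDFS, YARN, MapReduce) tells you quickly whether a "build failure" is benign.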
+
+
+### 7.3 Bare metal versus containerized deployments
+
+WASABI was tested on a bare-metal machine. Fundamentally, there are no limitations to running WASABI in a containerized environment. However, there are known issues related to the Hadoop and HBase benchmarks used to evaluate WASABI in our [paper](https://bastoica.github.io/files/papers/2024_sosp_wasabi.pdf).
+
+In short, some Hadoop and HBase tests require access to a non-virtualized, physical network. Without this, users might encounter errors such as
+```
+ERROR regionserver.HRegionServer: Master passed us a different hostname to use; was=grimlock, but now=169.254.3.1
+```
+These errors occur due to a hostname-to-IP mismatch in the network setup of your system, not because of an issue with WASABI. The likely cause is a misconfigured `/etc/hosts` file, multiple network interfaces on your machine, or running our tool in a containerized environment (e.g., Docker).
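A quick way to check for this mismatch is to see what the machine's hostname resolves to, since loopback or link-local addresses (such as the `169.254.3.1` above) are what trigger the HBase error. This is a diagnostic sketch, not part of WASABI; `looks_misconfigured` is a hypothetical helper:

```python
import socket

def resolve(hostname: str) -> str:
    """Resolve a hostname the way the JVM would: /etc/hosts first, then DNS."""
    try:
        return socket.getaddrinfo(hostname, None)[0][4][0]
    except socket.gaierror:
        return "unresolvable"

def looks_misconfigured(ip: str) -> bool:
    """Loopback and link-local addresses are symptoms of the mismatch."""
    return (ip == "unresolvable" or ip == "::1"
            or ip.startswith("127.") or ip.startswith("169.254."))

host = socket.gethostname()
ip = resolve(host)
print(f"{host} -> {ip}; suspect: {looks_misconfigured(ip)}")
```

If the script flags your hostname, adding a `hostname -> physical interface IP` entry to `/etc/hosts` usually resolves the HBase error.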
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/main.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/main.py
new file mode 100644
index 00000000..2f434ee5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/main.py
@@ -0,0 +1,32 @@
+#!/usr/bin/env python3
+import sys
+from typing import Dict
+
+from oracle_artifact_build import OracleArtifactBuild
+from oracle_env_setup import OracleEnvSetup
+from oracle_benchmark_prep import OracleBenchmarkPrep
+from oracle_experiment_runs import OracleExperimentRuns
+
+from utils import logger
+
+def main():
+ results: Dict[str, int] = {}
+
+ score = 0
+ for cls in (OracleEnvSetup, OracleArtifactBuild, OracleBenchmarkPrep, OracleExperimentRuns):
+ checker = cls()
+ ok = checker.run()
+ name = cls.__name__
+ logger.info(f"{name}: {'PASS' if ok else 'FAIL'}")
+ if ok:
+ results[name] = 1
+ score += 1
+ else:
+ results[name] = 0
+
+ logger.info(f"Agent scores: {results}")
+ return score
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_artifact_build.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_artifact_build.py
new file mode 100644
index 00000000..a3a3ed8a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_artifact_build.py
@@ -0,0 +1,159 @@
+#!/usr/bin/env python3
+import xml.etree.ElementTree as ET
+import fnmatch
+
+from utils import HOME
+from utils import REPO_DIR
+from utils import logger
+
+class OracleArtifactBuild:
+ def __init__(self):
+ self.maven_packages_dir = HOME / ".m2" / "repository"
+
+ def xget(self, elem, tag):
+ """
+ Helper function to handle POM tags with or without default namespace
+ """
+ if elem is None:
+ return None
+        # try a direct, namespace-free lookup first
+        v = elem.find(tag)
+        if v is not None and v.text:
+            return v.text.strip()
+        # fall back to matching the local tag name under any namespace
+ for child in elem:
+ t = child.tag.split('}', 1)[-1]
+ if t == tag:
+ return (child.text or "").strip()
+ return None
+
+ def parse_pom(self, pom_path, top_defaults=None):
+ """
+ Collects POM files into dictionary
+ """
+ try:
+ tree = ET.parse(pom_path)
+ root = tree.getroot()
+ except Exception as e:
+ return {"dir": pom_path.parent, "pom": pom_path, "error": f"XML parse error: {e}"}
+
+ artifactId = self.xget(root, "artifactId")
+ groupId = self.xget(root, "groupId")
+ version = self.xget(root, "version")
+ packaging = self.xget(root, "packaging") or "jar"
+
+        # root.find("parent") misses POMs that declare the default Maven namespace,
+        # so fall back to a namespace-agnostic scan of the direct children
+        parent = root.find("parent")
+        if parent is None:
+            for child in root:
+                if child.tag.split('}', 1)[-1] == "parent":
+                    parent = child
+                    break
+        if parent is not None:
+            p_groupId = self.xget(parent, "groupId")
+            p_version = self.xget(parent, "version")
+            if not groupId and p_groupId:
+                groupId = p_groupId
+            if not version and p_version:
+                version = p_version
+
+ if top_defaults:
+ groupId = groupId or top_defaults.get("groupId")
+ version = version or top_defaults.get("version")
+
+ return {
+ "dir": pom_path.parent,
+ "pom": pom_path,
+ "groupId": groupId,
+ "artifactId": artifactId,
+ "version": version,
+ "packaging": packaging
+ }
+
+ def find_poms(self, base):
+ return sorted(base.rglob("pom.xml"))
+
+ def repo_path(self, groupId, artifactId, version):
+ parts = groupId.split(".")
+ return self.maven_packages_dir.joinpath(*parts, artifactId, version)
+
+ def has_target_jar(self, module):
+ if module["packaging"] == "pom":
+ return True # no jar expected
+ target = module["dir"] / "target"
+ if not target.is_dir():
+ return False
+ pattern = f"{module['artifactId']}-{module['version']}*.jar"
+ return any(fnmatch.fnmatch(p.name, pattern) for p in target.glob("*.jar"))
+
+ def has_installed_artifact(self, module):
+ rp = self.repo_path(module["groupId"], module["artifactId"], module["version"])
+ if module["packaging"] == "pom":
+ return (rp / f"{module['artifactId']}-{module['version']}.pom").is_file()
+ return any(p.suffix == ".jar" and fnmatch.fnmatch(
+ p.name, f"{module['artifactId']}-{module['version']}*.jar")
+ for p in rp.glob("*.jar"))
+
+ def run(self):
+ if not REPO_DIR.exists():
+ logger.info("Build: FAIL - base project directory not found")
+ return False
+
+ poms = self.find_poms(REPO_DIR)
+ if not poms:
+ logger.info("Build: FAIL - no pom.xml files found under wasabi-testing")
+ return False
+
+ root_pom = REPO_DIR / "pom.xml"
+ top_defaults = {}
+ if root_pom.exists():
+ root_mod = self.parse_pom(root_pom)
+ if not root_mod.get("error"):
+ if root_mod.get("groupId"):
+ top_defaults["groupId"] = root_mod["groupId"]
+ if root_mod.get("version"):
+ top_defaults["version"] = root_mod["version"]
+
+ modules = []
+ errors = []
+ for pom in poms:
+ m = self.parse_pom(pom, top_defaults=top_defaults)
+ if m.get("error"):
+ errors.append((pom, m["error"]))
+ continue
+ if not all([m.get("artifactId"), m.get("groupId"), m.get("version")]):
+ errors.append((pom, "missing groupId/artifactId/version after inheritance"))
+ else:
+ modules.append(m)
+
+ if errors:
+ logger.info("Build: FAIL - POM parsing errors present")
+ for pom, err in errors[:5]:
+ logger.info(f" - {pom}: {err}")
+ if len(errors) > 5:
+ logger.info(f" ... {len(errors)-5} more")
+ return False
+
+ missing_targets = []
+ missing_installs = []
+
+ for m in modules:
+ # skip aggregator-only modules that are 'pom' packaging for target check
+ if not self.has_target_jar(m):
+ missing_targets.append(str(m["dir"]))
+ if not self.has_installed_artifact(m):
+ missing_installs.append(f"{m['groupId']}:{m['artifactId']}:{m['version']}")
+
+ if missing_targets or missing_installs:
+ logger.info("Code build: FAIL")
+ if missing_targets:
+ logger.info(" Missing built JARs in target/:")
+ for d in missing_targets[:10]:
+ logger.info(f" - {d}")
+ if len(missing_targets) > 10:
+ logger.info(f" ... {len(missing_targets)-10} more")
+ if missing_installs:
+ logger.info(" Missing artifacts in local ~/.m2 repository:")
+ for gav in missing_installs[:10]:
+ logger.info(f" - {gav}")
+ if len(missing_installs) > 10:
+ logger.info(f" ... {len(missing_installs)-10} more")
+
+ return False
+
+ logger.info("Code build: PASS")
+ return True
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_benchmark_prep.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_benchmark_prep.py
new file mode 100644
index 00000000..0810bcb3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_benchmark_prep.py
@@ -0,0 +1,160 @@
+#!/usr/bin/env python3
+import sys
+import shlex
+import subprocess
+from pathlib import Path
+
+from utils import BENCH_DIR
+from utils import logger
+
+
+
+REPOS = {
+ "hadoop": ("https://github.com/apache/hadoop.git", "60867de"),
+ "hbase": ("https://github.com/apache/hbase.git", "89ca7f4"),
+ "hive": ("https://github.com/apache/hive.git", "e08a600"),
+}
+
+ASPECTJ_MARKERS = [
+ "ajc$preClinit",
+ "ajc$initFailureCause",
+ "ajc$tjp",
+ "ajc$before$",
+ "ajc$after$",
+ "ajc$around$",
+ "ajc$interField$",
+ "ajc$interMethod$",
+ "org.aspectj.runtime.reflect.Factory",
+ "org.aspectj.runtime.internal.AroundClosure",
+ "org.aspectj.lang.JoinPoint",
+ "org.aspectj.lang.JoinPoint$StaticPart",
+ "org.aspectj.lang.ProceedingJoinPoint",
+ "org.aspectj.lang.Signature",
+ "org.aspectj.lang.NoAspectBoundException",
+]
+
+class OracleBenchmarkPrep:
+
+ def __init__(self):
+ self.max_class_dirs = 200
+ self.max_classess_per_dir = 2000
+
+ def run_shell_command(self, cmd):
+ """
+ Run a bash command given as argument.
+ """
+ try:
+ cp = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+ return cp.returncode, (cp.stdout or "").strip(), (cp.stderr or "").strip()
+ except FileNotFoundError as e:
+ return 127, "", str(e)
+
+ def find_class_dirs(self, app_root: Path):
+ """
+ Find directories that contain .class files.
+ """
+ qroot = shlex.quote(str(app_root))
+ cmd = [
+ "bash",
+ "-lc",
+ (
+ f"shopt -s nullglob; "
+ f"find {qroot} -type f -name '*.class' "
+ f"-not -path '*/.git/*' -not -path '*/.m2/*' -not -path '*/.gradle/*' "
+ f"-printf '%h\n' | sort -u"
+ ),
+ ]
+ rc, out, err = self.run_shell_command(cmd)
+ if rc != 0:
+ return [], f"find failed: {err or out}"
+ dirs = [Path(p) for p in out.splitlines() if p]
+ return dirs, ""
+
+ def iter_class_files(self, classes_dir: Path, limit: int):
+ """
+ Iterate over .class files from a class directory, processing up to
+ a configurable number of files.
+ """
+ q = shlex.quote(str(classes_dir))
+ cmd = ["bash", "-lc", f"shopt -s nullglob; find {q} -type f -name '*.class' | sort"]
+ rc, out, err = self.run_shell_command(cmd)
+ if rc != 0 or not out:
+ return []
+ files = [Path(p) for p in out.splitlines() if p]
+ if limit and len(files) > limit:
+ step = max(len(files) // limit, 1)
+ files = files[::step][:limit]
+ return files
+
+ def check_repo_commit(self, app: str, app_root: Path, expected_commit_prefix: str):
+ """
+ Verify the repo at app_root is a git repo and HEAD matches an expected commit ID prefix.
+ """
+ if not app_root.is_dir():
+ return False, f"{app}: FAIL (clone) - directory not found: {app_root}"
+
+ rc, out, err = self.run_shell_command(["git", "-C", str(app_root), "rev-parse", "HEAD"])
+ if rc != 0:
+ return False, f"{app}: FAIL (clone) - not a git repo or unreadable HEAD: {err or out}"
+
+ head = (out or "").strip()
+ if head.startswith(expected_commit_prefix):
+ return True, f"{app}: PASS (clone) - commit {head[:12]} matches {expected_commit_prefix}"
+ else:
+ return False, f"{app}: FAIL (clone) - HEAD {head[:12]} != expected {expected_commit_prefix}*"
+
+
+ def classfile_has_aspect_markers(self, class_path: Path):
+ """
+ Search through a decoded .class for AspectJ markers.
+ """
+ pattern = "|".join(ASPECTJ_MARKERS)
+ cmd = ["bash", "-lc", f"strings {shlex.quote(str(class_path))} | grep -a -E '{pattern}' -m 1"]
+ rc, out, err = self.run_shell_command(cmd)
+ if rc == 0 and out:
+ matched = next((m for m in ASPECTJ_MARKERS if m in out), out)
+ return True, matched
+ return False, ""
+
+ def check_app_weaving(self, app: str, app_root: Path):
+ """
+ Scan compiled .class files for AspectJ markers.
+ """
+ if not app_root.is_dir():
+ return False, f"{app}: FAIL (waving) - directory not found: {app_root}"
+
+ class_dirs, err = self.find_class_dirs(app_root)
+ if err:
+ return False, f"{app}: FAIL (waving) - {err}"
+ if not class_dirs:
+ return False, f"{app}: FAIL (waving) - no compiled .class files found under {app_root}"
+
+ dirs = class_dirs[:self.max_class_dirs] if (self.max_class_dirs and len(class_dirs) > self.max_class_dirs) else class_dirs
+
+ for cdir in dirs:
+ for cf in self.iter_class_files(cdir, self.max_classess_per_dir):
+ ok, marker = self.classfile_has_aspect_markers(cf)
+ if ok:
+ return True, f"{app}: PASS (weaving) - marker '{marker}' in {cf}"
+
+ return False, f"{app}: FAIL (weaving) - scanned .class files but found no AspectJ markers"
+
+
+ def run(self):
+ success = True
+ for app in REPOS:
+ app_root = BENCH_DIR / app
+
+ expected_commit = REPOS[app][1]
+ ok, msg = self.check_repo_commit(app, app_root, expected_commit)
+ logger.info(msg)
+ success = success and ok
+
+ ok, msg = self.check_app_weaving(app, app_root)
+ logger.info(msg)
+ success = success and ok
+
+        return success
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_env_setup.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_env_setup.py
new file mode 100644
index 00000000..fab44948
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_env_setup.py
@@ -0,0 +1,161 @@
+#!/usr/bin/env python3
+import os
+import re
+import shutil
+import subprocess
+from dataclasses import dataclass
+from typing import Iterable, Optional, Tuple
+from pathlib import Path
+
+from utils import REPO_DIR
+from utils import logger
+
+VersionTuple = Tuple[int, ...]
+@dataclass(frozen=True)
+class Dependency:
+ name: str
+ binary: str
+ cmd: Optional[list] = None
+ parse_regex: Optional[str] = None
+ require: Optional[VersionTuple] = None
+ compare: Optional[str] = None
+
+DEPENDENCIES: list[Dependency] = [
+
+ Dependency(
+ name="git", binary="git"
+ ),
+
+ Dependency(
+ name="maven", binary="mvn",
+ cmd=["mvn", "-v"], parse_regex=r"Apache Maven\s+([0-9.]+)",
+ require=(3, 6, 3), compare="gte",
+ ),
+ Dependency(
+ name="gradle", binary="gradle",
+ cmd=["gradle", "-v"], parse_regex=r"Gradle\s+([0-9.]+)",
+ require=(4, 4, 1), compare="gte",
+ ),
+ Dependency(
+ name="ant", binary="ant",
+ cmd=["ant", "-version"], parse_regex=r"version\s+([0-9.]+)",
+ require=(1, 10), compare="gte",
+ ),
+ Dependency(
+ name="python3", binary="python3",
+ cmd=["python3", "--version"], parse_regex=r"Python\s+([0-9.]+)",
+ require=(3, 10), compare="gte",
+ ),
+ Dependency(
+ name="java", binary="java",
+ cmd=["java", "-version"], parse_regex=r'version\s+"([^"]+)"',
+ require=(1, 8), compare="eq",
+ ),
+]
+
+class OracleEnvSetup:
+
+ def __init__(self) -> None:
+ self.expected_root_dir = REPO_DIR
+        self.expected_java_home = "/usr/lib/jvm/java-8-openjdk-amd64/jre"
+
+ def run_shell_command(self, cmd: Iterable[str]) -> Tuple[int, str, str]:
+ """
+ Run a command and return (rc, stdout, stderr) tuple.
+ """
+ try:
+ cp = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+ return cp.returncode, cp.stdout or "", cp.stderr or ""
+ except FileNotFoundError:
+ return 127, "", ""
+
+ def parse_version_tuple(self, text: str) -> VersionTuple:
+ """
+ Extract the first version-like token from arbitrary text.
+ For example, for Java: '1.8.0_422' -> (1, 8, 0)
+ """
+ m = re.search(r"(\d+(?:\.\d+){0,3})", text)
+ return tuple(int(x) for x in m.group(1).split(".")) if m else ()
+
+ def extract_version(self, text: str, pattern: str) -> Tuple[VersionTuple, str]:
+ """
+ Apply regex pattern on a version string.
+ """
+ m = re.search(pattern, text, re.I)
+ if not m:
+ return (), "unknown"
+ ver_str = m.group(1)
+ return self.parse_version_tuple(ver_str), ver_str
+
+    def cmp_versions(self, found: VersionTuple, required: VersionTuple, mode: str) -> bool:
+        """
+        Compare versions: either they match exactly ('eq'), or the installed
+        version is greater than or equal to the reference one ('gte').
+        """
+ if not found:
+ return False
+ f, r = list(found), list(required)
+ while len(f) < len(r): f.append(0)
+ while len(r) < len(f): r.append(0)
+ return (f == r) if mode == "eq" else (f >= r)
+
+    def paths_check(self):
+        wasabi_root = os.environ.get("WASABI_ROOT_DIR", "")
+        # the env var is a string while expected_root_dir is a Path, so compare as paths
+        if not (wasabi_root and Path(wasabi_root) == self.expected_root_dir and Path(wasabi_root).exists()):
+            return False, "WASABI_ROOT_DIR incorrect"
+ java_home = os.environ.get("JAVA_HOME", "")
+ if not (java_home == self.expected_java_home and Path(java_home).exists()):
+ return False, "JAVA_HOME incorrect"
+ return True, ""
+
+ def check_dependency(self, dep: Dependency) -> Optional[str]:
+ """
+ Core method that checks whether a certain dependency of a version
+ equal or greather than that specified in the README is installed.
+ """
+ if shutil.which(dep.binary) is None:
+ return f"{dep.name} missing"
+
+ if dep.cmd is None and dep.parse_regex is None and dep.require is None:
+ return None
+
+ rc, out, err = self.run_shell_command(dep.cmd or [])
+ text = (out + "\n" + err).strip()
+
+ if dep.parse_regex and dep.require and dep.compare:
+ ver_tuple, ver_str = self.extract_version(text, dep.parse_regex)
+ if not ver_tuple:
+ return f"{dep.name} version unreadable"
+ ok = self.cmp_versions(ver_tuple, dep.require, dep.compare)
+ cmp_word = "==" if dep.compare == "eq" else ">="
+ want = ".".join(map(str, dep.require))
+ return None if ok else f"{dep.name} {cmp_word} {want} not met (got {ver_str})"
+
+ return f"{dep.name} check misconfigured"
+
+ def prereqs_check(self):
+ problems: list[str] = []
+ for dep in DEPENDENCIES:
+ msg = self.check_dependency(dep)
+ if msg:
+ problems.append(msg)
+ if problems:
+ return False, "; ".join(problems)
+ return True, ""
+
+ def run(self):
+ results = []
+
+ ok, why = self.prereqs_check()
+ logger.info(f"Prerequisites: {'PASS' if ok else 'FAIL' + (' - ' + why if why else '')}")
+ results.append(ok)
+
+ ok, why = self.paths_check()
+ logger.info(f"Paths: {'PASS' if ok else 'FAIL' + (' - ' + why if why else '')}")
+ results.append(ok)
+
+        return all(results)
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_experiment_runs.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_experiment_runs.py
new file mode 100644
index 00000000..e37e0d42
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/oracle_experiment_runs.py
@@ -0,0 +1,121 @@
+from collections import defaultdict
+import os
+
+from utils import RESULTS_ROOT_DIR
+from utils import GROUND_TRUTH_FILE
+from utils import SIMILARITY_RATIO
+
+from utils import logger
+
+class OracleExperimentRuns:
+ def __init__(self):
+ pass
+
+ def get_benchmark_name(self, loc):
+ """
+ Classifies the location based on its prefix.
+ """
+ if loc.startswith("org.apache.hadoop.hdfs") and "SecondaryNameNode.doWork" not in loc:
+ return "hdfs"
+ elif loc.startswith("org.apache.hadoop.yarn"):
+ return "yarn"
+ elif loc.startswith("org.apache.hadoop.mapreduce") or loc.startswith("org.apache.hadoop.mapred"):
+ return "mapreduce"
+ elif loc.startswith("org.apache.hadoop.hbase"):
+ return "hbase"
+ elif loc.startswith("org.apache.hadoop.hive"):
+ return "hive"
+ elif loc.startswith("org.apache.cassandra"):
+ return "cassandra"
+ elif loc.startswith("org.apache.hadoop") or "SecondaryNameNode.doWork" in loc: # initialy found in hadoop-common, added here to match Table 3
+ return "hadoop"
+ elif loc.startswith("org.elasticsearch"):
+ return "elasticsearch"
+ else:
+ return "unknown"
+
+    def aggregate_bugs(self, root_dir):
+        """
+        Searches for bug report files and aggregates bugs by type and by the
+        application in which they were found.
+        """
+ bugs = defaultdict(lambda: defaultdict(set))
+ unique = dict()
+
+ for dirpath, _, files in os.walk(root_dir):
+ for file in files:
+ if file.endswith(".csv"):
+ file_path = os.path.join(dirpath, file)
+
+ with open(file_path, 'r') as f:
+ for line in f:
+ if "how-bug" in line or "when-missing-" in line:
+                                tokens = line.strip().split(",")
+                                if len(tokens) < 3:
+                                    continue  # skip malformed report lines
+
+                                bug_type = tokens[1]
+                                bug_loc = tokens[2]
+
+ key = bug_type + bug_loc
+ if key in unique:
+ continue
+ unique[key] = "x"
+
+ benchmark = self.get_benchmark_name(bug_loc)
+ bugs[bug_type][benchmark].add(bug_loc)
+
+ return bugs
+
+ def get_ground_truth_bugs(self, file_path: str):
+ """
+ Reads the ground truth values from a file into a dictionary.
+ """
+ ground_truth = defaultdict(lambda: defaultdict(set))
+
+ try:
+ with open(file_path, 'r') as f:
+ for line in f:
+ tokens = line.strip().split(",")
+ benchmark = tokens[0]
+ bug_type = tokens[1]
+ retry_location = tokens[2]
+ ground_truth[bug_type][benchmark].add(retry_location)
+ except Exception:
+ logger.info(f"Cannot open {file_path} or file not present.")
+
+ return ground_truth
+
+ def count_bugs(self, bugs, ground_truth):
+ """
+ Compares the total number of bugs found against the ground truth.
+ """
+ total_ground_truth = 0
+ total_found = 0
+
+ for bug_type, benchmarks in ground_truth.items():
+ for benchmark, ground_truth_locations in benchmarks.items():
+ total_ground_truth += len(ground_truth_locations)
+ bug_locations = bugs.get(bug_type, {}).get(benchmark, set())
+ matching_locations = ground_truth_locations & bug_locations
+ total_found += len(matching_locations)
+
+ if total_ground_truth == 0:
+ logger.info("No ground truth bugs available.")
+ return False
+
+ coverage = total_found / total_ground_truth
+ logger.info(f"Found {total_found} out of {total_ground_truth} ground truth bugs ({coverage:.2%}).")
+
+ passed = coverage >= SIMILARITY_RATIO
+ logger.info("Results reproduced: PASS" if passed else "Results reproduced: FAIL")
+ return passed
+
+
+ def run(self):
+ bugs = self.aggregate_bugs(str(RESULTS_ROOT_DIR))
+ ground_truth = self.get_ground_truth_bugs(str(GROUND_TRUTH_FILE))
+ passed = self.count_bugs(bugs, ground_truth)
+
+        return passed
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/utils.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/utils.py
new file mode 100644
index 00000000..12194a79
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/_agent_eval/utils.py
@@ -0,0 +1,31 @@
+#!/usr/bin/env python3
+
+# --- CONSTANTS --- #
+from pathlib import Path
+
+HOME = Path.home()
+REPO_DIR = HOME / "sosp24_wasabi" / "wasabi"
+BENCH_DIR = HOME / "sosp24_wasabi" / "benchmarks"
+RESULTS_ROOT_DIR = REPO_DIR / "results"
+GROUND_TRUTH_FILE = REPO_DIR / "bugs_ground_truth.txt"
+SIMILARITY_RATIO = 0.75
+
+
+# --- CUSTOM LOGGER --- #
+import logging
+import os
+from datetime import datetime
+
+os.makedirs('logs', exist_ok=True)
+
+LOG_FORMAT = '%(asctime)s | %(levelname)s | %(name)s | %(message)s'
+DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
+
+logger = logging.getLogger("WASABI-AGENT-EVALUATOR")
+logger.setLevel(logging.DEBUG)
+
+console_handler = logging.StreamHandler()
+console_handler.setLevel(logging.INFO)
+console_handler.setFormatter(logging.Formatter(LOG_FORMAT, datefmt=DATE_FORMAT))
+
+logger.addHandler(console_handler)
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/bugs_ground_truth.txt b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/bugs_ground_truth.txt
new file mode 100644
index 00000000..6151825a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/bugs_ground_truth.txt
@@ -0,0 +1,42 @@
+hadoop,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork
+hadoop,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk
+hdfs,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run
+hdfs,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks
+hdfs,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo
+hdfs,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream
+hdfs,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run
+hdfs,when-missing-backoff,org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run
+hdfs,when-missing-backoff,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock
+hdfs,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.transfer
+hdfs,how-bug,org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader
+hdfs,how-bug,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream
+mapreduce,when-missing-backoff,org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob
+mapreduce,when-missing-backoff,org.apache.hadoop.mapred.Task.run
+mapreduce,when-missing-backoff,org.apache.hadoop.mapred.Task.done
+mapreduce,when-missing-backoff,org.apache.hadoop.mapred.Task.statusUpdate
+mapreduce,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone
+hbase,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile
+hbase,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState
+hbase,when-missing-cap,org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState
+hbase,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize
+hbase,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile
+hbase,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom
+hbase,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub
+hbase,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run
+hbase,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance
+hbase,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists
+hbase,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId
+hbase,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive
+hbase,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile
+hbase,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall
+hbase,when-missing-backoff,org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call
+hbase,how-bug,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom
+hbase,how-bug,org.apache.hadoop.hbase.HBaseServerBase.putUpWebUI
+hive,when-missing-cap,org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock
+hive,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution
+hive,when-missing-backoff,org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent
+hive,when-missing-backoff,org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock
+hive,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec
+cassandra,when-missing-cap,org.apache.cassandra.db.compaction.Scrubber.scrub
+cassandra,when-missing-backoff,org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange
+elasticsearch,when-missing-backoff,org.elasticsearch.cluster.coordination.ClusterBootstrapService.doBootstrap
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/cassandra/casandra_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/cassandra/casandra_retry_locations.data
new file mode 100644
index 00000000..6cc8521a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/cassandra/casandra_retry_locations.data
@@ -0,0 +1,22 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/db/compaction/Scrubber.java#L196!!!org.apache.cassandra.db.compaction.Scrubber.scrub!!!org.apache.cassandra.db.compaction.Scrubber$ScrubInfo.getCompactionInfo!!!Scrubber.java:199!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/db/compaction/Scrubber.java#L196!!!org.apache.cassandra.db.compaction.Scrubber.scrub!!!org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength!!!Scrubber.java:208!!!java.io.IOException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/db/compaction/Scrubber.java#L196!!!org.apache.cassandra.db.compaction.Scrubber.scrub!!!org.apache.cassandra.db.marshal.AbstractType>.validate!!!Scrubber.java:209!!!org.apache.cassandra.serializers.MarshalException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java#L298!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.preparedStatement!!!CqlRecordWriter.java:320!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java#L298!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run!!!java.util.concurrent.BlockingQueue>.take!!!CqlRecordWriter.java:303!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java#L312!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.preparedStatement!!!CqlRecordWriter.java:320!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/StorageService.java#L4587!!!org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange!!!org.apache.cassandra.service.StorageService.tryRepairPaxosForTopologyChange!!!StorageService.java:4591!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/StorageService.java#L4587!!!org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange!!!org.apache.cassandra.service.StorageService.tryRepairPaxosForTopologyChange!!!StorageService.java:4591!!!java.lang.AssertionError
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.CASRequest.makeUpdates!!!Paxos.java:702!!!org.apache.cassandra.exceptions.InvalidRequestException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.CASRequest.appliesTo!!!Paxos.java:669!!!org.apache.cassandra.exceptions.InvalidRequestException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:729!!!org.apache.cassandra.exceptions.RequestFailureException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:729!!!org.apache.cassandra.exceptions.RequestTimeoutException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.triggers.TriggerExecutor.execute!!!Paxos.java:711!!!org.apache.cassandra.exceptions.InvalidRequestException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.Paxos$Participants.assureSufficientLiveNodes!!!Paxos.java:1049!!!org.apache.cassandra.exceptions.UnavailableException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:992!!!org.apache.cassandra.exceptions.RequestFailureException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:992!!!org.apache.cassandra.exceptions.RequestTimeoutException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.PaxosPrepare.prepare!!!Paxos.java:1013!!!org.apache.cassandra.exceptions.UnavailableException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.reads.ResponseResolver.preprocess!!!Paxos.java:1025!!!java.lang.IllegalArgumentException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.reads.ResponseResolver.preprocess!!!Paxos.java:1025!!!java.lang.IllegalStateException
+https://github.com/apache/cassandra/blob/f0ad7eadbeb3208e08a9339881931222fdab253b/src/java/org/apache/cassandra/utils/binlog/ExternalArchiver.java#L86!!!org.apache.cassandra.utils.binlog.ExternalArchiver.ExternalArchiver!!!org.apache.cassandra.utils.binlog.ExternalArchiver.archiveFile!!!ExternalArchiver.java:93!!!java.io.IOException
+https://github.com/apache/cassandra/blob/360128b3eb8f1b19dfc887a60d0678bc1f67703f/src/java/org/apache/cassandra/db/repair/PendingAntiCompaction.java!!!PendingAntiCompaction.AcquisitionCallable.call!!!PendingAntiCompaction.AcquisitionCallable.acquireSSTables!!!PendingAntiCompaction.SSTableAcquisitionException!!!PendingAntiCompaction.SSTableAcquisitionException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/elasticsearch/elasticsearch_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/elasticsearch/elasticsearch_retry_locations.data
new file mode 100644
index 00000000..32701aa0
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/elasticsearch/elasticsearch_retry_locations.data
@@ -0,0 +1,50 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/indices/IndicesService.java#L1205!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.env.NodeEnvironment.deleteIndexDirectoryUnderLock!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/indices/IndicesService.java#L1205!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.indices.IndicesService.deleteShardStore!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/notification/email/attachment/ReportingAttachmentParser.java#L179!!!org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.toAttachment!!!org.elasticsearch.xpack.watcher.common.http.HttpClient.execute!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/notification/email/attachment/ReportingAttachmentParser.java#L179!!!org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.toAttachment!!!org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.sleep!!!N/A!!!org.elasticsearch.ElasticsearchException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/cluster/coordination/ClusterBootstrapService.java#L263!!!org.elasticsearch.cluster.coordination.ClusterBootstrapService.doBootstrap!!!java.util.function.Consumer.accept!!!N/A!!!java.lang.Exception
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/index/IndexService.java#L617!!!org.elasticsearch.index.IndexService.onShardClose!!!beforeIndexShardDeleted!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/1b84ea742143874a966d06daa373b31c9e99822f/server/src/main/java/org/elasticsearch/gateway/PersistedClusterStateService.java#L1289!!!org.elasticsearch.gateway.PersistedClusterStateService.completeCommit!!!org.elasticsearch.gateway.PersistedClusterStateService.commit!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/indices/IndicesService.java#L1346!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.env.NodeEnvironment.deleteIndexDirectoryUnderLock!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/indices/IndicesService.java!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.indices.IndicesService.deleteShardStore!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java#L330!!!org.elasticsearch.common.blobstore.fs.FsBlobContainer.moveBlobAtomic!!!java.nio.file.Files.move!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/common/file/AbstractFileWatchingService.java#L271!!!org.elasticsearch.common.file.AbstractFileWatchingService.enableDirectoryWatcher!!!org.elasticsearch.monitor.fs.FsInfo.Path.register!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/CommandLineHttpClient.java#L271!!!org.elasticsearch.xpack.core.security.CommandLineHttpClient.checkClusterHealthWithRetriesWaitingForCluster!!!org.elasticsearch.xpack.core.security.CommandLineHttpClient.execute!!!N/A!!!java.lang.Exception
+https://github.com/elastic/elasticsearch/tree//7556157//plugins/repository-gcs/src/main/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageBlobStore.java#L257!!!org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobResumable!!!org.elasticsearch.core.internal.io.Streams.copy!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//plugins/repository-gcs/src/main/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageBlobStore.java#L257!!!org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobResumable!!!org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.delete.DeleteRequest.setIfPrimaryTerm!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.delete.DeleteRequest.setIfSeqNo!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.index.IndexRequest.setIfPrimaryTerm!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.index.IndexRequest.setIfSeqNo!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.update.UpdateRequest.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.update.UpdateRequest.setIfPrimaryTerm!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.update.UpdateRequest.setIfSeqNo!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.booleanValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.longValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.intValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.text!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.currentName!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.nextToken!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.index.VersionType.fromString!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent!!!N/A!!!org.elasticsearch.common.ParsingException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.bulk.BulkRequestParser.createParser!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.bulk.BulkRequestParser.findNextMarker!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.booleanValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.longValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.intValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.text!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.currentName!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.nextToken!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.index.VersionType.fromString!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.floatValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.longValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.intValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.text!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.currentName!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.skipChildren!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.nextToken!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.index.reindex.BulkByScrollTask$StatusOrException.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.ConstructingObjectParser,Void>.parse!!!N/A!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/example_hdfs.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/example_hdfs.conf
new file mode 100644
index 00000000..d7a10a5a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/example_hdfs.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/example_hdfs.data
+injection_policy: max-count
+max_injection_count: 97
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/example_hdfs.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/example_hdfs.data
new file mode 100644
index 00000000..35784eee
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/example_hdfs.data
@@ -0,0 +1,3 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:245!!!java.io.IOException
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSStripedInputStream.java:256!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/example.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/example.conf
new file mode 100644
index 00000000..3567cd66
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/example.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/example.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/example.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/example.data
new file mode 100644
index 00000000..35784eee
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/example.data
@@ -0,0 +1,3 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:245!!!java.io.IOException
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSStripedInputStream.java:256!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop.conf
new file mode 100644
index 00000000..be986fa6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop.conf
@@ -0,0 +1,3 @@
+retry_data_file: /home/bastoica/projects/current/wasabi/tool/config/hadoop/hadoop_retry_locations.data
+injection_policy: max-count
+max_injection_count: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_retry_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_retry_bounds.data
new file mode 100644
index 00000000..3a333dd1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_retry_bounds.data
@@ -0,0 +1,195 @@
+Var name!!!Assigned value!!!Assign method!!!Test class
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationMasterLauncher
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancer
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancerService
+MAX_ATTEMPTS_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancerService
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancerWithHANameNodes
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockRecovery
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockRecovery2
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockTokenWithDFS
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockTokenWithShortCircuitRead
+RM_AM_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCapacityScheduler
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCapacitySchedulerApps
+RM_AM_MAX_ATTEMPTS!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestCapacitySchedulerSurgicalPreemption
+DFS_NAMENODE_CHECKPOINT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCheckpoint
+MR_CLIENT_JOB_MAX_RETRIES!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCLI
+MR_CLIENT_JOB_MAX_RETRIES!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCLI
+LOCATEFOLLOWINGBLOCK_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestClientProtocolForPipelineRecovery
+CLIENT_FAILOVER_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setLong!!!TestClientRMProxy
+MR_CLIENT_MAX_RETRIES!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestClientServiceDelegate
+MR_CLIENT_MAX_RETRIES!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestClientServiceDelegate
+MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestConsistentReadsObserver
+MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestConsistentReadsObserver
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestContainerResizing
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestContainerResourceUsage
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDataNodeMetricsLogger
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDatanodeProtocolRetryPolicy
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDataNodeReconfiguration
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokensWithHA
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdmin
+MAX_ATTEMPTS_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdminWithHA
+MAX_ATTEMPTS_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdminWithHA
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdminWithHA
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSInotifyEventInputStreamKerberized
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSStripedOutputStreamWithFailure
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDSTimelineV10
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDSTimelineV10
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestExternalStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestExternalStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestExternalStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestExternalStoragePolicySatisfier
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFileAppend4
+FILEOUTPUTCOMMITTER_FAILURE_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFileOutputCommitter
+FS_RM_STATE_STORE_NUM_RETRIES!!!8!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFSRMStateStore
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestHealthMonitor
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestIPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestIPC
+MR_CLIENT_JOB_MAX_RETRIES!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestJobClients
+MR_JOB_END_RETRY_ATTEMPTS!!!"0"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"0"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"0"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"10"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"10"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"20"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"3"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"3"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"3"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestJobImpl
+MR_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestJobImpl
+AUTH_RETRY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestKMS
+LDAP_NUM_ATTEMPTS_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLdapGroupsMappingWithBindUserSwitch
+LDAP_NUM_ATTEMPTS_KEY!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestLdapGroupsMappingWithOneQuery
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!4!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!10000!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!10000!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMRJobs
+MAP_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMRJobs
+LOCATEFOLLOWINGBLOCK_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNamenodeCapacityReport
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNMProxy
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNMProxy
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNNStartupWhenViewFSOverloadSchemeEnabled
+RM_AM_MAX_ATTEMPTS!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNodeBlacklistingOnAMFailures
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNodeStatusUpdater
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setTimeDuration!!!TestObserverNode
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!5000!!!org.apache.hadoop.conf.Configuration.setLong!!!TestObserverNode
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setTimeDuration!!!TestObserverReadProxyProvider
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!20!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPersistentStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!20!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPersistentStoragePolicySatisfier
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestQJMWithFaults
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestQuorumJournalManager
+REDUCE_MAX_ATTEMPTS!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestReduceFetchFromPartialMem
+MAP_MAX_ATTEMPTS!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestReduceFetchFromPartialMem
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRM
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMContainerImpl
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMContainerImpl
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!40!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMWebServicesAppAttempts
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRollingFileSystemSinkWithSecureHdfs
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRouterRPCClientRetries
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPCServerShutdown
+SPECULATIVE_RETRY_AFTER_SPECULATE!!!5000L!!!org.apache.hadoop.conf.Configuration.setLong!!!TestRuntimeEstimators
+SPECULATIVE_RETRY_AFTER_NO_SPECULATE!!!500L!!!org.apache.hadoop.conf.Configuration.setLong!!!TestRuntimeEstimators
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSecureEncryptionZoneWithKMS
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSecureNNWithQJM
+MAX_ATTEMPTS_KEY!!!128!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSeveralNameNodes
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSpaceReservation
+SPECULATIVE_RETRY_AFTER_NO_SPECULATE!!!3000L!!!org.apache.hadoop.conf.Configuration.setLong!!!TestSpeculativeExecutionWithMRApp
+RETRY_LIMIT!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestStagingCommitter
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!30!!!org.apache.hadoop.conf.Configuration.setInt!!!TestStoragePolicySatisfierWithStripedFile
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!30!!!org.apache.hadoop.conf.Configuration.setInt!!!TestStoragePolicySatisfierWithStripedFile
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestStoragePolicySatisfierWithStripedFile
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineAuthenticationFilterForV1
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!-2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineClient
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineClient
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTrashWithSecureEncryptionZones
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewDistributedFileSystemWithMountLinks
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemHdfs
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemOverloadSchemeWithDFSAdmin
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemOverloadSchemeWithFSCommands
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemOverloadSchemeWithHdfsScheme
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+HA_FC_ELECTOR_ZK_OP_RETRIES_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestZKFailoverControllerStress
+ZK_NUM_RETRIES!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestZKRMStateStoreZKClientConnections
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_retry_locations.data
new file mode 100644
index 00000000..4e970b83
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_retry_locations.data
@@ -0,0 +1,202 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java#L151!!!org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!TrashPolicyDefault.java:161!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java#L151!!!org.apache.hadoop.fs.TrashPolicyDefault.run!!!deleteCheckpoint!!!TrashPolicyDefault.java:303!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java#L151!!!org.apache.hadoop.fs.TrashPolicyDefault.run!!!createCheckpoint!!!TrashPolicyDefault.java:304!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthMonitor.java#L170!!!org.apache.hadoop.ha.HealthMonitor.tryConnect!!!org.apache.hadoop.ha.HealthMonitor.createProxy!!!HealthMonitor.java:175!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java#L89!!!org.apache.hadoop.io.retry.RetryInvocationHandler.invokeOnce!!!invoke!!!RetryInvocationHandler.java:100!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DiskChecker.java#L262!!!org.apache.hadoop.util.DiskChecker.doDiskIo!!!org.apache.hadoop.util.DiskChecker.diskIoCheckWithoutNativeIo!!!DiskChecker.java:262!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java#L96!!!org.apache.hadoop.hdfs.protocol.CacheDirectiveIterator.makeRequest!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.listCacheDirectives!!!CacheDirectiveIterator.java:97!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java#L172!!!org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant.chooseEvenlyFromRemainingRacks!!!org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant.chooseOnce!!!BlockPlacementPolicyRackFaultTolerant.java:187!!!NotEnoughReplicasException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java#L64!!!org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup.chooseFavouredNodes!!!chooseRandom!!!BlockPlacementPolicyWithNodeGroup.java:91!!!NotEnoughReplicasException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalStoragePolicySatisfier.java#L111!!!org.apache.hadoop.hdfs.server.sps.ExternalStoragePolicySatisfier.getNameNodeConnector!!!newNameNodeConnectors!!!ExternalStoragePolicySatisfier.java:114!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java#L106!!!org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator.heartbeat!!!org.apache.hadoop.yarn.api.ApplicationMasterProtocol.allocate!!!LocalContainerAllocator.java:113!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/CreateOutputDirectoriesStage.java#L299!!!CreateOutputDirectoriesStage.maybeCreateOneDirectory!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!CreateOutputDirectoriesStage.java:305!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java#L347!!!org.apache.hadoop.yarn.server.AMRMClientRelayer.allocate!!!org.apache.hadoop.yarn.server.AMRMClientRelayer.reRegisterApplicationMaster!!!AMRMClientRelayer.java:386!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java#L153!!!org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm.DefaultPlacementAlgorithm.doPlacement!!!org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm.DefaultPlacementAlgorithm.attemptPlacementOnNode!!!DefaultPlacementAlgorithm.java:162!!!InvalidAllocationTagsQueryException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java#L382!!!org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.run!!!doAs!!!DelegationTokenRenewer.java:391!!!java.io.IOException
+https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java#L617!!!org.apache.hadoop.hdfs.DFSClient.renewLease!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.renewLease!!!DFSClient.java:618!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNativeFileSystemStore.java#L692!!!org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.callCOSClientWithRetry!!!org.apache.hadoop.fs.azure.StorageInterface$CloudBlockBlobWrapper.commitBlockList!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java#L85!!!org.apache.hadoop.fs.cosn.CosNFileReadTask.run!!!org.apache.hadoop.fs.cosn.NativeFileSystemStore.retrieveBlock!!!CosNFileReadTask.java:87!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java#L85!!!org.apache.hadoop.fs.cosn.CosNFileReadTask.run!!!org.apache.hadoop.io.IOUtils.readFully!!!CosNFileReadTask.java:89!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java#L85!!!org.apache.hadoop.fs.cosn.CosNFileReadTask.run!!!java.io.InputStream.close!!!CosNFileReadTask.java:91!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSCommonUtils.java#L891!!!org.apache.hadoop.fs.obs.OBSCommonUtils.isFolderEmpty!!!org.apache.hadoop.fs.obs.OBSCommonUtils.innerIsFolderEmpty!!!OBSCommonUtils.java:893!!!java.io.FileNotFoundException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java#L1214!!!org.apache.hadoop.fs.obs.OBSFileSystem.getFileStatus!!!org.apache.hadoop.fs.obs.OBSFileSystem.innerGetFileStatus!!!OBSFileSystem.java:1217!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L373!!!org.apache.hadoop.fs.obs.OBSInputStream.lazySeek!!!org.apache.hadoop.fs.obs.OBSInputStream.seekInStream!!!OBSInputStream.java:376!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L373!!!org.apache.hadoop.fs.obs.OBSInputStream.lazySeek!!!org.apache.hadoop.fs.obs.OBSInputStream.reopen!!!OBSInputStream.java:380!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L457!!!org.apache.hadoop.fs.obs.OBSInputStream.read!!!java.io.InputStream.read!!!OBSInputStream.java:459!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L526!!!org.apache.hadoop.fs.obs.OBSInputStream.onReadFailure!!!org.apache.hadoop.fs.obs.OBSInputStream.reopen!!!OBSInputStream.java:528!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L577!!!org.apache.hadoop.fs.obs.OBSInputStream.read!!!org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L687!!!org.apache.hadoop.fs.obs.OBSInputStream.read!!!org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L970!!!org.apache.hadoop.fs.obs.OBSInputStream.randomReadWithNewInputStream!!!org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream!!!OBSInputStream.java:976!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L479!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.createEmptyObject!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.innerCreateEmptyObject!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L542!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.copyFile!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.innerCopyFile!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java#L182!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameWithRetry!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameFile!!!N/A!!!java.io.FileNotFoundException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java#L182!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameWithRetry!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameFile!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L173!!!org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp!!!org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$ProviderCallable.call!!!LoadBalancingKMSClientProvider.java:176!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java#L301!!!org.apache.hadoop.fs.FSInputChecker.readChecksumChunk!!!org.apache.hadoop.fs.FSInputChecker.readChunk!!!FSInputChecker.java:305!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/CachingBlockManager.java#L149!!!org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get!!!org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.getInternal!!!CachingBlockManager.java:160!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/CachingBlockManager.java#L149!!!org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get!!!org.apache.hadoop.fs.impl.prefetch.BufferPool.acquire!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java#L1126!!!org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries!!!org.apache.hadoop.ha.ActiveStandbyElector$ZKAction.run!!!ActiveStandbyElector.java:1150!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java#L853!!!org.apache.hadoop.ha.ActiveStandbyElector.reEstablishSession!!!org.apache.hadoop.ha.ActiveStandbyElector.createConnection!!!ActiveStandbyElector.java:880!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!javax.net.SocketFactory.createSocket!!!Client.java:625!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setTcpNoDelay!!!Client.java:626!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setKeepAlive!!!Client.java:627!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setTrafficClass!!!Client.java:639!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!org.apache.hadoop.net.NetUtils.getLocalInetAddress!!!Client.java:656!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setReuseAddress!!!Client.java:658!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.bind!!!Client.java:663!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!Client.java:668!!!java.net.ConnectException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setSoTimeout!!!Client.java:669!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!Client.java:789!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.writeConnectionHeader!!!Client.java:791!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!Client.java:795!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$IpcStreams.setSaslClient!!!Client.java:818!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.writeConnectionContext!!!Client.java:831!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L419!!!org.apache.hadoop.ipc.RPC.waitForProtocolProxy!!!org.apache.hadoop.ipc.RPC.getProtocolProxy!!!RPC.java:421!!!java.net.ConnectException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L419!!!org.apache.hadoop.ipc.RPC.waitForProtocolProxy!!!org.apache.hadoop.security.UserGroupInformation.getCurrentUser!!!RPC.java:422!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L967!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.run!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.relogin!!!UserGroupInformation.java:986!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!RpcHeaderProtos.java:1835!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum!!!RpcHeaderProtos.java:1841!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readSInt32!!!RpcHeaderProtos.java:1866!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes!!!RpcHeaderProtos.java:1871!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readMessage!!!RpcHeaderProtos.java:1884!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64!!!RpcHeaderProtos.java:1907!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!RpcHeaderProtos.java:1916!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!RpcHeaderProtos.java:3785!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readUInt32!!!RpcHeaderProtos.java:3792!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum!!!RpcHeaderProtos.java:3796!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes!!!RpcHeaderProtos.java:3831!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readSInt32!!!RpcHeaderProtos.java:3843!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64!!!RpcHeaderProtos.java:3848!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!RpcHeaderProtos.java:3857!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1550!!!org.apache.hadoop.hdfs.DataStreamer.transfer!!!org.apache.hadoop.hdfs.DataStreamer$StreamerStreams.sendTransferBlock!!!DataStreamer.java:1594!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline!!!DataStreamer.java:1868!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getOutputStream!!!DataStreamer.java:1872!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getInputStream!!!DataStreamer.java:1873!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend!!!DataStreamer.java:1874!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage.getRecoveryStage!!!DataStreamer.java:1887!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.Sender.writeBlock!!!DataStreamer.java:1896!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$BlockOpResponseProto.parseFrom!!!DataStreamer.java:1904!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed!!!DataStreamer.java:1905!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed!!!DataStreamer.java:1905!!!java.io.EOFException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus!!!DataStreamer.java:1921!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1156!!!org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:1218!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1156!!!org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode!!!org.apache.hadoop.fs.ByteBufferReadable.read!!!DFSInputStream.java:1229!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L235!!!org.apache.hadoop.hdfs.DFSInputStream.openInfo!!!org.apache.hadoop.hdfs.DFSInputStream.fetchAndCheckLocatedBlocks!!!DFSInputStream.java:238!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L235!!!org.apache.hadoop.hdfs.DFSInputStream.openInfo!!!org.apache.hadoop.hdfs.DFSInputStream.getLastBlockLength!!!DFSInputStream.java:243!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L344!!!org.apache.hadoop.hdfs.DFSInputStream.readBlockLength!!!org.apache.hadoop.hdfs.DFSUtilClient.createClientDatanodeProtocolProxy!!!DFSInputStream.java:348!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L344!!!org.apache.hadoop.hdfs.DFSInputStream.readBlockLength!!!org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength!!!DFSInputStream.java:352!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockAt!!!DFSInputStream.java:627!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode!!!DFSInputStream.java:637!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:645!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L786!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!org.apache.hadoop.hdfs.ReaderStrategy.readFromBlock!!!DFSInputStream.java:790!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L786!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!org.apache.hadoop.hdfs.DFSInputStream.seekToBlockSource!!!DFSInputStream.java:820!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L786!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!org.apache.hadoop.hdfs.DFSInputStream.seekToNewSource!!!DFSInputStream.java:824!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L840!!!org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!DFSInputStream.java:879!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L840!!!org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!DFSInputStream.java:889!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L1141!!!org.apache.hadoop.hdfs.DFSOutputStream.addBlock!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock!!!DFSOutputStream.java:1148!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L291!!!org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.create!!!DFSOutputStream.java:294!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L990!!!org.apache.hadoop.hdfs.DFSOutputStream.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!DFSOutputStream.java:997!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:247!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSStripedInputStream.java:258!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L521!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.checksumBlock!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.tryDatanode!!!FileChecksumHelper.java:523!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L653!!!org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup!!!org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode!!!FileChecksumHelper.java:655!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java#L431!!!org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider$ObserverReadInvocationHandler.invoke!!!java.lang.reflect.Method.invoke!!!ObserverReadProxyProvider.java:543!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.hdfs.protocol.datatransfer.Sender.releaseShortCircuitFds!!!ShortCircuitCache.java:209!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$ReleaseShortCircuitAccessResponseProto.parseFrom!!!ShortCircuitCache.java:214!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed!!!ShortCircuitCache.java:214!!!java.io.EOFException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.net.unix.DomainSocket.connect!!!UserGroupInformation.java:986!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L824!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getUrl!!!WebHdfsFileSystem.java:827!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L824!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect!!!WebHdfsFileSystem.java:829!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L824!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse!!!WebHdfsFileSystem.java:830!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L885!!!org.apache.hadoop.hdfs.server.balancer.Balancer.run!!!org.apache.hadoop.hdfs.server.balancer.Balancer.doBalance!!!Balancer.java:887!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java#L880!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake!!!BPServiceActor.java:893!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L226!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.net.PeerServer.accept!!!DataXceiverServer.java:242!!!java.net.SocketTimeoutException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L226!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.server.datanode.DataXceiver.create!!!DataXceiverServer.java:253!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java#L163!!!org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl$ProvidedBlockPoolSlice.fetchVolumeMap!!!org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.getReader!!!ProvidedVolumeImpl.java:165!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L4632!!!org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run!!!org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles!!!FSNamesystem.java:4671!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java#L609!!!org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.getActiveNodeProxy!!!org.apache.hadoop.ipc.RPC.waitForProxy!!!EditLogTailer.java:632!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java#L609!!!org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.getActiveNodeProxy!!!org.apache.hadoop.ipc.RPC.getProtocolVersion!!!EditLogTailer.java:633!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java#L328!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler.run!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$ReencryptionPendingInodeIdCollector.checkPauseForTesting!!!ReencryptionHandler.java:333!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java#L436!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks!!!org.apache.hadoop.util.StopWatch.start!!!ReencryptionUpdater.java:439!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java#L436!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.processTask!!!ReencryptionUpdater.java:440!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab!!!SecondaryNameNode.java:353!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.security.UserGroupInformation.getCurrentUser!!!SecondaryNameNode.java:353!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount!!!SecondaryNameNode.java:358!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint!!!SecondaryNameNode.java:360!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java#L238!!!org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded$SPSPathIdProcessor.run!!!org.apache.hadoop.hdfs.server.namenode.sps.Context.scanAndCollectFiles!!!BlockStorageMovementNeeded.java:249!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java#L238!!!org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded$SPSPathIdProcessor.run!!!org.apache.hadoop.hdfs.server.namenode.sps.Context.removeSPSHint!!!BlockStorageMovementNeeded.java:256!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L217!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run!!!org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded.removeItemTrackInfo!!!StoragePolicySatisfier.java:235!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L217!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run!!!org.apache.hadoop.hdfs.server.namenode.sps.Context.getFileInfo!!!StoragePolicySatisfier.java:243!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L217!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN!!!StoragePolicySatisfier.java:255!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java#L203!!!org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock!!!org.apache.hadoop.hdfs.server.balancer.KeyManager.getAccessToken!!!ExternalSPSBlockMoveTaskHandler.java:206!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java#L203!!!org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock!!!org.apache.hadoop.hdfs.server.common.sps.BlockDispatcher.moveBlock!!!ExternalSPSBlockMoveTaskHandler.java:209!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java#L379!!!org.apache.hadoop.hdfs.tools.DebugAdmin$RecoverLeaseCommand.run!!!org.apache.hadoop.hdfs.DistributedFileSystem.recoverLease!!!DebugAdmin.java:384!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java#L135!!!org.apache.hadoop.mapred.YarnChild.main!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.getTask!!!YarnChild.java:140!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java#L633!!!org.apache.hadoop.mapred.JobClient.getJob!!!org.apache.hadoop.mapred.JobClient.getJobInner!!!JobClient.java:639!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobEndNotifier.java#L87!!!org.apache.hadoop.mapred.JobEndNotifier.localRunnerNotification!!!org.apache.hadoop.mapred.JobEndNotifier.httpNotification!!!JobEndNotifier.java:89!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1251!!!org.apache.hadoop.mapred.Task.done!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending!!!Task.java:1253!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1397!!!org.apache.hadoop.mapred.Task.commit!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.canCommit!!!Task.java:1399!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L860!!!org.apache.hadoop.mapred.Task$TaskReporter.run!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:885!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L860!!!org.apache.hadoop.mapred.Task$TaskReporter.run!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:891!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java#L375!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal!!!FileOutputCommitter.java:377!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java#L64!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.run!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.getMapCompletionEvents!!!EventFetcher.java:66!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java#L343!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput!!!Fetcher.java:346!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java#L410!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.setupConnectionsWithRetry!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.openConnection!!!Fetcher.java:413!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java#L713!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.connect!!!java.net.URLConnection.connect!!!Fetcher.java:717!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java#L662!!!org.apache.hadoop.mapreduce.tools.CLI.getJob!!!org.apache.hadoop.mapreduce.Cluster.getJob!!!CLI.java:660!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java#L662!!!org.apache.hadoop.mapreduce.tools.CLI.getJob!!!org.apache.hadoop.mapreduce.Cluster.getJob!!!CLI.java:670!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java#L322!!!org.apache.hadoop.mapred.ClientServiceDelegate.invoke!!!org.apache.hadoop.mapred.ClientServiceDelegate.getProxy!!!ClientServiceDelegate.java:325!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java#L322!!!org.apache.hadoop.mapred.ClientServiceDelegate.invoke!!!java.lang.reflect.Method.invoke!!!ClientServiceDelegate.java:326!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java#L72!!!org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileReaderTask.run!!!org.apache.hadoop.io.IOUtils.readFully!!!AliyunOSSFileReaderTask.java:75!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java#L462!!!org.apache.hadoop.fs.s3a.Invoker.retryUntranslated!!!org.apache.hadoop.util.functional.CallableRaisingIOE.apply!!!Invoker.java:468!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java#L716!!!org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockRequestInternal!!!org.apache.hadoop.fs.azure.StorageInterface$CloudBlockBlobWrapper.uploadBlock!!!BlockBlobAppendStream.java:720!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java#L782!!!org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockListRequestInternal!!!org.apache.hadoop.fs.azure.StorageInterface$CloudBlockBlobWrapper.commitBlockList!!!BlockBlobAppendStream.java:787!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.getHttpRequest!!!WasbRemoteCallHelper.java:148!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!org.apache.http.client.HttpClient.execute!!!WasbRemoteCallHelper.java:151!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!org.apache.http.HttpEntity.getContent!!!WasbRemoteCallHelper.java:203!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!java.io.BufferedReader.readLine!!!WasbRemoteCallHelper.java:206!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java#L303!!!org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenCall!!!org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenSingleCall!!!AzureADAuthenticator.java:307!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java#L72!!!org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter.refreshToken!!!org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee.getAccessToken!!!CustomTokenProviderAdapter.java:75!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.util.ProducerConsumer.take!!!SimpleCopyListing.java:750!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus!!!SimpleCopyListing.java:757!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.SimpleCopyListing.addToFileListing!!!SimpleCopyListing.java:765!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing!!!SimpleCopyListing.java:768!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java#L85!!!org.apache.hadoop.tools.util.RetriableCommand.execute!!!org.apache.hadoop.tools.util.RetriableCommand.doExecute!!!RetriableCommand.java:87!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L235!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties!!!org.apache.hadoop.fs.FileSystem.open!!!DynoInfraUtils.java:237!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L235!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties!!!org.apache.hadoop.fs.Path.getFileSystem!!!DynoInfraUtils.java:237!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L235!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties!!!java.util.Properties.load!!!DynoInfraUtils.java:239!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L458!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForNameNodeJMXValue!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchNameNodeJMXValue!!!DynoInfraUtils.java:460!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!YarnProtos.java:88317!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readMessage!!!YarnProtos.java:88328!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes!!!YarnProtos.java:88333!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!YarnProtos.java:88391!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!YarnProtos.java:92413!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum!!!YarnProtos.java:92419!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt32!!!YarnProtos.java:92435!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64!!!YarnProtos.java:92435!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readRawVarint32!!!YarnProtos.java:92463!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!YarnProtos.java:92467!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java#L1542!!!org.apache.hadoop.yarn.client.cli.LogsCLI$ClientConnectionRetry.retryOn!!!org.apache.hadoop.yarn.client.cli.LogsCLI$ClientRetryOp.run!!!LogsCLI.java:1545!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineConnector.java#L342!!!org.apache.hadoop.yarn.client.api.impl.TimelineConnector$TimelineClientConnectionRetry.retryOn!!!org.apache.hadoop.yarn.client.api.impl.TimelineConnector$TimelineClientRetryOp.run!!!TimelineConnector.java:341!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineV2ClientImpl.java#L251!!!org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects!!!org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects!!!TimelineV2ClientImpl.java:255!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1278!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController$FSAction.runWithRetries!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController$FSAction.run!!!LogAggregationIndexedFileController.java:1279!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.fs.RemoteIterator.hasNext!!!LogAggregationIndexedFileController.java:1320!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.fs.RemoteIterator.next!!!LogAggregationIndexedFileController.java:1322!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.fs.FileContext.open!!!LogAggregationIndexedFileController.java:1326!!!java.io.FileNotFoundException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!java.io.DataInputStream.readFully!!!LogAggregationIndexedFileController.java:1328!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.deleteFileWithRetries!!!LogAggregationIndexedFileController.java:1331!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/FederationActionRetry.java#L31!!!org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry.runWithRetries!!!org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry.run!!!FederationActionRetry.java:33!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java#L460!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.getApplicationReport!!!UnmanagedApplicationManager.java:475!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java#L460!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.getApplicationReport!!!UnmanagedApplicationManager.java:486!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java#L460!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt!!!org.apache.hadoop.yarn.api.ApplicationBaseProtocol.getApplicationAttemptReport!!!UnmanagedApplicationManager.java:499!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java#L355!!!org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState!!!org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerTokenIdentifier!!!NMLeveldbStateStoreService.java:368!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java#L355!!!org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState!!!org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ResourceMappings$AssignedResources.fromBytes!!!NMLeveldbStateStoreService.java:432!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java#L788!!!org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$FSAction.runWithRetries!!!org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$FSAction.run!!!FileSystemRMStateStore.java:790!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java#L1000!!!org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitReservation!!!org.apache.hadoop.yarn.api.ApplicationClientProtocol.submitReservation!!!FederationClientInterceptor.java:1218!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java#L963!!!org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getNewReservation!!!org.apache.hadoop.yarn.api.ApplicationClientProtocol.getNewReservation!!!FederationClientInterceptor.java:1151!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineWriterImpl.java#L268!!!org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl$FSAction.runWithRetries!!!org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl$FSAction.run!!!FileSystemTimelineWriterImpl.java:271!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_timeout_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_timeout_bounds.data
new file mode 100644
index 00000000..975f93e4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/hadoop_timeout_bounds.data
@@ -0,0 +1,221 @@
+TestBatchIbr.testIbr
+TestCheckpoint.testActiveImageWithTimeDeltaRelaxation
+TestCheckpoint.testActiveRejectSmallerTxidDeltaImage
+TestCheckpoint.testCheckpoint
+TestCheckpoint.testCheckpointAfterTwoFailedUploads
+TestCheckpoint.testCheckpointTriggerOnTxnCount
+TestCheckpoint.testCheckpointWithFailedStorageDir
+TestCheckpoint.testCheckpointWithSeparateDirsAfterNameFails
+TestCheckpoint.testDeleteTemporaryEditsOnStartup
+TestCheckpoint.testEditFailureBeforeRename
+TestCheckpoint.testEditFailureOnFirstCheckpoint
+TestCheckpoint.testFailureBeforeRename
+TestCheckpoint.testImportCheckpoint
+TestCheckpoint.testLegacyOivImage
+TestCheckpoint.testMultipleSecondaryNNsAgainstSameNN
+TestCheckpoint.testMultipleSecondaryNNsAgainstSameNN2
+TestCheckpoint.testMultipleSecondaryNamenodes
+TestCheckpoint.testNameDirError
+TestCheckpoint.testNameDirLocking
+TestCheckpoint.testNameNodeImageSendFailWrongDigest
+TestCheckpoint.testNameNodeImageSendFailWrongSize
+TestCheckpoint.testNamespaceVerifiedOnFileTransfer
+TestCheckpoint.testReloadOnEditReplayFailure
+TestCheckpoint.testSaveNamespace
+TestCheckpoint.testSecondaryFailsWithErrorBeforeSettingHeaders
+TestCheckpoint.testSecondaryImageDownload
+TestCheckpoint.testSecondaryNameNodeLocking
+TestCheckpoint.testSecondaryNameNodeWithDelegationTokens
+TestCheckpoint.testSecondaryNamenodeError1
+TestCheckpoint.testSecondaryNamenodeError2
+TestCheckpoint.testSecondaryNamenodeError3
+TestCheckpoint.testSecondaryPurgesEditLogs
+TestCheckpoint.testStorageAlreadyLockedErrorMessage
+TestCheckpoint.testTooManyEditReplayFailures
+TestComparators.testAllUserComparators
+TestComparators.testBakedUserComparator
+TestComparators.testDefaultMRComparator
+TestComparators.testUserMRComparator
+TestComparators.testUserValueGroupingComparator
+TestCompressionEmulationUtils.testCompressibleGridmixRecord
+TestCompressionEmulationUtils.testCompressionRatios
+TestCompressionEmulationUtils.testFileQueueDecompression
+TestCompressionEmulationUtils.testPossiblyCompressedDecompressedStreams
+TestCompressionEmulationUtils.testRandomCompressedTextDataGenerator
+TestCopyToLocal.testCopy
+TestCopyToLocal.testCopySingleFile
+TestCopyToLocal.testCopyWithThreads
+TestCopyToLocal.testCopyWithThreadsAndQueueSize
+TestCopyToLocal.testCopyWithThreadsAndQueueSizeWrong
+TestDataDrivenDBInputFormat.testDateSplits
+TestDatanodeDeath.testComplex
+TestDatanodeDeath.testSimple0
+TestDatanodeDeath.testSimple1
+TestDatanodeDeath.testSimple2
+TestDecommissionWithStriped.testCountNodes
+TestDecommissionWithStriped.testDecommission2NodeWithBusyNode
+TestDecommissionWithStriped.testDecommissionTwoNodes
+TestDecommissionWithStriped.testDecommissionWithBusyNode
+TestDecommissionWithStriped.testDecommissionWithFailedReplicating
+TestDecommissionWithStriped.testDecommissionWithMissingBlock
+TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup
+TestDecommissionWithStriped.testFileChecksumAfterDecommission
+TestDecommissionWithStriped.testFileFullBlockGroup
+TestDecommissionWithStriped.testFileMultipleBlockGroups
+TestDecommissionWithStriped.testFileSmallerThanOneCell
+TestDecommissionWithStriped.testFileSmallerThanOneStripe
+TestDecommissionWithStriped.testRecoveryWithDecommission
+TestDirectoryCommitterScale.test_010_createTaskFiles
+TestDirectoryCommitterScale.test_030_commitFiles
+TestDirectoryCommitterScale.test_040_abortFiles
+TestDistCh.testDistCh
+TestFSEditLogLoader.testAddNewStripedBlock
+TestFSEditLogLoader.testDisplayRecentEditLogOpCodes
+TestFSEditLogLoader.testErasureCodingPolicyOperations
+TestFSEditLogLoader.testFSEditLogOpCodes
+TestFSEditLogLoader.testHasNonEcBlockUsingStripedIDForAddBlock
+TestFSEditLogLoader.testHasNonEcBlockUsingStripedIDForUpdateBlocks
+TestFSEditLogLoader.testReplicationAdjusted
+TestFSEditLogLoader.testUpdateStripedBlocks
+TestFSEditLogLoader.testValidateEmptyEditLog
+TestFileOutputCommitter.testAbortV1
+TestFileOutputCommitter.testCommitterV1
+TestFileOutputCommitter.testCommitterV2
+TestFileOutputCommitter.testCommitterWithDuplicatedCommitV1
+TestFileOutputCommitter.testCommitterWithDuplicatedCommitV2
+TestFileOutputCommitter.testCommitterWithFailureV1
+TestFileOutputCommitter.testCommitterWithFailureV2
+TestFileOutputCommitter.testMapFileOutputCommitterV2
+TestFileOutputCommitter.testMapOnlyNoOutputV1
+TestFileOutputCommitter.testMapOnlyNoOutputV2
+TestFileOutputCommitter.testRecoveryUpgradeV1V2
+TestFileOutputCommitter.testRecoveryV1
+TestFileOutputCommitter.testRecoveryV2
+TestFileSystemAccessService.createFileSystem
+TestFileSystemAccessService.fileSystemCache
+TestFileSystemAccessService.fileSystemExecutor
+TestFileSystemAccessService.serviceHadoopConf
+TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+TestFsVolumeList.testExcludeSlowDiskWhenChoosingVolume
+TestFsVolumeList.testGetNextVolumeWithClosedVolume
+TestFsVolumeList.testInstanceOfAddReplicaThreadPool
+TestHDFSCLI.testAll
+TestHFlush.hFlush_01
+TestHFlush.hFlush_02
+TestHFlush.hFlush_03
+TestHFlush.hSyncEndBlockAndUpdateLength
+TestHFlush.hSyncEndBlock_00
+TestHFlush.hSyncEndBlock_01
+TestHFlush.hSyncEndBlock_02
+TestHFlush.hSyncEndBlock_03
+TestHFlush.hSyncUpdateLength_00
+TestHFlush.hSyncUpdateLength_01
+TestHFlush.hSyncUpdateLength_02
+TestHFlush.hSyncUpdateLength_03
+TestHFlush.testHFlushInterrupted
+TestHFlush.testPipelineHeartbeat
+TestHadoopArchives.testReadFileContent
+TestHttpFSServer.testAccess
+TestHttpFSServer.testAllowSnapshot
+TestHttpFSServer.testContentType
+TestHttpFSServer.testCreateFileWithUnmaskedPermissions
+TestHttpFSServer.testCreateSnapshot
+TestHttpFSServer.testCreateSnapshotNoSnapshotName
+TestHttpFSServer.testCustomizedUserAndGroupNames
+TestHttpFSServer.testDelegationTokenOperations
+TestHttpFSServer.testDelegationTokenOperationsSsl
+TestHttpFSServer.testDeleteSnapshot
+TestHttpFSServer.testDirAcls
+TestHttpFSServer.testDisallowSnapshot
+TestHttpFSServer.testDisallowSnapshotException
+TestHttpFSServer.testECPolicy
+TestHttpFSServer.testErasureCodingPolicy
+TestHttpFSServer.testFileAcls
+TestHttpFSServer.testGetFileBlockLocations
+TestHttpFSServer.testGetServerDefaults
+TestHttpFSServer.testGetSnapshotDiff
+TestHttpFSServer.testGetSnapshotDiffIllegalParam
+TestHttpFSServer.testGetSnapshotList
+TestHttpFSServer.testGetSnapshottableDirectoryList
+TestHttpFSServer.testGetTrashRoot
+TestHttpFSServer.testGlobFilter
+TestHttpFSServer.testHdfsAccess
+TestHttpFSServer.testMkdirWithUnmaskedPermissions
+TestHttpFSServer.testMkdirs
+TestHttpFSServer.testNoRedirect
+TestHttpFSServer.testNoRedirectWithData
+TestHttpFSServer.testOpenOffsetLength
+TestHttpFSServer.testPerms
+TestHttpFSServer.testRenameSnapshot
+TestHttpFSServer.testStoragePolicySatisfier
+TestHttpFSServer.testXAttrs
+TestKeyFieldBasedComparator.testBasicUnixComparator
+TestLineRecordReaderJobs.testCustomRecordDelimiters
+TestLineRecordReaderJobs.testDefaultRecordDelimiters
+TestMRKeyFieldBasedComparator.testBasicUnixComparator
+TestMapRed.testBiggerInput
+TestMapRed.testCompression
+TestMapRed.testMapred
+TestMapRed.testNullKeys
+TestMapRed.testSmallInput
+TestMapReduce.testMapred
+TestMultipleCachefiles.testMultipleCachefiles
+TestNameserviceRPCMetrics.testProxyOp
+TestNameserviceRPCMetrics.testProxyOpCompleteConcurrent
+TestRMFailover.testAutomaticFailover
+TestRMFailover.testEmbeddedWebAppProxy
+TestRMFailover.testExplicitFailover
+TestRMFailover.testRMWebAppRedirect
+TestRMFailover.testUncaughtExceptionHandlerWithHAEnabled
+TestRMFailover.testWebAppProxyInStandAloneMode
+TestReencryption.testCancelFutureThenReencrypt
+TestReencryption.testCancelFutureThenRestart
+TestReencryption.testDeleteDuringReencrypt
+TestReencryption.testRaceCreateHandler
+TestReencryption.testRaceDeleteCreateHandler
+TestReencryption.testRaceDeleteCreateUpdater
+TestReencryption.testRaceDeleteCurrentDirHandler
+TestReencryption.testRaceDeleteCurrentDirUpdater
+TestReencryption.testRaceDeleteHandler
+TestReencryption.testRaceDeleteUpdater
+TestReencryption.testRaceDeleteZoneHandler
+TestReencryption.testReencryptCancel
+TestReencryption.testReencryptCancelForUpdater
+TestReencryption.testReencryptCommandsQueuedOrdering
+TestReencryption.testReencryptLoadedFromEdits
+TestReencryption.testReencryptLoadedFromFsimage
+TestReencryption.testReencryptNestedZones
+TestReencryption.testReencryptOrdering
+TestReencryption.testReencryptRaceRename
+TestReencryption.testReencryptSnapshots
+TestReencryption.testReencryptionBasic
+TestReencryption.testReencryptionKMSDown
+TestReencryption.testReencryptionNNSafeMode
+TestReencryption.testReencryptionUpdaterFaultCkpt
+TestReencryption.testReencryptionUpdaterFaultOneTask
+TestReencryption.testReencryptionUpdaterFaultRecover
+TestReencryption.testReencryptionWithoutProvider
+TestReencryption.testRestartAfterReencrypt
+TestReencryption.testRestartAfterReencryptAndCheckpoint
+TestReencryption.testRestartDuringReencrypt
+TestReencryption.testRestartWithRenames
+TestReencryption.testZoneDeleteDuringReencrypt
+TestReplaceDatanodeOnFailure.testAppend
+TestReplaceDatanodeOnFailure.testBestEffort
+TestReplaceDatanodeOnFailure.testDefaultPolicy
+TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure
+TestRouterAllResolver.testHashAll
+TestRouterAllResolver.testRandomAll
+TestRouterAllResolver.testSpaceAll
+TestStoragePolicySatisfierWithStripedFile.testMoverWithFullStripe
+TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+TestStoragePolicySatisfierWithStripedFile.testWhenNoTargetDatanodeToSatisfyStoragePolicy
+TestStoragePolicySatisfierWithStripedFile.testWhenOnlyFewTargetNodesAreAvailableToSatisfyStoragePolicy
+TestStreamAggregate.testCommandLine
+TestStreamXmlRecordReader.testStreamXmlRecordReader
+TestStreaming.testCommandLine
+TestViewFileSystemLinkRegex.testConfLinkRegexFixedDestMapping
+TestViewFileSystemLinkRegex.testConfLinkRegexIndexMapping
+TestViewFileSystemLinkRegex.testConfLinkRegexNamedGroupMapping
+TestViewFileSystemLinkRegex.testConfLinkRegexWithInterceptors
+TestViewFileSystemLinkRegex.testConfLinkRegexWithSingleInterceptor
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/pom-hadoop.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/pom-hadoop.xml
new file mode 100644
index 00000000..9960fc0b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/pom-hadoop.xml
@@ -0,0 +1,963 @@
+
+
+
+ 4.0.0
+ org.apache.hadoop
+ hadoop-main
+ 3.4.0-SNAPSHOT
+ Apache Hadoop Main
+ Apache Hadoop Main
+ pom
+
+
+
+
+ com.cenqua.clover
+ clover
+
+ 3.0.2
+
+
+ org.opentest4j
+ opentest4j
+
+ 1.2.0
+ test
+
+
+
+
+
+
+ ${distMgmtStagingId}
+ ${distMgmtStagingName}
+ ${distMgmtStagingUrl}
+
+
+ ${distMgmtSnapshotsId}
+ ${distMgmtSnapshotsName}
+ ${distMgmtSnapshotsUrl}
+
+
+ apache.website
+ scpexe://people.apache.org/www/hadoop.apache.org/docs/r${project.version}
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+
+
+ ${distMgmtSnapshotsId}
+ ${distMgmtSnapshotsName}
+ ${distMgmtSnapshotsUrl}
+
+
+ repository.jboss.org
+ https://repository.jboss.org/nexus/content/groups/public/
+
+ false
+
+
+
+
+
+
+ Apache License, Version 2.0
+ https://www.apache.org/licenses/LICENSE-2.0.txt
+
+
+
+
+ Apache Software Foundation
+ https://www.apache.org
+
+
+
+
+ 3.4.0-SNAPSHOT
+
+ apache.snapshots.https
+ Apache Development Snapshot Repository
+ https://repository.apache.org/content/repositories/snapshots
+ apache.staging.https
+ Apache Release Distribution Repository
+ https://repository.apache.org/service/local/staging/deploy/maven2
+
+
+ UTF-8
+ UTF-8
+
+
+ 2.8.1
+ 3.9.1
+ 1.5
+ 1.7
+ 2.4
+ 3.0.2
+ 3.0.0
+ 2.0.0
+ 3.0.1
+ 1.5
+ 1.5
+ 3.0.1
+ 0.12
+ 2.4
+ 4.4.1
+ 2.5.0
+ 1.0.0
+ 3.1.0
+ 8.29
+ 7.1.1
+ 4.2.2
+ 4.2.0
+ 1.1.1
+ 3.8.1
+ 2.7.6
+
+ bash
+
+ org.fusesource.leveldbjni
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+
+
+
+ hadoop-project
+ hadoop-project-dist
+ hadoop-assemblies
+ hadoop-maven-plugins
+ hadoop-common-project
+ hadoop-hdfs-project
+ hadoop-yarn-project
+ hadoop-mapreduce-project
+ hadoop-tools
+ hadoop-dist
+ hadoop-minicluster
+ hadoop-client-modules
+ hadoop-build-tools
+ hadoop-cloud-storage-project
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ ${maven-dependency-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${maven-enforcer-plugin.version}
+
+
+
+ [3.0.2,)
+
+
+ [1.8,)
+
+
+
+
+
+ de.skuzzle.enforcer
+ restrict-imports-enforcer-rule
+ ${restrict-imports.enforcer.version}
+
+
+
+
+ banned-illegal-imports
+ process-sources
+
+ enforce
+
+
+
+
+ true
+ Use hadoop-thirdparty shaded instead of curator shaded
+
+ org.apache.curator.shaded.**
+
+
+
+ true
+ Use hadoop-common provided Sets rather than Guava provided Sets
+
+ org.apache.hadoop.thirdparty.com.google.common.collect.Sets
+ org.apache.hadoop.thirdparty.com.google.common.collect.Sets.**
+
+
+
+ true
+ Use hadoop-common provided Lists rather than Guava provided Lists
+
+ org.apache.hadoop.thirdparty.com.google.common.collect.Lists
+ org.apache.hadoop.thirdparty.com.google.common.collect.Lists.**
+
+
+
+ true
+ Use hadoop-annotation provided VisibleForTesting rather than the one provided by Guava
+
+ org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting
+
+
+
+ true
+ Use alternatives to Guava common classes
+
+ com.google.common.**
+
+
+
+ true
+ Use alternative to Guava provided BaseEncoding
+
+ org.apache.hadoop.thirdparty.com.google.common.io.BaseEncoding
+ org.apache.hadoop.thirdparty.com.google.common.io.BaseEncoding.**
+
+
+
+ true
+ Use alternative to Guava provided Optional
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Optional
+ org.apache.hadoop.thirdparty.com.google.common.base.Optional.**
+
+
+
+ true
+ Use alternative to Guava provided Function
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Function
+ org.apache.hadoop.thirdparty.com.google.common.base.Function.**
+
+
+
+ true
+ Use alternative to Guava provided Predicate
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Predicate
+ org.apache.hadoop.thirdparty.com.google.common.base.Predicate.**
+
+
+
+ true
+ Use alternative to Guava provided Supplier
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Supplier
+ org.apache.hadoop.thirdparty.com.google.common.base.Supplier.**
+
+
+
+ true
+ Use alternative to Guava provided ImmutableListMultimap
+
+ org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableListMultimap
+ org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableListMultimap.**
+
+
+
+ true
+ Use hadoop-common provided Preconditions rather than Guava provided
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Preconditions
+ org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.**
+
+
+
+ true
+ Use Fasterxml Jackson 2 dependency in place of org.codehaus Jackson 1
+
+ org.codehaus.jackson.**
+
+
+
+ true
+ Use HttpServlet APIs instead
+
+ org.glassfish.grizzly
+ org.glassfish.grizzly.**
+
+
+
+ true
+ Use slf4j based Logger
+
+ org.apache.commons.logging.**
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+ ${maven-assembly-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-deploy-plugin
+ ${maven-deploy-plugin.version}
+
+
+ org.apache.rat
+ apache-rat-plugin
+ ${apache-rat-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ ${maven-antrun-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-site-plugin
+ ${maven-site-plugin.version}
+
+
+ org.apache.maven.wagon
+ wagon-ssh
+ ${wagon-ssh.version}
+
+
+
+
+
+ org.eclipse.m2e
+ lifecycle-mapping
+ ${lifecycle-mapping.version}
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ [1.7,)
+
+ run
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-resources-plugin
+ [2.2,)
+
+ testResources
+ resources
+
+
+
+
+
+
+
+
+ org.apache.avro
+ avro-maven-plugin
+ [1.5.3,)
+
+ schema
+ protocol
+
+
+
+
+
+
+
+
+ org.codehaus.mojo.jspc
+ jspc-maven-plugin
+ [2.0-alpha-3,)
+
+ compile
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ [2.4,)
+
+ copy-dependencies
+ build-classpath
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ exec-maven-plugin
+ [1.2,)
+
+ exec
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-jar-plugin
+ [2.3.1,)
+
+ test-jar
+
+
+
+
+
+
+
+
+
+
+
+ org.openclover
+ clover-maven-plugin
+ ${clover-maven-plugin.version}
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ ${maven-bundle-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven-checkstyle-plugin.version}
+
+
+ org.apache.hadoop
+ hadoop-build-tools
+ ${hadoop.version}
+
+
+ com.puppycrawl.tools
+ checkstyle
+ ${checkstyle.version}
+
+
+
+ checkstyle/checkstyle.xml
+ checkstyle/suppressions.xml
+ true
+ false
+ ${project.build.directory}/test/checkstyle-errors.xml
+
+
+
+ org.owasp
+ dependency-check-maven
+ ${dependency-check-maven.version}
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ ${spotbugs-maven-plugin.version}
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+
+ org.jsonschema2pojo
+ jsonschema2pojo-maven-plugin
+ ${jsonschema2pojo-maven-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ ${maven-compiler-plugin.version}
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ false
+
+
+ clean
+
+ enforce
+
+ pre-clean
+
+
+ default
+
+ enforce
+
+ validate
+
+
+ site
+
+ enforce
+
+ pre-site
+
+
+ enforce-property
+
+ enforce
+
+
+
+
+ hadoop.version
+ You must set a hadoop.version to be the same as ${project.version}
+ ${project.version}
+ The hadoop.version property should be set and should be ${project.version}.
+
+
+ true
+
+
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ .gitattributes
+ .gitignore
+ .git/**
+ .github/pull_request_template.md
+ .idea/**
+ **/build/**
+ **/patchprocess/**
+ **/*.js
+ licenses/**
+ licenses-binary/**
+ dev-support/docker/pkg-resolver/packages.json
+ dev-support/docker/pkg-resolver/platforms.json
+ **/target/**
+
+
+
+
+ maven-site-plugin
+
+
+ attach-descriptor
+
+ attach-descriptor
+
+
+
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ true
+ true
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven-checkstyle-plugin.version}
+
+
+
+ org.owasp
+ dependency-check-maven
+ ${dependency-check-maven.version}
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ ${cyclonedx.version}
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+ 1.8
+ 1.8
+ true
+ true
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+ --add-exports=java.base/sun.net.spi.nameservice=ALL-UNNAMED
+ --add-opens=java.base/sun.net.spi.nameservice=ALL-UNNAMED
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+
+
+ true
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+ ${maven-javadoc-plugin.version}
+ false
+
+
+ aggregate
+
+ 1024m
+ true
+ false
+ ${maven.compile.source}
+ ${maven.compile.encoding}
+ ${project.build.directory}/site
+ hadoop-project/api
+
+ org.apache.hadoop.authentication*,org.apache.hadoop.mapreduce.v2.proto,org.apache.hadoop.yarn.proto,org.apache.hadoop.yarn.server*,org.apache.hadoop.yarn.webapp*
+
+
+ Common
+ org.apache.hadoop*
+
+
+ HDFS
+ org.apache.hadoop.hdfs*
+
+
+ MapReduce
+ org.apache.hadoop.mapred*
+
+
+ YARN
+ org.apache.hadoop.yarn*
+
+
+ org.apache.hadoop.classification.tools.IncludePublicAnnotationsStandardDoclet
+
+
+ org.apache.hadoop
+ hadoop-annotations
+ ${project.version}
+
+
+ true
+
+
+ false
+
+
+
+ org.apache.hadoop:hadoop-annotations
+
+
+
+
+ aggregate
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ ${maven-dependency-plugin.version}
+
+
+
+ analyze-report
+
+
+
+
+
+
+
+
+
+ src
+
+ false
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+ false
+
+
+ src-dist
+ package
+
+ single
+
+
+ false
+ false
+ hadoop-${project.version}-src
+ hadoop-dist/target
+
+
+
+ hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ false
+
+
+ src-dist-msg
+ package
+
+ run
+
+
+
+
+ Hadoop source tar available at: ${basedir}/hadoop-dist/target/hadoop-${project.version}-src.tar.gz
+
+
+
+
+
+
+
+
+
+
+
+ dist
+
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ ${cyclonedx.version}
+
+
+ package
+
+ makeBom
+
+
+
+
+ xml
+
+
+
+
+
+
+
+ sign
+
+
+
+ org.apache.maven.plugins
+ maven-gpg-plugin
+ ${maven-gpg-plugin.version}
+
+
+ sign-artifacts
+ verify
+
+ sign
+
+
+
+
+
+
+
+
+ clover
+
+ false
+
+ clover
+
+
+
+ ${project.build.directory}/clover/hadoop-coverage.db
+
+ true
+ true
+ true
+ false
+
+
+
+
+ org.openclover
+ clover-maven-plugin
+
+ false
+ true
+ ${cloverDatabase}
+ 50%
+ ${project.build.directory}/clover
+ ${cloverAlwaysReport}
+ ${cloverGenHtml}
+ ${cloverGenXml}
+ ${cloverGenHistorical}
+
+ **/examples/**/*.java
+ **/hamlet/*.java
+ **/ha/proto/*.java
+ **/protocol/proto/*.java
+ **/compiler/generated/*.java
+ **/protobuf/*.java
+ **/v2/proto/*.java
+ **/yarn/proto/*.java
+ **/security/proto/*.java
+ **/tools/proto/*.java
+ **/hs/proto/*.java
+
+
+
+
+ clover-setup
+ process-sources
+
+ setup
+
+
+
+ clover
+ test
+
+ clover
+
+
+
+
+
+
+
+
+ aarch64
+
+ org.openlabtesting.leveldbjni
+
+
+
+ linux
+ aarch64
+
+
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.conf
new file mode 100644
index 00000000..c6d1e4d7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data
new file mode 100644
index 00000000..397dd50a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode!!!DFSInputStream.java:637!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.conf
new file mode 100644
index 00000000..4d3874cb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data
new file mode 100644
index 00000000..ec89adb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.conf
new file mode 100644
index 00000000..e0e4dd26
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data
new file mode 100644
index 00000000..592cf524
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//warmUpEncryptedKeys//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java#L301!!!org.apache.hadoop.fs.FSInputChecker.readChecksumChunk!!!org.apache.hadoop.fs.FSInputChecker.readChunk!!!FSInputChecker.java:305!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.conf
new file mode 100644
index 00000000..f4d70289
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data
new file mode 100644
index 00000000..6ef0bfcf
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java#L301!!!org.apache.hadoop.fs.FSInputChecker.readChecksumChunk!!!org.apache.hadoop.fs.FSInputChecker.readChunk!!!FSInputChecker.java:305!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.conf
new file mode 100644
index 00000000..dd07e7ce
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data
new file mode 100644
index 00000000..a95e0b33
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L1141!!!org.apache.hadoop.hdfs.DFSOutputStream.addBlock!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock!!!DFSOutputStream.java:1143!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.conf
new file mode 100644
index 00000000..cf340a0c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data
new file mode 100644
index 00000000..b2573113
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1177!!!org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:1181!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.conf
new file mode 100644
index 00000000..eec6fbfc
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data
new file mode 100644
index 00000000..47b321ca
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L521!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.checksumBlock!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.tryDatanode!!!FileChecksumHelper.java:523!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.conf
new file mode 100644
index 00000000..225538d3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data
new file mode 100644
index 00000000..3dc83d98
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L344!!!org.apache.hadoop.hdfs.DFSInputStream.readBlockLength!!!org.apache.hadoop.hdfs.DFSUtilClient.createClientDatanodeProtocolProxy!!!DFSInputStream.java:348!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.conf
new file mode 100644
index 00000000..a3eaff9b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.conf
new file mode 100644
index 00000000..ce7498d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data
new file mode 100644
index 00000000..f3ab6767
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1550!!!org.apache.hadoop.hdfs.DataStreamer.transfer!!!org.apache.hadoop.hdfs.DataStreamer$StreamerStreams.sendTransferBlock!!!DataStreamer.java:1558!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.conf
new file mode 100644
index 00000000..6cc9d084
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.conf
new file mode 100644
index 00000000..71483c63
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data
new file mode 100644
index 00000000..81cd9bf5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java#L895!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake!!!BPServiceActor.java:903!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.conf
new file mode 100644
index 00000000..66b77e5a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data
new file mode 100644
index 00000000..4c0affa3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:645!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.conf
new file mode 100644
index 00000000..3a7d402e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data
new file mode 100644
index 00000000..0943556c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getInputStream!!!DataStreamer.java:1837!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.conf
new file mode 100644
index 00000000..c618b37a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.conf
new file mode 100644
index 00000000..ccff78e0
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data
new file mode 100644
index 00000000..4271ce5d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L613!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!Client.java:668!!!java.net.ConnectException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.conf
new file mode 100644
index 00000000..b82ae3f5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data
new file mode 100644
index 00000000..18318d46
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline!!!DataStreamer.java:1832!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.conf
new file mode 100644
index 00000000..b50a14b4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data
new file mode 100644
index 00000000..a7d82f22
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L291!!!org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.create!!!DFSOutputStream.java:294!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.conf
new file mode 100644
index 00000000..caba43a4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data
new file mode 100644
index 00000000..e463af73
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount!!!SecondaryNameNode.java:358!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.conf
new file mode 100644
index 00000000..21995051
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data
new file mode 100644
index 00000000..a262fcd3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:245!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.conf
new file mode 100644
index 00000000..83b00166
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data
new file mode 100644
index 00000000..de18cb5d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java#L436!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.processTask!!!ReencryptionUpdater.java:440!!!RetriableException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.conf
new file mode 100644
index 00000000..adcf412b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.conf
new file mode 100644
index 00000000..12e0a377
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data
new file mode 100644
index 00000000..7ad9b323
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java#L203!!!org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock!!!org.apache.hadoop.hdfs.server.balancer.KeyManager.getAccessToken!!!ExternalSPSBlockMoveTaskHandler.java:206!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.conf
new file mode 100644
index 00000000..bbb0a548
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data
new file mode 100644
index 00000000..ec89adb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.conf
new file mode 100644
index 00000000..dc3f7016
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.conf
new file mode 100644
index 00000000..b58e3720
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data
new file mode 100644
index 00000000..53fc96a6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java#L375!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal!!!FileOutputCommitter.java:377!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.conf
new file mode 100644
index 00000000..b7c588be
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.conf
new file mode 100644
index 00000000..f92bce47
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.conf
new file mode 100644
index 00000000..cf775654
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data
new file mode 100644
index 00000000..3f2b005e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.conf
new file mode 100644
index 00000000..0415330e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.conf
new file mode 100644
index 00000000..1a651be8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data
new file mode 100644
index 00000000..3f2b005e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.conf
new file mode 100644
index 00000000..c0b72643
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data
new file mode 100644
index 00000000..3f2b005e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.conf
new file mode 100644
index 00000000..b6568012
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data
new file mode 100644
index 00000000..ce98ff5b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1251!!!org.apache.hadoop.mapred.Task.done!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending!!!Task.java:1253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.conf
new file mode 100644
index 00000000..497bfb35
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data
new file mode 100644
index 00000000..7ac78106
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L979!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.run!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.relogin!!!UserGroupInformation.java:986!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.conf
new file mode 100644
index 00000000..1839ee15
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data
new file mode 100644
index 00000000..cbc4b54c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L235!!!org.apache.hadoop.hdfs.DFSInputStream.openInfo!!!org.apache.hadoop.hdfs.DFSInputStream.getLastBlockLength!!!DFSInputStream.java:243!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.conf
new file mode 100644
index 00000000..06bfbc0f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data
new file mode 100644
index 00000000..56af3faa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L240!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.server.datanode.DataXceiver.create!!!DataXceiverServer.java:253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.conf
new file mode 100644
index 00000000..f29fc968
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.conf
new file mode 100644
index 00000000..61d0260a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data
new file mode 100644
index 00000000..ce98ff5b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1251!!!org.apache.hadoop.mapred.Task.done!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending!!!Task.java:1253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.conf
new file mode 100644
index 00000000..e7049c21
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data
new file mode 100644
index 00000000..2b4f0088
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java#L64!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.run!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.getMapCompletionEvents!!!EventFetcher.java:66!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.conf
new file mode 100644
index 00000000..d56df585
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data
new file mode 100644
index 00000000..ec89adb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.conf
new file mode 100644
index 00000000..667e321f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data
new file mode 100644
index 00000000..0943556c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getInputStream!!!DataStreamer.java:1837!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.conf
new file mode 100644
index 00000000..10e261c3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data
new file mode 100644
index 00000000..56af3faa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L240!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.server.datanode.DataXceiver.create!!!DataXceiverServer.java:253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.conf
new file mode 100644
index 00000000..fc63c9d5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data
new file mode 100644
index 00000000..ca8b1acd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L748!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus!!!SimpleCopyListing.java:757!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.conf
new file mode 100644
index 00000000..7e6114c8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data
new file mode 100644
index 00000000..6ea9f2fb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java#L853!!!org.apache.hadoop.ha.ActiveStandbyElector.reEstablishSession!!!org.apache.hadoop.ha.ActiveStandbyElector.createConnection!!!ActiveStandbyElector.java:858!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase.conf
new file mode 100644
index 00000000..ba08c39f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase.conf
@@ -0,0 +1,3 @@
+retry_data_file: /home/bastoica/projects/wasabi/tool/wasabi/config/hbase/hbase_retry_locations.data
+injection_policy: max-count
+max_injection_count: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_retry_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_retry_bounds.data
new file mode 100644
index 00000000..4ea9802f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_retry_bounds.data
@@ -0,0 +1,158 @@
+Var name!!!Assigned value!!!Assign method!!!Test class
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!AbstractTestShell
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestShellRSGroups
+hbase.client.retries.number!!!100!!!setInt!!!IntegrationTestMobCompaction
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestShadeSaslAuthenticationProvider
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRefreshHFilesBase
+ReadOnlyZKClient.RECOVERY_RETRY!!!3!!!setInt!!!TestReadOnlyZKClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncCoprocessorOnAllRegionServersEndpoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestCoprocessorEndpoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncCoprocessorEndpoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestZstdDictionarySplitMerge
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestBackupDeleteWithFailures
+"hbase.client.retries.number"!!!"3"!!!setInt!!!TestThriftHBaseServiceHandler
+"hbase.client.retries.number"!!!"3"!!!setInt!!!TestThriftHBaseServiceHandler
+"hbase.client.retries.number"!!!"3"!!!setInt!!!TestThriftHBaseServiceHandlerWithReadOnly
+"hbase.client.retries.number"!!!3!!!setInt!!!TestThriftServer
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestExportSnapshotV1NoCluster
+DFSConfigKeys.DFS_CLIENT_RETRY_WINDOW_BASE!!!0!!!setInt!!!TestFSUtils
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!TestNamespaceAuditor
+hbase.client.retries.number!!!100!!!setInt!!!MobStressToolRunner
+hbase.client.retries.number!!!100!!!setInt!!!TestRSMobFileCleanerChore
+hbase.client.retries.number!!!100!!!setInt!!!TestMobFileCleanerChore
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestBulkLoadHFilesSplitRecovery
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestBulkLoadHFilesSplitRecovery
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestRestoreFlushSnapshotFromClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRegionServerCoprocessorExceptionWithAbort
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRegionServerCoprocessorExceptionWithAbort
+dfs.client.block.recovery.retries!!!2!!!setInt!!!TestWALObserver
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestMasterCoprocessorExceptionWithAbort
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestIncrementAndAppendWithNullResult
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestNegativeMemStoreSizeWithSlowCoprocessor
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setLong!!!TestClientOperationTimeout
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestPassCustomCellViaRegionObserver
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestFullLogReconstruction
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestFullLogReconstruction
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicationBase
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestReplicationBase
+replication.source.maxretriesmultiplier!!!10!!!setInt!!!TestReplicationBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestReplicationBase
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicationWithTags
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestReplicationWithTags
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestReplicationWithTags
+zookeeper.recovery.retry!!!1!!!setInt!!!SyncReplicationTestBase
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!SyncReplicationTestBase
+replication.source.maxretriesmultiplier!!!10!!!setInt!!!SyncReplicationTestBase
+hbase.security.relogin.maxretries!!!1!!!setInt!!!TestRpcSkipInitialSaslHandshake
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRpcClientLeaks
+zookeeper.recovery.retry!!!1!!!setInt!!!TestMetaRegionReplicaReplication
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestMetaRegionReplicaReplication
+HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER!!!1!!!setInt!!!TestMetaRegionReplicaReplication
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestWALEntryStreamCompressionReset
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestBasicWALEntryStream
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestBasicWALEntryStream
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestBasicWALEntryStream
+zookeeper.recovery.retry!!!1!!!setInt!!!TestRegionReplicaReplication
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestRegionReplicaReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!TestRegionReplicaReplication
+HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER!!!1!!!setInt!!!TestRegionReplicaReplication
+hbase.security.relogin.maxretries!!!1!!!setInt!!!TestSecurityRpcSentBytesMetrics
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicationWithWALExtendedAttributes
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestReplicationWithWALExtendedAttributes
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestReplicationWithWALExtendedAttributes
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestBoundedRegionGroupingStrategy
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestBoundedRegionGroupingStrategy
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestFSHLogProvider
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestFSHLogProvider
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestWALFactory
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestWALFactory
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestSplitMerge
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestRegionServerScan
+RegionReplicationSink.RETRIES_NUMBER!!!1!!!setInt!!!TestRegionReplicationForWriteException
+RegionReplicationSink.RETRIES_NUMBER!!!1!!!setInt!!!TestRegionReplicationForFlushMarker
+RegionReplicationSink.RETRIES_NUMBER!!!15!!!setInt!!!TestRegionReplicationSinkCallbackAndFlushConcurrently
+hbase.client.retries.number!!!2!!!setInt!!!TestIsDeleteFailure
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!TestEndToEndSplitTransaction
+zookeeper.recovery.retry!!!0!!!setInt!!!TestRemoveRegionMetrics
+zookeeper.recovery.retry!!!0!!!setInt!!!TestRegionServerMetrics
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestSettingTimeoutOnBlockingPoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestRegionInterrupt
+dfs.client.block.write.retries!!!10!!!setInt!!!TestLogRollAbort
+FanOutOneBlockAsyncDFSOutputHelper.ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES!!!100!!!setInt!!!TestAsyncLogRolling
+dfs.client.block.recovery.retries!!!1!!!setInt!!!AbstractTestProtobufLog
+dfs.client.block.recovery.retries!!!2!!!setInt!!!AbstractTestWALReplay
+hbase.hstore.flush.retries.number!!!1!!!setInt!!!TestHRegion
+hbase.hstore.flush.retries.number!!!1!!!setInt!!!TestHRegion
+RegionReplicationSink.RETRIES_NUMBER!!!1!!!setInt!!!TestWALSyncTimeoutException
+dfs.client.block.write.retries!!!30!!!setInt!!!TestLogRolling
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!AbstractTestFSWAL
+dfs.client.block.recovery.retries!!!1!!!setInt!!!AbstractTestFSWAL
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestCompactionWithShippingCoprocessor
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestScannerTimeoutHandling
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestFSErrorsExposed
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestTags
+hbase.hstore.flush.retries.number!!!1!!!setInt!!!TestHStore
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!100!!!setInt!!!TestConnection
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestConnection
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!CloneSnapshotFromClientTestBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestTableOperationException
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncReplicationAdminApi
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestFromClientSide3
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestFromClientSide3
+zookeeper.recovery.retry!!!1!!!setInt!!!TestSeparateClientZKCluster
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicaWithCluster
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestBlockEvictionFromClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!100!!!setInt!!!TestAsyncAdminBuilder
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncClusterAdminApi2
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncAdminBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncProcedureAdminApi
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!AbstractTestCITimeout
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestReplicasClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestFromClientSideScanExcpetion
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestAvoidCellReferencesIntoShippedBlocks
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncQuotaAdminApi
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestMalformedCellFromClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!RestoreSnapshotFromClientTestBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestBadReplicationPeer
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestReplicationAdminForSyncReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncReplicationAdminApiWithClusters
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestAsyncClientPauseForRpcThrottling
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncNamespaceAdminApi
+hbase.client.retries.number!!!3!!!setInt!!!TestEntityLocks
+dfs.client.block.write.retries!!!30!!!setInt!!!TestAdmin2
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncClusterAdminApi
+dfs.client.block.write.retries!!!30!!!setInt!!!TestAsyncClusterAdminApi
+hbase.client.retries.number!!!1!!!setInt!!!TestCheckAndMutateWithByteBuff
+hbase.client.retries.number!!!6!!!setInt!!!TestAdminBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestAssignmentManagerMetrics
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestSimpleRegionNormalizerOnCluster
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestMaster
+zookeeper.recovery.retry!!!0!!!setInt!!!AbstractTestDLS
+WALProcedureStore.ROLL_RETRIES_CONF_KEY!!!10!!!setInt!!!TestWALProcedureStoreOnHDFS
+hbase.client.retries.number!!!1!!!setInt!!!TestMasterShutdown
+ReadOnlyZKClient.RECOVERY_RETRY!!!3!!!setInt!!!TestMasterShutdown
+ReadOnlyZKClient.RECOVERY_RETRY_INTERVAL_MILLIS!!!100!!!setInt!!!TestMasterShutdown
+zookeeper.recovery.retry!!!1!!!setInt!!!TestVisibilityLabelReplicationWithExpAsString
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestVisibilityLabelReplicationWithExpAsString
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestVisibilityLabelReplicationWithExpAsString
+zookeeper.recovery.retry!!!1!!!setInt!!!TestVisibilityLabelsReplication
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestVisibilityLabelsReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestVisibilityLabelsReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!CustomSaslAuthenticationProviderTestBase
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestNamespaceCommands
+hbase.security.relogin.maxretries!!!1!!!setInt!!!AbstractTestSecureIPC
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestMetaTableLocator
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!FilterTestingCluster
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestFilterWrapper
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!10!!!setInt!!!TestMetaTableAccessor
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestQuotaTableUtil
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestQuotaAdmin
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestQuotaThrottle
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_retry_locations.data
new file mode 100644
index 00000000..8446d3e7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_retry_locations.data
@@ -0,0 +1,137 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L114!!!org.apache.hadoop.hbase.io.FileLink.read!!!org.apache.hadoop.fs.FSDataInputStream.read!!!FileLink.java:117!!!java.io.FileNotFoundException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L164!!!org.apache.hadoop.hbase.io.FileLink.readFully!!!readFully!!!FileLink.java:166!!!java.io.FileNotFoundException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/RSProcedureDispatcher.java#L349!!!org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.run!!!org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.sendRequest!!!RSProcedureDispatcher.java:398!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L136!!!org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState!!!org.apache.hadoop.hbase.master.MasterServices.getProcedures!!!ServerCrashProcedure.java:278!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/SnapshotVerifyProcedure.java#L124!!!org.apache.hadoop.hbase.master.procedure.SnapshotVerifyProcedure.execute!!!org.apache.hadoop.hbase.master.procedure.ServerRemoteProcedure.execute!!!SnapshotVerifyProcedure.java:142!!!java.lang.InterruptedException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/SplitWALProcedure.java#L64!!!org.apache.hadoop.hbase.master.procedure.SplitWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.SplitWALManager.isSplitWALFinished!!!SplitWALProcedure.java:80!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/SwitchRpcThrottleProcedure.java#L65!!!org.apache.hadoop.hbase.master.procedure.SwitchRpcThrottleProcedure.executeFromState!!!org.apache.hadoop.hbase.master.procedure.SwitchRpcThrottleProcedure.switchThrottleState!!!SwitchRpcThrottleProcedure.java:70!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished!!!SyncReplicationReplayWALProcedure.java:75!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALRemoteProcedure.java#L89!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALRemoteProcedure.truncateWALs!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.finishReplayWAL!!!SyncReplicationReplayWALRemoteProcedure.java:92!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BootstrapNodeManager.java#L135!!!org.apache.hadoop.hbase.regionserver.BootstrapNodeManager.getFromMaster!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!BootstrapNodeManager.java:140!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java#L103!!!org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned!!!org.apache.hadoop.hbase.client.Admin.getClusterMetrics!!!HBaseFsckRepair.java:110!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java#L446!!!ZkSplitLogWorkerCoordination.getTaskList!!!listChildrenAndWatchForNewChildren!!!ZkSplitLogWorkerCoordination.java:449!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/HBaseServerBase.java#L336!!!org.apache.hadoop.hbase.HBaseServerBase.putUpWebUI!!!org.apache.hadoop.hbase.http.InfoServer.start!!!HBaseServerBase.java:348!!!java.net.BindException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!openRegion!!!TransitRegionStateProcedure.java:491!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmOpened!!!TransitRegionStateProcedure.java:494!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!closeRegion!!!TransitRegionStateProcedure.java:496!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmClosed!!!TransitRegionStateProcedure.java:499!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java#L215!!!org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders!!!listStatus!!!MasterWalManager.jav:234!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ClaimReplicationQueuesProcedure.java#L84!!!org.apache.hadoop.hbase.master.replication.ClaimReplicationQueuesProcedure.execute!!!removeQueue!!!ClaimReplicationQueuesProcedure.java:102!!!ReplicationException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!updatePeerStorage!!!ModifyPeerProcedure.java:205!!!ReplicationException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!reopenRegions!!!ModifyPeerProcedure.java:220!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!updateLastPushedSequenceIdForSerialPeer!!!ModifyPeerProcedure.java:231!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!enablePeer!!!ModifyPeerProcedure.java:238!!!ReplicationException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!postPeerModification!!!ModifyPeerProcedure.java:259!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L176!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!prePeerModification!!!ModifyPeerProcedure.java:188!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/RecoverStandbyProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure.executeFromState!!!renameToPeerReplayWALDir!!!RecoverStandbyProcedure.java:62!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/RecoverStandbyProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure.executeFromState!!!renameToPeerSnapshotWALDir!!!RecoverStandbyProcedure.java:84!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCleanerChore.java#L178!!!org.apache.hadoop.hbase.mob.MobFileCleanerChore.cleanupObsoleteMobFiles!!!initReader!!!MobFileCleanupUtil.java:125!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCleanupUtil.java#L100!!!org.apache.hadoop.hbase.mob.MobFileCleanerChore.cleanupObsoleteMobFiles!!!closeStoreFile!!!MobFileCleanupUtil.java:129!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L470!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock!!!FanOutOneBlockAsyncDFSOutputHelper.java:493!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L589!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!FanOutOneBlockAsyncDFSOutputHelper.java:592!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/FullTableBackupClient.java#L950!!!org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.snapshotTable!!!snapshot!!!FullTableBackupClient.java:209!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java#L250!!!org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!BlockingRpcConnection.java:258!!!java.net.SocketException
+https://github.com/apache/hbase/tree//e1ad781//hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java#L461!!!org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!BlockingRpcConnection.java:476!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-it/src/main/java/org/apache/hadoop/hbase/chaos/ChaosAgent.java#L412!!!org.apache.hadoop.hbase.chaos.ChaosAgent.execWithRetries!!!org.apache.hadoop.hbase.chaos.ChaosAgent.exec!!!ChaosAgent.java:414!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALInputFormat.java#L157!!!org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader!!!org.apache.hadoop.fs.Path.getFileSystem!!!WALInputFormat.java:162!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALInputFormat.java#L157!!!org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader!!!org.apache.hadoop.hbase.wal.WALFactory.createStreamReader!!!WALInputFormat.java:162!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.getLogFiles!!!WALProcedureStore.java:410!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLogs!!!WALProcedureStore.java:413!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter!!!WALProcedureStore.java:420!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.removeFile!!!WALProcedureStore.java:430!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L898!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncSlots!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncSlots!!!WALProcedureStore.java:900 !!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L950!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriterWithRetries!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter!!!WALProcedureStore.java:956!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readTag!!!RPCProtos.java:5020!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes!!!RPCProtos.java:5026!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes!!!RPCProtos.java:5031!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes!!!RPCProtos.java:5036!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readInt32!!!RPCProtos.java:5041!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5046!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5051!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3$Builder.parseUnknownField!!!RPCProtos.java:5056!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java#L233!!!org.apache.hadoop.hbase.replication.ZKReplicationQueueStorage.setWALPosition!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential!!!N/A!!!java.net.BindException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.fs.FileSystem.exists!!!HFileArchiver.java:555!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!HFileArchiver.java:556!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java#L215!!!org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders!!!org.apache.hadoop.fs.FileSystem.exists!!!MasterWalManager.java:233!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java#L215!!!org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders!!!org.apache.hadoop.hbase.util.CommonFSUtils.listStatus!!!MasterWalManager.java:234!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.setPeerNewSyncReplicationState!!!TransitPeerSyncReplicationStateProcedure.java:265!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.setLastPushedSequenceId!!!TransitPeerSyncReplicationStateProcedure.java:296!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.removeAllReplicationQueues !!!TransitPeerSyncReplicationStateProcedure.java:314!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.transitPeerSyncReplicationState!!!TransitPeerSyncReplicationStateProcedure.java:329!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.enablePeer!!!TransitPeerSyncReplicationStateProcedure.java:349!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.createDirForRemoteWAL!!!TransitPeerSyncReplicationStateProcedure.java:367!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.postTransit!!!TransitPeerSyncReplicationStateProcedure.java:381!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.createNodeIfNotExistsNoWatch!!!ClientZKSyncer.java:173!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.setData!!!ClientZKSyncer.java:175!!!org.apache.zookeeper.KeeperExceptionNoNodeException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.deleteDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode!!!ClientZKSyncer.java:198!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.reconnectAfterExpiration!!!ZKWatcher.reconnectAfterExpiration!!!ClientZKSyncer.java:216!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L195!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.deleteDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode!!!ClientZKSyncer.java:198!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/MetaRegionLocationCache.java#L107!!!org.apache.hadoop.hbase.MetaRegionLocationCache.loadMetaLocationsFromZk!!!org.apache.hadoop.hbase.zookeeper.ZKWatcher.getMetaReplicaNodesAndWatchChildren!!!MetaRegionLocationCache.java:109!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/MetaRegionLocationCache.java#L171!!!org.apache.hadoop.hbase.MetaRegionLocationCache.updateMetaLocation!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists!!!MetaRegionLocationCache.java:174!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/MetaRegionLocationCache.java#L171!!!org.apache.hadoop.hbase.MetaRegionLocationCache.updateMetaLocation!!!org.apache.hadoop.hbase.MetaRegionLocationCache.getMetaRegionLocation!!!MetaRegionLocationCache.java:181!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/namequeues/WALEventTrackerTableAccessor.java#L70!!!org.apache.hadoop.hbase.namequeues.WALEventTrackerTableAccessor.doPut!!!org.apache.hadoop.hbase.client.Connection.getTable!!!WALEventTrackerTableAccessor.java:71!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/namequeues/WALEventTrackerTableAccessor.java#L70!!!org.apache.hadoop.hbase.namequeues.WALEventTrackerTableAccessor.doPut!!!org.apache.hadoop.hbase.client.Table.put!!!WALEventTrackerTableAccessor.java:72!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/RegionReplicaFlushHandler.java#L107!!!org.apache.hadoop.hbase.regionserver.handler.RegionReplicaFlushHandler.triggerFlushInPrimaryRegion!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!RegionReplicaFlushHandler.java:114!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1078!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createDir!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.mkdirs!!!HRegionFileSystem.java:1080!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1101!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename!!!org.apache.hadoop.fs.FileSystem.rename!!!HRegionFileSystem.java:1103!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1126!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.deleteDir!!!org.apache.hadoop.fs.FileSystem.delete!!!HRegionFileSystem.java:1128!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1165!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createDirOnFileSystem!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!HRegionFileSystem.java:1167!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2505!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2590!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L817!!!org.apache.hadoop.hbase.regionserver.HStore.flushCache!!!org.apache.hadoop.hbase.regionserver.StoreFlusher.flushSnapshot!!!HStore.java:828!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$ReportProcedureDoneRequest$Builder.addResult!!!RemoteProcedureResultReporter.java:74!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone!!!RemoteProcedureResultReporter.java:89!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/FlushSnapshotSubprocedure.java#L113!!!org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!FlushSnapshotSubprocedure.java:114!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotRegionCallable.java#L57!!!org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!SnapshotRegionCallable.java:58!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java#L783!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archiveLogFile!!!AbstractFSWAL.java:923!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/DualAsyncFSWAL.java#L76!!!org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance!!!org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L452!!!org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate!!!org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.parallelReplicate!!!HBaseInterClusterReplicationEndpoint.java:461!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java#L169!!!org.apache.hadoop.hbase.replication.regionserver.HFileReplicator.doBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.loadHFileQueue!!!HFileReplicator.java:179!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSourceShipper.java#L57!!!org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getStartPosition!!!org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSource.locateRecoveredPaths!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L432!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.uncaughtException!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.refreshSources!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L512!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.createReplicationEndpoint!!!ReplicationSource.java:555!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L512!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint!!!ReplicationSource.java:565!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java#L706!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.cleanOldLogs!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.removeRemoteWALs!!!ReplicationSourceManager.java:739!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceShipper.java#L179!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.shipEdits!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.cleanUpHFileRefs!!!ReplicationSourceShipper.java:195!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java#L130!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java#L130!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run!!!org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset!!!N/A!!!WALEntryFilterRetryableException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java#L130!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries!!!ReplicationSourceWALReader.java:171!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1019!!!org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.moveRegionsBetweenGroups!!!moveAsync!!!RSGroupInfoManagerImpl.java:1037!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L908!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!BulkLoadHFilesTool.java:966!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L908!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.groupOrSplitPhase!!!BulkLoadHFilesTool.java:981!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L908!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.bulkLoadPhase!!!BulkLoadHFilesTool.java:990!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java#L622!!!org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor!!!org.apache.hadoop.fs.FileSystem.create!!!FSTableDescriptors.java:626!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java#L622!!!org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor!!!java.io.FilterOutputStream.write!!!FSTableDescriptors.java:627!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java#L622!!!org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor!!!org.apache.hadoop.hbase.util.FSTableDescriptors.deleteTableDescriptorFiles!!!FSTableDescriptors.java:635!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L444!!!org.apache.hadoop.hbase.util.FSUtils.setVersion!!!org.apache.hadoop.fs.FileSystem.create!!!FSUtils.java:440!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L499!!!org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists!!!org.apache.hadoop.fs.FileSystem.exists!!!FSUtils.java:495!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L601!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.create!!!FSUtils.java:598!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L601!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!java.io.FilterOutputStream.write!!!FSUtils.java:599!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L601!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.rename!!!FSUtils.java:609!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L426!!!org.apache.hadoop.hbase.util.HBaseFsck$FileLockCallable.createFileWithRetries!!!org.apache.hadoop.hbase.util.CommonFSUtils.create!!!HBaseFsck.java:428!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L481!!!org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck!!!org.apache.hbase.thirdparty.com.google.common.io.Closeables.close!!!HBaseFsck.java:483!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L481!!!org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck!!!org.apache.hadoop.hbase.util.CommonFSUtils.delete!!!HBaseFsck.java:484!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L481!!!org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck!!!org.apache.hadoop.hbase.util.CommonFSUtils.getCurrentFileSystem!!!HBaseFsck.java:484!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L720!!!org.apache.hadoop.hbase.util.HBaseFsck.setMasterInMaintenanceMode!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.createEphemeralNodeAndWatch!!!HBaseFsck.java:735!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java#L107!!!org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned!!!org.apache.hadoop.hbase.client.Admin.getClusterMetrics!!!HBaseFsckRepair.java:110!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/MoveWithAck.java#L76!!!org.apache.hadoop.hbase.util.MoveWithAck.call!!!org.apache.hadoop.hbase.client.Admin.move!!!MoveWithAck.java:82!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/MoveWithAck.java#L76!!!org.apache.hadoop.hbase.util.MoveWithAck.call!!!org.apache.hadoop.hbase.util.MoveWithAck.isSameServer!!!MoveWithAck.java:85!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java#L174!!!org.apache.hadoop.hbase.wal.AbstractWALRoller.run!!!org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal!!!AbstractWALRoller.java:212!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java#L339!!!org.apache.hadoop.hbase.wal.WALFactory.createStreamReader!!!org.apache.hadoop.hbase.wal.AbstractFSWALProvider$Reader.init!!!WALFactory.java:417!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L208!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:210!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L258!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:262!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L258!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:264!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L320!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:324!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L320!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:326!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L373!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:377!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L373!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:379!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L425!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:428!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L475!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getAcl!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:478!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L509!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setAcl!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:512!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L574!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:576!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L616!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createSequential!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:626!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L680!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:683!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKNodeTracker.java#L140!!!org.apache.hadoop.hbase.zookeeper.ZKNodeTracker.blockUntilAvailable!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch!!!ZKNodeTracker.java:131!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKNodeTracker.java#L140!!!org.apache.hadoop.hbase.zookeeper.ZKNodeTracker.blockUntilAvailable!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists!!!ZKNodeTracker.java:143!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L1400!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.waitForBaseZNode!!!org.apache.zookeeper.ZooKeeper.exists!!!ZKUtil.java:1402!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_timeout_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_timeout_bounds.data
new file mode 100644
index 00000000..e49cb2c6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/hbase_timeout_bounds.data
@@ -0,0 +1,81 @@
+AbstractTestFSWAL.testFailedToCreateWALIfParentRenamed
+AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+AbstractTestFSWAL.testRollWriterForClosedWAL
+AbstractTestFSWAL.testSyncNoAppend
+AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+AbstractTestFSWAL.testWALCoprocessorLoaded
+AbstractTestFSWAL.testWriteEntryCanBeNull
+TestAsyncTable.testDisabled
+TestAsyncTable.testIncrement
+TestBackupDeleteRestore.testBackupDeleteRestore
+TestBasicWALEntryStream.testEOFExceptionInOldWALsDirectory
+TestBootstrapNodeManager.testNormal
+TestBootstrapNodeManager.testOnlyMaster
+TestBootstrapNodeManager.testRegionServerError
+TestBulkLoadReplicationHFileRefs.testWhenExcludeCF
+TestBulkLoadReplicationHFileRefs.testWhenExcludeNamespace
+TestBulkLoadReplicationHFileRefs.testWhenExcludeTable
+TestClassLoading.testClassLoadingFromHDFS
+TestClassLoading.testClassLoadingFromLibDirInJar
+TestClassLoading.testClassLoadingFromLocalFS
+TestClassLoading.testClassLoadingFromRelativeLibDirInJar
+TestClassLoading.testHBase3810
+TestClassLoading.testPrivateClassLoader
+TestClientSideRegionScanner.testContinuesToScanIfHasMore
+TestClientTimeouts.testAdminTimeout
+TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+TestDrainReplicationQueuesForStandBy.test
+TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+TestFSHLog.testUnflushedSeqIdTracking
+TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+TestFlushSnapshotFromClient.testAsyncFlushSnapshot
+TestFlushSnapshotFromClient.testFlushCreateListDestroy
+TestFlushSnapshotFromClient.testFlushTableSnapshot
+TestFlushSnapshotFromClient.testFlushTableSnapshotWithProcedure
+TestFlushSnapshotFromClient.testSkipFlushTableSnapshot
+TestFlushSnapshotFromClient.testSnapshotFailsOnNonExistantTable
+TestFlushSnapshotFromClient.testSnapshotStateAfterMerge
+TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge
+TestHelloHBase.testCreateNamespaceAndTable
+TestHelloHBase.testDeleteRow
+TestHelloHBase.testNamespaceExists
+TestHelloHBase.testPutRowToTable
+TestMetaWithReplicasShutdownHandling.testShutdownHandling
+TestMultiVersions.testGetRowVersions
+TestMultiVersions.testScanMultipleVersions
+TestMultiVersions.testTimestamps
+TestRSGroupsBalance.testGetRSGroupAssignmentsByTable
+TestRSGroupsBalance.testGroupBalance
+TestRSGroupsBalance.testGroupDryRunBalance
+TestRSGroupsBalance.testMisplacedRegions
+TestRefreshRecoveredReplication.testReplicationRefreshSource
+TestRegionAssignedToMultipleRegionServers.test
+TestRegionMoverWithRSGroupEnable.testUnloadRegions
+TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+TestRegionObserverScannerOpenHook.testRegionObserverFlushTimeStacking
+TestRegionObserverScannerOpenHook.testRegionObserverScanTimeStacking
+TestRegionReplicaSplit.testAssignFakeReplicaRegion
+TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+TestRegionReplicationLagEvaluation.test
+TestRegionServerCrashDisableWAL.test
+TestReplicator.testReplicatorBatching
+TestReplicator.testReplicatorWithErrors
+TestRetainAssignmentOnRestart.testForceRetainAssignment
+TestRetainAssignmentOnRestart.testRetainAssignmentOnClusterRestart
+TestRetainAssignmentOnRestart.testRetainAssignmentOnSingleRSRestart
+TestSecurityHeadersFilter.testDefaultValues
+TestSecurityHeadersFilter.testHstsAndCspSettings
+TestSerialReplicationFailover.testKillRS
+TestSnapshotProcedureMasterRestarts.testMasterRestarts
+TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileVerifyingSnapshot
+TestSuperUserQuotaPermissions.testSuperUserCanStillCompact
+TestSuperUserQuotaPermissions.testSuperuserCanRemoveQuota
+TestSyncReplicationWALProvider.test
+TestTableMapReduceUtil.testInitCredentialsForCluster1
+TestTableMapReduceUtil.testInitCredentialsForCluster2
+TestTableMapReduceUtil.testInitCredentialsForCluster3
+TestTableMapReduceUtil.testInitCredentialsForCluster4
+TestZooKeeperScanPolicyObserver.test
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/pom-hbase.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/pom-hbase.xml
new file mode 100644
index 00000000..6a74163b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/pom-hbase.xml
@@ -0,0 +1,4721 @@
+
+
+
+ 4.0.0
+
+ org.apache
+ apache
+ 23
+
+
+
+ org.apache.hbase
+ hbase
+ ${revision}
+ pom
+ Apache HBase
+ Apache HBase is the Hadoop database. Use it when you need
+ random, realtime read/write access to your Big Data.
+ This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters
+ of commodity hardware.
+ https://hbase.apache.org
+ 2007
+
+
+
+ Apache License, Version 2.0
+ https://www.apache.org/licenses/LICENSE-2.0.txt
+ repo
+
+
+
+
+ achouhan
+ Abhishek Singh Chouhan
+ achouhan@apache.org
+ +5
+
+
+ acube123
+ Amitanand S. Aiyer
+ acube123@apache.org
+ -8
+
+
+ allan163
+ Allan Yang
+ allan163@apache.org
+ +8
+
+
+ appy
+ Apekshit Sharma
+ appy@apache.org
+ -8
+
+
+ anastasia
+ Anastasia Braginsky
+ anastasia@apache.org
+ +2
+
+
+ apurtell
+ Andrew Purtell
+ apurtell@apache.org
+ -8
+
+
+ anoopsamjohn
+ Anoop Sam John
+ anoopsamjohn@apache.org
+ +5
+
+
+ antonov
+ Mikhail Antonov
+ antonov@apache.org
+ -8
+
+
+ ashishsinghi
+ Ashish Singhi
+ ashishsinghi@apache.org
+ +5
+
+
+ ashu
+ Ashu Pachauri
+ ashu@apache.org
+ +5
+
+
+ bharathv
+ Bharath Vissapragada
+ bharathv@apache.org
+ -8
+
+
+ binlijin
+ Lijin Bin
+ binlijin@apache.org
+ +8
+
+
+ brfrn169
+ Toshihiro Suzuki
+ brfrn169@apache.org
+ +9
+
+
+ busbey
+ Sean Busbey
+ busbey@apache.org
+ -6
+
+
+ chenglei
+ Cheng Lei
+ chenglei@apache.org
+ +8
+
+
+ chenheng
+ Heng Chen
+ chenheng@apache.org
+ +8
+
+
+ chia7712
+ Chia-Ping Tsai
+ chia7712@apache.org
+ +8
+
+
+ ddas
+ Devaraj Das
+ ddas@apache.org
+ -8
+
+
+ dimaspivak
+ Dima Spivak
+ dimaspivak@apache.org
+ -8
+
+
+ dmeil
+ Doug Meil
+ dmeil@apache.org
+ -5
+
+
+ eclark
+ Elliott Clark
+ eclark@apache.org
+ -8
+
+
+ elserj
+ Josh Elser
+ elserj@apache.org
+ -5
+
+
+ enis
+ Enis Soztutar
+ enis@apache.org
+ -8
+
+
+ eshcar
+ Eshcar Hillel
+ eshcar@apache.org
+ +2
+
+
+ fenghh
+ Honghua Feng
+ fenghh@apache.org
+ +8
+
+
+ garyh
+ Gary Helmling
+ garyh@apache.org
+ -8
+
+
+ gchanan
+ Gregory Chanan
+ gchanan@apache.org
+ -8
+
+
+ gjacoby
+ Geoffrey Jacoby
+ gjacoby@apache.org
+ -5
+
+
+ gxcheng
+ Guangxu Cheng
+ gxcheng@apache.org
+ +8
+
+
+ haxiaolin
+ Xiaolin Ha
+ haxiaolin@apache.org
+ +8
+
+
+ huaxiangsun
+ Huaxiang Sun
+ huaxiangsun@apache.org
+ -8
+
+
+ jdcryans
+ Jean-Daniel Cryans
+ jdcryans@apache.org
+ -8
+
+
+ jeffreyz
+ Jeffrey Zhong
+ jeffreyz@apache.org
+ -8
+
+
+ jerryjch
+ Jing Chen (Jerry) He
+ jerryjch@apache.org
+ -8
+
+
+ jyates
+ Jesse Yates
+ jyates@apache.org
+ -8
+
+
+ jgray
+ Jonathan Gray
+ jgray@fb.com
+ -8
+
+
+ jingchengdu
+ Jingcheng Du
+ jingchengdu@apache.org
+ +8
+
+
+ esteban
+ Esteban Gutierrez
+ esteban@apache.org
+ -8
+
+
+ janh
+ Jan Hentschel
+ janh@apache.org
+ +1
+
+
+ jmhsieh
+ Jonathan Hsieh
+ jmhsieh@apache.org
+ -8
+
+
+ jxiang
+ Jimmy Xiang
+ jxiang@apache.org
+ -8
+
+
+ kannan
+ Kannan Muthukkaruppan
+ kannan@fb.com
+ -8
+
+
+ karthik
+ Karthik Ranganathan
+ kranganathan@fb.com
+ -8
+
+
+ larsfrancke
+ Lars Francke
+ larsfrancke@apache.org
+ Europe/Berlin
+
+
+ larsgeorge
+ Lars George
+ larsgeorge@apache.org
+ +1
+
+
+ larsh
+ Lars Hofhansl
+ larsh@apache.org
+ -8
+
+
+ liangxie
+ Liang Xie
+ liangxie@apache.org
+ +8
+
+
+ liushaohui
+ Shaohui Liu
+ liushaohui@apache.org
+ +8
+
+
+ liyin
+ Liyin Tang
+ liyin.tang@fb.com
+ -8
+
+
+ liyu
+ Yu Li
+ liyu@apache.org
+ +8
+
+
+ mbautin
+ Mikhail Bautin
+ mbautin@apache.org
+ -8
+
+
+ mbertozzi
+ Matteo Bertozzi
+ mbertozzi@apache.org
+ 0
+
+
+ mdrob
+ Mike Drob
+ mdrob@apache.org
+ -5
+
+
+ meszibalu
+ Balazs Meszaros
+ meszibalu@apache.org
+ +1
+
+
+ misty
+ Misty Stanley-Jones
+ misty@apache.org
+ -8
+
+
+ ndimiduk
+ Nick Dimiduk
+ ndimiduk@apache.org
+ -8
+
+
+ nihaljain
+ Nihal Jain
+ nihaljain@apache.org
+ +5
+
+
+ niuyulin
+ Yulin Niu
+ niuyulin@apache.org
+ +8
+
+
+ nkeywal
+ Nicolas Liochon
+ nkeywal@apache.org
+ +1
+
+
+ nspiegelberg
+ Nicolas Spiegelberg
+ nspiegelberg@fb.com
+ -8
+
+
+ octo47
+ Andrey Stepachev
+ octo47@gmail.com
+ 0
+
+
+ openinx
+ Zheng Hu
+ openinx@apache.org
+ +8
+
+
+ pankajkumar
+ Pankaj Kumar
+ pankajkumar@apache.org
+ +5
+
+
+ psomogyi
+ Peter Somogyi
+ psomogyi@apache.org
+ +1
+
+
+ rajeshbabu
+ Rajeshbabu Chintaguntla
+ rajeshbabu@apache.org
+ +5
+
+
+ ramkrishna
+ Ramkrishna S Vasudevan
+ ramkrishna@apache.org
+ +5
+
+
+ rawson
+ Ryan Rawson
+ rawson@apache.org
+ -8
+
+
+ reidchan
+ Reid Chan
+ reidchan@apache.org
+ +8
+
+
+ shahrs87
+ Rushabh Shah
+ shahrs87@apache.org
+ -8
+
+
+ sakthi
+ Sakthi Vel
+ sakthi@apache.org
+ -8
+
+
+ sershe
+ Sergey Shelukhin
+ sershe@apache.org
+ -8
+
+
+ ssrungarapu
+ Srikanth Srungarapu
+ ssrungarapu@apache.org
+ -8
+
+
+ stack
+ Michael Stack
+ stack@apache.org
+ -8
+
+
+ syuanjiang
+ Stephen Yuan Jiang
+ syuanjiang@apache.org
+ -8
+
+
+ taklwu
+ Tak-Lon (Stephen) Wu
+ taklwu@apache.org
+ -8
+
+
+ tedyu
+ Ted Yu
+ yuzhihong@gmail.com
+ -8
+
+
+ tianhang
+ Tianhang Tang
+ tianhang@apache.org
+ +8
+
+
+ tianjy
+ tianjy@apache.org
+ +8
+
+
+ todd
+ Todd Lipcon
+ todd@apache.org
+ -8
+
+
+ toffer
+ Francis Liu
+ toffer@apache.org
+ -8
+
+
+ vikasv
+ Vikas Vishwakarma
+ vikasv@apache.org
+ +5
+
+
+ virag
+ Virag Kothari
+ virag@yahoo-inc.com
+ -8
+
+
+ vjasani
+ Viraj Jasani
+ vjasani@apache.org
+ +5
+
+
+ water
+ Xiang Li
+ xiangli@apache.org
+ +8
+
+
+ wchevreuil
+ Wellington Chevreuil
+ wchevreuil@apache.org
+ 0
+
+
+ weichiu
+ Wei-Chiu Chuang
+ weichiu@apache.org
+ -8
+
+
+ xucang
+ Xu Cang
+ xucang@apache.org
+ -8
+
+
+ yangzhe1991
+ Phil Yang
+ yangzhe1991@apache.org
+ +8
+
+
+ zghao
+ Guanghao Zhang
+ zghao@apache.org
+ +8
+
+
+ zhangduo
+ Duo Zhang
+ zhangduo@apache.org
+ +8
+
+
+ zhaobaiqiang
+ Baiqiang Zhao
+ zhaobaiqiang@apache.org
+ +8
+
+
+ zjushch
+ Chunhui Shen
+ zjushch@apache.org
+ +8
+
+
+ churro
+ Rahul Gidwani
+ churro@apache.org
+ -8
+
+
+ yiliang
+ Yi Liang
+ yiliang@apache.org
+ -8
+
+
+ zyork
+ Zach York
+ zyork@apache.org
+ -8
+
+
+ meiyi
+ Yi Mei
+ meiyi@apache.org
+ +8
+
+
+ wangzheng
+ Zheng (bsglz) Wang
+ wangzheng@apache.org
+ +8
+
+
+ sunxin
+ Xin Sun
+ sunxin@apache.org
+ +8
+
+
+ huangzhuoyue
+ Zhuoyue Huang
+ huangzhuoyue@apache.org
+ +8
+
+
+ xiaoyt
+ Yutong Xiao
+ xiaoyt@apache.org
+ +8
+
+
+ bbeaudreault
+ Bryan Beaudreault
+ bbeaudreault@apache.org
+ -5
+
+
+ heliangjun
+ Liangjun He
+ heliangjun@apache.org
+ +8
+
+
+
+
+ User List
+ user-subscribe@hbase.apache.org
+ user-unsubscribe@hbase.apache.org
+ user@hbase.apache.org
+ https://lists.apache.org/list.html?user@hbase.apache.org
+
+ https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
+
+
+
+ Developer List
+ dev-subscribe@hbase.apache.org
+ dev-unsubscribe@hbase.apache.org
+ dev@hbase.apache.org
+ https://lists.apache.org/list.html?dev@hbase.apache.org
+
+ https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
+
+
+
+ Commits List
+ commits-subscribe@hbase.apache.org
+ commits-unsubscribe@hbase.apache.org
+ https://lists.apache.org/list.html?commits@hbase.apache.org
+
+
+ Issues List
+ issues-subscribe@hbase.apache.org
+ issues-unsubscribe@hbase.apache.org
+ https://lists.apache.org/list.html?issues@hbase.apache.org
+
+
+ Builds List
+ builds-subscribe@hbase.apache.org
+ builds-unsubscribe@hbase.apache.org
+ https://lists.apache.org/list.html?builds@hbase.apache.org
+
+
+ User (ZH) List
+ user-zh-subscribe@hbase.apache.org
+ user-zh-unsubscribe@hbase.apache.org
+ user-zh@hbase.apache.org
+ https://lists.apache.org/list.html?user-zh@hbase.apache.org
+
+
+
+
+ hbase-build-configuration
+ hbase-replication
+ hbase-balancer
+ hbase-mapreduce
+ hbase-resource-bundle
+ hbase-http
+ hbase-server
+ hbase-thrift
+ hbase-shell
+ hbase-protocol-shaded
+ hbase-client
+ hbase-hadoop-compat
+ hbase-common
+ hbase-procedure
+ hbase-endpoint
+ hbase-it
+ hbase-examples
+ hbase-assembly
+ hbase-testing-util
+ hbase-annotations
+ hbase-rest
+ hbase-checkstyle
+ hbase-external-blockcache
+ hbase-shaded
+ hbase-archetypes
+ hbase-metrics-api
+ hbase-metrics
+ hbase-backup
+ hbase-zookeeper
+ hbase-hbtop
+ hbase-asyncfs
+ hbase-logging
+ hbase-compression
+
+
+ scm:git:git://gitbox.apache.org/repos/asf/hbase.git
+ scm:git:https://gitbox.apache.org/repos/asf/hbase.git
+ https://gitbox.apache.org/repos/asf?p=hbase.git
+
+
+ JIRA
+ https://issues.apache.org/jira/browse/HBASE
+
+
+
+ hbase.apache.org
+ HBase Website at hbase.apache.org
+
+ file:///tmp
+
+
+
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+ 4.0.0-alpha-1-SNAPSHOT
+
+ false
+
+ false
+
+ false
+
+ false
+
+ false
+
+ false
+ ${project.build.finalName}.tar.gz
+ yyyy-MM-dd'T'HH:mm
+ ${maven.build.timestamp}
+ 1.8
+ 8
+
+
+ 3.5.0
+ ${compileSource}
+
+ 3.2.4
+
+ ${hadoop-three.version}
+ src/main/assembly/hadoop-three-compat.xml
+
+ 3.10.5.Final
+
+ 0.13.0
+
+ 0.13.0
+ 1.11.0
+ 2.8.1
+ 1.15
+ 1.7
+ 2.11.0
+ 3.9
+ 3.6.1
+ 3.4.4
+ 4.5.13
+ 4.4.13
+ 3.2.6
+ 2.14.1
+ 2.14.1
+ 2.3.1
+ 3.1.0
+ 2.1.1
+ 2.3.2
+ 3.0.1-b08
+ 9.3.9.0
+ 4.13.2
+ 1.3
+ 1.15.0
+ 1.15.0
+ 2.17.2
+ 4.11.0
+ 0.6.1
+ thrift
+ 0.14.1
+ 3.5.7
+ 2.11
+ 1.7.30
+ 4.0.3
+ 2.4.1
+ 1.5.4
+
+ 2.1.43
+ 1.0.57
+ 2.12.2
+ 1.70
+ 1.5.1
+ 1.0.1
+ 1.1.0
+ 4.2.0
+
+ 2.2.2
+ 2.0.6
+ 3.0.0
+ 1.4
+
+ 8.29
+ 3.1.0
+ 2.16
+ 2.4.2
+ 1.0.0
+ 1.8
+ 3.3.0
+ 3.1.0
+ 2.10
+ 3.0.1
+ 3.4.0
+ 1.1.0
+ 3.1.2
+ 1.5.0.Final
+ 1.3.9-1
+ 4.7.3
+ 4.7.2.1
+ 3.1.0
+ 2.12
+ 1.0.1
+ 2.27.2
+ 3.12.0
+
+ 0.24
+ 1.11.0
+ 1.8.0
+ 1.1.10.1
+ 1.9
+ 1.5.5-2
+ 4.1.4
+
+ 0.8.8
+
+ 3.9.1.2184
+
+
+ hbase-server-${project.version}-tests.jar
+ hbase-common-${project.version}-tests.jar
+ hbase-procedure-${project.version}-tests.jar
+ hbase-it-${project.version}-tests.jar
+ hbase-annotations-${project.version}-tests.jar
+ hbase-mapreduce-${project.version}-tests.jar
+ hbase-zookeeper-${project.version}-tests.jar
+ hbase-asyncfs-${project.version}-tests.jar
+ bash
+ surefire-junit47
+
+ false
+ false
+
+ 0.25C
+ 0.25C
+ org.apache.hadoop.hbase.testclassification.SmallTests
+ org.apache.hadoop.hbase.testclassification.MediumTests
+ false
+ true
+ 900
+
+
+ 2200m
+ 2200m
+
+ -enableassertions -Dhbase.build.id=${build.id} -Xmx${surefire.Xmx}
+ -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true
+ -Djava.awt.headless=true -Djdk.net.URLClassPath.disableClassPathURLCheck=true
+ -Dorg.apache.hbase.thirdparty.io.netty.leakDetection.level=advanced
+ -Dio.netty.eventLoopThreads=3 -Dio.opentelemetry.context.enableStrictContext=true
+ -enableassertions -Xmx${surefire.cygwinXmx}
+ -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true
+ "-Djava.library.path=${hadoop.library.path};${java.library.path}"
+ -Dorg.apache.hbase.thirdparty.io.netty.leakDetection.level=advanced
+ -Dio.opentelemetry.context.enableStrictContext=true
+ -Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible=true
+ --add-modules jdk.unsupported
+ --add-opens java.base/java.nio=ALL-UNNAMED
+ --add-opens java.base/sun.nio.ch=ALL-UNNAMED
+ --add-opens java.base/java.lang=ALL-UNNAMED
+ --add-opens java.base/jdk.internal.ref=ALL-UNNAMED
+ --add-opens java.base/java.lang.reflect=ALL-UNNAMED
+ --add-opens java.base/java.util=ALL-UNNAMED
+ --add-opens java.base/java.util.concurrent=ALL-UNNAMED
+ --add-exports java.base/jdk.internal.misc=ALL-UNNAMED
+ --add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED
+ --add-opens java.base/jdk.internal.util.random=ALL-UNNAMED
+
+ ${hbase-surefire.argLine} @{jacocoArgLine}
+ 1.5.1
+ 3.0.0
+ 0.14.0
+
+ ${project.build.directory}/test-classes
+ ${project.build.directory}
+ yyyy-MM-dd'T'HH:mm:ss'Z'
+
+ ${maven.build.timestamp}
+ bash
+
+ none
+
+ 2.0.0.AM26
+ 2.0.0
+
+
+
+
+
+
+
+ org.apache.hbase
+ hbase-annotations
+ ${project.version}
+ test-jar
+
+
+
+ org.apache.hbase
+ hbase-backup
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-common
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-common
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-logging
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-logging
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-protocol-shaded
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-procedure
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-procedure
+ ${project.version}
+ test-jar
+
+
+ org.apache.hbase
+ hbase-hadoop-compat
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-hadoop-compat
+ ${project.version}
+ test-jar
+
+
+ org.apache.hbase
+ hbase-replication
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-replication
+ ${project.version}
+ test-jar
+
+
+ org.apache.hbase
+ hbase-balancer
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-balancer
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-http
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-http
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-server
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-server
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-mapreduce
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-mapreduce
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-endpoint
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shell
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shell
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-thrift
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-thrift
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-testing-util
+ ${project.version}
+ test
+
+
+ org.apache.hbase
+ hbase-examples
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-external-blockcache
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-it
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-client
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-client
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-metrics-api
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-metrics-api
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-metrics
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-metrics
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-rest
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-resource-bundle
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-zookeeper
+ ${project.version}
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ com.github.spotbugs
+ spotbugs-annotations
+
+
+
+
+ org.apache.hbase
+ hbase-zookeeper
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-hbtop
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shaded-client
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shaded-client-byo-hadoop
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shaded-mapreduce
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-asyncfs
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-asyncfs
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-compression-aircompressor
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-brotli
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-lz4
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-snappy
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-xz
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-zstd
+ ${project.version}
+
+
+
+ com.github.stephenc.findbugs
+ findbugs-annotations
+ ${findbugs-annotations.version}
+
+
+
+ org.codehaus.jettison
+ jettison
+ ${jettison.version}
+
+
+
+ org.slf4j
+ slf4j-api
+ ${slf4j.version}
+
+
+ org.slf4j
+ jcl-over-slf4j
+ ${slf4j.version}
+
+
+ org.slf4j
+ jul-to-slf4j
+ ${slf4j.version}
+
+
+ org.apache.logging.log4j
+ log4j-api
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-core
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-slf4j-impl
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-1.2-api
+ ${log4j2.version}
+
+
+
+ org.apache.avro
+ avro
+ ${avro.version}
+
+
+ com.github.ben-manes.caffeine
+ caffeine
+ ${caffeine.version}
+
+
+ io.dropwizard.metrics
+ metrics-core
+ ${metrics-core.version}
+
+
+ org.apache.httpcomponents
+ httpclient
+ ${httpclient.version}
+
+
+ org.apache.httpcomponents
+ httpcore
+ ${httpcore.version}
+
+
+ commons-codec
+ commons-codec
+ ${commons-codec.version}
+
+
+ commons-validator
+ commons-validator
+ ${commons-validator.version}
+
+
+ commons-io
+ commons-io
+ ${commons-io.version}
+
+
+ org.apache.commons
+ commons-lang3
+ ${commons-lang3.version}
+
+
+ org.apache.commons
+ commons-math3
+ ${commons-math.version}
+
+
+
+ commons-logging
+ commons-logging
+ 1.2
+
+
+ org.apache.zookeeper
+ zookeeper
+ ${zookeeper.version}
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ com.github.spotbugs
+ spotbugs-annotations
+
+
+ jline
+ jline
+
+
+ com.sun.jmx
+ jmxri
+
+
+ com.sun.jdmk
+ jmxtools
+
+
+ javax.jms
+ jms
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+
+
+ jline
+ jline
+ ${jline.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${thrift.version}
+
+
+ org.apache.tomcat.embed
+ tomcat-embed-core
+
+
+
+
+ org.jruby
+ jruby-complete
+ ${jruby.version}
+
+
+ org.jruby.jcodings
+ jcodings
+ ${jcodings.version}
+
+
+ org.jruby.joni
+ joni
+ ${joni.version}
+
+
+ com.fasterxml.jackson.core
+ jackson-annotations
+ ${jackson.version}
+
+
+ com.fasterxml.jackson.core
+ jackson-core
+ ${jackson.version}
+
+
+ com.fasterxml.jackson.core
+ jackson-databind
+ ${jackson.databind.version}
+
+
+ org.jamon
+ jamon-runtime
+ ${jamon-runtime.version}
+
+
+
+ javax.servlet
+ javax.servlet-api
+ ${servlet.api.version}
+
+
+ javax.ws.rs
+ javax.ws.rs-api
+ ${wx.rs.api.version}
+
+
+ com.sun.activation
+ javax.activation
+ 1.2.0
+
+
+ javax.annotation
+ javax.annotation-api
+ 1.2
+
+
+
+ org.glassfish.web
+ javax.servlet.jsp
+ ${glassfish.jsp.version}
+
+
+
+ javax.servlet.jsp
+ javax.servlet.jsp-api
+ 2.3.1
+
+
+ org.glassfish
+ javax.el
+ ${glassfish.el.version}
+
+
+ javax.xml.bind
+ jaxb-api
+ ${jaxb-api.version}
+
+
+ javax.xml.stream
+ stax-api
+
+
+
+
+ junit
+ junit
+ ${junit.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+ org.hamcrest
+ hamcrest-library
+ ${hamcrest.version}
+
+
+ org.mockito
+ mockito-bom
+ ${mockito.version}
+ pom
+ import
+
+
+ io.opentelemetry
+ opentelemetry-bom
+ ${opentelemetry.version}
+ pom
+ import
+
+
+ io.opentelemetry
+ opentelemetry-semconv
+ ${opentelemetry.version}-alpha
+
+
+ io.opentelemetry.javaagent
+ opentelemetry-javaagent
+ ${opentelemetry-javaagent.version}
+
+
+ com.lmax
+ disruptor
+ ${disruptor.version}
+
+
+ net.spy
+ spymemcached
+ ${spy.version}
+ true
+
+
+ org.bouncycastle
+ bcprov-jdk15on
+ ${bouncycastle.version}
+ test
+
+
+ org.skyscreamer
+ jsonassert
+ ${skyscreamer.version}
+ test
+
+
+ org.bouncycastle
+ bcpkix-jdk15on
+ ${bouncycastle.version}
+ test
+
+
+ org.apache.kerby
+ kerb-core
+ ${kerby.version}
+
+
+ org.apache.kerby
+ kerb-client
+ ${kerby.version}
+
+
+ org.apache.kerby
+ kerb-simplekdc
+ ${kerby.version}
+
+
+ org.apache.commons
+ commons-crypto
+ ${commons-crypto.version}
+
+
+ net.java.dev.jna
+ jna
+
+
+
+
+ org.apache.curator
+ curator-framework
+ ${curator.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+
+
+ org.apache.curator
+ curator-client
+ ${curator.version}
+
+
+ com.google.guava
+ guava
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+
+
+ org.apache.curator
+ curator-recipes
+ ${curator.version}
+
+
+ com.google.guava
+ guava
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+
+
+ org.apache.yetus
+ audience-annotations
+ ${audience-annotations.version}
+
+
+
+ io.airlift
+ aircompressor
+ ${aircompressor.version}
+
+
+ org.lz4
+ lz4-java
+ ${lz4.version}
+
+
+ org.tukaani
+ xz
+ ${xz.version}
+
+
+ org.xerial.snappy
+ snappy-java
+ ${snappy.version}
+
+
+ com.github.luben
+ zstd-jni
+ ${zstd-jni.version}
+
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-gson
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-miscellaneous
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-netty
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-protobuf
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-jetty
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-jersey
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-jackson-jaxrs-json-provider
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-unsafe
+ ${hbase-thirdparty.version}
+
+
+ com.sun.xml.ws
+ jaxws-ri
+ 2.3.2
+ pom
+
+
+ javax.activation
+ javax.activation-api
+
+
+
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+
+ junit
+ junit
+ test
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-remote-resources-plugin
+
+
+ org.apache.maven.plugins
+ maven-release-plugin
+
+
+ apache-release
+
+ -Dmaven.test.skip.exec ${arguments}
+ ${goals}
+ pom.xml
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+ true
+ false
+ false
+ -Xlint:-options
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+ ${maven.javadoc.version}
+
+ ${compileSource}
+
+
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+ ${surefire.version}
+
+
+ ${surefire.firstPartGroups}
+ false
+ false
+ false
+ ${surefire.skipFirstPart}
+ ${surefire.firstPartForkCount}
+
+
+ false
+ ${surefire.reportsDirectory}
+ ${surefire.tempDir}
+ ${surefire.testFailureIgnore}
+ ${surefire.timeout}
+ ${test.output.tofile}
+
+ ${test.build.classes}
+ ${test.tmp.dir}
+ org.apache.hadoop.hbase.logging.JulToSlf4jInitializer
+
+
+
+ ${test.exclude.pattern}
+
+
+
+ listener
+ org.apache.hadoop.hbase.TimedOutTestsListener,org.apache.hadoop.hbase.HBaseClassTestRuleChecker,org.apache.hadoop.hbase.ResourceCheckerJUnitListener
+
+
+
+
+
+
+ org.apache.maven.surefire
+ ${surefire.provider}
+ ${surefire.version}
+
+
+
+
+ secondPartTestsExecution
+
+ test
+
+ test
+
+ ${surefire.skipSecondPart}
+ ${surefire.testFailureIgnore}
+
+ false
+ ${surefire.secondPartForkCount}
+
+ ${surefire.secondPartGroups}
+ ${surefire.timeout}
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-surefire-report-plugin
+ ${surefire.version}
+
+
+ org.codehaus.mojo
+ buildnumber-maven-plugin
+ ${buildnumber.maven.version}
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ ${spotbugs.maven.version}
+
+ ${project.basedir}/../dev-support/spotbugs-exclude.xml
+ true
+ true
+ Max
+
+
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+
+ org.codehaus.mojo
+ build-helper-maven-plugin
+ ${build.helper.maven.version}
+
+
+ maven-antrun-plugin
+ ${maven.antrun.version}
+
+
+ org.jamon
+ jamon-maven-plugin
+ ${jamon.plugin.version}
+
+
+
+ org.apache.maven.plugins
+ maven-source-plugin
+
+
+ attach-sources
+
+ jar-no-fork
+ test-jar-no-fork
+
+ prepare-package
+
+
+ log4j2.xml
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-jar-plugin
+
+ true
+
+ hbase-site.xml
+ hdfs-site.xml
+ mapred-queues.xml
+ mapred-site.xml
+
+
+
+
+
+
+ test-jar
+
+ prepare-package
+
+
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ **/*.versionsBackup
+ **/*.log
+ **/.*
+ **/*.tgz
+ **/*.orig
+ **/0000000000000016310
+ **/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
+ **/8e8ab58dcf39412da19833fcd8f687ac
+ **/.idea/**
+ **/*.iml
+ **/CHANGES.txt
+ **/generated/**
+ **/gen-*/**
+
+ conf/regionservers
+ **/*.avpr
+ **/*.svg
+
+ **/src/main/resources/META-INF/LEGAL
+
+ **/src/main/asciidoc/hbase.css
+
+ **/jquery.min.js
+ **/jquery.tablesorter.min.js
+ **/parser-date-iso8601.min.js
+
+ **/src/main/resources/hbase-webapps/static/*/bootstrap*
+
+ **/hbase-webapps/static/js/vega*.min.js
+
+ **/*.vm
+
+ **/control
+ **/conffile
+
+ docs/*
+ logs/*
+
+ .git/**
+ .svn/**
+ **/.settings/**
+ **/patchprocess/**
+ src/site/resources/repo/**
+ **/dependency-reduced-pom.xml
+ **/rat.txt
+
+ **/shaded/com/google/protobuf/**
+ **/src/main/patches/**
+ **/vote.tmpl
+
+ **/CC-MAIN-2021-10-warc.paths.gz
+
+
+
+
+ maven-assembly-plugin
+
+
+ true
+
+
+
+ org.xolstice.maven.plugins
+ protobuf-maven-plugin
+ ${protobuf.plugin.version}
+
+ ${basedir}/src/main/protobuf/
+ false
+ true
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.version}
+
+ hbase/checkstyle.xml
+ hbase/checkstyle-suppressions.xml
+ true
+
+
+
+ org.apache.hbase
+ hbase-checkstyle
+ ${project.version}
+
+
+ com.puppycrawl.tools
+ checkstyle
+ ${checkstyle.version}
+
+
+
+
+ net.revelc.code
+ warbucks-maven-plugin
+ ${maven.warbucks.version}
+
+ false
+
+
+
+ (?!.*(.generated.|.tmpl.|\$)).*
+ false
+ true
+ false
+ false
+ false
+ org[.]apache[.]yetus[.]audience[.]InterfaceAudience.*
+
+
+
+
+
+ run-warbucks
+
+ check
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${enforcer.version}
+
+
+ org.codehaus.mojo
+ extra-enforcer-rules
+ ${extra.enforcer.version}
+
+
+ de.skuzzle.enforcer
+ restrict-imports-enforcer-rule
+ ${restrict-imports.enforcer.version}
+
+
+
+
+ org.apache.maven.plugins
+ maven-gpg-plugin
+ ${maven.gpg.version}
+
+
+
+
+
+ org.codehaus.mojo
+ flatten-maven-plugin
+ 1.3.0
+
+ true
+ true
+ oss
+
+
+
+
+ flatten
+
+ flatten
+
+ process-resources
+
+
+
+ flatten.clean
+
+ clean
+
+ clean
+
+
+
+
+ org.codehaus.mojo
+ build-helper-maven-plugin
+
+
+ negate-license-bundles-property
+
+ bsh-property
+
+
+ skip.license.check = !${license.bundles.dependencies};
+
+ skip.license.check
+
+
+
+
+
+ create-license-file-path-property
+
+ regex-property
+
+
+ license.aggregate.path
+ ${project.build.directory}/maven-shared-archive-resources/META-INF/LICENSE
+ \\
+ /
+ false
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+
+
+ display-info
+
+ display-info
+
+ initialize
+ false
+
+
+ hadoop-profile-min-maven-min-java-banned-xerces
+
+ enforce
+
+
+
+
+
+ System.getProperty("hadoop-profile", "").isEmpty()
+ The hadoop-profile property is unused, did you mean to set hadoop.profile instead?
+
+
+
+ [${maven.min.version},)
+ Maven is out of date.
+ HBase requires at least version ${maven.min.version} of Maven to properly build from source.
+ You appear to be using an older version. You can use either "mvn -version" or
+ "mvn enforcer:display-info" to verify what version is active.
+ See the reference guide on building for more information: https://hbase.apache.org/book.html#build
+
+
+
+ [${java.min.version},)
+ Java is out of date.
+ HBase requires at least version ${java.min.version} of the JDK to properly build from source.
+ You appear to be using an older version. You can use either "mvn -version" or
+ "mvn enforcer:display-info" to verify what version is active.
+ See the reference guide on building for more information: https://hbase.apache.org/book.html#build
+
+
+
+ xerces:xercesImpl
+
+ We avoid adding our own Xerces jars to the classpath, see HBASE-16340.
+
+
+
+
+
+ banned-jsr305
+
+ enforce
+
+
+
+
+
+ com.google.code.findbugs:jsr305
+
+ We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
+
+
+
+
+
+ banned-scala
+
+ enforce
+
+
+
+
+
+ org.scala-lang:scala-library
+
+ We don't allow Scala, see HBASE-13992.
+
+
+
+
+
+ banned-commons-logging
+
+ enforce
+
+
+
+
+
+ commons-logging:commons-logging
+
+ We don't use commons-logging any more, so do not depend on it directly.
+ false
+
+
+
+
+
+ banned-other-logging-framework
+
+ enforce
+
+
+
+
+
+ log4j:*
+ org.slf4j:slf4j-log4j12
+ ch.qos.reload4j:*
+ org.slf4j:slf4j-reload4j
+ ch.qos.logback:*
+
+ We do not allow other logging frameworks as now we use log4j2
+
+
+
+
+
+ banned-jetty
+
+ enforce
+
+
+
+
+
+ org.eclipse.jetty:**
+
+ Use shaded jetty instead
+ false
+
+
+
+
+
+ banned-jersey
+
+ enforce
+
+
+
+
+
+ org.glassfish.jersey.containers:**
+ org.glassfish.jersey.core:**
+
+ Use shaded jersey instead
+ false
+
+
+
+
+
+ banned-htrace
+
+ enforce
+
+
+
+
+
+ org.apache.htrace:**
+
+ Use OpenTelemetry instead
+ false
+
+
+
+
+
+ check-aggregate-license
+
+ enforce
+
+
+ process-resources
+
+
+
+ File license = new File("${license.aggregate.path}");
+
+ // Beanshell does not support try-with-resources,
+ // so we must close this scanner manually
+ Scanner scanner = new Scanner(license);
+
+ while (scanner.hasNextLine()) {
+ if (scanner.nextLine().startsWith("ERROR:")) {
+ scanner.close();
+ return false;
+ }
+ }
+ scanner.close();
+ return true;
+ License errors detected, for more detail find ERROR in
+ ${license.aggregate.path}
+
+
+ ${skip.license.check}
+
+
+
+ banned-illegal-imports
+
+ enforce
+
+ process-sources
+
+
+
+ true
+ 512
+ Use SLF4j for logging
+
+ org.apache.commons.logging.**
+ org.apache.log4j.**
+ org.apache.logging.log4j.**
+
+
+
+ org.apache.hadoop.hbase.logging.HBaseTestAppender
+
+
+
+ false
+ 512
+ Do not use log4j2 directly in code; see Log4jUtils in hbase-logging for more details.
+
+ org.apache.logging.log4j.**
+
+
+
+ true
+ 512
+ Use shaded version in hbase-thirdparty
+
+ com.google.common.**
+ io.netty.**
+ org.apache.commons.cli.**
+ org.apache.commons.collections.**
+ org.apache.commons.collections4.**
+
+
+
+ true
+ 512
+ Do not use shaded classes from other dependencies
+
+ org.apache.curator.shaded.**
+ org.apache.htrace.shaded.**
+
+
+
+ true
+ 512
+ Use shaded gson in hbase-thirdparty
+
+ org.codehaus.jackson.**
+
+
+
+ true
+ 512
+ Use commons lang 3
+
+ org.apache.commons.lang.**
+
+
+
+ true
+ 512
+ Use yetus IA and IS annotations
+
+ org.apache.hadoop.classification.**
+
+
+
+ true
+ 512
+ Do not use htrace
+
+ org.htrace.**
+ org.apache.htrace.**
+
+
+
+ true
+ 512
+ Use shaded jetty in hbase-thirdparty
+
+ org.eclipse.jetty.**
+
+
+
+ true
+ 512
+ Use shaded jersey in hbase-thirdparty
+
+ org.glassfish.jersey.**
+
+
+
+ true
+ 512
+ You should never use this style of annotations (i.e., 'this is for test only')
+ in IA.Public or IA.LimitedPrivate classes. Use IA.Private to tell users this is
+ not for public use.
+ For IA.Private classes, use the RestrictedApi annotation from error-prone instead.
+
+ org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting
+
+
+
+ true
+ 512
+ Use shaded javax.ws.rs in hbase-thirdparty
+
+ javax.ws.rs.**
+
+
+
+ true
+ 512
+ Use shaded jackson-jaxrs-json-provider in hbase-thirdparty
+
+ com.fasterxml.jackson.jaxrs.**
+
+
+
+ true
+ 512
+ Use junit4 instead
+
+ junit.framework.**
+
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ xml-maven-plugin
+ ${xml.maven.version}
+ false
+
+
+
+
+
+ ${basedir}/hbase-common/src/main/resources/
+
+ hbase-default.xml
+
+ ${basedir}/src/main/xslt/configuration_to_asciidoc_chapter.xsl
+
+
+ ^(.*)\.xml$
+ $1.adoc
+
+
+ ${basedir}/target/asciidoc
+
+
+
+
+
+
+
+ transform
+
+ site
+
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+
+
+
+ spotbugs
+
+ false
+
+ ${basedir}/dev-support/spotbugs-exclude.xml
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+
+
+ org.apache.maven.plugins
+ maven-site-plugin
+ ${maven-site.version}
+
+ ${basedir}/src/site
+ ${basedir}/src/site/custom/project-info-report.properties
+ UTF-8
+ UTF-8
+
+
+
+
+ org.apache.maven.wagon
+ wagon-ssh
+ ${wagon.ssh.version}
+
+
+
+
+
+ org.asciidoctor
+ asciidoctor-maven-plugin
+ ${asciidoctor.plugin.version}
+ false
+
+ ${project.reporting.outputDirectory}/
+ book
+
+ ${project.version}
+ images
+ coderay
+
+
+
+
+ org.asciidoctor
+ asciidoctorj-pdf
+ ${asciidoctorj.pdf.version}
+
+
+
+
+ output-html
+
+ process-asciidoc
+
+ site
+
+
+ hbase.css
+
+ html5
+
+
+
+ output-pdf
+
+ process-asciidoc
+
+ site
+
+ pdf
+
+
+
+
+ -
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-resources-plugin
+
+ false
+
+ \
+
+
+
+ copy-htaccess
+
+ copy-resources
+
+ site
+
+ ${project.reporting.outputDirectory}/
+
+
+ ${basedir}/src/site/resources/
+
+ .htaccess
+
+
+
+
+
+
+
+ copy-empty-book-dir
+
+ copy-resources
+
+ site
+
+ ${project.reporting.outputDirectory}/
+
+
+ ${basedir}/src/site/resources/
+
+ book/**
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ ${maven.antrun.version}
+ false
+
+
+
+ rename-pdf
+
+ run
+
+ site
+
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ buildnumber-maven-plugin
+
+ yyyy
+ build.year
+
+
+
+
+ create-timestamp
+
+ validate
+
+
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ ${maven.bundle.version}
+ true
+ true
+
+
+ com.diffplug.spotless
+ spotless-maven-plugin
+ ${spotless.version}
+
+
+
+
+ **/generated/*
+ **/package-info.java
+
+
+
+ Remove unhelpful javadoc stubs
+ (?m)^ *\* *@(?:param|throws|return) *\w* *\n
+
+
+
+
+ Purge single returns tag multi line
+ (?m)^ */\*\*\n *\* *@return *(.*) *\n *\*/$
+ /** Returns $1 */
+
+
+ Purge single returns tag single line
+ ^ */\*\* *@return *(.*) *\*/$
+ /** Returns $1 */
+
+
+
+ ${session.executionRootDirectory}/dev-support/hbase_eclipse_formatter.xml
+
+
+ ${session.executionRootDirectory}/dev-support/eclipse.importorder
+
+
+
+
+
+
+
+ false
+
+
+
+
+
+
+
+ **/*.xml
+ **/*.sh
+ **/*.py
+ **/Jenkinsfile*
+ **/*.md
+ *.md
+ **/*.txt
+ *.txt
+
+
+ **/target/**
+ **/dependency-reduced-pom.xml
+
+
+
+
+
+
+
+
+ src/main/java/**/*.java
+ src/test/java/**/*.java
+
+
+ **/generated/*
+ **/package-info.java
+
+ src/main/java/org/apache/hadoop/hbase/util/AbstractByteRange.java
+ src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java
+ src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java
+
+ src/main/java/org/apache/hadoop/hbase/metrics/impl/HBaseMetrics2HadoopMetricsAdapter.java
+
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCFileReader.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCFileWriter.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCInputFormat.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCOutputFormat.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCRecord.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCWritable.java
+
+
+ ${session.executionRootDirectory}/dev-support/license-header
+ package
+
+
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+ kr.motd.maven
+ os-maven-plugin
+ ${os.maven.version}
+
+
+
+
+
+
+
+ maven-project-info-reports-plugin
+ ${maven.project.info.report.version}
+
+
+ false
+
+
+
+
+ dependencies
+ dependency-convergence
+ dependency-info
+ dependency-management
+ index
+ issue-management
+ licenses
+ mailing-lists
+ plugin-management
+ plugins
+ team
+ scm
+ summary
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+
+
+ apiNote
+ a
+ API Note:
+
+
+
+
+
+
+ devapi
+
+ aggregate-no-fork
+
+
+ devapidocs
+ Developer API
+ The full HBase API, including private and unstable APIs
+
+ **/generated/*
+ **/protobuf/*
+
+ org.apache.hadoop.hbase.tmpl.common:com.google.protobuf:org.apache.hadoop.hbase.generated*
+ private
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+ testdevapi
+
+ test-aggregate-no-fork
+
+
+ testdevapidocs
+ Developer API
+ The full HBase API test code, including private and unstable APIs
+
+ **/generated/*
+ **/protobuf/*
+
+ org.apache.hadoop.hbase.tmpl.common:com.google.protobuf:org.apache.hadoop.hbase.generated*
+ private
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+
+
+ userapi
+
+ aggregate-no-fork
+
+
+ org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
+
+ org.apache.yetus
+ audience-annotations
+ ${javadoc.audience-annotations.version}
+
+ true
+ apidocs
+ User API
+ The HBase Application Programmer's API
+ org.apache.hadoop.hbase.backup*:org.apache.hadoop.hbase.catalog:org.apache.hadoop.hbase.client.coprocessor:org.apache.hadoop.hbase.client.metrics:org.apache.hadoop.hbase.codec*:org.apache.hadoop.hbase.constraint:org.apache.hadoop.hbase.coprocessor.*:org.apache.hadoop.hbase.executor:org.apache.hadoop.hbase.fs:*.generated.*:org.apache.hadoop.hbase.io.hfile.*:org.apache.hadoop.hbase.mapreduce.hadoopbackport:org.apache.hadoop.hbase.mapreduce.replication:org.apache.hadoop.hbase.master.*:org.apache.hadoop.hbase.metrics*:org.apache.hadoop.hbase.migration:org.apache.hadoop.hbase.monitoring:org.apache.hadoop.hbase.p*:org.apache.hadoop.hbase.regionserver.compactions:org.apache.hadoop.hbase.regionserver.handler:org.apache.hadoop.hbase.regionserver.snapshot:org.apache.hadoop.hbase.replication.*:org.apache.hadoop.hbase.rest.filter:org.apache.hadoop.hbase.rest.model:org.apache.hadoop.hbase.rest.p*:org.apache.hadoop.hbase.security.*:org.apache.hadoop.hbase.thrift*:org.apache.hadoop.hbase.tmpl.*:org.apache.hadoop.hbase.tool:org.apache.hadoop.hbase.trace:org.apache.hadoop.hbase.util.byterange*:org.apache.hadoop.hbase.util.test:org.apache.hadoop.hbase.util.vint:org.apache.hadoop.metrics2*:org.apache.hadoop.hbase.io.compress*
+
+ false
+ **/generated/*
+ protected
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+
+ testuserapi
+
+ test-aggregate-no-fork
+
+
+ org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
+
+ org.apache.yetus
+ audience-annotations
+ ${javadoc.audience-annotations.version}
+
+ true
+ testapidocs
+ User API
+ The HBase Application Programmer's API
+ org.apache.hadoop.hbase.backup*:org.apache.hadoop.hbase.catalog:org.apache.hadoop.hbase.client.coprocessor:org.apache.hadoop.hbase.client.metrics:org.apache.hadoop.hbase.codec*:org.apache.hadoop.hbase.constraint:org.apache.hadoop.hbase.coprocessor.*:org.apache.hadoop.hbase.executor:org.apache.hadoop.hbase.fs:*.generated.*:org.apache.hadoop.hbase.io.hfile.*:org.apache.hadoop.hbase.mapreduce.hadoopbackport:org.apache.hadoop.hbase.mapreduce.replication:org.apache.hadoop.hbase.master.*:org.apache.hadoop.hbase.metrics*:org.apache.hadoop.hbase.migration:org.apache.hadoop.hbase.monitoring:org.apache.hadoop.hbase.p*:org.apache.hadoop.hbase.regionserver.compactions:org.apache.hadoop.hbase.regionserver.handler:org.apache.hadoop.hbase.regionserver.snapshot:org.apache.hadoop.hbase.replication.*:org.apache.hadoop.hbase.rest.filter:org.apache.hadoop.hbase.rest.model:org.apache.hadoop.hbase.rest.p*:org.apache.hadoop.hbase.security.*:org.apache.hadoop.hbase.thrift*:org.apache.hadoop.hbase.tmpl.*:org.apache.hadoop.hbase.tool:org.apache.hadoop.hbase.trace:org.apache.hadoop.hbase.util.byterange*:org.apache.hadoop.hbase.util.test:org.apache.hadoop.hbase.util.vint:org.apache.hadoop.metrics2*:org.apache.hadoop.hbase.io.compress*
+
+ false
+ **/generated/*
+ protected
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.version}
+
+ target/**
+
+
+
+
+
+
+
+
+
+ build-with-jdk8
+
+ 1.8
+
+
+ ${compileSource}
+ ${compileSource}
+
+
+
+ build-with-jdk11
+
+ [11,)
+
+
+ ${releaseTarget}
+
+ ${hbase-surefire.jdk11.flags}
+ ${hbase-surefire.argLine}
+ @{jacocoArgLine}
+
+ 2200m
+
+ 0.14.1
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+ ${maven.javadoc.version}
+
+ ${compileSource}
+
+ --ignore-source-errors
+
+ -J-Xmx2G
+ -J--add-exports
+ -Jjdk.javadoc/jdk.javadoc.internal.tool=ALL-UNNAMED
+
+
+
+
+
+
+
+
+ build-with-jdk17
+
+ [17,)
+
+
+ ${hbase-surefire.jdk11.flags}
+ ${hbase-surefire.jdk17.flags}
+ ${hbase-surefire.argLine}
+ @{jacocoArgLine}
+
+
+
+
+ jenkins.patch
+
+ false
+
+ HBasePatchProcess
+
+
+
+ 2
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ false
+
+
+
+ run
+
+ validate
+
+
+ Maven Execution Environment
+ MAVEN_OPTS="${env.MAVEN_OPTS}"
+
+
+
+
+
+
+
+
+
+ jacoco
+
+ false
+
+
+ **/generated/**/*
+ **/generated/**/*,hbase-it/**,**/hbase-logging/**/*,**/hbase-testing-util/**/*,
+ **/hbase-protocol-shaded/**/*,**/hbase-external-blockcache/**/*,**/hbase-examples/**/*,
+ **/hbase-archetypes/**/*
+
+
+
+
+ org.jacoco
+ jacoco-maven-plugin
+ ${jacoco.version}
+
+
+ **/generated/**/*
+
+
+
+
+ prepare-agent
+
+ prepare-agent
+
+ initialize
+
+ jacocoArgLine
+ true
+
+
+
+ report
+
+ report
+
+ prepare-package
+
+
+
+
+ org.sonarsource.scanner.maven
+ sonar-maven-plugin
+ ${sonar-maven-plugin.version}
+
+
+
+
+
+ os.linux
+
+ false
+
+ Linux
+
+
+
+ ${os.name}-${os.arch}-${sun.arch.data.model}
+
+
+
+ os.mac
+
+
+ Mac
+
+
+
+ Mac_OS_X-${sun.arch.data.model}
+
+
+
+ os.windows
+
+
+ Windows
+
+
+
+ cygwin
+ ${hbase-surefire.cygwin-argLine} @{jacocoArgLine}
+
+
+
+
+ apache-release
+
+
+
+
+ org.sonatype.plugins
+ nexus-staging-maven-plugin
+ 1.6.8
+ true
+
+ https://repository.apache.org/
+ apache.releases.https
+
+
+
+
+
+
+
+ release
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+
+ check
+
+ package
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${enforcer.version}
+
+
+
+ ${compileSource}
+ HBase has unsupported dependencies.
+ HBase requires that all dependencies be compiled with version ${compileSource} or earlier
+ of the JDK to properly build from source. You appear to be using a newer dependency. You can use
+ either "mvn -version" or "mvn enforcer:display-info" to verify what version is active.
+ Non-release builds can temporarily build with a newer JDK version by setting the
+ 'compileSource' property (eg. mvn -DcompileSource=1.8 clean package).
+
+ module-info
+
+
+
+
+
+
+ org.codehaus.mojo
+ extra-enforcer-rules
+ ${extra.enforcer.version}
+
+
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ 2.7.6
+
+
+
+ makeBom
+
+ package
+
+
+
+
+
+
+
+
+
+
+ hadoop-3.0
+
+
+ !hadoop.profile
+
+
+
+ ${hadoop-three.version}
+ src/main/assembly/hadoop-three-compat.xml
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-core
+ ${hadoop-three.version}
+
+
+ com.google.guava
+ guava
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+ org.codehaus.jackson
+ *
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ javax.servlet
+ servlet-api
+
+
+ javax.inject
+ javax.inject
+
+
+ com.google.guava
+ guava
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-app
+ ${hadoop-three.version}
+ test-jar
+
+
+ org.codehaus.jackson
+ jackson-mapper-asl
+
+
+ org.codehaus.jackson
+ jackson-core-asl
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+ org.codehaus.jackson
+ *
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-jobclient
+ ${hadoop-three.version}
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ javax.servlet
+ servlet-api
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-jobclient
+ ${hadoop-three.version}
+ test-jar
+ test
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ javax.servlet
+ servlet-api
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop-three.version}
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.sun.jersey
+ jersey-server
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ servlet-api
+
+
+ stax
+ stax-api
+
+
+ xerces
+ xercesImpl
+
+
+ org.codehaus.jackson
+ *
+
+
+ com.google.guava
+ guava
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ org.fusesource.leveldbjni
+ leveldbjni-all
+
+
+ org.openlabtesting.leveldbjni
+ leveldbjni-all
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop-three.version}
+ test-jar
+ test
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ servlet-api
+
+
+ stax
+ stax-api
+
+
+ xerces
+ xercesImpl
+
+
+ org.codehaus.jackson
+ *
+
+
+ com.google.guava
+ guava
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-auth
+ ${hadoop-three.version}
+
+
+ com.google.guava
+ guava
+
+
+ net.minidev
+ json-smart
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop-three.version}
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.sun.jersey
+ jersey-json
+
+
+ com.sun.jersey
+ jersey-servlet
+
+
+ com.sun.jersey
+ jersey-server
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ javax.servlet-api
+
+
+ stax
+ stax-api
+
+
+ io.netty
+ netty
+
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ junit
+ junit
+
+
+ org.codehaus.jackson
+ *
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+
+
+
+ javax.activation
+ javax.activation-api
+ 1.2.0
+ test
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop-three.version}
+ test-jar
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+ org.codehaus.jackson
+ *
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+
+
+
+ org.apache.hadoop
+ hadoop-client
+ ${hadoop-three.version}
+
+
+ org.apache.hadoop
+ hadoop-annotations
+ ${hadoop-three.version}
+
+
+
+ org.apache.hadoop
+ hadoop-minicluster
+ ${hadoop-three.version}
+
+
+
+ commons-httpclient
+ commons-httpclient
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ servlet-api
+
+
+ stax
+ stax-api
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-minikdc
+ ${hadoop-three.version}
+ test
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ bouncycastle
+ bcprov-jdk15
+
+
+
+
+ org.apache.hadoop
+ hadoop-distcp
+ ${hadoop-three.version}
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs-client
+ ${hadoop-three.version}
+
+
+
+
+
+
+
+
+ singleJVMTests
+
+ false
+
+
+ 1
+ false
+ true
+
+
+
+
+
+ runSmallTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.SmallTests
+
+
+
+
+
+ runMediumTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MediumTests
+
+
+
+
+
+ runLargeTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.LargeTests
+
+
+
+
+
+ runDevTests
+
+ false
+
+
+ 1
+ false
+ false
+ org.apache.hadoop.hbase.testclassification.SmallTests
+ org.apache.hadoop.hbase.testclassification.MediumTests
+
+
+
+
+ runAllTests
+
+ false
+
+
+ false
+ false
+ org.apache.hadoop.hbase.testclassification.SmallTests
+ org.apache.hadoop.hbase.testclassification.MediumTests,org.apache.hadoop.hbase.testclassification.LargeTests
+
+
+
+ runMiscTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MiscTests
+
+
+
+
+ runCoprocessorTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.CoprocessorTests
+
+
+
+
+ runClientTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.ClientTests
+
+
+
+
+ runMasterTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MasterTests
+
+
+
+
+ runMapredTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MapredTests
+
+
+
+
+ runMapreduceTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MapReduceTests
+
+
+
+
+ runRegionServerTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RegionServerTests
+
+
+
+
+ runVerySlowMapReduceTests
+
+ false
+
+
+ 2
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
+
+
+
+
+
+ runVerySlowRegionServerTests
+
+ false
+
+
+ 2
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests
+
+
+
+
+
+ runFilterTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.FilterTests
+
+
+
+
+ runIOTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.IOTests
+
+
+
+
+ runRestTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RestTests
+
+
+
+
+ runRPCTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RPCTests
+
+
+
+
+ runReplicationTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.ReplicationTests
+
+
+
+
+ runSecurityTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.SecurityTests
+
+
+
+
+ runFlakeyTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.FlakeyTests
+
+
+
+
+ runZKTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.ZKTests
+
+
+
+
+ runRSGroupTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RSGroupTests
+
+
+
+
+
+
+ localTests
+
+
+ test
+
+
+
+ surefire-junit4
+ false
+ true
+
+
+
+
+
+ clover
+
+ false
+
+ clover
+
+
+
+ ${user.home}/.clover.license
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+
+ com.atlassian.maven.plugins
+ maven-clover2-plugin
+ ${clover.version}
+
+
+
+
+ com.atlassian.maven.plugins
+ maven-clover2-plugin
+ ${clover.version}
+
+ true
+ true
+ 50%
+ true
+ true
+
+ **/generated/**
+
+
+
+
+ clover-setup
+
+ setup
+
+ process-sources
+
+
+ clover
+
+ clover
+
+ site
+
+
+
+
+
+
+
+
+ site-install-step
+
+ true
+ true
+ true
+ true
+ true
+ true
+
+
+
+
+ site-build-step
+
+ true
+ true
+ true
+ true
+ true
+ true
+ true
+
+
+
+ eclipse-specific
+
+
+ m2e.version
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-eclipse-plugin
+ ${maven.eclipse.version}
+
+
+
+ org.eclipse.m2e
+ lifecycle-mapping
+ ${lifecycle.mapping.version}
+
+
+
+
+
+ org.jacoco
+ jacoco-maven-plugin
+ [0.6.2.201302030002,)
+
+ prepare-agent
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${enforcer.version}
+
+ enforce
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-remote-resources-plugin
+ [1.5,)
+
+ process
+ bundle
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ buildnumber-maven-plugin
+ [1.3,)
+
+ create-timestamp
+
+
+
+
+ true
+ true
+
+
+
+
+
+
+
+
+
+
+
+
+ aarch64
+
+
+ linux
+ aarch64
+
+
+
+ org.openlabtesting.protobuf
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.conf
new file mode 100644
index 00000000..d5ccdcab
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data
new file mode 100644
index 00000000..49dd7e4e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L491!!!org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists!!!org.apache.hadoop.fs.FileSystem.exists!!!FSUtils.java:494!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.conf
new file mode 100644
index 00000000..ae32237b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data
new file mode 100644
index 00000000..62344dac
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/RegionReplicaFlushHandler.java#L107!!!org.apache.hadoop.hbase.regionserver.handler.RegionReplicaFlushHandler.triggerFlushInPrimaryRegion!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!RegionReplicaFlushHandler.java:114!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.conf
new file mode 100644
index 00000000..bb834696
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data
new file mode 100644
index 00000000..0bf27050
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L963!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.bulkLoadPhase!!!BulkLoadHFilesTool.java:990!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.conf
new file mode 100644
index 00000000..383bd64b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data
new file mode 100644
index 00000000..342154a3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L5026!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5060!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.conf
new file mode 100644
index 00000000..fff77a77
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data
new file mode 100644
index 00000000..9c88deb7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L409!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmOpened!!!TransitRegionStateProcedure.java:437!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.conf
new file mode 100644
index 00000000..3a43c78e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data
new file mode 100644
index 00000000..a44778da
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java#L250!!!org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!BlockingRpcConnection.java:259!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.conf
new file mode 100644
index 00000000..cb7e668f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.conf
new file mode 100644
index 00000000..94e807cb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data
new file mode 100644
index 00000000..488008f1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone!!!RemoteProcedureResultReporter.java:89!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.conf
new file mode 100644
index 00000000..73ff36cd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.conf
new file mode 100644
index 00000000..efd03df7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data
new file mode 100644
index 00000000..b97f8839
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L593!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.write!!!FSUtils.java:598!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.conf
new file mode 100644
index 00000000..e69de29b
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.data
new file mode 100644
index 00000000..b97f8839
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L593!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.write!!!FSUtils.java:598!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.conf
new file mode 100644
index 00000000..9563643f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data
new file mode 100644
index 00000000..5bf1851d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone!!!RemoteProcedureResultReporter.java:89!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.conf
new file mode 100644
index 00000000..ddd75ec1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data
new file mode 100644
index 00000000..f55d877d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L589!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!FanOutOneBlockAsyncDFSOutputHelper.java:591!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.conf
new file mode 100644
index 00000000..8ce0058d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data
new file mode 100644
index 00000000..2e8b30d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2524!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2544!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.conf
new file mode 100644
index 00000000..27480884
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data
new file mode 100644
index 00000000..7c637b59
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L593!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.rename!!!FSUtils.java:608!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.conf
new file mode 100644
index 00000000..1547bbc9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data
new file mode 100644
index 00000000..f55d877d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L589!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!FanOutOneBlockAsyncDFSOutputHelper.java:591!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.conf
new file mode 100644
index 00000000..fa27b8d9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.conf
new file mode 100644
index 00000000..7e376b76
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data
new file mode 100644
index 00000000..b0c79f70
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotRegionCallable.java#L57!!!org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!SnapshotRegionCallable.java:58!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.conf
new file mode 100644
index 00000000..011eaeaa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data
new file mode 100644
index 00000000..b0c79f70
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotRegionCallable.java#L57!!!org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!SnapshotRegionCallable.java:58!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.conf
new file mode 100644
index 00000000..20247157
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data
new file mode 100644
index 00000000..ee608d9c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.getLogFiles!!!WALProcedureStore.java:410!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.conf
new file mode 100644
index 00000000..5d7abee8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data
new file mode 100644
index 00000000..342154a3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L5026!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5060!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.conf
new file mode 100644
index 00000000..2f0d763d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data
new file mode 100644
index 00000000..432f7dce
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BootstrapNodeManager.java#L135!!!org.apache.hadoop.hbase.regionserver.BootstrapNodeManager.getFromMaster!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!BootstrapNodeManager.java:140!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.conf
new file mode 100644
index 00000000..99593176
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data
new file mode 100644
index 00000000..2dc7a9e7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L539!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint!!!ReplicationSource.java:552!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.conf
new file mode 100644
index 00000000..21d0c5c5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.conf
new file mode 100644
index 00000000..29dc8945
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data
new file mode 100644
index 00000000..2e8b30d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2524!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2544!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.conf
new file mode 100644
index 00000000..0115d5d5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data
new file mode 100644
index 00000000..cc983cac
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java#L914!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archiveLogFile!!!AbstractFSWAL.java:916!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.conf
new file mode 100644
index 00000000..3ef2bb84
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data
new file mode 100644
index 00000000..44b4c95c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L136!!!org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState!!!org.apache.hadoop.hbase.master.MasterServices.getProcedures!!!ServerCrashProcedure.java:272!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.conf
new file mode 100644
index 00000000..a9e31ef1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data
new file mode 100644
index 00000000..0f616884
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished!!!SyncReplicationReplayWALProcedure.java:75!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.conf
new file mode 100644
index 00000000..0e646cf9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data
new file mode 100644
index 00000000..0f616884
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished!!!SyncReplicationReplayWALProcedure.java:75!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.conf
new file mode 100644
index 00000000..ce4da4cd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data
new file mode 100644
index 00000000..2dc7a9e7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L539!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint!!!ReplicationSource.java:552!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.conf
new file mode 100644
index 00000000..41700205
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data
new file mode 100644
index 00000000..2e8b30d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2524!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2544!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.conf
new file mode 100644
index 00000000..5a9a57c7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data
new file mode 100644
index 00000000..8ed03bb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L5026!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5055!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.conf
new file mode 100644
index 00000000..a3bf63be
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data
new file mode 100644
index 00000000..49dd7e4e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L491!!!org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists!!!org.apache.hadoop.fs.FileSystem.exists!!!FSUtils.java:494!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.conf
new file mode 100644
index 00000000..373f6db4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data
new file mode 100644
index 00000000..85eb4119
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1019!!!org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.moveRegionsBetweenGroups!!!moveAsync!!!RSGroupInfoManagerImpl.java:1037!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.conf
new file mode 100644
index 00000000..35f7c318
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data
new file mode 100644
index 00000000..aac0fbe6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/FlushSnapshotSubprocedure.java#L113!!!org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!FlushSnapshotSubprocedure.java:114!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.conf
new file mode 100644
index 00000000..3cc01cc8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data
new file mode 100644
index 00000000..a4e361b9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/MoveWithAck.java#L76!!!org.apache.hadoop.hbase.util.MoveWithAck.call!!!org.apache.hadoop.hbase.client.Admin.move!!!MoveWithAck.java:82!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.conf
new file mode 100644
index 00000000..d3f5a448
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data
new file mode 100644
index 00000000..ca47f588
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/DualAsyncFSWAL.java#L76!!!org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance!!!org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter!!!DualAsyncFSWAL.java:82!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.conf
new file mode 100644
index 00000000..88b0419c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data
new file mode 100644
index 00000000..9c88deb7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L409!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmOpened!!!TransitRegionStateProcedure.java:437!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive.conf
new file mode 100644
index 00000000..6888129b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive.conf
@@ -0,0 +1,3 @@
+retry_data_file: /home/bastoica/projects/current/wasabi/tool/config/hive/hive_retry_locations.data
+injection_policy: max-count
+max_injection_count: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_retry_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_retry_bounds.data
new file mode 100644
index 00000000..7e745685
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_retry_bounds.data
@@ -0,0 +1,32 @@
+Var name!!!Assigned value!!!Assign method!!!Test class
+HIVE_SERVER2_THRIFT_CLIENT_CONNECTION_RETRY_LIMIT!!!0!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_SERVER2_THRIFT_CLIENT_CONNECTION_RETRY_LIMIT!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_SERVER2_THRIFT_CLIENT_CONNECTION_RETRY_LIMIT!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_SERVER2_THRIFT_CLIENT_RETRY_LIMIT!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_LOCK_SLEEP_BETWEEN_RETRIES!!!100!!!org.apache.hadoop.hive.conf.HiveConf.setTimeVar!!!TestConcurrentDppInserts
+HIVE_LOCK_NUMRETRIES!!!2!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestDbTxnManager2
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestPermsGrp
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHiveClientCache
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatMultiOutputFormat
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatPartitionPublish
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatClient
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestPermsGrp
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHiveClientCache
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatMultiOutputFormat
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatPartitionPublish
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestFilterHooks
+THRIFT_CONNECTION_RETRIES!!!10!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStoreGetMetaConf
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStorePartitionSpecs
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStoreWithEnvironmentContext
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHmsServerAuthorization
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEndFunctionListener
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEventListener
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEventListenerOnlyOnCommit
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEventListenerWithOldConf
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreInitListener
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestRetryingHMSHandler
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStoreAuthorizer
+HMS_HANDLER_ATTEMPTS!!!4!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestObjectStoreInitRetry
+HMS_HANDLER_ATTEMPTS!!!2!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestRetryingHMSHandler
+HIVE_COMPACTOR_CLEANER_MAX_RETRY_ATTEMPTS!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestCleaner
+JOB_TIMEOUT_TASK_RETRY_COUNT!!!4!!!org.apache.hadoop.conf.Configuration.setInt!!!TestConcurrentJobRequestsThreadsAndTimeout
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_retry_locations.data
new file mode 100644
index 00000000..11cc901d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_retry_locations.data
@@ -0,0 +1,74 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java#L134!!!org.apache.hadoop.hive.llap.ProactiveEviction.run!!!evictEntity!!!ProactiveEviction.java:143!!!org.apache.hive.service.ServiceException
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/YarnQueueHelper.java#L120!!!org.apache.hadoop.hive.ql.exec.tez.YarnQueueHelper.checkQueueAccessInternal!!!checkQueueAccessFromSingleRm!!!YarnQueueHelper.java:131!!!java.io.IOException
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/leader/LeaseLeaderElection.java#L175!!!org.apache.hadoop.hive.metastore.leader.LeaseLeaderElection.tryBeLeader!!!lock!!!LeaseLeaderElection.java:177!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/blob/e427ce0d572c9adf6f194693a1b3ba85f246f3b7/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/NotificationListener.java#L304!!!org.apache.hive.hcatalog.listener.NotificationListener.send!!!createProducer!!!NotificationListener.java:316!!!JMSException
+https://github.com/apache/hive/tree//e427ce0//common/src/java/org/apache/hive/common/util/RetryUtilities.java#L87!!!org.apache.hive.common.util.RetryUtilities$ExponentiallyDecayingBatchWork.run!!!org.apache.hive.common.util.RetryUtilities$ExponentialBackOffRetry.execute!!!RetryUtilities.java:93!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//common/src/test/org/apache/hive/common/util/Retry.java#L59!!!org.apache.hive.common.util.Retry$RetryingStatement.evaluate!!!org.junit.runners.model.Statement.evaluate!!!Retry.java:61!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!DruidStorageHandlerUtils.java:774!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.rename!!!DruidStorageHandlerUtils.java:776!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.exists!!!DruidStorageHandlerUtils.java:777!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/LauncherDelegator.java#L234!!!org.apache.hive.hcatalog.templeton.LauncherDelegator.killTempletonJobWithRetry!!!org.apache.hive.hcatalog.templeton.LauncherDelegator.killJob!!!LauncherDelegator.java:237!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L379!!!org.apache.hive.jdbc.HiveConnection.HiveConnection!!!org.apache.hive.jdbc.HiveConnection.executeInitSql!!!HiveConnection.java:387!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L379!!!org.apache.hive.jdbc.HiveConnection.HiveConnection!!!org.apache.hive.jdbc.HiveConnection.openSession!!!HiveConnection.java:386!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L379!!!org.apache.hive.jdbc.HiveConnection.HiveConnection!!!org.apache.hive.jdbc.HiveConnection.openTransport!!!HiveConnection.java:382!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//kafka-handler/src/java/org/apache/hadoop/hive/kafka/RetryUtils.java#L90!!!org.apache.hadoop.hive.kafka.RetryUtils.retry!!!org.apache.hadoop.hive.kafka.RetryUtils$Task.perform!!!RetryUtils.java:93!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//llap-client/src/java/org/apache/hadoop/hive/registry/impl/ZkRegistryBase.java#L609!!!org.apache.hadoop.hive.registry.impl.ZkRegistryBase.ensureInstancesCache!!!org.apache.curator.framework.recipes.cache.PathChildrenCache.start!!!ZkRegistryBase.java:644!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//llap-common/src/java/org/apache/hadoop/hive/llap/AsyncPbRpcProxy.java#L442!!!org.apache.hadoop.hive.llap.AsyncPbRpcProxy$AsyncCallableRequest.call!!!org.apache.hadoop.hive.llap.AsyncPbRpcProxy$AsyncCallableRequest.callInternal!!!AsyncPbRpcProxy.java:444!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/repl/atlas/RetryingClientTimeBased.java#L49!!!org.apache.hadoop.hive.ql.exec.repl.atlas.RetryingClientTimeBased.invokeWithRetry!!!java.util.concurrent.Callable.call!!!RetryingClientTimeBased.java:52!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.hadoop.hive.ql.Context.checkHeartbeaterLockException!!!TezJobMonitor.java:171!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!Retryable.java:75!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.security.UserGroupInformation.getLoginUser!!!Retryable.java:75!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.reloginExpiringKeytabUser!!!Retryable.java:74!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3133!!!org.apache.hadoop.hive.ql.exec.Utilities.executeWithRetry!!!org.apache.hadoop.hive.ql.exec.Utilities$SQLCommand.run!!!Utilities.java:3144!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3173!!!org.apache.hadoop.hive.ql.exec.Utilities.connectWithRetry!!!java.sql.DriverManager.getConnection!!!Utilities.java:3144!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3214!!!org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry!!!java.sql.Connection.prepareStatement!!!Utilities.java:3225!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.maybeRolloverWriterForDay!!!HiveProtoLoggingHook.java:327!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.writeProto!!!HiveProtoLoggingHook.java:328!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.hflush!!!HiveProtoLoggingHook.java:329!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.tez.dag.history.logging.proto.DatePartitionedLogger.getWriter!!!HiveProtoLoggingHook.java:321!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L114!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getMS!!!DbLockManager.java:104!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L114!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.metastore.IMetaStoreClient.checkLock!!!DbLockManager.java:118!!!org.apache.hadoop.hive.metastore.api.NoSuchLockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/EmbeddedLockManager.java#L113!!!org.apache.hadoop.hive.ql.lockmgr.EmbeddedLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.EmbeddedLockManager.lockPrimitive!!!EmbeddedLockManager.java:117!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L298!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lockPrimitive!!!ZooKeeperHiveLockManager.java:306!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L487!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.unlockWithRetry!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.unlockPrimitive!!!ZooKeeperHiveLockManager.java:493!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java#L232!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.getFilesToRetry!!!CopyUtils.java:257!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java#L232!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyOnce!!!CopyUtils.java:268!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/cli/thrift/RetryingThriftCLIServiceClient.java#L290!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.connectWithRetry!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.connect!!!RetryingThriftCLIServiceClient.java:292!!!org.apache.hive.service.cli.HiveSQLException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/cli/thrift/RetryingThriftCLIServiceClient.java#L382!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.invoke!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.connectWithRetry!!!RetryingThriftCLIServiceClient.java:391!!!org.apache.hive.service.cli.HiveSQLException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/cli/thrift/RetryingThriftCLIServiceClient.java#L382!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.invoke!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.invokeInternal!!!RetryingThriftCLIServiceClient.java:385!!!org.apache.hive.service.cli.HiveSQLException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/server/HiveServer2.java#L1089!!!org.apache.hive.service.server.HiveServer2.startHiveServer2!!!org.apache.hive.service.server.HiveServer2.init!!!HiveServer2.java:1112!!!org.apache.hive.service.ServiceException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/server/HiveServer2.java#L1089!!!org.apache.hive.service.server.HiveServer2.startHiveServer2!!!org.apache.hive.service.server.HiveServer2.start!!!HiveServer2.java:1113!!!org.apache.hive.service.ServiceException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createHttpClient!!!HiveMetaStoreClient.java:798!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createBinaryClient!!!HiveMetaStoreClient.java:800!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.thrift.transport.TTransport.open!!!HiveMetaStoreClient.java:816!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClient.java:848!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Iface.set_ugi!!!HiveMetaStoreClient.java:849!!!org.apache.thrift.TException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java#L178!!!org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.reloginExpiringKeytabUser!!!RetryingMetaStoreClient.java:175!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java#L178!!!org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!RetryingMetaStoreClient.java:184!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readBool!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readEnum!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readInt64!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readStringRequireUtf8!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readTag!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.GeneratedMessageV3.parseUnknownField!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L11654!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor.run!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor$Command.process!!!ObjectStore.java:11999!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java#L138!!!org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal!!!org.apache.hadoop.hive.metastore.MetaStoreInit.updateConnectionURL!!!RetryingHMSHandler.java:172!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java#L138!!!org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal!!!org.apache.hadoop.hive.metastore.Deadline.startTimer!!!RetryingHMSHandler.java:89!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.getInt!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.getLong!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.getString!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.next!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!org.apache.hadoop.hive.metastore.txn.TxnUtils.dbCompactionType2ThriftType!!!N/A!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java#L847!!!org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils.loopUntilHMSReady!!!java.net.Socket.close!!!MetaStoreServerUtils.java:917!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java#L847!!!org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils.loopUntilHMSReady!!!java.net.Socket.connect!!!MetaStoreServerUtils.java:916!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/RetryUtilities.java#L85!!!org.apache.hadoop.hive.metastore.utils.RetryUtilities$ExponentiallyDecayingBatchWork.run!!!org.apache.hadoop.hive.metastore.utils.RetryUtilities$ExponentialBackOffRetry.execute!!!RetryUtilities.java:91!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Iface.set_ugi!!!HiveMetaStoreClientPreCatalog.java:560!!!org.apache.thrift.TException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.getPassword!!!HiveMetaStoreClientPreCatalog.java:462!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Client.createClientTransport!!!HiveMetaStoreClientPreCatalog.java:507!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Client.createClientTransport!!!HiveMetaStoreClientPreCatalog.java:514!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getSSLSocket!!!HiveMetaStoreClientPreCatalog.java:470!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getTokenStrForm!!!HiveMetaStoreClientPreCatalog.java:502!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClientPreCatalog.java:559!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.thrift.transport.TTransport.open!!!HiveMetaStoreClientPreCatalog.java:542!!!org.apache.thrift.transport.TTransportException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_timeout_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_timeout_bounds.data
new file mode 100644
index 00000000..34df60d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/hive_timeout_bounds.data
@@ -0,0 +1,424 @@
+TestAbortedTxnCleaner.testAbortedCleaningWithThreeTxnsWithDiffWriteIds
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesBelowBase
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesForMultiplePartitions
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesForSinglePartition
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesForUnpartitionedTables
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesOnTopOfBase
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesWithLongRunningOpenWriteTxn
+TestCliDriverMethods.testprocessInitFiles
+TestCliDriverMethods.testProcessSelectDatabase
+TestCliDriverMethods.testRun
+TestCLIServiceConnectionLimits.testConnectionForwardedIpAddresses
+TestCLIServiceConnectionLimits.testConnectionLimitPerIpAddress
+TestCLIServiceConnectionLimits.testConnectionLimitPerUser
+TestCLIServiceConnectionLimits.testConnectionLimitPerUserIpAddress
+TestCLIServiceConnectionLimits.testConnectionMultipleLimitsIPAndUserIP
+TestCLIServiceConnectionLimits.testConnectionMultipleLimitsUserAndIP
+TestCLIServiceConnectionLimits.testConnectionMultipleLimitsUserIPAndUser
+TestCLIServiceConnectionLimits.testIncrementAndDecrementConnectionsUser
+TestCLIServiceConnectionLimits.testInvalidIpaddress
+TestCLIServiceConnectionLimits.testInvalidUserIpaddress
+TestCLIServiceConnectionLimits.testInvalidUserName
+TestCLIServiceConnectionLimits.testNoLimit
+TestCLIServiceRestore.testRestore
+TestColumnAccess.testJoinTable1AndTable2
+TestColumnAccess.testJoinView1AndTable2
+TestColumnAccess.testQueryTable1
+TestColumnAccess.testShowPartitions
+TestCommands.testBasicReplEximCommands
+TestCommands.testBeelineCommands
+TestCommands.testDropDatabaseCommand
+TestCommands.testMetadataReplEximCommands
+TestCommands.testNoopReplEximCommands
+TestCommandWithSpace.testCommandWithPrefixSpace
+TestCompactionMetrics.testInitiatorPerfMetricsEnabled
+TestCompactionMetrics.testOldestReadyForCleaningAge
+TestCompactionMetrics.testWorkerPerfMetrics
+TestDbTxnManager.testDDLExclusive
+TestDbTxnManager.testDDLNoLock
+TestDbTxnManager.testDDLShared
+TestDbTxnManager.testDelete
+TestDbTxnManager.testExceptions
+TestDbTxnManager.testHeartbeater
+TestDbTxnManager.testHeartbeaterReplicationTxn
+TestDbTxnManager.testJoin
+TestDbTxnManager.testLockAcquisitionAndRelease
+TestDbTxnManager.testReadWrite
+TestDbTxnManager.testRollback
+TestDbTxnManager.testSingleReadMultiPartition
+TestDbTxnManager.testSingleReadPartition
+TestDbTxnManager.testSingleReadTable
+TestDbTxnManager.testSingleWritePartition
+TestDbTxnManager.testSingleWriteTable
+TestDbTxnManager.testUpdate
+TestDbTxnManager.testWriteDynamicPartition
+TestDbTxnManager2.testMergePartitioned
+TestDbTxnManagerIsolationProperties.testRebuildMVWhenOpenTxnPresents
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromCleanerWithAcidMetricsThreadDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromCleanerWithMetricsDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromInitiatorWithAcidMetricsThreadDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromInitiatorWithMetricsDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromWorkerWithAcidMetricsThreadDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromWorkerWithMetricsDisabled
+TestDeltaFilesMetrics.testDeltaFileMetricMultiPartitionedTable
+TestDeltaFilesMetrics.testDeltaFileMetricPartitionedTable
+TestDeltaFilesMetrics.testDeltaFileMetricUnpartitionedTable
+TestDMLSemanticAnalyzer.testDeleteAllNonPartitioned
+TestDMLSemanticAnalyzer.testDeleteAllPartitioned
+TestDMLSemanticAnalyzer.testDeleteAllWherePartitioned
+TestDMLSemanticAnalyzer.testDeleteOnePartition
+TestDMLSemanticAnalyzer.testDeleteOnePartitionWhere
+TestDMLSemanticAnalyzer.testDeleteWhereNoPartition
+TestDMLSemanticAnalyzer.testInsertSelect
+TestDMLSemanticAnalyzer.testInsertValues
+TestDMLSemanticAnalyzer.testInsertValuesPartitioned
+TestDMLSemanticAnalyzer.testUpdateAllNonPartitioned
+TestDMLSemanticAnalyzer.testUpdateAllNonPartitionedWhere
+TestDMLSemanticAnalyzer.testUpdateAllPartitioned
+TestDMLSemanticAnalyzer.testUpdateAllPartitionedWhere
+TestDMLSemanticAnalyzer.testUpdateOnePartition
+TestDMLSemanticAnalyzer.testUpdateOnePartitionWhere
+TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithoutPurge
+TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithPurge
+TestDruidStorageHandler.testCommitInsertIntoTable
+TestDruidStorageHandler.testCommitInsertIntoWhenDestinationSegmentFileExist
+TestDruidStorageHandler.testCommitInsertIntoWithConflictingIntervalSegment
+TestDruidStorageHandler.testCommitInsertIntoWithNonExtendableSegment
+TestDruidStorageHandler.testCommitInsertOverwriteTable
+TestDruidStorageHandler.testCommitInsertTable
+TestDruidStorageHandler.testCommitMultiInsertOverwriteTable
+TestDruidStorageHandler.testInsertIntoAppendOneMorePartition
+TestDummyTxnManager.testSingleReadTable
+TestE2EScenarios.testReadOrcAndRCFromPig
+TestEmbeddedLockManager.testLocking
+TestExecDriver.testMapPlan1
+TestExecDriver.testMapPlan2
+TestExecDriver.testMapRedPlan1
+TestExecDriver.testMapRedPlan2
+TestExecDriver.testMapRedPlan3
+TestExecDriver.testMapRedPlan4
+TestExecDriver.testMapRedPlan5
+TestExecDriver.testMapRedPlan6
+TestExprProcessorGetFuncExpr.testLookupFunctionOnDemand
+TestFileSinkOperator.testDeleteDynamicPartitioning
+TestFileSinkOperator.testInsertDynamicPartitioning
+TestFileSinkOperator.testNonAcidDynamicPartitioning
+TestFileSinkOperator.testNonAcidRemoveDuplicate
+TestFileSinkOperator.testNonAcidWrite
+TestFileSinkOperator.testUpdateDynamicPartitioning
+TestFilterHooks.testHMSClientWithFilter
+TestFilterHooks.testHMSClientWithoutFilter
+TestFilterHooks.testHMSServerWithFilter
+TestFilterHooks.testHMSServerWithoutFilter
+TestGenericUDTFGetSQLSchema.testWithComplexTypes
+TestGenericUDTFGetSQLSchema.testWithDDL
+TestGenericUDTFGetSQLSchema.testWithSimpleTypes
+TestGetInputSummary.testGetInputSummaryWithInputEstimator
+TestGetPartitionAuthWithBatches.testSmallNumberOfPartitions
+TestGetPartitionInBatches.testGetAllPartitionsOf
+TestHBaseQueries.testRollbackDoesNotDeleteOriginTableWhenCTLTFails
+TestHCatClient.testBasicDDLCommands
+TestHCatClient.testCreateTableLike
+TestHCatClient.testDatabaseLocation
+TestHCatClient.testDropPartitionsWithPartialSpec
+TestHCatClient.testDropTableException
+TestHCatClient.testEmptyTableInstantiation
+TestHCatClient.testGetMessageBusTopicName
+TestHCatClient.testGetPartitionsWithPartialSpec
+TestHCatClient.testObjectNotFoundException
+TestHCatClient.testOtherFailure
+TestHCatClient.testPartitionRegistrationWithCustomSchema
+TestHCatClient.testPartitionSchema
+TestHCatClient.testPartitionsHCatClientImpl
+TestHCatClient.testPartitionSpecRegistrationWithCustomSchema
+TestHCatClient.testRenameTable
+TestHCatClient.testReplicationTaskIter
+TestHCatClient.testTableSchemaPropagation
+TestHCatClient.testTransportFailure
+TestHCatClient.testUpdateTableSchema
+TestHCatDynamicPartitioned.testHCatDynamicPartitionedTable
+TestHCatDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
+TestHCatExternalDynamicPartitioned.testHCatExternalDynamicCustomLocation
+TestHCatInputFormat.testBadRecordHandlingPasses
+TestHCatInputFormatMethods.testGetPartitionAndDataColumns
+TestHCatLoaderComplexSchema.testMapNullKey
+TestHCatLoaderComplexSchema.testMapWithComplexData
+TestHCatLoaderComplexSchema.testSyntheticComplexSchema
+TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag
+TestHCatLoaderEncryption.testReadDataFromEncryptedHiveTableByPig
+TestHCatLoaderStorer.testReadWrite
+TestHCatLoaderStorer.testSmallTinyInt
+TestHCatMultiOutputFormat.testOutputFormat
+TestHCatNonPartitioned.testHCatNonPartitionedTable
+TestHCatOutputFormat.testGetTableSchema
+TestHCatOutputFormat.testSetOutput
+TestHCatPartitioned.testHCatPartitionedTable
+TestHCatPartitionPublish.testPartitionPublish
+TestHCatStorerMulti.testStorePartitionedTable
+TestHCatStorerWrapper.testStoreExternalTableWithExternalDir
+TestHive.testAutoPurgeTablesAndPartitions
+TestHive.testDropMissingPartitionsByFilter
+TestHive.testDropPartitionsWithPurge
+TestHive.testDropTableTrash
+TestHive.testGetAndDropTables
+TestHive.testGetPartitionsWithMaxLimit
+TestHive.testHiveCloseCurrent
+TestHive.testHiveRefreshOnConfChange
+TestHive.testMetaStoreApiTiming
+TestHive.testPartition
+TestHive.testTable
+TestHive.testThriftTable
+TestHive.testWmNamespaceHandling
+TestHiveAuthorizationTaskFactory.testGrantGroupTable
+TestHiveAuthorizationTaskFactory.testGrantRoleGroup
+TestHiveAuthorizationTaskFactory.testGrantRoleRole
+TestHiveAuthorizationTaskFactory.testGrantRoleTable
+TestHiveAuthorizationTaskFactory.testGrantRoleUser
+TestHiveAuthorizationTaskFactory.testGrantServer
+TestHiveAuthorizationTaskFactory.testGrantUri
+TestHiveAuthorizationTaskFactory.testGrantUserTable
+TestHiveAuthorizationTaskFactory.testRevokeGroupTable
+TestHiveAuthorizationTaskFactory.testRevokeRoleGroup
+TestHiveAuthorizationTaskFactory.testRevokeRoleRole
+TestHiveAuthorizationTaskFactory.testRevokeRoleTable
+TestHiveAuthorizationTaskFactory.testRevokeRoleUser
+TestHiveAuthorizationTaskFactory.testRevokeUserTable
+TestHiveCli.testCmd
+TestHiveCli.testCommentStripping
+TestHiveCli.testDatabaseOptions
+TestHiveCli.testErrOutput
+TestHiveCli.testInValidCmd
+TestHiveCli.testInvalidDatabaseOptions
+TestHiveCli.testNoErrorDB
+TestHiveCli.testSetHeaderValue
+TestHiveCli.testSetPromptValue
+TestHiveCli.testSourceCmd
+TestHiveCli.testSourceCmd3
+TestHiveCli.testSourceCmd4
+TestHiveCli.testSqlFromCmd
+TestHiveCli.testSqlFromCmdWithComments1
+TestHiveCli.testSqlFromCmdWithComments2
+TestHiveCli.testSqlFromCmdWithComments3
+TestHiveCli.testSqlFromCmdWithDBName
+TestHiveCli.testSqlFromCmdWithEmbeddedQuotes
+TestHiveCli.testUseCurrentDB1
+TestHiveCli.testUseCurrentDB2
+TestHiveCli.testUseCurrentDB3
+TestHiveCli.testUseInvalidDB
+TestHiveCli.testVariables
+TestHiveCli.testVariablesForSource
+TestHiveClientCache.testCacheExpiry
+TestHiveClientCache.testCacheHit
+TestHiveClientCache.testCacheMiss
+TestHiveClientCache.testCloseAllClients
+TestHiveClientCache.testMultipleThreadAccess
+TestHiveDecimalParse.testDecimalType
+TestHiveDecimalParse.testDecimalType1
+TestHiveDecimalParse.testDecimalType2
+TestHiveDecimalParse.testDecimalType3
+TestHiveDecimalParse.testDecimalType4
+TestHiveDecimalParse.testDecimalType5
+TestHiveDecimalParse.testDecimalType6
+TestHiveDecimalParse.testDecimalType7
+TestHiveDecimalParse.testDecimalType8
+TestHiveDecimalParse.testDecimalType9
+TestHiveFunctionHelper.testGetUDTFFunction
+TestHiveFunctionHelper.testGetUDTFFunctionThrowingException
+TestHiveMetaStoreChecker.testSingleThreadedDeeplyNestedTables
+TestHiveMetaStoreClientApiArgumentsChecker.testGetPartitionNames2
+TestHiveMetaStoreClientApiArgumentsChecker.testGetPartitions
+TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault
+TestHiveMetaStoreTxns.testAllocateTableWriteIdForReadOnlyTxn
+TestHiveMetaStoreTxns.testGetLatestCommittedCompactionInfo
+TestHiveMetaStoreTxns.testGetValidWriteIds
+TestHiveMetaStoreTxns.testLocks
+TestHiveMetaStoreTxns.testLocksWithTxn
+TestHiveMetaStoreTxns.testOpenReadOnlyTxnExcluded
+TestHiveMetaStoreTxns.testOpenTxnNotExcluded
+TestHiveMetaStoreTxns.testOpenTxnWithType
+TestHiveMetaStoreTxns.testTxns
+TestHiveMetaStoreTxns.testTxnTypePersisted
+TestHiveMetaStoreTxns.testTxNWithKeyValue
+TestHiveMetaStoreTxns.testTxNWithKeyValueNoTableId
+TestHiveMetaStoreTxns.testTxNWithKeyWrongPrefix
+TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext
+TestHivePrivilegeObjectOwnerNameAndType.testActionTypeForPartitionedTable
+TestHivePrivilegeObjectOwnerNameAndType.testOwnerNames
+TestHivePrivilegeObjectOwnerNameAndType.testOwnerType
+TestHivePrivilegeObjectOwnerNameAndType.testSingleInstanceOfHPOForPartitionedTable
+TestHiveProtoLoggingHook.testFailureEventLog
+TestHiveProtoLoggingHook.testNonPartionedTable
+TestHiveProtoLoggingHook.testPartitionedTable
+TestHiveProtoLoggingHook.testPostEventLog
+TestHiveProtoLoggingHook.testPreAndPostEventBoth
+TestHiveProtoLoggingHook.testPreEventLog
+TestHiveProtoLoggingHook.testQueueLogs
+TestHiveProtoLoggingHook.testRolloverFiles
+TestHiveStrictManagedMigration.testUpgrade
+TestHMSFetchPartitionsWithoutCols.testPartitionsWithoutCols
+TestHmsServerAuthorization.testGetFields
+TestHooks.testQueryRedactor
+TestHS2HttpServer.testApiServletActiveSessions
+TestHS2HttpServer.testApiServletHistoricalQueries
+TestHS2HttpServerPamConfiguration.testPamCorrectConfiguration
+TestHS2HttpServerPamConfiguration.testPamServicesAreNotConfigured
+TestHS2HttpServerPamConfiguration.testSslIsFalse
+TestInitiator.testFindUserToRunAs
+TestInitiator.testInitiatorFailure
+TestInitiator.testInitiatorHostAndVersion
+TestInitiator.testMetaCache
+TestListPartitions.testListPartitionSpecsByFilterInvalidFilter
+TestListPartitionsWithXIncludeParams.testListPartitionsByExr
+TestLlapZookeeperRegistryImpl.testRegister
+TestLlapZookeeperRegistryImpl.testUpdate
+TestMacroSemanticAnalyzer.testDropMacro
+TestMacroSemanticAnalyzer.testDropMacroDoesNotExist
+TestMacroSemanticAnalyzer.testDropMacroExistsDoNotIgnoreErrors
+TestMacroSemanticAnalyzer.testDropMacroNonExistent
+TestMacroSemanticAnalyzer.testDropMacroNonExistentWithIfExists
+TestMacroSemanticAnalyzer.testDropMacroNonExistentWithIfExistsDoNotIgnoreNonExistent
+TestMacroSemanticAnalyzer.testOneInputParamters
+TestMacroSemanticAnalyzer.testOneUnusedParameterName
+TestMacroSemanticAnalyzer.testThreeDuplicateParameters
+TestMacroSemanticAnalyzer.testThreeInputParamters
+TestMacroSemanticAnalyzer.testTwoDuplicateParameterNames
+TestMacroSemanticAnalyzer.testTwoInputParamters
+TestMacroSemanticAnalyzer.testTwoUnusedParameterNames
+TestMacroSemanticAnalyzer.testUnknownInputParameter
+TestMacroSemanticAnalyzer.testZeroInputParamters
+TestMetaStoreAcidCleanup.testDropDatabaseShouldRollback_whenAcidCleanupFails
+TestMetaStoreAcidCleanup.testDropTableShouldRollback_whenAcidCleanupFails
+TestMetaStoreEndFunctionListener.testEndFunctionListener
+TestMetaStoreEventListener.testListener
+TestMetaStoreEventListener.testMetaConfDuplicateNotification
+TestMetaStoreEventListener.testMetaConfNotifyListenersClosingClient
+TestMetaStoreEventListener.testMetaConfNotifyListenersNonClosingClient
+TestMetaStoreEventListener.testMetaConfSameHandler
+TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
+TestMetaStoreEventListenerWithOldConf.testMetaConfDuplicateNotification
+TestMetaStoreEventListenerWithOldConf.testMetaConfNotifyListenersClosingClient
+TestMetaStoreEventListenerWithOldConf.testMetaConfNotifyListenersNonClosingClient
+TestMetaStoreEventListenerWithOldConf.testMetaConfSameHandler
+TestMetastoreExpr.testPartitionExpr
+TestMetaStoreListenersError.testEventListenerException
+TestMetaStoreListenersError.testInitListenerException
+TestMetastoreScheduledQueries.testCreate
+TestMetastoreScheduledQueries.testCreateWithInvalidSchedule
+TestMetastoreScheduledQueries.testDeleteNonExistent
+TestMetastoreScheduledQueries.testDisable1
+TestMetastoreScheduledQueries.testDisable2
+TestMetastoreScheduledQueries.testDuplicateCreate
+TestMetastoreScheduledQueries.testExclusivePoll
+TestMetastoreScheduledQueries.testNonExistent
+TestMetastoreScheduledQueries.testNormalDelete
+TestMetastoreScheduledQueries.testNormalDeleteWithExec
+TestMetastoreScheduledQueries.testPoll
+TestMetastoreScheduledQueries.testSkip2
+TestMetastoreScheduledQueries.testUpdate
+TestMsckCreatePartitionsInBatches.testSmallNumberOfPartitions
+TestMsckDropPartitionsInBatches.testSmallNumberOfPartitions
+TestMSCKRepairOnAcid.testAddPartitionMinorCompacted
+TestObjectStore.testMaxEventResponse
+TestObjectStore.testNotificationOps
+TestOperationLogManager.testGetOperationLog
+TestOperationLogManager.testOperationLogManager
+TestOperators.testFetchOperatorContext
+TestOperators.testLlapMemoryOversubscriptionMaxExecutorsPerQueryCalculation
+TestPartitionManagement.testNoPartitionDiscoveryForReplTable
+TestPartitionManagement.testNoPartitionRetentionForReplTarget
+TestPartitionManagement.testPartitionDiscoveryDBPattern
+TestPartitionManagement.testPartitionDiscoveryDisabledByDefault
+TestPartitionManagement.testPartitionDiscoveryEnabledBothTableTypes
+TestPartitionManagement.testPartitionDiscoveryNonDefaultCatalog
+TestPartitionManagement.testPartitionDiscoverySkipInvalidPath
+TestPartitionManagement.testPartitionDiscoveryTablePattern
+TestPartitionManagement.testPartitionDiscoveryTransactionalTable
+TestPartitionManagement.testPartitionExprFilter
+TestPartitionManagement.testPartitionRetention
+TestPartitionNameWhitelistValidation.testAddPartitionWithCommas
+TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
+TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal
+TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
+TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
+TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
+TestPassProperties.testSequenceTableWriteReadMR
+TestPermsGrp.testCustomPerms
+TestPlainSaslHelper.testDoAsSetting
+TestPluggableHiveSessionImpl.testSessionImpl
+TestPluggableHiveSessionImpl.testSessionImplWithUGI
+TestPrivilegesV1.testPrivInGrant
+TestPrivilegesV1.testPrivInGrantNotAccepted
+TestPrivilegesV2.testPrivInGrant
+TestQBCompact.testBogusLevel
+TestQBCompact.testMajor
+TestQBCompact.testMinor
+TestQBCompact.testNonPartitionedTable
+TestQueryHooks.testAllQueryLifeTimeHooks
+TestQueryHooks.testAllQueryLifeTimeWithParseHooks
+TestQueryHooks.testQueryLifeTimeWithCompileError
+TestQueryHooks.testQueryLifeTimeWithParseHooksWithCompileError
+TestQueryHooks.testQueryLifeTimeWithParseHooksWithParseError
+TestQueryLifeTimeHooksWithSQLOperation.testQueryInfoInHookContext
+TestReadEntityDirect.testSelectEntityDirect
+TestReadEntityDirect.testSelectEntityInDirect
+TestReadEntityDirect.testSelectEntityInDirectJoinAlias
+TestReadEntityDirect.testSelectEntityViewDirectJoin
+TestReadEntityDirect.testSelectEntityViewDirectUnion
+TestReaderWriter.test
+TestRemoteHiveMetastoreWithHttpJwt.testExpiredJWT
+TestRemoteHiveMetastoreWithHttpJwt.testInvalidJWT
+TestRemoteHiveMetastoreWithHttpJwt.testValidJWT
+TestReplicationMetrics.testAddMetrics
+TestReplicationMetrics.testDeleteMetrics
+TestReplicationMetrics.testGetMetricsByScheduleId
+TestReplicationMetrics.testUpdateMetrics
+TestReplicationMetricUpdateOnFailure.testReplLoadFailure
+TestReplicationMetricUpdateOnFailure.testReplLoadNonRecoverableMissingStage
+TestReplicationMetricUpdateOnFailure.testReplLoadRecoverableMissingStage
+TestReplicationTask.testCreate
+TestRetryable.testRetrySuccessSecureCallable
+TestRetryingThriftCLIServiceClient.testRetryBehaviour
+TestRetryingThriftCLIServiceClient.testSessionLifeAfterTransportClose
+TestRetryingThriftCLIServiceClient.testTransportClose
+TestRuntimeStats.testCleanup
+TestRuntimeStats.testReading
+TestRuntimeStats.testRuntimeStatHandling
+TestSemanticAnalysis.testStoredAs
+TestSemanticAnalyzerFactory.testCreate
+TestSemanticAnalyzerFactory.testDrop
+TestSessionCleanup.testTempSessionFileCleanup
+TestSessionGlobalInitFile.testSessionGlobalInitFile
+TestSessionHiveMetastoreClientAddPartitionsTempTable.testAddPartitionsNullLocationInTableToo
+TestSessionHiveMetastoreClientAlterPartitionsTempTable.testAlterPartitionsCheckRollbackNullPartition
+TestSessionHiveMetastoreClientExchangePartitionsTempTable.testExchangePartitionsNonExistingPartLocation
+TestSessionHiveMetastoreClientListPartitionsTempTable.testListPartitionsSpecByExprNullResult
+TestSessionHooks.testProxyUser
+TestSessionHooks.testSessionHook
+TestSessionManagerMetrics.testAbandonedSessionMetrics
+TestSessionManagerMetrics.testActiveSessionTimeMetrics
+TestSessionManagerMetrics.testOpenSessionMetrics
+TestSessionManagerMetrics.testOpenSessionTimeMetrics
+TestSessionManagerMetrics.testThreadPoolMetrics
+TestShowPartitionAnalyzer.testGetShowPartitionsFilter
+TestStatsUpdaterThread.testPartitionsWithDifferentColsAll
+TestStreamingDynamicPartitioning.testWriteBeforeBegin
+TestSymlinkTextInputFormat.testCombine
+TestTempAcidTable.testTempFullAcidTableTranslate
+TestTempAcidTable.testTempInsertOnlyTableTranslate
+TestTezTask.testBuildDag
+TestTezTask.testEmptyWork
+TestTxnCommands.testMergeUpdateDelete
+TestTxnCommands3.testSdpoBucketed
+TestTxnCommandsForMmTable.testInsertOverwriteForMmTable
+TestTxnConcatenate.testConcatenate
+TestTxnExIm.testMMFlatSource
+TestTxnNoBuckets.testInsertFromUnion
+TestUpgradeTool.testPostUpgrade
+TestUseDatabase.testAlterTablePass
+TestViewEntity.testSubQueryInSubView
+TestViewEntity.testUnionAllInSubView
+TestViewEntity.testUnionView
+TestViewEntity.testViewInSubQuery
+TestViewEntity.testViewInSubQueryWithWhereClauseCbo
+TestViewEntity.testViewInSubQueryWithWhereClauseRbo
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/pom-hive-standalone-metastore.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/pom-hive-standalone-metastore.xml
new file mode 100644
index 00000000..d6c23715
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/pom-hive-standalone-metastore.xml
@@ -0,0 +1,719 @@
+
+
+
+ 4.0.0
+
+ org.apache
+ apache
+ 23
+
+ org.apache.hive
+ hive-standalone-metastore
+ 4.0.0-beta-2-SNAPSHOT
+ pom
+ Hive Standalone Metastore
+
+ metastore-common
+ metastore-server
+ metastore-tools
+
+
+ 4.0.0-beta-2-SNAPSHOT
+ 4.0.0-beta-2
+ .
+
+ UTF-8
+ UTF-8
+ 1.8
+ 1.8
+ false
+ ${settings.localRepository}
+ 3.1.0
+ ${basedir}/${standalone.metastore.path.to.root}/checkstyle
+
+ ${project.basedir}/src/test/resources
+ ${project.build.directory}/tmp
+ ${project.build.directory}/warehouse
+ ${project.build.directory}/external
+ file://
+ 1
+ true
+ org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest
+
+ 1.0b3
+ 2.17
+ 2.16.0
+ 3.0.0-M4
+
+ 4.9.3
+ 1.5.7
+ 3.12.0
+ 1.1.3
+ 2.9.0
+ 1.1.0-incubating
+ 5.2.8
+ 5.2.10
+ 3.2.0-release
+ 5.2.10
+ 10.14.2.0
+ 2.5.0
+ 6.2.1.jre8
+ 8.0.31
+ 42.5.1
+ 21.3.0.0
+ 0.1.2
+
+ 3.1.0
+ 22.0
+ 3.3.6
+ 4.0.3
+ 2.13.5
+ 3.3
+ 5.5.1
+ 4.13.2
+ 5.6.2
+ 5.6.3
+ 0.9.3
+ 0.16.0
+ 2.18.0
+ 3.3.3
+ 1.8.5
+ 3.21.7
+ 1.51.0
+ 1.9.0
+ 2.14.6
+ 4.0.4
+ 4.0.0-beta-2-SNAPSHOT
+ 1.9.4
+ 1.3
+ 5.2.0
+ 3.7.2
+ 9.1.6
+ 4.0.3
+ 2.8.4
+ 1.7.30
+ 4.4.13
+ 4.5.13
+ 4.5.5
+ 9.31
+ 9.4.40.v20210413
+ 1.3.2
+ 5.2.24.RELEASE
+
+ you-must-set-this-to-run-thrift
+ ${basedir}/src/gen/thrift
+ -I ${thrift.home} -strict --gen java:beans,generated_annotations=undated --gen cpp --gen php --gen py --gen rb
+
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+
+
+
+
+ org.apache.orc
+ orc-core
+ ${orc.version}
+
+
+ com.fasterxml.jackson
+ jackson-bom
+ ${jackson.version}
+ pom
+ import
+
+
+ com.github.joshelser
+ dropwizard-metrics-hadoop-metrics2-reporter
+ ${dropwizard-metrics-hadoop-metrics2-reporter.version}
+
+
+ com.google.guava
+ guava
+ ${guava.version}
+
+
+ com.google.protobuf
+ protobuf-java
+ ${protobuf.version}
+
+
+ com.zaxxer
+ HikariCP
+ ${hikaricp.version}
+
+
+ io.dropwizard.metrics
+ metrics-core
+ ${dropwizard.version}
+
+
+ io.dropwizard.metrics
+ metrics-jvm
+ ${dropwizard.version}
+
+
+ io.dropwizard.metrics
+ metrics-json
+ ${dropwizard.version}
+
+
+ javolution
+ javolution
+ ${javolution.version}
+
+
+ org.antlr
+ antlr4-runtime
+ ${antlr.version}
+
+
+ org.antlr
+ ST4
+ ${ST4.version}
+
+
+ org.apache.commons
+ commons-lang3
+ ${commons-lang3.version}
+
+
+ org.apache.datasketches
+ datasketches-hive
+ ${datasketches.version}
+
+
+ org.slf4j
+ slf4j-simple
+
+
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-test
+
+
+ org.apache.curator
+ curator-client
+
+
+ org.apache.curator
+ curator-framework
+
+
+ org.apache.curator
+ curator-recipes
+
+
+ org.eclipse.jetty
+ *
+
+
+
+
+ org.apache.hadoop
+ hadoop-distcp
+ ${hadoop.version}
+ provided
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop.version}
+
+
+ org.eclipse.jetty
+ *
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs-client
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-core
+ ${hadoop.version}
+
+
+ org.jline
+ jline
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hive
+ hive-storage-api
+ ${storage-api.version}
+
+
+ org.apache.commons
+ commons-dbcp2
+ ${commons-dbcp2.version}
+
+
+ org.apache.logging.log4j
+ log4j-slf4j-impl
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-1.2-api
+ ${log4j2.version}
+
+
+ org.apache.thrift
+ libfb303
+ ${libfb303.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${libthrift.version}
+
+
+ org.datanucleus
+ datanucleus-api-jdo
+ ${datanucleus-api-jdo.version}
+
+
+ org.datanucleus
+ datanucleus-core
+ ${datanucleus-core.version}
+
+
+ org.datanucleus
+ datanucleus-rdbms
+ ${datanucleus-rdbms.version}
+
+
+ org.datanucleus
+ javax.jdo
+ ${datanucleus-jdo.version}
+
+
+ org.skyscreamer
+ jsonassert
+ 1.4.0
+ test
+
+
+ sqlline
+ sqlline
+ ${sqlline.version}
+
+
+ jline
+ jline
+ ${jline.version}
+
+
+ commons-logging
+ commons-logging
+ ${commons-logging.version}
+
+
+ com.cronutils
+ cron-utils
+ ${cron-utils.version}
+
+
+ com.github.ben-manes.caffeine
+ caffeine
+ ${caffeine.version}
+
+
+ org.slf4j
+ slf4j-api
+ ${slf4j.version}
+
+
+ org.springframework
+ spring-jdbc
+ ${spring.version}
+
+
+ org.springframework
+ spring-core
+ ${spring.version}
+
+
+
+ com.microsoft.sqlserver
+ mssql-jdbc
+ ${mssql.version}
+ runtime
+
+
+ com.oracle.database.jdbc
+ ojdbc8
+ ${oracle.version}
+ runtime
+
+
+ com.mysql
+ mysql-connector-j
+ ${mysql.version}
+ runtime
+
+
+ org.apache.derby
+ derby
+ ${derby.version}
+ runtime
+
+
+ org.mariadb.jdbc
+ mariadb-java-client
+ ${mariadb.version}
+ runtime
+
+
+ org.postgresql
+ postgresql
+ ${postgres.version}
+ runtime
+
+
+ org.apache.httpcomponents
+ httpcore
+ ${httpcomponents.core.version}
+
+
+ org.eclipse.jetty
+ jetty-util
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-server
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-servlet
+ ${jetty.version}
+
+
+
+ junit
+ junit
+ ${junit.version}
+ test
+
+
+ org.junit.jupiter
+ junit-jupiter-engine
+ ${junit.jupiter.version}
+ test
+
+
+ org.junit.vintage
+ junit-vintage-engine
+ ${junit.vintage.version}
+ test
+
+
+ org.apache.directory.server
+ apacheds-server-integ
+ ${apache-directory-server.version}
+ test
+
+
+ dom4j
+ dom4j
+
+
+
+
+ org.apache.directory.server
+ apacheds-test-framework
+ ${apache-directory-server.version}
+ test
+
+
+ org.mockito
+ mockito-core
+ ${mockito-core.version}
+ test
+
+
+
+ org.hamcrest
+ hamcrest-all
+ ${hamcrest.version}
+ test
+
+
+ org.apache.curator
+ curator-test
+ ${curator.version}
+ test
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+
+
+
+
+ org.slf4j
+ slf4j-simple
+ ${slf4j.version}
+ test
+
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+ com.fasterxml.jackson.core
+ jackson-databind
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+
+
+ org.codehaus.mojo
+ versions-maven-plugin
+ ${maven.versions.plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+ ${maven.surefire.plugin.version}
+
+ false
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.plugin.version}
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+
+
+ assemble
+ package
+
+ single
+
+
+ apache-${project.artifactId}-${project.version}
+
+ tar.gz
+
+
+ src/assembly/src.xml
+
+ posix
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+
+ ${checkstyle.conf.dir}/checkstyle.xml
+ config_loc=${checkstyle.conf.dir}
+ true
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ process-resources
+
+ check
+
+
+
+
+
+ *.patch
+ DEV-README
+ **/src/main/sql/**
+ **/README.md
+ **/*.iml
+ **/*.txt
+ **/*.log
+ **/*.arcconfig
+ **/package-info.java
+ **/*.properties
+ **/*.q
+ **/*.q.out
+ **/*.xml
+ **/gen/**
+ **/patchprocess/**
+ **/metastore_db/**
+ **/test/resources/**/*.ldif
+ **/test/resources/sql/**
+ **/test/resources/**/*.json
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+
+
+ javadoc
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+ none
+ false
+
+
+
+ attach-javadocs
+
+ jar
+
+
+
+
+
+
+
+
+ spotbugs
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${standalone.metastore.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${standalone.metastore.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/pom-hive.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/pom-hive.xml
new file mode 100644
index 00000000..1310a6da
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/pom-hive.xml
@@ -0,0 +1,2115 @@
+
+
+
+ 4.0.0
+
+ org.apache
+ apache
+ 23
+
+ org.apache.hive
+ hive
+ 4.0.0-beta-2-SNAPSHOT
+ pom
+ Hive
+ https://hive.apache.org
+
+ storage-api
+ accumulo-handler
+ vector-code-gen
+ beeline
+ classification
+ cli
+ common
+ contrib
+ druid-handler
+ hbase-handler
+ jdbc-handler
+ hcatalog
+ hplsql
+ jdbc
+ metastore
+ parser
+ udf
+ ql
+ serde
+ service-rpc
+ service
+ streaming
+ llap-common
+ llap-client
+ llap-ext-client
+ llap-tez
+ llap-server
+ shims
+ kudu-handler
+ testutils
+ packaging
+ standalone-metastore
+ kafka-handler
+
+
+ 4.0.0-beta-2-SNAPSHOT
+ 4.0.0-beta-2
+
+ 1.8
+ 1.8
+ false
+ ${settings.localRepository}
+ .
+ standalone
+ ${basedir}/${hive.path.to.root}/checkstyle
+
+ ${project.groupId}:${project.artifactId}
+
+
+
+ ${maven.test.classpath}
+ file://
+ ${project.build.directory}/tmp
+ ${project.build.directory}/testconf
+ file://${test.tmp.dir}
+
+ INFO
+ ${project.build.directory}/warehouse
+ ${project.build.directory}/localfs/warehouse
+ pfile://
+
+
+
+ 1.0b3
+ -Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4
+ 2.17
+ 3.4.0
+ 2.10
+ 3.1.0
+ 2.16.0
+ 3.5.0
+ 3.0.0-M4
+ 2.7.10
+ 2.3.0
+
+ 1.10.1
+ 1.10.13
+ 3.5.2
+
+ 4.9.3
+ 1.5.7
+
+ 12.0.0
+ 1.12.0
+ 1.11.3
+ 1.68
+ 1.25.0
+ 5.2.8
+ 5.2.10
+ 3.2.0-release
+ 5.2.10
+ 1.5.0
+ 1.15
+ 3.2.2
+ 4.1
+ 1.23.0
+ 1.10
+ 1.1
+ 2.12.0
+ 2.11.1
+ 3.12.0
+ 3.6.1
+ 2.9.0
+ 1.10.0
+ 10.14.2.0
+ 3.1.0
+ 0.1.2
+ 0.17.1
+ 2.2.4
+ 1.12.0
+ 22.0
+ 2.4.21
+ 2.2.220
+ 3.3.6
+ ${basedir}/${hive.path.to.root}/testutils/hadoop
+ 1.3
+ 2.5.6-hadoop3
+ 0.7.2
+
+ 3.3.7
+ 4.0.3
+
+ 4.5.13
+ 4.4.13
+ 2.5.2
+ 2.13.5
+ 2.3.4
+ 2.4.1
+ 3.1.0
+ 5.5.1
+ 1.5.4
+ 9.4.45.v20220203
+ 1.19
+ 2.14.6
+ 2.0.2
+ 2.9.9
+ 6.0.0
+ 1.8
+ 4.13.2
+ 5.6.2
+ 5.6.3
+ 2.5.0
+ 5.5.0
+ 1.11.9
+ 1.12.0
+
+ 0.9.3
+ 0.16.0
+ 2.18.0
+ 2.5.0
+ 6.2.1.jre8
+ 8.0.31
+ 42.5.1
+ 21.3.0.0
+ 2.3
+ 1.8.5
+ 3.4.4
+ 4.11.0
+ 2.0.0-M5
+ 4.1.77.Final
+ 3.10.5.Final
+
+ 4.5.5
+ 2.8
+ 1.13.1
+ 0.16.0
+ 1.5.6
+ 3.21.7
+ 1.0.1
+ 1.7.30
+ 4.0.4
+ 4.0.0-beta-2-SNAPSHOT
+ 0.10.2
+ 2.2.0
+ 1.1
+ 1.1.10.4
+ 1.4
+ 2.3
+ 2.12.2
+ 2.3.4
+ 3.7.2
+ 1.1
+ 2.4.0
+ 5.2.0
+ 3.0.0
+ 2.9.0
+ 0.10.5
+ 1.2
+ 2.0.1
+ 2.8.0
+ 3.0.11
+ 1.1.0-incubating
+ 4.0.3
+ 1.1.0.Final
+ 1.0.1
+ 1.12.499
+ 2.4.0
+ 5.2.24.RELEASE
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+
+
+
+
+ central
+ central
+ https://repo.maven.apache.org/maven2
+ default
+
+ true
+ warn
+
+
+
+ repository-release
+ https://repository.apache.org/content/repositories/releases/
+
+ true
+
+
+ true
+
+
+
+
+ shibboleth
+ https://build.shibboleth.net/nexus/content/groups/public
+
+ true
+ warn
+
+
+ false
+
+
+
+
+
+
+
+ com.amazonaws
+ aws-java-sdk-bundle
+ ${aws-java-sdk.version}
+
+
+ io.netty
+ *
+
+
+
+
+ com.amazonaws.secretsmanager
+ aws-secretsmanager-caching-java
+ ${aws-secretsmanager-caching.version}
+
+
+ com.amazonaws
+ aws-java-sdk-secretsmanager
+
+
+
+
+ com.esotericsoftware
+ kryo
+ ${kryo.version}
+
+
+ com.esotericsoftware
+ reflectasm
+ ${reflectasm.version}
+
+
+ com.google.guava
+ guava
+ ${guava.version}
+
+
+ com.google.protobuf
+ protobuf-java
+ ${protobuf.version}
+
+
+ com.google.code.tempus-fugit
+ tempus-fugit
+ ${tempus-fugit.version}
+
+
+ org.hamcrest
+ hamcrest-core
+
+
+
+
+ com.zaxxer
+ HikariCP
+ ${hikaricp.version}
+
+
+ com.thoughtworks.paranamer
+ paranamer
+ ${paranamer.version}
+
+
+ org.apache.parquet
+ parquet
+ ${parquet.version}
+
+
+ org.apache.parquet
+ parquet-column
+ ${parquet.version}
+ tests
+
+
+ org.apache.parquet
+ parquet-hadoop-bundle
+ ${parquet.version}
+
+
+ com.sun.jersey
+ jersey-core
+ ${jersey.version}
+
+
+ com.sun.jersey
+ jersey-json
+ ${jersey.version}
+
+
+ com.sun.jersey
+ jersey-server
+ ${jersey.version}
+
+
+ com.sun.jersey.contribs
+ wadl-resourcedoc-doclet
+ ${wadl-resourcedoc-doclet.version}
+
+
+ com.sun.jersey
+ jersey-servlet
+ ${jersey.version}
+
+
+ commons-cli
+ commons-cli
+ ${commons-cli.version}
+
+
+ commons-codec
+ commons-codec
+ ${commons-codec.version}
+
+
+ commons-collections
+ commons-collections
+ ${commons-collections.version}
+
+
+ org.apache.commons
+ commons-collections4
+ ${commons-collections4.version}
+
+
+ commons-io
+ commons-io
+ ${commons-io.version}
+
+
+ org.apache.commons
+ commons-dbcp2
+ ${commons-dbcp2.version}
+
+
+ org.apache.commons
+ commons-math3
+ ${commons-math3.version}
+
+
+ io.jsonwebtoken
+ jjwt-api
+ ${jjwt.version}
+
+
+ io.jsonwebtoken
+ jjwt-impl
+ ${jjwt.version}
+
+
+ io.jsonwebtoken
+ jjwt-jackson
+ ${jjwt.version}
+
+
+ io.netty
+ netty-all
+ ${netty.version}
+
+
+ jakarta.jms
+ jakarta.jms-api
+ ${jms.version}
+
+
+ javolution
+ javolution
+ ${javolution.version}
+
+
+ jline
+ jline
+ ${jline.version}
+
+
+ joda-time
+ joda-time
+ ${joda.version}
+
+
+ junit
+ junit
+ ${junit.version}
+
+
+ org.junit.jupiter
+ junit-jupiter-engine
+ ${junit.jupiter.version}
+
+
+ org.junit.jupiter
+ junit-jupiter-params
+ ${junit.jupiter.version}
+
+
+ org.junit.vintage
+ junit-vintage-engine
+ ${junit.vintage.version}
+
+
+ org.apache.commons
+ commons-text
+ ${commons-text.version}
+
+
+ org.apache.logging.log4j
+ log4j-1.2-api
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-web
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-slf4j-impl
+ ${log4j2.version}
+
+
+ org.antlr
+ antlr-runtime
+ ${antlr.version}
+
+
+ org.antlr
+ ST4
+ ${ST4.version}
+
+
+ org.apache.commons
+ commons-compress
+ ${commons-compress.version}
+
+
+ org.apache.commons
+ commons-exec
+ ${commons-exec.version}
+
+
+ org.apache.accumulo
+ accumulo-core
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-fate
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-minicluster
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-start
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-trace
+ ${accumulo.version}
+
+
+ org.apache.calcite.avatica
+ avatica
+ ${avatica.version}
+
+
+ org.apache.calcite.avatica
+ avatica-core
+ ${avatica.version}
+
+
+ org.apache.calcite.avatica
+ avatica-metrics
+ ${avatica.version}
+
+
+ org.apache.calcite.avatica
+ avatica-server
+ ${avatica.version}
+
+
+ org.apache.avro
+ avro
+ ${avro.version}
+
+
+ org.apache.avro
+ avro-mapred
+ ${avro.version}
+
+
+ org.mortbay.jetty
+ jetty-util
+
+
+ org.mortbay.jetty
+ servlet-api
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.httpcomponents
+ httpclient
+ ${httpcomponents.client.version}
+
+
+ org.apache.httpcomponents
+ httpcore
+ ${httpcomponents.core.version}
+
+
+ org.apache.velocity
+ velocity-engine-core
+ ${velocity.version}
+
+
+ stax
+ stax-api
+ ${stax.version}
+
+
+ org.apache.calcite
+ calcite-core
+ ${calcite.version}
+
+
+ org.apache.calcite
+ calcite-linq4j
+ ${calcite.version}
+
+
+ org.apache.calcite
+ calcite-druid
+ ${calcite.version}
+
+
+ org.apache.curator
+ curator-test
+ ${curator.version}
+ test
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+
+
+
+
+ org.apache.datasketches
+ datasketches-hive
+ ${datasketches.version}
+
+
+ org.slf4j
+ slf4j-simple
+
+
+
+
+ org.apache.orc
+ orc-core
+ ${orc.version}
+
+
+ org.apache.hadoop
+ hadoop-common
+
+
+ org.apache.hive
+ hive-storage-api
+
+
+
+
+ org.apache.hive
+ hive-storage-api
+ ${storage-api.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-client
+
+
+ org.apache.curator
+ curator-recipes
+
+
+
+
+ org.apache.pig
+ pig
+ ${pig.version}
+
+
+ org.apache.thrift
+ libfb303
+ ${libfb303.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${libthrift.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+ ${zookeeper.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ org.apache.httpcomponents
+ httpcore
+
+
+ org.apache.httpcomponents
+ httpclient
+
+
+ io.netty
+ netty-all
+
+
+
+
+ org.apache.curator
+ curator-client
+ ${curator.version}
+
+
+ org.apache.curator
+ curator-framework
+ ${curator.version}
+
+
+ org.apache.curator
+ curator-recipes
+ ${curator.version}
+
+
+ org.codehaus.groovy
+ groovy-all
+ ${groovy.version}
+
+
+ com.fasterxml.jackson
+ jackson-bom
+ ${jackson.version}
+ pom
+ import
+
+
+ org.codehaus.jettison
+ jettison
+ ${jettison.version}
+
+
+ stax
+ stax-api
+
+
+
+
+ org.eclipse.jetty
+ jetty-rewrite
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-server
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-servlet
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-runner
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-webapp
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-http
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-util
+ ${jetty.version}
+
+
+ javax.servlet
+ javax.servlet-api
+ ${javax-servlet.version}
+
+
+ org.datanucleus
+ datanucleus-api-jdo
+ ${datanucleus-api-jdo.version}
+
+
+ org.datanucleus
+ datanucleus-core
+ ${datanucleus-core.version}
+
+
+ org.datanucleus
+ datanucleus-rdbms
+ ${datanucleus-rdbms.version}
+
+
+ org.datanucleus
+ javax.jdo
+ ${datanucleus-jdo.version}
+
+
+ org.pac4j
+ pac4j-saml-opensamlv3
+ ${pac4j-saml.version}
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ ch.qos.logback
+ logback-classic
+
+
+ xalan
+ xalan
+
+
+ org.springframework
+ spring-core
+
+
+ dom4j
+ dom4j
+
+
+ commons-collections
+ commons-collections
+
+
+ org.slf4j
+ *
+
+
+ org.jboss.logging
+ *
+
+
+ org.hibernate
+ *
+
+
+ org.hibernate.javax.persistence
+ *
+
+
+ org.springframework
+ *
+
+
+ org.javassist
+ javassist
+
+
+
+ org.bouncycastle
+ org.bouncycastle
+
+
+ org.apache.santuario
+ xmlsec
+
+
+
+
+ org.bouncycastle
+ bcprov-jdk15on
+ ${bcprov-jdk15on.version}
+
+
+ org.apache.santuario
+ xmlsec
+ ${xmlsec.version}
+
+
+ com.fasterxml.woodstox
+ woodstox-core
+
+
+
+
+ com.tdunning
+ json
+ ${json.version}
+
+
+ org.slf4j
+ slf4j-api
+ ${slf4j.version}
+
+
+ xerces
+ xercesImpl
+ ${xerces.version}
+
+
+ org.apache.hadoop
+ hadoop-client
+ ${hadoop.version}
+
+
+ commons-logging
+ commons-logging
+
+
+
+
+ org.apache.hadoop
+ hadoop-auth
+ ${hadoop.version}
+
+
+ commons-logging
+ commons-logging
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-framework
+
+
+ org.apache.curator
+ curator-test
+
+
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ org.apache.httpcomponents
+ httpcore
+
+
+ org.apache.httpcomponents
+ httpclient
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-test
+
+
+ org.apache.curator
+ curator-client
+
+
+ org.apache.curator
+ curator-recipes
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop.version}
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-jobclient
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ com.codahale.metrics
+ metrics-core
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-common
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-core
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.jline
+ jline
+
+
+ commons-logging
+ commons-logging
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-minikdc
+ ${hadoop.version}
+
+
+ io.netty
+ netty-all
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+
+
+ org.apache.hadoop
+ hadoop-yarn-api
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-client
+ ${hadoop.version}
+
+
+ org.jline
+ jline
+
+
+
+
+ org.apache.hadoop
+ hadoop-yarn-common
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-registry
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-server-web-common
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-server-web-proxy
+ ${hadoop.version}
+
+
+ io.netty
+ netty-all
+
+
+
+
+ org.apache.hbase
+ hbase-common
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-client
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-hadoop-compat
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-hadoop2-compat
+ ${hbase.version}
+
+
+ javax.servlet
+ servlet-api
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ org.jruby
+ jruby-complete
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.sun.jersey
+ jersey-json
+
+
+ com.sun.jersey
+ jersey-server
+
+
+ com.codahale.metrics
+ metrics-core
+
+
+
+
+ org.apache.hbase
+ hbase-server
+ ${hbase.version}
+
+
+ org.glassfish.web
+ javax.servlet.jsp
+
+
+
+
+ org.apache.hbase
+ hbase-mapreduce
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-zookeeper
+ tests
+ ${hbase.version}
+
+
+ org.apache.hadoop
+ hadoop-minicluster
+ ${hadoop.version}
+
+
+ org.jamon
+ jamon-runtime
+ ${jamon-runtime.version}
+
+
+ org.xerial.snappy
+ snappy-java
+ ${snappy.version}
+
+
+ com.google.re2j
+ re2j
+ ${re2j.version}
+
+
+ com.jayway.jsonpath
+ json-path
+ ${json-path.version}
+ runtime
+
+
+ org.codehaus.janino
+ commons-compiler
+ ${janino.version}
+ runtime
+
+
+ org.codehaus.janino
+ janino
+ ${janino.version}
+ runtime
+
+
+ org.apache.tez
+ tez-runtime-internals
+ ${tez.version}
+
+
+ org.apache.tez
+ tez-runtime-library
+ ${tez.version}
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.tez
+ tez-api
+ ${tez.version}
+
+
+ org.apache.tez
+ tez-dag
+ ${tez.version}
+
+
+ org.apache.tez
+ tez-mapreduce
+ ${tez.version}
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.tez
+ tez-common
+ ${tez.version}
+
+
+ org.springframework
+ spring-jdbc
+ ${spring.version}
+
+
+ org.springframework
+ spring-core
+ ${spring.version}
+
+
+
+ com.microsoft.sqlserver
+ mssql-jdbc
+ ${mssql.version}
+ runtime
+
+
+ com.oracle.database.jdbc
+ ojdbc8
+ ${oracle.version}
+ runtime
+
+
+ com.mysql
+ mysql-connector-j
+ ${mysql.version}
+ runtime
+
+
+ org.apache.derby
+ derby
+ ${derby.version}
+ runtime
+
+
+ org.mariadb.jdbc
+ mariadb-java-client
+ ${mariadb.version}
+ runtime
+
+
+ org.postgresql
+ postgresql
+ ${postgres.version}
+ runtime
+
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+
+
+ org.slf4j
+ slf4j-api
+
+
+
+
+
+
+
+ org.antlr
+ antlr3-maven-plugin
+ ${antlr.version}
+
+
+ org.apache.avro
+ avro-maven-plugin
+ ${avro.version}
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+
+
+ ant-contrib
+ ant-contrib
+ ${ant.contrib.version}
+
+
+ ant
+ ant
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-eclipse-plugin
+ ${maven.eclipse.plugin.version}
+
+ false
+ true
+ target/eclipse/classes
+ Hive
+ ${basedir}/dev-support/eclipse-styles.xml
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.plugin.version}
+
+
+ org.codehaus.mojo
+ versions-maven-plugin
+ ${maven.versions.plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+ ${maven.surefire.plugin.version}
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ ${felix.version}
+
+
+ org.apache.maven.plugins
+ maven-shade-plugin
+ ${maven.shade.plugin.version}
+
+
+ org.codehaus.mojo
+ build-helper-maven-plugin
+ ${maven.build-helper.plugin.version}
+
+
+ org.codehaus.mojo
+ exec-maven-plugin
+ ${maven.exec.plugin.version}
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+
+
+ define-classpath
+ process-resources
+
+ run
+
+
+ true
+
+
+
+
+
+
+ setup-test-dirs
+ process-test-resources
+
+ run
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-clean-plugin
+
+
+
+ ./
+
+ datanucleus.log
+ derby.log
+
+ false
+
+
+ build
+ false
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+
+ ${checkstyle.conf.dir}/checkstyle.xml
+ config_loc=${checkstyle.conf.dir}
+ true
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+
+
+ de.skuzzle.enforcer
+ restrict-imports-enforcer-rule
+ 0.9.0
+
+
+
+
+ enforce-no-snapshots
+
+ enforce
+
+
+
+
+ Release builds are not allowed to have SNAPSHOT dependencies
+ true
+ true
+
+
+ true
+
+
+
+ enforce-banned-dependencies-licenses
+
+ enforce
+
+
+
+
+
+
+ com.google.code.findbugs:annotations
+
+ A banned license dependency was found!
+
+
+ true
+
+
+
+ enforce-banned-dependencies-logging
+
+ enforce
+
+
+
+
+
+
+ commons-logging:commons-logging
+ log4j:log4j
+ ch.qos.reload4j:reload4j
+
+ false
+ A banned logging dependency was found!
+
+
+ true
+
+
+
+ check-banned-imports
+ initialize
+
+ enforce
+
+
+
+
+ Do not use shaded imports
+
+ **.shaded.**
+ jersey.repackaged.com.google.**
+ org.codehaus.jackson.**
+ org.apache.hive.com.**
+ org.apache.hive.org.**
+
+
+ org.apache.hadoop.hbase.shaded.protobuf.**
+
+ true
+
+
+ Do not use commons-lang
+
+ org.apache.commons.lang.**
+
+ true
+
+
+ Do not use commons-logging; use slf4j
+
+ org.apache.commons.logging.**
+
+ true
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+
+
+ **/TestSerDe.java
+ **/TestHiveMetaStore.java
+ **/ql/exec/vector/util/*.java
+ **/ql/exec/vector/udf/legacy/*.java
+ **/ql/exec/vector/udf/generic/*.java
+ **/TestHiveServer2Concurrency.java
+ ${test.excludes.additional}
+
+ true
+ false
+ false
+ ${maven.test.jvm.args}
+ false
+
+ ${test.conf.dir}
+ ${basedir}/${hive.path.to.root}/conf
+
+
+ US/Pacific
+ en_US.UTF-8
+ ${test.conf.dir}:${basedir}/${hive.path.to.root}/conf
+ ${test.hive.hadoop.classpath}
+ ${env.PATH}${test.extra.path}
+
+
+ ${project.build.directory}
+
+ ${test.tmp.dir}
+
+ ${derby.version}
+ ${test.tmp.dir}/derby.log
+ ${hadoop.bin.path}
+
+ ${test.tmp.dir}
+ ${basedir}/${hive.path.to.root}/
+ ${project.version}
+
+ ${maven.repo.local}
+ local
+ ${test.log4j.scheme}${test.conf.dir}/hive-log4j2.properties
+ ${test.console.log.level}
+ true
+
+ ${test.tmp.dir}
+
+ ${test.tmp.dir}
+
+ ${basedir}/${hive.path.to.root}/data/files
+ ${basedir}/${hive.path.to.root}/data/files
+ ${test.tmp.dir}
+ ${test.tmp.dir.uri}
+ ${test.dfs.mkdir}
+ ${test.output.overwrite}
+ ${test.warehouse.scheme}${test.warehouse.dir}
+ ${test.warehouse.scheme}${test.local.warehouse.dir}
+ true
+
+
+ ${test.conf.dir}/krb5.conf
+ ${hadoop.version}
+ ${qfile}
+ ${initScript}
+ ${clustermode}
+ ${qfile_regex}
+ ${run_disabled}
+
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ process-resources
+
+ check
+
+
+
+
+
+ *.patch
+ .github/**
+ data/**
+ conf/**
+ checkstyle/**
+ docs/Gemfile
+ bin/**
+ itests/**
+ **/README.md
+ **/*.iml
+ **/*.txt
+ **/*.log
+ **/.factorypath
+ **/.classpath
+ **/.project
+ **/.settings/**
+ **/*.arcconfig
+ **/package-info.java
+ **/*.properties
+ **/*.q
+ **/*.q.out
+ **/*.q.out_*
+ **/*.xml
+ **/*.yml
+ **/*json
+ **/gen/**
+ **/target/**
+ **/scripts/**
+ **/resources/**
+ **/*.rc
+ **/*.rcfile
+ **/*.qv
+ **/*.out
+ **/RecordTestObj.java
+ **/*.m
+ **/gen-java/**
+ **/testdata/**
+ **/test/org/apache/hadoop/hive/hbase/avro/**
+ **/avro_test.avpr
+ **/xmlReport.pl
+ **/*.html
+ **/sit
+ **/test/queries/**/*.sql
+ **/patchprocess/**
+ **/metastore_db/**
+ **/test/resources/**/*.ldif
+ hcatalog/core/mapred/**/part-m*
+ hcatalog/core/mapred/**/*_SUCCESS*
+ **/PriorityBlockingDeque.java
+ LICENSE-binary
+
+
+
+
+ org.jamon
+ jamon-maven-plugin
+ ${jamon.plugin.version}
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+
+
+ thriftif
+
+
+
+ org.codehaus.mojo
+ exec-maven-plugin
+
+
+ check-thrift-version
+ generate-sources
+
+ exec
+
+
+ sh
+ ${basedir}
+
+ -c
+ ${thrift.home}/bin/thrift -version | fgrep 'Thrift version ${libthrift.version}' && exit 0;
+ echo "=================================================================================";
+ echo "========== [FATAL] Build is configured to require Thrift version ${libthrift.version} =========";
+ echo "========== Currently installed: ";
+ ${thrift.home}/bin/thrift -version;
+ echo "=================================================================================";
+ exit 1
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+
+
+ generate-thrift-sources
+ generate-sources
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ run
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+
+
+ enforce-property
+
+ enforce
+
+
+
+
+ thrift.home
+
+
+ true
+
+
+
+
+
+
+
+
+ sources
+
+
+
+ org.apache.maven.plugins
+ maven-source-plugin
+
+
+ attach-sources
+
+ jar
+
+
+
+
+
+
+
+
+ javadoc
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+ none
+ false
+
+
+
+ attach-javadocs
+
+ jar
+
+
+
+
+
+
+
+
+ spotbugs
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${hive.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${hive.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
+ windows-test
+
+
+ Windows
+
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ 2.8
+
+
+ copy-dependencies
+ package
+
+ copy-dependencies
+
+
+ ${project.build.directory}/deplibs/
+ false
+ false
+ true
+
+
+
+
+
+
+
+ ${basedir}/${hive.path.to.root}/testutils/hadoop.cmd
+
+ ;${env.HADOOP_HOME}/bin
+ ${project.build.directory}/deplibs/*
+ file:///${test.tmp.dir}
+ file:/
+
+
+
+ itests
+
+ itests
+
+
+
+ iceberg
+
+ iceberg
+
+
+
+ customhbase
+
+
+ hbase.version
+
+
+
+
+ dist
+
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ ${maven.cyclonedx.plugin.version}
+
+
+ package
+
+ makeBom
+
+
+
+
+
+
+
+
+
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.conf
new file mode 100644
index 00000000..5e335794
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data
new file mode 100644
index 00000000..bab3f821
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!DruidStorageHandlerUtils.java:774!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.conf
new file mode 100644
index 00000000..d42903b7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data
new file mode 100644
index 00000000..932680e0
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/llap-client/src/java/org/apache/hadoop/hive/registry/impl/ZkRegistryBase.java#L640!!!org.apache.hadoop.hive.registry.impl.ZkRegistryBase.ensureInstancesCache!!!org.apache.curator.framework.recipes.cache.PathChildrenCache.start!!!ZkRegistryBase.java:644!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.conf
new file mode 100644
index 00000000..4e9d1dc5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data
new file mode 100644
index 00000000..50c1ed80
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java#L83!!!org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal!!!org.apache.hadoop.hive.metastore.Deadline.startTimer!!!RetryingHMSHandler.java:89!!!org.apache.hadoop.hive.metastore.api.MetaException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.conf
new file mode 100644
index 00000000..c858eaa4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data
new file mode 100644
index 00000000..971f560b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClient.java:848!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.conf
new file mode 100644
index 00000000..5d18da41
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data
new file mode 100644
index 00000000..8141f8f6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L11654!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor.run!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor$Command.process!!!ObjectStore.java:11999!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.conf
new file mode 100644
index 00000000..0d72c0c6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data
new file mode 100644
index 00000000..35ee5865
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L101!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getMS!!!DbLockManager.java:104!!!org.apache.hadoop.hive.ql.lockmgr.LockException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.conf
new file mode 100644
index 00000000..dfdb29ae
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data
new file mode 100644
index 00000000..35ee5865
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L101!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getMS!!!DbLockManager.java:104!!!org.apache.hadoop.hive.ql.lockmgr.LockException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.conf
new file mode 100644
index 00000000..04359835
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.conf
new file mode 100644
index 00000000..07caf1e4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data
new file mode 100644
index 00000000..316ee87d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.reloginExpiringKeytabUser!!!Retryable.java:74!!!org.apache.hadoop.hive.metastore.api.MetaException
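The `.conf`/`.data` pairs above follow a simple pattern: each `.conf` file is a set of `key: value` lines pointing at a `.data` file, and each `.data` file is a `!!!`-delimited table whose columns are named by its header row (`Retry location`, `Retry caller`, `Injection site`, `Injection location`, `Exception`). A minimal sketch of a parser for both formats (this helper is illustrative, not part of the WASABI artifact):

```python
def parse_conf(text):
    """Parse a .conf file of 'key: value' lines into a dict."""
    conf = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            conf[key.strip()] = value.strip()
    return conf

def parse_data(text):
    """Parse a .data file: a '!!!'-delimited header row plus records."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header = lines[0].split("!!!")
    return [dict(zip(header, row.split("!!!"))) for row in lines[1:]]

# Example inputs shaped like the files in this test plan (values shortened).
conf = parse_conf(
    "retry_data_file: wasabi-testing/config/hive/test-plan/example.data\n"
    "injection_policy: max-count\n"
    "max_injection_count: 97"
)
records = parse_data(
    "Retry location!!!Retry caller!!!Injection site!!!"
    "Injection location!!!Exception\n"
    "url!!!Caller.m!!!Callee.n!!!File.java:1!!!java.io.IOException"
)
```

Each record then maps column names to fields, so e.g. `records[0]["Exception"]` yields the exception class to inject.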
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.conf
new file mode 100644
index 00000000..5d2f2d1d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data
new file mode 100644
index 00000000..86dedf9f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.maybeRolloverWriterForDay!!!HiveProtoLoggingHook.java:327!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.conf
new file mode 100644
index 00000000..482a2d5d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
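Every `.conf` file in this test plan selects the `max-count` injection policy with `max_injection_count: 97`, i.e. a fault is injected at a given site only up to a fixed number of times, after which execution proceeds normally. A sketch of that policy's semantics (class and method names here are illustrative, not WASABI's actual Java API):

```python
from collections import defaultdict

class MaxCountInjectionPolicy:
    """Inject a fault at each site at most `max_injection_count` times."""

    def __init__(self, max_injection_count):
        self.max_injection_count = max_injection_count
        self.counts = defaultdict(int)  # per-site injection counter

    def should_inject(self, injection_site):
        if self.counts[injection_site] >= self.max_injection_count:
            return False  # budget for this site exhausted
        self.counts[injection_site] += 1
        return True

# With a budget of 2, the first two calls inject and the third does not.
policy = MaxCountInjectionPolicy(max_injection_count=2)
decisions = [policy.should_inject("DAGClient.getDAGStatus") for _ in range(3)]
# decisions == [True, True, False]
```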
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.conf
new file mode 100644
index 00000000..ee35077d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.conf
new file mode 100644
index 00000000..376b0ebd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.conf
new file mode 100644
index 00000000..35359a09
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.conf
new file mode 100644
index 00000000..60012f49
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data
new file mode 100644
index 00000000..7e3a3a47
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java#L253!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.getFilesToRetry!!!CopyUtils.java:257!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.conf
new file mode 100644
index 00000000..918cd310
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data
new file mode 100644
index 00000000..971f560b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClient.java:848!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.conf
new file mode 100644
index 00000000..d00d154c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom-java11.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom-java11.xml
new file mode 100644
index 00000000..3928be3e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom-java11.xml
@@ -0,0 +1,535 @@
+
+
+
+ 4.0.0
+ edu.uchicago.cs.systems
+ wasabi
+ Wasabi Fault Injection Instrumentation Library
+ Wasabi Library
+ 1.0.0
+ jar
+
+
+
+ Apache License, Version 2.0
+ http://www.apache.org/licenses/LICENSE-2.0.txt
+ repo
+
+
+
+
+
+ default.conf
+ 1.9.19
+ 1.13.1
+
+ 3.0.0-beta-1
+ 3.3.5
+ 4.0.0-beta-1
+ 3.9.1
+ 0.15.0
+
+
+
+
+
+ org.opentest4j
+ opentest4j
+ 1.2.0
+ test
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ org.slf4j
+ slf4j-simple
+ 2.0.6
+
+
+ junit
+ junit
+ 4.13.2
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+
+ commons-codec
+ commons-codec
+ 1.16.0
+
+
+ org.assertj
+ assertj-core
+ 3.20.2
+ test
+
+
+
+
+
+
+
+ central
+ Central Repository
+ https://repo.maven.apache.org/maven2
+ default
+
+ false
+
+
+ true
+ never
+
+
+
+ ossrh-snapshots
+ Sonatype OSSRH snapshots
+ https://oss.sonatype.org/content/repositories/snapshots
+ default
+
+ true
+ always
+
+
+ false
+
+
+
+ repository-release
+ https://repository.apache.org/content/repositories/releases/
+
+ true
+
+
+ true
+
+
+
+
+ shibboleth
+ https://build.shibboleth.net/nexus/content/groups/public
+
+ true
+ warn
+
+
+ false
+
+
+
+
+
+
+
+ config
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+
+ default-compile
+ none
+
+
+ default-testCompile
+ none
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ test-compile
+ compile
+
+
+
+
+ true
+ true
+ 11
+ 11
+ 11
+ UTF-8
+ false
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+ javax.xml.stream
+ stax-api
+
+
+
+
+
+
+
+
+
+
+ hadoop
+
+
+ instrumentation.target
+ hadoop
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 11
+ 11
+ 11
+ compile
+ true
+ true
+
+ **/InterceptHadoop.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+
+
+
+ hbase
+
+
+ instrumentation.target
+ hbase
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 11
+ 11
+ 11
+ compile
+ true
+ true
+
+ **/InterceptHBase.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+ org.apache.hbase
+ hbase-client
+ ${hbase.version}
+
+
+
+
+
+ hive
+
+
+ instrumentation.target
+ hive
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 11
+ 11
+ 11
+ compile
+ true
+ true
+
+ **/InterceptHive.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.apache.hive
+ hive-metastore
+ ${hive.version}
+
+
+ org.apache.hive
+ hive-service
+ ${hive.version}
+
+
+ org.apache.hive
+ hive-exec
+ ${hive.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${trift.version}
+
+
+
+
+
+ cassandra
+
+
+ instrumentation.target
+ cassandra
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 11
+ 11
+ 11
+ compile
+ true
+ true
+
+ **/InterceptCassandra.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+ elasticsearch
+
+
+ instrumentation.target
+ elasticsearch
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 11
+ 11
+ 11
+ compile
+ true
+ true
+
+ **/InterceptElasticSearch.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom-java8.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom-java8.xml
new file mode 100644
index 00000000..e8a52640
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom-java8.xml
@@ -0,0 +1,530 @@
+
+
+
+ 4.0.0
+ edu.uchicago.cs.systems
+ wasabi
+ Wasabi Fault Injection Instrumentation Library
+ Wasabi Library
+ 1.0.0
+ jar
+
+
+
+ Apache License, Version 2.0
+ http://www.apache.org/licenses/LICENSE-2.0.txt
+ repo
+
+
+
+
+
+ default.conf
+ 1.9.8.M1
+ 1.13
+
+ 3.0.0-beta-1
+ 3.3.5
+ 4.0.0-beta-1
+ 3.9.1
+ 0.15.0
+
+
+
+
+
+ org.opentest4j
+ opentest4j
+ 1.2.0
+ test
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ org.slf4j
+ slf4j-simple
+ 2.0.6
+
+
+ junit
+ junit
+ 4.13.2
+
+
+ commons-codec
+ commons-codec
+ 1.16.0
+
+
+ org.assertj
+ assertj-core
+ 3.20.2
+ test
+
+
+
+
+
+
+
+ central
+ Central Repository
+ https://repo.maven.apache.org/maven2
+ default
+
+ false
+
+
+ true
+ never
+
+
+
+ ossrh-snapshots
+ Sonatype OSSRH snapshots
+ https://oss.sonatype.org/content/repositories/snapshots
+ default
+
+ true
+ always
+
+
+ false
+
+
+
+ repository-release
+ https://repository.apache.org/content/repositories/releases/
+
+ true
+
+
+ true
+
+
+
+
+ shibboleth
+ https://build.shibboleth.net/nexus/content/groups/public
+
+ true
+ warn
+
+
+ false
+
+
+
+
+
+
+
+ config
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+
+ default-compile
+ none
+
+
+ default-testCompile
+ none
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ test-compile
+ compile
+
+
+
+
+ true
+ true
+ 1.8
+ 1.8
+ 1.8
+ UTF-8
+ false
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+ javax.xml.stream
+ stax-api
+
+
+
+
+
+
+
+
+
+
+ hadoop
+
+
+ instrumentation.target
+ hadoop
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptHadoop.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+
+
+
+ hbase
+
+
+ instrumentation.target
+ hbase
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptHBase.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.apache.hbase
+ hbase-client
+ ${hbase.version}
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+
+
+
+
+ hive
+
+
+ instrumentation.target
+ hive
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptHive.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.apache.hive
+ hive-metastore
+ ${hive.version}
+
+
+ org.apache.hive
+ hive-service
+ ${hive.version}
+
+
+ org.apache.hive
+ hive-exec
+ ${hive.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${trift.version}
+
+
+
+
+
+ cassandra
+
+
+ instrumentation.target
+ cassandra
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptCassandra.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+ elasticsearch
+
+
+ instrumentation.target
+ elasticsearch
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptElasticSearch.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom.xml
new file mode 100644
index 00000000..e8a52640
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/pom.xml
@@ -0,0 +1,530 @@
+
+
+
+ 4.0.0
+ edu.uchicago.cs.systems
+ wasabi
+ Wasabi Fault Injection Instrumentation Library
+ Wasabi Library
+ 1.0.0
+ jar
+
+
+
+ Apache License, Version 2.0
+ http://www.apache.org/licenses/LICENSE-2.0.txt
+ repo
+
+
+
+
+
+ default.conf
+ 1.9.8.M1
+ 1.13
+
+ 3.0.0-beta-1
+ 3.3.5
+ 4.0.0-beta-1
+ 3.9.1
+ 0.15.0
+
+
+
+
+
+ org.opentest4j
+ opentest4j
+ 1.2.0
+ test
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ org.slf4j
+ slf4j-simple
+ 2.0.6
+
+
+ junit
+ junit
+ 4.13.2
+
+
+ commons-codec
+ commons-codec
+ 1.16.0
+
+
+ org.assertj
+ assertj-core
+ 3.20.2
+ test
+
+
+
+
+
+
+
+ central
+ Central Repository
+ https://repo.maven.apache.org/maven2
+ default
+
+ false
+
+
+ true
+ never
+
+
+
+ ossrh-snapshots
+ Sonatype OSSRH snapshots
+ https://oss.sonatype.org/content/repositories/snapshots
+ default
+
+ true
+ always
+
+
+ false
+
+
+
+ repository-release
+ https://repository.apache.org/content/repositories/releases/
+
+ true
+
+
+ true
+
+
+
+
+ shibboleth
+ https://build.shibboleth.net/nexus/content/groups/public
+
+ true
+ warn
+
+
+ false
+
+
+
+
+
+
+
+ config
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+
+ default-compile
+ none
+
+
+ default-testCompile
+ none
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ test-compile
+ compile
+
+
+
+
+ true
+ true
+ 1.8
+ 1.8
+ 1.8
+ UTF-8
+ false
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+ javax.xml.stream
+ stax-api
+
+
+
+
+
+
+
+
+
+
+ hadoop
+
+
+ instrumentation.target
+ hadoop
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptHadoop.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+
+
+
+ hbase
+
+
+ instrumentation.target
+ hbase
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptHBase.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.apache.hbase
+ hbase-client
+ ${hbase.version}
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+ 5.8.2
+
+
+
+
+
+ hive
+
+
+ instrumentation.target
+ hive
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptHive.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+ org.apache.hive
+ hive-metastore
+ ${hive.version}
+
+
+ org.apache.hive
+ hive-service
+ ${hive.version}
+
+
+ org.apache.hive
+ hive-exec
+ ${hive.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${trift.version}
+
+
+
+
+
+ cassandra
+
+
+ instrumentation.target
+ cassandra
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptCassandra.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+ elasticsearch
+
+
+ instrumentation.target
+ elasticsearch
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+
+ default-compile
+ compile
+
+ compile
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+
+
+
+ compile
+
+
+ 1.8
+ 1.8
+ 1.8
+ compile
+ true
+ true
+
+ **/InterceptElasticSearch.aj
+
+
+
+
+
+
+
+
+ src/main/java
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptCassandra.aj b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptCassandra.aj
new file mode 100644
index 00000000..76e2954b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptCassandra.aj
@@ -0,0 +1,779 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.EOFException;
+import java.io.FileNotFoundException;
+import java.net.BindException;
+import java.net.ConnectException;
+import java.net.SocketException;
+import java.net.SocketTimeoutException;
+import java.net.UnknownHostException;
+import java.lang.InterruptedException;
+import java.sql.SQLException;
+import java.sql.SQLTransientException;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Set;
+
+import org.apache.cassandra.exceptions.MarshalException;
+import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.exceptions.RequestFailureException;
+import org.apache.cassandra.exceptions.RequestTimeoutException;
+import org.apache.cassandra.exceptions.UnavailableException;
+import org.apache.cassandra.exceptions.SSTableAcquisitionException;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+public aspect InterceptCassandra {
+ private WasabiContext wasabiCtx = null;
+
+ private static final String UNKNOWN = "UNKNOWN";
+
+ private static final WasabiLogger LOG = new WasabiLogger();
+ private static final String configFile = (System.getProperty("configFile") != null) ? System.getProperty("configFile") : "default.conf";
+ private static final ConfigParser configParser = new ConfigParser(LOG, configFile);
+
+ private Set activeInjectionLocations = ConcurrentHashMap.newKeySet();
+ private String testMethodName = UNKNOWN;
+
+ pointcut testMethod():
+ (@annotation(org.junit.Test) ||
+ @annotation(org.junit.jupiter.api.Test)) &&
+ !within(org.apache.hadoop.*.TestDFSClientFailover.*) &&
+ !within(org.apache.hadoop.hdfs.*.TestOfflineImageViewer.*) &&
+ !within(org.apache.hadoop.example.ITUseHadoopCodec.*);
+
+
+ before() : testMethod() {
+ this.wasabiCtx = new WasabiContext(LOG, configParser);
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: Test ---%s--- started", thisJoinPoint.toString())
+ );
+
+ if (this.testMethodName != this.UNKNOWN) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+      String.format("[TEST-BEFORE]: [ALERT]: Test method ---%s--- executes concurrently with test method ---%s---", 
+ this.testMethodName, thisJoinPoint.toString())
+ );
+ }
+
+ this.testMethodName = thisJoinPoint.toString();
+ }
+
+ after() returning: testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER]: [SUCCESS]: Test ---%s--- done", thisJoinPoint.toString())
+ );
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ this.testMethodName = this.UNKNOWN;
+ this.wasabiCtx = null;
+ this.activeInjectionLocations.clear();
+ }
+
+ after() throwing (Throwable t): testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ StringBuilder exception = new StringBuilder();
+ for (Throwable e = t; e != null; e = e.getCause()) {
+ exception.append(e);
+ exception.append(" :-: ");
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER] [FAILURE] Test ---%s--- | Failure message :-: %s| Stack trace:\n%s\n:-:-:\n\n",
+ thisJoinPoint.toString(), exception.toString(), stackSnapshot.toString())
+ );
+
+ this.testMethodName = this.UNKNOWN;
+ this.activeInjectionLocations.clear();
+ }
+
+ /*
+ * Callback before calling Thread.sleep(...)
+ */
+
+ pointcut recordThreadSleep():
+ (call(* java.lang.Object.wait(..)) ||
+ call(* java.lang.Thread.sleep(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkNanos(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkUntil(..)) ||
+ call(* java.util.concurrent.ScheduledExecutorService.schedule(..)) ||
+ call(* java.util.concurrent.TimeUnit.*scheduledExecutionTime(..)) ||
+ call(* java.util.concurrent.TimeUnit.*sleep(..)) ||
+ call(* java.util.concurrent.TimeUnit.*timedWait(..)) ||
+ call(* java.util.Timer.schedule*(..)) ||
+ call(* java.util.TimerTask.wait(..)) ||
+ call(* org.apache.hadoop.hbase.*.Procedure.suspend(..))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ before() : recordThreadSleep() {
+ try {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ for (String retryCallerFunction : this.activeInjectionLocations) {
+ if (stackSnapshot.hasFrame(retryCallerFunction.split("\\(", 2)[0])) {
+ String sleepLocation = String.format("%s(%s:%d)",
+ retryCallerFunction.split("\\(", 2)[0],
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ this.wasabiCtx.addToExecTrace(sleepLocation, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[THREAD-SLEEP] Test ---%s--- | Sleep location ---%s--- | Retry location ---%s---\n",
+ this.testMethodName,
+ sleepLocation,
+ retryCallerFunction.split("\\(", 2)[0])
+ );
+ }
+ }
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("Exception occurred in recordThreadSleep(): %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+
+ /* Inject IOException */
+
+ pointcut injectIOException():
+ ((withincode(* org.apache.cassandra.db.compaction.Scrubber.scrub(..)) &&
+ call(* org.apache.cassandra.db.compaction.Scrubber.*ScrubInfo.getCompactionInfo(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.db.compaction.Scrubber.scrub(..)) &&
+ call(* org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.hadoop.cql3.CqlRecordWriter.*RangeClient.run(..)) &&
+ call(* org.apache.cassandra.hadoop.cql3.CqlRecordWriter.*RangeClient.preparedStatement(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.hadoop.cql3.CqlRecordWriter.*RangeClient.run(..)) &&
+ call(* java.util.concurrent.BlockingQueue.*.take(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.hadoop.cql3.CqlRecordWriter.*RangeClient.run(..)) &&
+ call(* org.apache.cassandra.hadoop.cql3.CqlRecordWriter.*RangeClient.preparedStatement(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange(..)) &&
+ call(* org.apache.cassandra.service.StorageService.tryRepairPaxosForTopologyChange(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.utils.binlog.ExternalArchiver.ExternalArchiver(..)) &&
+ call(* org.apache.cassandra.utils.binlog.ExternalArchiver.archiveFile(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IOException : injectIOException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IOException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IOException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject MarshalException */
+
+ pointcut injectMarshalException():
+ ((withincode(* org.apache.cassandra.db.compaction.Scrubber.scrub(..)) &&
+ call(* org.apache.cassandra.db.marshal.AbstractType.*.validate(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws MarshalException : injectMarshalException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "MarshalException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new MarshalException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject AssertionError */
+
+ pointcut injectAssertionError():
+ ((withincode(* org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange(..)) && call(* org.apache.cassandra.service.StorageService.tryRepairPaxosForTopologyChange(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws AssertionError : injectAssertionError() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "AssertionError";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new AssertionError(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject InvalidRequestException */
+
+ pointcut injectInvalidRequestException():
+ ((withincode(* org.apache.cassandra.service.paxos.Paxos.cas(..)) && call(* org.apache.cassandra.service.CASRequest.makeUpdates(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.service.paxos.Paxos.cas(..)) && call(* org.apache.cassandra.service.CASRequest.appliesTo(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.service.paxos.Paxos.cas(..)) && call(* org.apache.cassandra.triggers.TriggerExecutor.execute(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws InvalidRequestException : injectInvalidRequestException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "InvalidRequestException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new InvalidRequestException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject RequestFailureException */
+
+ pointcut injectRequestFailureException():
+ ((withincode(* org.apache.cassandra.service.paxos.Paxos.cas(..)) && call(* org.apache.cassandra.service.paxos.Paxos.*MaybeFailure.markAndThrowAsTimeoutOrFailure(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.service.paxos.Paxos.begin(..)) && call(* org.apache.cassandra.service.paxos.Paxos.*MaybeFailure.markAndThrowAsTimeoutOrFailure(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws RequestFailureException : injectRequestFailureException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "RequestFailureException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new RequestFailureException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject RequestTimeoutException */
+
+ pointcut injectRequestTimeoutException():
+ ((withincode(* org.apache.cassandra.service.paxos.Paxos.cas(..)) && call(* org.apache.cassandra.service.paxos.Paxos.*MaybeFailure.markAndThrowAsTimeoutOrFailure(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.service.paxos.Paxos.begin(..)) && call(* org.apache.cassandra.service.paxos.Paxos.*MaybeFailure.markAndThrowAsTimeoutOrFailure(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws RequestTimeoutException : injectRequestTimeoutException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "RequestTimeoutException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new RequestTimeoutException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject UnavailableException */
+
+ pointcut injectUnavailableException():
+ ((withincode(* org.apache.cassandra.service.paxos.Paxos.begin(..)) && call(* org.apache.cassandra.service.paxos.Paxos.*Participants.assureSufficientLiveNodes(..) throws *Exception*)) ||
+ (withincode(* org.apache.cassandra.service.paxos.Paxos.begin(..)) && call(* org.apache.cassandra.service.paxos.PaxosPrepare.prepare(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws UnavailableException : injectUnavailableException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "UnavailableException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new UnavailableException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject IllegalArgumentException */
+
+ pointcut injectIllegalArgumentException():
+ ((withincode(* org.apache.cassandra.service.paxos.Paxos.begin(..)) &&
+ call(* org.apache.cassandra.service.reads.ResponseResolver.preprocess(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IllegalArgumentException : injectIllegalArgumentException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IllegalArgumentException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IllegalArgumentException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject IllegalStateException */
+
+ pointcut injectIllegalStateException():
+ ((withincode(* org.apache.cassandra.service.paxos.Paxos.begin(..)) &&
+ call(* org.apache.cassandra.service.reads.ResponseResolver.preprocess(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IllegalStateException : injectIllegalStateException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IllegalStateException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IllegalStateException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject SSTableAcquisitionException */
+
+ pointcut injectSSTableAcquisitionException():
+ ((withincode(* PendingAntiCompaction.AcquisitionCallable.call(..)) &&
+ call(* PendingAntiCompaction.AcquisitionCallable.acquireSSTables(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws SSTableAcquisitionException : injectSSTableAcquisitionException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "SSTableAcquisitionException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new SSTableAcquisitionException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptElasticSearch.aj b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptElasticSearch.aj
new file mode 100644
index 00000000..fb0d7b4f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptElasticSearch.aj
@@ -0,0 +1,372 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.SocketException;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Set;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+public aspect InterceptElasticSearch {
+ private WasabiContext wasabiCtx = null;
+
+ private static final String UNKNOWN = "UNKNOWN";
+
+ private static final WasabiLogger LOG = new WasabiLogger();
+ private static final String configFile = (System.getProperty("configFile") != null) ? System.getProperty("configFile") : "default.conf";
+ private static final ConfigParser configParser = new ConfigParser(LOG, configFile);
+
+ private Set<String> activeInjectionLocations = ConcurrentHashMap.newKeySet();
+ private String testMethodName = UNKNOWN;
+
+ pointcut testMethod():
+ (@annotation(org.junit.Test) ||
+ @annotation(org.junit.jupiter.api.Test)) &&
+ !within(org.apache.hadoop.*.TestDFSClientFailover.*) &&
+ !within(org.apache.hadoop.hdfs.*.TestOfflineImageViewer.*) &&
+ !within(org.apache.hadoop.example.ITUseHadoopCodec.*);
+
+
+ before() : testMethod() {
+ this.wasabiCtx = new WasabiContext(LOG, configParser);
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: Test ---%s--- started", thisJoinPoint.toString())
+ );
+
+ if (this.testMethodName != this.UNKNOWN) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: [ALERT]: Test method ---%s--- executes concurrently with test method ---%s---",
+ this.testMethodName, thisJoinPoint.toString())
+ );
+ }
+
+ this.testMethodName = thisJoinPoint.toString();
+ }
+
+ after() returning: testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER]: [SUCCESS]: Test ---%s--- done", thisJoinPoint.toString())
+ );
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ this.testMethodName = this.UNKNOWN;
+ this.wasabiCtx = null;
+ this.activeInjectionLocations.clear();
+ }
+
+ after() throwing (Throwable t): testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ StringBuilder exception = new StringBuilder();
+ for (Throwable e = t; e != null; e = e.getCause()) {
+ exception.append(e);
+ exception.append(" :-: ");
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER] [FAILURE] Test ---%s--- | Failure message :-: %s| Stack trace:\n%s\n:-:-:\n\n",
+ thisJoinPoint.toString(), exception.toString(), stackSnapshot.toString())
+ );
+
+ this.testMethodName = this.UNKNOWN;
+ this.activeInjectionLocations.clear();
+ }
+
+ /*
+ * Callback before calling Thread.sleep(...)
+ */
+
+ pointcut recordThreadSleep():
+ (call(* java.lang.Object.wait(..)) ||
+ call(* java.lang.Thread.sleep(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkNanos(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkUntil(..)) ||
+ call(* java.util.concurrent.ScheduledExecutorService.schedule(..)) ||
+ call(* java.util.concurrent.TimeUnit.*scheduledExecutionTime(..)) ||
+ call(* java.util.concurrent.TimeUnit.*sleep(..)) ||
+ call(* java.util.concurrent.TimeUnit.*timedWait(..)) ||
+ call(* java.util.Timer.schedule*(..)) ||
+ call(* java.util.TimerTask.wait(..)) ||
+ call(* org.apache.hadoop.hbase.*.Procedure.suspend(..))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ before() : recordThreadSleep() {
+ try {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ for (String retryCallerFunction : this.activeInjectionLocations) {
+ if (stackSnapshot.hasFrame(retryCallerFunction.split("\\(", 2)[0])) {
+ String sleepLocation = String.format("%s(%s:%d)",
+ retryCallerFunction.split("\\(", 2)[0],
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ this.wasabiCtx.addToExecTrace(sleepLocation, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[THREAD-SLEEP] Test ---%s--- | Sleep location ---%s--- | Retry location ---%s---\n",
+ this.testMethodName,
+ sleepLocation,
+ retryCallerFunction.split("\\(", 2)[0])
+ );
+ }
+ }
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("Exception occurred in recordThreadSleep(): %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+
+ /* Inject IOException */
+
+ pointcut injectIOException():
+ ((withincode(* org.elasticsearch.indices.IndicesService.processPendingDeletes(..)) && call(* org.elasticsearch.env.NodeEnvironment.deleteIndexDirectoryUnderLock(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.indices.IndicesService.processPendingDeletes(..)) && call(* org.elasticsearch.indices.IndicesService.deleteShardStore(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.toAttachment(..)) && call(* org.elasticsearch.xpack.watcher.common.http.HttpClient.execute(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.IndexService.onShardClose(..)) && call(* beforeIndexShardDeleted(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.gateway.PersistedClusterStateService.completeCommit(..)) && call(* org.elasticsearch.gateway.PersistedClusterStateService.commit(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.indices.IndicesService.processPendingDeletes(..)) && call(* org.elasticsearch.env.NodeEnvironment.deleteIndexDirectoryUnderLock(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.indices.IndicesService.processPendingDeletes(..)) && call(* org.elasticsearch.indices.IndicesService.deleteShardStore(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.common.blobstore.fs.FsBlobContainer.moveBlobAtomic(..)) && call(* java.nio.file.Files.move(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.common.file.AbstractFileWatchingService.enableDirectoryWatcher(..)) && call(* org.elasticsearch.monitor.fs.FsInfo.Path.register(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobResumable(..)) && call(* org.elasticsearch.core.internal.io.Streams.copy(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobResumable(..)) && call(* org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.update.UpdateRequest.fromXContent(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.booleanValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.longValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.intValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.text(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.currentName(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.nextToken(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.bulk.BulkRequestParser.createParser(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.booleanValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.longValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.intValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.text(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.currentName(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.nextToken(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.floatValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.longValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.intValue(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.text(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.currentName(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.skipChildren(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.XContentParser.nextToken(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.index.reindex.BulkByScrollTask.*StatusOrException.fromXContent(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.index.reindex.BulkByScrollTask.*Status.innerFromXContent(..)) && call(* org.elasticsearch.common.xcontent.ConstructingObjectParser.*.parse(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IOException : injectIOException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IOException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IOException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject SocketException */
+
+ pointcut injectSocketException():
+ ((withincode(* org.elasticsearch.cluster.coordination.ClusterBootstrapService.doBootstrap(..)) && call(* java.util.function.Consumer.accept(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.xpack.core.security.CommandLineHttpClient.checkClusterHealthWithRetriesWaitingForCluster(..)) && call(* org.elasticsearch.xpack.core.security.CommandLineHttpClient.execute(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws Exception : injectSocketException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "SocketException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new SocketException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject IllegalArgumentException */
+
+ pointcut injectIllegalArgumentException():
+ ((withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.delete.DeleteRequest.setIfPrimaryTerm(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.delete.DeleteRequest.setIfSeqNo(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.index.IndexRequest.setIfPrimaryTerm(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.index.IndexRequest.setIfSeqNo(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.update.UpdateRequest.setIfPrimaryTerm(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.update.UpdateRequest.setIfSeqNo(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.index.VersionType.fromString(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.action.bulk.BulkRequestParser.findNextMarker(..) throws *Exception*)) ||
+ (withincode(* org.elasticsearch.action.bulk.BulkRequestParser.parse(..)) && call(* org.elasticsearch.index.VersionType.fromString(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IllegalArgumentException : injectIllegalArgumentException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IllegalArgumentException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IllegalArgumentException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHBase.aj b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHBase.aj
new file mode 100644
index 00000000..5cea0771
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHBase.aj
@@ -0,0 +1,1077 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.EOFException;
+import java.io.FileNotFoundException;
+import java.net.BindException;
+import java.net.ConnectException;
+import java.net.SocketException;
+import java.net.SocketTimeoutException;
+import java.net.UnknownHostException;
+import java.lang.InterruptedException;
+import java.sql.SQLException;
+import java.sql.SQLTransientException;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Set;
+
+import org.apache.zookeeper.KeeperException;
+import org.apache.hadoop.hbase.replication.ReplicationException;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+public aspect InterceptHBase {
+ private WasabiContext wasabiCtx = null;
+
+ private static final String UNKNOWN = "UNKNOWN";
+
+ private static final WasabiLogger LOG = new WasabiLogger();
+ private static final String configFile = (System.getProperty("configFile") != null) ? System.getProperty("configFile") : "default.conf";
+ private static final ConfigParser configParser = new ConfigParser(LOG, configFile);
+
+ private Set<String> activeInjectionLocations = ConcurrentHashMap.newKeySet();
+ private String testMethodName = UNKNOWN;
+
+ pointcut testMethod():
+ (@annotation(org.junit.Test) ||
+ // @annotation(org.junit.Before) ||
+ // @annotation(org.junit.After) ||
+ // @annotation(org.junit.BeforeClass) ||
+ // @annotation(org.junit.AfterClass) ||
+ // @annotation(org.junit.jupiter.api.BeforeEach) ||
+ // @annotation(org.junit.jupiter.api.AfterEach) ||
+ // @annotation(org.junit.jupiter.api.BeforeAll) ||
+ // @annotation(org.junit.jupiter.api.AfterAll) ||
+ @annotation(org.junit.jupiter.api.Test)) &&
+ !within(org.apache.hadoop.*.TestDFSClientFailover.*) &&
+ !within(org.apache.hadoop.hdfs.*.TestOfflineImageViewer.*) &&
+ !within(org.apache.hadoop.example.ITUseHadoopCodec.*);
+
+
+ before() : testMethod() {
+ this.wasabiCtx = new WasabiContext(LOG, configParser);
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: Test ---%s--- started", thisJoinPoint.toString())
+ );
+
+ if (this.testMethodName != this.UNKNOWN) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: [ALERT]: Test method ---%s--- executes concurrently with test method ---%s---",
+ this.testMethodName, thisJoinPoint.toString())
+ );
+ }
+
+ this.testMethodName = thisJoinPoint.toString();
+ }
+
+ after() returning: testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER]: [SUCCESS]: Test ---%s--- done", thisJoinPoint.toString())
+ );
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ this.testMethodName = this.UNKNOWN;
+ this.wasabiCtx = null;
+ this.activeInjectionLocations.clear();
+ }
+
+ after() throwing (Throwable t): testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ StringBuilder exception = new StringBuilder();
+ for (Throwable e = t; e != null; e = e.getCause()) {
+ exception.append(e);
+ exception.append(" :-: ");
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER] [FAILURE] Test ---%s--- | Failure message :-: %s| Stack trace:\n%s\n:-:-:\n\n",
+ thisJoinPoint.toString(), exception.toString(), stackSnapshot.toString())
+ );
+
+ this.testMethodName = this.UNKNOWN;
+ this.activeInjectionLocations.clear();
+ }
+
+ /*
+ * Callback before calling Thread.sleep(...)
+ */
+
+ pointcut recordThreadSleep():
+ (call(* java.lang.Object.wait(..)) ||
+ call(* java.lang.Thread.sleep(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkNanos(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkUntil(..)) ||
+ call(* java.util.concurrent.ScheduledExecutorService.schedule(..)) ||
+ call(* java.util.concurrent.TimeUnit.*scheduledExecutionTime(..)) ||
+ call(* java.util.concurrent.TimeUnit.*sleep(..)) ||
+ call(* java.util.concurrent.TimeUnit.*timedWait(..)) ||
+ call(* java.util.Timer.schedule*(..)) ||
+ call(* java.util.TimerTask.wait(..)) ||
+ call(* org.apache.hadoop.hbase.*.Procedure.suspend(..))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ before() : recordThreadSleep() {
+ try {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ for (String retryCallerFunction : this.activeInjectionLocations) {
+ if (stackSnapshot.hasFrame(retryCallerFunction.split("\\(", 2)[0])) {
+ String sleepLocation = String.format("%s(%s:%d)",
+ retryCallerFunction.split("\\(", 2)[0],
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ this.wasabiCtx.addToExecTrace(sleepLocation, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[THREAD-SLEEP] Test ---%s--- | Sleep location ---%s--- | Retry location ---%s---\n",
+ this.testMethodName,
+ sleepLocation,
+ retryCallerFunction.split("\\(", 2)[0])
+ );
+ }
+ }
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("Exception occurred in recordThreadSleep(): %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+
+ /* Inject IOException */
+
+ pointcut injectIOException():
+ ((withincode(* org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archiveLogFile(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.wal.AbstractWALRoller.run(..)) &&
+ call(* org.apache.hadoop.hbase.wal.AbstractWALRoller.*RollController.rollWal(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.wal.AbstractWALRoller.run(..)) &&
+ call(* org.apache.hadoop.hbase.wal.AbstractWALRoller.*RollController.rollWal(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.wal.AbstractWALRoller.run(..)) &&
+ call(* org.apache.hadoop.hbase.wal.AbstractWALRoller.*RollController.rollWal(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.getInputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.getOutputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.doAs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.hbase.security.HBaseSaslRpcClient.getInputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.hbase.security.HBaseSaslRpcClient.getOutputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.BootstrapNodeManager.getFromMaster(..)) &&
+ call(* org.apache.hadoop.hbase.util.FutureUtils.get(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad(..)) &&
+ call(* org.apache.hadoop.hbase.util.FutureUtils.get(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad(..)) &&
+ call(* org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.groupOrSplitPhase(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad(..)) &&
+ call(* org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.bulkLoadPhase(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.ClientProtocol.complete(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.*RegionSnapshotTask.call(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.HRegion.flush(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.create(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor(..)) &&
+ call(* java.io.FilterOutputStream.write(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor(..)) &&
+ call(* org.apache.hadoop.hbase.util.FSTableDescriptors.deleteTableDescriptorFiles(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setVersion(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.create(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setVersion(..)) &&
+ call(* java.io.FilterOutputStream.write(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setVersion(..)) &&
+ call(* org.apache.hadoop.fs.FSDataOutputStream.close(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setVersion(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.rename(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.exists(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setClusterId(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.create(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setClusterId(..)) &&
+ call(* java.io.FilterOutputStream.write(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.FSUtils.setClusterId(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.rename(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.HBaseFsck.*FileLockCallable.createFileWithRetries(..)) &&
+ call(* org.apache.hadoop.hbase.util.CommonFSUtils.create(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck(..)) &&
+ call(* org.apache.hadoop.hbase.util.CommonFSUtils.delete(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck(..)) &&
+ call(* org.apache.hadoop.hbase.util.CommonFSUtils.getCurrentFileSystem(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.parallelReplicate(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.exists(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(..)) &&
+ call(* org.apache.hadoop.hbase.backup.HFileArchiver.*File.moveAndClose(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.HFileReplicator.doBulkLoad(..)) &&
+ call(* org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.loadHFileQueue(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createDir(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.HRegionFileSystem.mkdirs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.rename(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.HRegionFileSystem.deleteDir(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.delete(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createDirOnFileSystem(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.mkdirs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub(..)) &&
+ call(* org.apache.hadoop.hbase.security.UserProvider.getCurrent(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.HStore.flushCache(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.StoreFlusher.flushSnapshot(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.MoveWithAck.call(..)) &&
+ call(* org.apache.hadoop.hbase.client.Admin.move(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.MoveWithAck.call(..)) &&
+ call(* org.apache.hadoop.hbase.util.MoveWithAck.isSameServer(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.chaos.ChaosAgent.execWithRetries(..)) &&
+ call(* org.apache.hadoop.hbase.chaos.ChaosAgent.exec(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(..)) &&
+ call(* org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.removeFile(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.*ExceptionResponse.*Builder.mergeFrom(..)) &&
+ call(* org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3.*Builder.*.parseUnknownField(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.mkdirs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.exists(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders(..)) &&
+ call(* org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(..)) &&
+ call(* org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.*ReportProcedureDoneRequest.*Builder.addResult(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getStartPosition(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSource.locateRecoveredPaths(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.uncaughtException(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.refreshSources(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(..) throws *IOException*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck(..)) &&
+ call(* org.apache.hbase.thirdparty.com.google.common.io.Closeables.close(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned(..)) &&
+ call(* org.apache.hadoop.hbase.client.Admin.getClusterMetrics(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(..)) &&
+ call(* org.apache.hadoop.fs.Path.getFileSystem(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(..)) &&
+ call(* org.apache.hadoop.hbase.wal.WALFactory.createStreamReader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.wal.WALFactory.createStreamReader(..)) &&
+ call(* org.apache.hadoop.hbase.wal.AbstractFSWALProvider.*Reader.init(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.procedure.SwitchRpcThrottleProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.procedure.SwitchRpcThrottleProcedure.switchThrottleState(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned(..)) &&
+ call(* org.apache.hadoop.hbase.client.Admin.getClusterMetrics(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.run(..)) &&
+ call(* org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.sendRequest(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.handler.RegionReplicaFlushHandler.triggerFlushInPrimaryRegion(..)) &&
+ call(* org.apache.hadoop.hbase.util.FutureUtils.get(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.createReplicationEndpoint(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.cleanOldLogs(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.removeRemoteWALs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.shipEdits(..)) &&
+ call(* org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.cleanUpHFileRefs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.*ExceptionResponse.*Builder.mergeFrom(..)) &&
+ call(* org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readTag(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.*ExceptionResponse.*Builder.mergeFrom(..)) &&
+ call(* org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.*ExceptionResponse.*Builder.mergeFrom(..)) &&
+ call(* org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readInt32(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.*ExceptionResponse.*Builder.mergeFrom(..)) &&
+ call(* org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.moveRegionsBetweenGroups(..)) &&
+ call(* org.apache.hadoop.hbase.master.LoadBalancer.randomAssignment(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.MasterServices.getProcedures(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall(..)) &&
+ call(* org.apache.hadoop.hbase.regionserver.HRegion.flush(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.procedure.SplitWALProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.SplitWALManager.isSplitWALFinished(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALRemoteProcedure.truncateWALs(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.finishReplayWAL(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.namequeues.WALEventTrackerTableAccessor.doPut(..)) &&
+ call(* org.apache.hadoop.hbase.client.Connection.getTable(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.namequeues.WALEventTrackerTableAccessor.doPut(..)) &&
+ call(* org.apache.hadoop.hbase.client.Table.put(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(..)) &&
+ call(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.getLogFiles(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(..)) &&
+ call(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLogs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(..)) &&
+ call(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncSlots(..)) &&
+ call(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncSlots(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriterWithRetries(..)) &&
+ call(* org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.snapshotTable(..)) &&
+ call(* snapshot(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess(..)) &&
+ call(* org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(..) throws *IOException*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.createDirForRemoteWAL(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders(..)) &&
+ call(* listStatus(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState(..)) &&
+ call(* openRegion(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState(..)) &&
+ call(* confirmOpened(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState(..)) &&
+ call(* closeRegion(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState(..)) &&
+ call(* confirmClosed(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState(..)) &&
+ call(* prePeerModification(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState(..)) &&
+ call(* reopenRegions(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState(..)) &&
+ call(* updateLastPushedSequenceIdForSerialPeer(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState(..)) &&
+ call(* postPeerModification(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure.executeFromState(..)) &&
+ call(* renameToPeerReplayWALDir(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure.executeFromState(..)) &&
+ call(* renameToPeerSnapshotWALDir(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.mob.MobFileCleanerChore.cleanupObsoleteMobFiles(..)) &&
+ call(* initReader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.mob.MobFileCleanerChore.cleanupObsoleteMobFiles(..)) &&
+ call(* closeStoreFile(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IOException : injectIOException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IOException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IOException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject SocketException */
+
+ pointcut injectSocketException():
+ ((withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* javax.net.SocketFactory.createSocket(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* java.net.Socket.setTcpNoDelay(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* java.net.Socket.setKeepAlive(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.connect(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* java.net.Socket.setSoTimeout(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.writeConnectionHeaderPreamble(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.writeConnectionHeader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.processResponseForConnectionHeader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* java.net.Socket.bind(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws SocketException : injectSocketException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "SocketException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new SocketException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject UnknownHostException */
+
+ pointcut injectUnknownHostException():
+ ((withincode(* org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(..)) &&
+ call(* org.apache.hadoop.hbase.ipc.RpcConnection.getRemoteInetAddress(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws UnknownHostException : injectUnknownHostException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "UnknownHostException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new UnknownHostException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject FileNotFoundException */
+
+ pointcut injectFileNotFoundException():
+ ((withincode(* org.apache.hadoop.hbase.io.FileLink.read(..)) &&
+ call(* org.apache.hadoop.fs.FSDataInputStream.read(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.io.FileLink.readFully(..)) &&
+ call(* readFully(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws FileNotFoundException : injectFileNotFoundException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "FileNotFoundException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new FileNotFoundException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject InterruptedException */
+
+ pointcut injectInterruptedException():
+ ((withincode(* org.apache.hadoop.hbase.master.procedure.SnapshotVerifyProcedure.execute(..)) &&
+ call(* org.apache.hadoop.hbase.master.procedure.ServerRemoteProcedure.execute(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws InterruptedException : injectInterruptedException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "InterruptedException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new InterruptedException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject KeeperException.OperationTimeoutException */
+
+ pointcut injectKeeperExceptionOperationTimeoutException():
+ ((withincode(* org.apache.hadoop.hbase.util.HBaseFsck.setMasterInMaintenanceMode(..)) &&
+ call(* org..*.createEphemeralNodeAndWatch(..))) ||
+ (withincode(* org.apache.hadoop.hbase.MetaRegionLocationCache.updateMetaLocation(..)) &&
+ call(* org..*.watchAndCheckExists(..))) ||
+ (withincode(* org.apache.hadoop.hbase.MetaRegionLocationCache.updateMetaLocation(..)) &&
+ call(* org..*.getMetaRegionLocation(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createSequential(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getAcl(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setAcl(..)) &&
+ call(* org..*.checkZk(..)))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws KeeperException : injectKeeperExceptionOperationTimeoutException() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "KeeperException.OperationTimeoutException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ throw new KeeperException.OperationTimeoutException();
+ }
+ }
+
+ /* Inject KeeperException.SessionExpiredException */
+
+ pointcut injectKeeperExceptionSessionExpiredException():
+ ((withincode(* org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(..)) &&
+ call(* org..*.checkZk(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.ZKNodeTracker.blockUntilAvailable(..)) &&
+ call(* org..*.getDataAndWatch(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.ZKNodeTracker.blockUntilAvailable(..)) &&
+ call(* org..*.ZKUtil.checkExists(..))) ||
+ (withincode(* org.apache.hadoop.hbase.zookeeper.ZKUtil.waitForBaseZNode(..)) &&
+ call(* org..*.exists(..))) ||
+ (withincode(* org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.deleteDataForClientZkUntilSuccess(..)) &&
+ call(* org..*.deleteNode(..))) ||
+ (withincode(* org.apache.hadoop.hbase.MetaRegionLocationCache.loadMetaLocationsFromZk(..)) &&
+ call(* org..*.getMetaReplicaNodesAndWatchChildren(..))) ||
+      (withincode(* org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination.getTaskList(..)) &&
+       call(* org..*.listChildrenAndWatchForNewChildren(..))) ||
+ (withincode(* org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.reconnectAfterExpiration(..)) &&
+ call(* org..*.reconnectAfterExpiration(..))) ||
+ (withincode(* org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess(..)) &&
+ call(* org..*.createNodeIfNotExistsNoWatch(..)))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws KeeperException : injectKeeperExceptionSessionExpiredException() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "KeeperException.SessionExpiredException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ throw new KeeperException.SessionExpiredException();
+ }
+ }
+
+ /* Inject KeeperException.NoNodeException */
+
+ pointcut injectKeeperExceptionNoNodeException():
+ withincode(* org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess(..)) &&
+ call(* org..*ZKUtil.setData(..) throws *KeeperException*) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws KeeperException : injectKeeperExceptionNoNodeException() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "KeeperException.NoNodeException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ throw new KeeperException.NoNodeException();
+ }
+ }
+
+ /* Inject BindException */
+
+ pointcut injectBindException():
+ ((withincode(* org.apache.hadoop.hbase.HBaseServerBase.putUpWebUI(..)) &&
+ call(* org.apache.hadoop.hbase.http.InfoServer.start(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.replication.ZKReplicationQueueStorage.setWALPosition(..)) &&
+ call(* org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws BindException : injectBindException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "BindException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new BindException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject ReplicationException */
+
+ pointcut injectReplicationException():
+ ((withincode(* org.apache.hadoop.hbase.master.replication.ClaimReplicationQueuesProcedure.execute(..)) &&
+ call(* removeQueue(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.postTransit(..) throws *ReplicationException*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.setPeerNewSyncReplicationState(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.removeAllReplicationQueues(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.setLastPushedSequenceId(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.transitPeerSyncReplicationState(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState(..)) &&
+ call(* org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.enablePeer(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState(..)) &&
+ call(* updatePeerStorage(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState(..)) &&
+ call(* enablePeer(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws ReplicationException : injectReplicationException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "ReplicationException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new ReplicationException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHadoop.aj b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHadoop.aj
new file mode 100644
index 00000000..492940b2
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHadoop.aj
@@ -0,0 +1,988 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.EOFException;
+import java.io.FileNotFoundException;
+import java.net.BindException;
+import java.net.ConnectException;
+import java.net.SocketException;
+import java.net.SocketTimeoutException;
+import java.net.UnknownHostException;
+import java.lang.InterruptedException;
+import java.sql.SQLException;
+import java.sql.SQLTransientException;
+
+import org.apache.hadoop.ipc.RetriableException;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Set;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+public aspect InterceptHadoop {
+ private WasabiContext wasabiCtx = null;
+
+ private static final String UNKNOWN = "UNKNOWN";
+
+ private static final WasabiLogger LOG = new WasabiLogger();
+ private static final String configFile = (System.getProperty("configFile") != null) ? System.getProperty("configFile") : "default.conf";
+ private static final ConfigParser configParser = new ConfigParser(LOG, configFile);
+
+ private Set<String> activeInjectionLocations = ConcurrentHashMap.newKeySet();
+ private String testMethodName = UNKNOWN;
+
+ pointcut testMethod():
+ (@annotation(org.junit.Test) ||
+ // @annotation(org.junit.Before) ||
+ // @annotation(org.junit.After) ||
+ // @annotation(org.junit.BeforeClass) ||
+ // @annotation(org.junit.AfterClass) ||
+ // @annotation(org.junit.jupiter.api.BeforeEach) ||
+ // @annotation(org.junit.jupiter.api.AfterEach) ||
+ // @annotation(org.junit.jupiter.api.BeforeAll) ||
+ // @annotation(org.junit.jupiter.api.AfterAll) ||
+ @annotation(org.junit.jupiter.api.Test)) &&
+ !within(org.apache.hadoop.*.TestDFSClientFailover.*) &&
+ !within(org.apache.hadoop.hdfs.*.TestOfflineImageViewer.*) &&
+ !within(org.apache.hadoop.example.ITUseHadoopCodec.*);
+
+
+ before() : testMethod() {
+ this.wasabiCtx = new WasabiContext(LOG, configParser);
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: Test ---%s--- started", thisJoinPoint.toString())
+ );
+
+ if (!this.testMethodName.equals(UNKNOWN)) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: [ALERT]: Test method ---%s--- executes concurrently with test method ---%s---",
+ this.testMethodName, thisJoinPoint.toString())
+ );
+ }
+
+ this.testMethodName = thisJoinPoint.toString();
+ }
+
+ after() returning: testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER]: [SUCCESS]: Test ---%s--- done", thisJoinPoint.toString())
+ );
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ this.testMethodName = this.UNKNOWN;
+ this.wasabiCtx = null;
+ this.activeInjectionLocations.clear();
+ }
+
+ after() throwing (Throwable t): testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ StringBuilder exception = new StringBuilder();
+ for (Throwable e = t; e != null; e = e.getCause()) {
+ exception.append(e);
+ exception.append(" :-: ");
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER] [FAILURE] Test ---%s--- | Failure message :-: %s| Stack trace:\n%s\n:-:-:\n\n",
+ thisJoinPoint.toString(), exception.toString(), stackSnapshot.toString())
+ );
+
+ this.testMethodName = this.UNKNOWN;
+ this.activeInjectionLocations.clear();
+ }
+
+ /*
+ * Callback before calling Thread.sleep(...)
+ */
+
+ pointcut recordThreadSleep():
+ (call(* java.lang.Object.wait(..)) ||
+ call(* java.lang.Thread.sleep(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkNanos(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkUntil(..)) ||
+ call(* java.util.concurrent.ScheduledExecutorService.schedule(..)) ||
+ call(* java.util.concurrent.TimeUnit.*scheduledExecutionTime(..)) ||
+ call(* java.util.concurrent.TimeUnit.*sleep(..)) ||
+ call(* java.util.concurrent.TimeUnit.*timedWait(..)) ||
+ call(* java.util.Timer.schedule*(..)) ||
+ call(* java.util.TimerTask.wait(..)) ||
+ call(* org.apache.hadoop.hbase.*.Procedure.suspend(..))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ before() : recordThreadSleep() {
+ try {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ for (String retryCallerFunction : this.activeInjectionLocations) {
+ if (stackSnapshot.hasFrame(retryCallerFunction.split("\\(", 2)[0])) {
+ String sleepLocation = String.format("%s(%s:%d)",
+ retryCallerFunction.split("\\(", 2)[0],
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ this.wasabiCtx.addToExecTrace(sleepLocation, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[THREAD-SLEEP] Test ---%s--- | Sleep location ---%s--- | Retry location ---%s---\n",
+ this.testMethodName,
+ sleepLocation,
+ retryCallerFunction.split("\\(", 2)[0])
+ );
+ }
+ }
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("Exception occurred in recordThreadSleep(): %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+
+ /* Inject IOException */
+
+ pointcut injectIOException():
+ ((withincode(* org.apache.hadoop.ha.ActiveStandbyElector.reEstablishSession(..)) &&
+ call(* org.apache.hadoop.ha.ActiveStandbyElector.createConnection(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.balancer.Balancer.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.balancer.Balancer.doBalance(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded.*SPSPathIdProcessor.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.sps.Context.scanAndCollectFiles(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded.*SPSPathIdProcessor.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.sps.Context.scanAndCollectFiles(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded.*SPSPathIdProcessor.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.sps.Context.removeSPSHint(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get(..)) &&
+ call(* org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.getInternal(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.tools.CLI.getJob(..)) &&
+ call(* org.apache.hadoop.mapreduce.Cluster.getJob(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.tools.CLI.getJob(..)) &&
+ call(* org.apache.hadoop.mapreduce.Cluster.getJob(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.doAs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.ClientServiceDelegate.invoke(..)) &&
+ call(* org.apache.hadoop.mapred.ClientServiceDelegate.getProxy(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter.refreshToken(..)) &&
+ call(* org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee.getAccessToken(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.transfer(..)) &&
+ call(* org.apache.hadoop.hdfs.DataStreamer.*StreamerStreams.sendTransferBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.getOutputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.getInputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.datatransfer.Sender.writeBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.*BlockOpResponseProto.parseFrom(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.datanode.DataXceiver.create(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.tools.DebugAdmin.*RecoverLeaseCommand.run(..)) &&
+ call(* org.apache.hadoop.hdfs.DistributedFileSystem.recoverLease(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(..)) &&
+ call(* org.apache.hadoop.fs.ByteBufferReadable.read(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.openInfo(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.getLastBlockLength(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSUtilClient.createClientDatanodeProtocolProxy(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.getBlockAt(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readBuffer(..)) &&
+ call(* org.apache.hadoop.hdfs.ReaderStrategy.readFromBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readBuffer(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.seekToBlockSource(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readBuffer(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.seekToNewSource(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.readBuffer(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSOutputStream.addBlock(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.ClientProtocol.create(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSOutputStream.completeFile(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.ClientProtocol.complete(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.task.reduce.EventFetcher.run(..)) &&
+ call(* org.apache.hadoop.mapreduce.task.reduce.EventFetcher.getMapCompletionEvents(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler.*BlockMovingTask.moveBlock(..)) &&
+ call(* org.apache.hadoop.hdfs.server.balancer.KeyManager.getAccessToken(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler.*BlockMovingTask.moveBlock(..)) &&
+ call(* org.apache.hadoop.hdfs.server.common.sps.BlockDispatcher.moveBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry.runWithRetries(..)) &&
+ call(* org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(..)) &&
+ call(* org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.FileChecksumHelper.*ReplicatedFileChecksumComputer.checksumBlock(..)) &&
+ call(* org.apache.hadoop.hdfs.FileChecksumHelper.*ReplicatedFileChecksumComputer.tryDatanode(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.FileChecksumHelper.*StripedFileNonStripedChecksumComputer.checksumBlockGroup(..)) &&
+ call(* org.apache.hadoop.hdfs.FileChecksumHelper.*StripedFileNonStripedChecksumComputer.tryDatanode(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(..)) &&
+ call(* org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl.*FSAction.runWithRetries(..)) &&
+ call(* org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl.*FSAction.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(..)) &&
+ call(* org.apache.hadoop.fs.FSInputChecker.readChunk(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.FSNamesystem.*LazyPersistFileScrubber.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.FSNamesystem.*LazyPersistFileScrubber.clearCorruptLazyPersistFiles(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.JobClient.getJob(..)) &&
+ call(* org.apache.hadoop.mapred.JobClient.getJobInner(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.JobEndNotifier.localRunnerNotification(..)) &&
+ call(* org.apache.hadoop.mapred.JobEndNotifier.httpNotification(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(..)) &&
+ call(* org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(..)) &&
+ call(* org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.*ProviderCallable.*.call(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.*FSAction.runWithRetries(..)) &&
+ call(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.*FSAction.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.callCOSClientWithRetry(..)) &&
+ call(* org.apache.hadoop.fs.azure.StorageInterface.*CloudBlockBlobWrapper.commitBlockList(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.cosn.CosNFileReadTask.run(..)) &&
+ call(* java.io.InputStream.close(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.cosn.CosNFileReadTask.run(..)) &&
+ call(* org.apache.hadoop.fs.cosn.NativeFileSystemStore.retrieveBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.cosn.CosNFileReadTask.run(..)) &&
+ call(* org.apache.hadoop.io.IOUtils.readFully(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSFileSystem.getFileStatus(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSFileSystem.innerGetFileStatus(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.lazySeek(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSInputStream.reopen(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.lazySeek(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSInputStream.seekInStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.read(..)) &&
+ call(* java.io.InputStream.read(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.onReadFailure(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSInputStream.reopen(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.read(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.read(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSInputStream.randomReadWithNewInputStream(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSObjectBucketUtils.createEmptyObject(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSObjectBucketUtils.innerCreateEmptyObject(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSObjectBucketUtils.copyFile(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSObjectBucketUtils.innerCopyFile(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameWithRetry(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameFile(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get(..)) &&
+ call(* org.apache.hadoop.fs.impl.prefetch.BufferPool.acquire(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(..)) &&
+ call(* org.apache.hadoop.ha.ActiveStandbyElector.*ZKAction.*.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.security.UserGroupInformation.*AutoRenewalForUserCredsRunnable.run(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.*AutoRenewalForUserCredsRunnable.relogin(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readMessage(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readSInt32(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcRequestHeaderProto.RpcRequestHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readSInt32(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readUInt32(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.*RpcResponseHeaderProto.RpcResponseHeaderProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage.getRecoveryStage(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.openInfo(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.fetchAndCheckLocatedBlocks(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.openInfo(..)) &&
+ call(* org.apache.hadoop.hdfs.DFSInputStream.waitFor(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(..)) &&
+ call(* org.apache.hadoop.util.StopWatch.start(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.*ObserverReadInvocationHandler.invoke(..)) &&
+ call(* java.lang.reflect.Method.invoke(..) throws *IOException*)) ||
+ (withincode(* org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.*SlotReleaser.run(..)) &&
+ call(* org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl.*ProvidedBlockPoolSlice.fetchVolumeMap(..)) &&
+ call(* org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.*.getReader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.*MultipleNameNodeProxy.getActiveNodeProxy(..)) &&
+ call(* org.apache.hadoop.ipc.RPC.getProtocolVersion(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler.*ReencryptionPendingInodeIdCollector.checkPauseForTesting(..) throws *IOException*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks(..)) &&
+ call(* org.apache.hadoop.util.StopWatch.start(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.getCurrentUser(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.YarnChild.main(..)) &&
+ call(* org.apache.hadoop.mapred.TaskUmbilicalProtocol.getTask(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.ClientServiceDelegate.invoke(..)) &&
+ call(* java.lang.reflect.Method.invoke(..) throws *IOException*)) ||
+ (withincode(* org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileReaderTask.run(..)) &&
+ call(* org.apache.hadoop.io.IOUtils.readFully(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(..)) &&
+ call(* org.apache.hadoop.util.functional.CallableRaisingIOE.*.apply(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockRequestInternal(..)) &&
+ call(* org.apache.hadoop.fs.azure.StorageInterface.*CloudBlockBlobWrapper.uploadBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockRequestInternal(..)) &&
+ call(* org.apache.hadoop.fs.azure.StorageInterface.*CloudBlockBlobWrapper.uploadBlock(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockListRequestInternal(..)) &&
+ call(* org.apache.hadoop.fs.azure.StorageInterface.*CloudBlockBlobWrapper.commitBlockList(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest(..)) &&
+ call(* java.io.BufferedReader.readLine(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenCall(..)) &&
+ call(* org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenSingleCall(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.SimpleCopyListing.*TraverseDirectory.traverseDirectoryMultiThreaded(..)) &&
+ call(* org.apache.hadoop.tools.util.ProducerConsumer.*.take(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties(..)) &&
+ call(* java.util.Properties.load(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.open(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties(..)) &&
+ call(* org.apache.hadoop.fs.Path.getFileSystem(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForNameNodeJMXValue(..)) &&
+ call(* org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchNameNodeJMXValue(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerLaunchContextProto.ContainerLaunchContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerLaunchContextProto.ContainerLaunchContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readMessage(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerLaunchContextProto.ContainerLaunchContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerLaunchContextProto.ContainerLaunchContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerRetryContextProto.ContainerRetryContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerRetryContextProto.ContainerRetryContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt32(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerRetryContextProto.ContainerRetryContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerRetryContextProto.ContainerRetryContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readRawVarint32(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerRetryContextProto.ContainerRetryContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.proto.YarnProtos.*ContainerRetryContextProto.ContainerRetryContextProto(..)) &&
+ call(* org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile(..)) &&
+ call(* java.io.DataInputStream.readFully(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile(..)) &&
+ call(* org.apache.hadoop.fs.FileContext.open(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile(..)) &&
+ call(* org.apache.hadoop.fs.RemoteIterator.*.hasNext(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile(..)) &&
+ call(* org.apache.hadoop.fs.RemoteIterator.*.next(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile(..)) &&
+ call(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.deleteFileWithRetries(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.*FSAction.runWithRetries(..)) &&
+ call(* org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.*FSAction.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitReservation(..)) &&
+ call(* org.apache.hadoop.yarn.api.ApplicationClientProtocol.submitReservation(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getNewReservation(..)) &&
+ call(* org.apache.hadoop.yarn.api.ApplicationClientProtocol.getNewReservation(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState(..)) &&
+ call(* org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerTokenIdentifier(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState(..)) &&
+ call(* org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ResourceMappings.*AssignedResources.fromBytes(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.RPC.waitForProtocolProxy(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.getCurrentUser(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.SimpleCopyListing.*TraverseDirectory.traverseDirectoryMultiThreaded(..)) &&
+ call(* org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.SimpleCopyListing.*TraverseDirectory.traverseDirectoryMultiThreaded(..)) &&
+ call(* org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.SimpleCopyListing.*TraverseDirectory.traverseDirectoryMultiThreaded(..)) &&
+ call(* org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.sps.Context.getFileInfo(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded.removeItemTrackInfo(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.Task.done(..)) &&
+ call(* org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.Task.statusUpdate(..)) &&
+ call(* org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.Task.sendDone(..)) &&
+ call(* org.apache.hadoop.mapred.TaskUmbilicalProtocol.done(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.Task.commit(..)) &&
+ call(* org.apache.hadoop.mapred.TaskUmbilicalProtocol.canCommit(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapred.Task.*TaskReporter.run(..)) &&
+ call(* org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(..)) &&
+ call(* org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt(..)) &&
+ call(* org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.getApplicationReport(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt(..)) &&
+ call(* org.apache.hadoop.yarn.api.ApplicationBaseProtocol.getApplicationAttemptReport(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ha.HealthMonitor.tryConnect(..)) &&
+ call(* org.apache.hadoop.ha.HealthMonitor.createProxy(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(..)) &&
+ call(* mkdirs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.TrashPolicyDefault.run(..)) &&
+ call(* createCheckpoint(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.TrashPolicyDefault.run(..)) &&
+ call(* deleteCheckpoint(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DFSClient.renewLease(..)) &&
+ call(* renewLease(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.util.DiskChecker.doDiskIo(..)) &&
+ call(* diskIoCheckWithoutNativeIo(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.sps.ExternalStoragePolicySatisfier.getNameNodeConnector(..)) &&
+ call(* newNameNodeConnectors(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.protocol.CacheDirectiveIterator.makeRequest(..)) &&
+ call(* listCacheDirectives(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator.heartbeat(..)) &&
+ call(* allocate(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.run(..)) &&
+ call(* doAs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.server.AMRMClientRelayer.allocate(..)) &&
+ call(* reRegisterApplicationMaster(..) throws *Exception*)) ||
+ (withincode(* CreateOutputDirectoriesStage.maybeCreateOneDirectory(..)) &&
+ call(* mkdirs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.io.retry.RetryInvocationHandler.invokeOnce(..)) &&
+ call(* invoke(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IOException : injectIOException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IOException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IOException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject SocketException */
+
+ pointcut injectSocketException():
+ ((withincode(* org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(..)) &&
+ call(* org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* javax.net.SocketFactory.createSocket(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* java.net.Socket.setTcpNoDelay(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* java.net.Socket.setKeepAlive(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.getLocalInetAddress(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* java.net.Socket.setReuseAddress(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* java.net.Socket.bind(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* java.net.Socket.setSoTimeout(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.ipc.Client.*Connection.writeConnectionHeader(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.ipc.Client.*IpcStreams.setSaslClient(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupIOstreams(..)) &&
+ call(* org.apache.hadoop.ipc.Client.*Connection.writeConnectionContext(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.*MultipleNameNodeProxy.getActiveNodeProxy(..)) &&
+ call(* org.apache.hadoop.ipc.RPC.waitForProxy(..) throws *Exception*)) ||
+ (withincode(* java.io.IOException(..)) &&
+ call(* org.apache.hadoop.mapreduce.task.reduce.Fetcher.openConnection(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.mapreduce.task.reduce.Fetcher.connect(..)) &&
+ call(* java.net.URLConnection.connect(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.*EDEKCacheLoader.run(..)) &&
+ call(* org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.client.cli.LogsCLI.*ClientConnectionRetry.retryOn(..)) &&
+ call(* org.apache.hadoop.yarn.client.cli.LogsCLI.*ClientRetryOp.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* java.net.Socket.setTrafficClass(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.*SlotReleaser.run(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.datatransfer.Sender.releaseShortCircuitFds(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.*SlotReleaser.run(..)) &&
+ call(* org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.*ReleaseShortCircuitAccessResponseProto.parseFrom(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest(..)) &&
+ call(* org.apache.hadoop.fs.azure.WasbRemoteCallHelper.getHttpRequest(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest(..)) &&
+ call(* org.apache.http.client.HttpClient.execute(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest(..)) &&
+ call(* org.apache.http.HttpEntity.getContent(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.tools.util.RetriableCommand.execute(..)) &&
+ call(* org.apache.hadoop.tools.util.RetriableCommand.doExecute(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.client.api.impl.TimelineConnector.*TimelineClientConnectionRetry.retryOn(..)) &&
+ call(* org.apache.hadoop.yarn.client.api.impl.TimelineConnector.*TimelineClientRetryOp.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.*SlotReleaser.run(..)) &&
+ call(* org.apache.hadoop.net.unix.DomainSocket.connect(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.web.WebHdfsFileSystem.*AbstractRunner.runWithRetry(..)) &&
+ call(* org.apache.hadoop.hdfs.web.WebHdfsFileSystem.*AbstractRunner.getUrl(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.web.WebHdfsFileSystem.*AbstractRunner.runWithRetry(..)) &&
+ call(* org.apache.hadoop.hdfs.web.WebHdfsFileSystem.*AbstractRunner.connect(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.web.WebHdfsFileSystem.*AbstractRunner.runWithRetry(..)) &&
+ call(* org.apache.hadoop.hdfs.web.WebHdfsFileSystem.*AbstractRunner.getResponse(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws SocketException : injectSocketException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "SocketException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new SocketException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject ConnectException */
+
+ pointcut injectConnectException():
+ ((withincode(* org.apache.hadoop.ipc.Client.*Connection.setupConnection(..)) &&
+ call(* org.apache.hadoop.net.NetUtils.connect(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.ipc.RPC.waitForProtocolProxy(..)) &&
+ call(* org.apache.hadoop.ipc.RPC.getProtocolProxy(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws ConnectException : injectConnectException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "ConnectException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new ConnectException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject SocketTimeoutException */
+
+ pointcut injectSocketTimeoutException():
+ ((withincode(* org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(..)) &&
+ call(* org.apache.hadoop.hdfs.net.PeerServer.accept(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws SocketTimeoutException : injectSocketTimeoutException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "SocketTimeoutException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new SocketTimeoutException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject FileNotFoundException */
+
+ pointcut injectFileNotFoundException():
+ ((withincode(* org.apache.hadoop.fs.obs.OBSCommonUtils.isFolderEmpty(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSCommonUtils.innerIsFolderEmpty(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameWithRetry(..)) &&
+ call(* org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameFile(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile(..)) &&
+ call(* org.apache.hadoop.fs.FileContext.open(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws FileNotFoundException : injectFileNotFoundException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "FileNotFoundException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new FileNotFoundException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject EOFException */
+
+ pointcut injectEOFException():
+ ((withincode(* org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(..)) &&
+ call(* org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.*SlotReleaser.run(..)) &&
+ call(* org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws EOFException : injectEOFException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "EOFException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new EOFException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+ /* Inject RetriableException */
+
+ pointcut injectRetriableException():
+ ((withincode(* org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks(..)) &&
+ call(* org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.processTask(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.*SlotReleaser.run(..)) &&
+ call(* org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws RetriableException : injectRetriableException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "RetriableException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new RetriableException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHive.aj b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHive.aj
new file mode 100644
index 00000000..8c9b11d7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHive.aj
@@ -0,0 +1,384 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.EOFException;
+import java.io.FileNotFoundException;
+import java.net.BindException;
+import java.net.ConnectException;
+import java.net.SocketException;
+import java.net.SocketTimeoutException;
+import java.net.UnknownHostException;
+import java.lang.InterruptedException;
+import java.sql.SQLException;
+import java.sql.SQLTransientException;
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Set;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+public aspect InterceptHive {
+ private WasabiContext wasabiCtx = null;
+
+ private static final String UNKNOWN = "UNKNOWN";
+
+ private static final WasabiLogger LOG = new WasabiLogger();
+ private static final String configFile = (System.getProperty("configFile") != null) ? System.getProperty("configFile") : "default.conf";
+ private static final ConfigParser configParser = new ConfigParser(LOG, configFile);
+
+  private Set<String> activeInjectionLocations = ConcurrentHashMap.newKeySet();
+ private String testMethodName = UNKNOWN;
+
+ pointcut testMethod():
+ @annotation(org.junit.Test) &&
+ // @annotation(org.junit.Before) ||
+ // @annotation(org.junit.After) ||
+ // @annotation(org.junit.BeforeClass) ||
+ // @annotation(org.junit.AfterClass) ||
+ // @annotation(org.junit.jupiter.api.BeforeEach) ||
+ // @annotation(org.junit.jupiter.api.AfterEach) ||
+ // @annotation(org.junit.jupiter.api.BeforeAll) ||
+ // @annotation(org.junit.jupiter.api.AfterAll) ||
+ // @annotation(org.junit.jupiter.api.Test)) &&
+ !within(org.apache.hadoop.*.TestDFSClientFailover.*) &&
+ !within(org.apache.hadoop.hdfs.*.TestOfflineImageViewer.*) &&
+ !within(org.apache.hadoop.example.ITUseHadoopCodec.*) &&
+ !within(org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.test.*);
+
+
+ before() : testMethod() {
+ this.wasabiCtx = new WasabiContext(LOG, configParser);
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: Test ---%s--- started", thisJoinPoint.toString())
+ );
+
+ if (this.testMethodName != this.UNKNOWN) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+        String.format("[TEST-BEFORE]: [ALERT]: Test method ---%s--- executes concurrently with test method ---%s---",
+ this.testMethodName, thisJoinPoint.toString())
+ );
+ }
+
+ this.testMethodName = thisJoinPoint.toString();
+ }
+
+ after() returning: testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER]: [SUCCESS]: Test ---%s--- done", thisJoinPoint.toString())
+ );
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ this.testMethodName = this.UNKNOWN;
+ this.wasabiCtx = null;
+ this.activeInjectionLocations.clear();
+ }
+
+ after() throwing (Throwable t): testMethod() {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ StringBuilder exception = new StringBuilder();
+ for (Throwable e = t; e != null; e = e.getCause()) {
+ exception.append(e);
+ exception.append(" :-: ");
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER] [FAILURE] Test ---%s--- | Failure message :-: %s| Stack trace:\n%s\n:-:-:\n\n",
+ thisJoinPoint.toString(), exception.toString(), stackSnapshot.toString())
+ );
+
+ this.testMethodName = this.UNKNOWN;
+ this.activeInjectionLocations.clear();
+ }
+
+ /*
+ * Callback before calling Thread.sleep(...)
+ */
+
+ pointcut recordThreadSleep():
+ (call(* java.lang.Object.wait(..)) ||
+ call(* java.lang.Thread.sleep(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkNanos(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkUntil(..)) ||
+ call(* java.util.concurrent.ScheduledExecutorService.schedule(..)) ||
+ call(* java.util.concurrent.TimeUnit.*scheduledExecutionTime(..)) ||
+ call(* java.util.concurrent.TimeUnit.*sleep(..)) ||
+ call(* java.util.concurrent.TimeUnit.*timedWait(..)) ||
+ call(* java.util.Timer.schedule*(..)) ||
+ call(* java.util.TimerTask.wait(..)) ||
+ call(* org.apache.hadoop.hbase.*.Procedure.suspend(..))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ before() : recordThreadSleep() {
+ try {
+ if (this.wasabiCtx == null) { // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ for (String retryCallerFunction : this.activeInjectionLocations) {
+ if (stackSnapshot.hasFrame(retryCallerFunction.split("\\(", 2)[0])) {
+ String sleepLocation = String.format("%s(%s:%d)",
+ retryCallerFunction.split("\\(", 2)[0],
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ this.wasabiCtx.addToExecTrace(sleepLocation, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[THREAD-SLEEP] Test ---%s--- | Sleep location ---%s--- | Retry location ---%s---\n",
+ this.testMethodName,
+ sleepLocation,
+ retryCallerFunction.split("\\(", 2)[0])
+ );
+ }
+ }
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("Exception occurred in recordThreadSleep(): %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+
+ /* Inject IOException */
+
+ pointcut injectIOException():
+ ((withincode(* org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry(..)) &&
+ call(* org.apache.hadoop.hive.ql.parse.repl.CopyUtils.getFilesToRetry(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry(..)) &&
+ call(* org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyOnce(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.mkdirs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.rename(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(..)) &&
+ call(* org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook.*EventLogger.writeEvent(..)) &&
+ call(* org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook.*EventLogger.maybeRolloverWriterForDay(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook.*EventLogger.writeEvent(..)) &&
+ call(* org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.*.writeProto(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook.*EventLogger.writeEvent(..)) &&
+ call(* org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.*.hflush(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.ObjectStore.*RetryingExecutor.run(..)) &&
+ call(* org.apache.hadoop.hive.metastore.ObjectStore.*RetryingExecutor.*Command.process(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.doAs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.getLoginUser(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.repl.atlas.RetryingClientTimeBased.invokeWithRetry(..)) &&
+ call(* java.util.concurrent.Callable.*.call(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.registry.impl.ZkRegistryBase.ensureInstancesCache(..)) &&
+ call(* org.apache.curator.framework.recipes.cache.PathChildrenCache.start(..) throws *Exception*)) ||
+ (withincode(* org.apache.hive.common.util.RetryUtilities.*ExponentiallyDecayingBatchWork.run(..)) &&
+ call(* org.apache.hive.common.util.RetryUtilities.*ExponentialBackOffRetry.*.execute(..) throws *Exception*)) ||
+ (withincode(* org.apache.hive.common.util.Retry.*RetryingStatement.evaluate(..)) &&
+ call(* org.junit.runners.model.Statement.evaluate(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec(..)) &&
+ call(* org.apache.hadoop.fs.FileSystem.exists(..) throws *Exception*)) ||
+ (withincode(* org.apache.hive.hcatalog.templeton.LauncherDelegator.killTempletonJobWithRetry(..)) &&
+ call(* org.apache.hive.hcatalog.templeton.LauncherDelegator.killJob(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.kafka.RetryUtils.retry(..)) &&
+ call(* org.apache.hadoop.hive.kafka.RetryUtils.*Task.*.perform(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.llap.AsyncPbRpcProxy.*AsyncCallableRequest.call(..)) &&
+ call(* org.apache.hadoop.hive.llap.AsyncPbRpcProxy.*AsyncCallableRequest.callInternal(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution(..)) &&
+ call(* org.apache.tez.dag.api.client.DAGClient.getDAGStatus(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook.*EventLogger.writeEvent(..)) &&
+ call(* org.apache.tez.dag.history.logging.proto.DatePartitionedLogger.*.getWriter(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(..)) &&
+ call(* org.apache.hadoop.security.UserGroupInformation.doAs(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.grpc.HiveMetastore.*CompactionInfoStruct.CompactionInfoStruct(..)) &&
+ call(* com.google.protobuf.CodedInputStream.readBool(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.grpc.HiveMetastore.*CompactionInfoStruct.CompactionInfoStruct(..)) &&
+ call(* com.google.protobuf.CodedInputStream.readEnum(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.grpc.HiveMetastore.*CompactionInfoStruct.CompactionInfoStruct(..)) &&
+ call(* com.google.protobuf.CodedInputStream.readInt64(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.grpc.HiveMetastore.*CompactionInfoStruct.CompactionInfoStruct(..)) &&
+ call(* com.google.protobuf.CodedInputStream.readStringRequireUtf8(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.grpc.HiveMetastore.*CompactionInfoStruct.CompactionInfoStruct(..)) &&
+ call(* com.google.protobuf.CodedInputStream.readTag(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.grpc.HiveMetastore.*CompactionInfoStruct.CompactionInfoStruct(..)) &&
+ call(* com.google.protobuf.GeneratedMessageV3.parseUnknownField(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils.loopUntilHMSReady(..)) &&
+ call(* java.net.Socket.close(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils.loopUntilHMSReady(..)) &&
+ call(* java.net.Socket.connect(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.utils.RetryUtilities.run(..)) &&
+ call(* org.*.execute(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open(..)) &&
+ call(* org.apache.hadoop.hive.metastore.conf.MetastoreConf.getPassword(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open(..)) &&
+ call(* org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge.*Client.createClientTransport(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open(..)) &&
+ call(* org.apache.hadoop.hive.metastore.utils.SecurityUtils.getTokenStrForm(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open(..)) &&
+ call(* org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.tez.YarnQueueHelper.checkQueueAccessInternal(..)) &&
+ call(* checkQueueAccessFromSingleRm(..) throws *Exception*)) ||
+ (withincode(* *checkJobTracker(..)) &&
+ call(* *openStream(..) throws *Exception*)) ||
+ (withincode(* *close*(..)) &&
+ call(* *read(..) throws *Exception*)) ||
+ (withincode(* *checkJobTracker(..)) &&
+ call(* *read(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws IOException : injectIOException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "IOException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new IOException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+
+ /* Inject SQLException */
+
+ pointcut injectSQLException():
+ ((withincode(* org.apache.hive.jdbc.HiveConnection.HiveConnection(..)) &&
+ call(* org.apache.hive.jdbc.HiveConnection.executeInitSql(..) throws *Exception*)) ||
+ (withincode(* org.apache.hive.jdbc.HiveConnection.HiveConnection(..)) &&
+ call(* org.apache.hive.jdbc.HiveConnection.openSession(..) throws *Exception*)) ||
+ (withincode(* org.apache.hive.jdbc.HiveConnection.HiveConnection(..)) &&
+ call(* org.apache.hive.jdbc.HiveConnection.openTransport(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.Utilities.executeWithRetry(..)) &&
+ call(* org.apache.hadoop.hive.ql.exec.Utilities.*SQLCommand.*.run(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.Utilities.connectWithRetry(..)) &&
+ call(* java.sql.DriverManager.getConnection(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry(..)) &&
+ call(* java.sql.Connection.prepareStatement(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean(..)) &&
+ call(* java.sql.ResultSet.getInt(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean(..)) &&
+ call(* java.sql.ResultSet.getLong(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean(..)) &&
+ call(* java.sql.ResultSet.getString(..) throws *Exception*)) ||
+ (withincode(* org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean(..)) &&
+ call(* java.sql.ResultSet.next(..) throws *Exception*))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws SQLException : injectSQLException() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "SQLException";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new SQLException(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }
+ }
+
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/ConfigParser.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/ConfigParser.java
new file mode 100644
index 00000000..0b71df54
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/ConfigParser.java
@@ -0,0 +1,163 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.lang.System;
+import java.nio.file.Paths;
+import java.nio.file.Path;
+import java.util.Arrays;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+
+import org.junit.Assert;
+
+class ConfigParser {
+
+ private static WasabiLogger LOG;
+ private static String configFile;
+
+ private static String wasabiRootDir;
+ private static String retryDataFile;
+ private static String injectionPolicy;
+ private static int maxInjectionCount;
+
+  private static final ArrayList<String[]> rawRecords = new ArrayList<>();
+  private static final Map<String, InjectionPoint> injectionPlan = new HashMap<>();
+
+ private static final HashingPrimitives hashingPrimitives = new HashingPrimitives();
+ private static final String[] RETRY_DATA_COLUMN_NAMES = {"Retry location", "Retry caller", "Injection site", "Injection location", "Exception"};
+
+ public ConfigParser(WasabiLogger logger, String configFile) {
+ this.LOG = logger;
+ this.configFile = configFile;
+
+ this.wasabiRootDir = System.getenv("WASABI_ROOT_DIR");
+ if (this.wasabiRootDir == null) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("WASABI_ROOT_DIR environment variable is not set.")
+ );
+ throw new IllegalStateException("[wasabi] WASABI_ROOT_DIR environment variable is not set.");
+ }
+
+ parseConfigFile();
+ parseCodeQLOutput();
+ }
+
+ private void parseConfigFile() {
+ try (BufferedReader br = new BufferedReader(new FileReader(this.configFile))) {
+ String line;
+ while ((line = br.readLine()) != null) {
+ String[] parts = line.split(":");
+ Assert.assertEquals("[wasabi] Invalid line format for <" + line + ">", 2, parts.length);
+
+ String parameter = parts[0].trim();
+ String value = parts[1].replaceAll("\\s+", "").trim();
+ switch (parameter) {
+ case "retry_data_file":
+ try {
+ Path retryDataFilePath = Paths.get(this.wasabiRootDir).resolve(value).normalize();
+ this.retryDataFile = retryDataFilePath.toString();
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("[wasabi] Invalid path: %s/%s", this.wasabiRootDir, value)
+ );
+ e.printStackTrace();
+ throw new IllegalStateException("[wasabi] Invalid path: " + this.wasabiRootDir + "/" + value);
+ }
+ break;
+ case "injection_policy":
+ this.injectionPolicy = value;
+ break;
+ case "max_injection_count":
+ try {
+ this.maxInjectionCount = Integer.parseInt(value);
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("An exception occurred when parsing line <%s>: %s\n",
+ line, e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ break;
+ }
+ }
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("An exception occurred when parsing the config file: %s\n", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+ private void parseCodeQLOutput() {
+ try (BufferedReader br = new BufferedReader(new FileReader(this.retryDataFile))) {
+ boolean foundHeader = false;
+ String line;
+ while ((line = br.readLine()) != null) {
+ String[] values = line.split("!!!");
+
+ if (!foundHeader && Arrays.equals(RETRY_DATA_COLUMN_NAMES, values)) {
+ foundHeader = true;
+ continue;
+ }
+
+ if (foundHeader) {
+ rawRecords.add(values);
+ }
+ }
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("An exception occurred when parsing the retry data file: %s\n", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+
+ for (String[] record : rawRecords) {
+ String retrySourceLocation = record[0];
+ String retryCallerFunction = record[1];
+ String injectionSite = record[2];
+ String injectionLocation = record[3];
+ String retryException = record[4];
+
+ InjectionPoint entry = new InjectionPoint(
+ null,
+ retrySourceLocation,
+ retryCallerFunction,
+ injectionSite,
+ retryException,
+ -1
+ );
+
+ injectionPlan.put(injectionLocation, entry);
+ }
+ }
+
+  public ArrayList<String[]> getRawRecords() {
+ return rawRecords;
+ }
+
+  public Map<String, InjectionPoint> getInjectionPlan() {
+ return Collections.unmodifiableMap(injectionPlan);
+ }
+
+ public int getMaxInjectionCount() {
+ return this.maxInjectionCount;
+ }
+
+ public String getInjectionPolicy() {
+ return this.injectionPolicy;
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java
new file mode 100644
index 00000000..045b7716
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java
@@ -0,0 +1,243 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.Deque;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.Objects;
+
+class OpEntry {
+
+ public static final Integer RETRY_CALLER_OP = 0;
+ public static final Integer THREAD_SLEEP_OP = 1;
+
+ private String opName = "";
+ private Integer opType = this.RETRY_CALLER_OP;
+ private StackSnapshot stackSnapshot = null;
+ private Long timestamp = 0L;
+ private String exception = null;
+
+ public OpEntry(String opName,
+ Integer opType,
+ Long timestamp,
+ StackSnapshot stackSnapshot) {
+ this.opName = opName;
+ this.opType = opType;
+ this.timestamp = timestamp;
+ this.stackSnapshot = stackSnapshot;
+ this.exception = null;
+ }
+
+ public OpEntry(String opName,
+ Integer opType,
+ Long timestamp,
+ StackSnapshot stackSnapshot,
+ String exception) {
+ this.opName = opName;
+ this.opType = opType;
+ this.timestamp = timestamp;
+ this.stackSnapshot = stackSnapshot;
+ this.exception = exception;
+ }
+
+ public OpEntry(String opName,
+ Integer opType,
+ StackSnapshot stackSnapshot,
+ String exception) {
+ this.opName = opName;
+ this.opType = opType;
+ this.timestamp = 0L;
+ this.stackSnapshot = stackSnapshot;
+ this.exception = exception;
+ }
+
+ public Boolean isOfType(Integer opType) {
+ return Objects.equals(this.opType, opType);
+ }
+
+ public Boolean hasFrame(String target) {
+ return this.stackSnapshot.hasFrame(target);
+ }
+
+ public Boolean isSameOp(OpEntry target) {
+ return (
+ this.opType == target.opType &&
+ (this.exception == null || this.exception.equals(target.exception)) &&
+ this.stackSnapshot.isEqual(target.stackSnapshot)
+ );
+ }
+
+ public void printOpEntry(WasabiLogger log) {
+ log.printMessage(WasabiLogger.LOG_LEVEL_WARN,
+ String.format("\n Op type: %s\n Op name: %s\n Timestamp: %d\n Callstack (top):\n%s\n Exception: %s\n",
+ this.opType == this.RETRY_CALLER_OP ? "retry" : "sleep",
+ this.opName,
+ this.timestamp,
+ this.stackSnapshot.serializeTopFrames(5),
+ this.exception
+ )
+ );
+ }
+}
+
+class ExecutionTrace {
+
+ private final Lock mutex = new ReentrantLock();
+ private final int INFINITE_CACHE = -1;
+
+  private ArrayDeque<OpEntry> opCache;
+ private int maxOpCacheSize;
+
+ public ExecutionTrace() {
+    this.opCache = new ArrayDeque<>();
+ this.maxOpCacheSize = this.INFINITE_CACHE;
+ }
+
+ public ExecutionTrace(int maxOpCacheSize) {
+    this.opCache = new ArrayDeque<>();
+ this.maxOpCacheSize = maxOpCacheSize;
+ }
+
+ public Boolean isNullOrEmpty() {
+ mutex.lock();
+ try {
+ return this.opCache == null || this.opCache.isEmpty();
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public int getMaxOpCacheSize() {
+ mutex.lock();
+ try {
+ return this.maxOpCacheSize;
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public int getSize() {
+ mutex.lock();
+ try {
+ return this.opCache.size();
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public void addLast(OpEntry opEntry) {
+ mutex.lock();
+ try {
+ if (this.maxOpCacheSize != this.INFINITE_CACHE && this.opCache.size() >= this.maxOpCacheSize) {
+ this.opCache.removeFirst();
+ }
+ this.opCache.addLast(opEntry);
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public Boolean checkIfOpsAreEqual(int leftIndex, int rightIndex) {
+ mutex.lock();
+ try {
+ if (this.opCache.size() < Math.max(leftIndex, rightIndex)) {
+ return false;
+ }
+
+ OpEntry leftOp = null;
+ OpEntry rightOp = null;
+
+ int index = this.opCache.size() - 1;
+      Iterator<OpEntry> itr = this.opCache.descendingIterator();
+ while (itr.hasNext() && index >= Math.min(leftIndex, rightIndex)) {
+ OpEntry current = itr.next();
+
+ if (index == leftIndex) {
+ leftOp = current;
+ } else if (index == rightIndex) {
+ rightOp = current;
+ }
+
+ --index;
+ }
+
+ return leftOp != null && rightOp != null && leftOp.isSameOp(rightOp);
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public Boolean checkIfOpIsOfType(int targetIndex, int targetOpType) {
+ mutex.lock();
+ try {
+ if (this.opCache.size() < targetIndex) {
+ return false;
+ }
+
+ OpEntry targetOp = null;
+
+ int index = this.opCache.size() - 1;
+      Iterator<OpEntry> itr = this.opCache.descendingIterator();
+ while (itr.hasNext() && index >= targetIndex) {
+ OpEntry current = itr.next();
+
+ if (index == targetIndex) {
+ targetOp = current;
+ }
+
+ --index;
+ }
+
+ return targetOp != null && targetOp.isOfType(targetOpType);
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public Boolean checkIfOpHasFrame(int targetIndex, String targetFrame) {
+ mutex.lock();
+ try {
+ if (this.opCache.size() < targetIndex) {
+ return false;
+ }
+
+ OpEntry targetOp = null;
+
+ int index = this.opCache.size() - 1;
+      Iterator<OpEntry> itr = this.opCache.descendingIterator();
+ while (itr.hasNext() && index >= targetIndex) {
+ OpEntry current = itr.next();
+
+ if (index == targetIndex) {
+ targetOp = current;
+ }
+
+ --index;
+ }
+
+ return targetOp != null && targetOp.hasFrame(targetFrame);
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public void printExecutionTrace(WasabiLogger log, String msg) {
+ mutex.lock();
+ try {
+ log.printMessage(WasabiLogger.LOG_LEVEL_WARN, String.format("================================ %s", msg));
+ for (OpEntry op : this.opCache) {
+ op.printOpEntry(log);
+ }
+ log.printMessage(WasabiLogger.LOG_LEVEL_WARN, String.format("================================================================\n\n"));
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java
new file mode 100644
index 00000000..c102a964
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java
@@ -0,0 +1,46 @@
+package edu.uchicago.cs.systems.wasabi;
+
+//import java.nio.charset.StandardCharsets;
+//import org.apache.commons.codec.digest.MurmurHash3;
+import java.util.ArrayList;
+
+class HashingPrimitives {
+ public static int getHashValue(String str1, String str2, String str3) {
+ return 0;
+ /*
+ byte[] bytes1 = str1.getBytes(StandardCharsets.UTF_8);
+ byte[] bytes2 = str2.getBytes(StandardCharsets.UTF_8);
+ byte[] bytes3 = str3.getBytes(StandardCharsets.UTF_8);
+
+ byte[] bytes = new byte[bytes1.length + bytes2.length + bytes3.length];
+
+ System.arraycopy(bytes1, 0, bytes, 0, bytes1.length);
+ System.arraycopy(bytes2, 0, bytes, bytes1.length, bytes2.length);
+ System.arraycopy(bytes3, 0, bytes, bytes1.length + bytes2.length, bytes3.length);
+
+ return MurmurHash3.hash32x86(bytes, 0, bytes.length, 0);
+ */
+ }
+
+  public static int getHashValue(ArrayList<String> arr) {
+ return 0;
+ /*
+    ArrayList<byte[]> byteList = new ArrayList<>();
+ int totalLength = 0;
+ for (String e : arr) {
+ byte[] bytes = e.getBytes(StandardCharsets.UTF_8);
+ byteList.add(bytes);
+ totalLength += bytes.length;
+ }
+
+ byte[] bytes = new byte[totalLength];
+ int offset = 0;
+ for (byte[] b : byteList) {
+ System.arraycopy(b, 0, bytes, offset, b.length);
+ offset += b.length;
+ }
+
+ return MurmurHash3.hash32x86(bytes, 0, bytes.length, 0);
+ */
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPoint.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPoint.java
new file mode 100644
index 00000000..43b9138e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPoint.java
@@ -0,0 +1,35 @@
+package edu.uchicago.cs.systems.wasabi;
+
+class InjectionPoint {
+
+ public StackSnapshot stackSnapshot = null;
+ public String retrySourceLocation = null;
+ public String retryCallerFunction = null;
+ public String injectionSite = null;
+ public String retryException = null;
+ public Integer injectionCount = 0;
+
+ public InjectionPoint(StackSnapshot stackSnapshot,
+ String retrySourceLocation,
+ String retryCallerFunction,
+ String injectionSite,
+ String retryException,
+ Integer injectionCount) {
+ this.stackSnapshot = stackSnapshot;
+ this.retrySourceLocation = retrySourceLocation;
+ this.retryCallerFunction = retryCallerFunction;
+ this.injectionSite = injectionSite;
+ this.retryException = retryException;
+ this.injectionCount = injectionCount;
+ }
+
+ public Boolean isEmpty() {
+ return (
+ this.stackSnapshot.isNullOrEmpty() &&
+ this.retrySourceLocation == null &&
+ this.retryCallerFunction == null &&
+ this.injectionSite == null &&
+ this.retryException == null
+ );
+ }
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java
new file mode 100644
index 00000000..aca90d41
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java
@@ -0,0 +1,38 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.Random;
+
+abstract class InjectionPolicy {
+
+ public abstract boolean shouldInject(int injectionCount);
+}
+
+class NoInjection extends InjectionPolicy {
+ @Override
+ public boolean shouldInject(int injectionCount) {
+ return false;
+ }
+}
+
+class InjectForever extends InjectionPolicy {
+ @Override
+ public boolean shouldInject(int injectionCount) {
+ return true;
+ }
+}
+
+class InjectUpToMaxCount extends InjectionPolicy {
+ private int maxInjectionCount = 0;
+
+ InjectUpToMaxCount(int maxInjectionCount) {
+ this.maxInjectionCount = maxInjectionCount;
+ }
+
+ @Override
+ public boolean shouldInject(int injectionCount) {
+ if (injectionCount < this.maxInjectionCount) {
+ return true;
+ }
+ return false;
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/StackSnapshot.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/StackSnapshot.java
new file mode 100644
index 00000000..f384e319
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/StackSnapshot.java
@@ -0,0 +1,107 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.stream.Collectors;
+
+class StackSnapshot {
+ private ArrayList<String> stacktrace;
+
+ public StackSnapshot() {
+ this.stacktrace = new ArrayList<String>();
+
+ StackTraceElement[] ste = Thread.currentThread().getStackTrace();
+ for (StackTraceElement frame : ste) {
+ if (!frame.toString().contains("edu.uchicago.cs.systems.wasabi") &&
+ !frame.toString().contains("java.lang.Thread.getStackTrace(Thread.java:")) {
+ this.stacktrace.add(frame.toString());
+ }
+ }
+ }
+
+ public StackSnapshot(ArrayList<String> stacktrace) {
+ this.stacktrace = stacktrace;
+ }
+
+ public int getSize() {
+ return this.stacktrace.size();
+ }
+
+ public Boolean isNullOrEmpty() {
+ return this.stacktrace == null || this.stacktrace.isEmpty();
+ }
+
+ public String toString() {
+ return this.stacktrace.stream().map(frame -> "\t" + frame).collect(Collectors.joining("\n"));
+ }
+
+ public ArrayList<String> getStacktrace() {
+ return this.stacktrace;
+ }
+
+ public String serializeTopFrames(int maxLevel) {
+ ArrayList<String> topOfStack = new ArrayList<String>();
+ int level = 0;
+
+ for (String frame : this.stacktrace) {
+ if (++level > maxLevel) {
+ break;
+ }
+ topOfStack.add(frame);
+ }
+
+ return topOfStack.stream().map(frame -> "\t" + frame).collect(Collectors.joining("\n"));
+ }
+
+ public String getFrame(int index) {
+ if (index >= 0 && index < this.stacktrace.size()) {
+ return stacktrace.get(index);
+ }
+ return null;
+ }
+
+ public Boolean hasFrame(String target) {
+ return this.stacktrace.stream().anyMatch(frame -> frame.contains(target));
+ }
+
+ public Boolean isEqual(StackSnapshot target) {
+ if (target.isNullOrEmpty()) {
+ return false;
+ }
+
+ if (this.stacktrace.size() != target.stacktrace.size()) {
+ return false;
+ }
+
+ for (int i = 0; i < this.stacktrace.size(); ++i) {
+ if (!this.stacktrace.get(i).equals(target.stacktrace.get(i))) {
+ return false;
+ }
+ }
+
+ return true;
+ }
+
+ public ArrayList<String> normalizeStackBelowFrame(String target) {
+ ArrayList<String> normalizedStack = new ArrayList<String>();
+ boolean targetFound = false;
+
+ for (String frame : stacktrace) {
+ if (frame.contains(target)) {
+ targetFound = true;
+ normalizedStack.add(target);
+ continue;
+ }
+
+ if (targetFound) {
+ normalizedStack.add(frame);
+ }
+ }
+
+ return normalizedStack;
+ }
+
+ public static String getQualifiedName(String frame) {
+ return frame != null ? frame.split("\\(")[0] : null;
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContext.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContext.java
new file mode 100644
index 00000000..4c63a57f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContext.java
@@ -0,0 +1,120 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Collections;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+class WasabiContext {
+
+ private WasabiLogger LOG;
+ private ConfigParser configParser;
+
+ private final HashingPrimitives hashingPrimitives = new HashingPrimitives();
+
+ private Map<String, InjectionPoint> injectionPlan; // value type inferred from the field access in getInjectionPoint
+ private InjectionPolicy injectionPolicy;
+
+ private ExecutionTrace executionTrace = new ExecutionTrace(10);
+ private ConcurrentHashMap<Integer, Integer> injectionCounts = new ConcurrentHashMap<>();
+
+ public WasabiContext(WasabiLogger logger,
+ ConfigParser configParser) {
+ this.LOG = logger;
+ this.configParser = configParser;
+
+ int maxInjectionCount = this.configParser.getMaxInjectionCount();
+
+ String injectionPolicyString = this.configParser.getInjectionPolicy();
+ switch (injectionPolicyString) {
+ case "no-injection":
+ injectionPolicy = new NoInjection();
+ break;
+ case "forever":
+ injectionPolicy = new InjectForever();
+ break;
+ case "max-count":
+ injectionPolicy = new InjectUpToMaxCount(maxInjectionCount);
+ break;
+ default:
+ injectionPolicy = new NoInjection();
+ break;
+ }
+
+ injectionPlan = Collections.unmodifiableMap(this.configParser.getInjectionPlan());
+ }
+
+ private Boolean isNullOrEmpty(String str) {
+ return str == null || str.isEmpty();
+ }
+
+ private synchronized int getInjectionCount(ArrayList<String> stacktrace) {
+ int hval = hashingPrimitives.getHashValue(stacktrace);
+ return injectionCounts.getOrDefault(hval, 0);
+ }
+
+ private synchronized int updateInjectionCount(ArrayList<String> stacktrace) {
+ int hval = hashingPrimitives.getHashValue(stacktrace);
+ return injectionCounts.compute(hval, (k, v) -> (v == null) ? 1 : v + 1);
+ }
+
+ public synchronized void addToExecTrace(String opName, int opType, StackSnapshot stackSnapshot) {
+ long currentTime = System.nanoTime();
+ executionTrace.addLast(new OpEntry(opName, opType, currentTime, stackSnapshot));
+ }
+
+ public synchronized void addToExecTrace(String opName, int opType, StackSnapshot stackSnapshot, String retryException) {
+ long currentTime = System.nanoTime();
+ executionTrace.addLast(new OpEntry(opName, opType, currentTime, stackSnapshot, retryException));
+ }
+
+ public synchronized InjectionPoint getInjectionPoint(String testName,
+ String injectionSite,
+ String injectionSourceLocation,
+ String retryException,
+ String retryCallerFunction,
+ StackSnapshot stackSnapshot) {
+
+ if (!injectionPlan.containsKey(injectionSourceLocation)) {
+ return null;
+ }
+
+ String retrySourceLocation = injectionPlan.get(injectionSourceLocation).retryCallerFunction;
+ int injectionCount = getInjectionCount(stackSnapshot.getStacktrace());
+
+ addToExecTrace(injectionSite, OpEntry.RETRY_CALLER_OP, stackSnapshot, retryException);
+
+ return new InjectionPoint(
+ stackSnapshot,
+ retrySourceLocation,
+ retryCallerFunction,
+ injectionSite,
+ retryException,
+ injectionCount
+ );
+ }
+
+ public Boolean shouldInject(InjectionPoint ipt) {
+ if (injectionPolicy.shouldInject(ipt.injectionCount)) {
+ ipt.injectionCount = updateInjectionCount(ipt.stackSnapshot.getStacktrace());
+ return true;
+ }
+
+ return false;
+ }
+
+ public void printExecTrace(WasabiLogger log, String msg) {
+ if (executionTrace.getSize() > 0) {
+ executionTrace.printExecutionTrace(log, msg);
+ }
+ }
+
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java
new file mode 100644
index 00000000..cf900b3c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java
@@ -0,0 +1,7 @@
+package edu.uchicago.cs.systems.wasabi;
+
+// A simple interface for runnables that can hold a WasabiContext object
+public interface WasabiContextHolder {
+ public void setWasabiContext(WasabiContext ctx);
+ public WasabiContext getWasabiContext();
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiLogger.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiLogger.java
new file mode 100644
index 00000000..ae6e3cca
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiLogger.java
@@ -0,0 +1,33 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+class WasabiLogger {
+ private final Logger LOG = LoggerFactory.getLogger(WasabiLogger.class);
+
+ public static final int LOG_LEVEL_INFO = 1;
+ public static final int LOG_LEVEL_WARN = 2;
+ public static final int LOG_LEVEL_DEBUG = 3;
+ public static final int LOG_LEVEL_ERROR = 4;
+
+ public synchronized void printMessage(int logLevel, String msg) {
+ long timestamp = System.nanoTime();
+ long threadId = Thread.currentThread().getId();
+
+ switch(logLevel) {
+ case LOG_LEVEL_INFO:
+ LOG.info("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ case LOG_LEVEL_WARN:
+ LOG.warn("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ case LOG_LEVEL_DEBUG:
+ LOG.debug("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ case LOG_LEVEL_ERROR:
+ LOG.error("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ }
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestExecutionTrace.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestExecutionTrace.java
new file mode 100644
index 00000000..05f1ba62
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestExecutionTrace.java
@@ -0,0 +1,261 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+import static org.junit.Assert.*;
+import org.junit.Test;
+
+public class TestExecutionTrace {
+
+ @Test
+ public void testIsSameOpEntry() {
+ OpEntry testOpA = new OpEntry(
+ "baz(Baz.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("baz(Baz.java:42)");
+ add("bar(Bar.java:42)");
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "IOException"
+ );
+ OpEntry testOpB = new OpEntry(
+ "baz(Baz.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("baz(Baz.java:42)");
+ add("bar(Bar.java:42)");
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "RuntimeException"
+ );
+
+ assertTrue(testOpA.isSameOp(testOpA));
+ assertFalse(testOpA.isSameOp(testOpB));
+ }
+
+ @Test
+ public void testHasFrame() {
+ OpEntry testOp = new OpEntry(
+ "baz(Baz.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("baz(Baz.java:42)");
+ add("bar(Bar.java:42)");
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ );
+
+ assertTrue(testOp.hasFrame("bar(Bar.java:42)"));
+ assertFalse(testOp.hasFrame("not-a-frame"));
+ }
+
+ @Test
+ public void testIsOfType() {
+ OpEntry testOp = new OpEntry(
+ "foo(Foo.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ );
+
+ assertTrue(testOp.isOfType(OpEntry.RETRY_CALLER_OP));
+ assertFalse(testOp.isOfType(OpEntry.THREAD_SLEEP_OP));
+ }
+
+ @Test
+ public void testIsNullOrEmpty() {
+ ExecutionTrace execTrace = new ExecutionTrace();
+ assertTrue(execTrace.isNullOrEmpty());
+ }
+
+ @Test
+ public void testCheckIfOpIsOfType() {
+ ExecutionTrace execTrace = new ExecutionTrace();
+ execTrace.addLast(
+ new OpEntry(
+ "foo(Foo.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+
+ assertEquals(execTrace.getSize(), 1);
+ assertTrue(execTrace.checkIfOpIsOfType(0, OpEntry.RETRY_CALLER_OP));
+ assertFalse(execTrace.checkIfOpIsOfType(1, OpEntry.RETRY_CALLER_OP));
+ assertFalse(execTrace.checkIfOpIsOfType(0, OpEntry.THREAD_SLEEP_OP));
+ }
+
+ @Test
+ public void testCheckIfOpHasFrame() {
+ ExecutionTrace execTrace = new ExecutionTrace();
+ execTrace.addLast(
+ new OpEntry(
+ "foo(Foo.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+
+ assertEquals(execTrace.getSize(), 1);
+ assertTrue(execTrace.checkIfOpHasFrame(0, "foo(Foo.java:42)"));
+ assertFalse(execTrace.checkIfOpHasFrame(1, "foo(Foo.java:42)"));
+ assertFalse(execTrace.checkIfOpHasFrame(0, "not-a-frame"));
+ }
+
+ @Test
+ public void testCheckIfOpsAreEqual() {
+ ExecutionTrace execTrace = new ExecutionTrace();
+ execTrace.addLast(
+ new OpEntry(
+ "foo(Foo.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+ execTrace.addLast(
+ new OpEntry(
+ "Thread.sleep(Bar.java:43)",
+ OpEntry.THREAD_SLEEP_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("Thread.sleep(Bar.java:43)");
+ add("bar(Bar.java:42)");
+ }
+ }
+ ),
+ null
+ )
+ );
+ execTrace.addLast(
+ new OpEntry(
+ "foo(Foo.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+
+ assertEquals(execTrace.getSize(), 3);
+ assertTrue(execTrace.checkIfOpsAreEqual(0, 2));
+ assertFalse(execTrace.checkIfOpsAreEqual(1, 2));
+ }
+
+ @Test
+ public void testMaxOpCacheSize() {
+ int maxOpCacheSize = 50;
+ ExecutionTrace execTrace = new ExecutionTrace(maxOpCacheSize);
+ execTrace.addLast(
+ new OpEntry(
+ "foo(Foo.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("foo(Foo.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+ for (int i = 1; i < maxOpCacheSize; ++i) {
+ execTrace.addLast(
+ new OpEntry(
+ "bar(Bar.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("bar(Bar.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+ }
+
+ assertEquals(execTrace.getSize(), maxOpCacheSize);
+ assertTrue(execTrace.checkIfOpHasFrame(0, "foo(Foo.java:42)"));
+
+ execTrace.addLast(
+ new OpEntry(
+ "bar(Bar.java:42)",
+ OpEntry.RETRY_CALLER_OP,
+ 0L,
+ new StackSnapshot(
+ new ArrayList() {
+ {
+ add("bar(Bar.java:42)");
+ }
+ }
+ ),
+ "Exception"
+ )
+ );
+
+ assertEquals(execTrace.getSize(), maxOpCacheSize);
+ assertTrue(execTrace.checkIfOpHasFrame(0, "bar(Bar.java:42)"));
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestInjectionPolicies.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestInjectionPolicies.java
new file mode 100644
index 00000000..795369aa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestInjectionPolicies.java
@@ -0,0 +1,32 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+
+import static org.junit.Assert.*;
+import org.junit.Test;
+
+public class TestInjectionPolicies {
+
+ int fakeCount = 1;
+ int fakeBound = 2;
+
+ @Test
+ public void testNoInjectionPolicy() {
+ InjectionPolicy policy = new NoInjection();
+ assertFalse(policy.shouldInject(this.fakeCount));
+ }
+
+ @Test
+ public void testInjectForeverPolicy() {
+ InjectionPolicy policy = new InjectForever();
+ assertTrue(policy.shouldInject(this.fakeCount));
+ }
+
+ @Test
+ public void testInjectUpToMaxCountPolicy() {
+ InjectionPolicy policy = new InjectUpToMaxCount(this.fakeBound);
+ assertTrue(policy.shouldInject(this.fakeCount));
+ assertFalse(policy.shouldInject(this.fakeCount + this.fakeBound));
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestStackSnapshot.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestStackSnapshot.java
new file mode 100644
index 00000000..aea99e87
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestStackSnapshot.java
@@ -0,0 +1,69 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+
+import static org.junit.Assert.*;
+import org.junit.Test;
+
+public class TestStackSnapshot {
+
+ @Test
+ public void testIsNullOrEmpty() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ assertFalse(stackSnapshot.isNullOrEmpty());
+
+ StackSnapshot emptyStackSnapshot = new StackSnapshot(null);
+ assertTrue(emptyStackSnapshot.isNullOrEmpty());
+ }
+
+ @Test
+ public void testHasFrame() {
+ ArrayList<String> testStack = new ArrayList<String>() {
+ {
+ add("baz(Baz.java:42)");
+ add("bar(Bar.java:42)");
+ add("foo(Foo.java:42)");
+ }
+ };
+
+ StackSnapshot stackSnapshot = new StackSnapshot(testStack);
+ for (String frame : testStack) {
+ assertTrue(stackSnapshot.hasFrame(frame));
+ }
+
+ assertFalse(stackSnapshot.hasFrame("not-a-frame"));
+ }
+
+ @Test
+ public void testNormalizeStackBelowFrame() {
+ ArrayList<String> testStack = new ArrayList<String>() {
+ {
+ add("baz(Baz.java:42)");
+ add("bar(Bar.java:42)");
+ add("foo(Foo.java:42)");
+ }
+ };
+
+ StackSnapshot stackSnapshot = new StackSnapshot(testStack);
+
+ assertEquals(
+ String.join(" ! ", stackSnapshot.normalizeStackBelowFrame("bar")),
+ String.join(" ! ", new ArrayList() {
+ {
+ add("bar");
+ add("foo(Foo.java:42)");
+ }
+ })
+ );
+ }
+
+ @Test
+ public void testGetQualifiedName() {
+ String frameFoo = "foo(Foo.java:42)";
+ String frameBar = "bar[0](Bar.java:42)";
+
+ assertEquals(StackSnapshot.getQualifiedName(frameFoo), "foo");
+ assertEquals(StackSnapshot.getQualifiedName(frameBar), "bar[0]");
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestThrowableCallback.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestThrowableCallback.java
new file mode 100644
index 00000000..a2c71cfc
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestThrowableCallback.java
@@ -0,0 +1,27 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.lang.Thread;
+
+import static org.junit.Assert.*;
+import org.junit.Test;
+
+public class TestThrowableCallback {
+
+ @Test
+ public void testShouldNotThrowException() throws Exception {
+ try {
+ shouldNotThrow();
+ } catch (Exception e) {
+ // do nothing
+ }
+ }
+
+ private void shouldNotThrow() {
+ try {
+ Thread.sleep(5);
+ } catch (InterruptedException e) {
+ // do nothing
+ }
+ }
+
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestWasabiContext.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestWasabiContext.java
new file mode 100644
index 00000000..c4d648cb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/src/test/java/edu/uchicago/cs/systems/wasabi/TestWasabiContext.java
@@ -0,0 +1,143 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+public class TestWasabiContext {
+
+ private final WasabiLogger LOG = new WasabiLogger();
+
+ private final String testConfigFile = "./_test.conf";
+ private final String testRetryDataFile = "./_test_retry_locations.data";
+ private final String testRetryPolicy = "max-count";
+ private final int testMaxCount = 42;
+
+ private ConfigParser configParser;
+
+ private void generateConfigFile() {
+ try (FileWriter writer = new FileWriter(this.testConfigFile)) {
+ writer.append("retry_data_file: " + this.testRetryDataFile + "\n");
+ writer.append("injection_policy: " + this.testRetryPolicy + "\n");
+ writer.append("max_injection_count: " + String.valueOf(this.testMaxCount) + "\n");
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("[wasabi] Error occurred while generating the test configuration file: %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+ private void generateDataRetryFile() {
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String[][] records = {
+ {
+ "test_retry_location:TestWasabiContext.javaL#0", // retry location
+ StackSnapshot.getQualifiedName(stackSnapshot.getFrame(1)), // enclosing method
+ StackSnapshot.getQualifiedName(stackSnapshot.getFrame(0)), // retried method
+ "SocketException", // exception
+ "1.0", // injection probability
+ "0" // test coverage metrics
+ }
+ };
+
+ try (FileWriter writer = new FileWriter(this.testRetryDataFile)) {
+ writer.append("Retry location!!!Enclosing method!!!Retried method!!!Exception!!!Injection probability!!!Test coverage\n");
+
+ for (String[] record : records) {
+ writer.append(
+ String.format("%s!!!%s!!!%s!!!%s!!!%s!!!%s\n", record[0], record[1], record[2], record[3], record[4], record[5])
+ );
+ }
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("[wasabi] Error occurred while generating the retry data file: %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+ @Before
+ public void startUp() {
+ generateConfigFile();
+ generateDataRetryFile();
+ this.configParser = new ConfigParser(LOG, testConfigFile);
+ }
+
+ /*
+ @Test
+ public void testShouldInject() {
+ WasabiContext wasabiCtx = new WasabiContext(this.LOG, this.configParser);
+ InjectionPoint validInjectionPoint = wasabiCtx.getInjectionPoint();
+
+ assertTrue(validInjectionPoint != null);
+ assertTrue(wasabiCtx.shouldInject(validInjectionPoint));
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ InjectionPoint invalidInjectionPoint = new InjectionPoint(
+ stackSnapshot,
+ "FakeRetryLocation",
+ "FakeRetryCaller",
+ "FakeRetriedCallee",
+ "FakeException",
+ 100 // injection count
+ );
+
+ assertFalse(wasabiCtx.shouldInject(invalidInjectionPoint));
+ }
+ */
+
+ /*
+ @Test
+ public void testUpdateInjectionCount() {
+ WasabiContext wasabiCtx = new WasabiContext(this.LOG, this.configParser);
+ InjectionPoint ipt = wasabiCtx.getInjectionPoint(); // new injection point
+ int initialCount = ipt.injectionCount;
+
+ ipt = wasabiCtx.getInjectionPoint(); // new injection point, same retry context
+ assertTrue(wasabiCtx.shouldInject(ipt));
+ assertEquals(initialCount + 1, ipt.injectionCount.intValue());
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ int uniqueId = HashingPrimitives.getHashValue(stackSnapshot.normalizeStackBelowFrame(stackSnapshot.getFrame(1)));
+ wasabiCtx.addToExecTrace(uniqueId, OpEntry.THREAD_SLEEP_OP, stackSnapshot); // some sleep operations in between
+ wasabiCtx.addToExecTrace(uniqueId, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ wasabiCtx.addToExecTrace(uniqueId, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+
+ ipt = wasabiCtx.getInjectionPoint(); // new injection point, same retry context
+ assertTrue(wasabiCtx.shouldInject(ipt));
+ assertEquals(initialCount + 2, ipt.injectionCount.intValue());
+ }
+ */
+
+ @After
+ public void tearDown() {
+ try {
+ Path path = Paths.get(this.testRetryDataFile);
+ Files.deleteIfExists(path);
+
+ path = Paths.get(this.testConfigFile);
+ Files.deleteIfExists(path);
+
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("[wasabi] Error occurred while deleting test configuration files: %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/builddef.lst b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/builddef.lst
new file mode 100644
index 00000000..37475da4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/builddef.lst
@@ -0,0 +1,13 @@
+-Xajruntimetarget:1.5
+-encoding
+UTF-8
+-showWeaveInfo
+-verbose
+-1.8
+-classpath
+/home/cc/.m2/repository/org/aspectj/aspectjrt/1.9.8.M1/aspectjrt-1.9.8.M1.jar:/home/cc/.m2/repository/org/slf4j/slf4j-simple/2.0.6/slf4j-simple-2.0.6.jar:/home/cc/.m2/repository/org/slf4j/slf4j-api/2.0.6/slf4j-api-2.0.6.jar:/home/cc/.m2/repository/junit/junit/4.13.2/junit-4.13.2.jar:/home/cc/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/cc/.m2/repository/commons-codec/commons-codec/1.16.0/commons-codec-1.16.0.jar:/home/cc/.m2/repository/org/apache/hive/hive-metastore/4.0.0-beta-1/hive-metastore-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-common/4.0.0-beta-1/hive-common-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-classification/4.0.0-beta-1/hive-classification-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-storage-api/4.0.0-beta-1/hive-storage-api-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/cc/.m2/repository/org/apache/orc/orc-core/1.8.3/orc-core-1.8.3.jar:/home/cc/.m2/repository/org/apache/orc/orc-shims/1.8.3/orc-shims-1.8.3.jar:/home/cc/.m2/repository/io/airlift/aircompressor/0.21/aircompressor-0.21.jar:/home/cc/.m2/repository/org/jetbrains/annotations/17.0.0/annotations-17.0.0.jar:/home/cc/.m2/repository/org/threeten/threeten-extra/1.7.1/threeten-extra-1.7.1.jar:/home/cc/.m2/repository/jline/jline/2.14.6/jline-2.14.6.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-http/9.4.40.v20210413/jetty-http-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-rewrite/9.4.40.v20210413/jetty-rewrite-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.40.v20210413/jetty-webapp-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.40.v20210413/jetty-xml-9.4.40.v20210413.jar:/home/cc/.m2/repository/joda-time/joda-time/2.9.9/joda-time-2.9.9.jar:/home/cc/.m2/repository/org/apache/logging/log4j/log4j-web/2.18.0/log4j-web-2.18.0.jar:/home/cc/.m2/repository/org/apache/t
ez/tez-api/0.10.2/tez-api-0.10.2.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-auth/3.3.1/hadoop-auth-3.3.1.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-annotations/3.3.1/hadoop-annotations-3.3.1.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-yarn-api/3.3.1/hadoop-yarn-api-3.3.1.jar:/home/cc/.m2/repository/javax/xml/bind/jaxb-api/2.2.11/jaxb-api-2.2.11.jar:/home/cc/.m2/repository/org/apache/hadoop/thirdparty/hadoop-shaded-protobuf_3_7/1.1.1/hadoop-shaded-protobuf_3_7-1.1.1.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.3.1/hadoop-yarn-common-3.3.1.jar:/home/cc/.m2/repository/com/google/inject/extensions/guice-servlet/4.0/guice-servlet-4.0.jar:/home/cc/.m2/repository/com/google/inject/guice/4.0/guice-4.0.jar:/home/cc/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar:/home/cc/.m2/repository/com/sun/jersey/contribs/jersey-guice/1.19/jersey-guice-1.19.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-yarn-client/3.3.1/hadoop-yarn-client-3.3.1.jar:/home/cc/.m2/repository/org/jline/jline/3.9.0/jline-3.9.0.jar:/home/cc/.m2/repository/com/sun/jersey/jersey-json/1.19/jersey-json-1.19.jar:/home/cc/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/cc/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.2/jackson-core-asl-1.9.2.jar:/home/cc/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.2/jackson-mapper-asl-1.9.2.jar:/home/cc/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.9.2/jackson-jaxrs-1.9.2.jar:/home/cc/.m2/repository/org/codehaus/jackson/jackson-xc/1.9.2/jackson-xc-1.9.2.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-hdfs-client/3.3.1/hadoop-hdfs-client-3.3.1.jar:/home/cc/.m2/repository/com/squareup/okhttp/okhttp/2.7.5/okhttp-2.7.5.jar:/home/cc/.m2/repository/com/squareup/okio/okio/1.6.0/okio-1.6.0.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/../lib/tools.jar:/home/cc/.m2/repository/org/fusesource/jansi/jansi/2.3.4/jansi-2.3.4.jar:/home/cc/.m2/repository/com/tdunni
ng/json/1.8/json-1.8.jar:/home/cc/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.0/metrics-core-3.1.0.jar:/home/cc/.m2/repository/io/dropwizard/metrics/metrics-jvm/3.1.0/metrics-jvm-3.1.0.jar:/home/cc/.m2/repository/io/dropwizard/metrics/metrics-json/3.1.0/metrics-json-3.1.0.jar:/home/cc/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.5/jackson-databind-2.13.5.jar:/home/cc/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.13.5/jackson-core-2.13.5.jar:/home/cc/.m2/repository/com/github/joshelser/dropwizard-metrics-hadoop-metrics2-reporter/0.1.2/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/home/cc/.m2/repository/org/apache/hive/hive-serde/4.0.0-beta-1/hive-serde-4.0.0-beta-1.jar:/home/cc/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/cc/.m2/repository/org/apache/arrow/arrow-vector/12.0.0/arrow-vector-12.0.0.jar:/home/cc/.m2/repository/org/apache/arrow/arrow-format/12.0.0/arrow-format-12.0.0.jar:/home/cc/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.13.4/jackson-datatype-jsr310-2.13.4.jar:/home/cc/.m2/repository/com/carrotsearch/hppc/0.7.2/hppc-0.7.2.jar:/home/cc/.m2/repository/com/google/flatbuffers/flatbuffers-java/1.12.0/flatbuffers-java-1.12.0.jar:/home/cc/.m2/repository/org/apache/avro/avro/1.11.1/avro-1.11.1.jar:/home/cc/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/cc/.m2/repository/org/apache/parquet/parquet-hadoop-bundle/1.13.0/parquet-hadoop-bundle-1.13.0.jar:/home/cc/.m2/repository/com/esri/geometry/esri-geometry-api/2.2.4/esri-geometry-api-2.2.4.jar:/home/cc/.m2/repository/org/apache/hive/hive-shims/4.0.0-beta-1/hive-shims-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/shims/hive-shims-common/4.0.0-beta-1/hive-shims-common-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-standalone-metastore-common/4.0.0-beta-1/hive-standalone-metastore-common-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/commons/commons-jexl3/3.3/com
mons-jexl3-3.3.jar:/home/cc/.m2/repository/io/grpc/grpc-netty-shaded/1.51.0/grpc-netty-shaded-1.51.0.jar:/home/cc/.m2/repository/io/grpc/grpc-core/1.51.0/grpc-core-1.51.0.jar:/home/cc/.m2/repository/io/grpc/grpc-protobuf/1.51.0/grpc-protobuf-1.51.0.jar:/home/cc/.m2/repository/io/grpc/grpc-api/1.51.0/grpc-api-1.51.0.jar:/home/cc/.m2/repository/io/grpc/grpc-context/1.51.0/grpc-context-1.51.0.jar:/home/cc/.m2/repository/com/google/api/grpc/proto-google-common-protos/2.9.0/proto-google-common-protos-2.9.0.jar:/home/cc/.m2/repository/io/grpc/grpc-protobuf-lite/1.51.0/grpc-protobuf-lite-1.51.0.jar:/home/cc/.m2/repository/io/grpc/grpc-stub/1.51.0/grpc-stub-1.51.0.jar:/home/cc/.m2/repository/com/google/protobuf/protobuf-java/3.21.7/protobuf-java-3.21.7.jar:/home/cc/.m2/repository/com/zaxxer/HikariCP/4.0.3/HikariCP-4.0.3.jar:/home/cc/.m2/repository/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/home/cc/.m2/repository/com/github/ben-manes/caffeine/caffeine/2.8.4/caffeine-2.8.4.jar:/home/cc/.m2/repository/org/checkerframework/checker-qual/3.4.0/checker-qual-3.4.0.jar:/home/cc/.m2/repository/javolution/javolution/5.5.1/javolution-5.5.1.jar:/home/cc/.m2/repository/com/google/guava/guava/22.0/guava-22.0.jar:/home/cc/.m2/repository/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.jar:/home/cc/.m2/repository/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar:/home/cc/.m2/repository/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.jar:/home/cc/.m2/repository/commons-cli/commons-cli/1.5.0/commons-cli-1.5.0.jar:/home/cc/.m2/repository/org/apache/thrift/libfb303/0.9.3/libfb303-0.9.3.jar:/home/cc/.m2/repository/org/apache/hive/hive-service/4.0.0-beta-1/hive-service-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-service-rpc/4.0.0-beta-1/hive-service-rpc-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-llap-server/4.0.0-beta-1/hive-llap-server-4.0.0-beta-1.jar
:/home/cc/.m2/repository/org/apache/hive/hive-llap-common/4.0.0-beta-1/hive-llap-common-4.0.0-beta-1.jar:/home/cc/.m2/repository/io/jsonwebtoken/jjwt-api/0.10.5/jjwt-api-0.10.5.jar:/home/cc/.m2/repository/io/jsonwebtoken/jjwt-impl/0.10.5/jjwt-impl-0.10.5.jar:/home/cc/.m2/repository/io/jsonwebtoken/jjwt-jackson/0.10.5/jjwt-jackson-0.10.5.jar:/home/cc/.m2/repository/org/apache/hive/hive-llap-client/4.0.0-beta-1/hive-llap-client-4.0.0-beta-1.jar:/home/cc/.m2/repository/io/netty/netty-all/4.1.77.Final/netty-all-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-buffer/4.1.77.Final/netty-buffer-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec/4.1.77.Final/netty-codec-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-dns/4.1.77.Final/netty-codec-dns-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-haproxy/4.1.77.Final/netty-codec-haproxy-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-http/4.1.77.Final/netty-codec-http-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-http2/4.1.77.Final/netty-codec-http2-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-memcache/4.1.77.Final/netty-codec-memcache-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-mqtt/4.1.77.Final/netty-codec-mqtt-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-redis/4.1.77.Final/netty-codec-redis-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-smtp/4.1.77.Final/netty-codec-smtp-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-socks/4.1.77.Final/netty-codec-socks-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-stomp/4.1.77.Final/netty-codec-stomp-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-codec-xml/4.1.77.Final/netty-codec-xml-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-common/4.1.77.Final/netty-common-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-handler-proxy/4.1.77.Final/netty-handler-proxy-4.1.77.Final.jar:/home/cc/.m2/repo
sitory/io/netty/netty-resolver/4.1.77.Final/netty-resolver-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-resolver-dns/4.1.77.Final/netty-resolver-dns-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport/4.1.77.Final/netty-transport-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-rxtx/4.1.77.Final/netty-transport-rxtx-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-sctp/4.1.77.Final/netty-transport-sctp-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-udt/4.1.77.Final/netty-transport-udt-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-classes-epoll/4.1.77.Final/netty-transport-classes-epoll-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.77.Final/netty-transport-native-unix-common-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-classes-kqueue/4.1.77.Final/netty-transport-classes-kqueue-4.1.77.Final.jar:/home/cc/.m2/repository/io/netty/netty-resolver-dns-classes-macos/4.1.77.Final/netty-resolver-dns-classes-macos-4.1.77.Final.jar:/home/cc/.m2/repository/org/codehaus/jettison/jettison/1.5.4/jettison-1.5.4.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-util/9.4.40.v20210413/jetty-util-9.4.40.v20210413.jar:/home/cc/.m2/repository/com/lmax/disruptor/3.3.7/disruptor-3.3.7.jar:/home/cc/.m2/repository/org/apache/hive/hive-llap-common/4.0.0-beta-1/hive-llap-common-4.0.0-beta-1-tests.jar:/home/cc/.m2/repository/org/apache/hive/hive-hplsql/4.0.0-beta-1/hive-hplsql-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/antlr/antlr4-runtime/4.9.3/antlr4-runtime-4.9.3.jar:/home/cc/.m2/repository/com/nimbusds/nimbus-jose-jwt/9.31/nimbus-jose-jwt-9.31.jar:/home/cc/.m2/repository/com/github/stephenc/jcip/jcip-annotations/1.0-1/jcip-annotations-1.0-1.jar:/home/cc/.m2/repository/net/sf/jpam/jpam/1.1/jpam-1.1.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-server/9.4.40.v20210413/jetty-server-9.4.40.v20210413.jar:/home/cc/.m2/repository/
org/eclipse/jetty/jetty-io/9.4.40.v20210413/jetty-io-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.40.v20210413/jetty-servlet-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-security/9.4.40.v20210413/jetty-security-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.40.v20210413/jetty-util-ajax-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-runner/9.4.40.v20210413/jetty-runner-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-plus/9.4.40.v20210413/jetty-plus-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-annotations/9.4.40.v20210413/jetty-annotations-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/ow2/asm/asm/9.0/asm-9.0.jar:/home/cc/.m2/repository/org/ow2/asm/asm-commons/9.0/asm-commons-9.0.jar:/home/cc/.m2/repository/org/ow2/asm/asm-tree/9.0/asm-tree-9.0.jar:/home/cc/.m2/repository/org/ow2/asm/asm-analysis/9.0/asm-analysis-9.0.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-jaas/9.4.40.v20210413/jetty-jaas-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/websocket/websocket-server/9.4.40.v20210413/websocket-server-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/websocket/websocket-common/9.4.40.v20210413/websocket-common-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/websocket/websocket-api/9.4.40.v20210413/websocket-api-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/websocket/websocket-client/9.4.40.v20210413/websocket-client-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-client/9.4.40.v20210413/jetty-client-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/websocket/websocket-servlet/9.4.40.v20210413/websocket-servlet-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/jetty-jndi/9.4.40.v20210413/jetty-jndi-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/apache-jsp/9.4.40.v20210413/apache-j
sp-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/eclipse/jetty/toolchain/jetty-schemas/3.1.2/jetty-schemas-3.1.2.jar:/home/cc/.m2/repository/org/eclipse/jetty/apache-jstl/9.4.40.v20210413/apache-jstl-9.4.40.v20210413.jar:/home/cc/.m2/repository/org/apache/taglibs/taglibs-standard-spec/1.2.5/taglibs-standard-spec-1.2.5.jar:/home/cc/.m2/repository/org/apache/taglibs/taglibs-standard-impl/1.2.5/taglibs-standard-impl-1.2.5.jar:/home/cc/.m2/repository/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar:/home/cc/.m2/repository/org/apache/curator/curator-framework/5.2.0/curator-framework-5.2.0.jar:/home/cc/.m2/repository/org/apache/curator/curator-client/5.2.0/curator-client-5.2.0.jar:/home/cc/.m2/repository/org/apache/curator/curator-recipes/5.2.0/curator-recipes-5.2.0.jar:/home/cc/.m2/repository/org/pac4j/pac4j-saml-opensamlv3/4.5.5/pac4j-saml-opensamlv3-4.5.5.jar:/home/cc/.m2/repository/org/pac4j/pac4j-core/4.5.5/pac4j-core-4.5.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-core/3.4.5/opensaml-core-3.4.5.jar:/home/cc/.m2/repository/net/shibboleth/utilities/java-support/7.5.1/java-support-7.5.1.jar:/home/cc/.m2/repository/org/opensaml/opensaml-saml-api/3.4.5/opensaml-saml-api-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-storage-api/3.4.5/opensaml-storage-api-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-saml-impl/3.4.5/opensaml-saml-impl-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-soap-impl/3.4.5/opensaml-soap-impl-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-soap-api/3.4.5/opensaml-soap-api-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-xmlsec-api/3.4.5/opensaml-xmlsec-api-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-security-api/3.4.5/opensaml-security-api-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-security-impl/3.4.5/opensaml-security-impl-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-profile-api/3.4.5/opensaml-profile-api-3.4.5.jar:/home/cc/.m2/repository/org/
opensaml/opensaml-profile-impl/3.4.5/opensaml-profile-impl-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-messaging-api/3.4.5/opensaml-messaging-api-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-messaging-impl/3.4.5/opensaml-messaging-impl-3.4.5.jar:/home/cc/.m2/repository/org/opensaml/opensaml-storage-impl/3.4.5/opensaml-storage-impl-3.4.5.jar:/home/cc/.m2/repository/org/ldaptive/ldaptive/1.0.13/ldaptive-1.0.13.jar:/home/cc/.m2/repository/javax/json/javax.json-api/1.0/javax.json-api-1.0.jar:/home/cc/.m2/repository/net/spy/spymemcached/2.12.3/spymemcached-2.12.3.jar:/home/cc/.m2/repository/org/opensaml/opensaml-xmlsec-impl/3.4.5/opensaml-xmlsec-impl-3.4.5.jar:/home/cc/.m2/repository/org/cryptacular/cryptacular/1.2.4/cryptacular-1.2.4.jar:/home/cc/.m2/repository/net/shibboleth/tool/xmlsectool/2.0.0/xmlsectool-2.0.0.jar:/home/cc/.m2/repository/com/beust/jcommander/1.48/jcommander-1.48.jar:/home/cc/.m2/repository/org/apache/velocity/velocity-engine-core/2.3/velocity-engine-core-2.3.jar:/home/cc/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.64/bcprov-jdk15on-1.64.jar:/home/cc/.m2/repository/org/apache/santuario/xmlsec/2.3.0/xmlsec-2.3.0.jar:/home/cc/.m2/repository/org/jamon/jamon-runtime/2.4.1/jamon-runtime-2.4.1.jar:/home/cc/.m2/repository/org/apache/hive/hive-exec/4.0.0-beta-1/hive-exec-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/atlas/atlas-client-v2/2.1.0/atlas-client-v2-2.1.0.jar:/home/cc/.m2/repository/cglib/cglib/2.2.2/cglib-2.2.2.jar:/home/cc/.m2/repository/asm/asm/3.3.1/asm-3.3.1.jar:/home/cc/.m2/repository/org/apache/atlas/atlas-client-common/2.1.0/atlas-client-common-2.1.0.jar:/home/cc/.m2/repository/com/sun/jersey/jersey-client/1.19/jersey-client-1.19.jar:/home/cc/.m2/repository/org/apache/atlas/atlas-intg/2.1.0/atlas-intg-2.1.0.jar:/home/cc/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-common/3.1.1/hadoop-common-3.1.1.jar:/home/cc/.m2/
repository/commons-net/commons-net/3.6/commons-net-3.6.jar:/home/cc/.m2/repository/com/sun/jersey/jersey-servlet/1.19/jersey-servlet-1.19.jar:/home/cc/.m2/repository/com/sun/jersey/jersey-server/1.19/jersey-server-1.19.jar:/home/cc/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/cc/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar:/home/cc/.m2/repository/org/apache/commons/commons-configuration2/2.1.1/commons-configuration2-2.1.1.jar:/home/cc/.m2/repository/com/google/re2j/re2j/1.1/re2j-1.1.jar:/home/cc/.m2/repository/com/jcraft/jsch/0.1.54/jsch-0.1.54.jar:/home/cc/.m2/repository/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-simplekdc/1.0.1/kerb-simplekdc-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-client/1.0.1/kerb-client-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerby-config/1.0.1/kerby-config-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-core/1.0.1/kerb-core-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerby-pkix/1.0.1/kerby-pkix-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerby-asn1/1.0.1/kerby-asn1-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerby-util/1.0.1/kerby-util-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-common/1.0.1/kerb-common-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-crypto/1.0.1/kerb-crypto-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-util/1.0.1/kerb-util-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/token-provider/1.0.1/token-provider-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-admin/1.0.1/kerb-admin-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-server/1.0.1/kerb-server-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerb-identity/1.0.1/kerb-identity-1.0.1.jar:/home/cc/.m2/repository/org/apache/kerby/kerby-xdr/1.0.1/kerby-xdr-1.0.1.jar:/home/cc/.m2/repository/org/codehaus/woodstox/stax2-api/3.1.4/stax2-api-3.
1.4.jar:/home/cc/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar:/home/cc/.m2/repository/commons-validator/commons-validator/1.6/commons-validator-1.6.jar:/home/cc/.m2/repository/commons-digester/commons-digester/1.8.1/commons-digester-1.8.1.jar:/home/cc/.m2/repository/com/fasterxml/jackson/jaxrs/jackson-jaxrs-base/2.9.9/jackson-jaxrs-base-2.9.9.jar:/home/cc/.m2/repository/com/fasterxml/jackson/jaxrs/jackson-jaxrs-json-provider/2.9.9/jackson-jaxrs-json-provider-2.9.9.jar:/home/cc/.m2/repository/com/fasterxml/jackson/module/jackson-module-jaxb-annotations/2.9.9/jackson-module-jaxb-annotations-2.9.9.jar:/home/cc/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.9/jackson-annotations-2.9.9.jar:/home/cc/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/cc/.m2/repository/org/springframework/spring-context/4.3.20.RELEASE/spring-context-4.3.20.RELEASE.jar:/home/cc/.m2/repository/org/springframework/spring-aop/4.3.20.RELEASE/spring-aop-4.3.20.RELEASE.jar:/home/cc/.m2/repository/org/springframework/spring-beans/4.3.20.RELEASE/spring-beans-4.3.20.RELEASE.jar:/home/cc/.m2/repository/org/springframework/spring-core/4.3.20.RELEASE/spring-core-4.3.20.RELEASE.jar:/home/cc/.m2/repository/org/springframework/spring-expression/4.3.20.RELEASE/spring-expression-4.3.20.RELEASE.jar:/home/cc/.m2/repository/commons-configuration/commons-configuration/1.10/commons-configuration-1.10.jar:/home/cc/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/cc/.m2/repository/org/apache/commons/commons-dbcp2/2.9.0/commons-dbcp2-2.9.0.jar:/home/cc/.m2/repository/org/apache/commons/commons-math3/3.6.1/commons-math3-3.6.1.jar:/home/cc/.m2/repository/org/apache/commons/commons-pool2/2.11.1/commons-pool2-2.11.1.jar:/home/cc/.m2/repository/org/apache/hive/hive-vector-code-gen/4.0.0-beta-1/hive-vector-code-gen-4.0.0-beta-1.jar:/home/cc/.m2/repository/org/apache/hive/hive-llap-tez/4.0.0-beta-1/hive-llap-tez-4.0.0-beta-1.jar
:/home/cc/.m2/repository/com/amazonaws/secretsmanager/aws-secretsmanager-caching-java/1.0.1/aws-secretsmanager-caching-java-1.0.1.jar:/home/cc/.m2/repository/commons-io/commons-io/2.12.0/commons-io-2.12.0.jar:/home/cc/.m2/repository/org/apache/commons/commons-collections4/4.1/commons-collections4-4.1.jar:/home/cc/.m2/repository/org/apache/commons/commons-text/1.10.0/commons-text-1.10.0.jar:/home/cc/.m2/repository/org/apache/logging/log4j/log4j-1.2-api/2.18.0/log4j-1.2-api-2.18.0.jar:/home/cc/.m2/repository/org/apache/logging/log4j/log4j-api/2.18.0/log4j-api-2.18.0.jar:/home/cc/.m2/repository/org/apache/logging/log4j/log4j-core/2.18.0/log4j-core-2.18.0.jar:/home/cc/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.18.0/log4j-slf4j-impl-2.18.0.jar:/home/cc/.m2/repository/org/antlr/ST4/4.0.4/ST4-4.0.4.jar:/home/cc/.m2/repository/org/antlr/antlr-runtime/3.3/antlr-runtime-3.3.jar:/home/cc/.m2/repository/org/antlr/stringtemplate/3.2.1/stringtemplate-3.2.1.jar:/home/cc/.m2/repository/antlr/antlr/2.7.7/antlr-2.7.7.jar:/home/cc/.m2/repository/org/apache/ant/ant/1.10.13/ant-1.10.13.jar:/home/cc/.m2/repository/org/apache/ant/ant-launcher/1.10.13/ant-launcher-1.10.13.jar:/home/cc/.m2/repository/org/apache/commons/commons-compress/1.23.0/commons-compress-1.23.0.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-yarn-registry/3.3.1/hadoop-yarn-registry-3.3.1.jar:/home/cc/.m2/repository/org/apache/hadoop/hadoop-registry/3.3.1/hadoop-registry-3.3.1.jar:/home/cc/.m2/repository/commons-daemon/commons-daemon/1.0.13/commons-daemon-1.0.13.jar:/home/cc/.m2/repository/org/apache/hadoop/thirdparty/hadoop-shaded-guava/1.1.1/hadoop-shaded-guava-1.1.1.jar:/home/cc/.m2/repository/dnsjava/dnsjava/2.1.7/dnsjava-2.1.7.jar:/home/cc/.m2/repository/org/apache/ivy/ivy/2.5.1/ivy-2.5.1.jar:/home/cc/.m2/repository/org/apache/zookeeper/zookeeper/3.7.1/zookeeper-3.7.1.jar:/home/cc/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.7.1/zookeeper-jute-3.7.1.jar:/home/cc/.m2/repository/org/apach
e/yetus/audience-annotations/0.12.0/audience-annotations-0.12.0.jar:/home/cc/.m2/repository/io/netty/netty-handler/4.1.76.Final/netty-handler-4.1.76.Final.jar:/home/cc/.m2/repository/io/netty/netty-transport-native-epoll/4.1.76.Final/netty-transport-native-epoll-4.1.76.Final.jar:/home/cc/.m2/repository/org/codehaus/groovy/groovy-all/2.4.21/groovy-all-2.4.21.jar:/home/cc/.m2/repository/org/datanucleus/datanucleus-core/5.2.10/datanucleus-core-5.2.10.jar:/home/cc/.m2/repository/com/google/code/gson/gson/2.9.0/gson-2.9.0.jar:/home/cc/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/cc/.m2/repository/org/apache/arrow/arrow-memory-netty/12.0.0/arrow-memory-netty-12.0.0.jar:/home/cc/.m2/repository/org/apache/arrow/arrow-memory-core/12.0.0/arrow-memory-core-12.0.0.jar:/home/cc/.m2/repository/org/reflections/reflections/0.10.2/reflections-0.10.2.jar:/home/cc/.m2/repository/org/javassist/javassist/3.28.0-GA/javassist-3.28.0-GA.jar:/home/cc/.m2/repository/net/minidev/json-smart/2.4.10/json-smart-2.4.10.jar:/home/cc/.m2/repository/net/minidev/accessors-smart/2.4.9/accessors-smart-2.4.9.jar:/home/cc/.m2/repository/com/sun/jersey/contribs/jersey-multipart/1.19/jersey-multipart-1.19.jar:/home/cc/.m2/repository/org/jvnet/mimepull/mimepull/1.9.3/mimepull-1.9.3.jar:/home/cc/.m2/repository/com/sun/jersey/jersey-core/1.19/jersey-core-1.19.jar:/home/cc/.m2/repository/javax/ws/rs/jsr311-api/1.1.1/jsr311-api-1.1.1.jar:/home/cc/.m2/repository/org/apache/thrift/libthrift/0.15.0/libthrift-0.15.0.jar:/home/cc/.m2/repository/org/apache/httpcomponents/httpclient/4.5.10/httpclient-4.5.10.jar:/home/cc/.m2/repository/org/apache/httpcomponents/httpcore/4.4.12/httpcore-4.4.12.jar:/home/cc/.m2/repository/javax/annotation/javax.annotation-api/1.3.2/javax.annotation-api-1.3.2.jar:/home/cc/.m2/repository/org/assertj/assertj-core/3.20.2/assertj-core-3.20.2.jar:/home/cc/sosp24-ae/wasabi/wasabi-testing/target/classes
+-d
+/home/cc/sosp24-ae/wasabi/wasabi-testing/target/classes
+-s
+/home/cc/sosp24-ae/wasabi/wasabi-testing/target/generated-sources/aspectj-maven-plugin
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/aspect/edu/uchicago/cs/systems/wasabi/InterceptHive.aj
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/cassandra/casandra_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/cassandra/casandra_retry_locations.data
new file mode 100644
index 00000000..6cc8521a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/cassandra/casandra_retry_locations.data
@@ -0,0 +1,22 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/db/compaction/Scrubber.java#L196!!!org.apache.cassandra.db.compaction.Scrubber.scrub!!!org.apache.cassandra.db.compaction.Scrubber$ScrubInfo.getCompactionInfo!!!Scrubber.java:199!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/db/compaction/Scrubber.java#L196!!!org.apache.cassandra.db.compaction.Scrubber.scrub!!!org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength!!!Scrubber.java:208!!!java.io.IOException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/db/compaction/Scrubber.java#L196!!!org.apache.cassandra.db.compaction.Scrubber.scrub!!!org.apache.cassandra.db.marshal.AbstractType>.validate!!!Scrubber.java:209!!!org.apache.cassandra.serializers.MarshalException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java#L298!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.preparedStatement!!!CqlRecordWriter.java:320!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java#L298!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run!!!java.util.concurrent.BlockingQueue>.take!!!CqlRecordWriter.java:303!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java#L312!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run!!!org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.preparedStatement!!!CqlRecordWriter.java:320!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/StorageService.java#L4587!!!org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange!!!org.apache.cassandra.service.StorageService.tryRepairPaxosForTopologyChange!!!StorageService.java:4591!!!java.lang.InterruptedException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/StorageService.java#L4587!!!org.apache.cassandra.service.StorageService.repairPaxosForTopologyChange!!!org.apache.cassandra.service.StorageService.tryRepairPaxosForTopologyChange!!!StorageService.java:4591!!!java.lang.AssertionError
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.CASRequest.makeUpdates!!!Paxos.java:702!!!org.apache.cassandra.exceptions.InvalidRequestException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.CASRequest.appliesTo!!!Paxos.java:669!!!org.apache.cassandra.exceptions.InvalidRequestException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:729!!!org.apache.cassandra.exceptions.RequestFailureException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:729!!!org.apache.cassandra.exceptions.RequestTimeoutException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L651!!!org.apache.cassandra.service.paxos.Paxos.cas!!!org.apache.cassandra.triggers.TriggerExecutor.execute!!!Paxos.java:711!!!org.apache.cassandra.exceptions.InvalidRequestException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.Paxos$Participants.assureSufficientLiveNodes!!!Paxos.java:1049!!!org.apache.cassandra.exceptions.UnavailableException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:992!!!org.apache.cassandra.exceptions.RequestFailureException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.Paxos$MaybeFailure.markAndThrowAsTimeoutOrFailure!!!Paxos.java:992!!!org.apache.cassandra.exceptions.RequestTimeoutException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.paxos.PaxosPrepare.prepare!!!Paxos.java:1013!!!org.apache.cassandra.exceptions.UnavailableException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.reads.ResponseResolver.preprocess!!!Paxos.java:1025!!!java.lang.IllegalArgumentException
+https://github.com/apache/cassandra/tree//f0ad7ea//src/java/org/apache/cassandra/service/paxos/Paxos.java#L953!!!org.apache.cassandra.service.paxos.Paxos.begin!!!org.apache.cassandra.service.reads.ResponseResolver.preprocess!!!Paxos.java:1025!!!java.lang.IllegalStateException
+https://github.com/apache/cassandra/blob/f0ad7eadbeb3208e08a9339881931222fdab253b/src/java/org/apache/cassandra/utils/binlog/ExternalArchiver.java#L86!!!org.apache.cassandra.utils.binlog.ExternalArchiver.ExternalArchiver!!!org.apache.cassandra.utils.binlog.ExternalArchiver.archiveFile!!!ExternalArchiver.java:93!!!java.io.IOException
+https://github.com/apache/cassandra/blob/360128b3eb8f1b19dfc887a60d0678bc1f67703f/src/java/org/apache/cassandra/db/repair/PendingAntiCompaction.java!!!PendingAntiCompaction.AcquisitionCallable.call!!!PendingAntiCompaction.AcquisitionCallable.acquireSSTables!!!PendingAntiCompaction.SSTableAcquisitionException!!!PendingAntiCompaction.SSTableAcquisitionException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ConfigParser.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ConfigParser.class
new file mode 100644
index 00000000..b1aa7e61
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ConfigParser.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ConfigParser.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ConfigParser.java
new file mode 100644
index 00000000..0b71df54
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ConfigParser.java
@@ -0,0 +1,163 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.lang.System;
+import java.nio.file.Paths;
+import java.nio.file.Path;
+import java.util.Arrays;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+
+import org.junit.Assert;
+
+class ConfigParser {
+
+ private static WasabiLogger LOG;
+ private static String configFile;
+
+ private static String wasabiRootDir;
+ private static String retryDataFile;
+ private static String injectionPolicy;
+ private static int maxInjectionCount;
+
+ private static final ArrayList rawRecords = new ArrayList<>();
+ private static final Map injectionPlan = new HashMap<>();
+
+ private static final HashingPrimitives hashingPrimitives = new HashingPrimitives();
+ private static final String[] RETRY_DATA_COLUMN_NAMES = {"Retry location", "Retry caller", "Injection site", "Injection location", "Exception"};
+
+ public ConfigParser(WasabiLogger logger, String configFile) {
+ this.LOG = logger;
+ this.configFile = configFile;
+
+ this.wasabiRootDir = System.getenv("WASABI_ROOT_DIR");
+ if (this.wasabiRootDir == null) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("WASABI_ROOT_DIR environment variable is not set.")
+ );
+ throw new IllegalStateException("[wasabi] WASABI_ROOT_DIR environment variable is not set.");
+ }
+
+ parseConfigFile();
+ parseCodeQLOutput();
+ }
+
+ private void parseConfigFile() {
+ try (BufferedReader br = new BufferedReader(new FileReader(this.configFile))) {
+ String line;
+ while ((line = br.readLine()) != null) {
+ String[] parts = line.split(":");
+ Assert.assertEquals("[wasabi] Invalid line format for <" + line + ">", 2, parts.length);
+
+ String parameter = parts[0].trim();
+ String value = parts[1].replaceAll("\\s+", "").trim();
+ switch (parameter) {
+ case "retry_data_file":
+ try {
+ Path retryDataFilePath = Paths.get(this.wasabiRootDir).resolve(value).normalize();
+ this.retryDataFile = retryDataFilePath.toString();
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("[wasabi] Invalid path: %s/%s", this.wasabiRootDir, value)
+ );
+ e.printStackTrace();
+ throw new IllegalStateException("[wasabi] Invalid path: " + this.wasabiRootDir + "/" + value);
+ }
+ break;
+ case "injection_policy":
+ this.injectionPolicy = value;
+ break;
+ case "max_injection_count":
+ try {
+ this.maxInjectionCount = Integer.parseInt(value);
+ } catch (Exception e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("An exception occurred when parsing line <%s>: %s\n",
+ line, e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ break;
+ }
+ }
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("An exception occurred when parsing the config file: %s\n", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+ }
+
+ private void parseCodeQLOutput() {
+ try (BufferedReader br = new BufferedReader(new FileReader(this.retryDataFile))) {
+ boolean foundHeader = false;
+ String line;
+ while ((line = br.readLine()) != null) {
+ String[] values = line.split("!!!");
+
+ if (!foundHeader && Arrays.equals(RETRY_DATA_COLUMN_NAMES, values)) {
+ foundHeader = true;
+ continue;
+ }
+
+ if (foundHeader) {
+ rawRecords.add(values);
+ }
+ }
+ } catch (IOException e) {
+ this.LOG.printMessage(
+ LOG.LOG_LEVEL_ERROR,
+ String.format("An exception occurred when parsing the retry data file: %s\n", e.getMessage())
+ );
+ e.printStackTrace();
+ }
+
+ for (String[] record : rawRecords) {
+ String retrySourceLocation = record[0];
+ String retryCallerFunction = record[1];
+ String injectionSite = record[2];
+ String injectionLocation = record[3];
+ String retryException = record[4];
+
+ InjectionPoint entry = new InjectionPoint(
+ null,
+ retrySourceLocation,
+ retryCallerFunction,
+ injectionSite,
+ retryException,
+ -1
+ );
+
+ injectionPlan.put(injectionLocation, entry);
+ }
+ }
+
+ public ArrayList<String[]> getRawRecords() {
+ return rawRecords;
+ }
+
+ public Map<String, InjectionPoint> getInjectionPlan() {
+ return Collections.unmodifiableMap(injectionPlan);
+ }
+
+ public int getMaxInjectionCount() {
+ return this.maxInjectionCount;
+ }
+
+ public String getInjectionPolicy() {
+ return this.injectionPolicy;
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ExecutionTrace.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ExecutionTrace.class
new file mode 100644
index 00000000..b684323d
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ExecutionTrace.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java
new file mode 100644
index 00000000..045b7716
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java
@@ -0,0 +1,243 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.Deque;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.Objects;
+
+class OpEntry {
+
+ public static final Integer RETRY_CALLER_OP = 0;
+ public static final Integer THREAD_SLEEP_OP = 1;
+
+ private String opName = "";
+ private Integer opType = RETRY_CALLER_OP;
+ private StackSnapshot stackSnapshot = null;
+ private Long timestamp = 0L;
+ private String exception = null;
+
+ public OpEntry(String opName,
+ Integer opType,
+ Long timestamp,
+ StackSnapshot stackSnapshot) {
+ this.opName = opName;
+ this.opType = opType;
+ this.timestamp = timestamp;
+ this.stackSnapshot = stackSnapshot;
+ this.exception = null;
+ }
+
+ public OpEntry(String opName,
+ Integer opType,
+ Long timestamp,
+ StackSnapshot stackSnapshot,
+ String exception) {
+ this.opName = opName;
+ this.opType = opType;
+ this.timestamp = timestamp;
+ this.stackSnapshot = stackSnapshot;
+ this.exception = exception;
+ }
+
+ public OpEntry(String opName,
+ Integer opType,
+ StackSnapshot stackSnapshot,
+ String exception) {
+ this.opName = opName;
+ this.opType = opType;
+ this.timestamp = 0L;
+ this.stackSnapshot = stackSnapshot;
+ this.exception = exception;
+ }
+
+ public Boolean isOfType(Integer opType) {
+ return Objects.equals(this.opType, opType);
+ }
+
+ public Boolean hasFrame(String target) {
+ return this.stackSnapshot.hasFrame(target);
+ }
+
+ public Boolean isSameOp(OpEntry target) {
+ return (
+ Objects.equals(this.opType, target.opType) &&
+ (this.exception == null || this.exception.equals(target.exception)) &&
+ this.stackSnapshot.isEqual(target.stackSnapshot)
+ );
+ }
+
+ public void printOpEntry(WasabiLogger log) {
+ log.printMessage(WasabiLogger.LOG_LEVEL_WARN,
+ String.format("\n Op type: %s\n Op name: %s\n Timestamp: %d\n Callstack (top):\n%s\n Exception: %s\n",
+ Objects.equals(this.opType, RETRY_CALLER_OP) ? "retry" : "sleep",
+ this.opName,
+ this.timestamp,
+ this.stackSnapshot.serializeTopFrames(5),
+ this.exception
+ )
+ );
+ }
+}
+
+class ExecutionTrace {
+
+ private final Lock mutex = new ReentrantLock();
+ private final int INFINITE_CACHE = -1;
+
+ private ArrayDeque<OpEntry> opCache;
+ private int maxOpCacheSize;
+
+ public ExecutionTrace() {
+ this.opCache = new ArrayDeque<>();
+ this.maxOpCacheSize = this.INFINITE_CACHE;
+ }
+
+ public ExecutionTrace(int maxOpCacheSize) {
+ this.opCache = new ArrayDeque<>();
+ this.maxOpCacheSize = maxOpCacheSize;
+ }
+
+ public Boolean isNullOrEmpty() {
+ mutex.lock();
+ try {
+ return this.opCache == null || this.opCache.isEmpty();
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public int getMaxOpCacheSize() {
+ mutex.lock();
+ try {
+ return this.maxOpCacheSize;
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public int getSize() {
+ mutex.lock();
+ try {
+ return this.opCache.size();
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public void addLast(OpEntry opEntry) {
+ mutex.lock();
+ try {
+ if (this.maxOpCacheSize != this.INFINITE_CACHE && this.opCache.size() >= this.maxOpCacheSize) {
+ this.opCache.removeFirst();
+ }
+ this.opCache.addLast(opEntry);
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public Boolean checkIfOpsAreEqual(int leftIndex, int rightIndex) {
+ mutex.lock();
+ try {
+ if (this.opCache.size() < Math.max(leftIndex, rightIndex)) {
+ return false;
+ }
+
+ OpEntry leftOp = null;
+ OpEntry rightOp = null;
+
+ int index = this.opCache.size() - 1;
+ Iterator<OpEntry> itr = this.opCache.descendingIterator();
+ while (itr.hasNext() && index >= Math.min(leftIndex, rightIndex)) {
+ OpEntry current = itr.next();
+
+ if (index == leftIndex) {
+ leftOp = current;
+ } else if (index == rightIndex) {
+ rightOp = current;
+ }
+
+ --index;
+ }
+
+ return leftOp != null && rightOp != null && leftOp.isSameOp(rightOp);
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public Boolean checkIfOpIsOfType(int targetIndex, int targetOpType) {
+ mutex.lock();
+ try {
+ if (this.opCache.size() < targetIndex) {
+ return false;
+ }
+
+ OpEntry targetOp = null;
+
+ int index = this.opCache.size() - 1;
+ Iterator<OpEntry> itr = this.opCache.descendingIterator();
+ while (itr.hasNext() && index >= targetIndex) {
+ OpEntry current = itr.next();
+
+ if (index == targetIndex) {
+ targetOp = current;
+ }
+
+ --index;
+ }
+
+ return targetOp != null && targetOp.isOfType(targetOpType);
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public Boolean checkIfOpHasFrame(int targetIndex, String targetFrame) {
+ mutex.lock();
+ try {
+ if (this.opCache.size() < targetIndex) {
+ return false;
+ }
+
+ OpEntry targetOp = null;
+
+ int index = this.opCache.size() - 1;
+ Iterator<OpEntry> itr = this.opCache.descendingIterator();
+ while (itr.hasNext() && index >= targetIndex) {
+ OpEntry current = itr.next();
+
+ if (index == targetIndex) {
+ targetOp = current;
+ }
+
+ --index;
+ }
+
+ return targetOp != null && targetOp.hasFrame(targetFrame);
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+
+ public void printExecutionTrace(WasabiLogger log, String msg) {
+ mutex.lock();
+ try {
+ log.printMessage(WasabiLogger.LOG_LEVEL_WARN, String.format("================================ %s", msg));
+ for (OpEntry op : this.opCache) {
+ op.printOpEntry(log);
+ }
+ log.printMessage(WasabiLogger.LOG_LEVEL_WARN, String.format("================================================================\n\n"));
+
+ } finally {
+ mutex.unlock();
+ }
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/HashingPrimitives.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/HashingPrimitives.class
new file mode 100644
index 00000000..55ac342a
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/HashingPrimitives.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java
new file mode 100644
index 00000000..c102a964
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java
@@ -0,0 +1,46 @@
+package edu.uchicago.cs.systems.wasabi;
+
+//import java.nio.charset.StandardCharsets;
+//import org.apache.commons.codec.digest.MurmurHash3;
+import java.util.ArrayList;
+
+class HashingPrimitives {
+ public static int getHashValue(String str1, String str2, String str3) {
+ return 0;
+ /*
+ byte[] bytes1 = str1.getBytes(StandardCharsets.UTF_8);
+ byte[] bytes2 = str2.getBytes(StandardCharsets.UTF_8);
+ byte[] bytes3 = str3.getBytes(StandardCharsets.UTF_8);
+
+ byte[] bytes = new byte[bytes1.length + bytes2.length + bytes3.length];
+
+ System.arraycopy(bytes1, 0, bytes, 0, bytes1.length);
+ System.arraycopy(bytes2, 0, bytes, bytes1.length, bytes2.length);
+ System.arraycopy(bytes3, 0, bytes, bytes1.length + bytes2.length, bytes3.length);
+
+ return MurmurHash3.hash32x86(bytes, 0, bytes.length, 0);
+ */
+ }
+
+ public static int getHashValue(ArrayList<String> arr) {
+ return 0;
+ /*
+ ArrayList<byte[]> byteList = new ArrayList<>();
+ int totalLength = 0;
+ for (String e : arr) {
+ byte[] bytes = e.getBytes(StandardCharsets.UTF_8);
+ byteList.add(bytes);
+ totalLength += bytes.length;
+ }
+
+ byte[] bytes = new byte[totalLength];
+ int offset = 0;
+ for (byte[] b : byteList) {
+ System.arraycopy(b, 0, bytes, offset, b.length);
+ offset += b.length;
+ }
+
+ return MurmurHash3.hash32x86(bytes, 0, bytes.length, 0);
+ */
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectForever.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectForever.class
new file mode 100644
index 00000000..3c09c831
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectForever.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectUpToMaxCount.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectUpToMaxCount.class
new file mode 100644
index 00000000..9f624869
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectUpToMaxCount.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPoint.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPoint.class
new file mode 100644
index 00000000..514aae5b
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPoint.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPoint.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPoint.java
new file mode 100644
index 00000000..43b9138e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPoint.java
@@ -0,0 +1,35 @@
+package edu.uchicago.cs.systems.wasabi;
+
+class InjectionPoint {
+
+ public StackSnapshot stackSnapshot = null;
+ public String retrySourceLocation = null;
+ public String retryCallerFunction = null;
+ public String injectionSite = null;
+ public String retryException = null;
+ public Integer injectionCount = 0;
+
+ public InjectionPoint(StackSnapshot stackSnapshot,
+ String retrySourceLocation,
+ String retryCallerFunction,
+ String injectionSite,
+ String retryException,
+ Integer injectionCount) {
+ this.stackSnapshot = stackSnapshot;
+ this.retrySourceLocation = retrySourceLocation;
+ this.retryCallerFunction = retryCallerFunction;
+ this.injectionSite = injectionSite;
+ this.retryException = retryException;
+ this.injectionCount = injectionCount;
+ }
+
+ public Boolean isEmpty() {
+ return (
+ this.stackSnapshot.isNullOrEmpty() &&
+ this.retrySourceLocation == null &&
+ this.retryCallerFunction == null &&
+ this.injectionSite == null &&
+ this.retryException == null
+ );
+ }
+}
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java
new file mode 100644
index 00000000..aca90d41
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java
@@ -0,0 +1,38 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.Random;
+
+abstract class InjectionPolicy {
+
+ public abstract boolean shouldInject(int injectionCount);
+}
+
+class NoInjection extends InjectionPolicy {
+ @Override
+ public boolean shouldInject(int injectionCount) {
+ return false;
+ }
+}
+
+class InjectForever extends InjectionPolicy {
+ @Override
+ public boolean shouldInject(int injectionCount) {
+ return true;
+ }
+}
+
+class InjectUpToMaxCount extends InjectionPolicy {
+ private int maxInjectionCount = 0;
+
+ InjectUpToMaxCount(int maxInjectionCount) {
+ this.maxInjectionCount = maxInjectionCount;
+ }
+
+ @Override
+ public boolean shouldInject(int injectionCount) {
+ if (injectionCount < this.maxInjectionCount) {
+ return true;
+ }
+ return false;
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPolicy.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPolicy.class
new file mode 100644
index 00000000..40dc94b6
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InjectionPolicy.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InterceptHive.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InterceptHive.class
new file mode 100644
index 00000000..46ea8b46
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/InterceptHive.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/NoInjection.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/NoInjection.class
new file mode 100644
index 00000000..0048a070
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/NoInjection.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/OpEntry.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/OpEntry.class
new file mode 100644
index 00000000..fa634740
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/OpEntry.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/StackSnapshot.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/StackSnapshot.class
new file mode 100644
index 00000000..b1cfbd42
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/StackSnapshot.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/StackSnapshot.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/StackSnapshot.java
new file mode 100644
index 00000000..f384e319
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/StackSnapshot.java
@@ -0,0 +1,107 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.stream.Collectors;
+
+class StackSnapshot {
+ private ArrayList<String> stacktrace;
+
+ public StackSnapshot() {
+ this.stacktrace = new ArrayList<>();
+
+ StackTraceElement[] ste = Thread.currentThread().getStackTrace();
+ for (StackTraceElement frame : ste) {
+ if (!frame.toString().contains("edu.uchicago.cs.systems.wasabi") &&
+ !frame.toString().contains("java.lang.Thread.getStackTrace(Thread.java:")) {
+ this.stacktrace.add(frame.toString());
+ }
+ }
+ }
+
+ public StackSnapshot(ArrayList<String> stacktrace) {
+ this.stacktrace = stacktrace;
+ }
+
+ public int getSize() {
+ return this.stacktrace.size();
+ }
+
+ public Boolean isNullOrEmpty() {
+ return this.stacktrace == null || this.stacktrace.isEmpty();
+ }
+
+ public String toString() {
+ return this.stacktrace.stream().map(frame -> "\t" + frame).collect(Collectors.joining("\n"));
+ }
+
+ public ArrayList<String> getStacktrace() {
+ return this.stacktrace;
+ }
+
+ public String serializeTopFrames(int maxLevel) {
+ ArrayList<String> topOfStack = new ArrayList<>();
+ int level = 0;
+
+ for (String frame : this.stacktrace) {
+ if (++level > maxLevel) {
+ break;
+ }
+ topOfStack.add(frame);
+ }
+
+ return topOfStack.stream().map(frame -> "\t" + frame).collect(Collectors.joining("\n"));
+ }
+
+ public String getFrame(int index) {
+ if (index >= 0 && index < this.stacktrace.size()) {
+ return stacktrace.get(index);
+ }
+ return null;
+ }
+
+ public Boolean hasFrame(String target) {
+ return this.stacktrace.stream().anyMatch(frame -> frame.contains(target));
+ }
+
+ public Boolean isEqual(StackSnapshot target) {
+ if (target.isNullOrEmpty()) {
+ return false;
+ }
+
+ if (this.stacktrace.size() != target.stacktrace.size()) {
+ return false;
+ }
+
+ for (int i = 0; i < this.stacktrace.size(); ++i) {
+ if (!this.stacktrace.get(i).equals(target.stacktrace.get(i))) {
+ return false;
+ }
+ }
+
+ return true;
+ }
+
+ public ArrayList<String> normalizeStackBelowFrame(String target) {
+ ArrayList<String> normalizedStack = new ArrayList<>();
+ Boolean targetFound = false;
+
+ for (String frame : stacktrace) {
+ if (frame.contains(target)) {
+ targetFound = true;
+ normalizedStack.add(target);
+ continue;
+ }
+
+ if (targetFound) {
+ normalizedStack.add(frame);
+ }
+ }
+
+ return normalizedStack;
+ }
+
+ public static String getQualifiedName(String frame) {
+ return frame != null ? frame.split("\\(")[0] : null;
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContext.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContext.class
new file mode 100644
index 00000000..cb4cf565
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContext.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContext.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContext.java
new file mode 100644
index 00000000..4c63a57f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContext.java
@@ -0,0 +1,120 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import java.util.ArrayList;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Collections;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+class WasabiContext {
+
+ private WasabiLogger LOG;
+ private ConfigParser configParser;
+
+ private final HashingPrimitives hashingPrimitives = new HashingPrimitives();
+
+ private Map<String, InjectionPoint> injectionPlan;
+ private InjectionPolicy injectionPolicy;
+
+ private ExecutionTrace executionTrace = new ExecutionTrace(10);
+ private ConcurrentHashMap<Integer, Integer> injectionCounts = new ConcurrentHashMap<>();
+
+ public WasabiContext(WasabiLogger logger,
+ ConfigParser configParser) {
+ this.LOG = logger;
+ this.configParser = configParser;
+
+ int maxInjectionCount = this.configParser.getMaxInjectionCount();
+
+ String injectionPolicyString = this.configParser.getInjectionPolicy();
+ switch (injectionPolicyString) {
+ case "no-injection":
+ injectionPolicy = new NoInjection();
+ break;
+ case "forever":
+ injectionPolicy = new InjectForever();
+ break;
+ case "max-count":
+ injectionPolicy = new InjectUpToMaxCount(maxInjectionCount);
+ break;
+ default:
+ injectionPolicy = new NoInjection();
+ break;
+ }
+
+ injectionPlan = Collections.unmodifiableMap(this.configParser.getInjectionPlan());
+ }
+
+ private Boolean isNullOrEmpty(String str) {
+ return str == null || str.isEmpty();
+ }
+
+ private synchronized int getInjectionCount(ArrayList<String> stacktrace) {
+ int hval = hashingPrimitives.getHashValue(stacktrace);
+ return injectionCounts.getOrDefault(hval, 0);
+ }
+
+ private synchronized int updateInjectionCount(ArrayList<String> stacktrace) {
+ int hval = hashingPrimitives.getHashValue(stacktrace);
+ return injectionCounts.compute(hval, (k, v) -> (v == null) ? 1 : v + 1);
+ }
+
+ public synchronized void addToExecTrace(String opName, int opType, StackSnapshot stackSnapshot) {
+ long currentTime = System.nanoTime();
+ executionTrace.addLast(new OpEntry(opName, opType, currentTime, stackSnapshot));
+ }
+
+ public synchronized void addToExecTrace(String opName, int opType, StackSnapshot stackSnapshot, String retryException) {
+ long currentTime = System.nanoTime();
+ executionTrace.addLast(new OpEntry(opName, opType, currentTime, stackSnapshot, retryException));
+ }
+
+ public synchronized InjectionPoint getInjectionPoint(String testName,
+ String injectionSite,
+ String injectionSourceLocation,
+ String retryException,
+ String retryCallerFunction,
+ StackSnapshot stackSnapshot) {
+
+ if (!injectionPlan.containsKey(injectionSourceLocation)) {
+ return null;
+ }
+
+ String retrySourceLocation = injectionPlan.get(injectionSourceLocation).retrySourceLocation;
+ int injectionCount = getInjectionCount(stackSnapshot.getStacktrace());
+
+ addToExecTrace(injectionSite, OpEntry.RETRY_CALLER_OP, stackSnapshot, retryException);
+
+ return new InjectionPoint(
+ stackSnapshot,
+ retrySourceLocation,
+ retryCallerFunction,
+ injectionSite,
+ retryException,
+ injectionCount
+ );
+ }
+
+ public Boolean shouldInject(InjectionPoint ipt) {
+ if (injectionPolicy.shouldInject(ipt.injectionCount)) {
+ ipt.injectionCount = updateInjectionCount(ipt.stackSnapshot.getStacktrace());
+ return true;
+ }
+
+ return false;
+ }
+
+ public void printExecTrace(WasabiLogger log, String msg) {
+ if (executionTrace.getSize() > 0) {
+ executionTrace.printExecutionTrace(log, msg);
+ }
+ }
+
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.class
new file mode 100644
index 00000000..8af3ea8f
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java
new file mode 100644
index 00000000..cf900b3c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java
@@ -0,0 +1,7 @@
+package edu.uchicago.cs.systems.wasabi;
+
+// A simple interface for runnables that can hold a WasabiContext object
+public interface WasabiContextHolder {
+ public void setWasabiContext(WasabiContext ctx);
+ public WasabiContext getWasabiContext();
+ }
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiLogger.class b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiLogger.class
new file mode 100644
index 00000000..a21a7655
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiLogger.class differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiLogger.java b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiLogger.java
new file mode 100644
index 00000000..ae6e3cca
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/edu/uchicago/cs/systems/wasabi/WasabiLogger.java
@@ -0,0 +1,33 @@
+package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+class WasabiLogger {
+ private final Logger LOG = LoggerFactory.getLogger(WasabiLogger.class);
+
+ public static final int LOG_LEVEL_INFO = 1;
+ public static final int LOG_LEVEL_WARN = 2;
+ public static final int LOG_LEVEL_DEBUG = 3;
+ public static final int LOG_LEVEL_ERROR = 4;
+
+ public synchronized void printMessage(int logLevel, String msg) {
+ long timestamp = System.nanoTime();
+ long threadId = Thread.currentThread().getId();
+
+ switch(logLevel) {
+ case LOG_LEVEL_INFO:
+ LOG.info("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ case LOG_LEVEL_WARN:
+ LOG.warn("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ case LOG_LEVEL_DEBUG:
+ LOG.debug("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ case LOG_LEVEL_ERROR:
+ LOG.error("[wasabi] [" + timestamp + "] [thread=" + threadId + "] " + msg);
+ break;
+ }
+ }
+}
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/elasticsearch/elasticsearch_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/elasticsearch/elasticsearch_retry_locations.data
new file mode 100644
index 00000000..32701aa0
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/elasticsearch/elasticsearch_retry_locations.data
@@ -0,0 +1,50 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/indices/IndicesService.java#L1205!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.env.NodeEnvironment.deleteIndexDirectoryUnderLock!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/indices/IndicesService.java#L1205!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.indices.IndicesService.deleteShardStore!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/notification/email/attachment/ReportingAttachmentParser.java#L179!!!org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.toAttachment!!!org.elasticsearch.xpack.watcher.common.http.HttpClient.execute!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/notification/email/attachment/ReportingAttachmentParser.java#L179!!!org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.toAttachment!!!org.elasticsearch.xpack.watcher.notification.email.attachment.ReportingAttachmentParser.sleep!!!N/A!!!org.elasticsearch.ElasticsearchException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/cluster/coordination/ClusterBootstrapService.java#L263!!!org.elasticsearch.cluster.coordination.ClusterBootstrapService.doBootstrap!!!java.util.function.Consumer.accept!!!N/A!!!java.lang.Exception
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/index/IndexService.java#L617!!!org.elasticsearch.index.IndexService.onShardClose!!!beforeIndexShardDeleted!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/1b84ea742143874a966d06daa373b31c9e99822f/server/src/main/java/org/elasticsearch/gateway/PersistedClusterStateService.java#L1289!!!org.elasticsearch.gateway.PersistedClusterStateService.completeCommit!!!org.elasticsearch.gateway.PersistedClusterStateService.commit!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/indices/IndicesService.java#L1346!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.env.NodeEnvironment.deleteIndexDirectoryUnderLock!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/indices/IndicesService.java!!!org.elasticsearch.indices.IndicesService.processPendingDeletes!!!org.elasticsearch.indices.IndicesService.deleteShardStore!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java#L330!!!org.elasticsearch.common.blobstore.fs.FsBlobContainer.moveBlobAtomic!!!java.nio.file.Files.move!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/server/src/main/java/org/elasticsearch/common/file/AbstractFileWatchingService.java#L271!!!org.elasticsearch.common.file.AbstractFileWatchingService.enableDirectoryWatcher!!!org.elasticsearch.monitor.fs.FsInfo.Path.register!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/blob/832e48e87faa547a29aafc008f4d4fc2820a3074/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/CommandLineHttpClient.java#L271!!!org.elasticsearch.xpack.core.security.CommandLineHttpClient.checkClusterHealthWithRetriesWaitingForCluster!!!org.elasticsearch.xpack.core.security.CommandLineHttpClient.execute!!!N/A!!!java.lang.Exception
+https://github.com/elastic/elasticsearch/tree//7556157//plugins/repository-gcs/src/main/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageBlobStore.java#L257!!!org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobResumable!!!org.elasticsearch.core.internal.io.Streams.copy!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//plugins/repository-gcs/src/main/java/org/elasticsearch/repositories/gcs/GoogleCloudStorageBlobStore.java#L257!!!org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobResumable!!!org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.delete.DeleteRequest.setIfPrimaryTerm!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.delete.DeleteRequest.setIfSeqNo!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.index.IndexRequest.setIfPrimaryTerm!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.index.IndexRequest.setIfSeqNo!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.update.UpdateRequest.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.update.UpdateRequest.setIfPrimaryTerm!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.update.UpdateRequest.setIfSeqNo!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.booleanValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.longValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.intValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.text!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.currentName!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.nextToken!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.index.VersionType.fromString!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent!!!N/A!!!org.elasticsearch.common.ParsingException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.bulk.BulkRequestParser.createParser!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L117!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.action.bulk.BulkRequestParser.findNextMarker!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.booleanValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.longValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.intValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.text!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.currentName!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.common.xcontent.XContentParser.nextToken!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.index.VersionType.fromString!!!N/A!!!java.lang.IllegalArgumentException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/action/bulk/BulkRequestParser.java#L166!!!org.elasticsearch.action.bulk.BulkRequestParser.parse!!!org.elasticsearch.search.fetch.subphase.FetchSourceContext.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.floatValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.longValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.intValue!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.text!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.currentName!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.skipChildren!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.XContentParser.nextToken!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.index.reindex.BulkByScrollTask$StatusOrException.fromXContent!!!N/A!!!java.io.IOException
+https://github.com/elastic/elasticsearch/tree//7556157//server/src/main/java/org/elasticsearch/index/reindex/BulkByScrollTask.java#L614!!!org.elasticsearch.index.reindex.BulkByScrollTask$Status.innerFromXContent!!!org.elasticsearch.common.xcontent.ConstructingObjectParser,Void>.parse!!!N/A!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/example_hdfs.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/example_hdfs.conf
new file mode 100644
index 00000000..d7a10a5a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/example_hdfs.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/example_hdfs.data
+injection_policy: max-count
+max_injection_count: 97
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/example_hdfs.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/example_hdfs.data
new file mode 100644
index 00000000..35784eee
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/example_hdfs.data
@@ -0,0 +1,3 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:245!!!java.io.IOException
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSStripedInputStream.java:256!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/example.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/example.conf
new file mode 100644
index 00000000..3567cd66
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/example.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/example.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/example.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/example.data
new file mode 100644
index 00000000..35784eee
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/example.data
@@ -0,0 +1,3 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:245!!!java.io.IOException
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSStripedInputStream.java:256!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop.conf
new file mode 100644
index 00000000..be986fa6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop.conf
@@ -0,0 +1,3 @@
+retry_data_file: /home/bastoica/projects/current/wasabi/tool/config/hadoop/hadoop_retry_locations.data
+injection_policy: max-count
+max_injection_count: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_retry_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_retry_bounds.data
new file mode 100644
index 00000000..3a333dd1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_retry_bounds.data
@@ -0,0 +1,195 @@
+Var name!!!Assigned value!!!Assign method!!!Test class
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestAMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationCleanup
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestApplicationMasterLauncher
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancer
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancerService
+MAX_ATTEMPTS_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancerService
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBalancerWithHANameNodes
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockRecovery
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockRecovery2
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockTokenWithDFS
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestBlockTokenWithShortCircuitRead
+RM_AM_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCapacityScheduler
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCapacitySchedulerApps
+RM_AM_MAX_ATTEMPTS!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestCapacitySchedulerSurgicalPreemption
+DFS_NAMENODE_CHECKPOINT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCheckpoint
+MR_CLIENT_JOB_MAX_RETRIES!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCLI
+MR_CLIENT_JOB_MAX_RETRIES!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestCLI
+LOCATEFOLLOWINGBLOCK_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestClientProtocolForPipelineRecovery
+CLIENT_FAILOVER_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setLong!!!TestClientRMProxy
+MR_CLIENT_MAX_RETRIES!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestClientServiceDelegate
+MR_CLIENT_MAX_RETRIES!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestClientServiceDelegate
+MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestConsistentReadsObserver
+MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestConsistentReadsObserver
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestContainerResizing
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestContainerResourceUsage
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDataNodeMetricsLogger
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDatanodeProtocolRetryPolicy
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDataNodeReconfiguration
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokenRenewer
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDelegationTokensWithHA
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdmin
+MAX_ATTEMPTS_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdminWithHA
+MAX_ATTEMPTS_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdminWithHA
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSAdminWithHA
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSInotifyEventInputStreamKerberized
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDFSStripedOutputStreamWithFailure
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDSTimelineV10
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestDSTimelineV10
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+DFS_HA_TAILEDITS_ALL_NAMESNODES_RETRY_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestEditLogTailer
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestExternalStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestExternalStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestExternalStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestExternalStoragePolicySatisfier
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+REDUCE_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+MAP_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFail
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFileAppend4
+FILEOUTPUTCOMMITTER_FAILURE_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFileOutputCommitter
+FS_RM_STATE_STORE_NUM_RETRIES!!!8!!!org.apache.hadoop.conf.Configuration.setInt!!!TestFSRMStateStore
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestHealthMonitor
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestIPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestIPC
+MR_CLIENT_JOB_MAX_RETRIES!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestJobClients
+MR_JOB_END_RETRY_ATTEMPTS!!!"0"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"0"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"0"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"10"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"10"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"20"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"3"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_NOTIFICATION_MAX_ATTEMPTS!!!"3"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MR_JOB_END_RETRY_ATTEMPTS!!!"3"!!!org.apache.hadoop.conf.Configuration.set!!!TestJobEndNotifier
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestJobImpl
+MR_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestJobImpl
+AUTH_RETRY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestKMS
+LDAP_NUM_ATTEMPTS_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLdapGroupsMappingWithBindUserSwitch
+LDAP_NUM_ATTEMPTS_KEY!!!"1"!!!org.apache.hadoop.conf.Configuration.set!!!TestLdapGroupsMappingWithOneQuery
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!3!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!4!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+KMS_CLIENT_FAILOVER_MAX_RETRIES_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestLoadBalancingKMSClientProvider
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!10000!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!10000!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+DFS_MOVER_RETRY_MAX_ATTEMPTS_KEY!!!"2"!!!org.apache.hadoop.conf.Configuration.set!!!TestMover
+MAP_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMRJobs
+MAP_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestMRJobs
+LOCATEFOLLOWINGBLOCK_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNamenodeCapacityReport
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNMProxy
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNMProxy
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNNStartupWhenViewFSOverloadSchemeEnabled
+RM_AM_MAX_ATTEMPTS!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNodeBlacklistingOnAMFailures
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestNodeStatusUpdater
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setTimeDuration!!!TestObserverNode
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!5000!!!org.apache.hadoop.conf.Configuration.setLong!!!TestObserverNode
+OBSERVER_PROBE_RETRY_PERIOD_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setTimeDuration!!!TestObserverReadProxyProvider
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!20!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPersistentStoragePolicySatisfier
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!20!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPersistentStoragePolicySatisfier
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+RM_PLACEMENT_CONSTRAINTS_RETRY_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestPlacementProcessor
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestQJMWithFaults
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestQuorumJournalManager
+REDUCE_MAX_ATTEMPTS!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestReduceFetchFromPartialMem
+MAP_MAX_ATTEMPTS!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestReduceFetchFromPartialMem
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRM
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMContainerImpl
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMContainerImpl
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!40!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRMWebServicesAppAttempts
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRollingFileSystemSinkWithSecureHdfs
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRouterRPCClientRetries
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPC
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestRPCServerShutdown
+SPECULATIVE_RETRY_AFTER_SPECULATE!!!5000L!!!org.apache.hadoop.conf.Configuration.setLong!!!TestRuntimeEstimators
+SPECULATIVE_RETRY_AFTER_NO_SPECULATE!!!500L!!!org.apache.hadoop.conf.Configuration.setLong!!!TestRuntimeEstimators
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSecureEncryptionZoneWithKMS
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSecureNNWithQJM
+MAX_ATTEMPTS_KEY!!!128!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSeveralNameNodes
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestSpaceReservation
+SPECULATIVE_RETRY_AFTER_NO_SPECULATE!!!3000L!!!org.apache.hadoop.conf.Configuration.setLong!!!TestSpeculativeExecutionWithMRApp
+RETRY_LIMIT!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestStagingCommitter
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!30!!!org.apache.hadoop.conf.Configuration.setInt!!!TestStoragePolicySatisfierWithStripedFile
+DFS_STORAGE_POLICY_SATISFIER_MAX_RETRY_ATTEMPTS_KEY!!!30!!!org.apache.hadoop.conf.Configuration.setInt!!!TestStoragePolicySatisfierWithStripedFile
+DFS_STORAGE_POLICY_SATISFIER_SELF_RETRY_TIMEOUT_MILLIS_KEY!!!"5000"!!!org.apache.hadoop.conf.Configuration.set!!!TestStoragePolicySatisfierWithStripedFile
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineAuthenticationFilterForV1
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!-2!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineClient
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!0!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineClient
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+TIMELINE_SERVICE_CLIENT_MAX_RETRIES!!!5!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTimelineCollector
+IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY!!!10!!!org.apache.hadoop.conf.Configuration.setInt!!!TestTrashWithSecureEncryptionZones
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewDistributedFileSystemWithMountLinks
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemHdfs
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemOverloadSchemeWithDFSAdmin
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemOverloadSchemeWithFSCommands
+IPC_CLIENT_CONNECT_MAX_RETRIES_KEY!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestViewFileSystemOverloadSchemeWithHdfsScheme
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+RM_AM_MAX_ATTEMPTS!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestWorkPreservingRMRestart
+HA_FC_ELECTOR_ZK_OP_RETRIES_KEY!!!100!!!org.apache.hadoop.conf.Configuration.setInt!!!TestZKFailoverControllerStress
+ZK_NUM_RETRIES!!!1!!!org.apache.hadoop.conf.Configuration.setInt!!!TestZKRMStateStoreZKClientConnections
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_retry_locations.data
new file mode 100644
index 00000000..4e970b83
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_retry_locations.data
@@ -0,0 +1,202 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java#L151!!!org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!TrashPolicyDefault.java:161!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java#L151!!!org.apache.hadoop.fs.TrashPolicyDefault.run!!!deleteCheckpoint!!!TrashPolicyDefault.java:303!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java#L151!!!org.apache.hadoop.fs.TrashPolicyDefault.run!!!createCheckpoint!!!TrashPolicyDefault.java:304!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthMonitor.java#L170!!!org.apache.hadoop.ha.HealthMonitor.tryConnect!!!org.apache.hadoop.ha.HealthMonitor.createProxy!!!HealthMonitor.java:175!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java#L89!!!org.apache.hadoop.io.retry.RetryInvocationHandler.invokeOnce!!!invoke!!!RetryInvocationHandler.java:100!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DiskChecker.java#L262!!!org.apache.hadoop.util.DiskChecker.doDiskIo!!!org.apache.hadoop.util.DiskChecker.diskIoCheckWithoutNativeIo!!!DiskChecker.java:262!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java#L96!!!org.apache.hadoop.hdfs.protocol.CacheDirectiveIterator.makeRequest!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.listCacheDirectives!!!CacheDirectiveIterator.java:97!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java#L172!!!org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant.chooseEvenlyFromRemainingRacks!!!org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant.chooseOnce!!!BlockPlacementPolicyRackFaultTolerant.java:187!!!NotEnoughReplicasException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java#L64!!!org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup.chooseFavouredNodes!!!chooseRandom!!!BlockPlacementPolicyWithNodeGroup.java:91!!!NotEnoughReplicasException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalStoragePolicySatisfier.java#L111!!!org.apache.hadoop.hdfs.server.sps.ExternalStoragePolicySatisfier.getNameNodeConnector!!!newNameNodeConnectors!!!ExternalStoragePolicySatisfier.java:114!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java#L106!!!org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator.heartbeat!!!org.apache.hadoop.yarn.api.ApplicationMasterProtocol.allocate!!!LocalContainerAllocator.java:113!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/stages/CreateOutputDirectoriesStage.java#L299!!!CreateOutputDirectoriesStage.maybeCreateOneDirectory!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!CreateOutputDirectoriesStage.java:305!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java#L347!!!org.apache.hadoop.yarn.server.AMRMClientRelayer.allocate!!!org.apache.hadoop.yarn.server.AMRMClientRelayer.reRegisterApplicationMaster!!!AMRMClientRelayer.java:386!!!java.io.IOException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/algorithm/DefaultPlacementAlgorithm.java#L153!!!org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm.DefaultPlacementAlgorithm.doPlacement!!!org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm.DefaultPlacementAlgorithm.attemptPlacementOnNode!!!DefaultPlacementAlgorithm.java:162!!!InvalidAllocationTagsQueryException
+https://github.com/apache/hadoop/blob/ee7d178/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java#L382!!!org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.run!!!doAs!!!DelegationTokenRenewer.java:391!!!java.io.IOException
+https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java#L617!!!org.apache.hadoop.hdfs.DFSClient.renewLease!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.renewLease!!!DFSClient.java:618!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNativeFileSystemStore.java#L692!!!org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.callCOSClientWithRetry!!!org.apache.hadoop.fs.azure.StorageInterface$CloudBlockBlobWrapper.commitBlockList!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java#L85!!!org.apache.hadoop.fs.cosn.CosNFileReadTask.run!!!org.apache.hadoop.fs.cosn.NativeFileSystemStore.retrieveBlock!!!CosNFileReadTask.java:87!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java#L85!!!org.apache.hadoop.fs.cosn.CosNFileReadTask.run!!!org.apache.hadoop.io.IOUtils.readFully!!!CosNFileReadTask.java:89!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java#L85!!!org.apache.hadoop.fs.cosn.CosNFileReadTask.run!!!java.io.InputStream.close!!!CosNFileReadTask.java:91!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSCommonUtils.java#L891!!!org.apache.hadoop.fs.obs.OBSCommonUtils.isFolderEmpty!!!org.apache.hadoop.fs.obs.OBSCommonUtils.innerIsFolderEmpty!!!OBSCommonUtils.java:893!!!java.io.FileNotFoundException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java#L1214!!!org.apache.hadoop.fs.obs.OBSFileSystem.getFileStatus!!!org.apache.hadoop.fs.obs.OBSFileSystem.innerGetFileStatus!!!OBSFileSystem.java:1217!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L373!!!org.apache.hadoop.fs.obs.OBSInputStream.lazySeek!!!org.apache.hadoop.fs.obs.OBSInputStream.seekInStream!!!OBSInputStream.java:376!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L373!!!org.apache.hadoop.fs.obs.OBSInputStream.lazySeek!!!org.apache.hadoop.fs.obs.OBSInputStream.reopen!!!OBSInputStream.java:380!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L457!!!org.apache.hadoop.fs.obs.OBSInputStream.read!!!java.io.InputStream.read!!!OBSInputStream.java:459!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L526!!!org.apache.hadoop.fs.obs.OBSInputStream.onReadFailure!!!org.apache.hadoop.fs.obs.OBSInputStream.reopen!!!OBSInputStream.java:528!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L577!!!org.apache.hadoop.fs.obs.OBSInputStream.read!!!org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L687!!!org.apache.hadoop.fs.obs.OBSInputStream.read!!!org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java#L970!!!org.apache.hadoop.fs.obs.OBSInputStream.randomReadWithNewInputStream!!!org.apache.hadoop.fs.obs.OBSInputStream.tryToReadFromInputStream!!!OBSInputStream.java:976!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L479!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.createEmptyObject!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.innerCreateEmptyObject!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L542!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.copyFile!!!org.apache.hadoop.fs.obs.OBSObjectBucketUtils.innerCopyFile!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java#L182!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameWithRetry!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameFile!!!N/A!!!java.io.FileNotFoundException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java#L182!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameWithRetry!!!org.apache.hadoop.fs.obs.OBSPosixBucketUtils.innerFsRenameFile!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L173!!!org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp!!!org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$ProviderCallable.call!!!LoadBalancingKMSClientProvider.java:176!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java#L301!!!org.apache.hadoop.fs.FSInputChecker.readChecksumChunk!!!org.apache.hadoop.fs.FSInputChecker.readChunk!!!FSInputChecker.java:305!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/CachingBlockManager.java#L149!!!org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get!!!org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.getInternal!!!CachingBlockManager.java:160!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/CachingBlockManager.java#L149!!!org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get!!!org.apache.hadoop.fs.impl.prefetch.BufferPool.acquire!!!N/A!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java#L1126!!!org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries!!!org.apache.hadoop.ha.ActiveStandbyElector$ZKAction.run!!!ActiveStandbyElector.java:1150!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java#L853!!!org.apache.hadoop.ha.ActiveStandbyElector.reEstablishSession!!!org.apache.hadoop.ha.ActiveStandbyElector.createConnection!!!ActiveStandbyElector.java:880!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!javax.net.SocketFactory.createSocket!!!Client.java:625!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setTcpNoDelay!!!Client.java:626!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setKeepAlive!!!Client.java:627!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setTrafficClass!!!Client.java:639!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!org.apache.hadoop.net.NetUtils.getLocalInetAddress!!!Client.java:656!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setReuseAddress!!!Client.java:658!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.bind!!!Client.java:663!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!Client.java:668!!!java.net.ConnectException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L614!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!java.net.Socket.setSoTimeout!!!Client.java:669!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!Client.java:789!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.writeConnectionHeader!!!Client.java:791!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!Client.java:795!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$IpcStreams.setSaslClient!!!Client.java:818!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L790!!!org.apache.hadoop.ipc.Client$Connection.setupIOstreams!!!org.apache.hadoop.ipc.Client$Connection.writeConnectionContext!!!Client.java:831!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L419!!!org.apache.hadoop.ipc.RPC.waitForProtocolProxy!!!org.apache.hadoop.ipc.RPC.getProtocolProxy!!!RPC.java:421!!!java.net.ConnectException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L419!!!org.apache.hadoop.ipc.RPC.waitForProtocolProxy!!!org.apache.hadoop.security.UserGroupInformation.getCurrentUser!!!RPC.java:422!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L967!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.run!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.relogin!!!UserGroupInformation.java:986!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!RpcHeaderProtos.java:1835!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum!!!RpcHeaderProtos.java:1841!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readSInt32!!!RpcHeaderProtos.java:1866!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes!!!RpcHeaderProtos.java:1871!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readMessage!!!RpcHeaderProtos.java:1884!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64!!!RpcHeaderProtos.java:1907!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L1834!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.RpcRequestHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!RpcHeaderProtos.java:1916!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!RpcHeaderProtos.java:3785!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readUInt32!!!RpcHeaderProtos.java:3792!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum!!!RpcHeaderProtos.java:3796!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes!!!RpcHeaderProtos.java:3831!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readSInt32!!!RpcHeaderProtos.java:3843!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64!!!RpcHeaderProtos.java:3848!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java#L3784!!!org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.RpcResponseHeaderProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!RpcHeaderProtos.java:3857!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1550!!!org.apache.hadoop.hdfs.DataStreamer.transfer!!!org.apache.hadoop.hdfs.DataStreamer$StreamerStreams.sendTransferBlock!!!DataStreamer.java:1594!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline!!!DataStreamer.java:1868!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getOutputStream!!!DataStreamer.java:1872!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getInputStream!!!DataStreamer.java:1873!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend!!!DataStreamer.java:1874!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage.getRecoveryStage!!!DataStreamer.java:1887!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.Sender.writeBlock!!!DataStreamer.java:1896!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$BlockOpResponseProto.parseFrom!!!DataStreamer.java:1904!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed!!!DataStreamer.java:1905!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed!!!DataStreamer.java:1905!!!java.io.EOFException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus!!!DataStreamer.java:1921!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1156!!!org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:1218!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1156!!!org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode!!!org.apache.hadoop.fs.ByteBufferReadable.read!!!DFSInputStream.java:1229!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L235!!!org.apache.hadoop.hdfs.DFSInputStream.openInfo!!!org.apache.hadoop.hdfs.DFSInputStream.fetchAndCheckLocatedBlocks!!!DFSInputStream.java:238!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L235!!!org.apache.hadoop.hdfs.DFSInputStream.openInfo!!!org.apache.hadoop.hdfs.DFSInputStream.getLastBlockLength!!!DFSInputStream.java:243!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L344!!!org.apache.hadoop.hdfs.DFSInputStream.readBlockLength!!!org.apache.hadoop.hdfs.DFSUtilClient.createClientDatanodeProtocolProxy!!!DFSInputStream.java:348!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L344!!!org.apache.hadoop.hdfs.DFSInputStream.readBlockLength!!!org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength!!!DFSInputStream.java:352!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockAt!!!DFSInputStream.java:627!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode!!!DFSInputStream.java:637!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:645!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L786!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!org.apache.hadoop.hdfs.ReaderStrategy.readFromBlock!!!DFSInputStream.java:790!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L786!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!org.apache.hadoop.hdfs.DFSInputStream.seekToBlockSource!!!DFSInputStream.java:820!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L786!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!org.apache.hadoop.hdfs.DFSInputStream.seekToNewSource!!!DFSInputStream.java:824!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L840!!!org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!DFSInputStream.java:879!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L840!!!org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy!!!org.apache.hadoop.hdfs.DFSInputStream.readBuffer!!!DFSInputStream.java:889!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L1141!!!org.apache.hadoop.hdfs.DFSOutputStream.addBlock!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock!!!DFSOutputStream.java:1148!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L291!!!org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.create!!!DFSOutputStream.java:294!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L990!!!org.apache.hadoop.hdfs.DFSOutputStream.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!DFSOutputStream.java:997!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:247!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSStripedInputStream.java:258!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L521!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.checksumBlock!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.tryDatanode!!!FileChecksumHelper.java:523!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L653!!!org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup!!!org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode!!!FileChecksumHelper.java:655!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java#L431!!!org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider$ObserverReadInvocationHandler.invoke!!!java.lang.reflect.Method.invoke!!!ObserverReadProxyProvider.java:543!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.hdfs.protocol.datatransfer.Sender.releaseShortCircuitFds!!!ShortCircuitCache.java:209!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$ReleaseShortCircuitAccessResponseProto.parseFrom!!!ShortCircuitCache.java:214!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed!!!ShortCircuitCache.java:214!!!java.io.EOFException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java#L197!!!org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run!!!org.apache.hadoop.net.unix.DomainSocket.connect!!!UserGroupInformation.java:986!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L824!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getUrl!!!WebHdfsFileSystem.java:827!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L824!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect!!!WebHdfsFileSystem.java:829!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L824!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry!!!org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse!!!WebHdfsFileSystem.java:830!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L885!!!org.apache.hadoop.hdfs.server.balancer.Balancer.run!!!org.apache.hadoop.hdfs.server.balancer.Balancer.doBalance!!!Balancer.java:887!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java#L880!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake!!!BPServiceActor.java:893!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L226!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.net.PeerServer.accept!!!DataXceiverServer.java:242!!!java.net.SocketTimeoutException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L226!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.server.datanode.DataXceiver.create!!!DataXceiverServer.java:253!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java#L163!!!org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl$ProvidedBlockPoolSlice.fetchVolumeMap!!!org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.getReader!!!ProvidedVolumeImpl.java:165!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L4632!!!org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run!!!org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles!!!FSNamesystem.java:4671!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java#L609!!!org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.getActiveNodeProxy!!!org.apache.hadoop.ipc.RPC.waitForProxy!!!EditLogTailer.java:632!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java#L609!!!org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.getActiveNodeProxy!!!org.apache.hadoop.ipc.RPC.getProtocolVersion!!!EditLogTailer.java:633!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java#L328!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler.run!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler$ReencryptionPendingInodeIdCollector.checkPauseForTesting!!!ReencryptionHandler.java:333!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java#L436!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks!!!org.apache.hadoop.util.StopWatch.start!!!ReencryptionUpdater.java:439!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java#L436!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.processTask!!!ReencryptionUpdater.java:440!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab!!!SecondaryNameNode.java:353!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.security.UserGroupInformation.getCurrentUser!!!SecondaryNameNode.java:353!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount!!!SecondaryNameNode.java:358!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint!!!SecondaryNameNode.java:360!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java#L238!!!org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded$SPSPathIdProcessor.run!!!org.apache.hadoop.hdfs.server.namenode.sps.Context.scanAndCollectFiles!!!BlockStorageMovementNeeded.java:249!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/BlockStorageMovementNeeded.java#L238!!!org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded$SPSPathIdProcessor.run!!!org.apache.hadoop.hdfs.server.namenode.sps.Context.removeSPSHint!!!BlockStorageMovementNeeded.java:256!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L217!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run!!!org.apache.hadoop.hdfs.server.namenode.sps.BlockStorageMovementNeeded.removeItemTrackInfo!!!StoragePolicySatisfier.java:235!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L217!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run!!!org.apache.hadoop.hdfs.server.namenode.sps.Context.getFileInfo!!!StoragePolicySatisfier.java:243!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L217!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.run!!!org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN!!!StoragePolicySatisfier.java:255!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java#L203!!!org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock!!!org.apache.hadoop.hdfs.server.balancer.KeyManager.getAccessToken!!!ExternalSPSBlockMoveTaskHandler.java:206!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java#L203!!!org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock!!!org.apache.hadoop.hdfs.server.common.sps.BlockDispatcher.moveBlock!!!ExternalSPSBlockMoveTaskHandler.java:209!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java#L379!!!org.apache.hadoop.hdfs.tools.DebugAdmin$RecoverLeaseCommand.run!!!org.apache.hadoop.hdfs.DistributedFileSystem.recoverLease!!!DebugAdmin.java:384!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java#L135!!!org.apache.hadoop.mapred.YarnChild.main!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.getTask!!!YarnChild.java:140!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java#L633!!!org.apache.hadoop.mapred.JobClient.getJob!!!org.apache.hadoop.mapred.JobClient.getJobInner!!!JobClient.java:639!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobEndNotifier.java#L87!!!org.apache.hadoop.mapred.JobEndNotifier.localRunnerNotification!!!org.apache.hadoop.mapred.JobEndNotifier.httpNotification!!!JobEndNotifier.java:89!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1251!!!org.apache.hadoop.mapred.Task.done!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending!!!Task.java:1253!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1397!!!org.apache.hadoop.mapred.Task.commit!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.canCommit!!!Task.java:1399!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L860!!!org.apache.hadoop.mapred.Task$TaskReporter.run!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:885!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L860!!!org.apache.hadoop.mapred.Task$TaskReporter.run!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:891!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java#L375!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal!!!FileOutputCommitter.java:377!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java#L64!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.run!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.getMapCompletionEvents!!!EventFetcher.java:66!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java#L343!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput!!!Fetcher.java:346!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java#L410!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.setupConnectionsWithRetry!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.openConnection!!!Fetcher.java:413!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java#L713!!!org.apache.hadoop.mapreduce.task.reduce.Fetcher.connect!!!java.net.URLConnection.connect!!!Fetcher.java:717!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java#L662!!!org.apache.hadoop.mapreduce.tools.CLI.getJob!!!org.apache.hadoop.mapreduce.Cluster.getJob!!!CLI.java:660!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java#L662!!!org.apache.hadoop.mapreduce.tools.CLI.getJob!!!org.apache.hadoop.mapreduce.Cluster.getJob!!!CLI.java:670!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java#L322!!!org.apache.hadoop.mapred.ClientServiceDelegate.invoke!!!org.apache.hadoop.mapred.ClientServiceDelegate.getProxy!!!ClientServiceDelegate.java:325!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java#L322!!!org.apache.hadoop.mapred.ClientServiceDelegate.invoke!!!java.lang.reflect.Method.invoke!!!ClientServiceDelegate.java:326!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java#L72!!!org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileReaderTask.run!!!org.apache.hadoop.io.IOUtils.readFully!!!AliyunOSSFileReaderTask.java:75!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java#L462!!!org.apache.hadoop.fs.s3a.Invoker.retryUntranslated!!!org.apache.hadoop.util.functional.CallableRaisingIOE.apply!!!Invoker.java:468!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java#L716!!!org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockRequestInternal!!!org.apache.hadoop.fs.azure.StorageInterface$CloudBlockBlobWrapper.uploadBlock!!!BlockBlobAppendStream.java:720!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java#L782!!!org.apache.hadoop.fs.azure.BlockBlobAppendStream.writeBlockListRequestInternal!!!org.apache.hadoop.fs.azure.StorageInterface$CloudBlockBlobWrapper.commitBlockList!!!BlockBlobAppendStream.java:787!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.getHttpRequest!!!WasbRemoteCallHelper.java:148!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!org.apache.http.client.HttpClient.execute!!!WasbRemoteCallHelper.java:151!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!org.apache.http.HttpEntity.getContent!!!WasbRemoteCallHelper.java:203!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java#L129!!!org.apache.hadoop.fs.azure.WasbRemoteCallHelper.retryableRequest!!!java.io.BufferedReader.readLine!!!WasbRemoteCallHelper.java:206!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java#L303!!!org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenCall!!!org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.getTokenSingleCall!!!AzureADAuthenticator.java:307!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java#L72!!!org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter.refreshToken!!!org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee.getAccessToken!!!CustomTokenProviderAdapter.java:75!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.util.ProducerConsumer.take!!!SimpleCopyListing.java:750!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus!!!SimpleCopyListing.java:757!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.SimpleCopyListing.addToFileListing!!!SimpleCopyListing.java:765!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L722!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing!!!SimpleCopyListing.java:768!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java#L85!!!org.apache.hadoop.tools.util.RetriableCommand.execute!!!org.apache.hadoop.tools.util.RetriableCommand.doExecute!!!RetriableCommand.java:87!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L235!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties!!!org.apache.hadoop.fs.FileSystem.open!!!DynoInfraUtils.java:237!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L235!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties!!!org.apache.hadoop.fs.Path.getFileSystem!!!DynoInfraUtils.java:237!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L235!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForAndGetNameNodeProperties!!!java.util.Properties.load!!!DynoInfraUtils.java:239!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java#L458!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.waitForNameNodeJMXValue!!!org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchNameNodeJMXValue!!!DynoInfraUtils.java:460!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!YarnProtos.java:88317!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readMessage!!!YarnProtos.java:88328!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readBytes!!!YarnProtos.java:88333!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L83812!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.ContainerLaunchContextProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!YarnProtos.java:88391!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readTag!!!YarnProtos.java:92413!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readEnum!!!YarnProtos.java:92419!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt32!!!YarnProtos.java:92435!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readInt64!!!YarnProtos.java:92435!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.CodedInputStream.readRawVarint32!!!YarnProtos.java:92463!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/generated-sources/java/org/apache/hadoop/yarn/proto/YarnProtos.java#L87908!!!org.apache.hadoop.yarn.proto.YarnProtos$ContainerRetryContextProto.ContainerRetryContextProto!!!org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.parseUnknownField!!!YarnProtos.java:92467!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java#L1542!!!org.apache.hadoop.yarn.client.cli.LogsCLI$ClientConnectionRetry.retryOn!!!org.apache.hadoop.yarn.client.cli.LogsCLI$ClientRetryOp.run!!!LogsCLI.java:1545!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineConnector.java#L342!!!org.apache.hadoop.yarn.client.api.impl.TimelineConnector$TimelineClientConnectionRetry.retryOn!!!org.apache.hadoop.yarn.client.api.impl.TimelineConnector$TimelineClientRetryOp.run!!!TimelineConnector.java:341!!!java.net.SocketException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineV2ClientImpl.java#L251!!!org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects!!!org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects!!!TimelineV2ClientImpl.java:255!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1278!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController$FSAction.runWithRetries!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController$FSAction.run!!!LogAggregationIndexedFileController.java:1279!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.fs.RemoteIterator.hasNext!!!LogAggregationIndexedFileController.java:1320!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.fs.RemoteIterator.next!!!LogAggregationIndexedFileController.java:1322!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.fs.FileContext.open!!!LogAggregationIndexedFileController.java:1326!!!java.io.FileNotFoundException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!java.io.DataInputStream.readFully!!!LogAggregationIndexedFileController.java:1328!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java#L1321!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadUUIDFromLogFile!!!org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.deleteFileWithRetries!!!LogAggregationIndexedFileController.java:1331!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/retry/FederationActionRetry.java#L31!!!org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry.runWithRetries!!!org.apache.hadoop.yarn.server.federation.retry.FederationActionRetry.run!!!FederationActionRetry.java:33!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java#L460!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.getApplicationReport!!!UnmanagedApplicationManager.java:475!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java#L460!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.getApplicationReport!!!UnmanagedApplicationManager.java:486!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java#L460!!!org.apache.hadoop.yarn.server.uam.UnmanagedApplicationManager.monitorCurrentAppAttempt!!!org.apache.hadoop.yarn.api.ApplicationBaseProtocol.getApplicationAttemptReport!!!UnmanagedApplicationManager.java:499!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java#L355!!!org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState!!!org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerTokenIdentifier!!!NMLeveldbStateStoreService.java:368!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java#L355!!!org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState!!!org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ResourceMappings$AssignedResources.fromBytes!!!NMLeveldbStateStoreService.java:432!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java#L788!!!org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$FSAction.runWithRetries!!!org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$FSAction.run!!!FileSystemRMStateStore.java:790!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java#L1000!!!org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitReservation!!!org.apache.hadoop.yarn.api.ApplicationClientProtocol.submitReservation!!!FederationClientInterceptor.java:1218!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java#L963!!!org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getNewReservation!!!org.apache.hadoop.yarn.api.ApplicationClientProtocol.getNewReservation!!!FederationClientInterceptor.java:1151!!!java.io.IOException
+https://github.com/apache/hadoop/tree//ee7d178//hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineWriterImpl.java#L268!!!org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl$FSAction.runWithRetries!!!org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl$FSAction.run!!!FileSystemTimelineWriterImpl.java:271!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_timeout_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_timeout_bounds.data
new file mode 100644
index 00000000..975f93e4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/hadoop_timeout_bounds.data
@@ -0,0 +1,221 @@
+TestBatchIbr.testIbr
+TestCheckpoint.testActiveImageWithTimeDeltaRelaxation
+TestCheckpoint.testActiveRejectSmallerTxidDeltaImage
+TestCheckpoint.testCheckpoint
+TestCheckpoint.testCheckpointAfterTwoFailedUploads
+TestCheckpoint.testCheckpointTriggerOnTxnCount
+TestCheckpoint.testCheckpointWithFailedStorageDir
+TestCheckpoint.testCheckpointWithSeparateDirsAfterNameFails
+TestCheckpoint.testDeleteTemporaryEditsOnStartup
+TestCheckpoint.testEditFailureBeforeRename
+TestCheckpoint.testEditFailureOnFirstCheckpoint
+TestCheckpoint.testFailureBeforeRename
+TestCheckpoint.testImportCheckpoint
+TestCheckpoint.testLegacyOivImage
+TestCheckpoint.testMultipleSecondaryNNsAgainstSameNN
+TestCheckpoint.testMultipleSecondaryNNsAgainstSameNN2
+TestCheckpoint.testMultipleSecondaryNamenodes
+TestCheckpoint.testNameDirError
+TestCheckpoint.testNameDirLocking
+TestCheckpoint.testNameNodeImageSendFailWrongDigest
+TestCheckpoint.testNameNodeImageSendFailWrongSize
+TestCheckpoint.testNamespaceVerifiedOnFileTransfer
+TestCheckpoint.testReloadOnEditReplayFailure
+TestCheckpoint.testSaveNamespace
+TestCheckpoint.testSecondaryFailsWithErrorBeforeSettingHeaders
+TestCheckpoint.testSecondaryImageDownload
+TestCheckpoint.testSecondaryNameNodeLocking
+TestCheckpoint.testSecondaryNameNodeWithDelegationTokens
+TestCheckpoint.testSecondaryNamenodeError1
+TestCheckpoint.testSecondaryNamenodeError2
+TestCheckpoint.testSecondaryNamenodeError3
+TestCheckpoint.testSecondaryPurgesEditLogs
+TestCheckpoint.testStorageAlreadyLockedErrorMessage
+TestCheckpoint.testTooManyEditReplayFailures
+TestComparators.testAllUserComparators
+TestComparators.testBakedUserComparator
+TestComparators.testDefaultMRComparator
+TestComparators.testUserMRComparator
+TestComparators.testUserValueGroupingComparator
+TestCompressionEmulationUtils.testCompressibleGridmixRecord
+TestCompressionEmulationUtils.testCompressionRatios
+TestCompressionEmulationUtils.testFileQueueDecompression
+TestCompressionEmulationUtils.testPossiblyCompressedDecompressedStreams
+TestCompressionEmulationUtils.testRandomCompressedTextDataGenerator
+TestCopyToLocal.testCopy
+TestCopyToLocal.testCopySingleFile
+TestCopyToLocal.testCopyWithThreads
+TestCopyToLocal.testCopyWithThreadsAndQueueSize
+TestCopyToLocal.testCopyWithThreadsAndQueueSizeWrong
+TestDataDrivenDBInputFormat.testDateSplits
+TestDatanodeDeath.testComplex
+TestDatanodeDeath.testSimple0
+TestDatanodeDeath.testSimple1
+TestDatanodeDeath.testSimple2
+TestDecommissionWithStriped.testCountNodes
+TestDecommissionWithStriped.testDecommission2NodeWithBusyNode
+TestDecommissionWithStriped.testDecommissionTwoNodes
+TestDecommissionWithStriped.testDecommissionWithBusyNode
+TestDecommissionWithStriped.testDecommissionWithFailedReplicating
+TestDecommissionWithStriped.testDecommissionWithMissingBlock
+TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup
+TestDecommissionWithStriped.testFileChecksumAfterDecommission
+TestDecommissionWithStriped.testFileFullBlockGroup
+TestDecommissionWithStriped.testFileMultipleBlockGroups
+TestDecommissionWithStriped.testFileSmallerThanOneCell
+TestDecommissionWithStriped.testFileSmallerThanOneStripe
+TestDecommissionWithStriped.testRecoveryWithDecommission
+TestDirectoryCommitterScale.test_010_createTaskFiles
+TestDirectoryCommitterScale.test_030_commitFiles
+TestDirectoryCommitterScale.test_040_abortFiles
+TestDistCh.testDistCh
+TestFSEditLogLoader.testAddNewStripedBlock
+TestFSEditLogLoader.testDisplayRecentEditLogOpCodes
+TestFSEditLogLoader.testErasureCodingPolicyOperations
+TestFSEditLogLoader.testFSEditLogOpCodes
+TestFSEditLogLoader.testHasNonEcBlockUsingStripedIDForAddBlock
+TestFSEditLogLoader.testHasNonEcBlockUsingStripedIDForUpdateBlocks
+TestFSEditLogLoader.testReplicationAdjusted
+TestFSEditLogLoader.testUpdateStripedBlocks
+TestFSEditLogLoader.testValidateEmptyEditLog
+TestFileOutputCommitter.testAbortV1
+TestFileOutputCommitter.testCommitterV1
+TestFileOutputCommitter.testCommitterV2
+TestFileOutputCommitter.testCommitterWithDuplicatedCommitV1
+TestFileOutputCommitter.testCommitterWithDuplicatedCommitV2
+TestFileOutputCommitter.testCommitterWithFailureV1
+TestFileOutputCommitter.testCommitterWithFailureV2
+TestFileOutputCommitter.testMapFileOutputCommitterV2
+TestFileOutputCommitter.testMapOnlyNoOutputV1
+TestFileOutputCommitter.testMapOnlyNoOutputV2
+TestFileOutputCommitter.testRecoveryUpgradeV1V2
+TestFileOutputCommitter.testRecoveryV1
+TestFileOutputCommitter.testRecoveryV2
+TestFileSystemAccessService.createFileSystem
+TestFileSystemAccessService.fileSystemCache
+TestFileSystemAccessService.fileSystemExecutor
+TestFileSystemAccessService.serviceHadoopConf
+TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+TestFsVolumeList.testExcludeSlowDiskWhenChoosingVolume
+TestFsVolumeList.testGetNextVolumeWithClosedVolume
+TestFsVolumeList.testInstanceOfAddReplicaThreadPool
+TestHDFSCLI.testAll
+TestHFlush.hFlush_01
+TestHFlush.hFlush_02
+TestHFlush.hFlush_03
+TestHFlush.hSyncEndBlockAndUpdateLength
+TestHFlush.hSyncEndBlock_00
+TestHFlush.hSyncEndBlock_01
+TestHFlush.hSyncEndBlock_02
+TestHFlush.hSyncEndBlock_03
+TestHFlush.hSyncUpdateLength_00
+TestHFlush.hSyncUpdateLength_01
+TestHFlush.hSyncUpdateLength_02
+TestHFlush.hSyncUpdateLength_03
+TestHFlush.testHFlushInterrupted
+TestHFlush.testPipelineHeartbeat
+TestHadoopArchives.testReadFileContent
+TestHttpFSServer.testAccess
+TestHttpFSServer.testAllowSnapshot
+TestHttpFSServer.testContentType
+TestHttpFSServer.testCreateFileWithUnmaskedPermissions
+TestHttpFSServer.testCreateSnapshot
+TestHttpFSServer.testCreateSnapshotNoSnapshotName
+TestHttpFSServer.testCustomizedUserAndGroupNames
+TestHttpFSServer.testDelegationTokenOperations
+TestHttpFSServer.testDelegationTokenOperationsSsl
+TestHttpFSServer.testDeleteSnapshot
+TestHttpFSServer.testDirAcls
+TestHttpFSServer.testDisallowSnapshot
+TestHttpFSServer.testDisallowSnapshotException
+TestHttpFSServer.testECPolicy
+TestHttpFSServer.testErasureCodingPolicy
+TestHttpFSServer.testFileAcls
+TestHttpFSServer.testGetFileBlockLocations
+TestHttpFSServer.testGetServerDefaults
+TestHttpFSServer.testGetSnapshotDiff
+TestHttpFSServer.testGetSnapshotDiffIllegalParam
+TestHttpFSServer.testGetSnapshotList
+TestHttpFSServer.testGetSnapshottableDirectoryList
+TestHttpFSServer.testGetTrashRoot
+TestHttpFSServer.testGlobFilter
+TestHttpFSServer.testHdfsAccess
+TestHttpFSServer.testMkdirWithUnmaskedPermissions
+TestHttpFSServer.testMkdirs
+TestHttpFSServer.testNoRedirect
+TestHttpFSServer.testNoRedirectWithData
+TestHttpFSServer.testOpenOffsetLength
+TestHttpFSServer.testPerms
+TestHttpFSServer.testRenameSnapshot
+TestHttpFSServer.testStoragePolicySatisfier
+TestHttpFSServer.testXAttrs
+TestKeyFieldBasedComparator.testBasicUnixComparator
+TestLineRecordReaderJobs.testCustomRecordDelimiters
+TestLineRecordReaderJobs.testDefaultRecordDelimiters
+TestMRKeyFieldBasedComparator.testBasicUnixComparator
+TestMapRed.testBiggerInput
+TestMapRed.testCompression
+TestMapRed.testMapred
+TestMapRed.testNullKeys
+TestMapRed.testSmallInput
+TestMapReduce.testMapred
+TestMultipleCachefiles.testMultipleCachefiles
+TestNameserviceRPCMetrics.testProxyOp
+TestNameserviceRPCMetrics.testProxyOpCompleteConcurrent
+TestRMFailover.testAutomaticFailover
+TestRMFailover.testEmbeddedWebAppProxy
+TestRMFailover.testExplicitFailover
+TestRMFailover.testRMWebAppRedirect
+TestRMFailover.testUncaughtExceptionHandlerWithHAEnabled
+TestRMFailover.testWebAppProxyInStandAloneMode
+TestReencryption.testCancelFutureThenReencrypt
+TestReencryption.testCancelFutureThenRestart
+TestReencryption.testDeleteDuringReencrypt
+TestReencryption.testRaceCreateHandler
+TestReencryption.testRaceDeleteCreateHandler
+TestReencryption.testRaceDeleteCreateUpdater
+TestReencryption.testRaceDeleteCurrentDirHandler
+TestReencryption.testRaceDeleteCurrentDirUpdater
+TestReencryption.testRaceDeleteHandler
+TestReencryption.testRaceDeleteUpdater
+TestReencryption.testRaceDeleteZoneHandler
+TestReencryption.testReencryptCancel
+TestReencryption.testReencryptCancelForUpdater
+TestReencryption.testReencryptCommandsQueuedOrdering
+TestReencryption.testReencryptLoadedFromEdits
+TestReencryption.testReencryptLoadedFromFsimage
+TestReencryption.testReencryptNestedZones
+TestReencryption.testReencryptOrdering
+TestReencryption.testReencryptRaceRename
+TestReencryption.testReencryptSnapshots
+TestReencryption.testReencryptionBasic
+TestReencryption.testReencryptionKMSDown
+TestReencryption.testReencryptionNNSafeMode
+TestReencryption.testReencryptionUpdaterFaultCkpt
+TestReencryption.testReencryptionUpdaterFaultOneTask
+TestReencryption.testReencryptionUpdaterFaultRecover
+TestReencryption.testReencryptionWithoutProvider
+TestReencryption.testRestartAfterReencrypt
+TestReencryption.testRestartAfterReencryptAndCheckpoint
+TestReencryption.testRestartDuringReencrypt
+TestReencryption.testRestartWithRenames
+TestReencryption.testZoneDeleteDuringReencrypt
+TestReplaceDatanodeOnFailure.testAppend
+TestReplaceDatanodeOnFailure.testBestEffort
+TestReplaceDatanodeOnFailure.testDefaultPolicy
+TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure
+TestRouterAllResolver.testHashAll
+TestRouterAllResolver.testRandomAll
+TestRouterAllResolver.testSpaceAll
+TestStoragePolicySatisfierWithStripedFile.testMoverWithFullStripe
+TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+TestStoragePolicySatisfierWithStripedFile.testWhenNoTargetDatanodeToSatisfyStoragePolicy
+TestStoragePolicySatisfierWithStripedFile.testWhenOnlyFewTargetNodesAreAvailableToSatisfyStoragePolicy
+TestStreamAggregate.testCommandLine
+TestStreamXmlRecordReader.testStreamXmlRecordReader
+TestStreaming.testCommandLine
+TestViewFileSystemLinkRegex.testConfLinkRegexFixedDestMapping
+TestViewFileSystemLinkRegex.testConfLinkRegexIndexMapping
+TestViewFileSystemLinkRegex.testConfLinkRegexNamedGroupMapping
+TestViewFileSystemLinkRegex.testConfLinkRegexWithInterceptors
+TestViewFileSystemLinkRegex.testConfLinkRegexWithSingleInterceptor
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/pom-hadoop.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/pom-hadoop.xml
new file mode 100644
index 00000000..9960fc0b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/pom-hadoop.xml
@@ -0,0 +1,963 @@
+
+
+
+ 4.0.0
+ org.apache.hadoop
+ hadoop-main
+ 3.4.0-SNAPSHOT
+ Apache Hadoop Main
+ Apache Hadoop Main
+ pom
+
+
+
+
+ com.cenqua.clover
+ clover
+
+ 3.0.2
+
+
+ org.opentest4j
+ opentest4j
+
+ 1.2.0
+ test
+
+
+
+
+
+
+ ${distMgmtStagingId}
+ ${distMgmtStagingName}
+ ${distMgmtStagingUrl}
+
+
+ ${distMgmtSnapshotsId}
+ ${distMgmtSnapshotsName}
+ ${distMgmtSnapshotsUrl}
+
+
+ apache.website
+ scpexe://people.apache.org/www/hadoop.apache.org/docs/r${project.version}
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+
+
+ ${distMgmtSnapshotsId}
+ ${distMgmtSnapshotsName}
+ ${distMgmtSnapshotsUrl}
+
+
+ repository.jboss.org
+ https://repository.jboss.org/nexus/content/groups/public/
+
+ false
+
+
+
+
+
+
+ Apache License, Version 2.0
+ https://www.apache.org/licenses/LICENSE-2.0.txt
+
+
+
+
+ Apache Software Foundation
+ https://www.apache.org
+
+
+
+
+ 3.4.0-SNAPSHOT
+
+ apache.snapshots.https
+ Apache Development Snapshot Repository
+ https://repository.apache.org/content/repositories/snapshots
+ apache.staging.https
+ Apache Release Distribution Repository
+ https://repository.apache.org/service/local/staging/deploy/maven2
+
+
+ UTF-8
+ UTF-8
+
+
+ 2.8.1
+ 3.9.1
+ 1.5
+ 1.7
+ 2.4
+ 3.0.2
+ 3.0.0
+ 2.0.0
+ 3.0.1
+ 1.5
+ 1.5
+ 3.0.1
+ 0.12
+ 2.4
+ 4.4.1
+ 2.5.0
+ 1.0.0
+ 3.1.0
+ 8.29
+ 7.1.1
+ 4.2.2
+ 4.2.0
+ 1.1.1
+ 3.8.1
+ 2.7.6
+
+ bash
+
+ org.fusesource.leveldbjni
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+
+
+
+ hadoop-project
+ hadoop-project-dist
+ hadoop-assemblies
+ hadoop-maven-plugins
+ hadoop-common-project
+ hadoop-hdfs-project
+ hadoop-yarn-project
+ hadoop-mapreduce-project
+ hadoop-tools
+ hadoop-dist
+ hadoop-minicluster
+ hadoop-client-modules
+ hadoop-build-tools
+ hadoop-cloud-storage-project
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ ${maven-dependency-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${maven-enforcer-plugin.version}
+
+
+
+ [3.0.2,)
+
+
+ [1.8,)
+
+
+
+
+
+ de.skuzzle.enforcer
+ restrict-imports-enforcer-rule
+ ${restrict-imports.enforcer.version}
+
+
+
+
+ banned-illegal-imports
+ process-sources
+
+ enforce
+
+
+
+
+ true
+ Use hadoop-thirdparty shaded instead of curator shaded
+
+ org.apache.curator.shaded.**
+
+
+
+ true
+ Use hadoop-common provided Sets rather than Guava provided Sets
+
+ org.apache.hadoop.thirdparty.com.google.common.collect.Sets
+ org.apache.hadoop.thirdparty.com.google.common.collect.Sets.**
+
+
+
+ true
+ Use hadoop-common provided Lists rather than Guava provided Lists
+
+ org.apache.hadoop.thirdparty.com.google.common.collect.Lists
+ org.apache.hadoop.thirdparty.com.google.common.collect.Lists.**
+
+
+
+ true
+ Use hadoop-annotation provided VisibleForTesting rather than the one provided by Guava
+
+ org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting
+
+
+
+ true
+ Use alternatives to Guava common classes
+
+ com.google.common.**
+
+
+
+ true
+ Use alternative to Guava provided BaseEncoding
+
+ org.apache.hadoop.thirdparty.com.google.common.io.BaseEncoding
+ org.apache.hadoop.thirdparty.com.google.common.io.BaseEncoding.**
+
+
+
+ true
+ Use alternative to Guava provided Optional
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Optional
+ org.apache.hadoop.thirdparty.com.google.common.base.Optional.**
+
+
+
+ true
+ Use alternative to Guava provided Function
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Function
+ org.apache.hadoop.thirdparty.com.google.common.base.Function.**
+
+
+
+ true
+ Use alternative to Guava provided Predicate
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Predicate
+ org.apache.hadoop.thirdparty.com.google.common.base.Predicate.**
+
+
+
+ true
+ Use alternative to Guava provided Supplier
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Supplier
+ org.apache.hadoop.thirdparty.com.google.common.base.Supplier.**
+
+
+
+ true
+ Use alternative to Guava provided ImmutableListMultimap
+
+ org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableListMultimap
+ org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableListMultimap.**
+
+
+
+ true
+ Use hadoop-common provided Preconditions rather than Guava provided
+
+ org.apache.hadoop.thirdparty.com.google.common.base.Preconditions
+ org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.**
+
+
+
+ true
+ Use Fasterxml Jackson 2 dependency in place of org.codehaus Jackson 1
+
+ org.codehaus.jackson.**
+
+
+
+ true
+ Use HttpServlet APIs instead
+
+ org.glassfish.grizzly
+ org.glassfish.grizzly.**
+
+
+
+ true
+ Use slf4j based Logger
+
+ org.apache.commons.logging.**
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+ ${maven-assembly-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-deploy-plugin
+ ${maven-deploy-plugin.version}
+
+
+ org.apache.rat
+ apache-rat-plugin
+ ${apache-rat-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ ${maven-antrun-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-site-plugin
+ ${maven-site-plugin.version}
+
+
+ org.apache.maven.wagon
+ wagon-ssh
+ ${wagon-ssh.version}
+
+
+
+
+
+ org.eclipse.m2e
+ lifecycle-mapping
+ ${lifecycle-mapping.version}
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ [1.7,)
+
+ run
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-resources-plugin
+ [2.2,)
+
+ testResources
+ resources
+
+
+
+
+
+
+
+
+ org.apache.avro
+ avro-maven-plugin
+ [1.5.3,)
+
+ schema
+ protocol
+
+
+
+
+
+
+
+
+ org.codehaus.mojo.jspc
+ jspc-maven-plugin
+ [2.0-alpha-3,)
+
+ compile
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ [2.4,)
+
+ copy-dependencies
+ build-classpath
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ exec-maven-plugin
+ [1.2,)
+
+ exec
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-jar-plugin
+ [2.3.1,)
+
+ test-jar
+
+
+
+
+
+
+
+
+
+
+
+ org.openclover
+ clover-maven-plugin
+ ${clover-maven-plugin.version}
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ ${maven-bundle-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven-checkstyle-plugin.version}
+
+
+ org.apache.hadoop
+ hadoop-build-tools
+ ${hadoop.version}
+
+
+ com.puppycrawl.tools
+ checkstyle
+ ${checkstyle.version}
+
+
+
+ checkstyle/checkstyle.xml
+ checkstyle/suppressions.xml
+ true
+ false
+ ${project.build.directory}/test/checkstyle-errors.xml
+
+
+
+ org.owasp
+ dependency-check-maven
+ ${dependency-check-maven.version}
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ ${spotbugs-maven-plugin.version}
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+
+ org.jsonschema2pojo
+ jsonschema2pojo-maven-plugin
+ ${jsonschema2pojo-maven-plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ ${maven-compiler-plugin.version}
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ false
+
+
+ clean
+
+ enforce
+
+ pre-clean
+
+
+ default
+
+ enforce
+
+ validate
+
+
+ site
+
+ enforce
+
+ pre-site
+
+
+ enforce-property
+
+ enforce
+
+
+
+
+ hadoop.version
+ You must set a hadoop.version to be the same as ${project.version}
+ ${project.version}
+ The hadoop.version property should be set and should be ${project.version}.
+
+
+ true
+
+
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ .gitattributes
+ .gitignore
+ .git/**
+ .github/pull_request_template.md
+ .idea/**
+ **/build/**
+ **/patchprocess/**
+ **/*.js
+ licenses/**
+ licenses-binary/**
+ dev-support/docker/pkg-resolver/packages.json
+ dev-support/docker/pkg-resolver/platforms.json
+ **/target/**
+
+
+
+
+ maven-site-plugin
+
+
+ attach-descriptor
+
+ attach-descriptor
+
+
+
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ true
+ true
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven-checkstyle-plugin.version}
+
+
+
+ org.owasp
+ dependency-check-maven
+ ${dependency-check-maven.version}
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ ${cyclonedx.version}
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+ 1.8
+ 1.8
+ true
+ true
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+ --add-exports=java.base/sun.net.spi.nameservice=ALL-UNNAMED
+ --add-opens=java.base/sun.net.spi.nameservice=ALL-UNNAMED
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+
+
+ true
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+ ${maven-javadoc-plugin.version}
+ false
+
+
+ aggregate
+
+ 1024m
+ true
+ false
+ ${maven.compile.source}
+ ${maven.compile.encoding}
+ ${project.build.directory}/site
+ hadoop-project/api
+
+              <excludePackageNames>org.apache.hadoop.authentication*,org.apache.hadoop.mapreduce.v2.proto,org.apache.hadoop.yarn.proto,org.apache.hadoop.yarn.server*,org.apache.hadoop.yarn.webapp*</excludePackageNames>
+              <groups>
+                <group>
+                  <title>Common</title>
+                  <packages>org.apache.hadoop*</packages>
+                </group>
+                <group>
+                  <title>HDFS</title>
+                  <packages>org.apache.hadoop.hdfs*</packages>
+                </group>
+                <group>
+                  <title>MapReduce</title>
+                  <packages>org.apache.hadoop.mapred*</packages>
+                </group>
+                <group>
+                  <title>YARN</title>
+                  <packages>org.apache.hadoop.yarn*</packages>
+                </group>
+              </groups>
+              <doclet>org.apache.hadoop.classification.tools.IncludePublicAnnotationsStandardDoclet</doclet>
+              <docletArtifacts>
+                <docletArtifact>
+                  <groupId>org.apache.hadoop</groupId>
+                  <artifactId>hadoop-annotations</artifactId>
+                  <version>${project.version}</version>
+                </docletArtifact>
+              </docletArtifacts>
+              <useStandardDocletOptions>true</useStandardDocletOptions>
+              <includeDependencySources>false</includeDependencySources>
+              <dependencySourceIncludes>
+                <dependencySourceInclude>org.apache.hadoop:hadoop-annotations</dependencySourceInclude>
+              </dependencySourceIncludes>
+            </configuration>
+            <reports>
+              <report>aggregate</report>
+            </reports>
+          </reportSet>
+        </reportSets>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <version>${maven-dependency-plugin.version}</version>
+        <reportSets>
+          <reportSet>
+            <reports>
+              <report>analyze-report</report>
+            </reports>
+          </reportSet>
+        </reportSets>
+      </plugin>
+    </plugins>
+  </reporting>
+
+  <profiles>
+    <profile>
+      <id>src</id>
+      <activation>
+        <activeByDefault>false</activeByDefault>
+      </activation>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <inherited>false</inherited>
+            <executions>
+              <execution>
+                <id>src-dist</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+                <configuration>
+                  <appendAssemblyId>false</appendAssemblyId>
+                  <attach>false</attach>
+                  <finalName>hadoop-${project.version}-src</finalName>
+                  <outputDirectory>hadoop-dist/target</outputDirectory>
+                  <descriptors>
+                    <descriptor>hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml</descriptor>
+                  </descriptors>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <inherited>false</inherited>
+            <executions>
+              <execution>
+                <id>src-dist-msg</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+                <configuration>
+                  <target>
+                    <echo/>
+                    <echo>Hadoop source tar available at: ${basedir}/hadoop-dist/target/hadoop-${project.version}-src.tar.gz</echo>
+                    <echo/>
+                  </target>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>dist</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.cyclonedx</groupId>
+            <artifactId>cyclonedx-maven-plugin</artifactId>
+            <version>${cyclonedx.version}</version>
+            <executions>
+              <execution>
+                <phase>package</phase>
+                <goals>
+                  <goal>makeBom</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <outputFormat>xml</outputFormat>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>sign</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-gpg-plugin</artifactId>
+            <version>${maven-gpg-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>sign-artifacts</id>
+                <phase>verify</phase>
+                <goals>
+                  <goal>sign</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>clover</id>
+      <activation>
+        <activeByDefault>false</activeByDefault>
+        <property>
+          <name>clover</name>
+        </property>
+      </activation>
+      <properties>
+        <cloverDatabase>${project.build.directory}/clover/hadoop-coverage.db</cloverDatabase>
+        <cloverAlwaysReport>true</cloverAlwaysReport>
+        <cloverGenHtml>true</cloverGenHtml>
+        <cloverGenXml>true</cloverGenXml>
+        <cloverGenHistorical>false</cloverGenHistorical>
+      </properties>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.openclover</groupId>
+            <artifactId>clover-maven-plugin</artifactId>
+            <configuration>
+              <includesAllSourceRoots>false</includesAllSourceRoots>
+              <includesTestSourceRoots>true</includesTestSourceRoots>
+              <cloverDatabase>${cloverDatabase}</cloverDatabase>
+              <targetPercentage>50%</targetPercentage>
+              <outputDirectory>${project.build.directory}/clover</outputDirectory>
+              <alwaysReport>${cloverAlwaysReport}</alwaysReport>
+              <generateHtml>${cloverGenHtml}</generateHtml>
+              <generateXml>${cloverGenXml}</generateXml>
+              <generateHistorical>${cloverGenHistorical}</generateHistorical>
+              <excludes>
+                <exclude>**/examples/**/*.java</exclude>
+                <exclude>**/hamlet/*.java</exclude>
+                <exclude>**/ha/proto/*.java</exclude>
+                <exclude>**/protocol/proto/*.java</exclude>
+                <exclude>**/compiler/generated/*.java</exclude>
+                <exclude>**/protobuf/*.java</exclude>
+                <exclude>**/v2/proto/*.java</exclude>
+                <exclude>**/yarn/proto/*.java</exclude>
+                <exclude>**/security/proto/*.java</exclude>
+                <exclude>**/tools/proto/*.java</exclude>
+                <exclude>**/hs/proto/*.java</exclude>
+              </excludes>
+            </configuration>
+            <executions>
+              <execution>
+                <id>clover-setup</id>
+                <phase>process-sources</phase>
+                <goals>
+                  <goal>setup</goal>
+                </goals>
+              </execution>
+              <execution>
+                <id>clover</id>
+                <phase>test</phase>
+                <goals>
+                  <goal>clover</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>aarch64</id>
+      <properties>
+        <leveldbjni.group>org.openlabtesting.leveldbjni</leveldbjni.group>
+      </properties>
+      <activation>
+        <os>
+          <name>linux</name>
+          <arch>aarch64</arch>
+        </os>
+      </activation>
+    </profile>
+  </profiles>
+</project>
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.conf
new file mode 100644
index 00000000..c6d1e4d7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data
new file mode 100644
index 00000000..397dd50a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.cli.TestHDFSCLI.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode!!!DFSInputStream.java:637!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.conf
new file mode 100644
index 00000000..4d3874cb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data
new file mode 100644
index 00000000..ec89adb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.http.server.TestHttpFSServer.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.conf
new file mode 100644
index 00000000..e0e4dd26
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data
new file mode 100644
index 00000000..592cf524
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//warmUpEncryptedKeys//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java#L301!!!org.apache.hadoop.fs.FSInputChecker.readChecksumChunk!!!org.apache.hadoop.fs.FSInputChecker.readChunk!!!FSInputChecker.java:305!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.conf
new file mode 100644
index 00000000..f4d70289
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data
new file mode 100644
index 00000000..6ef0bfcf
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.shell.TestCopyToLocal.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java#L301!!!org.apache.hadoop.fs.FSInputChecker.readChecksumChunk!!!org.apache.hadoop.fs.FSInputChecker.readChunk!!!FSInputChecker.java:305!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.conf
new file mode 100644
index 00000000..dd07e7ce
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data
new file mode 100644
index 00000000..a95e0b33
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.fs.viewfs.TestViewFileSystemLinkRegex.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L1141!!!org.apache.hadoop.hdfs.DFSOutputStream.addBlock!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock!!!DFSOutputStream.java:1143!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.conf
new file mode 100644
index 00000000..cf340a0c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data
new file mode 100644
index 00000000..b2573113
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDatanodeDeath.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1177!!!org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:1181!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.conf
new file mode 100644
index 00000000..eec6fbfc
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data
new file mode 100644
index 00000000..47b321ca
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestDecommissionWithStriped.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java#L521!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.checksumBlock!!!org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.tryDatanode!!!FileChecksumHelper.java:523!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.conf
new file mode 100644
index 00000000..225538d3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data
new file mode 100644
index 00000000..3dc83d98
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestHFlush.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L344!!!org.apache.hadoop.hdfs.DFSInputStream.readBlockLength!!!org.apache.hadoop.hdfs.DFSUtilClient.createClientDatanodeProtocolProxy!!!DFSInputStream.java:348!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.conf
new file mode 100644
index 00000000..a3eaff9b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestLeaseRecovery.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.conf
new file mode 100644
index 00000000..ce7498d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data
new file mode 100644
index 00000000..f3ab6767
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1550!!!org.apache.hadoop.hdfs.DataStreamer.transfer!!!org.apache.hadoop.hdfs.DataStreamer$StreamerStreams.sendTransferBlock!!!DataStreamer.java:1558!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.conf
new file mode 100644
index 00000000..6cc9d084
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.conf
new file mode 100644
index 00000000..71483c63
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data
new file mode 100644
index 00000000..81cd9bf5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.nfs.TestMountd.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java#L895!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run!!!org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake!!!BPServiceActor.java:903!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.conf
new file mode 100644
index 00000000..66b77e5a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data
new file mode 100644
index 00000000..4c0affa3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.TestBatchIbr.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L622!!!org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo!!!org.apache.hadoop.hdfs.DFSInputStream.getBlockReader!!!DFSInputStream.java:645!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.conf
new file mode 100644
index 00000000..3a7d402e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data
new file mode 100644
index 00000000..0943556c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getInputStream!!!DataStreamer.java:1837!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.conf
new file mode 100644
index 00000000..c618b37a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.conf
new file mode 100644
index 00000000..ccff78e0
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data
new file mode 100644
index 00000000..4271ce5d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.metrics.TestNameserviceRPCMetrics.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L613!!!org.apache.hadoop.ipc.Client$Connection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!Client.java:668!!!java.net.ConnectException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.conf
new file mode 100644
index 00000000..b82ae3f5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data
new file mode 100644
index 00000000..18318d46
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline!!!DataStreamer.java:1832!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.conf
new file mode 100644
index 00000000..b50a14b4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data
new file mode 100644
index 00000000..a7d82f22
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.federation.router.TestRouterFsck.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java#L291!!!org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.create!!!DFSOutputStream.java:294!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.conf
new file mode 100644
index 00000000..caba43a4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data
new file mode 100644
index 00000000..e463af73
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java#L341!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork!!!org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount!!!SecondaryNameNode.java:358!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.conf
new file mode 100644
index 00000000..21995051
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data
new file mode 100644
index 00000000..a262fcd3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java#L241!!!org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader!!!org.apache.hadoop.hdfs.DFSStripedInputStream.refreshLocatedBlock!!!DFSStripedInputStream.java:245!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.conf
new file mode 100644
index 00000000..83b00166
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data
new file mode 100644
index 00000000..de18cb5d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.TestReencryption.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java#L436!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks!!!org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.processTask!!!ReencryptionUpdater.java:440!!!RetriableException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.conf
new file mode 100644
index 00000000..adcf412b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data
new file mode 100644
index 00000000..019d2604
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java#L440!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run!!!org.apache.hadoop.hdfs.client.impl.LeaseRenewer.renew!!!LeaseRenewer.java:445!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.conf
new file mode 100644
index 00000000..12e0a377
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data
new file mode 100644
index 00000000..7ad9b323
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/sps/ExternalSPSBlockMoveTaskHandler.java#L203!!!org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock!!!org.apache.hadoop.hdfs.server.balancer.KeyManager.getAccessToken!!!ExternalSPSBlockMoveTaskHandler.java:206!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.conf
new file mode 100644
index 00000000..bbb0a548
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data
new file mode 100644
index 00000000..ec89adb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.lib.service.hadoop.TestFileSystemAccessService.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.conf
new file mode 100644
index 00000000..dc3f7016
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestComparators.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.conf
new file mode 100644
index 00000000..b58e3720
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data
new file mode 100644
index 00000000..53fc96a6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestFileOutputCommitter.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java#L375!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob!!!org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal!!!FileOutputCommitter.java:377!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.conf
new file mode 100644
index 00000000..b7c588be
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestLineRecordReaderJobs.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.conf
new file mode 100644
index 00000000..f92bce47
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.TestMapRed.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.conf
new file mode 100644
index 00000000..cf775654
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data
new file mode 100644
index 00000000..3f2b005e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.gridmix.TestCompressionEmulationUtils.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.conf
new file mode 100644
index 00000000..0415330e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapred.lib.TestKeyFieldBasedComparator.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.conf
new file mode 100644
index 00000000..1a651be8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data
new file mode 100644
index 00000000..3f2b005e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.TestMapReduce.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.conf
new file mode 100644
index 00000000..c0b72643
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data
new file mode 100644
index 00000000..3f2b005e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1315!!!org.apache.hadoop.mapred.Task.statusUpdate!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.statusUpdate!!!Task.java:1317!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.conf
new file mode 100644
index 00000000..b6568012
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data
new file mode 100644
index 00000000..ce98ff5b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1251!!!org.apache.hadoop.mapred.Task.done!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending!!!Task.java:1253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.conf
new file mode 100644
index 00000000..497bfb35
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data
new file mode 100644
index 00000000..7ac78106
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.security.TestUGIWithMiniKdc.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L979!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.run!!!org.apache.hadoop.security.UserGroupInformation$AutoRenewalForUserCredsRunnable.relogin!!!UserGroupInformation.java:986!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.conf
new file mode 100644
index 00000000..1839ee15
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data
new file mode 100644
index 00000000..cbc4b54c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestDumpTypedBytes.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L235!!!org.apache.hadoop.hdfs.DFSInputStream.openInfo!!!org.apache.hadoop.hdfs.DFSInputStream.getLastBlockLength!!!DFSInputStream.java:243!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.conf
new file mode 100644
index 00000000..06bfbc0f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data
new file mode 100644
index 00000000..56af3faa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestMultipleCachefiles.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L240!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.server.datanode.DataXceiver.create!!!DataXceiverServer.java:253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.conf
new file mode 100644
index 00000000..f29fc968
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data
new file mode 100644
index 00000000..a7456842
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreamAggregate.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1377!!!org.apache.hadoop.mapred.Task.sendDone!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.done!!!Task.java:1379!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.conf
new file mode 100644
index 00000000..61d0260a
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data
new file mode 100644
index 00000000..ce98ff5b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.TestStreaming.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java#L1251!!!org.apache.hadoop.mapred.Task.done!!!org.apache.hadoop.mapred.TaskUmbilicalProtocol.commitPending!!!Task.java:1253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.conf
new file mode 100644
index 00000000..e7049c21
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data
new file mode 100644
index 00000000..2b4f0088
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/EventFetcher.java#L64!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.run!!!org.apache.hadoop.mapreduce.task.reduce.EventFetcher.getMapCompletionEvents!!!EventFetcher.java:66!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.conf
new file mode 100644
index 00000000..d56df585
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data
new file mode 100644
index 00000000..ec89adb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.test.TestHFSTestCase.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java#L585!!!org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp$EDEKCacheLoader.run!!!org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.warmUpEncryptedKeys!!!FSDirEncryptionZoneOp.java:587!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.conf
new file mode 100644
index 00000000..667e321f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data
new file mode 100644
index 00000000..0943556c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestDistCh.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1826!!!org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream!!!org.apache.hadoop.net.NetUtils.getInputStream!!!DataStreamer.java:1837!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.conf
new file mode 100644
index 00000000..10e261c3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data
new file mode 100644
index 00000000..56af3faa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.TestHadoopArchives.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L240!!!org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run!!!org.apache.hadoop.hdfs.server.datanode.DataXceiver.create!!!DataXceiverServer.java:253!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.conf
new file mode 100644
index 00000000..fc63c9d5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data
new file mode 100644
index 00000000..ca8b1acd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.tools.fedbalance.TestDistCpProcedure.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L748!!!org.apache.hadoop.tools.SimpleCopyListing$TraverseDirectory.traverseDirectoryMultiThreaded!!!org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus!!!SimpleCopyListing.java:757!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.conf
new file mode 100644
index 00000000..7e6114c8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data
new file mode 100644
index 00000000..6ea9f2fb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hadoop/test-plan/hadoop_retry_locations-org.apache.hadoop.yarn.client.TestRMFailover.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hadoop/tree//60867de//hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java#L853!!!org.apache.hadoop.ha.ActiveStandbyElector.reEstablishSession!!!org.apache.hadoop.ha.ActiveStandbyElector.createConnection!!!ActiveStandbyElector.java:858!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase.conf
new file mode 100644
index 00000000..ba08c39f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase.conf
@@ -0,0 +1,3 @@
+retry_data_file: /home/bastoica/projects/wasabi/tool/wasabi/config/hbase/hbase_retry_locations.data
+injection_policy: max-count
+max_injection_count: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_retry_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_retry_bounds.data
new file mode 100644
index 00000000..4ea9802f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_retry_bounds.data
@@ -0,0 +1,158 @@
+Var name!!!Assigned value!!!Assign method!!!Test class
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!AbstractTestShell
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestShellRSGroups
+hbase.client.retries.number!!!100!!!setInt!!!IntegrationTestMobCompaction
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestShadeSaslAuthenticationProvider
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRefreshHFilesBase
+ReadOnlyZKClient.RECOVERY_RETRY!!!3!!!setInt!!!TestReadOnlyZKClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncCoprocessorOnAllRegionServersEndpoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestCoprocessorEndpoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncCoprocessorEndpoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestZstdDictionarySplitMerge
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestBackupDeleteWithFailures
+"hbase.client.retries.number"!!!"3"!!!setInt!!!TestThriftHBaseServiceHandler
+"hbase.client.retries.number"!!!"3"!!!setInt!!!TestThriftHBaseServiceHandler
+"hbase.client.retries.number"!!!"3"!!!setInt!!!TestThriftHBaseServiceHandlerWithReadOnly
+"hbase.client.retries.number"!!!3!!!setInt!!!TestThriftServer
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestExportSnapshotV1NoCluster
+DFSConfigKeys.DFS_CLIENT_RETRY_WINDOW_BASE!!!0!!!setInt!!!TestFSUtils
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!TestNamespaceAuditor
+hbase.client.retries.number!!!100!!!setInt!!!MobStressToolRunner
+hbase.client.retries.number!!!100!!!setInt!!!TestRSMobFileCleanerChore
+hbase.client.retries.number!!!100!!!setInt!!!TestMobFileCleanerChore
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestBulkLoadHFilesSplitRecovery
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestBulkLoadHFilesSplitRecovery
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestRestoreFlushSnapshotFromClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRegionServerCoprocessorExceptionWithAbort
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRegionServerCoprocessorExceptionWithAbort
+dfs.client.block.recovery.retries!!!2!!!setInt!!!TestWALObserver
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestMasterCoprocessorExceptionWithAbort
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestIncrementAndAppendWithNullResult
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestNegativeMemStoreSizeWithSlowCoprocessor
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setLong!!!TestClientOperationTimeout
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestPassCustomCellViaRegionObserver
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestFullLogReconstruction
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestFullLogReconstruction
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicationBase
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestReplicationBase
+replication.source.maxretriesmultiplier!!!10!!!setInt!!!TestReplicationBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestReplicationBase
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicationWithTags
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestReplicationWithTags
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestReplicationWithTags
+zookeeper.recovery.retry!!!1!!!setInt!!!SyncReplicationTestBase
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!SyncReplicationTestBase
+replication.source.maxretriesmultiplier!!!10!!!setInt!!!SyncReplicationTestBase
+hbase.security.relogin.maxretries!!!1!!!setInt!!!TestRpcSkipInitialSaslHandshake
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestRpcClientLeaks
+zookeeper.recovery.retry!!!1!!!setInt!!!TestMetaRegionReplicaReplication
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestMetaRegionReplicaReplication
+HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER!!!1!!!setInt!!!TestMetaRegionReplicaReplication
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestWALEntryStreamCompressionReset
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestReplicationSource
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestBasicWALEntryStream
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestBasicWALEntryStream
+replication.source.maxretriesmultiplier!!!1!!!setInt!!!TestBasicWALEntryStream
+zookeeper.recovery.retry!!!1!!!setInt!!!TestRegionReplicaReplication
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestRegionReplicaReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!TestRegionReplicaReplication
+HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER!!!1!!!setInt!!!TestRegionReplicaReplication
+hbase.security.relogin.maxretries!!!1!!!setInt!!!TestSecurityRpcSentBytesMetrics
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicationWithWALExtendedAttributes
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestReplicationWithWALExtendedAttributes
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestReplicationWithWALExtendedAttributes
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestBoundedRegionGroupingStrategy
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestBoundedRegionGroupingStrategy
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestFSHLogProvider
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestFSHLogProvider
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!TestWALFactory
+dfs.client.block.recovery.retries!!!1!!!setInt!!!TestWALFactory
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestSplitMerge
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestRegionServerScan
+RegionReplicationSink.RETRIES_NUMBER!!!1!!!setInt!!!TestRegionReplicationForWriteException
+RegionReplicationSink.RETRIES_NUMBER!!!1!!!setInt!!!TestRegionReplicationForFlushMarker
+RegionReplicationSink.RETRIES_NUMBER!!!15!!!setInt!!!TestRegionReplicationSinkCallbackAndFlushConcurrently
+hbase.client.retries.number!!!2!!!setInt!!!TestIsDeleteFailure
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!TestEndToEndSplitTransaction
+zookeeper.recovery.retry!!!0!!!setInt!!!TestRemoveRegionMetrics
+zookeeper.recovery.retry!!!0!!!setInt!!!TestRegionServerMetrics
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestSettingTimeoutOnBlockingPoint
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestRegionInterrupt
+dfs.client.block.write.retries!!!10!!!setInt!!!TestLogRollAbort
+FanOutOneBlockAsyncDFSOutputHelper.ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES!!!100!!!setInt!!!TestAsyncLogRolling
+dfs.client.block.recovery.retries!!!1!!!setInt!!!AbstractTestProtobufLog
+dfs.client.block.recovery.retries!!!2!!!setInt!!!AbstractTestWALReplay
+hbase.hstore.flush.retries.number!!!1!!!setInt!!!TestHRegion
+hbase.hstore.flush.retries.number!!!1!!!setInt!!!TestHRegion
+RegionReplicationSink.RETRIES_NUMBER!!!1!!!setInt!!!TestWALSyncTimeoutException
+dfs.client.block.write.retries!!!30!!!setInt!!!TestLogRolling
+hbase.ipc.client.connect.max.retries!!!1!!!setInt!!!AbstractTestFSWAL
+dfs.client.block.recovery.retries!!!1!!!setInt!!!AbstractTestFSWAL
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestCompactionWithShippingCoprocessor
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestScannerTimeoutHandling
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestFSErrorsExposed
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestTags
+hbase.hstore.flush.retries.number!!!1!!!setInt!!!TestHStore
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!100!!!setInt!!!TestConnection
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestConnection
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!CloneSnapshotFromClientTestBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestTableOperationException
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncReplicationAdminApi
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestFromClientSide3
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestFromClientSide3
+zookeeper.recovery.retry!!!1!!!setInt!!!TestSeparateClientZKCluster
+zookeeper.recovery.retry!!!1!!!setInt!!!TestReplicaWithCluster
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestBlockEvictionFromClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!100!!!setInt!!!TestAsyncAdminBuilder
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncClusterAdminApi2
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncAdminBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncProcedureAdminApi
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!5!!!setInt!!!AbstractTestCITimeout
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestReplicasClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestFromClientSideScanExcpetion
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestAvoidCellReferencesIntoShippedBlocks
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncQuotaAdminApi
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!0!!!setInt!!!TestMalformedCellFromClient
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!RestoreSnapshotFromClientTestBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestBadReplicationPeer
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestReplicationAdminForSyncReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncReplicationAdminApiWithClusters
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestAsyncClientPauseForRpcThrottling
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncNamespaceAdminApi
+hbase.client.retries.number!!!3!!!setInt!!!TestEntityLocks
+dfs.client.block.write.retries!!!30!!!setInt!!!TestAdmin2
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestAsyncClusterAdminApi
+dfs.client.block.write.retries!!!30!!!setInt!!!TestAsyncClusterAdminApi
+hbase.client.retries.number!!!1!!!setInt!!!TestCheckAndMutateWithByteBuff
+hbase.client.retries.number!!!6!!!setInt!!!TestAdminBase
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestAssignmentManagerMetrics
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestSimpleRegionNormalizerOnCluster
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestMaster
+zookeeper.recovery.retry!!!0!!!setInt!!!AbstractTestDLS
+WALProcedureStore.ROLL_RETRIES_CONF_KEY!!!10!!!setInt!!!TestWALProcedureStoreOnHDFS
+hbase.client.retries.number!!!1!!!setInt!!!TestMasterShutdown
+ReadOnlyZKClient.RECOVERY_RETRY!!!3!!!setInt!!!TestMasterShutdown
+ReadOnlyZKClient.RECOVERY_RETRY_INTERVAL_MILLIS!!!100!!!setInt!!!TestMasterShutdown
+zookeeper.recovery.retry!!!1!!!setInt!!!TestVisibilityLabelReplicationWithExpAsString
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestVisibilityLabelReplicationWithExpAsString
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestVisibilityLabelReplicationWithExpAsString
+zookeeper.recovery.retry!!!1!!!setInt!!!TestVisibilityLabelsReplication
+zookeeper.recovery.retry.intervalmill!!!10!!!setInt!!!TestVisibilityLabelsReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestVisibilityLabelsReplication
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!CustomSaslAuthenticationProviderTestBase
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+hbase.client.retries.number!!!5!!!setInt!!!TestCoprocessorWhitelistMasterObserver
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!2!!!setInt!!!TestNamespaceCommands
+hbase.security.relogin.maxretries!!!1!!!setInt!!!AbstractTestSecureIPC
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!3!!!setInt!!!TestMetaTableLocator
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!FilterTestingCluster
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!1!!!setInt!!!TestFilterWrapper
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!10!!!setInt!!!TestMetaTableAccessor
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestQuotaTableUtil
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestQuotaAdmin
+HConstants.HBASE_CLIENT_RETRIES_NUMBER!!!6!!!setInt!!!TestQuotaThrottle
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_retry_locations.data
new file mode 100644
index 00000000..8446d3e7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_retry_locations.data
@@ -0,0 +1,137 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L114!!!org.apache.hadoop.hbase.io.FileLink.read!!!org.apache.hadoop.fs.FSDataInputStream.read!!!FileLink.java:117!!!java.io.FileNotFoundException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java#L164!!!org.apache.hadoop.hbase.io.FileLink.readFully!!!readFully!!!FileLink.java:166!!!java.io.FileNotFoundException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/RSProcedureDispatcher.java#L349!!!org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.run!!!org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher.sendRequest!!!RSProcedureDispatcher.java:398!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L136!!!org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState!!!org.apache.hadoop.hbase.master.MasterServices.getProcedures!!!ServerCrashProcedure.java:278!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/SnapshotVerifyProcedure.java#L124!!!org.apache.hadoop.hbase.master.procedure.SnapshotVerifyProcedure.execute!!!org.apache.hadoop.hbase.master.procedure.ServerRemoteProcedure.execute!!!SnapshotVerifyProcedure.java:142!!!java.lang.InterruptedException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/SplitWALProcedure.java#L64!!!org.apache.hadoop.hbase.master.procedure.SplitWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.SplitWALManager.isSplitWALFinished!!!SplitWALProcedure.java:80!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/SwitchRpcThrottleProcedure.java#L65!!!org.apache.hadoop.hbase.master.procedure.SwitchRpcThrottleProcedure.executeFromState!!!org.apache.hadoop.hbase.master.procedure.SwitchRpcThrottleProcedure.switchThrottleState!!!SwitchRpcThrottleProcedure.java:70!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished!!!SyncReplicationReplayWALProcedure.java:75!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALRemoteProcedure.java#L89!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALRemoteProcedure.truncateWALs!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.finishReplayWAL!!!SyncReplicationReplayWALRemoteProcedure.java:92!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BootstrapNodeManager.java#L135!!!org.apache.hadoop.hbase.regionserver.BootstrapNodeManager.getFromMaster!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!BootstrapNodeManager.java:140!!!java.io.IOException
+https://github.com/apache/hbase/blob/89ca7f4ade84c84a246281c71898543b6161c099/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java#L103!!!org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned!!!org.apache.hadoop.hbase.client.Admin.getClusterMetrics!!!HBaseFsckRepair.java:110!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java#L446!!!ZkSplitLogWorkerCoordination.getTaskList!!!listChildrenAndWatchForNewChildren!!!ZkSplitLogWorkerCoordination.java:449!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/HBaseServerBase.java#L336!!!org.apache.hadoop.hbase.HBaseServerBase.putUpWebUI!!!org.apache.hadoop.hbase.http.InfoServer.start!!!HBaseServerBase.java:348!!!java.net.BindException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!openRegion!!!TransitRegionStateProcedure.java:491!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmOpened!!!TransitRegionStateProcedure.java:494!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!closeRegion!!!TransitRegionStateProcedure.java:496!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L339!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmClosed!!!TransitRegionStateProcedure.java:499!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java#L215!!!org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders!!!listStatus!!!MasterWalManager.java:234!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ClaimReplicationQueuesProcedure.java#L84!!!org.apache.hadoop.hbase.master.replication.ClaimReplicationQueuesProcedure.execute!!!removeQueue!!!ClaimReplicationQueuesProcedure.java:102!!!ReplicationException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!updatePeerStorage!!!ModifyPeerProcedure.java:205!!!ReplicationException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!reopenRegions!!!ModifyPeerProcedure.java:220!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!updateLastPushedSequenceIdForSerialPeer!!!ModifyPeerProcedure.java:231!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!enablePeer!!!ModifyPeerProcedure.java:238!!!ReplicationException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L158!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!postPeerModification!!!ModifyPeerProcedure.java:259!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ModifyPeerProcedure.java#L176!!!org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure.executeFromState!!!prePeerModification!!!ModifyPeerProcedure.java:188!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/RecoverStandbyProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure.executeFromState!!!renameToPeerReplayWALDir!!!RecoverStandbyProcedure.java:62!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/RecoverStandbyProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.RecoverStandbyProcedure.executeFromState!!!renameToPeerSnapshotWALDir!!!RecoverStandbyProcedure.java:84!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCleanerChore.java#L178!!!org.apache.hadoop.hbase.mob.MobFileCleanerChore.cleanupObsoleteMobFiles!!!initReader!!!MobFileCleanupUtil.java:125!!!java.io.IOException
+https://github.com/apache/hbase/blob/e1ad781dd9ddc201123a63122e22496ee7f8a4b0/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCleanupUtil.java#L100!!!org.apache.hadoop.hbase.mob.MobFileCleanerChore.cleanupObsoleteMobFiles!!!closeStoreFile!!!MobFileCleanupUtil.java:129!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L470!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock!!!FanOutOneBlockAsyncDFSOutputHelper.java:493!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L589!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!FanOutOneBlockAsyncDFSOutputHelper.java:592!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/FullTableBackupClient.java#L950!!!org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.snapshotTable!!!snapshot!!!FullTableBackupClient.java:209!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java#L250!!!org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!BlockingRpcConnection.java:258!!!java.net.SocketException
+https://github.com/apache/hbase/tree//e1ad781//hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java#L461!!!org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!BlockingRpcConnection.java:476!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-it/src/main/java/org/apache/hadoop/hbase/chaos/ChaosAgent.java#L412!!!org.apache.hadoop.hbase.chaos.ChaosAgent.execWithRetries!!!org.apache.hadoop.hbase.chaos.ChaosAgent.exec!!!ChaosAgent.java:414!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALInputFormat.java#L157!!!org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader!!!org.apache.hadoop.fs.Path.getFileSystem!!!WALInputFormat.java:162!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALInputFormat.java#L157!!!org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader!!!org.apache.hadoop.hbase.wal.WALFactory.createStreamReader!!!WALInputFormat.java:162!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.getLogFiles!!!WALProcedureStore.java:410!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLogs!!!WALProcedureStore.java:413!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter!!!WALProcedureStore.java:420!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.removeFile!!!WALProcedureStore.java:430!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L898!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncSlots!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.syncSlots!!!WALProcedureStore.java:900!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L950!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriterWithRetries!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter!!!WALProcedureStore.java:956!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readTag!!!RPCProtos.java:5020!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes!!!RPCProtos.java:5026!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes!!!RPCProtos.java:5031!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBytes!!!RPCProtos.java:5036!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readInt32!!!RPCProtos.java:5041!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5046!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5051!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L4652!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3$Builder.parseUnknownField!!!RPCProtos.java:5056!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java#L233!!!org.apache.hadoop.hbase.replication.ZKReplicationQueueStorage.setWALPosition!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential!!!N/A!!!java.net.BindException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.fs.FileSystem.exists!!!HFileArchiver.java:555!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!HFileArchiver.java:556!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java#L215!!!org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders!!!org.apache.hadoop.fs.FileSystem.exists!!!MasterWalManager.java:233!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java#L215!!!org.apache.hadoop.hbase.master.MasterWalManager.getFailedServersFromLogFolders!!!org.apache.hadoop.hbase.util.CommonFSUtils.listStatus!!!MasterWalManager.java:234!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.setPeerNewSyncReplicationState!!!TransitPeerSyncReplicationStateProcedure.java:265!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.setLastPushedSequenceId!!!TransitPeerSyncReplicationStateProcedure.java:296!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.removeAllReplicationQueues!!!TransitPeerSyncReplicationStateProcedure.java:314!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.transitPeerSyncReplicationState!!!TransitPeerSyncReplicationStateProcedure.java:329!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.enablePeer!!!TransitPeerSyncReplicationStateProcedure.java:349!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.createDirForRemoteWAL!!!TransitPeerSyncReplicationStateProcedure.java:367!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/TransitPeerSyncReplicationStateProcedure.java#L234!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.TransitPeerSyncReplicationStateProcedure.postTransit!!!TransitPeerSyncReplicationStateProcedure.java:381!!!ReplicationException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.createNodeIfNotExistsNoWatch!!!ClientZKSyncer.java:173!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.setDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.setData!!!ClientZKSyncer.java:175!!!org.apache.zookeeper.KeeperExceptionNoNodeException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.deleteDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode!!!ClientZKSyncer.java:198!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L169!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.reconnectAfterExpiration!!!ZKWatcher.reconnectAfterExpiration!!!ClientZKSyncer.java:216!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/master/zksyncer/ClientZKSyncer.java#L195!!!org.apache.hadoop.hbase.master.zksyncer.ClientZKSyncer.deleteDataForClientZkUntilSuccess!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode!!!ClientZKSyncer.java:198!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/MetaRegionLocationCache.java#L107!!!org.apache.hadoop.hbase.MetaRegionLocationCache.loadMetaLocationsFromZk!!!org.apache.hadoop.hbase.zookeeper.ZKWatcher.getMetaReplicaNodesAndWatchChildren!!!MetaRegionLocationCache.java:109!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/MetaRegionLocationCache.java#L171!!!org.apache.hadoop.hbase.MetaRegionLocationCache.updateMetaLocation!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists!!!MetaRegionLocationCache.java:174!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/MetaRegionLocationCache.java#L171!!!org.apache.hadoop.hbase.MetaRegionLocationCache.updateMetaLocation!!!org.apache.hadoop.hbase.MetaRegionLocationCache.getMetaRegionLocation!!!MetaRegionLocationCache.java:181!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/namequeues/WALEventTrackerTableAccessor.java#L70!!!org.apache.hadoop.hbase.namequeues.WALEventTrackerTableAccessor.doPut!!!org.apache.hadoop.hbase.client.Connection.getTable!!!WALEventTrackerTableAccessor.java:71!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/namequeues/WALEventTrackerTableAccessor.java#L70!!!org.apache.hadoop.hbase.namequeues.WALEventTrackerTableAccessor.doPut!!!org.apache.hadoop.hbase.client.Table.put!!!WALEventTrackerTableAccessor.java:72!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/RegionReplicaFlushHandler.java#L107!!!org.apache.hadoop.hbase.regionserver.handler.RegionReplicaFlushHandler.triggerFlushInPrimaryRegion!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!RegionReplicaFlushHandler.java:114!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1078!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createDir!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.mkdirs!!!HRegionFileSystem.java:1080!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1101!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename!!!org.apache.hadoop.fs.FileSystem.rename!!!HRegionFileSystem.java:1103!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1126!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.deleteDir!!!org.apache.hadoop.fs.FileSystem.delete!!!HRegionFileSystem.java:1128!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java#L1165!!!org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createDirOnFileSystem!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!HRegionFileSystem.java:1167!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2505!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2590!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L817!!!org.apache.hadoop.hbase.regionserver.HStore.flushCache!!!org.apache.hadoop.hbase.regionserver.StoreFlusher.flushSnapshot!!!HStore.java:828!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$ReportProcedureDoneRequest$Builder.addResult!!!RemoteProcedureResultReporter.java:74!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone!!!RemoteProcedureResultReporter.java:89!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/FlushSnapshotSubprocedure.java#L113!!!org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!FlushSnapshotSubprocedure.java:114!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotRegionCallable.java#L57!!!org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!SnapshotRegionCallable.java:58!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java#L783!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archiveLogFile!!!AbstractFSWAL.java:923!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/DualAsyncFSWAL.java#L76!!!org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance!!!org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L452!!!org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate!!!org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.parallelReplicate!!!HBaseInterClusterReplicationEndpoint.java:461!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java#L169!!!org.apache.hadoop.hbase.replication.regionserver.HFileReplicator.doBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.loadHFileQueue!!!HFileReplicator.java:179!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSourceShipper.java#L57!!!org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getStartPosition!!!org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSource.locateRecoveredPaths!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L432!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.uncaughtException!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.refreshSources!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L512!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.createReplicationEndpoint!!!ReplicationSource.java:555!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L512!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint!!!ReplicationSource.java:565!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java#L706!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.cleanOldLogs!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.removeRemoteWALs!!!ReplicationSourceManager.java:739!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceShipper.java#L179!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.shipEdits!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.cleanUpHFileRefs!!!ReplicationSourceShipper.java:195!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java#L130!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch!!!N/A!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java#L130!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run!!!org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset!!!N/A!!!WALEntryFilterRetryableException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java#L130!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries!!!ReplicationSourceWALReader.java:171!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1019!!!org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.moveRegionsBetweenGroups!!!moveAsync!!!RSGroupInfoManagerImpl.java:1037!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L908!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!BulkLoadHFilesTool.java:966!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L908!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.groupOrSplitPhase!!!BulkLoadHFilesTool.java:981!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L908!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.bulkLoadPhase!!!BulkLoadHFilesTool.java:990!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java#L622!!!org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor!!!org.apache.hadoop.fs.FileSystem.create!!!FSTableDescriptors.java:626!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java#L622!!!org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor!!!java.io.FilterOutputStream.write!!!FSTableDescriptors.java:627!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java#L622!!!org.apache.hadoop.hbase.util.FSTableDescriptors.writeTableDescriptor!!!org.apache.hadoop.hbase.util.FSTableDescriptors.deleteTableDescriptorFiles!!!FSTableDescriptors.java:635!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L444!!!org.apache.hadoop.hbase.util.FSUtils.setVersion!!!org.apache.hadoop.fs.FileSystem.create!!!FSUtils.java:440!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L499!!!org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists!!!org.apache.hadoop.fs.FileSystem.exists!!!FSUtils.java:495!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L601!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.create!!!FSUtils.java:598!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L601!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!java.io.FilterOutputStream.write!!!FSUtils.java:599!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L601!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.rename!!!FSUtils.java:609!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L426!!!org.apache.hadoop.hbase.util.HBaseFsck$FileLockCallable.createFileWithRetries!!!org.apache.hadoop.hbase.util.CommonFSUtils.create!!!HBaseFsck.java:428!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L481!!!org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck!!!org.apache.hbase.thirdparty.com.google.common.io.Closeables.close!!!HBaseFsck.java:483!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L481!!!org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck!!!org.apache.hadoop.hbase.util.CommonFSUtils.delete!!!HBaseFsck.java:484!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L481!!!org.apache.hadoop.hbase.util.HBaseFsck.unlockHbck!!!org.apache.hadoop.hbase.util.CommonFSUtils.getCurrentFileSystem!!!HBaseFsck.java:484!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java#L720!!!org.apache.hadoop.hbase.util.HBaseFsck.setMasterInMaintenanceMode!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.createEphemeralNodeAndWatch!!!HBaseFsck.java:735!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java#L107!!!org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned!!!org.apache.hadoop.hbase.client.Admin.getClusterMetrics!!!HBaseFsckRepair.java:110!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/MoveWithAck.java#L76!!!org.apache.hadoop.hbase.util.MoveWithAck.call!!!org.apache.hadoop.hbase.client.Admin.move!!!MoveWithAck.java:82!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/util/MoveWithAck.java#L76!!!org.apache.hadoop.hbase.util.MoveWithAck.call!!!org.apache.hadoop.hbase.util.MoveWithAck.isSameServer!!!MoveWithAck.java:85!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractWALRoller.java#L174!!!org.apache.hadoop.hbase.wal.AbstractWALRoller.run!!!org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal!!!AbstractWALRoller.java:212!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java#L339!!!org.apache.hadoop.hbase.wal.WALFactory.createStreamReader!!!org.apache.hadoop.hbase.wal.AbstractFSWALProvider$Reader.init!!!WALFactory.java:417!!!java.io.IOException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L208!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:210!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L258!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:262!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L258!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:264!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L320!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:324!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L320!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:326!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L373!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:377!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L373!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:379!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L425!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:428!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L475!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getAcl!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:478!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L509!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setAcl!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:512!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L574!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:576!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L616!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createSequential!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:626!!!org.apache.zookeeper.KeeperExceptionOperationTimeoutException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L680!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi!!!org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk!!!RecoverableZooKeeper.java:683!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKNodeTracker.java#L140!!!org.apache.hadoop.hbase.zookeeper.ZKNodeTracker.blockUntilAvailable!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch!!!ZKNodeTracker.java:131!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKNodeTracker.java#L140!!!org.apache.hadoop.hbase.zookeeper.ZKNodeTracker.blockUntilAvailable!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists!!!ZKNodeTracker.java:143!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
+https://github.com/apache/hbase/tree//e1ad781//hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L1400!!!org.apache.hadoop.hbase.zookeeper.ZKUtil.waitForBaseZNode!!!org.apache.zookeeper.ZooKeeper.exists!!!ZKUtil.java:1402!!!org.apache.zookeeper.KeeperExceptionSessionExpiredException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_timeout_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_timeout_bounds.data
new file mode 100644
index 00000000..e49cb2c6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/hbase_timeout_bounds.data
@@ -0,0 +1,81 @@
+AbstractTestFSWAL.testFailedToCreateWALIfParentRenamed
+AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+AbstractTestFSWAL.testRollWriterForClosedWAL
+AbstractTestFSWAL.testSyncNoAppend
+AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+AbstractTestFSWAL.testWALCoprocessorLoaded
+AbstractTestFSWAL.testWriteEntryCanBeNull
+TestAsyncTable.testDisabled
+TestAsyncTable.testIncrement
+TestBackupDeleteRestore.testBackupDeleteRestore
+TestBasicWALEntryStream.testEOFExceptionInOldWALsDirectory
+TestBootstrapNodeManager.testNormal
+TestBootstrapNodeManager.testOnlyMaster
+TestBootstrapNodeManager.testRegionServerError
+TestBulkLoadReplicationHFileRefs.testWhenExcludeCF
+TestBulkLoadReplicationHFileRefs.testWhenExcludeNamespace
+TestBulkLoadReplicationHFileRefs.testWhenExcludeTable
+TestClassLoading.testClassLoadingFromHDFS
+TestClassLoading.testClassLoadingFromLibDirInJar
+TestClassLoading.testClassLoadingFromLocalFS
+TestClassLoading.testClassLoadingFromRelativeLibDirInJar
+TestClassLoading.testHBase3810
+TestClassLoading.testPrivateClassLoader
+TestClientSideRegionScanner.testContinuesToScanIfHasMore
+TestClientTimeouts.testAdminTimeout
+TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+TestDrainReplicationQueuesForStandBy.test
+TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+TestFSHLog.testUnflushedSeqIdTracking
+TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+TestFlushSnapshotFromClient.testAsyncFlushSnapshot
+TestFlushSnapshotFromClient.testFlushCreateListDestroy
+TestFlushSnapshotFromClient.testFlushTableSnapshot
+TestFlushSnapshotFromClient.testFlushTableSnapshotWithProcedure
+TestFlushSnapshotFromClient.testSkipFlushTableSnapshot
+TestFlushSnapshotFromClient.testSnapshotFailsOnNonExistantTable
+TestFlushSnapshotFromClient.testSnapshotStateAfterMerge
+TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge
+TestHelloHBase.testCreateNamespaceAndTable
+TestHelloHBase.testDeleteRow
+TestHelloHBase.testNamespaceExists
+TestHelloHBase.testPutRowToTable
+TestMetaWithReplicasShutdownHandling.testShutdownHandling
+TestMultiVersions.testGetRowVersions
+TestMultiVersions.testScanMultipleVersions
+TestMultiVersions.testTimestamps
+TestRSGroupsBalance.testGetRSGroupAssignmentsByTable
+TestRSGroupsBalance.testGroupBalance
+TestRSGroupsBalance.testGroupDryRunBalance
+TestRSGroupsBalance.testMisplacedRegions
+TestRefreshRecoveredReplication.testReplicationRefreshSource
+TestRegionAssignedToMultipleRegionServers.test
+TestRegionMoverWithRSGroupEnable.testUnloadRegions
+TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+TestRegionObserverScannerOpenHook.testRegionObserverFlushTimeStacking
+TestRegionObserverScannerOpenHook.testRegionObserverScanTimeStacking
+TestRegionReplicaSplit.testAssignFakeReplicaRegion
+TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+TestRegionReplicationLagEvaluation.test
+TestRegionServerCrashDisableWAL.test
+TestReplicator.testReplicatorBatching
+TestReplicator.testReplicatorWithErrors
+TestRetainAssignmentOnRestart.testForceRetainAssignment
+TestRetainAssignmentOnRestart.testRetainAssignmentOnClusterRestart
+TestRetainAssignmentOnRestart.testRetainAssignmentOnSingleRSRestart
+TestSecurityHeadersFilter.testDefaultValues
+TestSecurityHeadersFilter.testHstsAndCspSettings
+TestSerialReplicationFailover.testKillRS
+TestSnapshotProcedureMasterRestarts.testMasterRestarts
+TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileVerifyingSnapshot
+TestSuperUserQuotaPermissions.testSuperUserCanStillCompact
+TestSuperUserQuotaPermissions.testSuperuserCanRemoveQuota
+TestSyncReplicationWALProvider.test
+TestTableMapReduceUtil.testInitCredentialsForCluster1
+TestTableMapReduceUtil.testInitCredentialsForCluster2
+TestTableMapReduceUtil.testInitCredentialsForCluster3
+TestTableMapReduceUtil.testInitCredentialsForCluster4
+TestZooKeeperScanPolicyObserver.test
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/pom-hbase.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/pom-hbase.xml
new file mode 100644
index 00000000..6a74163b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/pom-hbase.xml
@@ -0,0 +1,4721 @@
+
+
+
+ 4.0.0
+
+ org.apache
+ apache
+ 23
+
+
+
+ org.apache.hbase
+ hbase
+ ${revision}
+ pom
+ Apache HBase
+ Apache HBase is the Hadoop database. Use it when you need
+ random, realtime read/write access to your Big Data.
+ This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters
+ of commodity hardware.
+ https://hbase.apache.org
+ 2007
+
+
+
+ Apache License, Version 2.0
+ https://www.apache.org/licenses/LICENSE-2.0.txt
+ repo
+
+
+
+
+ achouhan
+ Abhishek Singh Chouhan
+ achouhan@apache.org
+ +5
+
+
+ acube123
+ Amitanand S. Aiyer
+ acube123@apache.org
+ -8
+
+
+ allan163
+ Allan Yang
+ allan163@apache.org
+ +8
+
+
+ appy
+ Apekshit Sharma
+ appy@apache.org
+ -8
+
+
+ anastasia
+ Anastasia Braginsky
+ anastasia@apache.org
+ +2
+
+
+ apurtell
+ Andrew Purtell
+ apurtell@apache.org
+ -8
+
+
+ anoopsamjohn
+ Anoop Sam John
+ anoopsamjohn@apache.org
+ +5
+
+
+ antonov
+ Mikhail Antonov
+ antonov@apache.org
+ -8
+
+
+ ashishsinghi
+ Ashish Singhi
+ ashishsinghi@apache.org
+ +5
+
+
+ ashu
+ Ashu Pachauri
+ ashu@apache.org
+ +5
+
+
+ bharathv
+ Bharath Vissapragada
+ bharathv@apache.org
+ -8
+
+
+ binlijin
+ Lijin Bin
+ binlijin@apache.org
+ +8
+
+
+ brfrn169
+ Toshihiro Suzuki
+ brfrn169@apache.org
+ +9
+
+
+ busbey
+ Sean Busbey
+ busbey@apache.org
+ -6
+
+
+ chenglei
+ Cheng Lei
+ chenglei@apache.org
+ +8
+
+
+ chenheng
+ Heng Chen
+ chenheng@apache.org
+ +8
+
+
+ chia7712
+ Chia-Ping Tsai
+ chia7712@apache.org
+ +8
+
+
+ ddas
+ Devaraj Das
+ ddas@apache.org
+ -8
+
+
+ dimaspivak
+ Dima Spivak
+ dimaspivak@apache.org
+ -8
+
+
+ dmeil
+ Doug Meil
+ dmeil@apache.org
+ -5
+
+
+ eclark
+ Elliott Clark
+ eclark@apache.org
+ -8
+
+
+ elserj
+ Josh Elser
+ elserj@apache.org
+ -5
+
+
+ enis
+ Enis Soztutar
+ enis@apache.org
+ -8
+
+
+ eshcar
+ Eshcar Hillel
+ eshcar@apache.org
+ +2
+
+
+ fenghh
+ Honghua Feng
+ fenghh@apache.org
+ +8
+
+
+ garyh
+ Gary Helmling
+ garyh@apache.org
+ -8
+
+
+ gchanan
+ Gregory Chanan
+ gchanan@apache.org
+ -8
+
+
+ gjacoby
+ Geoffrey Jacoby
+ gjacoby@apache.org
+ -5
+
+
+ gxcheng
+ Guangxu Cheng
+ gxcheng@apache.org
+ +8
+
+
+ haxiaolin
+ Xiaolin Ha
+ haxiaolin@apache.org
+ +8
+
+
+ huaxiangsun
+ Huaxiang Sun
+ huaxiangsun@apache.org
+ -8
+
+
+ jdcryans
+ Jean-Daniel Cryans
+ jdcryans@apache.org
+ -8
+
+
+ jeffreyz
+ Jeffrey Zhong
+ jeffreyz@apache.org
+ -8
+
+
+ jerryjch
+ Jing Chen (Jerry) He
+ jerryjch@apache.org
+ -8
+
+
+ jyates
+ Jesse Yates
+ jyates@apache.org
+ -8
+
+
+ jgray
+ Jonathan Gray
+ jgray@fb.com
+ -8
+
+
+ jingchengdu
+ Jingcheng Du
+ jingchengdu@apache.org
+ +8
+
+
+ esteban
+ Esteban Gutierrez
+ esteban@apache.org
+ -8
+
+
+ janh
+ Jan Hentschel
+ janh@apache.org
+ +1
+
+
+ jmhsieh
+ Jonathan Hsieh
+ jmhsieh@apache.org
+ -8
+
+
+ jxiang
+ Jimmy Xiang
+ jxiang@apache.org
+ -8
+
+
+ kannan
+ Kannan Muthukkaruppan
+ kannan@fb.com
+ -8
+
+
+ karthik
+ Karthik Ranganathan
+ kranganathan@fb.com
+ -8
+
+
+ larsfrancke
+ Lars Francke
+ larsfrancke@apache.org
+ Europe/Berlin
+
+
+ larsgeorge
+ Lars George
+ larsgeorge@apache.org
+ +1
+
+
+ larsh
+ Lars Hofhansl
+ larsh@apache.org
+ -8
+
+
+ liangxie
+ Liang Xie
+ liangxie@apache.org
+ +8
+
+
+ liushaohui
+ Shaohui Liu
+ liushaohui@apache.org
+ +8
+
+
+ liyin
+ Liyin Tang
+ liyin.tang@fb.com
+ -8
+
+
+ liyu
+ Yu Li
+ liyu@apache.org
+ +8
+
+
+ mbautin
+ Mikhail Bautin
+ mbautin@apache.org
+ -8
+
+
+ mbertozzi
+ Matteo Bertozzi
+ mbertozzi@apache.org
+ 0
+
+
+ mdrob
+ Mike Drob
+ mdrob@apache.org
+ -5
+
+
+ meszibalu
+ Balazs Meszaros
+ meszibalu@apache.org
+ +1
+
+
+ misty
+ Misty Stanley-Jones
+ misty@apache.org
+ -8
+
+
+ ndimiduk
+ Nick Dimiduk
+ ndimiduk@apache.org
+ -8
+
+
+ nihaljain
+ Nihal Jain
+ nihaljain@apache.org
+ +5
+
+
+ niuyulin
+ Yulin Niu
+ niuyulin@apache.org
+ +8
+
+
+ nkeywal
+ Nicolas Liochon
+ nkeywal@apache.org
+ +1
+
+
+ nspiegelberg
+ Nicolas Spiegelberg
+ nspiegelberg@fb.com
+ -8
+
+
+ octo47
+ Andrey Stepachev
+ octo47@gmail.com
+ 0
+
+
+ openinx
+ Zheng Hu
+ openinx@apache.org
+ +8
+
+
+ pankajkumar
+ Pankaj Kumar
+ pankajkumar@apache.org
+ +5
+
+
+ psomogyi
+ Peter Somogyi
+ psomogyi@apache.org
+ +1
+
+
+ rajeshbabu
+ Rajeshbabu Chintaguntla
+ rajeshbabu@apache.org
+ +5
+
+
+ ramkrishna
+ Ramkrishna S Vasudevan
+ ramkrishna@apache.org
+ +5
+
+
+ rawson
+ Ryan Rawson
+ rawson@apache.org
+ -8
+
+
+ reidchan
+ Reid Chan
+ reidchan@apache.org
+ +8
+
+
+ shahrs87
+ Rushabh Shah
+ shahrs87@apache.org
+ -8
+
+
+ sakthi
+ Sakthi Vel
+ sakthi@apache.org
+ -8
+
+
+ sershe
+ Sergey Shelukhin
+ sershe@apache.org
+ -8
+
+
+ ssrungarapu
+ Srikanth Srungarapu
+ ssrungarapu@apache.org
+ -8
+
+
+ stack
+ Michael Stack
+ stack@apache.org
+ -8
+
+
+ syuanjiang
+ Stephen Yuan Jiang
+ syuanjiang@apache.org
+ -8
+
+
+ taklwu
+ Tak-Lon (Stephen) Wu
+ taklwu@apache.org
+ -8
+
+
+ tedyu
+ Ted Yu
+ yuzhihong@gmail.com
+ -8
+
+
+ tianhang
+ Tianhang Tang
+ tianhang@apache.org
+ +8
+
+
+ tianjy
+ tianjy@apache.org
+ +8
+
+
+ todd
+ Todd Lipcon
+ todd@apache.org
+ -8
+
+
+ toffer
+ Francis Liu
+ toffer@apache.org
+ -8
+
+
+ vikasv
+ Vikas Vishwakarma
+ vikasv@apache.org
+ +5
+
+
+ virag
+ Virag Kothari
+ virag@yahoo-inc.com
+ -8
+
+
+ vjasani
+ Viraj Jasani
+ vjasani@apache.org
+ +5
+
+
+ water
+ Xiang Li
+ xiangli@apache.org
+ +8
+
+
+ wchevreuil
+ Wellington Chevreuil
+ wchevreuil@apache.org
+ 0
+
+
+ weichiu
+ Wei-Chiu Chuang
+ weichiu@apache.org
+ -8
+
+
+ xucang
+ Xu Cang
+ xucang@apache.org
+ -8
+
+
+ yangzhe1991
+ Phil Yang
+ yangzhe1991@apache.org
+ +8
+
+
+ zghao
+ Guanghao Zhang
+ zghao@apache.org
+ +8
+
+
+ zhangduo
+ Duo Zhang
+ zhangduo@apache.org
+ +8
+
+
+ zhaobaiqiang
+ Baiqiang Zhao
+ zhaobaiqiang@apache.org
+ +8
+
+
+ zjushch
+ Chunhui Shen
+ zjushch@apache.org
+ +8
+
+
+ churro
+ Rahul Gidwani
+ churro@apache.org
+ -8
+
+
+ yiliang
+ Yi Liang
+ yiliang@apache.org
+ -8
+
+
+ zyork
+ Zach York
+ zyork@apache.org
+ -8
+
+
+ meiyi
+ Yi Mei
+ meiyi@apache.org
+ +8
+
+
+ wangzheng
+ Zheng (bsglz) Wang
+ wangzheng@apache.org
+ +8
+
+
+ sunxin
+ Xin Sun
+ sunxin@apache.org
+ +8
+
+
+ huangzhuoyue
+ Zhuoyue Huang
+ huangzhuoyue@apache.org
+ +8
+
+
+ xiaoyt
+ Yutong Xiao
+ xiaoyt@apache.org
+ +8
+
+
+ bbeaudreault
+ Bryan Beaudreault
+ bbeaudreault@apache.org
+ -5
+
+
+ heliangjun
+ Liangjun He
+ heliangjun@apache.org
+ +8
+
+
+
+
+ User List
+ user-subscribe@hbase.apache.org
+ user-unsubscribe@hbase.apache.org
+ user@hbase.apache.org
+ https://lists.apache.org/list.html?user@hbase.apache.org
+
+ https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
+
+
+
+ Developer List
+ dev-subscribe@hbase.apache.org
+ dev-unsubscribe@hbase.apache.org
+ dev@hbase.apache.org
+ https://lists.apache.org/list.html?dev@hbase.apache.org
+
+ https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
+
+
+
+ Commits List
+ commits-subscribe@hbase.apache.org
+ commits-unsubscribe@hbase.apache.org
+ https://lists.apache.org/list.html?commits@hbase.apache.org
+
+
+ Issues List
+ issues-subscribe@hbase.apache.org
+ issues-unsubscribe@hbase.apache.org
+ https://lists.apache.org/list.html?issues@hbase.apache.org
+
+
+ Builds List
+ builds-subscribe@hbase.apache.org
+ builds-unsubscribe@hbase.apache.org
+ https://lists.apache.org/list.html?builds@hbase.apache.org
+
+
+ User (ZH) List
+ user-zh-subscribe@hbase.apache.org
+ user-zh-unsubscribe@hbase.apache.org
+ user-zh@hbase.apache.org
+ https://lists.apache.org/list.html?user-zh@hbase.apache.org
+
+
+
+
+ hbase-build-configuration
+ hbase-replication
+ hbase-balancer
+ hbase-mapreduce
+ hbase-resource-bundle
+ hbase-http
+ hbase-server
+ hbase-thrift
+ hbase-shell
+ hbase-protocol-shaded
+ hbase-client
+ hbase-hadoop-compat
+ hbase-common
+ hbase-procedure
+ hbase-endpoint
+ hbase-it
+ hbase-examples
+ hbase-assembly
+ hbase-testing-util
+ hbase-annotations
+ hbase-rest
+ hbase-checkstyle
+ hbase-external-blockcache
+ hbase-shaded
+ hbase-archetypes
+ hbase-metrics-api
+ hbase-metrics
+ hbase-backup
+ hbase-zookeeper
+ hbase-hbtop
+ hbase-asyncfs
+ hbase-logging
+ hbase-compression
+
+
+ scm:git:git://gitbox.apache.org/repos/asf/hbase.git
+ scm:git:https://gitbox.apache.org/repos/asf/hbase.git
+ https://gitbox.apache.org/repos/asf?p=hbase.git
+
+
+ JIRA
+ https://issues.apache.org/jira/browse/HBASE
+
+
+
+ hbase.apache.org
+ HBase Website at hbase.apache.org
+
+ file:///tmp
+
+
+
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+ 4.0.0-alpha-1-SNAPSHOT
+
+ false
+
+ false
+
+ false
+
+ false
+
+ false
+
+ false
+ ${project.build.finalName}.tar.gz
+ yyyy-MM-dd'T'HH:mm
+ ${maven.build.timestamp}
+ 1.8
+ 8
+
+
+ 3.5.0
+ ${compileSource}
+
+ 3.2.4
+
+ ${hadoop-three.version}
+ src/main/assembly/hadoop-three-compat.xml
+
+ 3.10.5.Final
+
+ 0.13.0
+
+ 0.13.0
+ 1.11.0
+ 2.8.1
+ 1.15
+ 1.7
+ 2.11.0
+ 3.9
+ 3.6.1
+ 3.4.4
+ 4.5.13
+ 4.4.13
+ 3.2.6
+ 2.14.1
+ 2.14.1
+ 2.3.1
+ 3.1.0
+ 2.1.1
+ 2.3.2
+ 3.0.1-b08
+ 9.3.9.0
+ 4.13.2
+ 1.3
+ 1.15.0
+ 1.15.0
+ 2.17.2
+ 4.11.0
+ 0.6.1
+ thrift
+ 0.14.1
+ 3.5.7
+ 2.11
+ 1.7.30
+ 4.0.3
+ 2.4.1
+ 1.5.4
+
+ 2.1.43
+ 1.0.57
+ 2.12.2
+ 1.70
+ 1.5.1
+ 1.0.1
+ 1.1.0
+ 4.2.0
+
+ 2.2.2
+ 2.0.6
+ 3.0.0
+ 1.4
+
+ 8.29
+ 3.1.0
+ 2.16
+ 2.4.2
+ 1.0.0
+ 1.8
+ 3.3.0
+ 3.1.0
+ 2.10
+ 3.0.1
+ 3.4.0
+ 1.1.0
+ 3.1.2
+ 1.5.0.Final
+ 1.3.9-1
+ 4.7.3
+ 4.7.2.1
+ 3.1.0
+ 2.12
+ 1.0.1
+ 2.27.2
+ 3.12.0
+
+ 0.24
+ 1.11.0
+ 1.8.0
+ 1.1.10.1
+ 1.9
+ 1.5.5-2
+ 4.1.4
+
+ 0.8.8
+
+ 3.9.1.2184
+
+
+ hbase-server-${project.version}-tests.jar
+ hbase-common-${project.version}-tests.jar
+ hbase-procedure-${project.version}-tests.jar
+ hbase-it-${project.version}-tests.jar
+ hbase-annotations-${project.version}-tests.jar
+ hbase-mapreduce-${project.version}-tests.jar
+ hbase-zookeeper-${project.version}-tests.jar
+ hbase-asyncfs-${project.version}-tests.jar
+ bash
+ surefire-junit47
+
+ false
+ false
+
+ 0.25C
+ 0.25C
+ org.apache.hadoop.hbase.testclassification.SmallTests
+ org.apache.hadoop.hbase.testclassification.MediumTests
+ false
+ true
+ 900
+
+
+ 2200m
+ 2200m
+
+ -enableassertions -Dhbase.build.id=${build.id} -Xmx${surefire.Xmx}
+ -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true
+ -Djava.awt.headless=true -Djdk.net.URLClassPath.disableClassPathURLCheck=true
+ -Dorg.apache.hbase.thirdparty.io.netty.leakDetection.level=advanced
+ -Dio.netty.eventLoopThreads=3 -Dio.opentelemetry.context.enableStrictContext=true
+ -enableassertions -Xmx${surefire.cygwinXmx}
+ -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true
+ "-Djava.library.path=${hadoop.library.path};${java.library.path}"
+ -Dorg.apache.hbase.thirdparty.io.netty.leakDetection.level=advanced
+ -Dio.opentelemetry.context.enableStrictContext=true
+ -Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible=true
+ --add-modules jdk.unsupported
+ --add-opens java.base/java.nio=ALL-UNNAMED
+ --add-opens java.base/sun.nio.ch=ALL-UNNAMED
+ --add-opens java.base/java.lang=ALL-UNNAMED
+ --add-opens java.base/jdk.internal.ref=ALL-UNNAMED
+ --add-opens java.base/java.lang.reflect=ALL-UNNAMED
+ --add-opens java.base/java.util=ALL-UNNAMED
+ --add-opens java.base/java.util.concurrent=ALL-UNNAMED
+ --add-exports java.base/jdk.internal.misc=ALL-UNNAMED
+ --add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED
+ --add-opens java.base/jdk.internal.util.random=ALL-UNNAMED
+
+ ${hbase-surefire.argLine} @{jacocoArgLine}
+ 1.5.1
+ 3.0.0
+ 0.14.0
+
+ ${project.build.directory}/test-classes
+ ${project.build.directory}
+ yyyy-MM-dd'T'HH:mm:ss'Z'
+
+ ${maven.build.timestamp}
+ bash
+
+ none
+
+ 2.0.0.AM26
+ 2.0.0
+
+
+
+
+
+
+
+ org.apache.hbase
+ hbase-annotations
+ ${project.version}
+ test-jar
+
+
+
+ org.apache.hbase
+ hbase-backup
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-common
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-common
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-logging
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-logging
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-protocol-shaded
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-procedure
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-procedure
+ ${project.version}
+ test-jar
+
+
+ org.apache.hbase
+ hbase-hadoop-compat
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-hadoop-compat
+ ${project.version}
+ test-jar
+
+
+ org.apache.hbase
+ hbase-replication
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-replication
+ ${project.version}
+ test-jar
+
+
+ org.apache.hbase
+ hbase-balancer
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-balancer
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-http
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-http
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-server
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-server
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-mapreduce
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-mapreduce
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-endpoint
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shell
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shell
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-thrift
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-thrift
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-testing-util
+ ${project.version}
+ test
+
+
+ org.apache.hbase
+ hbase-examples
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-external-blockcache
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-it
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-client
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-client
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-metrics-api
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-metrics-api
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-metrics
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-metrics
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-rest
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-resource-bundle
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-zookeeper
+ ${project.version}
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ com.github.spotbugs
+ spotbugs-annotations
+
+
+
+
+ org.apache.hbase
+ hbase-zookeeper
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-hbtop
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shaded-client
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shaded-client-byo-hadoop
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-shaded-mapreduce
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-asyncfs
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-asyncfs
+ ${project.version}
+ test-jar
+ test
+
+
+ org.apache.hbase
+ hbase-compression-aircompressor
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-brotli
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-lz4
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-snappy
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-xz
+ ${project.version}
+
+
+ org.apache.hbase
+ hbase-compression-zstd
+ ${project.version}
+
+
+
+ com.github.stephenc.findbugs
+ findbugs-annotations
+ ${findbugs-annotations.version}
+
+
+
+ org.codehaus.jettison
+ jettison
+ ${jettison.version}
+
+
+
+ org.slf4j
+ slf4j-api
+ ${slf4j.version}
+
+
+ org.slf4j
+ jcl-over-slf4j
+ ${slf4j.version}
+
+
+ org.slf4j
+ jul-to-slf4j
+ ${slf4j.version}
+
+
+ org.apache.logging.log4j
+ log4j-api
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-core
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-slf4j-impl
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-1.2-api
+ ${log4j2.version}
+
+
+
+ org.apache.avro
+ avro
+ ${avro.version}
+
+
+ com.github.ben-manes.caffeine
+ caffeine
+ ${caffeine.version}
+
+
+ io.dropwizard.metrics
+ metrics-core
+ ${metrics-core.version}
+
+
+ org.apache.httpcomponents
+ httpclient
+ ${httpclient.version}
+
+
+ org.apache.httpcomponents
+ httpcore
+ ${httpcore.version}
+
+
+ commons-codec
+ commons-codec
+ ${commons-codec.version}
+
+
+ commons-validator
+ commons-validator
+ ${commons-validator.version}
+
+
+ commons-io
+ commons-io
+ ${commons-io.version}
+
+
+ org.apache.commons
+ commons-lang3
+ ${commons-lang3.version}
+
+
+ org.apache.commons
+ commons-math3
+ ${commons-math.version}
+
+
+
+ commons-logging
+ commons-logging
+ 1.2
+
+
+ org.apache.zookeeper
+ zookeeper
+ ${zookeeper.version}
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ com.github.spotbugs
+ spotbugs-annotations
+
+
+ jline
+ jline
+
+
+ com.sun.jmx
+ jmxri
+
+
+ com.sun.jdmk
+ jmxtools
+
+
+ javax.jms
+ jms
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+
+
+ jline
+ jline
+ ${jline.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${thrift.version}
+
+
+ org.apache.tomcat.embed
+ tomcat-embed-core
+
+
+
+
+ org.jruby
+ jruby-complete
+ ${jruby.version}
+
+
+ org.jruby.jcodings
+ jcodings
+ ${jcodings.version}
+
+
+ org.jruby.joni
+ joni
+ ${joni.version}
+
+
+ com.fasterxml.jackson.core
+ jackson-annotations
+ ${jackson.version}
+
+
+ com.fasterxml.jackson.core
+ jackson-core
+ ${jackson.version}
+
+
+ com.fasterxml.jackson.core
+ jackson-databind
+ ${jackson.databind.version}
+
+
+ org.jamon
+ jamon-runtime
+ ${jamon-runtime.version}
+
+
+
+ javax.servlet
+ javax.servlet-api
+ ${servlet.api.version}
+
+
+ javax.ws.rs
+ javax.ws.rs-api
+ ${wx.rs.api.version}
+
+
+ com.sun.activation
+ javax.activation
+ 1.2.0
+
+
+ javax.annotation
+ javax.annotation-api
+ 1.2
+
+
+
+ org.glassfish.web
+ javax.servlet.jsp
+ ${glassfish.jsp.version}
+
+
+
+ javax.servlet.jsp
+ javax.servlet.jsp-api
+ 2.3.1
+
+
+ org.glassfish
+ javax.el
+ ${glassfish.el.version}
+
+
+ javax.xml.bind
+ jaxb-api
+ ${jaxb-api.version}
+
+
+ javax.xml.stream
+ stax-api
+
+
+
+
+ junit
+ junit
+ ${junit.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+ org.hamcrest
+ hamcrest-library
+ ${hamcrest.version}
+
+
+ org.mockito
+ mockito-bom
+ ${mockito.version}
+ pom
+ import
+
+
+ io.opentelemetry
+ opentelemetry-bom
+ ${opentelemetry.version}
+ pom
+ import
+
+
+ io.opentelemetry
+ opentelemetry-semconv
+ ${opentelemetry.version}-alpha
+
+
+ io.opentelemetry.javaagent
+ opentelemetry-javaagent
+ ${opentelemetry-javaagent.version}
+
+
+ com.lmax
+ disruptor
+ ${disruptor.version}
+
+
+ net.spy
+ spymemcached
+ ${spy.version}
+ true
+
+
+ org.bouncycastle
+ bcprov-jdk15on
+ ${bouncycastle.version}
+ test
+
+
+ org.skyscreamer
+ jsonassert
+ ${skyscreamer.version}
+ test
+
+
+ org.bouncycastle
+ bcpkix-jdk15on
+ ${bouncycastle.version}
+ test
+
+
+ org.apache.kerby
+ kerb-core
+ ${kerby.version}
+
+
+ org.apache.kerby
+ kerb-client
+ ${kerby.version}
+
+
+ org.apache.kerby
+ kerb-simplekdc
+ ${kerby.version}
+
+
+ org.apache.commons
+ commons-crypto
+ ${commons-crypto.version}
+
+
+ net.java.dev.jna
+ jna
+
+
+
+
+ org.apache.curator
+ curator-framework
+ ${curator.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+
+
+ org.apache.curator
+ curator-client
+ ${curator.version}
+
+
+ com.google.guava
+ guava
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+
+
+ org.apache.curator
+ curator-recipes
+ ${curator.version}
+
+
+ com.google.guava
+ guava
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+
+
+ org.apache.yetus
+ audience-annotations
+ ${audience-annotations.version}
+
+
+
+ io.airlift
+ aircompressor
+ ${aircompressor.version}
+
+
+ org.lz4
+ lz4-java
+ ${lz4.version}
+
+
+ org.tukaani
+ xz
+ ${xz.version}
+
+
+ org.xerial.snappy
+ snappy-java
+ ${snappy.version}
+
+
+ com.github.luben
+ zstd-jni
+ ${zstd-jni.version}
+
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-gson
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-miscellaneous
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-netty
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-protobuf
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-jetty
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-jersey
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-shaded-jackson-jaxrs-json-provider
+ ${hbase-thirdparty.version}
+
+
+ org.apache.hbase.thirdparty
+ hbase-unsafe
+ ${hbase-thirdparty.version}
+
+
+ com.sun.xml.ws
+ jaxws-ri
+ 2.3.2
+ pom
+
+
+ javax.activation
+ javax.activation-api
+
+
+
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+
+ junit
+ junit
+ test
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-remote-resources-plugin
+
+
+ org.apache.maven.plugins
+ maven-release-plugin
+
+
+ apache-release
+
+ -Dmaven.test.skip.exec ${arguments}
+ ${goals}
+ pom.xml
+
+
+
+ org.apache.maven.plugins
+ maven-compiler-plugin
+ 3.11.0
+
+ true
+ false
+ false
+ -Xlint:-options
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+ ${maven.javadoc.version}
+
+ ${compileSource}
+
+
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+ ${surefire.version}
+
+
+ ${surefire.firstPartGroups}
+ false
+ false
+ false
+ ${surefire.skipFirstPart}
+ ${surefire.firstPartForkCount}
+
+
+ false
+ ${surefire.reportsDirectory}
+ ${surefire.tempDir}
+ ${surefire.testFailureIgnore}
+ ${surefire.timeout}
+ ${test.output.tofile}
+
+ ${test.build.classes}
+ ${test.tmp.dir}
+ org.apache.hadoop.hbase.logging.JulToSlf4jInitializer
+
+
+
+ ${test.exclude.pattern}
+
+
+
+ listener
+ org.apache.hadoop.hbase.TimedOutTestsListener,org.apache.hadoop.hbase.HBaseClassTestRuleChecker,org.apache.hadoop.hbase.ResourceCheckerJUnitListener
+
+
+
+
+
+
+ org.apache.maven.surefire
+ ${surefire.provider}
+ ${surefire.version}
+
+
+
+
+ secondPartTestsExecution
+
+ test
+
+ test
+
+ ${surefire.skipSecondPart}
+ ${surefire.testFailureIgnore}
+
+ false
+ ${surefire.secondPartForkCount}
+
+ ${surefire.secondPartGroups}
+ ${surefire.timeout}
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-surefire-report-plugin
+ ${surefire.version}
+
+
+ org.codehaus.mojo
+ buildnumber-maven-plugin
+ ${buildnumber.maven.version}
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ ${spotbugs.maven.version}
+
+ ${project.basedir}/../dev-support/spotbugs-exclude.xml
+ true
+ true
+ Max
+
+
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+
+ org.codehaus.mojo
+ build-helper-maven-plugin
+ ${build.helper.maven.version}
+
+
+ maven-antrun-plugin
+ ${maven.antrun.version}
+
+
+ org.jamon
+ jamon-maven-plugin
+ ${jamon.plugin.version}
+
+
+
+ org.apache.maven.plugins
+ maven-source-plugin
+
+
+ attach-sources
+
+ jar-no-fork
+ test-jar-no-fork
+
+ prepare-package
+
+
+ log4j2.xml
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-jar-plugin
+
+ true
+
+ hbase-site.xml
+ hdfs-site.xml
+ mapred-queues.xml
+ mapred-site.xml
+
+
+
+
+
+
+ test-jar
+
+ prepare-package
+
+
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ **/*.versionsBackup
+ **/*.log
+ **/.*
+ **/*.tgz
+ **/*.orig
+ **/0000000000000016310
+ **/a6a6562b777440fd9c34885428f5cb61.21e75333ada3d5bafb34bb918f29576c
+ **/8e8ab58dcf39412da19833fcd8f687ac
+ **/.idea/**
+ **/*.iml
+ **/CHANGES.txt
+ **/generated/**
+ **/gen-*/**
+
+ conf/regionservers
+ **/*.avpr
+ **/*.svg
+
+ **/src/main/resources/META-INF/LEGAL
+
+ **/src/main/asciidoc/hbase.css
+
+ **/jquery.min.js
+ **/jquery.tablesorter.min.js
+ **/parser-date-iso8601.min.js
+
+ **/src/main/resources/hbase-webapps/static/*/bootstrap*
+
+ **/hbase-webapps/static/js/vega*.min.js
+
+ **/*.vm
+
+ **/control
+ **/conffile
+
+ docs/*
+ logs/*
+
+ .git/**
+ .svn/**
+ **/.settings/**
+ **/patchprocess/**
+ src/site/resources/repo/**
+ **/dependency-reduced-pom.xml
+ **/rat.txt
+
+ **/shaded/com/google/protobuf/**
+ **/src/main/patches/**
+ **/vote.tmpl
+
+ **/CC-MAIN-2021-10-warc.paths.gz
+
+
+
+
+ maven-assembly-plugin
+
+
+ true
+
+
+
+ org.xolstice.maven.plugins
+ protobuf-maven-plugin
+ ${protobuf.plugin.version}
+
+ ${basedir}/src/main/protobuf/
+ false
+ true
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.version}
+
+ hbase/checkstyle.xml
+ hbase/checkstyle-suppressions.xml
+ true
+
+
+
+ org.apache.hbase
+ hbase-checkstyle
+ ${project.version}
+
+
+ com.puppycrawl.tools
+ checkstyle
+ ${checkstyle.version}
+
+
+
+
+ net.revelc.code
+ warbucks-maven-plugin
+ ${maven.warbucks.version}
+
+ false
+
+
+
+ (?!.*(.generated.|.tmpl.|\$)).*
+ false
+ true
+ false
+ false
+ false
+ org[.]apache[.]yetus[.]audience[.]InterfaceAudience.*
+
+
+
+
+
+ run-warbucks
+
+ check
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${enforcer.version}
+
+
+ org.codehaus.mojo
+ extra-enforcer-rules
+ ${extra.enforcer.version}
+
+
+ de.skuzzle.enforcer
+ restrict-imports-enforcer-rule
+ ${restrict-imports.enforcer.version}
+
+
+
+
+ org.apache.maven.plugins
+ maven-gpg-plugin
+ ${maven.gpg.version}
+
+
+
+
+
+ org.codehaus.mojo
+ flatten-maven-plugin
+ 1.3.0
+
+ true
+ true
+ oss
+
+
+
+
+ flatten
+
+ flatten
+
+ process-resources
+
+
+
+ flatten.clean
+
+ clean
+
+ clean
+
+
+
+
+ org.codehaus.mojo
+ build-helper-maven-plugin
+
+
+ negate-license-bundles-property
+
+ bsh-property
+
+
+ skip.license.check = !${license.bundles.dependencies};
+
+ skip.license.check
+
+
+
+
+
+ create-license-file-path-property
+
+ regex-property
+
+
+ license.aggregate.path
+ ${project.build.directory}/maven-shared-archive-resources/META-INF/LICENSE
+ \\
+ /
+ false
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+
+
+ display-info
+
+ display-info
+
+ initialize
+ false
+
+
+ hadoop-profile-min-maven-min-java-banned-xerces
+
+ enforce
+
+
+
+
+
+ System.getProperty("hadoop-profile", "").isEmpty()
+ The hadoop-profile property is unused; did you mean to set hadoop.profile instead?
+
+
+
+ [${maven.min.version},)
+ Maven is out of date.
+ HBase requires at least version ${maven.min.version} of Maven to properly build from source.
+ You appear to be using an older version. You can use either "mvn -version" or
+ "mvn enforcer:display-info" to verify what version is active.
+ See the reference guide on building for more information: https://hbase.apache.org/book.html#build
+
+
+
+ [${java.min.version},)
+ Java is out of date.
+ HBase requires at least version ${java.min.version} of the JDK to properly build from source.
+ You appear to be using an older version. You can use either "mvn -version" or
+ "mvn enforcer:display-info" to verify what version is active.
+ See the reference guide on building for more information: https://hbase.apache.org/book.html#build
+
+
+
+ xerces:xercesImpl
+
+ We avoid adding our own Xerces jars to the classpath; see HBASE-16340.
+
+
+
+
+
+ banned-jsr305
+
+ enforce
+
+
+
+
+
+ com.google.code.findbugs:jsr305
+
+ We don't allow the JSR305 jar from the Findbugs project; see HBASE-16321.
+
+
+
+
+
+ banned-scala
+
+ enforce
+
+
+
+
+
+ org.scala-lang:scala-library
+
+ We don't allow Scala, see HBASE-13992.
+
+
+
+
+
+ banned-commons-logging
+
+ enforce
+
+
+
+
+
+ commons-logging:commons-logging
+
+ We don't use commons-logging any more, so do not depend on it directly.
+ false
+
+
+
+
+
+ banned-other-logging-framework
+
+ enforce
+
+
+
+
+
+ log4j:*
+ org.slf4j:slf4j-log4j12
+ ch.qos.reload4j:*
+ org.slf4j:slf4j-reload4j
+ ch.qos.logback:*
+
+ We do not allow other logging frameworks, as we now use log4j2
+
+
+
+
+
+ banned-jetty
+
+ enforce
+
+
+
+
+
+ org.eclipse.jetty:**
+
+ Use shaded jetty instead
+ false
+
+
+
+
+
+ banned-jersey
+
+ enforce
+
+
+
+
+
+ org.glassfish.jersey.containers:**
+ org.glassfish.jersey.core:**
+
+ Use shaded jersey instead
+ false
+
+
+
+
+
+ banned-htrace
+
+ enforce
+
+
+
+
+
+ org.apache.htrace:**
+
+ Use OpenTelemetry instead
+ false
+
+
+
+
+
+ check-aggregate-license
+
+ enforce
+
+
+ process-resources
+
+
+
+ File license = new File("${license.aggregate.path}");
+
+ // Beanshell does not support try-with-resources,
+ // so we must close this scanner manually
+ Scanner scanner = new Scanner(license);
+
+ while (scanner.hasNextLine()) {
+ if (scanner.nextLine().startsWith("ERROR:")) {
+ scanner.close();
+ return false;
+ }
+ }
+ scanner.close();
+ return true;
+ License errors detected; for more detail, find ERROR in
+ ${license.aggregate.path}
+
+
+ ${skip.license.check}
+
+
+
+ banned-illegal-imports
+
+ enforce
+
+ process-sources
+
+
+
+ true
+ 512
+ Use SLF4j for logging
+
+ org.apache.commons.logging.**
+ org.apache.log4j.**
+ org.apache.logging.log4j.**
+
+
+
+ org.apache.hadoop.hbase.logging.HBaseTestAppender
+
+
+
+ false
+ 512
+ Do not use log4j2 directly in code, see Log4jUtils in hbase-logging for more details.
+
+ org.apache.logging.log4j.**
+
+
+
+ true
+ 512
+ Use shaded version in hbase-thirdparty
+
+ com.google.common.**
+ io.netty.**
+ org.apache.commons.cli.**
+ org.apache.commons.collections.**
+ org.apache.commons.collections4.**
+
+
+
+ true
+ 512
+ Do not use shaded classes from other dependencies
+
+ org.apache.curator.shaded.**
+ org.apache.htrace.shaded.**
+
+
+
+ true
+ 512
+ Use shaded gson in hbase-thirdparty
+
+ org.codehaus.jackson.**
+
+
+
+ true
+ 512
+ Use commons lang 3
+
+ org.apache.commons.lang.**
+
+
+
+ true
+ 512
+ Use yetus IA and IS annotations
+
+ org.apache.hadoop.classification.**
+
+
+
+ true
+ 512
+ Do not use htrace
+
+ org.htrace.**
+ org.apache.htrace.**
+
+
+
+ true
+ 512
+ Use shaded jetty in hbase-thirdparty
+
+ org.eclipse.jetty.**
+
+
+
+ true
+ 512
+ Use shaded jersey in hbase-thirdparty
+
+ org.glassfish.jersey.**
+
+
+
+ true
+ 512
+ You should never use this style of annotation (i.e., 'this is for test only')
+ in IA.Public or IA.LimitedPrivate classes. Use IA.Private to tell users this is
+ not for public use.
+ For IA.Private classes, use the RestrictedApi annotation from Error Prone instead.
+
+ org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting
+
+
+
+ true
+ 512
+ Use shaded javax.ws.rs in hbase-thirdparty
+
+ javax.ws.rs.**
+
+
+
+ true
+ 512
+ Use shaded jackson-jaxrs-json-provider in hbase-thirdparty
+
+ com.fasterxml.jackson.jaxrs.**
+
+
+
+ true
+ 512
+ Use junit4 instead
+
+ junit.framework.**
+
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ xml-maven-plugin
+ ${xml.maven.version}
+ false
+
+
+
+
+
+ ${basedir}/hbase-common/src/main/resources/
+
+ hbase-default.xml
+
+ ${basedir}/src/main/xslt/configuration_to_asciidoc_chapter.xsl
+
+
+ ^(.*)\.xml$
+ $1.adoc
+
+
+ ${basedir}/target/asciidoc
+
+
+
+
+
+
+
+ transform
+
+ site
+
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+
+
+
+ spotbugs
+
+ false
+
+ ${basedir}/dev-support/spotbugs-exclude.xml
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+
+
+ org.apache.maven.plugins
+ maven-site-plugin
+ ${maven-site.version}
+
+ ${basedir}/src/site
+ ${basedir}/src/site/custom/project-info-report.properties
+ UTF-8
+ UTF-8
+
+
+
+
+ org.apache.maven.wagon
+ wagon-ssh
+ ${wagon.ssh.version}
+
+
+
+
+
+ org.asciidoctor
+ asciidoctor-maven-plugin
+ ${asciidoctor.plugin.version}
+ false
+
+ ${project.reporting.outputDirectory}/
+ book
+
+ ${project.version}
+ images
+ coderay
+
+
+
+
+ org.asciidoctor
+ asciidoctorj-pdf
+ ${asciidoctorj.pdf.version}
+
+
+
+
+ output-html
+
+ process-asciidoc
+
+ site
+
+
+ hbase.css
+
+ html5
+
+
+
+ output-pdf
+
+ process-asciidoc
+
+ site
+
+ pdf
+
+
+
+
+ -
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-resources-plugin
+
+ false
+
+ \
+
+
+
+ copy-htaccess
+
+ copy-resources
+
+ site
+
+ ${project.reporting.outputDirectory}/
+
+
+ ${basedir}/src/site/resources/
+
+ .htaccess
+
+
+
+
+
+
+
+ copy-empty-book-dir
+
+ copy-resources
+
+ site
+
+ ${project.reporting.outputDirectory}/
+
+
+ ${basedir}/src/site/resources/
+
+ book/**
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ ${maven.antrun.version}
+ false
+
+
+
+ rename-pdf
+
+ run
+
+ site
+
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ buildnumber-maven-plugin
+
+ yyyy
+ build.year
+
+
+
+
+ create-timestamp
+
+ validate
+
+
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ ${maven.bundle.version}
+ true
+ true
+
+
+ com.diffplug.spotless
+ spotless-maven-plugin
+ ${spotless.version}
+
+
+
+
+ **/generated/*
+ **/package-info.java
+
+
+
+ Remove unhelpful javadoc stubs
+ (?m)^ *\* *@(?:param|throws|return) *\w* *\n
+
+
+
+
+ Purge single returns tag multi line
+ (?m)^ */\*\*\n *\* *@return *(.*) *\n *\*/$
+ /** Returns $1 */
+
+
+ Purge single returns tag single line
+ ^ */\*\* *@return *(.*) *\*/$
+ /** Returns $1 */
+
+
+
+ ${session.executionRootDirectory}/dev-support/hbase_eclipse_formatter.xml
+
+
+ ${session.executionRootDirectory}/dev-support/eclipse.importorder
+
+
+
+
+
+
+
+ false
+
+
+
+
+
+
+
+ **/*.xml
+ **/*.sh
+ **/*.py
+ **/Jenkinsfile*
+ **/*.md
+ *.md
+ **/*.txt
+ *.txt
+
+
+ **/target/**
+ **/dependency-reduced-pom.xml
+
+
+
+
+
+
+
+
+ src/main/java/**/*.java
+ src/test/java/**/*.java
+
+
+ **/generated/*
+ **/package-info.java
+
+ src/main/java/org/apache/hadoop/hbase/util/AbstractByteRange.java
+ src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java
+ src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java
+
+ src/main/java/org/apache/hadoop/hbase/metrics/impl/HBaseMetrics2HadoopMetricsAdapter.java
+
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCFileReader.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCFileWriter.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCInputFormat.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCOutputFormat.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCRecord.java
+ src/test/java/org/apache/hadoop/hbase/test/util/warc/WARCWritable.java
+
+
+ ${session.executionRootDirectory}/dev-support/license-header
+ package
+
+
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+ kr.motd.maven
+ os-maven-plugin
+ ${os.maven.version}
+
+
+
+
+
+
+
+ maven-project-info-reports-plugin
+ ${maven.project.info.report.version}
+
+
+ false
+
+
+
+
+ dependencies
+ dependency-convergence
+ dependency-info
+ dependency-management
+ index
+ issue-management
+ licenses
+ mailing-lists
+ plugin-management
+ plugins
+ team
+ scm
+ summary
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+
+
+ apiNote
+ a
+ API Note:
+
+
+
+
+
+
+ devapi
+
+ aggregate-no-fork
+
+
+ devapidocs
+ Developer API
+ The full HBase API, including private and unstable APIs
+
+ **/generated/*
+ **/protobuf/*
+
+ org.apache.hadoop.hbase.tmpl.common:com.google.protobuf:org.apache.hadoop.hbase.generated*
+ private
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+ testdevapi
+
+ test-aggregate-no-fork
+
+
+ testdevapidocs
+ Developer API
+ The full HBase API test code, including private and unstable APIs
+
+ **/generated/*
+ **/protobuf/*
+
+ org.apache.hadoop.hbase.tmpl.common:com.google.protobuf:org.apache.hadoop.hbase.generated*
+ private
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+
+
+ userapi
+
+ aggregate-no-fork
+
+
+ org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
+
+ org.apache.yetus
+ audience-annotations
+ ${javadoc.audience-annotations.version}
+
+ true
+ apidocs
+ User API
+ The HBase Application Programmer's API
+ org.apache.hadoop.hbase.backup*:org.apache.hadoop.hbase.catalog:org.apache.hadoop.hbase.client.coprocessor:org.apache.hadoop.hbase.client.metrics:org.apache.hadoop.hbase.codec*:org.apache.hadoop.hbase.constraint:org.apache.hadoop.hbase.coprocessor.*:org.apache.hadoop.hbase.executor:org.apache.hadoop.hbase.fs:*.generated.*:org.apache.hadoop.hbase.io.hfile.*:org.apache.hadoop.hbase.mapreduce.hadoopbackport:org.apache.hadoop.hbase.mapreduce.replication:org.apache.hadoop.hbase.master.*:org.apache.hadoop.hbase.metrics*:org.apache.hadoop.hbase.migration:org.apache.hadoop.hbase.monitoring:org.apache.hadoop.hbase.p*:org.apache.hadoop.hbase.regionserver.compactions:org.apache.hadoop.hbase.regionserver.handler:org.apache.hadoop.hbase.regionserver.snapshot:org.apache.hadoop.hbase.replication.*:org.apache.hadoop.hbase.rest.filter:org.apache.hadoop.hbase.rest.model:org.apache.hadoop.hbase.rest.p*:org.apache.hadoop.hbase.security.*:org.apache.hadoop.hbase.thrift*:org.apache.hadoop.hbase.tmpl.*:org.apache.hadoop.hbase.tool:org.apache.hadoop.hbase.trace:org.apache.hadoop.hbase.util.byterange*:org.apache.hadoop.hbase.util.test:org.apache.hadoop.hbase.util.vint:org.apache.hadoop.metrics2*:org.apache.hadoop.hbase.io.compress*
+
+ false
+ **/generated/*
+ protected
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+
+ testuserapi
+
+ test-aggregate-no-fork
+
+
+ org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
+
+ org.apache.yetus
+ audience-annotations
+ ${javadoc.audience-annotations.version}
+
+ true
+ testapidocs
+ User API
+ The HBase Application Programmer's API
+ org.apache.hadoop.hbase.backup*:org.apache.hadoop.hbase.catalog:org.apache.hadoop.hbase.client.coprocessor:org.apache.hadoop.hbase.client.metrics:org.apache.hadoop.hbase.codec*:org.apache.hadoop.hbase.constraint:org.apache.hadoop.hbase.coprocessor.*:org.apache.hadoop.hbase.executor:org.apache.hadoop.hbase.fs:*.generated.*:org.apache.hadoop.hbase.io.hfile.*:org.apache.hadoop.hbase.mapreduce.hadoopbackport:org.apache.hadoop.hbase.mapreduce.replication:org.apache.hadoop.hbase.master.*:org.apache.hadoop.hbase.metrics*:org.apache.hadoop.hbase.migration:org.apache.hadoop.hbase.monitoring:org.apache.hadoop.hbase.p*:org.apache.hadoop.hbase.regionserver.compactions:org.apache.hadoop.hbase.regionserver.handler:org.apache.hadoop.hbase.regionserver.snapshot:org.apache.hadoop.hbase.replication.*:org.apache.hadoop.hbase.rest.filter:org.apache.hadoop.hbase.rest.model:org.apache.hadoop.hbase.rest.p*:org.apache.hadoop.hbase.security.*:org.apache.hadoop.hbase.thrift*:org.apache.hadoop.hbase.tmpl.*:org.apache.hadoop.hbase.tool:org.apache.hadoop.hbase.trace:org.apache.hadoop.hbase.util.byterange*:org.apache.hadoop.hbase.util.test:org.apache.hadoop.hbase.util.vint:org.apache.hadoop.metrics2*:org.apache.hadoop.hbase.io.compress*
+
+ false
+ **/generated/*
+ protected
+
+ true
+ true
+ 2
+ true
+ true
+ true
+ true
+ all
+ true
+ en_US
+
+ -J-Xmx2G
+
+
+
+ org.mockito
+ mockito-core
+ ${mockito.version}
+
+
+ org.hamcrest
+ hamcrest-core
+ ${hamcrest.version}
+
+
+
+ com.google.code.findbugs
+ jsr305
+ 3.0.2
+
+
+ false
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.version}
+
+ target/**
+
+
+
+
+
+
+
+
+
+ build-with-jdk8
+
+ 1.8
+
+
+ ${compileSource}
+ ${compileSource}
+
+
+
+ build-with-jdk11
+
+ [11,)
+
+
+ ${releaseTarget}
+
+ ${hbase-surefire.jdk11.flags}
+ ${hbase-surefire.argLine}
+ @{jacocoArgLine}
+
+ 2200m
+
+ 0.14.1
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+ ${maven.javadoc.version}
+
+ ${compileSource}
+
+ --ignore-source-errors
+
+ -J-Xmx2G
+ -J--add-exports
+ -Jjdk.javadoc/jdk.javadoc.internal.tool=ALL-UNNAMED
+
+
+
+
+
+
+
+
+ build-with-jdk17
+
+ [17,)
+
+
+ ${hbase-surefire.jdk11.flags}
+ ${hbase-surefire.jdk17.flags}
+ ${hbase-surefire.argLine}
+ @{jacocoArgLine}
+
+
+
+
+ jenkins.patch
+
+ false
+
+ HBasePatchProcess
+
+
+
+ 2
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+ false
+
+
+
+ run
+
+ validate
+
+
+ Maven Execution Environment
+ MAVEN_OPTS="${env.MAVEN_OPTS}"
+
+
+
+
+
+
+
+
+
+ jacoco
+
+ false
+
+
+ **/generated/**/*
+ **/generated/**/*,hbase-it/**,**/hbase-logging/**/*,**/hbase-testing-util/**/*,
+ **/hbase-protocol-shaded/**/*,**/hbase-external-blockcache/**/*,**/hbase-examples/**/*,
+ **/hbase-archetypes/**/*
+
+
+
+
+ org.jacoco
+ jacoco-maven-plugin
+ ${jacoco.version}
+
+
+ **/generated/**/*
+
+
+
+
+ prepare-agent
+
+ prepare-agent
+
+ initialize
+
+ jacocoArgLine
+ true
+
+
+
+ report
+
+ report
+
+ prepare-package
+
+
+
+
+ org.sonarsource.scanner.maven
+ sonar-maven-plugin
+ ${sonar-maven-plugin.version}
+
+
+
+
+
+ os.linux
+
+ false
+
+ Linux
+
+
+
+ ${os.name}-${os.arch}-${sun.arch.data.model}
+
+
+
+ os.mac
+
+
+ Mac
+
+
+
+ Mac_OS_X-${sun.arch.data.model}
+
+
+
+ os.windows
+
+
+ Windows
+
+
+
+ cygwin
+ ${hbase-surefire.cygwin-argLine} @{jacocoArgLine}
+
+
+
+
+ apache-release
+
+
+
+
+ org.sonatype.plugins
+ nexus-staging-maven-plugin
+ 1.6.8
+ true
+
+ https://repository.apache.org/
+ apache.releases.https
+
+
+
+
+
+
+
+ release
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+
+ check
+
+ package
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${enforcer.version}
+
+
+
+ ${compileSource}
+ HBase has unsupported dependencies.
+ HBase requires that all dependencies be compiled with version ${compileSource} or earlier
+ of the JDK to properly build from source. You appear to be using a newer dependency. You can use
+ either "mvn -version" or "mvn enforcer:display-info" to verify what version is active.
+ Non-release builds can temporarily build with a newer JDK version by setting the
+ 'compileSource' property (e.g. mvn -DcompileSource=1.8 clean package).
+
+ module-info
+
+
+
+
+
+
+ org.codehaus.mojo
+ extra-enforcer-rules
+ ${extra.enforcer.version}
+
+
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ 2.7.6
+
+
+
+ makeBom
+
+ package
+
+
+
+
+
+
+
+
+
+
+ hadoop-3.0
+
+
+ !hadoop.profile
+
+
+
+ ${hadoop-three.version}
+ src/main/assembly/hadoop-three-compat.xml
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-core
+ ${hadoop-three.version}
+
+
+ com.google.guava
+ guava
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+ org.codehaus.jackson
+ *
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ javax.servlet
+ servlet-api
+
+
+ javax.inject
+ javax.inject
+
+
+ com.google.guava
+ guava
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-app
+ ${hadoop-three.version}
+ test-jar
+
+
+ org.codehaus.jackson
+ jackson-mapper-asl
+
+
+ org.codehaus.jackson
+ jackson-core-asl
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+ org.codehaus.jackson
+ *
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-jobclient
+ ${hadoop-three.version}
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ javax.servlet
+ servlet-api
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-jobclient
+ ${hadoop-three.version}
+ test-jar
+ test
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ javax.servlet
+ servlet-api
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop-three.version}
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.sun.jersey
+ jersey-server
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ servlet-api
+
+
+ stax
+ stax-api
+
+
+ xerces
+ xercesImpl
+
+
+ org.codehaus.jackson
+ *
+
+
+ com.google.guava
+ guava
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ org.fusesource.leveldbjni
+ leveldbjni-all
+
+
+ org.openlabtesting.leveldbjni
+ leveldbjni-all
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop-three.version}
+ test-jar
+ test
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ servlet-api
+
+
+ stax
+ stax-api
+
+
+ xerces
+ xercesImpl
+
+
+ org.codehaus.jackson
+ *
+
+
+ com.google.guava
+ guava
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-auth
+ ${hadoop-three.version}
+
+
+ com.google.guava
+ guava
+
+
+ net.minidev
+ json-smart
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop-three.version}
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.sun.jersey
+ jersey-json
+
+
+ com.sun.jersey
+ jersey-servlet
+
+
+ com.sun.jersey
+ jersey-server
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ javax.servlet-api
+
+
+ stax
+ stax-api
+
+
+ io.netty
+ netty
+
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ junit
+ junit
+
+
+ org.codehaus.jackson
+ *
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+
+
+
+ javax.activation
+ javax.activation-api
+ 1.2.0
+ test
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop-three.version}
+ test-jar
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+ org.codehaus.jackson
+ *
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.xml.bind
+ jaxb-api
+
+
+ javax.ws.rs
+ jsr311-api
+
+
+
+
+
+ org.apache.hadoop
+ hadoop-client
+ ${hadoop-three.version}
+
+
+ org.apache.hadoop
+ hadoop-annotations
+ ${hadoop-three.version}
+
+
+
+ org.apache.hadoop
+ hadoop-minicluster
+ ${hadoop-three.version}
+
+
+
+ commons-httpclient
+ commons-httpclient
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ javax.servlet
+ servlet-api
+
+
+ stax
+ stax-api
+
+
+ io.netty
+ netty
+
+
+ io.netty
+ netty-all
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ log4j
+ log4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+
+
+ org.apache.hadoop
+ hadoop-minikdc
+ ${hadoop-three.version}
+ test
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ bouncycastle
+ bcprov-jdk15
+
+
+
+
+ org.apache.hadoop
+ hadoop-distcp
+ ${hadoop-three.version}
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs-client
+ ${hadoop-three.version}
+
+
+
+
+
+
+
+
+ singleJVMTests
+
+ false
+
+
+ 1
+ false
+ true
+
+
+
+
+
+ runSmallTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.SmallTests
+
+
+
+
+
+ runMediumTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MediumTests
+
+
+
+
+
+ runLargeTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.LargeTests
+
+
+
+
+
+ runDevTests
+
+ false
+
+
+ 1
+ false
+ false
+ org.apache.hadoop.hbase.testclassification.SmallTests
+ org.apache.hadoop.hbase.testclassification.MediumTests
+
+
+
+
+ runAllTests
+
+ false
+
+
+ false
+ false
+ org.apache.hadoop.hbase.testclassification.SmallTests
+ org.apache.hadoop.hbase.testclassification.MediumTests,org.apache.hadoop.hbase.testclassification.LargeTests
+
+
+
+ runMiscTests
+
+ false
+
+
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MiscTests
+
+
+
+
+ runCoprocessorTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.CoprocessorTests
+
+
+
+
+ runClientTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.ClientTests
+
+
+
+
+ runMasterTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MasterTests
+
+
+
+
+ runMapredTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MapredTests
+
+
+
+
+ runMapreduceTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.MapReduceTests
+
+
+
+
+ runRegionServerTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RegionServerTests
+
+
+
+
+ runVerySlowMapReduceTests
+
+ false
+
+
+ 2
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
+
+
+
+
+
+ runVerySlowRegionServerTests
+
+ false
+
+
+ 2
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests
+
+
+
+
+
+ runFilterTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.FilterTests
+
+
+
+
+ runIOTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.IOTests
+
+
+
+
+ runRestTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RestTests
+
+
+
+
+ runRPCTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RPCTests
+
+
+
+
+ runReplicationTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.ReplicationTests
+
+
+
+
+ runSecurityTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.SecurityTests
+
+
+
+
+ runFlakeyTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.FlakeyTests
+
+
+
+
+ runZKTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.ZKTests
+
+
+
+
+ runRSGroupTests
+
+ false
+
+
+ 1
+ 1
+ false
+ true
+ org.apache.hadoop.hbase.testclassification.RSGroupTests
+
+
+
+
+
+
+ localTests
+
+
+ test
+
+
+
+ surefire-junit4
+ false
+ true
+
+
+
+
+
+ clover
+
+ false
+
+ clover
+
+
+
+ ${user.home}/.clover.license
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+
+ com.atlassian.maven.plugins
+ maven-clover2-plugin
+ ${clover.version}
+
+
+
+
+ com.atlassian.maven.plugins
+ maven-clover2-plugin
+ ${clover.version}
+
+ true
+ true
+ 50%
+ true
+ true
+
+ **/generated/**
+
+
+
+
+ clover-setup
+
+ setup
+
+ process-sources
+
+
+ clover
+
+ clover
+
+ site
+
+
+
+
+
+
+
+
+ site-install-step
+
+ true
+ true
+ true
+ true
+ true
+ true
+
+
+
+
+ site-build-step
+
+ true
+ true
+ true
+ true
+ true
+ true
+ true
+
+
+
+ eclipse-specific
+
+
+ m2e.version
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-eclipse-plugin
+ ${maven.eclipse.version}
+
+
+
+ org.eclipse.m2e
+ lifecycle-mapping
+ ${lifecycle.mapping.version}
+
+
+
+
+
+ org.jacoco
+ jacoco-maven-plugin
+ [0.6.2.201302030002,)
+
+ prepare-agent
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+ ${enforcer.version}
+
+ enforce
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-remote-resources-plugin
+ [1.5,)
+
+ process
+ bundle
+
+
+
+
+
+
+
+
+ org.codehaus.mojo
+ buildnumber-maven-plugin
+ [1.3,)
+
+ create-timestamp
+
+
+
+
+ true
+ true
+
+
+
+
+
+
+
+
+
+
+
+
+ aarch64
+
+
+ linux
+ aarch64
+
+
+
+ org.openlabtesting.protobuf
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.conf
new file mode 100644
index 00000000..d5ccdcab
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data
new file mode 100644
index 00000000..49dd7e4e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestMultiVersions.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L491!!!org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists!!!org.apache.hadoop.fs.FileSystem.exists!!!FSUtils.java:494!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.conf
new file mode 100644
index 00000000..ae32237b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data
new file mode 100644
index 00000000..62344dac
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.TestRegionReplicationLagEvaluation.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/RegionReplicaFlushHandler.java#L107!!!org.apache.hadoop.hbase.regionserver.handler.RegionReplicaFlushHandler.triggerFlushInPrimaryRegion!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!RegionReplicaFlushHandler.java:114!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.conf
new file mode 100644
index 00000000..bb834696
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data
new file mode 100644
index 00000000..0bf27050
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.backup.TestBackupDeleteRestore.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L963!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.performBulkLoad!!!org.apache.hadoop.hbase.tool.BulkLoadHFilesTool.bulkLoadPhase!!!BulkLoadHFilesTool.java:990!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.conf
new file mode 100644
index 00000000..383bd64b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data
new file mode 100644
index 00000000..342154a3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestAsyncTable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L5026!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5060!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.conf
new file mode 100644
index 00000000..fff77a77
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data
new file mode 100644
index 00000000..9c88deb7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientSideRegionScanner.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L409!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmOpened!!!TransitRegionStateProcedure.java:437!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.conf
new file mode 100644
index 00000000..3a43c78e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data
new file mode 100644
index 00000000..a44778da
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestClientTimeouts.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java#L250!!!org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection!!!org.apache.hadoop.net.NetUtils.connect!!!BlockingRpcConnection.java:259!!!java.net.SocketException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.conf
new file mode 100644
index 00000000..cb7e668f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestEnableTable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.conf
new file mode 100644
index 00000000..94e807cb
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data
new file mode 100644
index 00000000..488008f1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.client.TestMetaWithReplicasShutdownHandling.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone!!!RemoteProcedureResultReporter.java:89!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.conf
new file mode 100644
index 00000000..73ff36cd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestClassLoading.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.conf
new file mode 100644
index 00000000..efd03df7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data
new file mode 100644
index 00000000..b97f8839
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L593!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.write!!!FSUtils.java:598!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.conf
new file mode 100644
index 00000000..e69de29b
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.data
new file mode 100644
index 00000000..b97f8839
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestRefreshHFilesEndpoint.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L593!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.write!!!FSUtils.java:598!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.conf
new file mode 100644
index 00000000..9563643f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data
new file mode 100644
index 00000000..5bf1851d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RemoteProcedureResultReporter.java#L71!!!org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run!!!org.apache.hadoop.hbase.regionserver.HRegionServer.reportProcedureDone!!!RemoteProcedureResultReporter.java:89!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.conf
new file mode 100644
index 00000000..ddd75ec1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data
new file mode 100644
index 00000000..f55d877d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L589!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!FanOutOneBlockAsyncDFSOutputHelper.java:591!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.conf
new file mode 100644
index 00000000..8ce0058d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data
new file mode 100644
index 00000000..2e8b30d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.mapreduce.TestTableMapReduceUtil.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2524!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2544!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.conf
new file mode 100644
index 00000000..27480884
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data
new file mode 100644
index 00000000..7c637b59
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.TestRetainAssignmentOnRestartSplitWithoutZk.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L593!!!org.apache.hadoop.hbase.util.FSUtils.setClusterId!!!org.apache.hadoop.fs.FileSystem.rename!!!FSUtils.java:608!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.conf
new file mode 100644
index 00000000..1547bbc9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data
new file mode 100644
index 00000000..f55d877d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionAssignedToMultipleRegionServers.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java#L589!!!org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile!!!org.apache.hadoop.hdfs.protocol.ClientProtocol.complete!!!FanOutOneBlockAsyncDFSOutputHelper.java:591!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.conf
new file mode 100644
index 00000000..fa27b8d9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.assignment.TestRegionReplicaSplit.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.conf
new file mode 100644
index 00000000..7e376b76
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data
new file mode 100644
index 00000000..b0c79f70
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureMasterRestarts.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotRegionCallable.java#L57!!!org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!SnapshotRegionCallable.java:58!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.conf
new file mode 100644
index 00000000..011eaeaa
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data
new file mode 100644
index 00000000..b0c79f70
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.master.procedure.TestSnapshotProcedureRSCrashes.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotRegionCallable.java#L57!!!org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!SnapshotRegionCallable.java:58!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.conf
new file mode 100644
index 00000000..20247157
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data
new file mode 100644
index 00000000..ee608d9c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.procedure2.TestChildProcedures.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java#L402!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease!!!org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.getLogFiles!!!WALProcedureStore.java:410!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.conf
new file mode 100644
index 00000000..5d7abee8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data
new file mode 100644
index 00000000..342154a3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.quotas.TestSuperUserQuotaPermissions.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L5026!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5060!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.conf
new file mode 100644
index 00000000..2f0d763d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data
new file mode 100644
index 00000000..432f7dce
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBootstrapNodeManager.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BootstrapNodeManager.java#L135!!!org.apache.hadoop.hbase.regionserver.BootstrapNodeManager.getFromMaster!!!org.apache.hadoop.hbase.util.FutureUtils.get!!!BootstrapNodeManager.java:140!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.conf
new file mode 100644
index 00000000..99593176
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data
new file mode 100644
index 00000000..2dc7a9e7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestBulkLoadReplicationHFileRefs.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L539!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint!!!ReplicationSource.java:552!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.conf
new file mode 100644
index 00000000..21d0c5c5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data
new file mode 100644
index 00000000..5506033e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestCompactionArchiveConcurrentClose.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java#L548!!!org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile!!!org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose!!!HFileArchiver.java:566!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.conf
new file mode 100644
index 00000000..29dc8945
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data
new file mode 100644
index 00000000..2e8b30d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.TestRegionServerCrashDisableWAL.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2524!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2544!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.conf
new file mode 100644
index 00000000..0115d5d5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data
new file mode 100644
index 00000000..cc983cac
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.regionserver.wal.TestFSHLog.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java#L914!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive!!!org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archiveLogFile!!!AbstractFSWAL.java:916!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.conf
new file mode 100644
index 00000000..3ef2bb84
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data
new file mode 100644
index 00000000..44b4c95c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.TestSerialReplicationFailover.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L136!!!org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState!!!org.apache.hadoop.hbase.master.MasterServices.getProcedures!!!ServerCrashProcedure.java:272!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.conf
new file mode 100644
index 00000000..a9e31ef1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data
new file mode 100644
index 00000000..0f616884
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamFSHLog.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished!!!SyncReplicationReplayWALProcedure.java:75!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.conf
new file mode 100644
index 00000000..0e646cf9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data
new file mode 100644
index 00000000..0f616884
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestDrainReplicationQueuesForStandBy.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALProcedure.java#L59!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState!!!org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALManager.isReplayWALFinished!!!SyncReplicationReplayWALProcedure.java:75!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.conf
new file mode 100644
index 00000000..ce4da4cd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data
new file mode 100644
index 00000000..2dc7a9e7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestRefreshRecoveredReplication.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L539!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize!!!org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initAndStartReplicationEndpoint!!!ReplicationSource.java:552!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.conf
new file mode 100644
index 00000000..41700205
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data
new file mode 100644
index 00000000..2e8b30d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.replication.regionserver.TestReplicator.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2524!!!org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub!!!org.apache.hadoop.hbase.security.UserProvider.getCurrent!!!HRegionServer.java:2544!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.conf
new file mode 100644
index 00000000..5a9a57c7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data
new file mode 100644
index 00000000..8ed03bb5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestDeleteRow.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-protocol-shaded/target/generated-sources/protobuf/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java#L5026!!!org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom!!!org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream.readBool!!!RPCProtos.java:5055!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.conf
new file mode 100644
index 00000000..a3bf63be
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data
new file mode 100644
index 00000000..49dd7e4e
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rest.TestSecurityHeadersFilter.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java#L491!!!org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists!!!org.apache.hadoop.fs.FileSystem.exists!!!FSUtils.java:494!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.conf
new file mode 100644
index 00000000..373f6db4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data
new file mode 100644
index 00000000..85eb4119
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.rsgroup.TestRSGroupsBalance.java.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1019!!!org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.moveRegionsBetweenGroups!!!moveAsync!!!RSGroupInfoManagerImpl.java:1037!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.conf
new file mode 100644
index 00000000..35f7c318
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data
new file mode 100644
index 00000000..aac0fbe6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/FlushSnapshotSubprocedure.java#L113!!!org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call!!!org.apache.hadoop.hbase.regionserver.HRegion.flush!!!FlushSnapshotSubprocedure.java:114!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.conf
new file mode 100644
index 00000000..3cc01cc8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data
new file mode 100644
index 00000000..a4e361b9
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.util.TestRegionMoverWithRSGroupEnable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/util/MoveWithAck.java#L76!!!org.apache.hadoop.hbase.util.MoveWithAck.call!!!org.apache.hadoop.hbase.client.Admin.move!!!MoveWithAck.java:82!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.conf
new file mode 100644
index 00000000..d3f5a448
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data
new file mode 100644
index 00000000..ca47f588
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hadoop.hbase.wal.TestSyncReplicationWALProvider.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/DualAsyncFSWAL.java#L76!!!org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance!!!org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter!!!DualAsyncFSWAL.java:82!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.conf
new file mode 100644
index 00000000..88b0419c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data
new file mode 100644
index 00000000..9c88deb7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hbase/test-plan/hbase_retry_locations-org.apache.hbase.archetypes.exemplars.client.TestHelloHBase.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hbase/tree//89ca7f4//hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java#L409!!!org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState!!!confirmOpened!!!TransitRegionStateProcedure.java:437!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive.conf
new file mode 100644
index 00000000..6888129b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive.conf
@@ -0,0 +1,3 @@
+retry_data_file: /home/bastoica/projects/current/wasabi/tool/config/hive/hive_retry_locations.data
+injection_policy: max-count
+max_injection_count: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_retry_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_retry_bounds.data
new file mode 100644
index 00000000..7e745685
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_retry_bounds.data
@@ -0,0 +1,32 @@
+Var name!!!Assigned value!!!Assign method!!!Test class
+HIVE_SERVER2_THRIFT_CLIENT_CONNECTION_RETRY_LIMIT!!!0!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_SERVER2_THRIFT_CLIENT_CONNECTION_RETRY_LIMIT!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_SERVER2_THRIFT_CLIENT_CONNECTION_RETRY_LIMIT!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_SERVER2_THRIFT_CLIENT_RETRY_LIMIT!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestRetryingThriftCLIServiceClient
+HIVE_LOCK_SLEEP_BETWEEN_RETRIES!!!100!!!org.apache.hadoop.hive.conf.HiveConf.setTimeVar!!!TestConcurrentDppInserts
+HIVE_LOCK_NUMRETRIES!!!2!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestDbTxnManager2
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestPermsGrp
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHiveClientCache
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatMultiOutputFormat
+METASTORE_THRIFT_FAILURE_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatPartitionPublish
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatClient
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestPermsGrp
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHiveClientCache
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatMultiOutputFormat
+METASTORE_THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.conf.HiveConf.setIntVar!!!TestHCatPartitionPublish
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestFilterHooks
+THRIFT_CONNECTION_RETRIES!!!10!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStoreGetMetaConf
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStorePartitionSpecs
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStoreWithEnvironmentContext
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHmsServerAuthorization
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEndFunctionListener
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEventListener
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEventListenerOnlyOnCommit
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreEventListenerWithOldConf
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestMetaStoreInitListener
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestRetryingHMSHandler
+THRIFT_CONNECTION_RETRIES!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestHiveMetaStoreAuthorizer
+HMS_HANDLER_ATTEMPTS!!!4!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestObjectStoreInitRetry
+HMS_HANDLER_ATTEMPTS!!!2!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestRetryingHMSHandler
+HIVE_COMPACTOR_CLEANER_MAX_RETRY_ATTEMPTS!!!3!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.setLongVar!!!TestCleaner
+JOB_TIMEOUT_TASK_RETRY_COUNT!!!4!!!org.apache.hadoop.conf.Configuration.setInt!!!TestConcurrentJobRequestsThreadsAndTimeout
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_retry_locations.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_retry_locations.data
new file mode 100644
index 00000000..11cc901d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_retry_locations.data
@@ -0,0 +1,74 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java#L134!!!org.apache.hadoop.hive.llap.ProactiveEviction.run!!!evictEntity!!!ProactiveEviction.java:143!!!org.apache.hive.service.ServiceException
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/YarnQueueHelper.java#L120!!!org.apache.hadoop.hive.ql.exec.tez.YarnQueueHelper.checkQueueAccessInternal!!!checkQueueAccessFromSingleRm!!!YarnQueueHelper.java:131!!!java.io.IOException
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/leader/LeaseLeaderElection.java#L175!!!org.apache.hadoop.hive.metastore.leader.LeaseLeaderElection.tryBeLeader!!!lock!!!LeaseLeaderElection.java:177!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/blob/e427ce0d572c9adf6f194693a1b3ba85f246f3b7/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/NotificationListener.java#L304!!!org.apache.hive.hcatalog.listener.NotificationListener.send!!!createProducer!!!NotificationListener.java:316!!!JMSException
+https://github.com/apache/hive/tree//e427ce0//common/src/java/org/apache/hive/common/util/RetryUtilities.java#L87!!!org.apache.hive.common.util.RetryUtilities$ExponentiallyDecayingBatchWork.run!!!org.apache.hive.common.util.RetryUtilities$ExponentialBackOffRetry.execute!!!RetryUtilities.java:93!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//common/src/test/org/apache/hive/common/util/Retry.java#L59!!!org.apache.hive.common.util.Retry$RetryingStatement.evaluate!!!org.junit.runners.model.Statement.evaluate!!!Retry.java:61!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!DruidStorageHandlerUtils.java:774!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.rename!!!DruidStorageHandlerUtils.java:776!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.exists!!!DruidStorageHandlerUtils.java:777!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/LauncherDelegator.java#L234!!!org.apache.hive.hcatalog.templeton.LauncherDelegator.killTempletonJobWithRetry!!!org.apache.hive.hcatalog.templeton.LauncherDelegator.killJob!!!LauncherDelegator.java:237!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L379!!!org.apache.hive.jdbc.HiveConnection.HiveConnection!!!org.apache.hive.jdbc.HiveConnection.executeInitSql!!!HiveConnection.java:387!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L379!!!org.apache.hive.jdbc.HiveConnection.HiveConnection!!!org.apache.hive.jdbc.HiveConnection.openSession!!!HiveConnection.java:386!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L379!!!org.apache.hive.jdbc.HiveConnection.HiveConnection!!!org.apache.hive.jdbc.HiveConnection.openTransport!!!HiveConnection.java:382!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//kafka-handler/src/java/org/apache/hadoop/hive/kafka/RetryUtils.java#L90!!!org.apache.hadoop.hive.kafka.RetryUtils.retry!!!org.apache.hadoop.hive.kafka.RetryUtils$Task.perform!!!RetryUtils.java:93!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//llap-client/src/java/org/apache/hadoop/hive/registry/impl/ZkRegistryBase.java#L609!!!org.apache.hadoop.hive.registry.impl.ZkRegistryBase.ensureInstancesCache!!!org.apache.curator.framework.recipes.cache.PathChildrenCache.start!!!ZkRegistryBase.java:644!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//llap-common/src/java/org/apache/hadoop/hive/llap/AsyncPbRpcProxy.java#L442!!!org.apache.hadoop.hive.llap.AsyncPbRpcProxy$AsyncCallableRequest.call!!!org.apache.hadoop.hive.llap.AsyncPbRpcProxy$AsyncCallableRequest.callInternal!!!AsyncPbRpcProxy.java:444!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/repl/atlas/RetryingClientTimeBased.java#L49!!!org.apache.hadoop.hive.ql.exec.repl.atlas.RetryingClientTimeBased.invokeWithRetry!!!java.util.concurrent.Callable.call!!!RetryingClientTimeBased.java:52!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.hadoop.hive.ql.Context.checkHeartbeaterLockException!!!TezJobMonitor.java:171!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!Retryable.java:75!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.security.UserGroupInformation.getLoginUser!!!Retryable.java:75!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.reloginExpiringKeytabUser!!!Retryable.java:74!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3133!!!org.apache.hadoop.hive.ql.exec.Utilities.executeWithRetry!!!org.apache.hadoop.hive.ql.exec.Utilities$SQLCommand.run!!!Utilities.java:3144!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3173!!!org.apache.hadoop.hive.ql.exec.Utilities.connectWithRetry!!!java.sql.DriverManager.getConnection!!!Utilities.java:3144!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3214!!!org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry!!!java.sql.Connection.prepareStatement!!!Utilities.java:3225!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.maybeRolloverWriterForDay!!!HiveProtoLoggingHook.java:327!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.writeProto!!!HiveProtoLoggingHook.java:328!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.tez.dag.history.logging.proto.ProtoMessageWriter.hflush!!!HiveProtoLoggingHook.java:329!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.tez.dag.history.logging.proto.DatePartitionedLogger.getWriter!!!HiveProtoLoggingHook.java:321!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L114!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getMS!!!DbLockManager.java:104!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L114!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.metastore.IMetaStoreClient.checkLock!!!DbLockManager.java:118!!!org.apache.hadoop.hive.metastore.api.NoSuchLockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/EmbeddedLockManager.java#L113!!!org.apache.hadoop.hive.ql.lockmgr.EmbeddedLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.EmbeddedLockManager.lockPrimitive!!!EmbeddedLockManager.java:117!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L298!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.lockPrimitive!!!ZooKeeperHiveLockManager.java:306!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L487!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.unlockWithRetry!!!org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.unlockPrimitive!!!ZooKeeperHiveLockManager.java:493!!!org.apache.hadoop.hive.ql.lockmgr.LockException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java#L232!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.getFilesToRetry!!!CopyUtils.java:257!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java#L232!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyOnce!!!CopyUtils.java:268!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/cli/thrift/RetryingThriftCLIServiceClient.java#L290!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.connectWithRetry!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.connect!!!RetryingThriftCLIServiceClient.java:292!!!org.apache.hive.service.cli.HiveSQLException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/cli/thrift/RetryingThriftCLIServiceClient.java#L382!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.invoke!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.connectWithRetry!!!RetryingThriftCLIServiceClient.java:391!!!org.apache.hive.service.cli.HiveSQLException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/cli/thrift/RetryingThriftCLIServiceClient.java#L382!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.invoke!!!org.apache.hive.service.cli.thrift.RetryingThriftCLIServiceClient.invokeInternal!!!RetryingThriftCLIServiceClient.java:385!!!org.apache.hive.service.cli.HiveSQLException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/server/HiveServer2.java#L1089!!!org.apache.hive.service.server.HiveServer2.startHiveServer2!!!org.apache.hive.service.server.HiveServer2.init!!!HiveServer2.java:1112!!!org.apache.hive.service.ServiceException
+https://github.com/apache/hive/tree//e427ce0//service/src/java/org/apache/hive/service/server/HiveServer2.java#L1089!!!org.apache.hive.service.server.HiveServer2.startHiveServer2!!!org.apache.hive.service.server.HiveServer2.start!!!HiveServer2.java:1113!!!org.apache.hive.service.ServiceException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createHttpClient!!!HiveMetaStoreClient.java:798!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createBinaryClient!!!HiveMetaStoreClient.java:800!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.thrift.transport.TTransport.open!!!HiveMetaStoreClient.java:816!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClient.java:848!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Iface.set_ugi!!!HiveMetaStoreClient.java:849!!!org.apache.thrift.TException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java#L178!!!org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.reloginExpiringKeytabUser!!!RetryingMetaStoreClient.java:175!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java#L178!!!org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke!!!org.apache.hadoop.security.UserGroupInformation.doAs!!!RetryingMetaStoreClient.java:184!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readBool!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readEnum!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readInt64!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readStringRequireUtf8!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.CodedInputStream.readTag!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-common/target/generated-sources/org/apache/hadoop/hive/metastore/grpc/HiveMetastore.java#L323587!!!org.apache.hadoop.hive.metastore.grpc.HiveMetastore$CompactionInfoStruct.CompactionInfoStruct!!!com.google.protobuf.GeneratedMessageV3.parseUnknownField!!!N/A!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L11654!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor.run!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor$Command.process!!!ObjectStore.java:11999!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java#L138!!!org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal!!!org.apache.hadoop.hive.metastore.MetaStoreInit.updateConnectionURL!!!RetryingHMSHandler.java:172!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java#L138!!!org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal!!!org.apache.hadoop.hive.metastore.Deadline.startTimer!!!RetryingHMSHandler.java:89!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.getInt!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.getLong!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.getString!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!java.sql.ResultSet.next!!!N/A!!!java.sql.SQLException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L414!!!org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findReadyToClean!!!org.apache.hadoop.hive.metastore.txn.TxnUtils.dbCompactionType2ThriftType!!!N/A!!!org.apache.hadoop.hive.metastore.api.MetaException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java#L847!!!org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils.loopUntilHMSReady!!!java.net.Socket.close!!!MetaStoreServerUtils.java:917!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java#L847!!!org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils.loopUntilHMSReady!!!java.net.Socket.connect!!!MetaStoreServerUtils.java:916!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/RetryUtilities.java#L85!!!org.apache.hadoop.hive.metastore.utils.RetryUtilities$ExponentiallyDecayingBatchWork.run!!!org.apache.hadoop.hive.metastore.utils.RetryUtilities$ExponentialBackOffRetry.execute!!!RetryUtilities.java:91!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Iface.set_ugi!!!HiveMetaStoreClientPreCatalog.java:560!!!org.apache.thrift.TException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.conf.MetastoreConf.getPassword!!!HiveMetaStoreClientPreCatalog.java:462!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Client.createClientTransport!!!HiveMetaStoreClientPreCatalog.java:507!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Client.createClientTransport!!!HiveMetaStoreClientPreCatalog.java:514!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getSSLSocket!!!HiveMetaStoreClientPreCatalog.java:470!!!org.apache.thrift.transport.TTransportException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getTokenStrForm!!!HiveMetaStoreClientPreCatalog.java:502!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClientPreCatalog.java:559!!!java.io.IOException
+https://github.com/apache/hive/tree//e427ce0//standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java#L447!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClientPreCatalog.open!!!org.apache.thrift.transport.TTransport.open!!!HiveMetaStoreClientPreCatalog.java:542!!!org.apache.thrift.transport.TTransportException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_timeout_bounds.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_timeout_bounds.data
new file mode 100644
index 00000000..34df60d1
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/hive_timeout_bounds.data
@@ -0,0 +1,424 @@
+TestAbortedTxnCleaner.testAbortedCleaningWithThreeTxnsWithDiffWriteIds
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesBelowBase
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesForMultiplePartitions
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesForSinglePartition
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesForUnpartitionedTables
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesOnTopOfBase
+TestAbortedTxnCleaner.testCleaningOfAbortedDirectoriesWithLongRunningOpenWriteTxn
+TestCliDriverMethods.testprocessInitFiles
+TestCliDriverMethods.testProcessSelectDatabase
+TestCliDriverMethods.testRun
+TestCLIServiceConnectionLimits.testConnectionForwardedIpAddresses
+TestCLIServiceConnectionLimits.testConnectionLimitPerIpAddress
+TestCLIServiceConnectionLimits.testConnectionLimitPerUser
+TestCLIServiceConnectionLimits.testConnectionLimitPerUserIpAddress
+TestCLIServiceConnectionLimits.testConnectionMultipleLimitsIPAndUserIP
+TestCLIServiceConnectionLimits.testConnectionMultipleLimitsUserAndIP
+TestCLIServiceConnectionLimits.testConnectionMultipleLimitsUserIPAndUser
+TestCLIServiceConnectionLimits.testIncrementAndDecrementConnectionsUser
+TestCLIServiceConnectionLimits.testInvalidIpaddress
+TestCLIServiceConnectionLimits.testInvalidUserIpaddress
+TestCLIServiceConnectionLimits.testInvalidUserName
+TestCLIServiceConnectionLimits.testNoLimit
+TestCLIServiceRestore.testRestore
+TestColumnAccess.testJoinTable1AndTable2
+TestColumnAccess.testJoinView1AndTable2
+TestColumnAccess.testQueryTable1
+TestColumnAccess.testShowPartitions
+TestCommands.testBasicReplEximCommands
+TestCommands.testBeelineCommands
+TestCommands.testDropDatabaseCommand
+TestCommands.testMetadataReplEximCommands
+TestCommands.testNoopReplEximCommands
+TestCommandWithSpace.testCommandWithPrefixSpace
+TestCompactionMetrics.testInitiatorPerfMetricsEnabled
+TestCompactionMetrics.testOldestReadyForCleaningAge
+TestCompactionMetrics.testWorkerPerfMetrics
+TestDbTxnManager.testDDLExclusive
+TestDbTxnManager.testDDLNoLock
+TestDbTxnManager.testDDLShared
+TestDbTxnManager.testDelete
+TestDbTxnManager.testExceptions
+TestDbTxnManager.testHeartbeater
+TestDbTxnManager.testHeartbeaterReplicationTxn
+TestDbTxnManager.testJoin
+TestDbTxnManager.testLockAcquisitionAndRelease
+TestDbTxnManager.testReadWrite
+TestDbTxnManager.testRollback
+TestDbTxnManager.testSingleReadMultiPartition
+TestDbTxnManager.testSingleReadPartition
+TestDbTxnManager.testSingleReadTable
+TestDbTxnManager.testSingleWritePartition
+TestDbTxnManager.testSingleWriteTable
+TestDbTxnManager.testUpdate
+TestDbTxnManager.testWriteDynamicPartition
+TestDbTxnManager2.testMergePartitioned
+TestDbTxnManagerIsolationProperties.testRebuildMVWhenOpenTxnPresents
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromCleanerWithAcidMetricsThreadDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromCleanerWithMetricsDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromInitiatorWithAcidMetricsThreadDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromInitiatorWithMetricsDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromWorkerWithAcidMetricsThreadDisabled
+TestDeltaFilesMetricFlags.testDeltaFilesMetricFromWorkerWithMetricsDisabled
+TestDeltaFilesMetrics.testDeltaFileMetricMultiPartitionedTable
+TestDeltaFilesMetrics.testDeltaFileMetricPartitionedTable
+TestDeltaFilesMetrics.testDeltaFileMetricUnpartitionedTable
+TestDMLSemanticAnalyzer.testDeleteAllNonPartitioned
+TestDMLSemanticAnalyzer.testDeleteAllPartitioned
+TestDMLSemanticAnalyzer.testDeleteAllWherePartitioned
+TestDMLSemanticAnalyzer.testDeleteOnePartition
+TestDMLSemanticAnalyzer.testDeleteOnePartitionWhere
+TestDMLSemanticAnalyzer.testDeleteWhereNoPartition
+TestDMLSemanticAnalyzer.testInsertSelect
+TestDMLSemanticAnalyzer.testInsertValues
+TestDMLSemanticAnalyzer.testInsertValuesPartitioned
+TestDMLSemanticAnalyzer.testUpdateAllNonPartitioned
+TestDMLSemanticAnalyzer.testUpdateAllNonPartitionedWhere
+TestDMLSemanticAnalyzer.testUpdateAllPartitioned
+TestDMLSemanticAnalyzer.testUpdateAllPartitionedWhere
+TestDMLSemanticAnalyzer.testUpdateOnePartition
+TestDMLSemanticAnalyzer.testUpdateOnePartitionWhere
+TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithoutPurge
+TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithPurge
+TestDruidStorageHandler.testCommitInsertIntoTable
+TestDruidStorageHandler.testCommitInsertIntoWhenDestinationSegmentFileExist
+TestDruidStorageHandler.testCommitInsertIntoWithConflictingIntervalSegment
+TestDruidStorageHandler.testCommitInsertIntoWithNonExtendableSegment
+TestDruidStorageHandler.testCommitInsertOverwriteTable
+TestDruidStorageHandler.testCommitInsertTable
+TestDruidStorageHandler.testCommitMultiInsertOverwriteTable
+TestDruidStorageHandler.testInsertIntoAppendOneMorePartition
+TestDummyTxnManager.testSingleReadTable
+TestE2EScenarios.testReadOrcAndRCFromPig
+TestEmbeddedLockManager.testLocking
+TestExecDriver.testMapPlan1
+TestExecDriver.testMapPlan2
+TestExecDriver.testMapRedPlan1
+TestExecDriver.testMapRedPlan2
+TestExecDriver.testMapRedPlan3
+TestExecDriver.testMapRedPlan4
+TestExecDriver.testMapRedPlan5
+TestExecDriver.testMapRedPlan6
+TestExprProcessorGetFuncExpr.testLookupFunctionOnDemand
+TestFileSinkOperator.testDeleteDynamicPartitioning
+TestFileSinkOperator.testInsertDynamicPartitioning
+TestFileSinkOperator.testNonAcidDynamicPartitioning
+TestFileSinkOperator.testNonAcidRemoveDuplicate
+TestFileSinkOperator.testNonAcidWrite
+TestFileSinkOperator.testUpdateDynamicPartitioning
+TestFilterHooks.testHMSClientWithFilter
+TestFilterHooks.testHMSClientWithoutFilter
+TestFilterHooks.testHMSServerWithFilter
+TestFilterHooks.testHMSServerWithoutFilter
+TestGenericUDTFGetSQLSchema.testWithComplexTypes
+TestGenericUDTFGetSQLSchema.testWithDDL
+TestGenericUDTFGetSQLSchema.testWithSimpleTypes
+TestGetInputSummary.testGetInputSummaryWithInputEstimator
+TestGetPartitionAuthWithBatches.testSmallNumberOfPartitions
+TestGetPartitionInBatches.testGetAllPartitionsOf
+TestHBaseQueries.testRollbackDoesNotDeleteOriginTableWhenCTLTFails
+TestHCatClient.testBasicDDLCommands
+TestHCatClient.testCreateTableLike
+TestHCatClient.testDatabaseLocation
+TestHCatClient.testDropPartitionsWithPartialSpec
+TestHCatClient.testDropTableException
+TestHCatClient.testEmptyTableInstantiation
+TestHCatClient.testGetMessageBusTopicName
+TestHCatClient.testGetPartitionsWithPartialSpec
+TestHCatClient.testObjectNotFoundException
+TestHCatClient.testOtherFailure
+TestHCatClient.testPartitionRegistrationWithCustomSchema
+TestHCatClient.testPartitionSchema
+TestHCatClient.testPartitionsHCatClientImpl
+TestHCatClient.testPartitionSpecRegistrationWithCustomSchema
+TestHCatClient.testRenameTable
+TestHCatClient.testReplicationTaskIter
+TestHCatClient.testTableSchemaPropagation
+TestHCatClient.testTransportFailure
+TestHCatClient.testUpdateTableSchema
+TestHCatDynamicPartitioned.testHCatDynamicPartitionedTable
+TestHCatDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
+TestHCatExternalDynamicPartitioned.testHCatExternalDynamicCustomLocation
+TestHCatInputFormat.testBadRecordHandlingPasses
+TestHCatInputFormatMethods.testGetPartitionAndDataColumns
+TestHCatLoaderComplexSchema.testMapNullKey
+TestHCatLoaderComplexSchema.testMapWithComplexData
+TestHCatLoaderComplexSchema.testSyntheticComplexSchema
+TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag
+TestHCatLoaderEncryption.testReadDataFromEncryptedHiveTableByPig
+TestHCatLoaderStorer.testReadWrite
+TestHCatLoaderStorer.testSmallTinyInt
+TestHCatMultiOutputFormat.testOutputFormat
+TestHCatNonPartitioned.testHCatNonPartitionedTable
+TestHCatOutputFormat.testGetTableSchema
+TestHCatOutputFormat.testSetOutput
+TestHCatPartitioned.testHCatPartitionedTable
+TestHCatPartitionPublish.testPartitionPublish
+TestHCatStorerMulti.testStorePartitionedTable
+TestHCatStorerWrapper.testStoreExternalTableWithExternalDir
+TestHive.testAutoPurgeTablesAndPartitions
+TestHive.testDropMissingPartitionsByFilter
+TestHive.testDropPartitionsWithPurge
+TestHive.testDropTableTrash
+TestHive.testGetAndDropTables
+TestHive.testGetPartitionsWithMaxLimit
+TestHive.testHiveCloseCurrent
+TestHive.testHiveRefreshOnConfChange
+TestHive.testMetaStoreApiTiming
+TestHive.testPartition
+TestHive.testTable
+TestHive.testThriftTable
+TestHive.testWmNamespaceHandling
+TestHiveAuthorizationTaskFactory.testGrantGroupTable
+TestHiveAuthorizationTaskFactory.testGrantRoleGroup
+TestHiveAuthorizationTaskFactory.testGrantRoleRole
+TestHiveAuthorizationTaskFactory.testGrantRoleTable
+TestHiveAuthorizationTaskFactory.testGrantRoleUser
+TestHiveAuthorizationTaskFactory.testGrantServer
+TestHiveAuthorizationTaskFactory.testGrantUri
+TestHiveAuthorizationTaskFactory.testGrantUserTable
+TestHiveAuthorizationTaskFactory.testRevokeGroupTable
+TestHiveAuthorizationTaskFactory.testRevokeRoleGroup
+TestHiveAuthorizationTaskFactory.testRevokeRoleRole
+TestHiveAuthorizationTaskFactory.testRevokeRoleTable
+TestHiveAuthorizationTaskFactory.testRevokeRoleUser
+TestHiveAuthorizationTaskFactory.testRevokeUserTable
+TestHiveCli.testCmd
+TestHiveCli.testCommentStripping
+TestHiveCli.testDatabaseOptions
+TestHiveCli.testErrOutput
+TestHiveCli.testInValidCmd
+TestHiveCli.testInvalidDatabaseOptions
+TestHiveCli.testNoErrorDB
+TestHiveCli.testSetHeaderValue
+TestHiveCli.testSetPromptValue
+TestHiveCli.testSourceCmd
+TestHiveCli.testSourceCmd3
+TestHiveCli.testSourceCmd4
+TestHiveCli.testSqlFromCmd
+TestHiveCli.testSqlFromCmdWithComments1
+TestHiveCli.testSqlFromCmdWithComments2
+TestHiveCli.testSqlFromCmdWithComments3
+TestHiveCli.testSqlFromCmdWithDBName
+TestHiveCli.testSqlFromCmdWithEmbeddedQuotes
+TestHiveCli.testUseCurrentDB1
+TestHiveCli.testUseCurrentDB2
+TestHiveCli.testUseCurrentDB3
+TestHiveCli.testUseInvalidDB
+TestHiveCli.testVariables
+TestHiveCli.testVariablesForSource
+TestHiveClientCache.testCacheExpiry
+TestHiveClientCache.testCacheHit
+TestHiveClientCache.testCacheMiss
+TestHiveClientCache.testCloseAllClients
+TestHiveClientCache.testMultipleThreadAccess
+TestHiveDecimalParse.testDecimalType
+TestHiveDecimalParse.testDecimalType1
+TestHiveDecimalParse.testDecimalType2
+TestHiveDecimalParse.testDecimalType3
+TestHiveDecimalParse.testDecimalType4
+TestHiveDecimalParse.testDecimalType5
+TestHiveDecimalParse.testDecimalType6
+TestHiveDecimalParse.testDecimalType7
+TestHiveDecimalParse.testDecimalType8
+TestHiveDecimalParse.testDecimalType9
+TestHiveFunctionHelper.testGetUDTFFunction
+TestHiveFunctionHelper.testGetUDTFFunctionThrowingException
+TestHiveMetaStoreChecker.testSingleThreadedDeeplyNestedTables
+TestHiveMetaStoreClientApiArgumentsChecker.testGetPartitionNames2
+TestHiveMetaStoreClientApiArgumentsChecker.testGetPartitions
+TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault
+TestHiveMetaStoreTxns.testAllocateTableWriteIdForReadOnlyTxn
+TestHiveMetaStoreTxns.testGetLatestCommittedCompactionInfo
+TestHiveMetaStoreTxns.testGetValidWriteIds
+TestHiveMetaStoreTxns.testLocks
+TestHiveMetaStoreTxns.testLocksWithTxn
+TestHiveMetaStoreTxns.testOpenReadOnlyTxnExcluded
+TestHiveMetaStoreTxns.testOpenTxnNotExcluded
+TestHiveMetaStoreTxns.testOpenTxnWithType
+TestHiveMetaStoreTxns.testTxns
+TestHiveMetaStoreTxns.testTxnTypePersisted
+TestHiveMetaStoreTxns.testTxNWithKeyValue
+TestHiveMetaStoreTxns.testTxNWithKeyValueNoTableId
+TestHiveMetaStoreTxns.testTxNWithKeyWrongPrefix
+TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext
+TestHivePrivilegeObjectOwnerNameAndType.testActionTypeForPartitionedTable
+TestHivePrivilegeObjectOwnerNameAndType.testOwnerNames
+TestHivePrivilegeObjectOwnerNameAndType.testOwnerType
+TestHivePrivilegeObjectOwnerNameAndType.testSingleInstanceOfHPOForPartitionedTable
+TestHiveProtoLoggingHook.testFailureEventLog
+TestHiveProtoLoggingHook.testNonPartionedTable
+TestHiveProtoLoggingHook.testPartitionedTable
+TestHiveProtoLoggingHook.testPostEventLog
+TestHiveProtoLoggingHook.testPreAndPostEventBoth
+TestHiveProtoLoggingHook.testPreEventLog
+TestHiveProtoLoggingHook.testQueueLogs
+TestHiveProtoLoggingHook.testRolloverFiles
+TestHiveStrictManagedMigration.testUpgrade
+TestHMSFetchPartitionsWithoutCols.testPartitionsWithoutCols
+TestHmsServerAuthorization.testGetFields
+TestHooks.testQueryRedactor
+TestHS2HttpServer.testApiServletActiveSessions
+TestHS2HttpServer.testApiServletHistoricalQueries
+TestHS2HttpServerPamConfiguration.testPamCorrectConfiguration
+TestHS2HttpServerPamConfiguration.testPamServicesAreNotConfigured
+TestHS2HttpServerPamConfiguration.testSslIsFalse
+TestInitiator.testFindUserToRunAs
+TestInitiator.testInitiatorFailure
+TestInitiator.testInitiatorHostAndVersion
+TestInitiator.testMetaCache
+TestListPartitions.testListPartitionSpecsByFilterInvalidFilter
+TestListPartitionsWithXIncludeParams.testListPartitionsByExr
+TestLlapZookeeperRegistryImpl.testRegister
+TestLlapZookeeperRegistryImpl.testUpdate
+TestMacroSemanticAnalyzer.testDropMacro
+TestMacroSemanticAnalyzer.testDropMacroDoesNotExist
+TestMacroSemanticAnalyzer.testDropMacroExistsDoNotIgnoreErrors
+TestMacroSemanticAnalyzer.testDropMacroNonExistent
+TestMacroSemanticAnalyzer.testDropMacroNonExistentWithIfExists
+TestMacroSemanticAnalyzer.testDropMacroNonExistentWithIfExistsDoNotIgnoreNonExistent
+TestMacroSemanticAnalyzer.testOneInputParamters
+TestMacroSemanticAnalyzer.testOneUnusedParameterName
+TestMacroSemanticAnalyzer.testThreeDuplicateParameters
+TestMacroSemanticAnalyzer.testThreeInputParamters
+TestMacroSemanticAnalyzer.testTwoDuplicateParameterNames
+TestMacroSemanticAnalyzer.testTwoInputParamters
+TestMacroSemanticAnalyzer.testTwoUnusedParameterNames
+TestMacroSemanticAnalyzer.testUnknownInputParameter
+TestMacroSemanticAnalyzer.testZeroInputParamters
+TestMetaStoreAcidCleanup.testDropDatabaseShouldRollback_whenAcidCleanupFails
+TestMetaStoreAcidCleanup.testDropTableShouldRollback_whenAcidCleanupFails
+TestMetaStoreEndFunctionListener.testEndFunctionListener
+TestMetaStoreEventListener.testListener
+TestMetaStoreEventListener.testMetaConfDuplicateNotification
+TestMetaStoreEventListener.testMetaConfNotifyListenersClosingClient
+TestMetaStoreEventListener.testMetaConfNotifyListenersNonClosingClient
+TestMetaStoreEventListener.testMetaConfSameHandler
+TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
+TestMetaStoreEventListenerWithOldConf.testMetaConfDuplicateNotification
+TestMetaStoreEventListenerWithOldConf.testMetaConfNotifyListenersClosingClient
+TestMetaStoreEventListenerWithOldConf.testMetaConfNotifyListenersNonClosingClient
+TestMetaStoreEventListenerWithOldConf.testMetaConfSameHandler
+TestMetastoreExpr.testPartitionExpr
+TestMetaStoreListenersError.testEventListenerException
+TestMetaStoreListenersError.testInitListenerException
+TestMetastoreScheduledQueries.testCreate
+TestMetastoreScheduledQueries.testCreateWithInvalidSchedule
+TestMetastoreScheduledQueries.testDeleteNonExistent
+TestMetastoreScheduledQueries.testDisable1
+TestMetastoreScheduledQueries.testDisable2
+TestMetastoreScheduledQueries.testDuplicateCreate
+TestMetastoreScheduledQueries.testExclusivePoll
+TestMetastoreScheduledQueries.testNonExistent
+TestMetastoreScheduledQueries.testNormalDelete
+TestMetastoreScheduledQueries.testNormalDeleteWithExec
+TestMetastoreScheduledQueries.testPoll
+TestMetastoreScheduledQueries.testSkip2
+TestMetastoreScheduledQueries.testUpdate
+TestMsckCreatePartitionsInBatches.testSmallNumberOfPartitions
+TestMsckDropPartitionsInBatches.testSmallNumberOfPartitions
+TestMSCKRepairOnAcid.testAddPartitionMinorCompacted
+TestObjectStore.testMaxEventResponse
+TestObjectStore.testNotificationOps
+TestOperationLogManager.testGetOperationLog
+TestOperationLogManager.testOperationLogManager
+TestOperators.testFetchOperatorContext
+TestOperators.testLlapMemoryOversubscriptionMaxExecutorsPerQueryCalculation
+TestPartitionManagement.testNoPartitionDiscoveryForReplTable
+TestPartitionManagement.testNoPartitionRetentionForReplTarget
+TestPartitionManagement.testPartitionDiscoveryDBPattern
+TestPartitionManagement.testPartitionDiscoveryDisabledByDefault
+TestPartitionManagement.testPartitionDiscoveryEnabledBothTableTypes
+TestPartitionManagement.testPartitionDiscoveryNonDefaultCatalog
+TestPartitionManagement.testPartitionDiscoverySkipInvalidPath
+TestPartitionManagement.testPartitionDiscoveryTablePattern
+TestPartitionManagement.testPartitionDiscoveryTransactionalTable
+TestPartitionManagement.testPartitionExprFilter
+TestPartitionManagement.testPartitionRetention
+TestPartitionNameWhitelistValidation.testAddPartitionWithCommas
+TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
+TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal
+TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
+TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
+TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
+TestPassProperties.testSequenceTableWriteReadMR
+TestPermsGrp.testCustomPerms
+TestPlainSaslHelper.testDoAsSetting
+TestPluggableHiveSessionImpl.testSessionImpl
+TestPluggableHiveSessionImpl.testSessionImplWithUGI
+TestPrivilegesV1.testPrivInGrant
+TestPrivilegesV1.testPrivInGrantNotAccepted
+TestPrivilegesV2.testPrivInGrant
+TestHCatMultiOutputFormat.testOutputFormat
+TestQBCompact.testBogusLevel
+TestQBCompact.testMajor
+TestQBCompact.testMinor
+TestQBCompact.testNonPartitionedTable
+TestQueryHooks.testAllQueryLifeTimeHooks
+TestQueryHooks.testAllQueryLifeTimeWithParseHooks
+TestQueryHooks.testQueryLifeTimeWithCompileError
+TestQueryHooks.testQueryLifeTimeWithParseHooksWithCompileError
+TestQueryHooks.testQueryLifeTimeWithParseHooksWithParseError
+TestQueryLifeTimeHooksWithSQLOperation.testQueryInfoInHookContext
+TestReadEntityDirect.testSelectEntityDirect
+TestReadEntityDirect.testSelectEntityInDirect
+TestReadEntityDirect.testSelectEntityInDirectJoinAlias
+TestReadEntityDirect.testSelectEntityViewDirectJoin
+TestReadEntityDirect.testSelectEntityViewDirectUnion
+TestReaderWriter.test
+TestRemoteHiveMetastoreWithHttpJwt.testExpiredJWT
+TestRemoteHiveMetastoreWithHttpJwt.testInvalidJWT
+TestRemoteHiveMetastoreWithHttpJwt.testValidJWT
+TestReplicationMetrics.testAddMetrics
+TestReplicationMetrics.testDeleteMetrics
+TestReplicationMetrics.testGetMetricsByScheduleId
+TestReplicationMetrics.testUpdateMetrics
+TestReplicationMetricUpdateOnFailure.testReplLoadFailure
+TestReplicationMetricUpdateOnFailure.testReplLoadNonRecoverableMissingStage
+TestReplicationMetricUpdateOnFailure.testReplLoadRecoverableMissingStage
+TestReplicationTask.testCreate
+TestRetryable.testRetrySuccessSecureCallable
+TestRetryingThriftCLIServiceClient.testRetryBehaviour
+TestRetryingThriftCLIServiceClient.testSessionLifeAfterTransportClose
+TestRetryingThriftCLIServiceClient.testTransportClose
+TestRuntimeStats.testCleanup
+TestRuntimeStats.testReading
+TestRuntimeStats.testRuntimeStatHandling
+TestSemanticAnalysis.testStoredAs
+TestSemanticAnalyzerFactory.testCreate
+TestSemanticAnalyzerFactory.testDrop
+TestSessionCleanup.testTempSessionFileCleanup
+TestSessionGlobalInitFile.testSessionGlobalInitFile
+TestSessionHiveMetastoreClientAddPartitionsTempTable.testAddPartitionsNullLocationInTableToo
+TestSessionHiveMetastoreClientAlterPartitionsTempTable.testAlterPartitionsCheckRollbackNullPartition
+TestSessionHiveMetastoreClientExchangePartitionsTempTable.testExchangePartitionsNonExistingPartLocation
+TestSessionHiveMetastoreClientListPartitionsTempTable.testListPartitionsSpecByExprNullResult
+TestSessionHooks.testProxyUser
+TestSessionHooks.testSessionHook
+TestSessionManagerMetrics.testAbandonedSessionMetrics
+TestSessionManagerMetrics.testActiveSessionTimeMetrics
+TestSessionManagerMetrics.testOpenSessionMetrics
+TestSessionManagerMetrics.testOpenSessionTimeMetrics
+TestSessionManagerMetrics.testThreadPoolMetrics
+TestShowPartitionAnalyzer.testGetShowPartitionsFilter
+TestStatsUpdaterThread.testPartitionsWithDifferentColsAll
+TestStreamingDynamicPartitioning.testWriteBeforeBegin
+TestSymlinkTextInputFormat.testCombine
+TestTempAcidTable.testTempFullAcidTableTranslate
+TestTempAcidTable.testTempInsertOnlyTableTranslate
+TestTezTask.testBuildDag
+TestTezTask.testEmptyWork
+TestTxnCommands.testMergeUpdateDelete
+TestTxnCommands3.testSdpoBucketed
+TestTxnCommandsForMmTable.testInsertOverwriteForMmTable
+TestTxnConcatenate.testConcatenate
+TestTxnExIm.testMMFlatSource
+TestTxnNoBuckets.testInsertFromUnion
+TestUpgradeTool.testPostUpgrade
+TestUseDatabase.testAlterTablePass
+TestViewEntity.testSubQueryInSubView
+TestViewEntity.testUnionAllInSubView
+TestViewEntity.testUnionView
+TestViewEntity.testViewInSubQuery
+TestViewEntity.testViewInSubQueryWithWhereClauseCbo
+TestViewEntity.testViewInSubQueryWithWhereClauseRbo
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/pom-hive-standalone-metastore.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/pom-hive-standalone-metastore.xml
new file mode 100644
index 00000000..d6c23715
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/pom-hive-standalone-metastore.xml
@@ -0,0 +1,719 @@
+
+
+
+ 4.0.0
+
+ org.apache
+ apache
+ 23
+
+ org.apache.hive
+ hive-standalone-metastore
+ 4.0.0-beta-2-SNAPSHOT
+ pom
+ Hive Standalone Metastore
+
+ metastore-common
+ metastore-server
+ metastore-tools
+
+
+ 4.0.0-beta-2-SNAPSHOT
+ 4.0.0-beta-2
+ .
+
+ UTF-8
+ UTF-8
+ 1.8
+ 1.8
+ false
+ ${settings.localRepository}
+ 3.1.0
+ ${basedir}/${standalone.metastore.path.to.root}/checkstyle
+
+ ${project.basedir}/src/test/resources
+ ${project.build.directory}/tmp
+ ${project.build.directory}/warehouse
+ ${project.build.directory}/external
+ file://
+ 1
+ true
+ org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest
+
+ 1.0b3
+ 2.17
+ 2.16.0
+ 3.0.0-M4
+
+ 4.9.3
+ 1.5.7
+ 3.12.0
+ 1.1.3
+ 2.9.0
+ 1.1.0-incubating
+ 5.2.8
+ 5.2.10
+ 3.2.0-release
+ 5.2.10
+ 10.14.2.0
+ 2.5.0
+ 6.2.1.jre8
+ 8.0.31
+ 42.5.1
+ 21.3.0.0
+ 0.1.2
+
+ 3.1.0
+ 22.0
+ 3.3.6
+ 4.0.3
+ 2.13.5
+ 3.3
+ 5.5.1
+ 4.13.2
+ 5.6.2
+ 5.6.3
+ 0.9.3
+ 0.16.0
+ 2.18.0
+ 3.3.3
+ 1.8.5
+ 3.21.7
+ 1.51.0
+ 1.9.0
+ 2.14.6
+ 4.0.4
+ 4.0.0-beta-2-SNAPSHOT
+ 1.9.4
+ 1.3
+ 5.2.0
+ 3.7.2
+ 9.1.6
+ 4.0.3
+ 2.8.4
+ 1.7.30
+ 4.4.13
+ 4.5.13
+ 4.5.5
+ 9.31
+ 9.4.40.v20210413
+ 1.3.2
+ 5.2.24.RELEASE
+
+ you-must-set-this-to-run-thrift
+ ${basedir}/src/gen/thrift
+ -I ${thrift.home} -strict --gen java:beans,generated_annotations=undated --gen cpp --gen php --gen py --gen rb
+
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+
+
+
+
+ org.apache.orc
+ orc-core
+ ${orc.version}
+
+
+ com.fasterxml.jackson
+ jackson-bom
+ ${jackson.version}
+ pom
+ import
+
+
+ com.github.joshelser
+ dropwizard-metrics-hadoop-metrics2-reporter
+ ${dropwizard-metrics-hadoop-metrics2-reporter.version}
+
+
+ com.google.guava
+ guava
+ ${guava.version}
+
+
+ com.google.protobuf
+ protobuf-java
+ ${protobuf.version}
+
+
+ com.zaxxer
+ HikariCP
+ ${hikaricp.version}
+
+
+ io.dropwizard.metrics
+ metrics-core
+ ${dropwizard.version}
+
+
+ io.dropwizard.metrics
+ metrics-jvm
+ ${dropwizard.version}
+
+
+ io.dropwizard.metrics
+ metrics-json
+ ${dropwizard.version}
+
+
+ javolution
+ javolution
+ ${javolution.version}
+
+
+ org.antlr
+ antlr4-runtime
+ ${antlr.version}
+
+
+ org.antlr
+ ST4
+ ${ST4.version}
+
+
+ org.apache.commons
+ commons-lang3
+ ${commons-lang3.version}
+
+
+ org.apache.datasketches
+ datasketches-hive
+ ${datasketches.version}
+
+
+ org.slf4j
+ slf4j-simple
+
+
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-test
+
+
+ org.apache.curator
+ curator-client
+
+
+ org.apache.curator
+ curator-framework
+
+
+ org.apache.curator
+ curator-recipes
+
+
+ org.eclipse.jetty
+ *
+
+
+
+
+ org.apache.hadoop
+ hadoop-distcp
+ ${hadoop.version}
+ provided
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop.version}
+
+
+ org.eclipse.jetty
+ *
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs-client
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-core
+ ${hadoop.version}
+
+
+ org.jline
+ jline
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hive
+ hive-storage-api
+ ${storage-api.version}
+
+
+ org.apache.commons
+ commons-dbcp2
+ ${commons-dbcp2.version}
+
+
+ org.apache.logging.log4j
+ log4j-slf4j-impl
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-1.2-api
+ ${log4j2.version}
+
+
+ org.apache.thrift
+ libfb303
+ ${libfb303.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${libthrift.version}
+
+
+ org.datanucleus
+ datanucleus-api-jdo
+ ${datanucleus-api-jdo.version}
+
+
+ org.datanucleus
+ datanucleus-core
+ ${datanucleus-core.version}
+
+
+ org.datanucleus
+ datanucleus-rdbms
+ ${datanucleus-rdbms.version}
+
+
+ org.datanucleus
+ javax.jdo
+ ${datanucleus-jdo.version}
+
+
+ org.skyscreamer
+ jsonassert
+ 1.4.0
+ test
+
+
+ sqlline
+ sqlline
+ ${sqlline.version}
+
+
+ jline
+ jline
+ ${jline.version}
+
+
+ commons-logging
+ commons-logging
+ ${commons-logging.version}
+
+
+ com.cronutils
+ cron-utils
+ ${cron-utils.version}
+
+
+ com.github.ben-manes.caffeine
+ caffeine
+ ${caffeine.version}
+
+
+ org.slf4j
+ slf4j-api
+ ${slf4j.version}
+
+
+ org.springframework
+ spring-jdbc
+ ${spring.version}
+
+
+ org.springframework
+ spring-core
+ ${spring.version}
+
+
+
+ com.microsoft.sqlserver
+ mssql-jdbc
+ ${mssql.version}
+ runtime
+
+
+ com.oracle.database.jdbc
+ ojdbc8
+ ${oracle.version}
+ runtime
+
+
+ com.mysql
+ mysql-connector-j
+ ${mysql.version}
+ runtime
+
+
+ org.apache.derby
+ derby
+ ${derby.version}
+ runtime
+
+
+ org.mariadb.jdbc
+ mariadb-java-client
+ ${mariadb.version}
+ runtime
+
+
+ org.postgresql
+ postgresql
+ ${postgres.version}
+ runtime
+
+
+ org.apache.httpcomponents
+ httpcore
+ ${httpcomponents.core.version}
+
+
+ org.eclipse.jetty
+ jetty-util
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-server
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-servlet
+ ${jetty.version}
+
+
+
+ junit
+ junit
+ ${junit.version}
+ test
+
+
+ org.junit.jupiter
+ junit-jupiter-engine
+ ${junit.jupiter.version}
+ test
+
+
+ org.junit.vintage
+ junit-vintage-engine
+ ${junit.vintage.version}
+ test
+
+
+ org.apache.directory.server
+ apacheds-server-integ
+ ${apache-directory-server.version}
+ test
+
+
+ dom4j
+ dom4j
+
+
+
+
+ org.apache.directory.server
+ apacheds-test-framework
+ ${apache-directory-server.version}
+ test
+
+
+ org.mockito
+ mockito-core
+ ${mockito-core.version}
+ test
+
+
+
+ org.hamcrest
+ hamcrest-all
+ ${hamcrest.version}
+ test
+
+
+ org.apache.curator
+ curator-test
+ ${curator.version}
+ test
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+
+
+
+
+ org.slf4j
+ slf4j-simple
+ ${slf4j.version}
+ test
+
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+ com.fasterxml.jackson.core
+ jackson-databind
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+
+
+ org.codehaus.mojo
+ versions-maven-plugin
+ ${maven.versions.plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+ ${maven.surefire.plugin.version}
+
+ false
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.plugin.version}
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-assembly-plugin
+
+
+ assemble
+ package
+
+ single
+
+
+ apache-${project.artifactId}-${project.version}
+
+ tar.gz
+
+
+ src/assembly/src.xml
+
+ posix
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+
+ ${checkstyle.conf.dir}/checkstyle.xml
+ config_loc=${checkstyle.conf.dir}
+ true
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ process-resources
+
+ check
+
+
+
+
+
+ *.patch
+ DEV-README
+ **/src/main/sql/**
+ **/README.md
+ **/*.iml
+ **/*.txt
+ **/*.log
+ **/*.arcconfig
+ **/package-info.java
+ **/*.properties
+ **/*.q
+ **/*.q.out
+ **/*.xml
+ **/gen/**
+ **/patchprocess/**
+ **/metastore_db/**
+ **/test/resources/**/*.ldif
+ **/test/resources/sql/**
+ **/test/resources/**/*.json
+
+
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+
+
+ javadoc
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+ none
+ false
+
+
+
+ attach-javadocs
+
+ jar
+
+
+
+
+
+
+
+
+ spotbugs
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${standalone.metastore.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${standalone.metastore.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/pom-hive.xml b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/pom-hive.xml
new file mode 100644
index 00000000..1310a6da
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/pom-hive.xml
@@ -0,0 +1,2115 @@
+
+
+
+ 4.0.0
+
+ org.apache
+ apache
+ 23
+
+ org.apache.hive
+ hive
+ 4.0.0-beta-2-SNAPSHOT
+ pom
+ Hive
+ https://hive.apache.org
+
+ storage-api
+ accumulo-handler
+ vector-code-gen
+ beeline
+ classification
+ cli
+ common
+ contrib
+ druid-handler
+ hbase-handler
+ jdbc-handler
+ hcatalog
+ hplsql
+ jdbc
+ metastore
+ parser
+ udf
+ ql
+ serde
+ service-rpc
+ service
+ streaming
+ llap-common
+ llap-client
+ llap-ext-client
+ llap-tez
+ llap-server
+ shims
+ kudu-handler
+ testutils
+ packaging
+ standalone-metastore
+ kafka-handler
+
+
+ 4.0.0-beta-2-SNAPSHOT
+ 4.0.0-beta-2
+
+ 1.8
+ 1.8
+ false
+ ${settings.localRepository}
+ .
+ standalone
+ ${basedir}/${hive.path.to.root}/checkstyle
+
+ ${project.groupId}:${project.artifactId}
+
+
+
+ ${maven.test.classpath}
+ file://
+ ${project.build.directory}/tmp
+ ${project.build.directory}/testconf
+ file://${test.tmp.dir}
+
+ INFO
+ ${project.build.directory}/warehouse
+ ${project.build.directory}/localfs/warehouse
+ pfile://
+
+
+
+ 1.0b3
+ -Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4
+ 2.17
+ 3.4.0
+ 2.10
+ 3.1.0
+ 2.16.0
+ 3.5.0
+ 3.0.0-M4
+ 2.7.10
+ 2.3.0
+
+ 1.10.1
+ 1.10.13
+ 3.5.2
+
+ 4.9.3
+ 1.5.7
+
+ 12.0.0
+ 1.12.0
+ 1.11.3
+ 1.68
+ 1.25.0
+ 5.2.8
+ 5.2.10
+ 3.2.0-release
+ 5.2.10
+ 1.5.0
+ 1.15
+ 3.2.2
+ 4.1
+ 1.23.0
+ 1.10
+ 1.1
+ 2.12.0
+ 2.11.1
+ 3.12.0
+ 3.6.1
+ 2.9.0
+ 1.10.0
+ 10.14.2.0
+ 3.1.0
+ 0.1.2
+ 0.17.1
+ 2.2.4
+ 1.12.0
+ 22.0
+ 2.4.21
+ 2.2.220
+ 3.3.6
+ ${basedir}/${hive.path.to.root}/testutils/hadoop
+ 1.3
+ 2.5.6-hadoop3
+ 0.7.2
+
+ 3.3.7
+ 4.0.3
+
+ 4.5.13
+ 4.4.13
+ 2.5.2
+ 2.13.5
+ 2.3.4
+ 2.4.1
+ 3.1.0
+ 5.5.1
+ 1.5.4
+ 9.4.45.v20220203
+ 1.19
+ 2.14.6
+ 2.0.2
+ 2.9.9
+ 6.0.0
+ 1.8
+ 4.13.2
+ 5.6.2
+ 5.6.3
+ 2.5.0
+ 5.5.0
+ 1.11.9
+ 1.12.0
+
+ 0.9.3
+ 0.16.0
+ 2.18.0
+ 2.5.0
+ 6.2.1.jre8
+ 8.0.31
+ 42.5.1
+ 21.3.0.0
+ 2.3
+ 1.8.5
+ 3.4.4
+ 4.11.0
+ 2.0.0-M5
+ 4.1.77.Final
+ 3.10.5.Final
+
+ 4.5.5
+ 2.8
+ 1.13.1
+ 0.16.0
+ 1.5.6
+ 3.21.7
+ 1.0.1
+ 1.7.30
+ 4.0.4
+ 4.0.0-beta-2-SNAPSHOT
+ 0.10.2
+ 2.2.0
+ 1.1
+ 1.1.10.4
+ 1.4
+ 2.3
+ 2.12.2
+ 2.3.4
+ 3.7.2
+ 1.1
+ 2.4.0
+ 5.2.0
+ 3.0.0
+ 2.9.0
+ 0.10.5
+ 1.2
+ 2.0.1
+ 2.8.0
+ 3.0.11
+ 1.1.0-incubating
+ 4.0.3
+ 1.1.0.Final
+ 1.0.1
+ 1.12.499
+ 2.4.0
+ 5.2.24.RELEASE
+
+
+ 1.9.8.M1
+ 1.13
+ 1.0.0
+
+
+
+
+
+ central
+ central
+ https://repo.maven.apache.org/maven2
+ default
+
+ true
+ warn
+
+
+
+ repository-release
+ https://repository.apache.org/content/repositories/releases/
+
+ true
+
+
+ true
+
+
+
+
+ shibboleth
+ https://build.shibboleth.net/nexus/content/groups/public
+
+ true
+ warn
+
+
+ false
+
+
+
+
+
+
+
+ com.amazonaws
+ aws-java-sdk-bundle
+ ${aws-java-sdk.version}
+
+
+ io.netty
+ *
+
+
+
+
+ com.amazonaws.secretsmanager
+ aws-secretsmanager-caching-java
+ ${aws-secretsmanager-caching.version}
+
+
+ com.amazonaws
+ aws-java-sdk-secretsmanager
+
+
+
+
+ com.esotericsoftware
+ kryo
+ ${kryo.version}
+
+
+ com.esotericsoftware
+ reflectasm
+ ${reflectasm.version}
+
+
+ com.google.guava
+ guava
+ ${guava.version}
+
+
+ com.google.protobuf
+ protobuf-java
+ ${protobuf.version}
+
+
+ com.google.code.tempus-fugit
+ tempus-fugit
+ ${tempus-fugit.version}
+
+
+ org.hamcrest
+ hamcrest-core
+
+
+
+
+ com.zaxxer
+ HikariCP
+ ${hikaricp.version}
+
+
+ com.thoughtworks.paranamer
+ paranamer
+ ${paranamer.version}
+
+
+ org.apache.parquet
+ parquet
+ ${parquet.version}
+
+
+ org.apache.parquet
+ parquet-column
+ ${parquet.version}
+ tests
+
+
+ org.apache.parquet
+ parquet-hadoop-bundle
+ ${parquet.version}
+
+
+ com.sun.jersey
+ jersey-core
+ ${jersey.version}
+
+
+ com.sun.jersey
+ jersey-json
+ ${jersey.version}
+
+
+ com.sun.jersey
+ jersey-server
+ ${jersey.version}
+
+
+ com.sun.jersey.contribs
+ wadl-resourcedoc-doclet
+ ${wadl-resourcedoc-doclet.version}
+
+
+ com.sun.jersey
+ jersey-servlet
+ ${jersey.version}
+
+
+ commons-cli
+ commons-cli
+ ${commons-cli.version}
+
+
+ commons-codec
+ commons-codec
+ ${commons-codec.version}
+
+
+ commons-collections
+ commons-collections
+ ${commons-collections.version}
+
+
+ org.apache.commons
+ commons-collections4
+ ${commons-collections4.version}
+
+
+ commons-io
+ commons-io
+ ${commons-io.version}
+
+
+ org.apache.commons
+ commons-dbcp2
+ ${commons-dbcp2.version}
+
+
+ org.apache.commons
+ commons-math3
+ ${commons-math3.version}
+
+
+ io.jsonwebtoken
+ jjwt-api
+ ${jjwt.version}
+
+
+ io.jsonwebtoken
+ jjwt-impl
+ ${jjwt.version}
+
+
+ io.jsonwebtoken
+ jjwt-jackson
+ ${jjwt.version}
+
+
+ io.netty
+ netty-all
+ ${netty.version}
+
+
+ jakarta.jms
+ jakarta.jms-api
+ ${jms.version}
+
+
+ javolution
+ javolution
+ ${javolution.version}
+
+
+ jline
+ jline
+ ${jline.version}
+
+
+ joda-time
+ joda-time
+ ${joda.version}
+
+
+ junit
+ junit
+ ${junit.version}
+
+
+ org.junit.jupiter
+ junit-jupiter-engine
+ ${junit.jupiter.version}
+
+
+ org.junit.jupiter
+ junit-jupiter-params
+ ${junit.jupiter.version}
+
+
+ org.junit.vintage
+ junit-vintage-engine
+ ${junit.vintage.version}
+
+
+ org.apache.commons
+ commons-text
+ ${commons-text.version}
+
+
+ org.apache.logging.log4j
+ log4j-1.2-api
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-web
+ ${log4j2.version}
+
+
+ org.apache.logging.log4j
+ log4j-slf4j-impl
+ ${log4j2.version}
+
+
+ org.antlr
+ antlr-runtime
+ ${antlr.version}
+
+
+ org.antlr
+ ST4
+ ${ST4.version}
+
+
+ org.apache.commons
+ commons-compress
+ ${commons-compress.version}
+
+
+ org.apache.commons
+ commons-exec
+ ${commons-exec.version}
+
+
+ org.apache.accumulo
+ accumulo-core
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-fate
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-minicluster
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-start
+ ${accumulo.version}
+
+
+ org.apache.accumulo
+ accumulo-trace
+ ${accumulo.version}
+
+
+ org.apache.calcite.avatica
+ avatica
+ ${avatica.version}
+
+
+ org.apache.calcite.avatica
+ avatica-core
+ ${avatica.version}
+
+
+ org.apache.calcite.avatica
+ avatica-metrics
+ ${avatica.version}
+
+
+ org.apache.calcite.avatica
+ avatica-server
+ ${avatica.version}
+
+
+ org.apache.avro
+ avro
+ ${avro.version}
+
+
+ org.apache.avro
+ avro-mapred
+ ${avro.version}
+
+
+ org.mortbay.jetty
+ jetty-util
+
+
+ org.mortbay.jetty
+ servlet-api
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.httpcomponents
+ httpclient
+ ${httpcomponents.client.version}
+
+
+ org.apache.httpcomponents
+ httpcore
+ ${httpcomponents.core.version}
+
+
+ org.apache.velocity
+ velocity-engine-core
+ ${velocity.version}
+
+
+ stax
+ stax-api
+ ${stax.version}
+
+
+ org.apache.calcite
+ calcite-core
+ ${calcite.version}
+
+
+ org.apache.calcite
+ calcite-linq4j
+ ${calcite.version}
+
+
+ org.apache.calcite
+ calcite-druid
+ ${calcite.version}
+
+
+ org.apache.curator
+ curator-test
+ ${curator.version}
+ test
+
+
+ org.junit.jupiter
+ junit-jupiter-api
+
+
+
+
+ org.apache.datasketches
+ datasketches-hive
+ ${datasketches.version}
+
+
+ org.slf4j
+ slf4j-simple
+
+
+
+
+ org.apache.orc
+ orc-core
+ ${orc.version}
+
+
+ org.apache.hadoop
+ hadoop-common
+
+
+ org.apache.hive
+ hive-storage-api
+
+
+
+
+ org.apache.hive
+ hive-storage-api
+ ${storage-api.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-client
+
+
+ org.apache.curator
+ curator-recipes
+
+
+
+
+ org.apache.pig
+ pig
+ ${pig.version}
+
+
+ org.apache.thrift
+ libfb303
+ ${libfb303.version}
+
+
+ org.apache.thrift
+ libthrift
+ ${libthrift.version}
+
+
+ org.apache.zookeeper
+ zookeeper
+ ${zookeeper.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ org.apache.httpcomponents
+ httpcore
+
+
+ org.apache.httpcomponents
+ httpclient
+
+
+ io.netty
+ netty-all
+
+
+
+
+ org.apache.curator
+ curator-client
+ ${curator.version}
+
+
+ org.apache.curator
+ curator-framework
+ ${curator.version}
+
+
+ org.apache.curator
+ curator-recipes
+ ${curator.version}
+
+
+ org.codehaus.groovy
+ groovy-all
+ ${groovy.version}
+
+
+ com.fasterxml.jackson
+ jackson-bom
+ ${jackson.version}
+ pom
+ import
+
+
+ org.codehaus.jettison
+ jettison
+ ${jettison.version}
+
+
+ stax
+ stax-api
+
+
+
+
+ org.eclipse.jetty
+ jetty-rewrite
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-server
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-servlet
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-runner
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-webapp
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-http
+ ${jetty.version}
+
+
+ org.eclipse.jetty
+ jetty-util
+ ${jetty.version}
+
+
+ javax.servlet
+ javax.servlet-api
+ ${javax-servlet.version}
+
+
+ org.datanucleus
+ datanucleus-api-jdo
+ ${datanucleus-api-jdo.version}
+
+
+ org.datanucleus
+ datanucleus-core
+ ${datanucleus-core.version}
+
+
+ org.datanucleus
+ datanucleus-rdbms
+ ${datanucleus-rdbms.version}
+
+
+ org.datanucleus
+ javax.jdo
+ ${datanucleus-jdo.version}
+
+
+ org.pac4j
+ pac4j-saml-opensamlv3
+ ${pac4j-saml.version}
+
+
+ com.google.code.findbugs
+ jsr305
+
+
+ ch.qos.logback
+ logback-classic
+
+
+ xalan
+ xalan
+
+
+ org.springframework
+ spring-core
+
+
+ dom4j
+ dom4j
+
+
+ commons-collections
+ commons-collections
+
+
+ org.slf4j
+ *
+
+
+ org.jboss.logging
+ *
+
+
+ org.hibernate
+ *
+
+
+ org.hibernate.javax.persistence
+ *
+
+
+ org.springframework
+ *
+
+
+ org.javassist
+ javassist
+
+
+
+ org.bouncycastle
+ org.bouncycastle
+
+
+ org.apache.santuario
+ xmlsec
+
+
+
+
+ org.bouncycastle
+ bcprov-jdk15on
+ ${bcprov-jdk15on.version}
+
+
+ org.apache.santuario
+ xmlsec
+ ${xmlsec.version}
+
+
+ com.fasterxml.woodstox
+ woodstox-core
+
+
+
+
+ com.tdunning
+ json
+ ${json.version}
+
+
+ org.slf4j
+ slf4j-api
+ ${slf4j.version}
+
+
+ xerces
+ xercesImpl
+ ${xerces.version}
+
+
+ org.apache.hadoop
+ hadoop-client
+ ${hadoop.version}
+
+
+ commons-logging
+ commons-logging
+
+
+
+
+ org.apache.hadoop
+ hadoop-auth
+ ${hadoop.version}
+
+
+ commons-logging
+ commons-logging
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-framework
+
+
+ org.apache.curator
+ curator-test
+
+
+
+
+ org.apache.hadoop
+ hadoop-common
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ org.apache.httpcomponents
+ httpcore
+
+
+ org.apache.httpcomponents
+ httpclient
+
+
+ org.apache.zookeeper
+ zookeeper
+
+
+ org.apache.curator
+ curator-test
+
+
+ org.apache.curator
+ curator-client
+
+
+ org.apache.curator
+ curator-recipes
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+
+
+ org.apache.hadoop
+ hadoop-hdfs
+ ${hadoop.version}
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-jobclient
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ com.codahale.metrics
+ metrics-core
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-common
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-mapreduce-client-core
+ ${hadoop.version}
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ org.jline
+ jline
+
+
+ commons-logging
+ commons-logging
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.hadoop
+ hadoop-minikdc
+ ${hadoop.version}
+
+
+ io.netty
+ netty-all
+
+
+ org.slf4j
+ slf4j-log4j12
+
+
+ org.slf4j
+ slf4j-reload4j
+
+
+ ch.qos.reload4j
+ reload4j
+
+
+ commons-logging
+ commons-logging
+
+
+
+
+ org.apache.hadoop
+ hadoop-yarn-api
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-client
+ ${hadoop.version}
+
+
+ org.jline
+ jline
+
+
+
+
+ org.apache.hadoop
+ hadoop-yarn-common
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-registry
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-server-web-common
+ ${hadoop.version}
+
+
+ org.apache.hadoop
+ hadoop-yarn-server-web-proxy
+ ${hadoop.version}
+
+
+ io.netty
+ netty-all
+
+
+
+
+ org.apache.hbase
+ hbase-common
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-client
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-hadoop-compat
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-hadoop2-compat
+ ${hbase.version}
+
+
+ javax.servlet
+ servlet-api
+
+
+ javax.servlet.jsp
+ jsp-api
+
+
+ org.jruby
+ jruby-complete
+
+
+ io.netty
+ netty-all
+
+
+ io.netty
+ netty
+
+
+ com.sun.jersey
+ jersey-core
+
+
+ com.sun.jersey
+ jersey-json
+
+
+ com.sun.jersey
+ jersey-server
+
+
+ com.codahale.metrics
+ metrics-core
+
+
+
+
+ org.apache.hbase
+ hbase-server
+ ${hbase.version}
+
+
+ org.glassfish.web
+ javax.servlet.jsp
+
+
+
+
+ org.apache.hbase
+ hbase-mapreduce
+ ${hbase.version}
+
+
+ org.apache.hbase
+ hbase-zookeeper
+ tests
+ ${hbase.version}
+
+
+ org.apache.hadoop
+ hadoop-minicluster
+ ${hadoop.version}
+
+
+ org.jamon
+ jamon-runtime
+ ${jamon-runtime.version}
+
+
+ org.xerial.snappy
+ snappy-java
+ ${snappy.version}
+
+
+ com.google.re2j
+ re2j
+ ${re2j.version}
+
+
+ com.jayway.jsonpath
+ json-path
+ ${json-path.version}
+ runtime
+
+
+ org.codehaus.janino
+ commons-compiler
+ ${janino.version}
+ runtime
+
+
+ org.codehaus.janino
+ janino
+ ${janino.version}
+ runtime
+
+
+ org.apache.tez
+ tez-runtime-internals
+ ${tez.version}
+
+
+ org.apache.tez
+ tez-runtime-library
+ ${tez.version}
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.tez
+ tez-api
+ ${tez.version}
+
+
+ org.apache.tez
+ tez-dag
+ ${tez.version}
+
+
+ org.apache.tez
+ tez-mapreduce
+ ${tez.version}
+
+
+ io.netty
+ netty
+
+
+
+
+ org.apache.tez
+ tez-common
+ ${tez.version}
+
+
+ org.springframework
+ spring-jdbc
+ ${spring.version}
+
+
+ org.springframework
+ spring-core
+ ${spring.version}
+
+
+
+ com.microsoft.sqlserver
+ mssql-jdbc
+ ${mssql.version}
+ runtime
+
+
+ com.oracle.database.jdbc
+ ojdbc8
+ ${oracle.version}
+ runtime
+
+
+ com.mysql
+ mysql-connector-j
+ ${mysql.version}
+ runtime
+
+
+ org.apache.derby
+ derby
+ ${derby.version}
+ runtime
+
+
+ org.mariadb.jdbc
+ mariadb-java-client
+ ${mariadb.version}
+ runtime
+
+
+ org.postgresql
+ postgresql
+ ${postgres.version}
+ runtime
+
+
+
+
+
+
+
+ org.aspectj
+ aspectjrt
+ ${aspectj.version}
+
+
+ edu.uchicago.cs.systems
+ wasabi
+ ${wasabi.version}
+
+
+
+
+
+ org.slf4j
+ slf4j-api
+
+
+
+
+
+
+
+ org.antlr
+ antlr3-maven-plugin
+ ${antlr.version}
+
+
+ org.apache.avro
+ avro-maven-plugin
+ ${avro.version}
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+
+
+ ant-contrib
+ ant-contrib
+ ${ant.contrib.version}
+
+
+ ant
+ ant
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-eclipse-plugin
+ ${maven.eclipse.plugin.version}
+
+ false
+ true
+ target/eclipse/classes
+ Hive
+ ${basedir}/dev-support/eclipse-styles.xml
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+ ${maven.checkstyle.plugin.version}
+
+
+ org.codehaus.mojo
+ versions-maven-plugin
+ ${maven.versions.plugin.version}
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+ ${maven.surefire.plugin.version}
+
+
+ org.apache.felix
+ maven-bundle-plugin
+ ${felix.version}
+
+
+ org.apache.maven.plugins
+ maven-shade-plugin
+ ${maven.shade.plugin.version}
+
+
+ org.codehaus.mojo
+ build-helper-maven-plugin
+ ${maven.build-helper.plugin.version}
+
+
+ org.codehaus.mojo
+ exec-maven-plugin
+ ${maven.exec.plugin.version}
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+
+
+ define-classpath
+ process-resources
+
+ run
+
+
+ true
+
+
+
+
+
+
+ setup-test-dirs
+ process-test-resources
+
+ run
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-clean-plugin
+
+
+
+ ./
+
+ datanucleus.log
+ derby.log
+
+ false
+
+
+ build
+ false
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-checkstyle-plugin
+
+ ${checkstyle.conf.dir}/checkstyle.xml
+ config_loc=${checkstyle.conf.dir}
+ true
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+
+
+ de.skuzzle.enforcer
+ restrict-imports-enforcer-rule
+ 0.9.0
+
+
+
+
+ enforce-no-snapshots
+
+ enforce
+
+
+
+
+ Release builds are not allowed to have SNAPSHOT depenendencies
+ true
+ true
+
+
+ true
+
+
+
+ enforce-banned-dependencies-licenses
+
+ enforce
+
+
+
+
+
+
+ com.google.code.findbugs:annotations
+
+ A banned license dependency was found!
+
+
+ true
+
+
+
+ enforce-banned-dependencies-logging
+
+ enforce
+
+
+
+
+
+
+ commons-logging:commons-logging
+ log4j:log4j
+ ch.qos.reload4j:reload4j
+
+ false
+ A banned logging dependency was found!
+
+
+ true
+
+
+
+ check-banned-imports
+ initialize
+
+ enforce
+
+
+
+
+ Do not use shaded imports
+
+ **.shaded.**
+ jersey.repackaged.com.google.**
+ org.codehaus.jackson.**
+ org.apache.hive.com.**
+ org.apache.hive.org.**
+
+
+ org.apache.hadoop.hbase.shaded.protobuf.**
+
+ true
+
+
+ Do not use commons-lang
+
+ org.apache.commons.lang.**
+
+ true
+
+
+ Do not use commons-logging; use slf4j
+
+ org.apache.commons.logging.**
+
+ true
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-surefire-plugin
+
+
+ **/TestSerDe.java
+ **/TestHiveMetaStore.java
+ **/ql/exec/vector/util/*.java
+ **/ql/exec/vector/udf/legacy/*.java
+ **/ql/exec/vector/udf/generic/*.java
+ **/TestHiveServer2Concurrency.java
+ ${test.excludes.additional}
+
+ true
+ false
+ false
+ ${maven.test.jvm.args}
+ false
+
+ ${test.conf.dir}
+ ${basedir}/${hive.path.to.root}/conf
+
+
+ US/Pacific
+ en_US.UTF-8
+ ${test.conf.dir}:${basedir}/${hive.path.to.root}/conf
+ ${test.hive.hadoop.classpath}
+ ${env.PATH}${test.extra.path}
+
+
+ ${project.build.directory}
+
+ ${test.tmp.dir}
+
+ ${derby.version}
+ ${test.tmp.dir}/derby.log
+ ${hadoop.bin.path}
+
+ ${test.tmp.dir}
+ ${basedir}/${hive.path.to.root}/
+ ${project.version}
+
+ ${maven.repo.local}
+ local
+ ${test.log4j.scheme}${test.conf.dir}/hive-log4j2.properties
+ ${test.console.log.level}
+ true
+
+ ${test.tmp.dir}
+
+ ${test.tmp.dir}
+
+ ${basedir}/${hive.path.to.root}/data/files
+ ${basedir}/${hive.path.to.root}/data/files
+ ${test.tmp.dir}
+ ${test.tmp.dir.uri}
+ ${test.dfs.mkdir}
+ ${test.output.overwrite}
+ ${test.warehouse.scheme}${test.warehouse.dir}
+ ${test.warehouse.scheme}${test.local.warehouse.dir}
+ true
+
+
+ ${test.conf.dir}/krb5.conf
+ ${hadoop.version}
+ ${qfile}
+ ${initScript}
+ ${clustermode}
+ ${qfile_regex}
+ ${run_disabled}
+
+
+
+
+ org.apache.rat
+ apache-rat-plugin
+
+
+ process-resources
+
+ check
+
+
+
+
+
+ *.patch
+ .github/**
+ data/**
+ conf/**
+ checkstyle/**
+ docs/Gemfile
+ bin/**
+ itests/**
+ **/README.md
+ **/*.iml
+ **/*.txt
+ **/*.log
+ **/.factorypath
+ **/.classpath
+ **/.project
+ **/.settings/**
+ **/*.arcconfig
+ **/package-info.java
+ **/*.properties
+ **/*.q
+ **/*.q.out
+ **/*.q.out_*
+ **/*.xml
+ **/*.yml
+ **/*json
+ **/gen/**
+ **/target/**
+ **/scripts/**
+ **/resources/**
+ **/*.rc
+ **/*.rcfile
+ **/*.qv
+ **/*.out
+ **/RecordTestObj.java
+ **/*.m
+ **/gen-java/**
+ **/testdata/**
+ **/test/org/apache/hadoop/hive/hbase/avro/**
+ **/avro_test.avpr
+ **/xmlReport.pl
+ **/*.html
+ **/sit
+ **/test/queries/**/*.sql
+ **/patchprocess/**
+ **/metastore_db/**
+ **/test/resources/**/*.ldif
+ hcatalog/core/mapred/**/part-m*
+ hcatalog/core/mapred/**/*_SUCCESS*
+ **/PriorityBlockingDeque.java
+ LICENSE-binary
+
+
+
+
+ org.jamon
+ jamon-maven-plugin
+ ${jamon.plugin.version}
+
+
+
+
+ dev.aspectj
+ aspectj-maven-plugin
+ ${aspectj-maven.version}
+
+
+
+ edu.uchicago.cs.systems
+ wasabi
+
+
+
+
+
+
+ test-compile
+ compile
+
+
+ 1.8
+ 1.8
+ false
+ true
+ true
+ unmatchedSuperTypeInCall=ignore,adviceDidNotMatch=ignore,typeNotExposedToWeaver=ignore,uncheckedAdviceConversion=ignore,invalidAbsoluteTypeName=ignore,cantFindType=ignore
+
+
+
+
+
+ org.aspectj
+ aspectjtools
+ ${aspectj.version}
+
+
+
+
+
+
+
+
+ thriftif
+
+
+
+ org.codehaus.mojo
+ exec-maven-plugin
+
+
+ check-thrift-version
+ generate-sources
+
+ exec
+
+
+ sh
+ ${basedir}
+
+ -c
+ ${thrift.home}/bin/thrift -version | fgrep 'Thrift version ${libthrift.version}' && exit 0;
+ echo "=================================================================================";
+ echo "========== [FATAL] Build is configured to require Thrift version ${libthrift.version} =========";
+ echo "========== Currently installed: ";
+ ${thrift.home}/bin/thrift -version;
+ echo "=================================================================================";
+ exit 1
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-antrun-plugin
+
+
+ generate-thrift-sources
+ generate-sources
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ run
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-enforcer-plugin
+
+
+ enforce-property
+
+ enforce
+
+
+
+
+ thrift.home
+
+
+ true
+
+
+
+
+
+
+
+
+ sources
+
+
+
+ org.apache.maven.plugins
+ maven-source-plugin
+
+
+ attach-sources
+
+ jar
+
+
+
+
+
+
+
+
+ javadoc
+
+
+
+ org.apache.maven.plugins
+ maven-javadoc-plugin
+
+ none
+ false
+
+
+
+ attach-javadocs
+
+ jar
+
+
+
+
+
+
+
+
+ spotbugs
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+
+
+ com.github.spotbugs
+ spotbugs
+ ${spotbugs.version}
+
+
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${hive.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
+ com.github.spotbugs
+ spotbugs-maven-plugin
+ 4.0.0
+
+ true
+ 2048
+ -Djava.awt.headless=true -Xmx2048m -Xms512m
+ ${basedir}/${hive.path.to.root}/spotbugs/spotbugs-exclude.xml
+
+
+
+
+
+
+
+ windows-test
+
+
+ Windows
+
+
+
+
+
+
+
+
+
+ org.apache.maven.plugins
+ maven-dependency-plugin
+ 2.8
+
+
+ copy-dependencies
+ package
+
+ copy-dependencies
+
+
+ ${project.build.directory}/deplibs/
+ false
+ false
+ true
+
+
+
+
+
+
+
+ ${basedir}/${hive.path.to.root}/testutils/hadoop.cmd
+
+ ;${env.HADOOP_HOME}/bin
+ ${project.build.directory}/deplibs/*
+ file:///${test.tmp.dir}
+ file:/
+
+
+
+ itests
+
+ itests
+
+
+
+ iceberg
+
+ iceberg
+
+
+
+ customhbase
+
+
+ hbase.version
+
+
+
+
+ dist
+
+
+
+ org.cyclonedx
+ cyclonedx-maven-plugin
+ ${maven.cyclonedx.plugin.version}
+
+
+ package
+
+ makeBom
+
+
+
+
+
+
+
+
+
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.conf
new file mode 100644
index 00000000..5e335794
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data
new file mode 100644
index 00000000..bab3f821
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.druid.TestDruidStorageHandler.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java#L765!!!org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec!!!org.apache.hadoop.fs.FileSystem.mkdirs!!!DruidStorageHandlerUtils.java:774!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.conf
new file mode 100644
index 00000000..d42903b7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data
new file mode 100644
index 00000000..932680e0
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.llap.registry.impl.TestLlapZookeeperRegistryImpl.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/llap-client/src/java/org/apache/hadoop/hive/registry/impl/ZkRegistryBase.java#L640!!!org.apache.hadoop.hive.registry.impl.ZkRegistryBase.ensureInstancesCache!!!org.apache.curator.framework.recipes.cache.PathChildrenCache.start!!!ZkRegistryBase.java:644!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.conf
new file mode 100644
index 00000000..4e9d1dc5
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data
new file mode 100644
index 00000000..50c1ed80
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestListPartitionsWithXIncludeParams.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java#L83!!!org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal!!!org.apache.hadoop.hive.metastore.Deadline.startTimer!!!RetryingHMSHandler.java:89!!!org.apache.hadoop.hive.metastore.api.MetaException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.conf
new file mode 100644
index 00000000..c858eaa4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data
new file mode 100644
index 00000000..971f560b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClient.java:848!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.conf
new file mode 100644
index 00000000..5d18da41
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data
new file mode 100644
index 00000000..8141f8f6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.metastore.TestObjectStore.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L11654!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor.run!!!org.apache.hadoop.hive.metastore.ObjectStore$RetryingExecutor$Command.process!!!ObjectStore.java:11999!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.conf
new file mode 100644
index 00000000..0d72c0c6
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data
new file mode 100644
index 00000000..35ee5865
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommands.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L101!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getMS!!!DbLockManager.java:104!!!org.apache.hadoop.hive.ql.lockmgr.LockException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.conf
new file mode 100644
index 00000000..dfdb29ae
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data
new file mode 100644
index 00000000..35ee5865
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbLockManager.java#L101!!!org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock!!!org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getMS!!!DbLockManager.java:104!!!org.apache.hadoop.hive.ql.lockmgr.LockException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.conf
new file mode 100644
index 00000000..04359835
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.tez.TestTezOutputCommitter.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.conf
new file mode 100644
index 00000000..07caf1e4
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data
new file mode 100644
index 00000000..316ee87d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.exec.util.TestRetryable.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/util/Retryable.java#L71!!!org.apache.hadoop.hive.ql.exec.util.Retryable.executeCallable!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.reloginExpiringKeytabUser!!!Retryable.java:74!!!org.apache.hadoop.hive.metastore.api.MetaException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.conf
new file mode 100644
index 00000000..5d2f2d1d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data
new file mode 100644
index 00000000..86dedf9f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.hooks.TestHiveProtoLoggingHook.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java#L315!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.writeEvent!!!org.apache.hadoop.hive.ql.hooks.HiveProtoLoggingHook$EventLogger.maybeRolloverWriterForDay!!!HiveProtoLoggingHook.java:327!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.conf
new file mode 100644
index 00000000..482a2d5d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.conf
new file mode 100644
index 00000000..ee35077d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.conf
new file mode 100644
index 00000000..376b0ebd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.conf
new file mode 100644
index 00000000..35359a09
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hadoop.hive.ql.schq.TestScheduledQueryService.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.conf
new file mode 100644
index 00000000..60012f49
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data
new file mode 100644
index 00000000..7e3a3a47
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.api.repl.commands.TestCommands.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java#L253!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopyRetry!!!org.apache.hadoop.hive.ql.parse.repl.CopyUtils.getFilesToRetry!!!CopyUtils.java:257!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.conf
new file mode 100644
index 00000000..918cd310
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data
new file mode 100644
index 00000000..971f560b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L753!!!org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open!!!org.apache.hadoop.hive.metastore.utils.SecurityUtils.getUGI!!!HiveMetaStoreClient.java:848!!!java.io.IOException
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.conf b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.conf
new file mode 100644
index 00000000..d00d154c
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.conf
@@ -0,0 +1,3 @@
+retry_data_file: wasabi-testing/config/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data
+injection_policy: max-count
+max_injection_count: 97
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data
new file mode 100644
index 00000000..b1214e6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/classes/hive/test-plan/hive_retry_locations-org.apache.hive.testutils.TestHiveTestEnvSetup.data
@@ -0,0 +1,2 @@
+Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception
+https://github.com/apache/hive/blob/e08a60029f93e52182908b61fff32e8659e78daa/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezJobMonitor.java#L166!!!org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution!!!org.apache.tez.dag.api.client.DAGClient.getDAGStatus!!!TezJobMonitor.java:183!!!java.io.IOException
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-archiver/pom.properties b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-archiver/pom.properties
new file mode 100644
index 00000000..e2f40999
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-archiver/pom.properties
@@ -0,0 +1,5 @@
+#Generated by Maven
+#Fri Oct 03 02:20:02 UTC 2025
+version=1.0.0
+groupId=edu.uchicago.cs.systems
+artifactId=wasabi
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst
new file mode 100644
index 00000000..b71795bd
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-status/maven-compiler-plugin/compile/default-compile/createdFiles.lst
@@ -0,0 +1,13 @@
+edu/uchicago/cs/systems/wasabi/ConfigParser.class
+edu/uchicago/cs/systems/wasabi/InjectForever.class
+edu/uchicago/cs/systems/wasabi/InjectionPolicy.class
+edu/uchicago/cs/systems/wasabi/NoInjection.class
+edu/uchicago/cs/systems/wasabi/OpEntry.class
+edu/uchicago/cs/systems/wasabi/WasabiContext.class
+edu/uchicago/cs/systems/wasabi/ExecutionTrace.class
+edu/uchicago/cs/systems/wasabi/HashingPrimitives.class
+edu/uchicago/cs/systems/wasabi/StackSnapshot.class
+edu/uchicago/cs/systems/wasabi/WasabiLogger.class
+edu/uchicago/cs/systems/wasabi/InjectUpToMaxCount.class
+edu/uchicago/cs/systems/wasabi/WasabiContextHolder.class
+edu/uchicago/cs/systems/wasabi/InjectionPoint.class
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst
new file mode 100644
index 00000000..f6089a80
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/maven-status/maven-compiler-plugin/compile/default-compile/inputFiles.lst
@@ -0,0 +1,9 @@
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContext.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiContextHolder.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/StackSnapshot.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/ConfigParser.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/HashingPrimitives.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPoint.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/ExecutionTrace.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/InjectionPolicies.java
+/home/cc/sosp24-ae/wasabi/wasabi-testing/src/main/java/edu/uchicago/cs/systems/wasabi/WasabiLogger.java
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/wasabi-1.0.0.jar b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/wasabi-1.0.0.jar
new file mode 100644
index 00000000..8784ba7e
Binary files /dev/null and b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/target/wasabi-1.0.0.jar differ
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/README.md b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/README.md
new file mode 100644
index 00000000..3a2a768f
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/README.md
@@ -0,0 +1,98 @@
+# Helper Scripts
+
+This README outlines the functionality of each essential helper script and provides sample usage examples. These scripts are designed to help reproduce the key results from our evaluation of WASABI. We encourage users to check out our [paper](https://bastoica.github.io/files/papers/2024_sosp_wasabi.pdf) for details.
+
+### § `run.py`
+
+The `run.py` script automates WASABI's multiphase pipeline for benchmarking a set of target applications, currently Hadoop, HBase, Hive, Cassandra, and Elasticsearch. It facilitates cloning repositories, preparing code by replacing configuration files and rewriting source code, running fault injection tests, and executing bug oracles to analyze test and build reports.
+
+Usage:
+```
+python3 run.py --phase <phase> --benchmark <benchmark>
+```
+
+Arguments:
+* `--phase`: Specifies the phase of the pipeline:
+ * `setup`: Clone the repository and checkout a specific version for the benchmark.
+ * `prep`: Prepare the code by replacing configuration files and rewriting source code.
+ * `bug-triggering`: Run fault injection tests on the benchmark.
+ * `bug-oracles`: Run bug oracles to analyze test and build reports.
+  * `all`: Execute all of WASABI's phases in sequence.
+
+### § `run_benchmarks.py`
+
+The `run_benchmarks.py` script automates WASABI's phase of running fault injection and identifying retry bugs for a target application. It performs several tasks: cleaning up local package directories to prevent build conflicts, building WASABI and the target application, running the test suite with fault injection, and saving the output logs for analysis.
+
+Usage:
+```
+python3 run_benchmarks.py --benchmark <benchmark>
+```
+
+Arguments:
+* `--benchmark`: Specifies the benchmark application to build and test. Current choices include `hadoop`, `hbase`, `hive`, `cassandra`, and `elasticsearch`.
+
+### § `bug_oracles.py`
+
+The `bug_oracles.py` script analyzes log files generated during the testing of a target application. It processes both build and test logs to identify and categorize "HOW" and "WHEN" type retry bugs.
+
+Usage:
+```
+python3 bug_oracles.py <logs_root_dir> --benchmark <benchmark>
+```
+
+Arguments:
+* `<logs_root_dir>`: The root directory where the build and test logs are saved.
+* `--benchmark`: Specifies the benchmark application for which to analyze logs. Current choices include `hadoop`, `hbase`, `hive`, `cassandra`, and `elasticsearch`.
+
+### § `source_rewriter.py`
+
+The `source_rewriter.py` script automates WASABI's phase of modifying test files to adjust retry bounds and timeout values in large-scale applications. Operating in two modes, bounds-rewriting and timeout-rewriting, the script either increases the retry limits or extends the timeout durations in test methods, based on a given specification.
+
+* `--mode`: Specifies the operation mode of the script. Choices are:
+ * `bounds-rewriting`: Modifies retry bounds in Java code to a higher value.
+ * `timeout-rewriting`: Adjusts timeout annotations and wait calls in Java test methods to a higher value.
+* `<config_file>`: Path to the configuration file listing the changes to be made. The format depends on the mode:
+ * For bounds-rewriting, it should contain variable names, assigned values, assignment methods, and test class names.
+ * For timeout-rewriting, it should list the test classes and test methods that require timeout adjustments.
+* `<target_dir>`: The root directory of the target application where the script searches for test files to modify.
+
+### § `display_bug_results.py`
+
+The `display_bug_results.py` script analyzes bug reports generated by the test suite of a target application during fault injection, aggregates them based on bug types and the application name, compares them to the ground truth dataset from our [paper](https://bastoica.github.io/files/papers/2024_sosp_wasabi.pdf), and prints summary tables similar to Table 3 (see "4. Evaluation", page 9).
+
+### § `generate_aspect.py`
+
+The `generate_aspect.py` script automates the creation of AspectJ code for injecting exceptions into specific methods of a target application. It reads a specification file that details which exceptions to inject and where to inject them. The specification file is tailored for the given target application and contains the methods implementing retry along with their source code locations, the retry-triggering exceptions being handled, and the methods being retried along with their source code locations. Using this information, the script generates an AspectJ aspect that can be woven into the target application to inject retry-triggering exceptions at the program points where retries should occur.
+
+Usage:
+```
+python3 generate_aspect.py --spec-file <spec_file> --aspect-file <aspect_file>
+```
+
+Arguments:
+* `--spec-file`: Path to the input specification file containing exception injection details. This file should be in CSV format (though it uses a custom delimiter, `!!!`) and contains entries that specify the methods implementing retry, their source code locations along with the retry-triggering exceptions being handled, and the methods being retried along with their source code locations. Each line has the following format:
+```
+[Retry Enclosing Method Location]!!![Enclosing Method]!!![Retried Method]!!![Retried Method Location]!!![Exception Class]
+```
+Check out the main [README](https://github.com/bastoica/wasabi/blob/master/wasabi-testing/README.md) file for more details.
+* `--aspect-file`: Path to the output file to save the generated AspectJ aspect.
+
+### § `test_plan_generator.py`
+
+The `test_plan_generator.py` script automates the generation of test plans by matching retry locations with test cases in a target application. The script implements a heuristic that tries to ensure that each test is uniquely matched to a retry location, so no test is exercising two distinct retriable methods. The intuition is that once an exception is injected for a retried method at location `A`, another retried method executing at a later location `B` might not execute since the test could crash or hang.
+
+Usage:
+```
+python3 test_plan_generator.py --retry_locations_input <retry_locations_file> \
+                               --test_retry_pairs_input <test_retry_pairs_file> \
+                               --path_to_configs <output_dir>
+```
+
+Arguments:
+* `--retry_locations_input`: Path to the input file containing details about retry locations.
+* `--test_retry_pairs_input`: Path to the input file mapping tests to retry locations. Each line should contain a test name and an injection location (retried method), separated by a comma.
+* `--path_to_configs`: Path where the generated configuration files should be saved.
+
+### § `wasabi_coverage.py`
+
+The `wasabi_coverage.py` script analyzes log files generated by a target application woven (instrumented) with WASABI, to determine code coverage statistics. Specifically, the script scans through test output logs to identify which methods have been instrumented and which have actually had exceptions injected during testing.
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/bug_oracles.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/bug_oracles.py
new file mode 100755
index 00000000..8b3000c7
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/bug_oracles.py
@@ -0,0 +1,478 @@
+import argparse
+from collections import defaultdict
+import os
+import re
+
+class LogMessage:
+ def __init__(self) -> None:
+ self.type = None
+ self.timestamp = None
+ self.test_name = None
+ self.injection_site = None
+ self.injection_location = None
+ self.retry_caller = None
+ self.exception_injected = None
+ self.retry_attempt = None
+ self.sleep_location = None
+ self.failure_string = None
+ self.failure_exceptions = None
+ self.stack_trace = None
+
+    def parse_log_message(self, log_message: str, is_test_report: bool) -> None:
+ """Parses a single log message string and populates the LogMessage object's attributes.
+
+ Args:
+ log_message (str): A string containing a log message.
+ """
+
+ if is_test_report:
+ if "[wasabi]" in log_message:
+ tokens = log_message.split(" | ")
+ self.test_name = self.get_test_name(tokens[0], True)
+ if "[Pointcut]" in tokens[0]:
+ self.parse_pointcut_message(tokens)
+ elif "[Injection]" in tokens[0]:
+ self.parse_injection_message(tokens)
+ elif "[THREAD-SLEEP]" in tokens[0]:
+ self.parse_sleep_message(tokens)
+ elif "[FAILURE]" in tokens[0]:
+ self.parse_failure_message(log_message)
+ else:
+ if "[ERROR]" in log_message:
+ self.type = "error"
+ self.test_name = self.get_test_name(log_message, False)
+ self.stack_trace = self.get_error_details(log_message)
+
+    def get_test_name(self, log_message: str, is_test_report: bool) -> str:
+ """Extracts the test name from an error log message.
+
+ Args:
+ log_message (str): A string containing an error log message.
+ """
+ test_name = "UNKNOWN"
+ if is_test_report:
+ token = self.get_value_between_separators(log_message, "---", "---")
+ if token:
+ match = re.search(r'\w+\(.*\s+(.*\(.*?\))\)', token)
+ test_name = match.group(1).split("(")[0] if match else test_name
+ test_name = re.sub(r'[^\w.]+.*$', '', test_name)
+ tokens = test_name.split('.')
+ if len(tokens) >= 2:
+ test_name = '.'.join(tokens[-2:])
+ else:
+ for token in log_message.split(" "):
+ if "test" in token:
+ test_name = re.sub(r'[^\w.]+.*$', '', token)
+ break
+ return test_name
+
+ def get_error_details(self, log_message: str) -> str:
+ """Extracts the failure string and stack trace from an error log message.
+
+ Args:
+ log_message (str): A string containing an error log message.
+ """
+ self.failure_string = ""
+ stack_found = False
+ stack = []
+ for line in log_message.split("\n"):
+ if line.strip().startswith("at "):
+ stack.append(line.strip().split("at ")[1])
+ stack_found = True
+ elif not stack_found:
+ self.failure_string += line + "\n"
+ else:
+ break # Stop if stack trace processing is complete
+ norm_stack = self.normalize_stack_trace(stack)
+ return norm_stack
+
+    def normalize_stack_trace(self, stack_trace: list[str]) -> str:
+ """Normalizes the stack trace for a given test failure by removing
+ top frames that correspond to Java standard libraries.
+
+ Args:
+            stack_trace (list[str]): The stack trace frames for a particular test failure.
+
+ Returns:
+ str: The normalized stack trace, if it exists.
+ """
+ javalib_frames_prefixes = ["java.", "jdk.", "org.junit.", "sun.", "oracle.",
+ "app//org.mockito.", "app//org.slf4j.",
+ "org.apache.maven.surefire."]
+ norm_stack_trace = []
+ for frame in stack_trace:
+ if not any(frame.startswith(prefix) for prefix in javalib_frames_prefixes):
+ norm_stack_trace.append(frame.strip())
+ return "\n".join(norm_stack_trace)
+
+ def parse_pointcut_message(self, tokens: list) -> None:
+ """Parses a pointcut log message and populates the LogMessage object's attributes.
+
+ Args:
+ tokens (list): A list of string tokens derived from the log message.
+ """
+ self.type = "pointcut"
+ self.injection_site = self.get_injection_site(tokens[1])
+ self.injection_location = self.get_value_between_separators(tokens[2], "---", "---")
+ self.retry_caller = self.get_value_between_separators(tokens[3], "---", "---")
+
+ def parse_injection_message(self, tokens: list) -> None:
+ """Parses an injection log message and populates the LogMessage object's attributes.
+
+ Args:
+ tokens (list): A list of string tokens derived from the log message.
+ """
+ self.type = "injection"
+ self.exception_injected = self.get_value_between_separators(tokens[1].split("thrown after calling")[0], "---", "---")
+ self.injection_site = self.get_injection_site(tokens[1].split("thrown after calling")[1])
+ self.retry_caller = self.get_value_between_separators(tokens[2], "---", "---")
+ self.retry_attempt = int(self.get_value_between_separators(tokens[3], "---", "---"))
+
+ def parse_sleep_message(self, tokens: list) -> None:
+ """Parses a sleep log message and populates the LogMessage object's attributes.
+
+ Args:
+ tokens (list): A list of string tokens derived from the log message.
+ """
+ self.type = "sleep"
+ self.sleep_location = self.get_value_between_separators(tokens[1], "---", "---")
+ self.retry_caller = self.get_value_between_separators(tokens[2], "---", "---")
+
+ def parse_failure_message(self, log_message: str):
+ """Parses a failure log message and populates the LogMessage object's attributes.
+
+ Args:
+ log_message (str): A string containing a log message.
+
+ """
+ if "Failure message" in log_message and "Stack trace:" in log_message:
+ self.type = "failure"
+ self.failure_string = re.search(r'Failure message :-: (.*?) :-: \| Stack trace:', log_message, re.S).group(1)
+ self.failure_exceptions = self.extract_failure_exception(self.failure_string)
+
+ def extract_failure_exception(self, log_message: str) -> set:
+ """
+ Extracts the failure exceptions from the failure log message.
+
+ Args:
+ log_message (str): A string containing a log message.
+
+ Returns:
+            set: The set of exception names found in the failure message.
+ """
+ exceptions = set()
+ tokens = log_message.split(":-:")
+
+ for token in tokens:
+            # Extract fully qualified Java exceptions
+ java_exceptions = re.findall(r'java\.[a-zA-Z]+\.[a-zA-Z]+Exception', token)
+ exceptions.update(java_exceptions)
+
+ # Extract fully qualified Apache exceptions
+ org_exceptions = re.findall(r'org\.[a-zA-Z]+\.[a-zA-Z]+Exception', token)
+ exceptions.update(org_exceptions)
+
+            # Extract truncated or unqualified exception names
+ norm_exceptions = re.findall(r'[a-zA-Z]+Exception', token)
+ exceptions.update(norm_exceptions)
+
+ norm_exceptions = {e.strip(' \t\n:.') for e in exceptions}
+ return norm_exceptions
+
+ def get_injection_site(self, token: str) -> str:
+ """Extracts the injection site from the token.
+
+ Args:
+ token (str): A string token derived from the log message.
+
+ Returns:
+ str: The extracted injection site or 'UNKNOWN' if not found.
+ """
+ match = re.search(r'\w+\(.*\s+(.*\(.*?\))\)', self.get_value_between_separators(token, "---", "---"))
+ if match:
+ return match.group(1).split("(")[0]
+ return "UNKNOWN"
+
+ @staticmethod
+    def get_value_between_separators(text: str, start_sep: str, end_sep: str) -> str:
+ """Extracts a value between two separators from a given text.
+
+ Args:
+ text (str): The text containing the separators.
+ start_sep (str): The starting separator.
+ end_sep (str): The ending separator.
+
+ Returns:
+ str: The extracted value or None if not found.
+ """
+ try:
+ return text.split(start_sep)[1].split(end_sep)[0].strip()
+ except IndexError:
+ return None
+
+
+def parse_build_log(file_path: str) -> tuple:
+ """Parses a single build log file, handles errors, and parses the relevant log messages.
+
+ Args:
+ file_path (str): Path to the build log file.
+
+ Returns:
+        tuple: A list of parsed LogMessage objects and a list of timeout-related log lines.
+ """
+ timeout_messages = ["TimeoutException",
+ "TimeoutIOException",
+ "SocketTimeoutException",
+ "TestTimedOut",
+ "[ERROR] There was a timeout"]
+
+ with open(file_path, "r") as file:
+ lines = file.readlines()
+
+ log_messages = []
+    log_timeout_messages = []
+
+    index = 0
+    while index < len(lines):
+        if "[ERROR]" in lines[index] and "test" in lines[index]:
+            offset = index
+            log_message = ""
+            while index < len(lines) and (lines[index].strip().startswith("at ") or ((index - offset + 1) <= 50)):
+                log_message += lines[index].strip() + "\n"
+                index += 1
+
+            log_msg = LogMessage()
+            log_msg.parse_log_message(log_message, False)
+            log_msg.test_name = file_path.split('build-')[1].split('.')[0] + "." + log_msg.test_name.split(".")[-1]
+            log_messages.append(log_msg)
+
+            if index < len(lines) and any(exception in lines[index] for exception in timeout_messages):
+                log_timeout_messages.append(lines[index])
+            if index < len(lines) and any(exception in log_msg.stack_trace for exception in timeout_messages):
+                log_timeout_messages.append(lines[index])
+        else:
+            index += 1
+
+    return log_messages, log_timeout_messages
+
+def parse_test_log(file_path: str) -> list:
+ """Parses a single test report log file to extract log messages.
+
+ Args:
+ file_path (str): Path to the test report log file.
+
+ Returns:
+ list[LogMessage]: A list of LogMessage objects parsed from the log file.
+ """
+ log_messages = []
+ with open(file_path, 'r') as file:
+ log_data = file.read()
+ log_entries = log_data.strip().split("\n")
+ for entry in log_entries:
+ msg = LogMessage()
+ msg.parse_log_message(entry, True)
+ log_messages.append(msg)
+
+ return log_messages
+
+def error_in_test_code(op: LogMessage) -> bool:
+ """Determines if a particular failure log message and call stack
+ indicate a false positive or a true retry bug.
+
+ Args:
+        op (LogMessage): The failure log message to analyze, including its
+            failure string and call stack.
+
+ Returns:
+ bool: 'True' if error is located in test code, 'False' otherwise.
+ """
+ test_frames_patterns = ["Test", ".test", "MiniYARNCluster", "MiniDFSCluster", "MiniRouterDFSCluster", ".doBenchmark("]
+    # Note: stack_trace is a newline-joined string, so check the top frame only
+    if op.stack_trace:
+        top_frame = op.stack_trace.split("\n")[0]
+        if any(pattern in top_frame for pattern in test_frames_patterns):
+            return True
+
+ test_code_exceptions = [".TimeoutException", ".TimeoutIOException", ".AssertionError", ".AssertionFailedError",
+ ".ComparisonError", ".ComparisonFailure", ".AssumptionViolatedException", ".InterruptedException",
+ ".InterruptedIOException", ".AssumptionViolatedException", ".DoNotRetry", ".DoNotRetryTest",
+ "org.mockito.exceptions", "java.lang.RuntimeException"]
+ for e in test_code_exceptions:
+ if e in op.failure_string:
+ return True
+
+ return False
+
+def check_how_bugs(test_failures: dict, execution_trace: defaultdict) -> set:
+ """Searches for HOW bugs by parsing test reports for logged failures with a different exception
+    than the one injected by WASABI.
+
+ Args:
+        test_failures (dict): Per-test failure messages; execution_trace (defaultdict): per-test LogMessage operations.
+
+ Returns:
+ set: A set of tuples with HOW buggy retry locations and a 'how-bug' tag.
+ """
+ how_bugs = set()
+
+ for test_name, operations in execution_trace.items():
+ last_injection_op = None
+
+ for op in operations:
+ # Skip if error in test code
+ if op.test_name in test_failures and error_in_test_code(test_failures[op.test_name]):
+ continue
+ if op.type == "injection":
+ last_injection_op = op
+ elif op.type == "failure":
+ if last_injection_op is None or any(last_injection_op.exception_injected in exception for exception in op.failure_exceptions):
+ continue
+ elif error_in_test_code(op) or "| Retry attempt ---" in op.failure_string:
+ continue
+ else:
+ how_bugs.add(("how-bug", last_injection_op))
+ last_injection_op = None
+
+ return how_bugs
+
+def check_when_missing_backoff_bugs(execution_trace: defaultdict) -> set:
+    """Searches for WHEN missing backoff retry bugs by parsing test reports and checking for consecutive retry
+ attempts where WASABI did not record any Thread.sleep-like call.
+
+ Args:
+        execution_trace (defaultdict): Per-test lists of LogMessage operations.
+
+ Returns:
+        set: A set of tuples with WHEN missing backoff buggy retry locations and a 'when-missing-backoff' tag.
+ """
+ when_missing_backoff_bugs = set()
+
+ for test_name, operations in execution_trace.items():
+ max_op = None
+ has_sleep = False
+ max_retry_attempts = 0
+ for op in operations:
+ if op.type == "sleep":
+ has_sleep = True
+ elif op.type == "injection" and max_retry_attempts < op.retry_attempt:
+ max_retry_attempts = op.retry_attempt
+ max_op = op
+
+ if not has_sleep and max_retry_attempts >= 2:
+ when_missing_backoff_bugs.add(("when-missing-backoff", max_op))
+
+ return when_missing_backoff_bugs
+
+def check_when_missing_cap_bugs(execution_trace: defaultdict) -> set:
+    """Searches for WHEN missing cap retry bugs by parsing test reports and checking if WASABI can
+ inject a large number of exceptions that indicate infinite retry attempts.
+
+ Args:
+        execution_trace (defaultdict): Per-test lists of LogMessage operations.
+
+ Returns:
+        set: A set of tuples with WHEN missing cap buggy retry locations and a 'when-missing-cap' tag.
+ """
+ MISSING_CAP_BOUND = 90
+ when_missing_cap = set()
+
+ for test_name, operations in execution_trace.items():
+ for op in operations:
+ if op.type == "injection" and op.retry_attempt >= MISSING_CAP_BOUND:
+ when_missing_cap.add(("when-missing-cap", op))
+
+ return when_missing_cap
+
+def check_when_missing_cap_timeouts(execution_trace: defaultdict, test_timeouts: dict) -> set:
+    """Searches for WHEN missing cap retry bugs by parsing test reports and checking if WASABI injects
+    a large number of exceptions at retry locations known to manifest as test timeouts.
+
+ Args:
+        execution_trace (defaultdict): Per-test lists of LogMessage operations; test_timeouts (dict): per-class timeout log lines.
+
+ Returns:
+        set: A set of tuples with WHEN missing cap buggy retry locations and a 'when-missing-cap' tag.
+ """
+ MISSING_CAP_BOUND = 5
+ timeout_retry_locations = ["org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile",
+ "org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile",
+ "org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState",
+ "org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run",
+ "org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance",
+ "org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize",
+ "org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom",
+ "org.apache.hadoop.hbase.util.FSUtils.setClusterId",
+ "org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks"]
+
+ when_missing_cap_timeout = set()
+
+ for test_name, operations in execution_trace.items():
+ for op in operations:
+ if op.type == "injection" and op.retry_attempt >= MISSING_CAP_BOUND:
+ test_class = test_name
+ if len(test_name.split(".")) > 1:
+ test_class = test_name.split(".")[0]
+ if test_class in test_timeouts and op.retry_caller in timeout_retry_locations:
+ when_missing_cap_timeout.add(("when-missing-cap", op))
+
+ return when_missing_cap_timeout
+
+def main():
+ parser = argparse.ArgumentParser(description="Parse and process log files for retry bug analysis.")
+ parser.add_argument("logs_root_dir", type=str, help="The root directory where build/test logs are saved")
+ parser.add_argument("--benchmark", choices=["hadoop", "hbase", "hive", "cassandra", "elasticsearch", "all-maven"], required=True, help="The benchmark to run")
+ args = parser.parse_args()
+ root_path = args.logs_root_dir
+
+ test_timeouts = dict()
+ test_failures = dict()
+ all_bugs = set()
+ coverage = set()
+
+ for root, _, files in os.walk(os.path.join(root_path, "build-reports/")):
+ for fname in files:
+ if "build-" in fname and fname.endswith('.log'):
+ build_log_messages, build_log_timeout_messages = parse_build_log(os.path.join(root, fname))
+
+ for msg in build_log_messages:
+ test_failures[msg.test_name] = msg
+
+ test_class = fname.split(".")[-2]
+ test_timeouts[test_class] = build_log_timeout_messages
+
+ for root, _, files in os.walk(os.path.join(root_path, "test-reports/")):
+ for fname in files:
+ if fname.endswith('-output.txt'):
+ test_log = parse_test_log(os.path.join(root, fname))
+ execution_trace = defaultdict(list)
+
+ for msg in test_log:
+ if msg.type in ["injection", "sleep", "failure"]:
+ execution_trace[msg.test_name].append(msg)
+ if msg.type == "pointcut":
+ coverage.update([f"test-injected,{msg.test_name}"])
+
+ all_bugs.update(check_when_missing_backoff_bugs(execution_trace))
+ all_bugs.update(check_when_missing_cap_bugs(execution_trace))
+ all_bugs.update(check_how_bugs(test_failures, execution_trace))
+ all_bugs.update(check_when_missing_cap_timeouts(execution_trace, test_timeouts))
+
+ print("// ----------------------------- //")
+ print(f" Retry bugs for {args.benchmark}")
+ print("// ----------------------------- //")
+ for bug_no, bug in enumerate(all_bugs, 1):
+ bug_type, op = bug
+ print(f"bug-{bug_no},{bug_type},{op.retry_caller},{op.test_name}")
+
+ bug_file = os.path.join(root_path, f"{args.benchmark}-bugs-per-test.csv")
+ with open(bug_file, "w") as f:
+ for bug_no, bug in enumerate(all_bugs, 1):
+ bug_type, op = bug
+ f.write(f"bug-{bug_no},{bug_type},{op.retry_caller},{op.test_name}\n")
+
+ cov_file = os.path.join(root_path, f"{args.benchmark}-cov.csv")
+ with open(cov_file, "w") as f:
+ for cov_msg in coverage:
+ f.write(f"{cov_msg}\n")
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/display_bug_results.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/display_bug_results.py
new file mode 100644
index 00000000..e4cda370
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/display_bug_results.py
@@ -0,0 +1,199 @@
+from collections import defaultdict
+import os
+import sys
+
+def get_benchmark_name(loc):
+ """
+ Classifies the location based on its prefix.
+
+ Parameters:
+ loc (str): The bug location string to classify.
+
+ Returns:
+ str: The classification group (hdfs, yarn, mapreduce, hadoop, hbase, hive, cassandra, elasticsearch, or unknown).
+ """
+ if loc.startswith("org.apache.hadoop.hdfs") and "SecondaryNameNode.doWork" not in loc:
+ return "hdfs"
+ elif loc.startswith("org.apache.hadoop.yarn"):
+ return "yarn"
+ elif loc.startswith("org.apache.hadoop.mapreduce") or loc.startswith("org.apache.hadoop.mapred"):
+ return "mapreduce"
+ elif loc.startswith("org.apache.hadoop.hbase"):
+ return "hbase"
+ elif loc.startswith("org.apache.hadoop.hive"):
+ return "hive"
+ elif loc.startswith("org.apache.cassandra"):
+ return "cassandra"
+ elif loc.startswith("org.apache.hadoop") or "SecondaryNameNode.doWork" in loc: # initially found in hadoop-common, added here to match Table 3
+ return "hadoop"
+ elif loc.startswith("org.elasticsearch"):
+ return "elasticsearch"
+ else:
+ return "unknown"
+
+def aggregate_bugs(root_dir):
+ """
+ Searches for bug report files and aggregates bugs based on their type and
+ the application in which they were found.
+
+ Parameters:
+ root_dir (str): The root directory to search for the bug report files.
+
+ Returns:
+ dict: A dictionary storing the benchmark, bug type, and retry location tuples.
+ """
+ bugs = defaultdict(lambda: defaultdict(set))
+ unique = dict()
+
+ for dirpath, _, files in os.walk(root_dir):
+ for file in files:
+ if file.endswith(".csv"):
+ file_path = os.path.join(dirpath, file)
+
+ with open(file_path, 'r') as f:
+ for line in f:
+ if "how-bug" in line or "when-missing-" in line:
+ tokens = line.strip().split(",")
+
+ bug_type = tokens[1]
+ bug_loc = tokens[2]
+
+ key = bug_type + bug_loc
+ if key in unique:
+ continue
+ unique[key] = "x"
+
+ benchmark = get_benchmark_name(bug_loc)
+ bugs[bug_type][benchmark].add(bug_loc)
+
+ return bugs
+
+
+def get_ground_truth_bugs(file_path: str):
+ """
+ Reads the ground truth bugs from a file and organizes them into a dictionary.
+
+ Parameters:
+ file_path (str): The path to the ground truth file.
+
+ Returns:
+ dict: A dictionary similar to the bugs dictionary with bug_type, benchmark, and retry_location.
+ """
+ ground_truth = defaultdict(lambda: defaultdict(set))
+
+ with open(file_path, 'r') as f:
+ for line in f:
+ tokens = line.strip().split(",")
+ benchmark = tokens[0]
+ bug_type = tokens[1]
+ retry_location = tokens[2]
+ ground_truth[bug_type][benchmark].add(retry_location)
+
+ return ground_truth
+
+
+def print_bug_tables(bugs, ground_truth):
+ """
+ Prints a table of bug types and the benchmark where they were found.
+
+ Parameters:
+ bugs (dict): A dictionary that aggregates all bugs found by WASABI.
+ ground_truth (dict): A dictionary of the known (ground truth) bug locations.
+ """
+ benchmarks = ["hadoop", "hdfs", "mapreduce", "yarn", "hbase", "hive", "cassandra", "elasticsearch"]
+ ordered_bug_types = ["when-missing-cap", "when-missing-backoff", "how-bug"]
+ row_names = {
+ "how-bug": "HOW",
+ "when-missing-backoff": "WHEN-no-delay",
+ "when-missing-cap": "WHEN-no-cap"
+ }
+
+ display_table_name("Table 3 (inverted, bugs found)")
+ header = ["Bug Type"] + benchmarks + ["TOTAL"]
+ print(f"{header[0]:<20}", end="")
+ for b in benchmarks:
+ print(f"{b:<15}", end="")
+ print(f"{'TOTAL':<15}")
+
+ unmatched_ground_truth = {}
+ for bug_type in ordered_bug_types:
+ display_name = row_names.get(bug_type, bug_type)
+ print(f"{display_name:<20}", end="")
+ total_count = 0
+
+ for benchmark in benchmarks:
+ ground_truth_locations = ground_truth.get(bug_type, {}).get(benchmark, set())
+ bug_locations = bugs.get(bug_type, {}).get(benchmark, set())
+ unmatched_ground_truth.setdefault(bug_type, set())
+
+ matching_locations = set()
+ for bug in bug_locations:
+ if bug in ground_truth_locations:
+ matching_locations.add(bug)
+
+ count = len(matching_locations)
+ total_count += count
+
+ non_matching = ground_truth_locations - matching_locations
+ unmatched_ground_truth[bug_type].update(non_matching)
+
+ print(f"{count:<15}", end="")
+
+ print(f"{total_count:<15}")
+
+ display_table_name("Table 3 (original)")
+ print(f"{header[0]:<20}", end="")
+ for b in benchmarks:
+ print(f"{b:<15}", end="")
+ print(f"{'TOTAL':<15}")
+
+ for bug_type in ordered_bug_types:
+ display_name = row_names.get(bug_type, bug_type)
+ print(f"{display_name:<20}", end="")
+ total_count = 0
+
+ for benchmark in benchmarks:
+ bug_locations = bugs.get(bug_type, {}).get(benchmark, set())
+ count = len(bug_locations)
+ total_count += count
+ print(f"{count:<15}", end="")
+
+ print(f"{total_count:<15}")
+
+ print("\nUnmatched ground truth locations (not found in bugs):")
+ for bug_type, unmatched_set in unmatched_ground_truth.items():
+ if unmatched_set:
+ print(f"Bug Type: {bug_type}")
+ for location in unmatched_set:
+ print(f" - {location}")
+
+
+def display_table_name(msg: str):
+ """
+ Prints a "stylized" message indicating which table is printed.
+
+ Arguments:
+ msg (str): The name of the table.
+ """
+ border_line = "*" * (len(msg) + 4)
+ inner_line = "*" + " " * (len(msg) + 2) + "*"
+ print(f"\n{border_line}")
+ print(f"{inner_line}")
+ print(f"*{msg.center(len(border_line) - 2)}*")
+ print(f"{inner_line}")
+ print(f"{border_line}\n")
+
+
+def main():
+ wasabi_root_dir = os.getenv("WASABI_ROOT_DIR")
+ if not wasabi_root_dir:
+ print("[WASABI-HELPER]: [ERROR]: The WASABI_ROOT_DIR environment variable is not set.")
+ sys.exit(1)
+ results_root_dir = os.path.join(wasabi_root_dir, "..", "results")
+ ground_truth_file = os.path.join(wasabi_root_dir, "wasabi-testing", "bugs_ground_truth.txt")
+
+ bugs = aggregate_bugs(results_root_dir)
+ ground_truth = get_ground_truth_bugs(ground_truth_file)
+ print_bug_tables(bugs, ground_truth)
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/generate_aspect.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/generate_aspect.py
new file mode 100644
index 00000000..d872af6d
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/generate_aspect.py
@@ -0,0 +1,264 @@
+import argparse
+
+def read_spec_file_to_dict(csv_file_path):
+ with open(csv_file_path, 'r') as f:
+ lines = f.readlines()
+ lines = lines[1:]
+
+ exception_map = {}
+ for line in lines:
+ tokens = line.strip().split("!!!")
+ enclosing_method = tokens[1]
+ retried_method = tokens[2]
+ exception = tokens[4].strip().split(".")[-1]
+
+ if exception not in exception_map:
+ exception_map[exception] = []
+ exception_map[exception].append((enclosing_method, retried_method))
+
+ return exception_map
+
+def generate_aspectj_code(exception_map):
+ pointcut_code = ""
+ for exception, method_pairs in exception_map.items():
+ patterns = [
+ f"(withincode(* {enclosing}(..)) && call(* {retried}(..) throws *Exception*))"
+ for enclosing, retried in method_pairs
+ ]
+ pointcut_body = " ||\n ".join(patterns)
+
+ pointcut_template = f"""
+ /* Inject {exception} */
+
+ pointcut inject{exception}():
+ ({pointcut_body}) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ after() throws {exception} : inject{exception}() {{
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ String retryCallerFunction = stackSnapshot.getSize() > 0 ? stackSnapshot.getFrame(0) : "???";
+ String injectionSite = thisJoinPoint.toString();
+ String retryException = "{exception}";
+ String injectionSourceLocation = String.format("%s:%d",
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ if (this.wasabiCtx == null) {{
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] [Non-Test-Method] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ return;
+ }}
+
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[Pointcut] Test ---%s--- | Injection site ---%s--- | Injection location ---%s--- | Retry caller ---%s---\\n",
+ this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryCallerFunction)
+ );
+
+ InjectionPoint ipt = this.wasabiCtx.getInjectionPoint(this.testMethodName,
+ injectionSite,
+ injectionSourceLocation,
+ retryException,
+ retryCallerFunction,
+ stackSnapshot);
+ if (ipt != null && this.wasabiCtx.shouldInject(ipt)) {{
+ this.activeInjectionLocations.add(retryCallerFunction);
+
+ long threadId = Thread.currentThread().getId();
+ throw new {exception}(
+ String.format("[wasabi] [thread=%d] [Injection] Test ---%s--- | ---%s--- thrown after calling ---%s--- | Retry location ---%s--- | Retry attempt ---%d---",
+ threadId,
+ this.testMethodName,
+ ipt.retryException,
+ ipt.injectionSite,
+ ipt.retrySourceLocation,
+ ipt.injectionCount)
+ );
+ }}
+ }}
+"""
+ pointcut_code += pointcut_template
+
+ pointcut_code = pointcut_code.replace("( (within", "((within")
+ pointcut_code = pointcut_code.replace(") ||\n) &&", ")) &&")
+
+ code_template = f"""package edu.uchicago.cs.systems.wasabi;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/* Add imports specific to the exceptions thrown by the Aspect program */
+
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Set;
+
+import edu.uchicago.cs.systems.wasabi.ConfigParser;
+import edu.uchicago.cs.systems.wasabi.WasabiLogger;
+import edu.uchicago.cs.systems.wasabi.WasabiContext;
+import edu.uchicago.cs.systems.wasabi.InjectionPolicy;
+import edu.uchicago.cs.systems.wasabi.StackSnapshot;
+import edu.uchicago.cs.systems.wasabi.InjectionPoint;
+import edu.uchicago.cs.systems.wasabi.ExecutionTrace;
+
+public aspect Interceptor {{
+ private WasabiContext wasabiCtx = null;
+
+ private static final String UNKNOWN = "UNKNOWN";
+
+ private static final WasabiLogger LOG = new WasabiLogger();
+ private static final String configFile = (System.getProperty("configFile") != null) ? System.getProperty("configFile") : "default.conf";
+ private static final ConfigParser configParser = new ConfigParser(LOG, configFile);
+
+ private Set activeInjectionLocations = ConcurrentHashMap.newKeySet();
+ private String testMethodName = UNKNOWN;
+
+ pointcut testMethod():
+ (@annotation(org.junit.Test) ||
+ @annotation(org.junit.jupiter.api.Test)) &&
+ !within(org.apache.hadoop.*.TestDFSClientFailover.*) &&
+ !within(org.apache.hadoop.hdfs.*.TestOfflineImageViewer.*) &&
+ !within(org.apache.hadoop.example.ITUseHadoopCodec.*);
+
+
+ before() : testMethod() {{
+ this.wasabiCtx = new WasabiContext(LOG, configParser);
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: Test ---%s--- started", thisJoinPoint.toString())
+ );
+
+ if (this.testMethodName != this.UNKNOWN) {{
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-BEFORE]: [ALERT]: Test method ---%s--- executes concurrently with test method ---%s---",
+ this.testMethodName, thisJoinPoint.toString())
+ );
+ }}
+
+ this.testMethodName = thisJoinPoint.toString();
+ }}
+
+ after() returning: testMethod() {{
+ if (this.wasabiCtx == null) {{ // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }}
+
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER]: [SUCCESS]: Test ---%s--- done", thisJoinPoint.toString())
+ );
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ this.testMethodName = this.UNKNOWN;
+ this.wasabiCtx = null;
+ this.activeInjectionLocations.clear();
+ }}
+
+ after() throwing (Throwable t): testMethod() {{
+ if (this.wasabiCtx == null) {{ // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }}
+
+ this.wasabiCtx.printExecTrace(this.LOG, String.format(" Test: %s", this.testMethodName));
+
+ StringBuilder exception = new StringBuilder();
+ for (Throwable e = t; e != null; e = e.getCause()) {{
+ exception.append(e);
+ exception.append(" :-: ");
+ }}
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[TEST-AFTER] [FAILURE] Test ---%s--- | Failure message :-: %s| Stack trace:\\n%s\\n:-:-:\\n\\n",
+ thisJoinPoint.toString(), exception.toString(), stackSnapshot.toString())
+ );
+
+ this.testMethodName = this.UNKNOWN;
+ this.activeInjectionLocations.clear();
+ }}
+
+ /*
+ * Callback before calling Thread.sleep(...)
+ */
+
+ pointcut recordThreadSleep():
+ (call(* java.lang.Object.wait(..)) ||
+ call(* java.lang.Thread.sleep(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkNanos(..)) ||
+ call(* java.util.concurrent.locks.LockSupport.parkUntil(..)) ||
+ call(* java.util.concurrent.ScheduledExecutorService.schedule(..)) ||
+ call(* java.util.concurrent.TimeUnit.*scheduledExecutionTime(..)) ||
+ call(* java.util.concurrent.TimeUnit.*sleep(..)) ||
+ call(* java.util.concurrent.TimeUnit.*timedWait(..)) ||
+ call(* java.util.Timer.schedule*(..)) ||
+ call(* java.util.TimerTask.wait(..)) ||
+ call(* org.apache.hadoop.hbase.*.Procedure.suspend(..))) &&
+ !within(edu.uchicago.cs.systems.wasabi.*);
+
+ before() : recordThreadSleep() {{
+ try {{
+ if (this.wasabiCtx == null) {{ // This happens for non-test methods (e.g. config) inside test code
+ return; // Ignore retry in "before" and "after" annotated methods
+ }}
+
+ StackSnapshot stackSnapshot = new StackSnapshot();
+ for (String retryCallerFunction : this.activeInjectionLocations) {{
+ if (stackSnapshot.hasFrame(retryCallerFunction.split("\\\\(", 2)[0])) {{
+ String sleepLocation = String.format("%s(%s:%d)",
+ retryCallerFunction.split("\\\\(", 2)[0],
+ thisJoinPoint.getSourceLocation().getFileName(),
+ thisJoinPoint.getSourceLocation().getLine());
+
+ this.wasabiCtx.addToExecTrace(sleepLocation, OpEntry.THREAD_SLEEP_OP, stackSnapshot);
+ LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_WARN,
+ String.format("[THREAD-SLEEP] Test ---%s--- | Sleep location ---%s--- | Retry location ---%s---\\n",
+ this.testMethodName,
+ sleepLocation,
+ retryCallerFunction.split("\\\\(", 2)[0])
+ );
+ }}
+ }}
+ }} catch (Exception e) {{
+ this.LOG.printMessage(
+ WasabiLogger.LOG_LEVEL_ERROR,
+ String.format("Exception occurred in recordThreadSleep(): %s", e.getMessage())
+ );
+ e.printStackTrace();
+ }}
+ }}
+
+ {pointcut_code}
+}}"""
+
+ return code_template
+
+def main():
+ parser = argparse.ArgumentParser(description="Generate AspectJ code following a particular specification.")
+ parser.add_argument("--spec-file", help="Path to the input specification file")
+ parser.add_argument("--aspect-file", help="Path to the output AspectJ file")
+
+ args = parser.parse_args()
+
+ exception_map = read_spec_file_to_dict(args.spec_file)
+ code = generate_aspectj_code(exception_map)
+
+ with open(args.aspect_file, "w") as f:
+ f.write(code)
+
+if __name__ == "__main__":
+ main()
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/gpt_cost_compute.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/gpt_cost_compute.py
new file mode 100644
index 00000000..45d35eb8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/gpt_cost_compute.py
@@ -0,0 +1,39 @@
+import os
+import sys
+import tiktoken
+
+if len(sys.argv) != 2:
+ print("Usage: python script.py <project_path>")
+ sys.exit(1)
+
+project_path = sys.argv[1]
+ token_price = 0.01 / 1000 # $0.01 per 1k prompt tokens per OpenAI documentation
+number_of_rounds = 5 # number of questions/interactions with GPT per source file
+
+def count_tokens_in_file(file_path):
+ encoder = tiktoken.encoding_for_model("gpt-4")
+ with open(file_path, 'r', encoding='utf-8') as file:
+ content = file.read()
+ tokens = encoder.encode(content)
+ return len(tokens)
+
+def should_include_file(file_path):
+ return not ('/test/' in file_path or file_path.split('/')[-1].startswith('Test'))
+
+def calculate_project_cost(root_dir):
+ total_tokens = 0
+ for subdir, dirs, files in os.walk(root_dir):
+ for file in files:
+ if file.endswith('.java'):
+ file_path = os.path.join(subdir, file)
+ if should_include_file(file_path):
+ file_tokens = count_tokens_in_file(file_path)
+ total_tokens += file_tokens
+ print(f"Processed {file}: {file_tokens} tokens")
+ total_cost = total_tokens * token_price * (0.98 + 0.02 * number_of_rounds)
+ return total_tokens, total_cost
+
+total_tokens, total_cost = calculate_project_cost(project_path)
+
+print(f"Total tokens: {total_tokens}")
+print(f"Total cost at ${token_price*1000} per 1k tokens: ${total_cost:.4f}")
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/maven_tests_count.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/maven_tests_count.py
new file mode 100644
index 00000000..d8c4bfb3
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/maven_tests_count.py
@@ -0,0 +1,35 @@
+import sys
+
+def calculate_test_outcomes_from_file(filename):
+ total_tests_passed = 0
+ total_tests_executed = 0
+ results = False
+
+ with open(filename, 'r') as file:
+ for line in file:
+ if line.startswith("[INFO] Results:"):
+ results = True
+ if results and "Tests run" in line and "Failures" in line and "Errors" in line:
+ tests_run = int(line.split("Tests run: ")[1].split(",")[0])
+ failures = int(line.split("Failures: ")[1].split(",")[0])
+ errors = int(line.split("Errors: ")[1].split(",")[0])
+
+ total_tests_passed += (tests_run - failures - errors)
+ total_tests_executed += tests_run
+
+ results = False
+
+ return total_tests_passed, total_tests_executed
+
+def main():
+ if len(sys.argv) != 2:
+ print("Usage: python script.py <log_file>")
+ sys.exit(1)
+
+ filename = sys.argv[1]
+ total_tests_passed, total_tests_executed = calculate_test_outcomes_from_file(filename)
+ print("Total tests passed:", total_tests_passed)
+ print("Total tests executed:", total_tests_executed)
+
+if __name__ == "__main__":
+ main()
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/prereqs.sh b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/prereqs.sh
new file mode 100755
index 00000000..2a4e0735
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/prereqs.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+sudo apt-get update
+sudo apt-get upgrade -y
+
+sudo apt-get install openjdk-8-jdk -y
+sudo apt-get install openjdk-11-jdk -y
+
+sudo apt-get install python3 -y
+sudo apt-get install maven -y
+sudo apt-get install gradle -y
+sudo apt-get install ant -y
+sudo apt-get install ant-junit -y
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run.bak b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run.bak
new file mode 100644
index 00000000..eeb77017
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run.bak
@@ -0,0 +1,247 @@
+import argparse
+import datetime
+import os
+import shutil
+import subprocess
+import sys
+
+
+""" Evaluation phases
+"""
+def clone_repositories(root_dir: str, benchmark: str):
+ """
+ Clone the necessary repositories and checkout specific versions for the specified benchmarks.
+
+ Arguments:
+ root_dir (str): The root directory of the repository.
+ benchmark (str): The target application to clone.
+ """
+ repos = {
+ "hadoop": ("https://github.com/apache/hadoop.git", "60867de"),
+ "hbase": ("https://github.com/apache/hbase.git", "89ca7f4"),
+ "hive": ("https://github.com/apache/hive.git", "e08a600"),
+ "cassandra": ("https://github.com/apache/cassandra.git", "f0ad7ea"),
+ "elasticsearch": ("https://github.com/elastic/elasticsearch.git", "5ce03f2"),
+ }
+ benchmarks_dir = os.path.join(root_dir, "benchmarks")
+ os.makedirs(benchmarks_dir, exist_ok=True)
+
+ if benchmark in repos:
+ url, version = repos[benchmark]
+ repo_dir = os.path.join(benchmarks_dir, benchmark)
+
+ if os.path.exists(repo_dir):
+ result = run_command(["rm", "-rf", repo_dir], os.getcwd())
+ print(f"[WASABI-HELPER]: [INFO]: Cloning {benchmark} repository from {url}...")
+ result = run_command(["git", "clone", url, repo_dir], os.getcwd())
+ if result is None or result.returncode != 0:
+ print(f"[WASABI-HELPER]: [ERROR]: Error cloning {benchmark}:\n\t{result.stdout}\n\t{result.stderr}")
+ sys.exit(1)
+ print(f"[WASABI-HELPER]: [INFO]: Successfully cloned {benchmark}.")
+
+ print(f"Checking out version {version} for {benchmark}...")
+ result = run_command(["git", "checkout", version], repo_dir)
+ if result is None or result.returncode != 0:
+ print(f"[WASABI-HELPER]: [ERROR]: Error checking out version {version} for {benchmark}:\n\t{result.stdout}\n\t{result.stderr}")
+ sys.exit(1)
+ print(f"[WASABI-HELPER]: [INFO]: Successfully checked out version {version} for {benchmark}.")
+ else:
+ print(f"[WASABI-HELPER]: [WARNING]: Benchmark {benchmark} is not recognized and will be skipped.")
+
+def replace_config_files(root_dir: str, benchmark: str):
+ """
+ Replaces the original build (Maven pom.xml) file with a customized version
+ for each application in the benchmark list.
+
+ Arguments:
+ root_dir (str): The root directory of the repository.
+ benchmark (str): The target application for which to replace the config/build files.
+ """
+ benchmark_dir = os.path.join(root_dir, "benchmarks", benchmark)
+ original_pom_path = os.path.join(benchmark_dir, "pom.xml")
+ backup_pom_path = os.path.join(benchmark_dir, "pom-original.xml")
+ if "hive/standalone-metastore" in benchmark:
+ custom_pom_path = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", "hive", "pom-hive-standalone-metastore.xml")
+ else:
+ custom_pom_path = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", benchmark, f"pom-{benchmark}.xml")
+ new_pom_path = os.path.join(benchmark_dir, "pom.xml")
+
+ if os.path.exists(backup_pom_path):
+ print(f"[WASABI-HELPER]: [INFO]: Backup pom-original.xml already exists for {benchmark}. Skipping renaming.")
+ else:
+ if os.path.exists(original_pom_path):
+ shutil.move(original_pom_path, backup_pom_path)
+ print(f"[WASABI-HELPER]: [INFO]: Renamed {original_pom_path} to {backup_pom_path}.")
+ else:
+ print(f"[WASABI-HELPER]: [INFO]: Original pom.xml not found for {benchmark}. Skipping renaming.")
+
+ if os.path.exists(custom_pom_path):
+ shutil.copy(custom_pom_path, new_pom_path)
+ print(f"[WASABI-HELPER]: [INFO]: Copied {custom_pom_path} to {new_pom_path}.")
+ else:
+ print(f"[WASABI-HELPER]: [ERROR]: Customized {custom_pom_path} not found for {benchmark}. Skipping copy.")
+
+def rewrite_source_code(root_dir: str, benchmark: str, mode: str):
+ """
+ Rewrites retry related bounds -- either retry thresholds or test timeouts.
+
+ Arguments:
+ root_dir (str): The root directory of the repository.
+ benchmark (str): The target application for which to replace the pom.xml.
+ mode (str): The type of source rewriting -- retry bounds or timeout values.
+ """
+ benchmark_dir = os.path.join(root_dir, "benchmarks", benchmark)
+ if mode == "bounds-rewriting":
+ config_file = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", benchmark, f"{benchmark}_retry_bounds.data")
+ elif mode == "timeout-rewriting":
+ config_file = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", benchmark, f"{benchmark}_timeout_bounds.data")
+ else:
+ print(f"[WASABI-HELPER]: [ERROR]: Bad arguments provided to source_rewriter.py.")
+ return
+
+ cmd = ["python3", "source_rewriter.py", "--mode", mode, config_file, benchmark_dir]
+ result = run_command(cmd, os.getcwd())
+
+ if result is None or result.returncode != 0:
+ print(f"[WASABI-HELPER]: [ERROR]: Rewriting retry-related bounds failed:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ print(f"[WASABI-HELPER]: [INFO]: Successfully rewrote retry-related bounds. Status: {result.returncode}")
+
+
+def run_fault_injection(target: str):
+ """
+ Run the run_benchmark.py script for a specific application.
+
+ Arguments:
+ target (str): The name of the application.
+ """
+
+ cmd = ["python3", "run_benchmark.py", "--benchmark", target]
+ result = run_command(cmd, os.getcwd())
+ if result is None or result.returncode != 0:
+ print(f"[WASABI-HELPER]: [ERROR]: Command to run run_benchmark.py on {target} failed with error message:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ print(f"[WASABI-HELPER]: [INFO]: Finished running test suite for {target}. Status: {result.returncode}")
+
+
+def run_bug_oracles(root_dir: str, target: str):
+ """
+ Runs bug oracels over a set of test and build reports.
+
+ Parameters:
+ root_dir (str): The root directory where the results for the target are located.
+ target (str): The name of the application.
+ """
+ target_root_dir = os.path.join(root_dir, "results", target)
+ csv_file = os.path.join(target_root_dir, f"{target}-bugs-per-test.csv")
+ if os.path.exists(csv_file):
+ cmd = ["rm", "-f", csv_file]
+ result = run_command(cmd, os.getcwd())
+
+ if result is None or result.returncode != 0:
+ print(f"[WASABI-HELPER]: [ERROR]: Command to remove {csv_file} failed:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ print(f"[WASABI-HELPER]: [INFO]: Removed {csv_file}. Status: {result.returncode}")
+
+ for item in os.listdir(target_root_dir):
+ item_path = os.path.join(target_root_dir, item)
+ if os.path.isdir(item_path):
+ cmd = ["python3", "bug_oracles.py", item_path, "--benchmark", target]
+ result = run_command(cmd, os.getcwd())
+ if result:
+ print(result.stdout)
+
+ if result is None or result.returncode != 0:
+ print(f"[WASABI-HELPER]: [ERROR]: Command to run bug_oracles.py on {item_path} failed with error message:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ print(f"[WASABI-HELPER]: [INFO]: Finished processing {item_path}. Status: {result.returncode}")
+
+
+""" Helper functions
+"""
+def run_command(cmd: list[str], cwd: str):
+ """
+ Run a command in a subprocess and display the output in real-time.
+
+ Arguments:
+ cmd (list): The command to run.
+ cwd (str): The working directory.
+
+ Returns:
+ CompletedProcess: The result of the command execution.
+ """
+ process = subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+
+ stdout_lines = []
+ stderr_lines = []
+
+ try:
+ for stdout_line in iter(process.stdout.readline, ""):
+ stdout_lines.append(stdout_line)
+ print(stdout_line, end="")
+
+ process.stdout.close()
+ process.wait()
+
+ stderr_lines = process.stderr.readlines()
+ process.stderr.close()
+
+ return subprocess.CompletedProcess(cmd, process.returncode, ''.join(stdout_lines), ''.join(stderr_lines))
+ except Exception as e:
+ process.kill()
+ raise e
+
+def display_phase(phase: str, benchmark: str):
+ """
+ Prints a "stylized" message indicating the current phase.
+
+ Arguments:
+ phase (str): The name of the phase to display.
+ benchmark (str): The name of the benchmark being processed.
+ """
+ phase_text = f" {benchmark}: {phase} "
+ border_line = "*" * (len(phase_text) + 4)
+ inner_line = "*" + " " * (len(phase_text) + 2) + "*"
+ print(f"\n{border_line}")
+ print(f"{inner_line}")
+ print(f"*{phase_text.center(len(border_line) - 2)}*")
+ print(f"{inner_line}")
+ print(f"{border_line}\n")
+
+
+""" Main
+"""
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--phase", choices=["setup", "prep", "bug-triggering", "bug-oracles", "all"], required=True, help="The pipeline phase to run")
+ parser.add_argument("--benchmark", choices=["hadoop", "hbase", "hive", "cassandra", "elasticsearch"], required=True, help="The benchmark to run")
+ args = parser.parse_args()
+
+ wasabi_root_dir = os.getenv("WASABI_ROOT_DIR")
+ if not wasabi_root_dir:
+ print("[WASABI-HELPER]: [ERROR]: The WASABI_ROOT_DIR environment variable is not set.")
+ sys.exit(1)
+ repo_root_dir = os.path.join(wasabi_root_dir, "..")
+
+ if args.phase == "setup" or args.phase == "all":
+ display_phase("setup", args.benchmark)
+ clone_repositories(repo_root_dir, args.benchmark)
+
+ if args.phase == "prep" or args.phase == "all":
+ display_phase("code preparation", args.benchmark)
+ replace_config_files(repo_root_dir, args.benchmark)
+ if args.benchmark == "hive":
+ replace_config_files(repo_root_dir, os.path.join(args.benchmark, "standalone-metastore"))
+ rewrite_source_code(repo_root_dir, args.benchmark, "bounds-rewriting")
+ rewrite_source_code(repo_root_dir, args.benchmark, "timeout-rewriting")
+
+ if args.phase == "bug-triggering" or args.phase == "all":
+ display_phase("bug triggering", args.benchmark)
+ run_fault_injection(args.benchmark)
+
+ if args.phase == "bug-oracles" or args.phase == "all":
+ display_phase("bug oracles", args.benchmark)
+ run_bug_oracles(repo_root_dir, args.benchmark)
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run.py
new file mode 100644
index 00000000..5cc2dd04
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run.py
@@ -0,0 +1,255 @@
+import argparse
+import datetime
+import logging
+import os
+import shutil
+import subprocess
+import sys
+
+
+logging.basicConfig(
+ level=logging.INFO,
+ format="%(asctime)s [%(levelname)s] %(message)s",
+)
+
+
+""" Evaluation phases
+"""
+def clone_repositories(root_dir: str, benchmark: str):
+ """
+ Clone the necessary repositories and checkout specific versions for the specified benchmarks.
+
+ Arguments:
+ root_dir (str): The root directory of the repository.
+ benchmark (str): The target application to clone.
+ """
+ repos = {
+ "hadoop": ("https://github.com/apache/hadoop.git", "60867de"),
+ "hbase": ("https://github.com/apache/hbase.git", "89ca7f4"),
+ "hive": ("https://github.com/apache/hive.git", "e08a600"),
+ "cassandra": ("https://github.com/apache/cassandra.git", "f0ad7ea"),
+ "elasticsearch": ("https://github.com/elastic/elasticsearch.git", "5ce03f2"),
+ }
+ benchmarks_dir = os.path.join(root_dir, "benchmarks")
+ os.makedirs(benchmarks_dir, exist_ok=True)
+
+ if benchmark in repos:
+ url, version = repos[benchmark]
+ repo_dir = os.path.join(benchmarks_dir, benchmark)
+
+ if os.path.exists(repo_dir):
+ result = run_command(["rm", "-rf", repo_dir], os.getcwd())
+ logging.info(f"[WASABI-HELPER]: [INFO]: Cloning {benchmark} repository from {url}...")
+ result = run_command(["git", "clone", url, repo_dir], os.getcwd())
+ if result is None or result.returncode != 0:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Error cloning {benchmark}:\n\t{result.stdout}\n\t{result.stderr}")
+ sys.exit(1)
+ logging.info(f"[WASABI-HELPER]: [INFO]: Successfully cloned {benchmark}.")
+
+ logging.info(f"Checking out version {version} for {benchmark}...")
+ result = run_command(["git", "checkout", version], repo_dir)
+ if result is None or result.returncode != 0:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Error checking out version {version} for {benchmark}:\n\t{result.stdout}\n\t{result.stderr}")
+ sys.exit(1)
+ logging.info(f"[WASABI-HELPER]: [INFO]: Successfully checked out version {version} for {benchmark}.")
+ else:
+ logging.info(f"[WASABI-HELPER]: [WARNING]: Benchmark {benchmark} is not recognized and will be skipped.")
+
+def replace_config_files(root_dir: str, benchmark: str):
+ """
+ Replaces the original build (Maven pom.xml) file with a customized version
+ for each application in the benchmark list.
+
+ Arguments:
+ root_dir (str): The root directory of the repository.
+ benchmark (str): The target application for which to replace the config/build files.
+ """
+ benchmark_dir = os.path.join(root_dir, "benchmarks", benchmark)
+ original_pom_path = os.path.join(benchmark_dir, "pom.xml")
+ backup_pom_path = os.path.join(benchmark_dir, "pom-original.xml")
+ if "hive/standalone-metastore" in benchmark:
+ custom_pom_path = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", "hive", "pom-hive-standalone-metastore.xml")
+ else:
+ custom_pom_path = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", benchmark, f"pom-{benchmark}.xml")
+ new_pom_path = os.path.join(benchmark_dir, "pom.xml")
+
+ if os.path.exists(backup_pom_path):
+ logging.info(f"[WASABI-HELPER]: [INFO]: Backup pom-original.xml already exists for {benchmark}. Skipping renaming.")
+ else:
+ if os.path.exists(original_pom_path):
+ shutil.move(original_pom_path, backup_pom_path)
+ logging.info(f"[WASABI-HELPER]: [INFO]: Renamed {original_pom_path} to {backup_pom_path}.")
+ else:
+ logging.info(f"[WASABI-HELPER]: [INFO]: Original pom.xml not found for {benchmark}. Skipping renaming.")
+
+ if os.path.exists(custom_pom_path):
+ shutil.copy(custom_pom_path, new_pom_path)
+ logging.info(f"[WASABI-HELPER]: [INFO]: Copied {custom_pom_path} to {new_pom_path}.")
+ else:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Customized {custom_pom_path} not found for {benchmark}. Skipping copy.")
+
+def rewrite_source_code(root_dir: str, benchmark: str, mode: str):
+ """
+ Rewrites retry-related bounds -- either retry thresholds or test timeouts.
+
+ Arguments:
+ root_dir (str): The root directory of the repository.
+ benchmark (str): The target application whose sources to rewrite.
+ mode (str): The type of source rewriting -- retry bounds or timeout values.
+ """
+ benchmark_dir = os.path.join(root_dir, "benchmarks", benchmark)
+ if mode == "bounds-rewriting":
+ config_file = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", benchmark, f"{benchmark}_retry_bounds.data")
+ elif mode == "timeout-rewriting":
+ config_file = os.path.join(root_dir, "wasabi", "wasabi-testing", "config", benchmark, f"{benchmark}_timeout_bounds.data")
+ else:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Bad arguments provided to source_rewriter.py.")
+ return
+
+ cmd = ["python3", "source_rewriter.py", "--mode", mode, config_file, benchmark_dir]
+ result = run_command(cmd, os.getcwd())
+
+ if result is None or result.returncode != 0:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Rewriting retry-related bounds failed:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ logging.info(f"[WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: {result.returncode}")
+
+
+def run_fault_injection(target: str):
+ """
+ Run the run_benchmark.py script for a specific application.
+
+ Arguments:
+ target (str): The name of the application whose test
+ suite to run under fault injection.
+ """
+
+ cmd = ["python3", "run_benchmark.py", "--benchmark", target]
+ result = run_command(cmd, os.getcwd())
+ if result is None or result.returncode != 0:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Command to run run_benchmark.py on {target} failed with error message:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ logging.info(f"[WASABI-HELPER]: [INFO]: Finished running test suite for {target}. Status: {result.returncode}")
+
+
+def run_bug_oracles(root_dir: str, target: str):
+ """
+ Runs bug oracles over a set of test and build reports.
+
+ Parameters:
+ root_dir (str): The root directory where the results for the target are located.
+ target (str): The name of the application.
+ """
+ target_root_dir = os.path.join(root_dir, "results", target)
+ csv_file = os.path.join(target_root_dir, f"{target}-bugs-per-test.csv")
+ if os.path.exists(csv_file):
+ cmd = ["rm", "-f", csv_file]
+ result = run_command(cmd, os.getcwd())
+
+ if result is None or result.returncode != 0:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Command to remove {csv_file} failed:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ logging.info(f"[WASABI-HELPER]: [INFO]: Removed {csv_file}. Status: {result.returncode}")
+
+ for item in os.listdir(target_root_dir):
+ item_path = os.path.join(target_root_dir, item)
+ if os.path.isdir(item_path):
+ cmd = ["python3", "bug_oracles.py", item_path, "--benchmark", target]
+ result = run_command(cmd, os.getcwd())
+ if result:
+ logging.info(result.stdout)
+
+ if result is None or result.returncode != 0:
+ logging.info(f"[WASABI-HELPER]: [ERROR]: Command to run bug_oracles.py on {item_path} failed with error message:\n\t{result.stdout}\n\t{result.stderr}")
+ else:
+ logging.info(f"[WASABI-HELPER]: [INFO]: Finished processing {item_path}. Status: {result.returncode}")
+
+
+""" Helper functions
+"""
+def run_command(cmd: list[str], cwd: str):
+ """
+ Run a command in a subprocess and display the output in real-time.
+
+ Arguments:
+ cmd (list): The command to run.
+ cwd (str): The working directory.
+
+ Returns:
+ CompletedProcess: The result of the command execution.
+ """
+ process = subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+
+ stdout_lines = []
+ stderr_lines = []
+
+ try:
+ for stdout_line in iter(process.stdout.readline, ""):
+ stdout_lines.append(stdout_line)
+ logging.info(stdout_line)
+
+ process.stdout.close()
+ process.wait()
+
+ stderr_lines = process.stderr.readlines()
+ process.stderr.close()
+
+ return subprocess.CompletedProcess(cmd, process.returncode, ''.join(stdout_lines), ''.join(stderr_lines))
+ except Exception as e:
+ process.kill()
+ raise e
+
+def display_phase(phase: str, benchmark: str):
+ """
+ Logs a stylized banner with the benchmark name and the current phase.
+
+ Arguments:
+ phase (str): The name of the phase to display.
+ """
+ phase_text = f" {benchmark}: {phase} "
+ border_line = "*" * (len(phase_text) + 4)
+ inner_line = "*" + " " * (len(phase_text) + 2) + "*"
+ logging.info("\n")
+ logging.info(f"{border_line}")
+ logging.info(f"{inner_line}")
+ logging.info(f"*{phase_text.center(len(border_line) - 2)}*")
+ logging.info(f"{inner_line}")
+ logging.info(f"{border_line}\n")
+
+
+""" Main
+"""
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--phase", choices=["setup", "prep", "bug-triggering", "bug-oracles", "all"], required=True, help="The pipeline phase to run")
+ parser.add_argument("--benchmark", choices=["hadoop", "hbase", "hive", "cassandra", "elasticsearch"], required=True, help="The benchmark to run")
+ args = parser.parse_args()
+
+ wasabi_root_dir = os.getenv("WASABI_ROOT_DIR")
+ if not wasabi_root_dir:
+ logging.info("[WASABI-HELPER]: [ERROR]: The WASABI_ROOT_DIR environment variable is not set.")
+ sys.exit(1)
+ repo_root_dir = os.path.join(wasabi_root_dir, "..")
+
+ if args.phase == "setup" or args.phase == "all":
+ display_phase("setup", args.benchmark)
+ clone_repositories(repo_root_dir, args.benchmark)
+
+ if args.phase == "prep" or args.phase == "all":
+ display_phase("code preparation", args.benchmark)
+ replace_config_files(repo_root_dir, args.benchmark)
+ if args.benchmark == "hive":
+ replace_config_files(repo_root_dir, os.path.join(args.benchmark, "standalone-metastore"))
+ rewrite_source_code(repo_root_dir, args.benchmark, "bounds-rewriting")
+ rewrite_source_code(repo_root_dir, args.benchmark, "timeout-rewriting")
+
+ if args.phase == "bug-triggering" or args.phase == "all":
+ display_phase("bug triggering", args.benchmark)
+ run_fault_injection(args.benchmark)
+
+ if args.phase == "bug-oracles" or args.phase == "all":
+ display_phase("Bug oracles", args.benchmark)
+ run_bug_oracles(repo_root_dir, args.benchmark)
+
+if __name__ == "__main__":
+ main()
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run_benchmark.bak b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run_benchmark.bak
new file mode 100644
index 00000000..6e5b4d87
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run_benchmark.bak
@@ -0,0 +1,268 @@
+import argparse
+from collections import deque
+import datetime
+import glob
+import os
+import re
+import shutil
+import subprocess
+import time
+import sys
+
+
+LOG_FILE_NAME = "wasabi-install.log"
+TIMEOUT = 3600
+
+
+def run_command_with_timeout(cmd: list[str], dir_path: str):
+ """
+ Run a command with a timeout of {TIMEOUT} seconds.
+
+ Parameters:
+ cmd (list): The command to run as a list of arguments.
+ timeout (int): The timeout in seconds.
+
+ Returns:
+ subprocess.CompletedProcess: The result of the command execution.
+ """
+ try:
+ result = subprocess.run(cmd, cwd=dir_path, shell=False, capture_output=True, timeout=TIMEOUT)
+ return result
+ except subprocess.TimeoutExpired:
+ return None
+
+
+def get_conf_files(config_dir: str):
+ """
+ Find all config files (extension ".conf").
+
+ Parameters:
+ config_dir (str): The path of the config directory.
+
+ Returns:
+ list: A list of strings containing the paths of the ".conf" files.
+ """
+ return glob.glob(os.path.join(config_dir, "*.conf"))
+
+
+def get_test_file_name(config_file: str):
+ """
+ Extracts the test name from its corresponding config file.
+
+ Parameters:
+ config_file (str): The path of the config file.
+
+ Returns:
+ str: The path of the log file for the config file.
+ """
+ test_name = re.search(r"retry_locations-(.+?)\.conf", config_file).group(1)
+
+ return test_name
+
+
+def get_log_file_name(target_root_dir: str, test_path: str):
+ """
+ Constructs the log file name from the config file.
+
+ Parameters:
+ target_root_dir (str): The path of the config directory.
+ config_file (str): The path of the config file.
+
+ Returns:
+ str: The path of the log file for the config file.
+ """
+ test_name = get_test_file_name(test_path)
+ log_file_name = f"build-{test_name}.log"
+ return os.path.join(target_root_dir, log_file_name)
+
+
+def build_target(target: str, target_root_dir: str, wasabi_arg: str = None):
+ """
+ Build a target application.
+
+ Parameters:
+ target (str): The name of the target application.
+ target_root_dir (str): The path of the target root directory.
+ arg (str): The path of the log file.
+ """
+ if target == "wasabi":
+ cmd = ["mvn", "clean", "install", "-fn", "-B", "-U", "-DskipTests", f"-Dinstrumentation.target={wasabi_arg}"]
+ elif target == "hive":
+ cmd = ["mvn", "clean", "install", "-Pdist", "-fn", "-Drat.numUnapprovedLicenses=20000", "-B", "-U", "-DskipTests"]
+ elif target == "cassandra":
+ cmd = ["ant"]
+ elif target == "elasticsearch":
+ cmd = ["./gradlew", "clean", "publishToMavenLocal", "-x", "test"]
+ else:
+ cmd = ["mvn", "clean", "install", "-fn", "-B", "-U", "-DskipTests"]
+
+ print("// -------------------------------------------------------------------------- //")
+ print(f"Active directory: {target_root_dir}")
+ print(f"Command: {' '.join(cmd)}", flush=True)
+
+ result = subprocess.run(cmd, cwd=target_root_dir, shell=False, capture_output=True)
+
+ print(f"Status: {result.returncode}", flush=True)
+ print("// -------------------------------------------------------------------------- //\n")
+
+ log_file_path = os.path.join(target_root_dir, LOG_FILE_NAME)
+ with open(log_file_path, "a", encoding="utf-8") as outfile:
+ outfile.write(result.stdout.decode('utf-8'))
+ outfile.write((result.stderr.decode('utf-8')))
+
+
+def run_test_suite(target: str, target_root_dir: str, args: str):
+ """
+ Run test suite for a target application.
+
+ Parameters:
+ target (str): The name of the target application.
+ conf_files (list): A list of strings containing the paths of the ".conf" files.
+ args (str): A set of arguments to be added to the command.
+
+ Returns:
+ list: A list of tuples containing the outcome and duration of each thread.
+ """
+ cmd_queue = deque()
+ for config_file, test_name in args:
+ cmd_queue.append((config_file, test_name))
+
+ total_cmds = len(cmd_queue)
+ counter = 0
+ while cmd_queue:
+ counter += 1
+
+ config_file, test_name = cmd_queue.popleft()
+ log_file = get_log_file_name(target_root_dir, config_file)
+
+ if target == "hive":
+ cmd = ["mvn", "surefire:test", "-B", "-Drat.numUnapprovedLicenses=20000", f"-DconfigFile={config_file}", f"-Dtest={test_name}", "-fn"]
+ elif target == "cassandra":
+ cmd = ["ant", f"-Dtest={test_name}", "test"]
+ elif target == "elasticsearch":
+ cmd = ["./gradlew", f"test --tests {test_name}", f"-DconfigFile={config_file}"]
+ else:
+ cmd = ["mvn", "surefire:test", "-B", f"-DconfigFile={config_file}", f"-Dtest={test_name}", "-fn"]
+
+ print("// -------------------------------------------------------------------------- //")
+ print(f"Job count: {counter} / {total_cmds}")
+ print(f"Command: {' '.join(cmd)}")
+ print(f"Active directory: {target_root_dir}")
+ print(f"Config file: {config_file}")
+ print(f"Log file: {log_file}", flush=True)
+
+ result = run_command_with_timeout(cmd, target_root_dir)
+
+ if result is not None:
+ print(f"Status: {result.returncode}", flush=True)
+ print("// -------------------------------------------------------------------------- //\n")
+
+ with open(log_file, "a", encoding="utf-8") as outfile:
+ outfile.write(result.stdout.decode('utf-8'))
+ outfile.write(result.stderr.decode('utf-8'))
+
+ else:
+ print(f"Status: timeout -- TimeoutExpired exception", flush=True)
+ print("// -------------------------------------------------------------------------- //\n")
+
+
+def cleanup(build_system: str):
+ """
+ Clean up of local package directory.
+ """
+
+ if build_system == "maven" or build_system == "gradle":
+ package_dir = os.path.expanduser("~/.m2")
+ elif build_system == "ant":
+ package_dir = os.path.expanduser("~/.ivy2")
+
+ cmd = ["rm", "-rf", package_dir]
+
+ print("// -------------------------------------------------------------------------- //")
+ print(f"Command: {' '.join(cmd)}", flush=True)
+
+ result = run_command_with_timeout(cmd, dir_path=os.path.expanduser("~"))
+
+ if result is None:
+ print(f"Command timed out while trying to remove {package_dir}.", flush=True)
+ else:
+ print(f"Status: {result.returncode}", flush=True)
+ print("// -------------------------------------------------------------------------- //\n")
+
+
+def save_log_files(target_app: str, wasabi_root_dir: str):
+ """
+ Save test and build log files to a separate directory.
+
+ Parameters:
+ wasabi_root_dir (str): The path of the Wasabi root directory.
+ target_app (str): The target application name for which logs will be saved.
+ """
+ wasabi_results_dir = os.path.join(wasabi_root_dir, "..", "results", target_app)
+ target_root_dir = os.path.join(wasabi_root_dir, "..", "benchmarks", target_app)
+
+ date = datetime.datetime.now().strftime("%Y%m%d%H%M")
+
+ # Save test reports
+ test_reports_dir = os.path.join(wasabi_results_dir, date, "test-reports")
+ os.makedirs(test_reports_dir, exist_ok=True)
+ for dirpath, _, files in os.walk(target_root_dir):
+ for file in files:
+ if file.endswith("-output.txt"):
+ output_file = os.path.join(dirpath, file)
+ shutil.copy(output_file, os.path.join(test_reports_dir, f"{date}.{file}"))
+
+ # Save build reports
+ build_reports_dir = os.path.join(wasabi_results_dir, date, "build-reports")
+ os.makedirs(build_reports_dir, exist_ok=True)
+ for file in os.listdir(target_root_dir):
+ if file.startswith("build-") and file.endswith(".log"):
+ output_file = os.path.join(target_root_dir, file)
+ shutil.copy(output_file, os.path.join(build_reports_dir, f"{date}.{file}"))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--benchmark", choices=["hadoop", "hbase", "hive", "cassandra", "elasticsearch"], required=True, help="The benchmark to run")
+ args = parser.parse_args()
+
+ wasabi_root_dir = os.getenv("WASABI_ROOT_DIR")
+ if not wasabi_root_dir:
+ print("[WASABI-HELPER]: [ERROR]: The WASABI_ROOT_DIR environment variable is not set.")
+ sys.exit(1)
+
+ target_root_dir = os.path.join(wasabi_root_dir, "..", "benchmarks", args.benchmark)
+ config_dir = os.path.join(wasabi_root_dir, "wasabi-testing", "config", args.benchmark, "test-plan")
+
+ conf_files = get_conf_files(config_dir)
+ test_names = [get_test_file_name(config_file) for config_file in conf_files]
+ configs = [(conf_file, test_name) for conf_file, test_name in zip(conf_files, test_names)]
+
+ # Cleanup old packages
+ if args.benchmark == "cassandra":
+ cleanup("ant")
+ elif args.benchmark == "elasticsearch":
+ cleanup("gradle")
+ else:
+ cleanup("maven")
+
+ # Build and install WASABI
+ build_target("wasabi", os.path.join(wasabi_root_dir, "wasabi-testing"), args.benchmark)
+
+ # Build and install the target application
+ build_target(args.benchmark, target_root_dir)
+
+ start_time = time.perf_counter()
+
+ # Run the test suite of the target application
+ run_test_suite(args.benchmark, target_root_dir, configs)
+
+ end_time = time.perf_counter()
+ print(f"\n\n// -------------------------------------------------------------------------- //")
+ print(f"End-to-end running time: {end_time - start_time} secs")
+
+ # Save logs to a separate directory
+ save_log_files(args.benchmark, wasabi_root_dir)
+
+if __name__ == "__main__":
+ main()
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run_benchmark.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run_benchmark.py
new file mode 100644
index 00000000..00d24eb8
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/run_benchmark.py
@@ -0,0 +1,275 @@
+import argparse
+from collections import deque
+import datetime
+import glob
+import logging
+import os
+import re
+import shutil
+import subprocess
+import time
+import sys
+
+
+logging.basicConfig(
+ level=logging.INFO,
+ format="%(asctime)s [%(levelname)s] %(message)s",
+)
+
+
+LOG_FILE_NAME = "wasabi-install.log"
+TIMEOUT = 3600
+
+
+def run_command_with_timeout(cmd: list[str], dir_path: str):
+ """
+ Run a command with a timeout of TIMEOUT seconds.
+
+ Parameters:
+ cmd (list): The command to run as a list of arguments.
+ dir_path (str): The working directory in which to run the command.
+
+ Returns:
+ subprocess.CompletedProcess: The command result, or None on timeout.
+ """
+ try:
+ result = subprocess.run(cmd, cwd=dir_path, shell=False, capture_output=True, timeout=TIMEOUT)
+ return result
+ except subprocess.TimeoutExpired:
+ return None
+
+
+def get_conf_files(config_dir: str):
+ """
+ Find all config files (extension ".conf").
+
+ Parameters:
+ config_dir (str): The path of the config directory.
+
+ Returns:
+ list: A list of strings containing the paths of the ".conf" files.
+ """
+ return glob.glob(os.path.join(config_dir, "*.conf"))
+
+
+def get_test_file_name(config_file: str):
+ """
+ Extracts the test name from its corresponding config file.
+
+ Parameters:
+ config_file (str): The path of the config file.
+
+ Returns:
+ str: The test name extracted from the config file name.
+ """
+ test_name = re.search(r"retry_locations-(.+?)\.conf", config_file).group(1)
+
+ return test_name
+
+
+def get_log_file_name(target_root_dir: str, test_path: str):
+ """
+ Constructs the log file name from the config file.
+
+ Parameters:
+ target_root_dir (str): The root directory of the target application.
+ test_path (str): The path of the config file for the test.
+
+ Returns:
+ str: The path of the log file for the config file.
+ """
+ test_name = get_test_file_name(test_path)
+ log_file_name = f"build-{test_name}.log"
+ return os.path.join(target_root_dir, log_file_name)
+
+
+def build_target(target: str, target_root_dir: str, wasabi_arg: str = None):
+ """
+ Build a target application.
+
+ Parameters:
+ target (str): The name of the target application.
+ target_root_dir (str): The path of the target root directory.
+ wasabi_arg (str): The instrumentation target passed to the WASABI build (only used when target is "wasabi").
+ """
+ if target == "wasabi":
+ cmd = ["mvn", "clean", "install", "-fn", "-B", "-U", "-DskipTests", f"-Dinstrumentation.target={wasabi_arg}"]
+ elif target == "hive":
+ cmd = ["mvn", "clean", "install", "-Pdist", "-fn", "-Drat.numUnapprovedLicenses=20000", "-B", "-U", "-DskipTests"]
+ elif target == "cassandra":
+ cmd = ["ant"]
+ elif target == "elasticsearch":
+ cmd = ["./gradlew", "clean", "publishToMavenLocal", "-x", "test"]
+ else:
+ cmd = ["mvn", "clean", "install", "-fn", "-B", "-U", "-DskipTests"]
+
+ logging.info("// -------------------------------------------------------------------------- //")
+ logging.info(f"Active directory: {target_root_dir}")
+ logging.info(f"Command: {' '.join(cmd)}")
+
+ result = subprocess.run(cmd, cwd=target_root_dir, shell=False, capture_output=True)
+
+ logging.info(f"Status: {result.returncode}")
+ logging.info("// -------------------------------------------------------------------------- //\n")
+
+ log_file_path = os.path.join(target_root_dir, LOG_FILE_NAME)
+ with open(log_file_path, "a", encoding="utf-8") as outfile:
+ outfile.write(result.stdout.decode('utf-8'))
+ outfile.write((result.stderr.decode('utf-8')))
+
+
+def run_test_suite(target: str, target_root_dir: str, args: list):
+ """
+ Run test suite for a target application.
+
+ Parameters:
+ target (str): The name of the target application.
+ target_root_dir (str): The path of the target root directory.
+ args (list): A list of (config_file, test_name) tuples to run.
+
+ Returns:
+ None. Results are written to per-test build log files.
+ """
+ cmd_queue = deque()
+ for config_file, test_name in args:
+ cmd_queue.append((config_file, test_name))
+
+ total_cmds = len(cmd_queue)
+ counter = 0
+ while cmd_queue:
+ counter += 1
+
+ config_file, test_name = cmd_queue.popleft()
+ log_file = get_log_file_name(target_root_dir, config_file)
+
+ if target == "hive":
+ cmd = ["mvn", "surefire:test", "-B", "-Drat.numUnapprovedLicenses=20000", f"-DconfigFile={config_file}", f"-Dtest={test_name}", "-fn"]
+ elif target == "cassandra":
+ cmd = ["ant", f"-Dtest={test_name}", "test"]
+ elif target == "elasticsearch":
+ cmd = ["./gradlew", "test", "--tests", test_name, f"-DconfigFile={config_file}"]
+ else:
+ cmd = ["mvn", "surefire:test", "-B", f"-DconfigFile={config_file}", f"-Dtest={test_name}", "-fn"]
+
+ logging.info("// -------------------------------------------------------------------------- //")
+ logging.info(f"Job count: {counter} / {total_cmds}")
+ logging.info(f"Command: {' '.join(cmd)}")
+ logging.info(f"Active directory: {target_root_dir}")
+ logging.info(f"Config file: {config_file}")
+ logging.info(f"Log file: {log_file}")
+
+ result = run_command_with_timeout(cmd, target_root_dir)
+
+ if result is not None:
+ logging.info(f"Status: {result.returncode}")
+ logging.info("// -------------------------------------------------------------------------- //\n")
+
+ with open(log_file, "a", encoding="utf-8") as outfile:
+ outfile.write(result.stdout.decode('utf-8'))
+ outfile.write(result.stderr.decode('utf-8'))
+
+ else:
+ logging.info("Status: timeout -- TimeoutExpired exception")
+ logging.info("// -------------------------------------------------------------------------- //\n")
+
+
+def cleanup(build_system: str):
+ """
+ Clean up the local package directory for the given build system.
+ """
+
+ if build_system == "ant":
+ package_dir = os.path.expanduser("~/.ivy2")
+ else:
+ # maven and gradle builds both publish to the local ~/.m2 repository
+ package_dir = os.path.expanduser("~/.m2")
+ cmd = ["rm", "-rf", package_dir]
+
+ logging.info("// -------------------------------------------------------------------------- //")
+ logging.info(f"Command: {' '.join(cmd)}")
+
+ result = run_command_with_timeout(cmd, dir_path=os.path.expanduser("~"))
+
+ if result is None:
+ logging.info(f"Command timed out while trying to remove {package_dir}.")
+ else:
+ logging.info(f"Status: {result.returncode}")
+ logging.info("// -------------------------------------------------------------------------- //\n")
+
+
+def save_log_files(target_app: str, wasabi_root_dir: str):
+ """
+ Save test and build log files to a separate directory.
+
+ Parameters:
+ wasabi_root_dir (str): The path of the Wasabi root directory.
+ target_app (str): The target application name for which logs will be saved.
+ """
+ wasabi_results_dir = os.path.join(wasabi_root_dir, "..", "results", target_app)
+ target_root_dir = os.path.join(wasabi_root_dir, "..", "benchmarks", target_app)
+
+ date = datetime.datetime.now().strftime("%Y%m%d%H%M")
+
+ # Save test reports
+ test_reports_dir = os.path.join(wasabi_results_dir, date, "test-reports")
+ os.makedirs(test_reports_dir, exist_ok=True)
+ for dirpath, _, files in os.walk(target_root_dir):
+ for file in files:
+ if file.endswith("-output.txt"):
+ output_file = os.path.join(dirpath, file)
+ shutil.copy(output_file, os.path.join(test_reports_dir, f"{date}.{file}"))
+
+ # Save build reports
+ build_reports_dir = os.path.join(wasabi_results_dir, date, "build-reports")
+ os.makedirs(build_reports_dir, exist_ok=True)
+ for file in os.listdir(target_root_dir):
+ if file.startswith("build-") and file.endswith(".log"):
+ output_file = os.path.join(target_root_dir, file)
+ shutil.copy(output_file, os.path.join(build_reports_dir, f"{date}.{file}"))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--benchmark", choices=["hadoop", "hbase", "hive", "cassandra", "elasticsearch"], required=True, help="The benchmark to run")
+ args = parser.parse_args()
+
+ wasabi_root_dir = os.getenv("WASABI_ROOT_DIR")
+ if not wasabi_root_dir:
+ logging.info("[WASABI-HELPER]: [ERROR]: The WASABI_ROOT_DIR environment variable is not set.")
+ sys.exit(1)
+
+ target_root_dir = os.path.join(wasabi_root_dir, "..", "benchmarks", args.benchmark)
+ config_dir = os.path.join(wasabi_root_dir, "wasabi-testing", "config", args.benchmark, "test-plan")
+
+ conf_files = get_conf_files(config_dir)
+ test_names = [get_test_file_name(config_file) for config_file in conf_files]
+ configs = [(conf_file, test_name) for conf_file, test_name in zip(conf_files, test_names)]
+
+ # Cleanup old packages
+ if args.benchmark == "cassandra":
+ cleanup("ant")
+ elif args.benchmark == "elasticsearch":
+ cleanup("gradle")
+ else:
+ cleanup("maven")
+
+ # Build and install WASABI
+ build_target("wasabi", os.path.join(wasabi_root_dir, "wasabi-testing"), args.benchmark)
+
+ # Build and install the target application
+ build_target(args.benchmark, target_root_dir)
+
+ start_time = time.perf_counter()
+
+ # Run the test suite of the target application
+ run_test_suite(args.benchmark, target_root_dir, configs)
+
+ end_time = time.perf_counter()
+ logging.info(f"\n\n// -------------------------------------------------------------------------- //")
+ logging.info(f"End-to-end running time: {end_time - start_time} secs")
+
+ # Save logs to a separate directory
+ save_log_files(args.benchmark, wasabi_root_dir)
+
+if __name__ == "__main__":
+ main()
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/source_rewriter.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/source_rewriter.py
new file mode 100644
index 00000000..6d090294
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/source_rewriter.py
@@ -0,0 +1,201 @@
+import os
+import re
+import shutil
+import argparse
+
+RETRY_BOUND = 997
+TIMEOUT_BOUND = 303303
+
+""" Retry bounds rewriting
+"""
+class RetryBoundsRewriter:
+ def find_java_file(self, test_class, test_directory_path):
+ for root, _, files in os.walk(test_directory_path):
+ for file in files:
+ if file.endswith(".java") and file.split(".")[0] == test_class:
+ return os.path.join(root, file)
+ return None
+
+ def find_and_modify_assignment(self, test_class, assign_method, var_name, new_value, test_directory_path):
+ java_file = self.find_java_file(test_class, test_directory_path)
+ if java_file is None:
+ print(f">>> Not found: {test_class}")
+ return False
+
+ java_file_copy = f"{os.path.splitext(java_file)[0]}.original"
+ if os.path.isfile(java_file_copy):
+ return False
+
+ shutil.copy2(java_file, java_file_copy)
+
+ with open(java_file, 'r') as file:
+ lines = file.readlines()
+
+ modified_lines = []
+ index = 0
+ while index < len(lines):
+ if f"{assign_method}(" in lines[index] and var_name in lines[index]:
+ to_change = lines[index].rstrip("\n")
+ index += 1
+ while index < len(lines) and ");" not in lines[index - 1]:
+ to_change += lines[index].strip()
+ index += 1
+ to_change = re.sub(r"\d+\);", lambda m: f"{new_value});" if int(m.group().strip(");")) < new_value else m.group(), to_change)
+ modified_lines.append(to_change + "\n")
+ else:
+ modified_lines.append(lines[index])
+ index += 1
+
+ with open(java_file, 'w') as file:
+ file.writelines(modified_lines)
+
+ return True
+
+ def process_input(self, input_file, test_directory_path):
+ with open(input_file, 'r') as file:
+ next(file)
+ for line in file:
+ line = line.strip()
+ var_name, assigned_value, assign_method, test_class = line.split("!!!")
+ try:
+ if int(assigned_value.strip('"')) < RETRY_BOUND:
+ assign_method = assign_method.strip().split('.')[-1]
+ new_value = RETRY_BOUND
+
+ self.find_and_modify_assignment(test_class, assign_method, var_name, new_value, test_directory_path)
+ except Exception:
+ print(f">>> ERROR: {test_class}")
+
+ def run(self, input_file, test_directory_path):
+ self.process_input(input_file, test_directory_path)
+
+
+""" Timeout bounds rewriting
+"""
+class TimeoutBoundsRewriter:
+ def __init__(self):
+ self.tests_to_rewrite = dict()
+ self.test_targets = dict()
+
+ def read_test_targets(self, input_file):
+ with open(input_file, "r") as target:
+ lines = target.read().splitlines()
+
+ for line in lines:
+ test_file, test_name = line.strip().split(".")
+ test_file = test_file.strip()
+ test_name = test_name.strip()
+
+ if test_file not in self.tests_to_rewrite:
+ self.tests_to_rewrite[test_file] = []
+ self.tests_to_rewrite[test_file].append(test_name)
+
+ if test_file not in self.test_targets:
+ self.test_targets[test_file] = True
+
+ def to_modify(self, line, test_class):
+ if "test" not in line:
+ return False
+
+ for test_name in self.tests_to_rewrite.get(test_class, []):
+ if test_name in line:
+ return True
+
+ return False
+
+ def is_target_test(self, lines, index, test_class):
+ while index > 0:
+ if "test" in lines[index] and "public" in lines[index] and "@Test" in lines[index - 1]:
+ return self.to_modify(lines[index], test_class)
+ index -= 1
+
+ return False
+
+ def modify_timeout_annotations(self, file_path, test_class):
+ with open(file_path, "r") as test_file:
+ lines = test_file.readlines()
+
+ modified_lines = []
+ for index in range(len(lines)):
+ modified_line = lines[index]
+
+ line_without_spaces = re.sub(r"\s", "", lines[index])
+
+ if "@Test" in line_without_spaces and "timeout" in line_without_spaces:
+ if index + 1 < len(lines) and self.to_modify(lines[index + 1], test_class):
+ match = re.search(r"@Test\(timeout=(\d+)\)", line_without_spaces)
+ if match:
+ current_timeout = int(match.group(1))
+ if current_timeout < TIMEOUT_BOUND:
+ modified_line = re.sub(
+ r"@Test\(timeout=\d+\)",
+ "\t@Test (timeout = {0})\n".format(TIMEOUT_BOUND),
+ line_without_spaces,
+ )
+
+ modified_lines.append(modified_line)
+
+ with open(file_path, "w") as test_file:
+ test_file.writelines(modified_lines)
+
+ def modify_wait_for_calls(self, file_path, test_class):
+ with open(file_path, 'r') as file:
+ lines = file.readlines()
+
+ modified_lines = []
+ index = 0
+ while index < len(lines):
+ if "GenericTestUtils.waitFor(" in lines[index] and self.is_target_test(lines, index, test_class):
+ to_change = lines[index]
+ opened_count = to_change.count('(')
+ closed_count = to_change.count(')')
+ index += 1
+ while index < len(lines) and opened_count != closed_count:
+ modified_lines.append(to_change)
+ to_change = lines[index]
+ opened_count += to_change.count('(')
+ closed_count += to_change.count(')')
+ index += 1
+ to_change = re.sub(r"\d+\);", lambda m: f"{TIMEOUT_BOUND});" if int(m.group().rstrip(");")) < TIMEOUT_BOUND else m.group(), to_change)
+ modified_lines.append(to_change if to_change.endswith("\n") else to_change + "\n")
+ else:
+ modified_lines.append(lines[index])
+ index += 1
+
+ with open(file_path, "w") as test_file:
+ test_file.writelines(modified_lines)
+
+ def run(self, input_file, test_directory_path):
+ self.read_test_targets(input_file)
+ for root, _, files in os.walk(test_directory_path):
+ for file_name in files:
+ if file_name.endswith(".java") and file_name.startswith("Test"):
+ file_path = os.path.join(root, file_name)
+ file_base_name = os.path.splitext(file_name)[0]
+
+ if file_base_name in self.test_targets:
+ original_file_path = f"{os.path.splitext(file_path)[0]}.original"
+ if not os.path.isfile(original_file_path):
+ shutil.copy2(file_path, original_file_path)
+
+ self.modify_timeout_annotations(file_path, file_base_name)
+ self.modify_wait_for_calls(file_path, file_base_name)
+
+
+def main():
+ parser = argparse.ArgumentParser(description='Modify Java test files based on specified criteria.')
+ parser.add_argument('--mode', choices=['bounds-rewriting', 'timeout-rewriting'], required=True, help='Mode of operation: "bounds-rewriting" or "timeout-rewriting".')
+ parser.add_argument('config_file', help='Path to the config file describing the list of changes.')
+ parser.add_argument('target_root_dir', help='Directory path to start the search for Java test files.')
+ args = parser.parse_args()
+
+ if args.mode == 'bounds-rewriting':
+ rewriter = RetryBoundsRewriter()
+ elif args.mode == 'timeout-rewriting':
+ rewriter = TimeoutBoundsRewriter()
+
+ rewriter.run(args.config_file, args.target_root_dir)
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/test_plan_generator.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/test_plan_generator.py
new file mode 100644
index 00000000..923bb550
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/test_plan_generator.py
@@ -0,0 +1,149 @@
+import argparse
+import random
+import re
+import os
+import sys
+
+
+def injection_locations_graph(filename):
+ """
+ Builds a graph representing retry locations and their associated tests from the input file.
+
+ Args:
+ filename (str): Path to the input file containing retry locations and associated tests.
+
+ Returns:
+ dict: A dictionary representing the graph, where keys are retry locations and values are sets of tests.
+ """
+ graph = {}
+
+ with open(filename, 'r', encoding='utf-8') as file:
+ for line in file:
+ test_name, injection_location = (part.strip() for part in line.split(",")[:2])
+ graph.setdefault(injection_location, set()).add(test_name)
+
+ return graph
+
+
+def find_matching(graph):
+ """
+ Performs a best-effort matching of tests to retry locations.
+
+ Args:
+ graph (dict): The retry locations graph where keys are retry locations and values are sets of tests.
+
+ Returns:
+ dict: A dictionary representing the matching, where keys are retry locations and values are sets of tests.
+ """
+ matching = {}
+ already_matched = set()
+
+ tests = list(set().union(*graph.values()))
+ random.shuffle(tests)
+
+ for test in tests:
+ injection_locations = [location for location, tests_set in graph.items()
+ if test in tests_set and test not in already_matched]
+ if injection_locations:
+ injection_location = min(injection_locations, key=lambda x: len(matching.get(x, set())))
+ matching.setdefault(injection_location, set()).add(test)
+ already_matched.add(test)
+
+ return matching
+
+
+def find_unmatched(matching, graph):
+ """
+ Finds and returns unmatched tests and retry locations.
+
+ Args:
+ matching (dict): The matching dictionary where keys are retry locations and values are sets of matched tests.
+ graph (dict): The retry locations graph where keys are retry locations and values are sets of tests.
+
+ Returns:
+ tuple: A tuple containing four sets - the matched retry locations, the retry locations that are
+ not matched with any tests, the unmatched tests, and the tests that are matched to multiple
+ retry locations in the matching dictionary.
+ """
+
+ # Get the set of all tests and retry locations from the graph
+ all_tests = set().union(*graph.values())
+ all_injection_locations = set(graph.keys())
+
+ # Get the set of matched tests and retry locations from the matching
+ matched_tests = set().union(*matching.values())
+ matched_injection_locations = set(matching.keys())
+
+ # Get the set of unmatched tests and retry locations by taking the difference
+ unmatched_tests = all_tests - matched_tests
+ unmatched_injection_locations = all_injection_locations - matched_injection_locations
+
+ # Get the set of tests that appear under more than one retry location in the matching
+ multi_matched_tests = {test for test in matched_tests if sum(1 for tests in matching.values() if test in tests) > 1}
+
+ return matched_injection_locations, unmatched_injection_locations, unmatched_tests, multi_matched_tests
+
+
+def append_to_config_file(input_config, dir_path, matching):
+ with open(input_config, "r") as file:
+ lines = file.readlines()
+
+ header = "Retry location!!!Retry caller!!!Injection site!!!Injection location!!!Exception\n"
+
+ partitions_dir = os.path.join(dir_path, "partitions")
+ os.makedirs(partitions_dir, exist_ok=True)
+
+ for line in lines:
+ injection_location = line.strip().split("!!!")[3]
+ # Get the tests that are matched to this retry location
+ if injection_location in matching:
+ for test in matching[injection_location]:
+ # Create a data file for each test
+ output_filename = os.path.join(partitions_dir, f"{os.path.splitext(os.path.basename(input_config))[0]}-{test}.data")
+ with open(output_filename, "a") as output_file:
+ if output_file.tell() == 0:
+ output_file.write(header)
+ output_file.write(line)
+
+ # Create a configuration file for each test
+ config_filename = os.path.join(partitions_dir, f"{os.path.splitext(os.path.basename(input_config))[0]}-{test}.conf")
+ with open(config_filename, "w") as config_file:
+ config_file.write(f"retry_data_file: {output_filename}\n")
+ config_file.write("injection_policy: max-count\n")
+ config_file.write("max_injection_count: 311\n")
+
+
+def main():
+ parser = argparse.ArgumentParser(description='Matcher')
+ parser.add_argument('--retry_locations_input', help='Retry locations input file')
+ parser.add_argument('--test_retry_pairs_input', help='Tests-to-retry pairings input file')
+ parser.add_argument('--path_to_configs', help='Path to configuration files')
+ args = parser.parse_args()
+
+ if not (args.retry_locations_input and args.test_retry_pairs_input and args.path_to_configs):
+ print("[wasabi] test_plan_generator.py requires all three arguments")
+ sys.exit(1)
+
+ # Step 1: Construct the "retry locations to tests" graph
+ graph = injection_locations_graph(args.test_retry_pairs_input)
+
+ # Step 2: Find a matching where each test is matched to a unique retry location.
+ matching = find_matching(graph)
+
+ # Step 3: Check if matching is complete
+ matched_injection_locations, unmatched_injection_locations, unmatched_tests, multi_matched_tests = find_unmatched(matching, graph)
+ print("================= Statistics ================")
+ print("Total matched retried methods:", len(matched_injection_locations))
+ print("Unmatched retried methods:\n\t", "\n\t".join(unmatched_injection_locations))
+ print("Unmatched tests:\n\t", '\n\t'.join(unmatched_tests))
+ print("Tests matched multiple times:\n\t", "\n\t".join(multi_matched_tests))
+ print("================= ||| =================")
+
+ # Step 4: Split the larger config file based on the retry locations to tests matching.
+ append_to_config_file(args.retry_locations_input, args.path_to_configs, matching)
+
+
+if __name__ == "__main__":
+ main()
+
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/wasabi-full-eval.log b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/wasabi-full-eval.log
new file mode 100644
index 00000000..9a11635b
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/wasabi-full-eval.log
@@ -0,0 +1,3923 @@
+2025-10-02 22:51:04,111 [INFO]
+
+2025-10-02 22:51:04,111 [INFO] *******************
+2025-10-02 22:51:04,111 [INFO] * *
+2025-10-02 22:51:04,111 [INFO] * hadoop: setup *
+2025-10-02 22:51:04,111 [INFO] * *
+2025-10-02 22:51:04,111 [INFO] *******************
+
+2025-10-02 22:51:04,111 [INFO] [WASABI-HELPER]: [INFO]: Cloning hadoop repository from https://github.com/apache/hadoop.git...
+2025-10-02 22:52:06,188 [INFO] [WASABI-HELPER]: [INFO]: Successfully cloned hadoop.
+2025-10-02 22:52:06,188 [INFO] Checking out version 60867de for hadoop...
+2025-10-02 22:52:07,072 [INFO] [WASABI-HELPER]: [INFO]: Successfully checked out version 60867de for hadoop.
+2025-10-02 22:52:07,073 [INFO]
+
+2025-10-02 22:52:07,073 [INFO] ******************************
+2025-10-02 22:52:07,073 [INFO] * *
+2025-10-02 22:52:07,073 [INFO] * hadoop: code preparation *
+2025-10-02 22:52:07,073 [INFO] * *
+2025-10-02 22:52:07,073 [INFO] ******************************
+
+2025-10-02 22:52:07,073 [INFO] [WASABI-HELPER]: [INFO]: Renamed /home/cc/sosp24-ae/wasabi/../benchmarks/hadoop/pom.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hadoop/pom-original.xml.
+2025-10-02 22:52:07,074 [INFO] [WASABI-HELPER]: [INFO]: Copied /home/cc/sosp24-ae/wasabi/../wasabi/wasabi-testing/config/hadoop/pom-hadoop.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hadoop/pom.xml.
+2025-10-02 22:52:12,153 [INFO] >>> ERROR: TestRuntimeEstimators
+
+2025-10-02 22:52:12,153 [INFO] >>> ERROR: TestRuntimeEstimators
+
+2025-10-02 22:52:12,153 [INFO] >>> ERROR: TestSpeculativeExecutionWithMRApp
+
+2025-10-02 22:52:12,158 [INFO] [WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: 0
+2025-10-02 22:52:12,335 [INFO] [WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: 0
+2025-10-02 22:52:12,335 [INFO]
+
+2025-10-02 22:52:12,335 [INFO] ****************************
+2025-10-02 22:52:12,335 [INFO] * *
+2025-10-02 22:52:12,335 [INFO] * hadoop: bug triggering *
+2025-10-02 22:52:12,335 [INFO] * *
+2025-10-02 22:52:12,336 [INFO] ****************************
+
+2025-10-03 00:22:21,623 [INFO] [WASABI-HELPER]: [INFO]: Finished running test suite for hadoop. Status: 0
+2025-10-03 00:22:21,624 [INFO]
+
+2025-10-03 00:22:21,624 [INFO] *************************
+2025-10-03 00:22:21,624 [INFO] * *
+2025-10-03 00:22:21,624 [INFO] * hadoop: Bug oracles *
+2025-10-03 00:22:21,624 [INFO] * *
+2025-10-03 00:22:21,624 [INFO] *************************
+
+2025-10-03 00:22:31,154 [INFO] // ----------------------------- //
+
+2025-10-03 00:22:31,154 [INFO] Retry bugs for hadoop
+
+2025-10-03 00:22:31,154 [INFO] // ----------------------------- //
+
+2025-10-03 00:22:31,154 [INFO] bug-1,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestLineRecordReaderJobs.testCustomRecordDelimiters
+
+2025-10-03 00:22:31,154 [INFO] bug-2,when-missing-backoff,org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob,TestFileOutputCommitter.testCommitterWithFailureV2
+
+2025-10-03 00:22:31,154 [INFO] bug-3,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hFlush_02
+
+2025-10-03 00:22:31,154 [INFO] bug-4,when-missing-backoff,org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run,TestHadoopArchives.testReadFileContent
+
+2025-10-03 00:22:31,155 [INFO] bug-5,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,155 [INFO] bug-6,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,155 [INFO] bug-7,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestCopyToLocal.testCopyWithThreads
+
+2025-10-03 00:22:31,155 [INFO] bug-8,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,155 [INFO] bug-9,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestComparators.testAllUserComparators
+
+2025-10-03 00:22:31,155 [INFO] bug-10,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,155 [INFO] bug-11,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestCopyToLocal.testCopyWithThreadWrong
+
+2025-10-03 00:22:31,155 [INFO] bug-12,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestComparators.testUserMRComparator
+
+2025-10-03 00:22:31,155 [INFO] bug-13,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,155 [INFO] bug-14,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,155 [INFO] bug-15,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,155 [INFO] bug-16,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.testHFlushInterrupted
+
+2025-10-03 00:22:31,155 [INFO] bug-17,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testExcludeSlowDiskWhenChoosingVolume
+
+2025-10-03 00:22:31,155 [INFO] bug-18,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,155 [INFO] bug-19,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,155 [INFO] bug-20,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestMapRed.testBiggerInput
+
+2025-10-03 00:22:31,155 [INFO] bug-21,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,155 [INFO] bug-22,when-missing-backoff,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,155 [INFO] bug-23,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,156 [INFO] bug-24,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode,TestDatanodeDeath.testSimple1
+
+2025-10-03 00:22:31,156 [INFO] bug-25,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestRouterAllResolver.testRandomAll
+
+2025-10-03 00:22:31,156 [INFO] bug-26,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,156 [INFO] bug-27,when-missing-backoff,org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run,UNKNOWN
+
+2025-10-03 00:22:31,156 [INFO] bug-28,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,156 [INFO] bug-29,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,156 [INFO] bug-30,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,156 [INFO] bug-31,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,156 [INFO] bug-32,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncUpdateLength_03
+
+2025-10-03 00:22:31,156 [INFO] bug-33,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hFlush_01
+
+2025-10-03 00:22:31,156 [INFO] bug-34,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,156 [INFO] bug-35,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.transfer,TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure
+
+2025-10-03 00:22:31,156 [INFO] bug-36,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncUpdateLength_01
+
+2025-10-03 00:22:31,156 [INFO] bug-37,how-bug,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestRouterAllResolver.testHashAll
+
+2025-10-03 00:22:31,156 [INFO] bug-38,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hFlush_03
+
+2025-10-03 00:22:31,156 [INFO] bug-39,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,156 [INFO] bug-40,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,156 [INFO] bug-41,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestKeyFieldBasedComparator.testBasicUnixComparator
+
+2025-10-03 00:22:31,156 [INFO] bug-42,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,157 [INFO] bug-43,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,157 [INFO] bug-44,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,157 [INFO] bug-45,when-missing-backoff,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testWhenOnlyFewTargetNodesAreAvailableToSatisfyStoragePolicy
+
+2025-10-03 00:22:31,157 [INFO] bug-46,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlock_01
+
+2025-10-03 00:22:31,157 [INFO] bug-47,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,157 [INFO] bug-48,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode,TestDatanodeDeath.testSimple2
+
+2025-10-03 00:22:31,157 [INFO] bug-49,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,157 [INFO] bug-50,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,157 [INFO] bug-51,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,157 [INFO] bug-52,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,157 [INFO] bug-53,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,157 [INFO] bug-54,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,157 [INFO] bug-55,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,157 [INFO] bug-56,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestRouterAllResolver.testSpaceAll
+
+2025-10-03 00:22:31,157 [INFO] bug-57,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,157 [INFO] bug-58,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,157 [INFO] bug-59,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlockAndUpdateLength
+
+2025-10-03 00:22:31,157 [INFO] bug-60,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlock_02
+
+2025-10-03 00:22:31,157 [INFO] bug-61,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,158 [INFO] bug-62,how-bug,org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader,TestFSEditLogLoader.testErasureCodingPolicyOperations
+
+2025-10-03 00:22:31,158 [INFO] bug-63,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlock_03
+
+2025-10-03 00:22:31,158 [INFO] bug-64,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,158 [INFO] bug-65,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,158 [INFO] bug-66,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,158 [INFO] bug-67,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestCopyToLocal.testCopyWithThreadsAndQueueSize
+
+2025-10-03 00:22:31,158 [INFO] bug-68,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestMapRed.testCompression
+
+2025-10-03 00:22:31,158 [INFO] bug-69,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestComparators.testDefaultMRComparator
+
+2025-10-03 00:22:31,158 [INFO] bug-70,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,158 [INFO] bug-71,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,158 [INFO] bug-72,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,158 [INFO] bug-73,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hFlush_02
+
+2025-10-03 00:22:31,158 [INFO] bug-74,when-missing-backoff,org.apache.hadoop.mapred.Task.statusUpdate,TestCompressionEmulationUtils.testRandomCompressedTextDataGenerator
+
+2025-10-03 00:22:31,158 [INFO] bug-75,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,158 [INFO] bug-76,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,158 [INFO] bug-77,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,158 [INFO] bug-78,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,158 [INFO] bug-79,how-bug,org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks,TestReencryption.testRestartDuringReencrypt
+
+2025-10-03 00:22:31,158 [INFO] bug-80,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,159 [INFO] bug-81,when-missing-backoff,org.apache.hadoop.mapred.Task.done,TestMRKeyFieldBasedComparator.testBasicUnixComparator
+
+2025-10-03 00:22:31,159 [INFO] bug-82,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,159 [INFO] bug-83,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode,TestDatanodeDeath.testSimple0
+
+2025-10-03 00:22:31,159 [INFO] bug-84,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,159 [INFO] bug-85,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,159 [INFO] bug-86,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,159 [INFO] bug-87,when-missing-backoff,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testMoverWithFullStripe
+
+2025-10-03 00:22:31,159 [INFO] bug-88,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,159 [INFO] bug-89,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,159 [INFO] bug-90,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,159 [INFO] bug-91,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,159 [INFO] bug-92,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,159 [INFO] bug-93,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestDistCh.testDistCh
+
+2025-10-03 00:22:31,159 [INFO] bug-94,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,159 [INFO] bug-95,how-bug,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestRouterAllResolver.testRandomAll
+
+2025-10-03 00:22:31,159 [INFO] bug-96,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,159 [INFO] bug-97,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncUpdateLength_03
+
+2025-10-03 00:22:31,159 [INFO] bug-98,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,159 [INFO] bug-99,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,160 [INFO] bug-100,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,160 [INFO] bug-101,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestHDFSCLI.testAll
+
+2025-10-03 00:22:31,160 [INFO] bug-102,when-missing-backoff,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestRouterAllResolver.testHashAll
+
+2025-10-03 00:22:31,160 [INFO] bug-103,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,160 [INFO] bug-104,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,160 [INFO] bug-105,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncUpdateLength_02
+
+2025-10-03 00:22:31,160 [INFO] bug-106,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestLineRecordReaderJobs.testDefaultRecordDelimiters
+
+2025-10-03 00:22:31,160 [INFO] bug-107,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hFlush_01
+
+2025-10-03 00:22:31,160 [INFO] bug-108,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,160 [INFO] bug-109,how-bug,org.apache.hadoop.mapred.Task.sendDone,TestStreamAggregate.testCommandLine
+
+2025-10-03 00:22:31,160 [INFO] bug-110,when-missing-backoff,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,160 [INFO] bug-111,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncUpdateLength_01
+
+2025-10-03 00:22:31,160 [INFO] bug-112,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks,TestReencryption.testRestartDuringReencrypt
+
+2025-10-03 00:22:31,160 [INFO] bug-113,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hFlush_03
+
+2025-10-03 00:22:31,160 [INFO] bug-114,when-missing-backoff,org.apache.hadoop.ipc.Client$Connection.setupConnection,TestNameserviceRPCMetrics.testProxyOpCompleteConcurrent
+
+2025-10-03 00:22:31,160 [INFO] bug-115,when-missing-backoff,org.apache.hadoop.mapred.Task.statusUpdate,TestCompressionEmulationUtils.testCompressionRatios
+
+2025-10-03 00:22:31,160 [INFO] bug-116,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestMapRed.testSmallInput
+
+2025-10-03 00:22:31,160 [INFO] bug-117,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlock_01
+
+2025-10-03 00:22:31,160 [INFO] bug-118,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,160 [INFO] bug-119,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestMapRed.testNullKeys
+
+2025-10-03 00:22:31,161 [INFO] bug-120,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.testHFlushInterrupted
+
+2025-10-03 00:22:31,161 [INFO] bug-121,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,161 [INFO] bug-122,when-missing-backoff,org.apache.hadoop.mapred.Task.statusUpdate,TestDataDrivenDBInputFormat.testDateSplits
+
+2025-10-03 00:22:31,161 [INFO] bug-123,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,161 [INFO] bug-124,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks,TestReencryption.testRestartDuringReencrypt
+
+2025-10-03 00:22:31,161 [INFO] bug-125,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,161 [INFO] bug-126,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,161 [INFO] bug-127,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_040_abortFiles
+
+2025-10-03 00:22:31,161 [INFO] bug-128,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,161 [INFO] bug-129,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,161 [INFO] bug-130,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestMapRed.testMapred
+
+2025-10-03 00:22:31,161 [INFO] bug-131,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlockAndUpdateLength
+
+2025-10-03 00:22:31,161 [INFO] bug-132,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestStreamAggregate.testCommandLine
+
+2025-10-03 00:22:31,161 [INFO] bug-133,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,161 [INFO] bug-134,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,161 [INFO] bug-135,when-missing-backoff,org.apache.hadoop.mapred.Task.sendDone,TestComparators.testUserValueGroupingComparator
+
+2025-10-03 00:22:31,161 [INFO] bug-136,when-missing-backoff,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestCopyToLocal.testCopyWithThreadsAndQueueSizeWrong
+
+2025-10-03 00:22:31,161 [INFO] bug-137,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode,TestDatanodeDeath.testComplex
+
+2025-10-03 00:22:31,161 [INFO] bug-138,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,162 [INFO] bug-139,how-bug,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestRouterAllResolver.testSpaceAll
+
+2025-10-03 00:22:31,162 [INFO] bug-140,when-missing-cap,org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream,TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap
+
+2025-10-03 00:22:31,162 [INFO] bug-141,when-missing-backoff,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlock_03
+
+2025-10-03 00:22:31,162 [INFO] bug-142,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,162 [INFO] bug-143,when-missing-cap,org.apache.hadoop.fs.FSInputChecker.readChecksumChunk,TestDirectoryCommitterScale.test_030_commitFiles
+
+2025-10-03 00:22:31,162 [INFO] bug-144,when-missing-cap,org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork,TestCheckpoint.testCheckpointTriggerOnTxnCount
+
+2025-10-03 00:22:31,162 [INFO] bug-145,when-missing-cap,org.apache.hadoop.mapred.Task.statusUpdate,TestMapReduce.testMapred
+
+2025-10-03 00:22:31,162 [INFO] bug-146,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,162 [INFO] bug-147,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,162 [INFO] bug-148,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncEndBlock_02
+
+2025-10-03 00:22:31,162 [INFO] bug-149,when-missing-cap,org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run,TestSpaceReservation.stressTest
+
+2025-10-03 00:22:31,162 [INFO] bug-150,when-missing-cap,org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo,TestBatchIbr.testIbr
+
+2025-10-03 00:22:31,162 [INFO] bug-151,how-bug,org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader,TestFSEditLogLoader.testErasureCodingPolicyOperations
+
+2025-10-03 00:22:31,162 [INFO] bug-152,how-bug,org.apache.hadoop.hdfs.DFSInputStream.readBlockLength,TestHFlush.hSyncUpdateLength_02
+
+2025-10-03 00:22:31,162 [INFO] bug-153,when-missing-cap,org.apache.hadoop.hdfs.server.sps.ExternalSPSBlockMoveTaskHandler$BlockMovingTask.moveBlock,TestStoragePolicySatisfierWithStripedFile.testSPSWhenFileHasLowRedundancyBlocks
+
+2025-10-03 00:22:31,162 [INFO] bug-154,when-missing-backoff,org.apache.hadoop.mapred.Task.done,TestStreaming.testCommandLine
+
+2025-10-03 00:22:31,162 [INFO] bug-155,when-missing-cap,org.apache.hadoop.hdfs.DFSOutputStream.addBlock,UNKNOWN
+
+
+2025-10-03 00:22:31,170 [INFO] [WASABI-HELPER]: [INFO]: Finished processing /home/cc/sosp24-ae/wasabi/../results/hadoop/202510030022. Status: 0
+2025-10-03 00:22:31,245 [INFO]
+
+2025-10-03 00:22:31,245 [INFO] ******************
+2025-10-03 00:22:31,245 [INFO] * *
+2025-10-03 00:22:31,245 [INFO] * hbase: setup *
+2025-10-03 00:22:31,245 [INFO] * *
+2025-10-03 00:22:31,245 [INFO] ******************
+
+2025-10-03 00:22:31,245 [INFO] [WASABI-HELPER]: [INFO]: Cloning hbase repository from https://github.com/apache/hbase.git...
+2025-10-03 00:23:15,656 [INFO] [WASABI-HELPER]: [INFO]: Successfully cloned hbase.
+2025-10-03 00:23:15,656 [INFO] Checking out version 89ca7f4 for hbase...
+2025-10-03 00:23:16,677 [INFO] [WASABI-HELPER]: [INFO]: Successfully checked out version 89ca7f4 for hbase.
+2025-10-03 00:23:16,678 [INFO]
+
+2025-10-03 00:23:16,678 [INFO] *****************************
+2025-10-03 00:23:16,678 [INFO] * *
+2025-10-03 00:23:16,678 [INFO] * hbase: code preparation *
+2025-10-03 00:23:16,678 [INFO] * *
+2025-10-03 00:23:16,678 [INFO] *****************************
+
+2025-10-03 00:23:16,678 [INFO] [WASABI-HELPER]: [INFO]: Renamed /home/cc/sosp24-ae/wasabi/../benchmarks/hbase/pom.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hbase/pom-original.xml.
+2025-10-03 00:23:16,679 [INFO] [WASABI-HELPER]: [INFO]: Copied /home/cc/sosp24-ae/wasabi/../wasabi/wasabi-testing/config/hbase/pom-hbase.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hbase/pom.xml.
+2025-10-03 00:23:17,514 [INFO] [WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: 0
+2025-10-03 00:23:17,625 [INFO] [WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: 0
+2025-10-03 00:23:17,625 [INFO]
+
+2025-10-03 00:23:17,625 [INFO] ***************************
+2025-10-03 00:23:17,626 [INFO] * *
+2025-10-03 00:23:17,626 [INFO] * hbase: bug triggering *
+2025-10-03 00:23:17,626 [INFO] * *
+2025-10-03 00:23:17,626 [INFO] ***************************
+
+2025-10-03 02:16:06,476 [INFO] [WASABI-HELPER]: [INFO]: Finished running test suite for hbase. Status: 0
+2025-10-03 02:16:06,476 [INFO]
+
+2025-10-03 02:16:06,477 [INFO] ************************
+2025-10-03 02:16:06,477 [INFO] * *
+2025-10-03 02:16:06,477 [INFO] * hbase: Bug oracles *
+2025-10-03 02:16:06,477 [INFO] * *
+2025-10-03 02:16:06,477 [INFO] ************************
+
+2025-10-03 02:16:11,249 [INFO] // ----------------------------- //
+
+2025-10-03 02:16:11,249 [INFO] Retry bugs for hbase
+
+2025-10-03 02:16:11,249 [INFO] // ----------------------------- //
+
+2025-10-03 02:16:11,249 [INFO] bug-1,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,250 [INFO] bug-2,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+
+2025-10-03 02:16:11,250 [INFO] bug-3,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,250 [INFO] bug-4,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,250 [INFO] bug-5,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,250 [INFO] bug-6,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,250 [INFO] bug-7,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,250 [INFO] bug-8,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,250 [INFO] bug-9,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,250 [INFO] bug-10,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,250 [INFO] bug-11,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,250 [INFO] bug-12,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,250 [INFO] bug-13,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,250 [INFO] bug-14,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,250 [INFO] bug-15,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,250 [INFO] bug-16,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,250 [INFO] bug-17,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,250 [INFO] bug-18,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,250 [INFO] bug-19,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,251 [INFO] bug-20,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,251 [INFO] bug-21,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,251 [INFO] bug-22,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,251 [INFO] bug-23,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,251 [INFO] bug-24,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,251 [INFO] bug-25,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster2
+
+2025-10-03 02:16:11,251 [INFO] bug-26,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,251 [INFO] bug-27,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,251 [INFO] bug-28,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,251 [INFO] bug-29,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,251 [INFO] bug-30,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+
+2025-10-03 02:16:11,251 [INFO] bug-31,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,251 [INFO] bug-32,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,251 [INFO] bug-33,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFailedToCreateWALIfParentRenamed
+
+2025-10-03 02:16:11,251 [INFO] bug-34,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,251 [INFO] bug-35,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,251 [INFO] bug-36,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,251 [INFO] bug-37,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,251 [INFO] bug-38,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,252 [INFO] bug-39,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,252 [INFO] bug-40,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,252 [INFO] bug-41,when-missing-backoff,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,252 [INFO] bug-42,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,252 [INFO] bug-43,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,252 [INFO] bug-44,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,252 [INFO] bug-45,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,252 [INFO] bug-46,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,252 [INFO] bug-47,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,252 [INFO] bug-48,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,252 [INFO] bug-49,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,252 [INFO] bug-50,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,252 [INFO] bug-51,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,252 [INFO] bug-52,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,252 [INFO] bug-53,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,252 [INFO] bug-54,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,252 [INFO] bug-55,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,252 [INFO] bug-56,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster3
+
+2025-10-03 02:16:11,252 [INFO] bug-57,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,252 [INFO] bug-58,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,253 [INFO] bug-59,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,253 [INFO] bug-60,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testRollWriterForClosedWAL
+
+2025-10-03 02:16:11,253 [INFO] bug-61,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,253 [INFO] bug-62,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,253 [INFO] bug-63,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,253 [INFO] bug-64,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,253 [INFO] bug-65,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,253 [INFO] bug-66,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,253 [INFO] bug-67,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,253 [INFO] bug-68,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestClassLoading.testClassLoadingFromRelativeLibDirInJar
+
+2025-10-03 02:16:11,253 [INFO] bug-69,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,253 [INFO] bug-70,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,253 [INFO] bug-71,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,253 [INFO] bug-72,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,253 [INFO] bug-73,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testSyncNoAppend
+
+2025-10-03 02:16:11,253 [INFO] bug-74,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,253 [INFO] bug-75,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,253 [INFO] bug-76,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,253 [INFO] bug-77,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,254 [INFO] bug-78,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,254 [INFO] bug-79,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+
+2025-10-03 02:16:11,254 [INFO] bug-80,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,254 [INFO] bug-81,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,254 [INFO] bug-82,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,254 [INFO] bug-83,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,254 [INFO] bug-84,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,254 [INFO] bug-85,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,254 [INFO] bug-86,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,254 [INFO] bug-87,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,254 [INFO] bug-88,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,254 [INFO] bug-89,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,254 [INFO] bug-90,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,254 [INFO] bug-91,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,254 [INFO] bug-92,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,254 [INFO] bug-93,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,254 [INFO] bug-94,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,254 [INFO] bug-95,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+
+2025-10-03 02:16:11,254 [INFO] bug-96,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,254 [INFO] bug-97,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,255 [INFO] bug-98,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,255 [INFO] bug-99,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,255 [INFO] bug-100,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,255 [INFO] bug-101,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,255 [INFO] bug-102,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,255 [INFO] bug-103,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,255 [INFO] bug-104,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,255 [INFO] bug-105,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,255 [INFO] bug-106,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,255 [INFO] bug-107,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,255 [INFO] bug-108,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,255 [INFO] bug-109,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,255 [INFO] bug-110,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorBatching
+
+2025-10-03 02:16:11,255 [INFO] bug-111,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,255 [INFO] bug-112,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,255 [INFO] bug-113,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,255 [INFO] bug-114,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,255 [INFO] bug-115,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,255 [INFO] bug-116,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorWithErrors
+
+2025-10-03 02:16:11,255 [INFO] bug-117,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,256 [INFO] bug-118,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,256 [INFO] bug-119,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,256 [INFO] bug-120,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,256 [INFO] bug-121,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,256 [INFO] bug-122,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,256 [INFO] bug-123,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,256 [INFO] bug-124,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,256 [INFO] bug-125,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,256 [INFO] bug-126,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,256 [INFO] bug-127,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,256 [INFO] bug-128,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,256 [INFO] bug-129,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,256 [INFO] bug-130,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,256 [INFO] bug-131,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,256 [INFO] bug-132,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,256 [INFO] bug-133,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,256 [INFO] bug-134,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,256 [INFO] bug-135,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,256 [INFO] bug-136,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,256 [INFO] bug-137,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,257 [INFO] bug-138,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,257 [INFO] bug-139,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,257 [INFO] bug-140,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,257 [INFO] bug-141,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,257 [INFO] bug-142,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,257 [INFO] bug-143,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestClassLoading.testClassLoadingFromHDFS
+
+2025-10-03 02:16:11,257 [INFO] bug-144,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,257 [INFO] bug-145,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,257 [INFO] bug-146,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,257 [INFO] bug-147,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,257 [INFO] bug-148,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,257 [INFO] bug-149,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,257 [INFO] bug-150,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,257 [INFO] bug-151,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,257 [INFO] bug-152,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,257 [INFO] bug-153,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,257 [INFO] bug-154,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,257 [INFO] bug-155,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestBulkLoadReplicationHFileRefs.testWhenExcludeTable
+
+2025-10-03 02:16:11,257 [INFO] bug-156,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,257 [INFO] bug-157,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,258 [INFO] bug-158,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,258 [INFO] bug-159,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,258 [INFO] bug-160,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,258 [INFO] bug-161,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-162,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,258 [INFO] bug-163,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,258 [INFO] bug-164,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-165,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,258 [INFO] bug-166,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,258 [INFO] bug-167,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,258 [INFO] bug-168,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-169,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-170,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,258 [INFO] bug-171,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-172,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-173,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,258 [INFO] bug-174,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster4
+
+2025-10-03 02:16:11,258 [INFO] bug-175,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,258 [INFO] bug-176,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,259 [INFO] bug-177,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,259 [INFO] bug-178,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,259 [INFO] bug-179,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,259 [INFO] bug-180,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,259 [INFO] bug-181,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,259 [INFO] bug-182,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,259 [INFO] bug-183,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,259 [INFO] bug-184,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,259 [INFO] bug-185,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,259 [INFO] bug-186,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,259 [INFO] bug-187,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,259 [INFO] bug-188,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,259 [INFO] bug-189,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,259 [INFO] bug-190,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,259 [INFO] bug-191,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,259 [INFO] bug-192,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestBulkLoadReplicationHFileRefs.testWhenExcludeNamespace
+
+2025-10-03 02:16:11,259 [INFO] bug-193,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,259 [INFO] bug-194,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster4
+
+2025-10-03 02:16:11,259 [INFO] bug-195,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,259 [INFO] bug-196,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,260 [INFO] bug-197,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,260 [INFO] bug-198,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,260 [INFO] bug-199,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,260 [INFO] bug-200,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,260 [INFO] bug-201,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,260 [INFO] bug-202,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,260 [INFO] bug-203,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,260 [INFO] bug-204,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,260 [INFO] bug-205,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,260 [INFO] bug-206,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,260 [INFO] bug-207,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,260 [INFO] bug-208,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorWithErrors
+
+2025-10-03 02:16:11,260 [INFO] bug-209,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,260 [INFO] bug-210,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,260 [INFO] bug-211,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+
+2025-10-03 02:16:11,260 [INFO] bug-212,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorBatching
+
+2025-10-03 02:16:11,260 [INFO] bug-213,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,260 [INFO] bug-214,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,260 [INFO] bug-215,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,260 [INFO] bug-216,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,261 [INFO] bug-217,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,261 [INFO] bug-218,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,261 [INFO] bug-219,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,261 [INFO] bug-220,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,261 [INFO] bug-221,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,261 [INFO] bug-222,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,261 [INFO] bug-223,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,261 [INFO] bug-224,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,261 [INFO] bug-225,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,261 [INFO] bug-226,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,261 [INFO] bug-227,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,261 [INFO] bug-228,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,261 [INFO] bug-229,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,261 [INFO] bug-230,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,261 [INFO] bug-231,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,261 [INFO] bug-232,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,261 [INFO] bug-233,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,261 [INFO] bug-234,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,261 [INFO] bug-235,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,261 [INFO] bug-236,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,262 [INFO] bug-237,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,262 [INFO] bug-238,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,262 [INFO] bug-239,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,262 [INFO] bug-240,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,262 [INFO] bug-241,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,262 [INFO] bug-242,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,262 [INFO] bug-243,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-244,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,262 [INFO] bug-245,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,262 [INFO] bug-246,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-247,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-248,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,262 [INFO] bug-249,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-250,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-251,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,262 [INFO] bug-252,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-253,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-254,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,262 [INFO] bug-255,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,262 [INFO] bug-256,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,263 [INFO] bug-257,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,263 [INFO] bug-258,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,263 [INFO] bug-259,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,263 [INFO] bug-260,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+
+2025-10-03 02:16:11,263 [INFO] bug-261,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,263 [INFO] bug-262,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,263 [INFO] bug-263,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,263 [INFO] bug-264,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,263 [INFO] bug-265,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,263 [INFO] bug-266,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,263 [INFO] bug-267,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,263 [INFO] bug-268,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,263 [INFO] bug-269,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,263 [INFO] bug-270,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,263 [INFO] bug-271,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,263 [INFO] bug-272,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,263 [INFO] bug-273,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,263 [INFO] bug-274,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,263 [INFO] bug-275,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,263 [INFO] bug-276,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,264 [INFO] bug-277,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,264 [INFO] bug-278,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,264 [INFO] bug-279,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,264 [INFO] bug-280,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,264 [INFO] bug-281,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,264 [INFO] bug-282,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,264 [INFO] bug-283,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,264 [INFO] bug-284,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,264 [INFO] bug-285,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,264 [INFO] bug-286,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,264 [INFO] bug-287,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,264 [INFO] bug-288,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,264 [INFO] bug-289,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,264 [INFO] bug-290,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,264 [INFO] bug-291,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,264 [INFO] bug-292,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,264 [INFO] bug-293,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,264 [INFO] bug-294,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,264 [INFO] bug-295,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,264 [INFO] bug-296,when-missing-backoff,org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection,TestClientTimeouts.testAdminTimeout
+
+2025-10-03 02:16:11,265 [INFO] bug-297,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,265 [INFO] bug-298,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,265 [INFO] bug-299,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,265 [INFO] bug-300,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,265 [INFO] bug-301,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,265 [INFO] bug-302,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,265 [INFO] bug-303,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,265 [INFO] bug-304,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,265 [INFO] bug-305,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,265 [INFO] bug-306,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,265 [INFO] bug-307,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,265 [INFO] bug-308,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,265 [INFO] bug-309,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,265 [INFO] bug-310,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,265 [INFO] bug-311,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,265 [INFO] bug-312,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,265 [INFO] bug-313,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,265 [INFO] bug-314,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestClassLoading.testClassLoadingFromLibDirInJar
+
+2025-10-03 02:16:11,265 [INFO] bug-315,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,265 [INFO] bug-316,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,265 [INFO] bug-317,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,265 [INFO] bug-318,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,265 [INFO] bug-319,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,265 [INFO] bug-320,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,265 [INFO] bug-321,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,266 [INFO] bug-322,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,266 [INFO] bug-323,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,266 [INFO] bug-324,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-325,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,266 [INFO] bug-326,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,266 [INFO] bug-327,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,266 [INFO] bug-328,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,266 [INFO] bug-329,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,266 [INFO] bug-330,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-331,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-332,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,266 [INFO] bug-333,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-334,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-335,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,266 [INFO] bug-336,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,266 [INFO] bug-337,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-338,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-339,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,266 [INFO] bug-340,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,266 [INFO] bug-341,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,266 [INFO] bug-342,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-343,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,266 [INFO] bug-344,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-345,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testSimpleMultiple
+
+2025-10-03 02:16:11,266 [INFO] bug-346,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-347,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,266 [INFO] bug-348,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,266 [INFO] bug-349,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-350,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,266 [INFO] bug-351,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,266 [INFO] bug-352,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,267 [INFO] bug-353,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,267 [INFO] bug-354,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,267 [INFO] bug-355,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,267 [INFO] bug-356,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,267 [INFO] bug-357,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,267 [INFO] bug-358,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestRegionServerCrashDisableWAL.test
+
+2025-10-03 02:16:11,267 [INFO] bug-359,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,267 [INFO] bug-360,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,267 [INFO] bug-361,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,267 [INFO] bug-362,how-bug,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,267 [INFO] bug-363,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,267 [INFO] bug-364,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,267 [INFO] bug-365,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,267 [INFO] bug-366,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,267 [INFO] bug-367,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,267 [INFO] bug-368,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,267 [INFO] bug-369,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,267 [INFO] bug-370,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,267 [INFO] bug-371,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,267 [INFO] bug-372,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,267 [INFO] bug-373,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,267 [INFO] bug-374,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,267 [INFO] bug-375,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,267 [INFO] bug-376,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,267 [INFO] bug-377,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,267 [INFO] bug-378,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,267 [INFO] bug-379,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,267 [INFO] bug-380,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,267 [INFO] bug-381,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+
+2025-10-03 02:16:11,267 [INFO] bug-382,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,267 [INFO] bug-383,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,267 [INFO] bug-384,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,268 [INFO] bug-385,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,268 [INFO] bug-386,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,268 [INFO] bug-387,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,268 [INFO] bug-388,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-389,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,268 [INFO] bug-390,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,268 [INFO] bug-391,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-392,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,268 [INFO] bug-393,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-394,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,268 [INFO] bug-395,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,268 [INFO] bug-396,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,268 [INFO] bug-397,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,268 [INFO] bug-398,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,268 [INFO] bug-399,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,268 [INFO] bug-400,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,268 [INFO] bug-401,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,268 [INFO] bug-402,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,268 [INFO] bug-403,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+
+2025-10-03 02:16:11,268 [INFO] bug-404,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-405,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-406,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,268 [INFO] bug-407,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-408,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-409,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,268 [INFO] bug-410,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,268 [INFO] bug-411,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,268 [INFO] bug-412,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,268 [INFO] bug-413,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,268 [INFO] bug-414,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,268 [INFO] bug-415,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,268 [INFO] bug-416,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,268 [INFO] bug-417,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,268 [INFO] bug-418,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,269 [INFO] bug-419,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,269 [INFO] bug-420,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,269 [INFO] bug-421,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,269 [INFO] bug-422,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,269 [INFO] bug-423,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,269 [INFO] bug-424,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,269 [INFO] bug-425,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,269 [INFO] bug-426,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,269 [INFO] bug-427,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster1
+
+2025-10-03 02:16:11,269 [INFO] bug-428,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,269 [INFO] bug-429,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-430,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,269 [INFO] bug-431,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,269 [INFO] bug-432,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,269 [INFO] bug-433,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-434,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,269 [INFO] bug-435,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,269 [INFO] bug-436,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-437,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-438,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,269 [INFO] bug-439,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+
+2025-10-03 02:16:11,269 [INFO] bug-440,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,269 [INFO] bug-441,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-442,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,269 [INFO] bug-443,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-444,when-missing-backoff,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,269 [INFO] bug-445,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-446,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,269 [INFO] bug-447,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,269 [INFO] bug-448,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,269 [INFO] bug-449,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,269 [INFO] bug-450,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,269 [INFO] bug-451,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,270 [INFO] bug-452,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,270 [INFO] bug-453,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,270 [INFO] bug-454,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,270 [INFO] bug-455,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,270 [INFO] bug-456,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,270 [INFO] bug-457,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,270 [INFO] bug-458,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-459,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,270 [INFO] bug-460,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,270 [INFO] bug-461,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-462,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,270 [INFO] bug-463,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-464,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,270 [INFO] bug-465,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+
+2025-10-03 02:16:11,270 [INFO] bug-466,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,270 [INFO] bug-467,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,270 [INFO] bug-468,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,270 [INFO] bug-469,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,270 [INFO] bug-470,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,270 [INFO] bug-471,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,270 [INFO] bug-472,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,270 [INFO] bug-473,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,270 [INFO] bug-474,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-475,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,270 [INFO] bug-476,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+
+2025-10-03 02:16:11,270 [INFO] bug-477,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-478,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,270 [INFO] bug-479,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,270 [INFO] bug-480,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-481,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,270 [INFO] bug-482,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,270 [INFO] bug-483,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,270 [INFO] bug-484,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,270 [INFO] bug-485,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,271 [INFO] bug-486,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,271 [INFO] bug-487,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,271 [INFO] bug-488,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,271 [INFO] bug-489,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster2
+
+2025-10-03 02:16:11,271 [INFO] bug-490,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,271 [INFO] bug-491,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,271 [INFO] bug-492,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,271 [INFO] bug-493,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,271 [INFO] bug-494,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,271 [INFO] bug-495,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,271 [INFO] bug-496,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,271 [INFO] bug-497,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,271 [INFO] bug-498,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,271 [INFO] bug-499,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,271 [INFO] bug-500,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,271 [INFO] bug-501,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,271 [INFO] bug-502,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,271 [INFO] bug-503,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,271 [INFO] bug-504,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,271 [INFO] bug-505,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,271 [INFO] bug-506,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,271 [INFO] bug-507,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,271 [INFO] bug-508,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,271 [INFO] bug-509,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,271 [INFO] bug-510,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster3
+
+2025-10-03 02:16:11,271 [INFO] bug-511,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,271 [INFO] bug-512,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,271 [INFO] bug-513,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,271 [INFO] bug-514,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,271 [INFO] bug-515,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,271 [INFO] bug-516,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,271 [INFO] bug-517,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,271 [INFO] bug-518,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,272 [INFO] bug-519,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,272 [INFO] bug-520,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,272 [INFO] bug-521,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,272 [INFO] bug-522,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-523,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-524,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,272 [INFO] bug-525,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-526,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-527,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,272 [INFO] bug-528,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,272 [INFO] bug-529,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,272 [INFO] bug-530,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestBulkLoadReplicationHFileRefs.testWhenExcludeCF
+
+2025-10-03 02:16:11,272 [INFO] bug-531,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-532,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,272 [INFO] bug-533,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,272 [INFO] bug-534,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,272 [INFO] bug-535,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-536,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,272 [INFO] bug-537,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+
+2025-10-03 02:16:11,272 [INFO] bug-538,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,272 [INFO] bug-539,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,272 [INFO] bug-540,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,272 [INFO] bug-541,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,272 [INFO] bug-542,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,272 [INFO] bug-543,how-bug,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testRollWriterForClosedWAL
+
+2025-10-03 02:16:11,272 [INFO] bug-544,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,272 [INFO] bug-545,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,272 [INFO] bug-546,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,272 [INFO] bug-547,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,272 [INFO] bug-548,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,272 [INFO] bug-549,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-550,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,272 [INFO] bug-551,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,273 [INFO] bug-552,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+
+2025-10-03 02:16:11,273 [INFO] bug-553,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,273 [INFO] bug-554,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,273 [INFO] bug-555,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,273 [INFO] bug-556,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,273 [INFO] bug-557,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,273 [INFO] bug-558,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,273 [INFO] bug-559,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,273 [INFO] bug-560,when-missing-backoff,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,273 [INFO] bug-561,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,273 [INFO] bug-562,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,273 [INFO] bug-563,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,273 [INFO] bug-564,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,273 [INFO] bug-565,how-bug,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,273 [INFO] bug-566,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,273 [INFO] bug-567,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,273 [INFO] bug-568,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,273 [INFO] bug-569,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,273 [INFO] bug-570,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,273 [INFO] bug-571,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,273 [INFO] bug-572,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,273 [INFO] bug-573,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,273 [INFO] bug-574,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,273 [INFO] bug-575,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,273 [INFO] bug-576,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,273 [INFO] bug-577,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,273 [INFO] bug-578,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestRegionServerCrashDisableWAL.test
+
+2025-10-03 02:16:11,273 [INFO] bug-579,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,273 [INFO] bug-580,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,273 [INFO] bug-581,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,273 [INFO] bug-582,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,273 [INFO] bug-583,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,273 [INFO] bug-584,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,274 [INFO] bug-585,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,274 [INFO] bug-586,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,274 [INFO] bug-587,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,274 [INFO] bug-588,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,274 [INFO] bug-589,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,274 [INFO] bug-590,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,274 [INFO] bug-591,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,274 [INFO] bug-592,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,274 [INFO] bug-593,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,274 [INFO] bug-594,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,274 [INFO] bug-595,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,274 [INFO] bug-596,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,274 [INFO] bug-597,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,274 [INFO] bug-598,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,274 [INFO] bug-599,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,274 [INFO] bug-600,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,274 [INFO] bug-601,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,274 [INFO] bug-602,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+
+2025-10-03 02:16:11,274 [INFO] bug-603,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,274 [INFO] bug-604,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,274 [INFO] bug-605,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,274 [INFO] bug-606,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,274 [INFO] bug-607,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,274 [INFO] bug-608,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,274 [INFO] bug-609,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,274 [INFO] bug-610,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,274 [INFO] bug-611,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,274 [INFO] bug-612,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,274 [INFO] bug-613,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,274 [INFO] bug-614,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,274 [INFO] bug-615,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,274 [INFO] bug-616,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,274 [INFO] bug-617,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-618,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-619,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,275 [INFO] bug-620,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-621,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-622,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,275 [INFO] bug-623,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,275 [INFO] bug-624,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-625,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,275 [INFO] bug-626,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,275 [INFO] bug-627,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,275 [INFO] bug-628,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-629,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,275 [INFO] bug-630,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,275 [INFO] bug-631,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-632,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-633,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,275 [INFO] bug-634,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,275 [INFO] bug-635,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,275 [INFO] bug-636,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,275 [INFO] bug-637,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,275 [INFO] bug-638,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,275 [INFO] bug-639,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,275 [INFO] bug-640,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-641,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testWriteEntryCanBeNull
+
+2025-10-03 02:16:11,275 [INFO] bug-642,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-643,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,275 [INFO] bug-644,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,275 [INFO] bug-645,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,275 [INFO] bug-646,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-647,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,275 [INFO] bug-648,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,275 [INFO] bug-649,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-650,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,275 [INFO] bug-651,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,275 [INFO] bug-652,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,275 [INFO] bug-653,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,275 [INFO] bug-654,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,275 [INFO] bug-655,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,275 [INFO] bug-656,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,276 [INFO] bug-657,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,276 [INFO] bug-658,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,276 [INFO] bug-659,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,276 [INFO] bug-660,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-661,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,276 [INFO] bug-662,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-663,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,276 [INFO] bug-664,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,276 [INFO] bug-665,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-666,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,276 [INFO] bug-667,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster1
+
+2025-10-03 02:16:11,276 [INFO] bug-668,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,276 [INFO] bug-669,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,276 [INFO] bug-670,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,276 [INFO] bug-671,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,276 [INFO] bug-672,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,276 [INFO] bug-673,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,276 [INFO] bug-674,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,276 [INFO] bug-675,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,276 [INFO] bug-676,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,276 [INFO] bug-677,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,276 [INFO] bug-678,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,276 [INFO] bug-679,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-680,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,276 [INFO] bug-681,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,276 [INFO] bug-682,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-683,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,276 [INFO] bug-684,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,276 [INFO] bug-685,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,276 [INFO] bug-686,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,276 [INFO] bug-687,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,276 [INFO] bug-688,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,276 [INFO] bug-689,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,276 [INFO] bug-690,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+
+2025-10-03 02:16:11,276 [INFO] bug-691,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,276 [INFO] bug-692,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+
+2025-10-03 02:16:11,276 [INFO] bug-693,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-694,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,276 [INFO] bug-695,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-696,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,276 [INFO] bug-697,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-698,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-699,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,276 [INFO] bug-700,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,276 [INFO] bug-701,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-702,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+
+2025-10-03 02:16:11,276 [INFO] bug-703,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,276 [INFO] bug-704,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,276 [INFO] bug-705,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,277 [INFO] bug-706,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+
+2025-10-03 02:16:11,277 [INFO] bug-707,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-708,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,277 [INFO] bug-709,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testWALCoprocessorLoaded
+
+2025-10-03 02:16:11,277 [INFO] bug-710,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,277 [INFO] bug-711,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,277 [INFO] bug-712,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,277 [INFO] bug-713,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,277 [INFO] bug-714,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,277 [INFO] bug-715,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,277 [INFO] bug-716,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,277 [INFO] bug-717,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,277 [INFO] bug-718,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,277 [INFO] bug-719,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,277 [INFO] bug-720,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,277 [INFO] bug-721,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+
+2025-10-03 02:16:11,277 [INFO] bug-722,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-723,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,277 [INFO] bug-724,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,277 [INFO] bug-725,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,277 [INFO] bug-726,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,277 [INFO] bug-727,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-728,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,277 [INFO] bug-729,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,277 [INFO] bug-730,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,277 [INFO] bug-731,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-732,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+
+2025-10-03 02:16:11,277 [INFO] bug-733,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,277 [INFO] bug-734,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-735,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,277 [INFO] bug-736,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,277 [INFO] bug-737,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,277 [INFO] bug-738,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+
+2025-10-03 02:16:11,277 [INFO] bug-739,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-740,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,277 [INFO] bug-741,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,277 [INFO] bug-742,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-743,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,277 [INFO] bug-744,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-745,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+
+2025-10-03 02:16:11,277 [INFO] bug-746,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+
+2025-10-03 02:16:11,277 [INFO] bug-747,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-748,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+
+2025-10-03 02:16:11,277 [INFO] bug-749,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,277 [INFO] bug-750,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,277 [INFO] bug-751,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,277 [INFO] bug-752,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,277 [INFO] bug-753,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,277 [INFO] bug-754,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,278 [INFO] bug-755,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+
+2025-10-03 02:16:11,278 [INFO] bug-756,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-757,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+
+2025-10-03 02:16:11,278 [INFO] bug-758,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+
+2025-10-03 02:16:11,278 [INFO] bug-759,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-760,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+
+2025-10-03 02:16:11,278 [INFO] bug-761,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-762,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,278 [INFO] bug-763,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,278 [INFO] bug-764,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+
+2025-10-03 02:16:11,278 [INFO] bug-765,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+
+2025-10-03 02:16:11,278 [INFO] bug-766,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,278 [INFO] bug-767,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-768,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+
+2025-10-03 02:16:11,278 [INFO] bug-769,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+
+2025-10-03 02:16:11,278 [INFO] bug-770,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,278 [INFO] bug-771,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileVerifyingSnapshot
+
+2025-10-03 02:16:11,278 [INFO] bug-772,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+
+2025-10-03 02:16:11,278 [INFO] bug-773,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+
+2025-10-03 02:16:11,278 [INFO] bug-774,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,278 [INFO] bug-775,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+
+2025-10-03 02:16:11,278 [INFO] bug-776,when-missing-backoff,org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call,TestFlushSnapshotFromClient.testFlushTableSnapshotWithProcedure
+
+2025-10-03 02:16:11,278 [INFO] bug-777,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-778,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-779,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-780,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-781,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+
+2025-10-03 02:16:11,278 [INFO] bug-782,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+
+2025-10-03 02:16:11,278 [INFO] bug-783,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+
+2025-10-03 02:16:11,278 [INFO] bug-784,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+
+2025-10-03 02:16:11,278 [INFO] bug-785,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,278 [INFO] bug-786,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+
+2025-10-03 02:16:11,278 [INFO] bug-787,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,279 [INFO] // ----------------------------- //
+// Retry bugs for hbase
+// ----------------------------- //
+bug-1,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-2,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+bug-3,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-4,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-5,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-6,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-7,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-8,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-9,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-10,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-11,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-12,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-13,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-14,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-15,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-16,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-17,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-18,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-19,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-20,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-21,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-22,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-23,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-24,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-25,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster2
+bug-26,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-27,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-28,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-29,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-30,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+bug-31,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-32,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-33,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFailedToCreateWALIfParentRenamed
+bug-34,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-35,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-36,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-37,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-38,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-39,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-40,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-41,when-missing-backoff,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-42,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-43,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-44,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-45,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-46,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-47,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-48,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-49,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-50,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-51,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-52,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-53,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-54,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-55,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-56,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster3
+bug-57,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-58,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-59,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-60,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testRollWriterForClosedWAL
+bug-61,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-62,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-63,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-64,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-65,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-66,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-67,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-68,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestClassLoading.testClassLoadingFromRelativeLibDirInJar
+bug-69,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-70,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-71,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-72,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-73,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testSyncNoAppend
+bug-74,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-75,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-76,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-77,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-78,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-79,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+bug-80,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-81,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-82,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-83,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-84,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-85,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-86,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-87,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-88,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-89,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-90,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-91,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-92,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-93,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-94,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-95,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+bug-96,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-97,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-98,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-99,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-100,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-101,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-102,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-103,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-104,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-105,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-106,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-107,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-108,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-109,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-110,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorBatching
+bug-111,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-112,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-113,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-114,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-115,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-116,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorWithErrors
+bug-117,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-118,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-119,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-120,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-121,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-122,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-123,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-124,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-125,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-126,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-127,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-128,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-129,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-130,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-131,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-132,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-133,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-134,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-135,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-136,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-137,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-138,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-139,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-140,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-141,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-142,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-143,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestClassLoading.testClassLoadingFromHDFS
+bug-144,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-145,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-146,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-147,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-148,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-149,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-150,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-151,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-152,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-153,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-154,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-155,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestBulkLoadReplicationHFileRefs.testWhenExcludeTable
+bug-156,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-157,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-158,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-159,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-160,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-161,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-162,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-163,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-164,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-165,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-166,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-167,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-168,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-169,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-170,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-171,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-172,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-173,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-174,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster4
+bug-175,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-176,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-177,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-178,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-179,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-180,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-181,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-182,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-183,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-184,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-185,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-186,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-187,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-188,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-189,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-190,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-191,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-192,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestBulkLoadReplicationHFileRefs.testWhenExcludeNamespace
+bug-193,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-194,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster4
+bug-195,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-196,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-197,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-198,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-199,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-200,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-201,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-202,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-203,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-204,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-205,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-206,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-207,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-208,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorWithErrors
+bug-209,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-210,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-211,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+bug-212,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestReplicator.testReplicatorBatching
+bug-213,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-214,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-215,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-216,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-217,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-218,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-219,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-220,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-221,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-222,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-223,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-224,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-225,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-226,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-227,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-228,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-229,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-230,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-231,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-232,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-233,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-234,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-235,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-236,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-237,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-238,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-239,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-240,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-241,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-242,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-243,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-244,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-245,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-246,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-247,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-248,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-249,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-250,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-251,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-252,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-253,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-254,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-255,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-256,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-257,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-258,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-259,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-260,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+bug-261,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-262,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-263,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-264,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-265,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-266,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-267,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-268,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-269,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-270,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-271,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-272,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-273,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-274,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-275,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-276,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-277,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-278,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-279,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-280,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-281,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-282,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-283,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-284,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-285,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-286,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-287,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-288,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-289,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-290,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-291,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-292,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-293,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-294,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-295,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-296,when-missing-backoff,org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection,TestClientTimeouts.testAdminTimeout
+bug-297,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-298,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-299,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-300,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-301,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-302,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-303,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-304,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-305,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-306,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-307,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-308,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-309,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-310,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-311,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-312,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-313,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-314,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestClassLoading.testClassLoadingFromLibDirInJar
+bug-315,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-316,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-317,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-318,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-319,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-320,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-321,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-322,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-323,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-324,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-325,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-326,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-327,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-328,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-329,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-330,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-331,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-332,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-333,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-334,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-335,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-336,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-337,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-338,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-339,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-340,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-341,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-342,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-343,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-344,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-345,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testSimpleMultiple
+bug-346,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-347,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-348,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-349,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-350,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-351,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-352,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-353,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-354,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-355,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-356,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-357,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-358,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestRegionServerCrashDisableWAL.test
+bug-359,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-360,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-361,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-362,how-bug,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-363,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-364,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-365,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-366,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-367,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-368,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-369,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-370,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-371,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-372,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-373,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-374,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-375,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-376,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-377,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-378,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-379,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-380,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-381,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+bug-382,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-383,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-384,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-385,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-386,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-387,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-388,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-389,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-390,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-391,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-392,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-393,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-394,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-395,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-396,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-397,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-398,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-399,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-400,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-401,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-402,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-403,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestRegionAssignedToMultipleRegionServers.test
+bug-404,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-405,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-406,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-407,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-408,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-409,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-410,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-411,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-412,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-413,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-414,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-415,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-416,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-417,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-418,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-419,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-420,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-421,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-422,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-423,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-424,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-425,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-426,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-427,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster1
+bug-428,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-429,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-430,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-431,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-432,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-433,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-434,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-435,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-436,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-437,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-438,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-439,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+bug-440,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-441,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-442,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-443,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-444,when-missing-backoff,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-445,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-446,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-447,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-448,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-449,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-450,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-451,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-452,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-453,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-454,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-455,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-456,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-457,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-458,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-459,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-460,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-461,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-462,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-463,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-464,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-465,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+bug-466,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-467,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-468,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-469,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-470,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-471,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-472,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-473,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-474,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-475,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-476,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+bug-477,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-478,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-479,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-480,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-481,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-482,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-483,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-484,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-485,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-486,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-487,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-488,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-489,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster2
+bug-490,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-491,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-492,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-493,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-494,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-495,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-496,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-497,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-498,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-499,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-500,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-501,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-502,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-503,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-504,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-505,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-506,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-507,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-508,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-509,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-510,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster3
+bug-511,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-512,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-513,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-514,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-515,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-516,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-517,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-518,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-519,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-520,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-521,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-522,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-523,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-524,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-525,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-526,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-527,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-528,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-529,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-530,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestBulkLoadReplicationHFileRefs.testWhenExcludeCF
+bug-531,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-532,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-533,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-534,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-535,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-536,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-537,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+bug-538,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-539,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-540,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-541,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-542,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-543,how-bug,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testRollWriterForClosedWAL
+bug-544,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-545,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-546,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-547,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-548,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-549,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-550,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-551,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-552,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+bug-553,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-554,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-555,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-556,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-557,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-558,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-559,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-560,when-missing-backoff,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-561,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-562,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-563,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-564,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-565,how-bug,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-566,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-567,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-568,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-569,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-570,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-571,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-572,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-573,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-574,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-575,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-576,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-577,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-578,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestRegionServerCrashDisableWAL.test
+bug-579,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-580,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-581,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-582,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-583,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-584,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-585,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-586,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-587,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-588,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-589,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-590,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-591,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-592,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-593,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-594,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-595,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-596,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-597,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-598,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-599,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-600,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-601,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-602,when-missing-cap,org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState,TestSerialReplicationFailover.testKillRS
+bug-603,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-604,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-605,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-606,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-607,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-608,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-609,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-610,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-611,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-612,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-613,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-614,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-615,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-616,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-617,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-618,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-619,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-620,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-621,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-622,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-623,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-624,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-625,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-626,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-627,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-628,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-629,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-630,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-631,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-632,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-633,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-634,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-635,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-636,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-637,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-638,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-639,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-640,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-641,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testWriteEntryCanBeNull
+bug-642,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-643,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-644,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-645,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-646,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-647,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-648,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-649,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-650,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-651,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-652,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-653,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-654,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-655,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-656,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-657,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-658,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-659,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-660,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-661,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-662,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-663,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-664,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-665,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-666,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-667,when-missing-cap,org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub,TestTableMapReduceUtil.testInitCredentialsForCluster1
+bug-668,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-669,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-670,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-671,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-672,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-673,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-674,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-675,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-676,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-677,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-678,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-679,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-680,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-681,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-682,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-683,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-684,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-685,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-686,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-687,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-688,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-689,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-690,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestHelloHBase.testPutRowToTable
+bug-691,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-692,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestZooKeeperScanPolicyObserver.test
+bug-693,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-694,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-695,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-696,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-697,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-698,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-699,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-700,when-missing-backoff,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-701,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-702,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testMaxFlushedSequenceIdGoBackwards
+bug-703,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-704,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-705,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-706,when-missing-cap,org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run,TestMetaWithReplicasShutdownHandling.testShutdownHandling
+bug-707,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-708,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-709,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testWALCoprocessorLoaded
+bug-710,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-711,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-712,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-713,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-714,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-715,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-716,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-717,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-718,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-719,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-720,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-721,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureMasterRestarts.testMasterRestarts
+bug-722,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-723,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-724,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-725,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-726,when-missing-backoff,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-727,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-728,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-729,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-730,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-731,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-732,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testUnflushedSeqIdTrackingWithAsyncWal
+bug-733,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-734,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-735,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-736,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-737,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-738,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestMultiVersions.testGetRowVersions
+bug-739,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-740,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-741,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-742,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-743,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-744,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-745,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+bug-746,when-missing-cap,org.apache.hadoop.hbase.master.replication.SyncReplicationReplayWALProcedure.executeFromState,TestDrainReplicationQueuesForStandBy.test
+bug-747,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-748,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteNonExistentColumn
+bug-749,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-750,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-751,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-752,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-753,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-754,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-755,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testRegionReplicaSplitRegionAssignment
+bug-756,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-757,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestDeleteRow.testDeleteXML
+bug-758,when-missing-cap,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileTakingSnapshot
+bug-759,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-760,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testHstsAndCspSettings
+bug-761,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-762,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-763,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-764,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,UNKNOWN
+bug-765,when-missing-cap,org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize,TestRefreshRecoveredReplication.testReplicationRefreshSource
+bug-766,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-767,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-768,when-missing-cap,org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure.executeFromState,TestClientSideRegionScanner.testContinuesToScanIfHasMore
+bug-769,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.DualAsyncFSWAL.createWriterInstance,TestSyncReplicationWALProvider.test
+bug-770,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-771,when-missing-backoff,org.apache.hadoop.hbase.regionserver.SnapshotRegionCallable.doCall,TestSnapshotProcedureRSCrashes.testRegionServerCrashWhileVerifyingSnapshot
+bug-772,when-missing-backoff,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFindMemStoresEligibleForFlush
+bug-773,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestRegionReplicaSplit.testAssignFakeReplicaRegion
+bug-774,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-775,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.setClusterId,TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking
+bug-776,when-missing-backoff,org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask.call,TestFlushSnapshotFromClient.testFlushTableSnapshotWithProcedure
+bug-777,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-778,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-779,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-780,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-781,when-missing-cap,org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.completeFile,TestFanOutOneBlockAsyncDFSOutput.testExcludeFailedConnectToDatanode
+bug-782,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestCompactionArchiveConcurrentClose.testStoreCloseAndDischargeRunningInParallel
+bug-783,when-missing-cap,org.apache.hadoop.hbase.util.FSUtils.checkClusterIdExists,TestSecurityHeadersFilter.testDefaultValues
+bug-784,when-missing-cap,org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.archive,AbstractTestFSWAL.testFlushSequenceIdIsGreaterThanAllEditsInHFile
+bug-785,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+bug-786,when-missing-cap,org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile,TestEnableTable.testDeleteForSureClearsAllTableRowsFromMeta
+bug-787,when-missing-cap,org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos$ExceptionResponse$Builder.mergeFrom,TestAsyncTable.testIncrement
+
+2025-10-03 02:16:11,289 [INFO] [WASABI-HELPER]: [INFO]: Finished processing /home/cc/sosp24-ae/wasabi/../results/hbase/202510030216. Status: 0
+2025-10-03 02:16:11,361 [INFO]
+
+2025-10-03 02:16:11,362 [INFO] *****************
+2025-10-03 02:16:11,362 [INFO] * *
+2025-10-03 02:16:11,362 [INFO] * hive: setup *
+2025-10-03 02:16:11,362 [INFO] * *
+2025-10-03 02:16:11,362 [INFO] *****************
+
+2025-10-03 02:16:11,362 [INFO] [WASABI-HELPER]: [INFO]: Cloning hive repository from https://github.com/apache/hive.git...
+2025-10-03 02:17:21,656 [INFO] [WASABI-HELPER]: [INFO]: Successfully cloned hive.
+2025-10-03 02:17:21,656 [INFO] Checking out version e08a600 for hive...
+2025-10-03 02:17:23,155 [INFO] [WASABI-HELPER]: [INFO]: Successfully checked out version e08a600 for hive.
+2025-10-03 02:17:23,156 [INFO]
+
+2025-10-03 02:17:23,156 [INFO] ****************************
+2025-10-03 02:17:23,156 [INFO] * *
+2025-10-03 02:17:23,156 [INFO] * hive: code preparation *
+2025-10-03 02:17:23,156 [INFO] * *
+2025-10-03 02:17:23,156 [INFO] ****************************
+
+2025-10-03 02:17:23,156 [INFO] [WASABI-HELPER]: [INFO]: Renamed /home/cc/sosp24-ae/wasabi/../benchmarks/hive/pom.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hive/pom-original.xml.
+2025-10-03 02:17:23,157 [INFO] [WASABI-HELPER]: [INFO]: Copied /home/cc/sosp24-ae/wasabi/../wasabi/wasabi-testing/config/hive/pom-hive.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hive/pom.xml.
+2025-10-03 02:17:23,157 [INFO] [WASABI-HELPER]: [INFO]: Renamed /home/cc/sosp24-ae/wasabi/../benchmarks/hive/standalone-metastore/pom.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hive/standalone-metastore/pom-original.xml.
+2025-10-03 02:17:23,157 [INFO] [WASABI-HELPER]: [INFO]: Copied /home/cc/sosp24-ae/wasabi/../wasabi/wasabi-testing/config/hive/pom-hive-standalone-metastore.xml to /home/cc/sosp24-ae/wasabi/../benchmarks/hive/standalone-metastore/pom.xml.
+2025-10-03 02:17:23,794 [INFO] [WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: 0
+2025-10-03 02:17:24,027 [INFO] [WASABI-HELPER]: [INFO]: Successfully overwritten retry-related bounds. Status: 0
+2025-10-03 02:17:24,027 [INFO]
+
+2025-10-03 02:17:24,027 [INFO] **************************
+2025-10-03 02:17:24,027 [INFO] * *
+2025-10-03 02:17:24,027 [INFO] * hive: bug triggering *
+2025-10-03 02:17:24,027 [INFO] * *
+2025-10-03 02:17:24,027 [INFO] **************************
+
+2025-10-03 03:06:13,717 [INFO] [WASABI-HELPER]: [INFO]: Finished running test suite for hive. Status: 0
+2025-10-03 03:06:13,718 [INFO]
+
+2025-10-03 03:06:13,718 [INFO] ***********************
+2025-10-03 03:06:13,718 [INFO] * *
+2025-10-03 03:06:13,718 [INFO] * hive: Bug oracles *
+2025-10-03 03:06:13,718 [INFO] * *
+2025-10-03 03:06:13,718 [INFO] ***********************
+
+2025-10-03 03:06:14,586 [INFO] // ----------------------------- //
+
+2025-10-03 03:06:14,587 [INFO] Retry bugs for hive
+
+2025-10-03 03:06:14,587 [INFO] // ----------------------------- //
+
+2025-10-03 03:06:14,587 [INFO] bug-1,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,587 [INFO] bug-2,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,587 [INFO] bug-3,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,587 [INFO] bug-4,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,587 [INFO] bug-5,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,587 [INFO] bug-6,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,587 [INFO] bug-7,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,587 [INFO] bug-8,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,587 [INFO] bug-9,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,587 [INFO] bug-10,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,587 [INFO] bug-11,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,587 [INFO] bug-12,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,587 [INFO] bug-13,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,587 [INFO] bug-14,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,588 [INFO] bug-15,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,588 [INFO] bug-16,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,588 [INFO] bug-17,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,588 [INFO] bug-18,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,588 [INFO] bug-19,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,588 [INFO] bug-20,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,588 [INFO] bug-21,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,588 [INFO] bug-22,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,588 [INFO] bug-23,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,588 [INFO] bug-24,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,588 [INFO] bug-25,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,588 [INFO] bug-26,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,588 [INFO] bug-27,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,588 [INFO] bug-28,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,588 [INFO] bug-29,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,588 [INFO] bug-30,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,588 [INFO] bug-31,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,588 [INFO] bug-32,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,588 [INFO] bug-33,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,589 [INFO] bug-34,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,589 [INFO] bug-35,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,589 [INFO] bug-36,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,589 [INFO] bug-37,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,589 [INFO] bug-38,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,589 [INFO] bug-39,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,589 [INFO] bug-40,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,589 [INFO] bug-41,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,589 [INFO] bug-42,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,589 [INFO] bug-43,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,589 [INFO] bug-44,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,589 [INFO] bug-45,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,589 [INFO] bug-46,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,589 [INFO] bug-47,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,589 [INFO] bug-48,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,589 [INFO] bug-49,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,589 [INFO] bug-50,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,589 [INFO] bug-51,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,589 [INFO] bug-52,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,589 [INFO] bug-53,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,590 [INFO] bug-54,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,590 [INFO] bug-55,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,590 [INFO] bug-56,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,590 [INFO] bug-57,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,590 [INFO] bug-58,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,590 [INFO] bug-59,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,590 [INFO] bug-60,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,590 [INFO] bug-61,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,590 [INFO] bug-62,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,590 [INFO] bug-63,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,590 [INFO] bug-64,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,590 [INFO] bug-65,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,590 [INFO] bug-66,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,590 [INFO] bug-67,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,590 [INFO] bug-68,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,590 [INFO] bug-69,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,590 [INFO] bug-70,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,590 [INFO] bug-71,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,590 [INFO] bug-72,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,590 [INFO] bug-73,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,591 [INFO] bug-74,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,591 [INFO] bug-75,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,591 [INFO] bug-76,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,591 [INFO] bug-77,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,591 [INFO] bug-78,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,591 [INFO] bug-79,when-missing-backoff,org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open,TestHCatMultiOutputFormat.testOutputFormat
+
+2025-10-03 03:06:14,591 [INFO] bug-80,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,591 [INFO] bug-81,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,591 [INFO] bug-82,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,591 [INFO] bug-83,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,591 [INFO] bug-84,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec,TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithPurge
+
+2025-10-03 03:06:14,591 [INFO] bug-85,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,591 [INFO] bug-86,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,591 [INFO] bug-87,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,591 [INFO] bug-88,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,591 [INFO] bug-89,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,591 [INFO] bug-90,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,591 [INFO] bug-91,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,591 [INFO] bug-92,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,592 [INFO] bug-93,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,592 [INFO] bug-94,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,592 [INFO] bug-95,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,592 [INFO] bug-96,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,592 [INFO] bug-97,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,592 [INFO] bug-98,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,592 [INFO] bug-99,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,592 [INFO] bug-100,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,592 [INFO] bug-101,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,592 [INFO] bug-102,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,592 [INFO] bug-103,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,592 [INFO] bug-104,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,592 [INFO] bug-105,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,592 [INFO] bug-106,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,592 [INFO] bug-107,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,592 [INFO] bug-108,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,592 [INFO] bug-109,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,592 [INFO] bug-110,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,592 [INFO] bug-111,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,592 [INFO] bug-112,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,593 [INFO] bug-113,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,593 [INFO] bug-114,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,593 [INFO] bug-115,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,593 [INFO] bug-116,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,593 [INFO] bug-117,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,593 [INFO] bug-118,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,593 [INFO] bug-119,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec,TestDruidStorageHandler.testCommitInsertTable
+
+2025-10-03 03:06:14,593 [INFO] bug-120,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,593 [INFO] bug-121,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,593 [INFO] bug-122,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,593 [INFO] bug-123,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,593 [INFO] bug-124,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,593 [INFO] bug-125,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,593 [INFO] bug-126,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,593 [INFO] bug-127,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,593 [INFO] bug-128,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,593 [INFO] bug-129,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,593 [INFO] bug-130,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,593 [INFO] bug-131,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,594 [INFO] bug-132,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-133,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,594 [INFO] bug-134,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-135,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,594 [INFO] bug-136,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,594 [INFO] bug-137,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-138,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,594 [INFO] bug-139,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-140,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,594 [INFO] bug-141,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,594 [INFO] bug-142,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,594 [INFO] bug-143,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-144,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,594 [INFO] bug-145,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,594 [INFO] bug-146,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,594 [INFO] bug-147,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-148,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,594 [INFO] bug-149,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,594 [INFO] bug-150,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,594 [INFO] bug-151,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,594 [INFO] bug-152,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,595 [INFO] bug-153,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,595 [INFO] bug-154,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,595 [INFO] bug-155,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,595 [INFO] bug-156,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,595 [INFO] bug-157,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,595 [INFO] bug-158,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,595 [INFO] bug-159,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,595 [INFO] bug-160,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,595 [INFO] bug-161,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,595 [INFO] bug-162,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReOptimizationCanSendBackStatsToCBO
+
+2025-10-03 03:06:14,595 [INFO] bug-163,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,595 [INFO] bug-164,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,595 [INFO] bug-165,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,595 [INFO] bug-166,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,595 [INFO] bug-167,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,595 [INFO] bug-168,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,595 [INFO] bug-169,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,595 [INFO] bug-170,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,595 [INFO] bug-171,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,596 [INFO] bug-172,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,596 [INFO] bug-173,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,596 [INFO] bug-174,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,596 [INFO] bug-175,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,596 [INFO] bug-176,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,596 [INFO] bug-177,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,596 [INFO] bug-178,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,596 [INFO] bug-179,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,596 [INFO] bug-180,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,596 [INFO] bug-181,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,596 [INFO] bug-182,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,596 [INFO] bug-183,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,596 [INFO] bug-184,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,596 [INFO] bug-185,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,596 [INFO] bug-186,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,596 [INFO] bug-187,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,596 [INFO] bug-188,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,596 [INFO] bug-189,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,596 [INFO] bug-190,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,596 [INFO] bug-191,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,597 [INFO] bug-192,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,597 [INFO] bug-193,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,597 [INFO] bug-194,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,597 [INFO] bug-195,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,597 [INFO] bug-196,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,597 [INFO] bug-197,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,597 [INFO] bug-198,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,597 [INFO] bug-199,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,597 [INFO] bug-200,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,597 [INFO] bug-201,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,597 [INFO] bug-202,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,597 [INFO] bug-203,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,597 [INFO] bug-204,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,597 [INFO] bug-205,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,597 [INFO] bug-206,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,597 [INFO] bug-207,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,597 [INFO] bug-208,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,597 [INFO] bug-209,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,597 [INFO] bug-210,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,597 [INFO] bug-211,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,598 [INFO] bug-212,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,598 [INFO] bug-213,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,598 [INFO] bug-214,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,598 [INFO] bug-215,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,598 [INFO] bug-216,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,598 [INFO] bug-217,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,598 [INFO] bug-218,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,598 [INFO] bug-219,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,598 [INFO] bug-220,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,598 [INFO] bug-221,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,598 [INFO] bug-222,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,598 [INFO] bug-223,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,598 [INFO] bug-224,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,598 [INFO] bug-225,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,598 [INFO] bug-226,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,598 [INFO] bug-227,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,598 [INFO] bug-228,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,598 [INFO] bug-229,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,598 [INFO] bug-230,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,598 [INFO] bug-231,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,598 [INFO] bug-232,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,598 [INFO] bug-233,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,598 [INFO] bug-234,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,598 [INFO] bug-235,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,598 [INFO] bug-236,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,598 [INFO] bug-237,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-238,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,599 [INFO] bug-239,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,599 [INFO] bug-240,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,599 [INFO] bug-241,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-242,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,599 [INFO] bug-243,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-244,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,599 [INFO] bug-245,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-246,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-247,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,599 [INFO] bug-248,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,599 [INFO] bug-249,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,599 [INFO] bug-250,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec,TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithoutPurge
+
+2025-10-03 03:06:14,599 [INFO] bug-251,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,599 [INFO] bug-252,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,599 [INFO] bug-253,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,599 [INFO] bug-254,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,599 [INFO] bug-255,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,599 [INFO] bug-256,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,599 [INFO] bug-257,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,599 [INFO] bug-258,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-259,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,599 [INFO] bug-260,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+
+2025-10-03 03:06:14,599 [INFO] bug-261,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,599 [INFO] bug-262,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,599 [INFO] bug-263,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,599 [INFO] bug-264,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+
+2025-10-03 03:06:14,599 [INFO] bug-265,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+
+2025-10-03 03:06:14,599 [INFO] bug-266,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,599 [INFO] bug-267,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,599 [INFO] bug-268,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,599 [INFO] bug-269,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,600 [INFO] bug-270,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,600 [INFO] bug-271,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,600 [INFO] bug-272,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,600 [INFO] bug-273,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,600 [INFO] bug-274,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,600 [INFO] bug-275,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+
+2025-10-03 03:06:14,600 [INFO] bug-276,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+
+2025-10-03 03:06:14,600 [INFO] bug-277,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,600 [INFO] bug-278,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,600 [INFO] bug-279,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,600 [INFO] bug-280,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,600 [INFO] bug-281,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,600 [INFO] bug-282,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,600 [INFO] bug-283,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,600 [INFO] bug-284,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+
+2025-10-03 03:06:14,600 [INFO] bug-285,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,600 [INFO] bug-286,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+
+2025-10-03 03:06:14,600 [INFO] bug-287,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,600 [INFO] bug-288,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,600 [INFO] bug-289,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+
+2025-10-03 03:06:14,600 [INFO] bug-290,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,600 [INFO] bug-291,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,600 [INFO] bug-292,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,600 [INFO] bug-293,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,600 [INFO] bug-294,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,600 [INFO] bug-295,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,600 [INFO] bug-296,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+
+2025-10-03 03:06:14,600 [INFO] bug-297,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,600 [INFO] bug-298,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,600 [INFO] bug-299,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+
+2025-10-03 03:06:14,600 [INFO] bug-300,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,600 [INFO] bug-301,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,601 [INFO] bug-302,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+
+2025-10-03 03:06:14,601 [INFO] bug-303,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,601 [INFO] bug-304,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+
+2025-10-03 03:06:14,601 [INFO] bug-305,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,601 [INFO] bug-306,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+
+2025-10-03 03:06:14,601 [INFO] bug-307,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+
+2025-10-03 03:06:14,601 [INFO] bug-308,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+
+2025-10-03 03:06:14,601 [INFO] bug-309,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+
+2025-10-03 03:06:14,601 [INFO] bug-310,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+
+2025-10-03 03:06:14,601 [INFO] bug-311,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,601 [INFO] // ----------------------------- //
+// Retry bugs for hive //
+// ----------------------------- //
+bug-1,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-2,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-3,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-4,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-5,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-6,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-7,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-8,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-9,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-10,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-11,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-12,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-13,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-14,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-15,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-16,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-17,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-18,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-19,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-20,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-21,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-22,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-23,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-24,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-25,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-26,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-27,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-28,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-29,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-30,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-31,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-32,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-33,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-34,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-35,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-36,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-37,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-38,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-39,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-40,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-41,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-42,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-43,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-44,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-45,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-46,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-47,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-48,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-49,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-50,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-51,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-52,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-53,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-54,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-55,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-56,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-57,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-58,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-59,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-60,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-61,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-62,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-63,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-64,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-65,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-66,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-67,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-68,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-69,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-70,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-71,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-72,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-73,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-74,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-75,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-76,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-77,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-78,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-79,when-missing-backoff,org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open,TestHCatMultiOutputFormat.testOutputFormat
+bug-80,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-81,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-82,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-83,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-84,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec,TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithPurge
+bug-85,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-86,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-87,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-88,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-89,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-90,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-91,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-92,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-93,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-94,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-95,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-96,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-97,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-98,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-99,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-100,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-101,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-102,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-103,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-104,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-105,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-106,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-107,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-108,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-109,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-110,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-111,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-112,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-113,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-114,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-115,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-116,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-117,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-118,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-119,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec,TestDruidStorageHandler.testCommitInsertTable
+bug-120,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-121,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-122,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-123,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-124,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-125,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-126,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-127,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-128,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-129,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-130,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-131,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-132,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-133,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-134,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-135,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-136,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-137,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-138,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-139,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-140,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-141,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-142,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-143,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-144,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-145,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-146,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-147,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-148,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-149,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-150,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-151,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-152,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-153,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-154,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-155,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-156,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-157,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-158,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-159,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-160,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-161,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-162,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReOptimizationCanSendBackStatsToCBO
+bug-163,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-164,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-165,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-166,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-167,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-168,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-169,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-170,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-171,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-172,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-173,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-174,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-175,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-176,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-177,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-178,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-179,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-180,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-181,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-182,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-183,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-184,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-185,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-186,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-187,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-188,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-189,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-190,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-191,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-192,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-193,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-194,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-195,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-196,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-197,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-198,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-199,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-200,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-201,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-202,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-203,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-204,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-205,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-206,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-207,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-208,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-209,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-210,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-211,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-212,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-213,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-214,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-215,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-216,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-217,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-218,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-219,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-220,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-221,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-222,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-223,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-224,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-225,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-226,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-227,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-228,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-229,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-230,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-231,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-232,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-233,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-234,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-235,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-236,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-237,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-238,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-239,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-240,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-241,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-242,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-243,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-244,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-245,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+bug-246,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-247,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-248,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-249,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-250,how-bug,org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.publishSegmentWithShardSpec,TestDruidStorageHandler.testCommitCreateTablePlusCommitDropTableWithoutPurge
+bug-251,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-252,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-253,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-254,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-255,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-256,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-257,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-258,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-259,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-260,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testReExecutedIfMapJoinError
+bug-261,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-262,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-263,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-264,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testDifferentFiltersAreNotMatched
+bug-265,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
+bug-266,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-267,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-268,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-269,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-270,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-271,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-272,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-273,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-274,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-275,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testExplainSupport
+bug-276,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameFiltersMatched
+bug-277,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-278,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-279,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-280,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-281,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-282,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-283,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-284,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
+bug-285,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-286,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestCounterMapping.testUsageOfRuntimeInfo
+bug-287,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-288,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-289,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestHiveTestEnvSetup.testMappingSameQuery
+bug-290,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-291,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-292,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-293,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-294,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-295,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-296,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestScheduledQueryService.testScheduledQueryExecution
+bug-297,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-298,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-299,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingQuery
+bug-300,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-301,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-302,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortTask
+bug-303,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-304,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testNotReExecutedIfAssertionError
+bug-305,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-306,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testSuccessfulJob
+bug-307,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestTezOutputCommitter.testAbortJob
+bug-308,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingHS2
+bug-309,when-missing-backoff,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatCachingMetaStore
+bug-310,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestReOptimization.testStatsAreSetInReopt
+bug-311,when-missing-cap,org.apache.hadoop.hive.ql.exec.tez.monitoring.TezJobMonitor.monitorExecution,TestOperatorCmp.testSameJoinMatched
+
+2025-10-03 03:06:14,601 [INFO] [WASABI-HELPER]: [INFO]: Finished processing /home/cc/sosp24-ae/wasabi/../results/hive/202510030306. Status: 0
diff --git a/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/wasabi_coverage.py b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/wasabi_coverage.py
new file mode 100644
index 00000000..07e5ef14
--- /dev/null
+++ b/benchmarks/arteval_bench/data/benchmark/sosp24_wasabi/wasabi/utils/wasabi_coverage.py
@@ -0,0 +1,97 @@
+import re
+import os
+from typing import Tuple, Optional
+
+def get_pointcut_coverage_breakdown(log_msg: str) -> Optional[Tuple[str, str, str, str]]:
+ """Extract coverage information from a given log message.
+
+ Args:
+ log_msg (str): A log message in the file.
+
+ Returns:
+        Tuple[str, str, str, str]: the (test name, retry caller, injection site, injection location) strings, or None if the message is malformed.
+ """
+    segments = log_msg.split(" | ")
+    try:
+        test_name = next(s.split("---")[1] for s in segments if "Test " in s)
+        injection_site = re.search(
+            r'\w+\(.*\s+(.*\(.*?\))\)',
+            next(s.split("---")[1] for s in segments if "Injection site " in s)
+        ).group(1)
+        injection_location = next(s.split("---")[1] for s in segments if "Injection location " in s)
+        retry_caller = next(s.split("---")[1] for s in segments if "Retry caller " in s)
+    except (StopIteration, AttributeError):
+        # A segment is missing or the regex did not match: treat the line as malformed.
+        return None
+    return test_name, retry_caller, injection_site, injection_location
+
+def get_injection_coverage_breakdown(log_msg: str) -> Optional[Tuple[str, str, str]]:
+ """Extract coverage information from a given log message.
+
+ Args:
+ log_msg (str): A log message in the file.
+
+ Returns:
+        Tuple[str, str, str]: the (test name, injection site, retry location) strings, or None if the message is malformed.
+ """
+    segments = log_msg.split(" | ")
+    try:
+        test_name = next(s.split("---")[1] for s in segments if "Test " in s)
+        injection_site = re.search(
+            r'\w+\(.*\s+(.*\(.*?\))\)',
+            next(s.split("---")[3] for s in segments if "thrown after calling " in s)
+        ).group(1)
+        retry_location = next(s.split("---")[1] for s in segments if "Retry location " in s)
+        _retry_attempt = next(s.split("---")[1] for s in segments if "Retry attempt " in s)  # currently unused
+    except (StopIteration, AttributeError):
+        # A segment is missing or the regex did not match: treat the line as malformed.
+        return None
+    return test_name, injection_site, retry_location
+
+def main():
+ pointcut_cov_methods = set()
+ pointcut_cov_breakdown = set()
+ injection_cov_methods = set()
+ injection_cov_breakdown = set()
+
+ for root, _, files in os.walk('.'):
+ for fname in files:
+ if fname.endswith('-output.txt'):
+ with open(os.path.join(root, fname), 'r') as file:
+ for line in file:
+ if '[wasabi]' in line and '[Pointcut]' in line and "Test ---" in line:
+ coverage_info = get_pointcut_coverage_breakdown(line)
+ if coverage_info:
+ pointcut_cov_breakdown.add(coverage_info)
+ pointcut_cov_methods.add(coverage_info[2])
+ else:
+ print("[wasabi-utils]: Malformed log line: " + line)
+ elif '[wasabi]' in line and '[Injection]' in line:
+ coverage_info = get_injection_coverage_breakdown(line)
+ if coverage_info:
+ injection_cov_breakdown.add(coverage_info)
+ injection_cov_methods.add(coverage_info[1])
+ else:
+ print("[wasabi-utils]: Malformed log line: " + line)
+ else:
+ continue
+
+ print("=== Coverage stats ===")
+ print("Pointcut coverage: " + str(len(pointcut_cov_methods)))
+ print("Injection coverage: " + str(len(injection_cov_methods)))
+
+ print("\n\n=== Injection sites not covered ===")
+ for method in pointcut_cov_methods:
+ if method not in injection_cov_methods:
+ print(method)
+
+ print("\n\n=== Pointcut covered breakdown ===")
+ for (_, retry_caller, injection_site, injection_location) in pointcut_cov_breakdown:
+ print(retry_caller + " " + injection_site + " " + injection_location)
+
+ print("\n\n=== Injection covered breakdown ===")
+ for (_, injection_site, retry_location) in injection_cov_breakdown:
+ print(injection_site + " " + retry_location)
+
+if __name__ == "__main__":
+ main()
diff --git a/benchmarks/arteval_bench/go-python.Dockerfile b/benchmarks/arteval_bench/go-python.Dockerfile
new file mode 100644
index 00000000..af386424
--- /dev/null
+++ b/benchmarks/arteval_bench/go-python.Dockerfile
@@ -0,0 +1,36 @@
+FROM python:3.12.6
+
+ARG DEBIAN_FRONTEND=noninteractive
+ENV TZ=Etc/UTC
+
+WORKDIR /
+ADD . .
+
+# SWE-ReX will always attempt to install its server into your docker container;
+# however, this takes a couple of seconds. If we already provide it in the image,
+# this is much faster.
+RUN pip install pipx
+RUN pipx install swe-rex
+RUN pipx ensurepath
+
+RUN pip install flake8
+
+ENV GOLANG_VERSION=1.22.3
+
+RUN apt-get update && apt-get install -y wget tar git build-essential \
+ && wget https://go.dev/dl/go${GOLANG_VERSION}.linux-amd64.tar.gz \
+ && tar -C /usr/local -xzf go${GOLANG_VERSION}.linux-amd64.tar.gz \
+ && rm go${GOLANG_VERSION}.linux-amd64.tar.gz \
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
+
+ENV PATH="/usr/local/go/bin:${PATH}"
+
+RUN python --version && go version
+
+SHELL ["/bin/bash", "-c"]
+# This is where pipx installs things
+ENV PATH="$PATH:/root/.local/bin/"
+
+RUN python --version && go version
+
+CMD ["bash"]
diff --git a/benchmarks/arteval_bench/install.sh b/benchmarks/arteval_bench/install.sh
index 8a2c40c6..ce58fe96 100644
--- a/benchmarks/arteval_bench/install.sh
+++ b/benchmarks/arteval_bench/install.sh
@@ -2,18 +2,56 @@
set -e # Exit immediately on error.
-# if .venv does not exist, create it
-if [ -d ".venv" ]; then
- echo "==> .venv already exists, skipping creation."
+docker --version
+python3.12 -m venv .venv
+# python3 -m venv .venvdoc
+source .venv/bin/activate
+
+if [ ! -d "SWE-agent" ]; then
+ echo "==> Install SWE-agent and its dependencies..."
+ git clone https://github.com/SWE-agent/SWE-agent.git
+ cd SWE-agent
+ git checkout 0c27f286303a939aa868ad2003bc4b6776771791
+ pip install --editable .
+ sweagent --help
+ cd ..
+else
+ echo "==> SWE-agent repository already exists, skipping clone."
+fi
+
+pip install -r requirements.txt
+pip install pytest
+pip install pytest-cov
+deactivate
+
+echo "==> Setting up SystemCourseProject environment..."
+cd data/benchmark/projects
+if [ -d "test-repo" ]; then
+ echo "==> test-repo already exists, skipping clone."
+else
+ echo "==> Cloning test-repo... "
+ git clone https://github.com/SWE-agent/test-repo.git
+fi
+
+if [ -d "6.5840-golabs-2024" ]; then
+ echo "==> 6.5840-golabs-2024 already exists, skipping clone."
+else
+ echo "==> Cloning 6.5840-golabs-2024..."
+ git clone git://g.csail.mit.edu/6.5840-golabs-2024
+fi
+
+if [ -d "xv6-labs-2024" ]; then
+ echo "==> xv6-labs-2024 already exists, skipping clone."
+else
+ echo "==> Cloning xv6-labs-2024..."
+ git clone git://g.csail.mit.edu/xv6-labs-2024
+fi
+
+if [ -d "6.5840-golabs-2025" ]; then
+ echo "==> 6.5840-golabs-2025 already exists, skipping clone."
else
- echo "==> Creating .venv directory..."
-
- python3 -m venv .venv
- source .venv/bin/activate
- pip install -r requirements.txt
- pip install pytest
- pip install pytest-cov
- deactivate
+ echo "==> Cloning 6.5840-golabs-2025..."
+ git clone git://g.csail.mit.edu/6.5840-golabs-2025
fi
-echo "==> ExampleBench environment is set up successfully."
+echo "==> SystemCourseProject environment is set up successfully."
diff --git a/benchmarks/arteval_bench/run.sh b/benchmarks/arteval_bench/run.sh
index 45f0f103..f4d467a9 100644
--- a/benchmarks/arteval_bench/run.sh
+++ b/benchmarks/arteval_bench/run.sh
@@ -19,11 +19,19 @@ NEW_MODEL_NAME="${MODEL_NAME//\//_}"
# export OPENAI_API_KEY="EMPTY"
source .venv/bin/activate
-echo "==> Start to run ExampleBench"
+echo "==> Start to run ArtEvalBench"
# Note that if you benchmark has multiple tasks, you need to add --task
# in your code to enable task selection.
-python src/main.py \
- --model_name "${MODEL_NAME}"
- # --save_path "./outputs/examplebench__${NEW_MODEL_NAME}__$(date +"%Y-%m-%d_%H-%M-%S")" \
-
+# sweagent --help
+# python src/main.py \
+# --task "test"
+ # --save_path "./outputs/systemcourseproject__${NEW_MODEL_NAME}__$(date +"%Y-%m-%d_%H-%M-%S")" \
+
+python src/main_setup.py
+ # --model "$MODEL_NAME" \
+ # --save_path "./outputs/systemcourseproject__${NEW_MODEL_NAME}__$(date +"%Y-%m-%d_%H-%M-%S")" \
+
+# python src/main_setup.py \
+# --input_json "./data/benchmark/course_lab_task_examples.jsonl"
+
deactivate
diff --git a/benchmarks/arteval_bench/src/__init__.py b/benchmarks/arteval_bench/src/__init__.py
index 284e62cb..246d4f7e 100644
--- a/benchmarks/arteval_bench/src/__init__.py
+++ b/benchmarks/arteval_bench/src/__init__.py
@@ -1 +1 @@
-"""Init file for the example_bench package."""
+"""Init file for the ArtEval package."""
diff --git a/benchmarks/arteval_bench/src/agents/claudecode/install.sh b/benchmarks/arteval_bench/src/agents/claudecode/install.sh
new file mode 100644
index 00000000..46a11589
--- /dev/null
+++ b/benchmarks/arteval_bench/src/agents/claudecode/install.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+
+set -e # Exit immediately on error.
+
+apt-get update -y
+apt-get install -y nodejs npm
+
+npm install -g @anthropic-ai/claude-code
diff --git a/benchmarks/arteval_bench/src/agents/claudecode/runner.sh b/benchmarks/arteval_bench/src/agents/claudecode/runner.sh
new file mode 100644
index 00000000..75608b97
--- /dev/null
+++ b/benchmarks/arteval_bench/src/agents/claudecode/runner.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+set -e # Exit immediately on error.
+
+# set the model and task as parameters
+if [ $# -ne 2 ]; then
+    echo "Usage: $0 <model> <task>"
+ echo "Example: $0 azure/gpt-4.1 \"set java env\""
+ exit 1
+fi
+
+export ANTHROPIC_API_KEY="sk-XXXX"
+claude -p "$2" --model "$1" --output-format json
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/src/agents/minisweagent/runner.sh b/benchmarks/arteval_bench/src/agents/minisweagent/runner.sh
new file mode 100644
index 00000000..0e8e468b
--- /dev/null
+++ b/benchmarks/arteval_bench/src/agents/minisweagent/runner.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -e # Exit immediately on error.
+
+# set the model and task as parameters
+if [ $# -ne 2 ]; then
+    echo "Usage: $0 <model> <task>"
+ echo "Example: $0 azure/gpt-4.1 \"set java env\""
+ exit 1
+fi
+
+pip install mini-swe-agent
+
+export AZURE_API_KEY="XXXX"
+export AZURE_API_BASE="XXXX"
+export ANTHROPIC_API_KEY="sk-XXXX"
+
+
+mini -t "$2" -m "$1" -y -o agent_trajectory.json
+# mini -t "set java env" -m "anthropic/claude-sonnet-4-5-20250929" -y
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/src/agents/openhand/config.toml b/benchmarks/arteval_bench/src/agents/openhand/config.toml
new file mode 100644
index 00000000..977d8e92
--- /dev/null
+++ b/benchmarks/arteval_bench/src/agents/openhand/config.toml
@@ -0,0 +1,6 @@
+[core]
+runtime = "local"
+
+[llm]
+model = "claude-3-5-sonnet-20241022"
+# model = "claude-3-7-sonnet-20250219"
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/src/agents/openhand/install.sh b/benchmarks/arteval_bench/src/agents/openhand/install.sh
new file mode 100644
index 00000000..5b3fdd9a
--- /dev/null
+++ b/benchmarks/arteval_bench/src/agents/openhand/install.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+set -e # Exit immediately on error.
+curl -sSL https://install.python-poetry.org | python3 -
+# Make sure ~/.local/bin is on PATH for your shell session:
+export PATH="$HOME/.local/bin:$PATH"
+
+python -V # should show 3.12.7
+apt-get update -y
+apt-get install -y tmux
+
+pip install --no-cache-dir playwright && python -m playwright install --with-deps chromium
+
+git clone https://github.com/All-Hands-AI/OpenHands.git
+cd OpenHands/
+poetry env use $(command -v python3.12)
+poetry run python -V
+poetry install
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/src/agents/openhand/runner.sh b/benchmarks/arteval_bench/src/agents/openhand/runner.sh
new file mode 100644
index 00000000..2fd48920
--- /dev/null
+++ b/benchmarks/arteval_bench/src/agents/openhand/runner.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+set -e # Exit immediately on error.
+
+# set the model and task as parameters
+if [ $# -ne 2 ]; then
+    echo "Usage: $0 <model> <task>"
+ echo "Example: $0 azure/gpt-4.1 \"set java env\""
+ exit 1
+fi
+
+export ANTHROPIC_API_KEY="sk-XXXX"
+
+echo "==> Start to run the OpenHands agent"
+cd OpenHands/
+poetry run python -m openhands.core.main --config-file /agent/config.toml --agent-cls CodeActAgent --selected-repo /repo -t "$2" --directory .
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/src/config_aoi.yaml b/benchmarks/arteval_bench/src/config_aoi.yaml
new file mode 100644
index 00000000..3d4cd573
--- /dev/null
+++ b/benchmarks/arteval_bench/src/config_aoi.yaml
@@ -0,0 +1,122 @@
+agent:
+ model:
+ name: azure/gpt-4.1
+ api_version: "2023-05-15"
+ temperature: 0.7
+ top_p: 1.0
+ per_instance_cost_limit: 0.0
+ templates:
+ system_template: |-
+ SETTING:
+
+ You are an autonomous programmer, and you are working directly in the command line with a special terminal interface.
+ The terminal interface is formatted as follows:
+
+      (Open file: <path>)
+      (Current directory: <cwd>)
+ bash-$
+
+      You can run any bash command through the special terminal interface; it will execute the command and return the output.
+      In addition to typical bash commands, the interface also provides a file editor that shows you {{WINDOW}} lines of a file at a time.
+      You can use specific commands in the file editor that help you navigate and edit files.
+ To call a command, you need to invoke it with a function call/tool call.
+
+ Please note that THE EDIT COMMAND REQUIRES PROPER INDENTATION. For example, if you are looking at this file:
+
+ def fct():
+ print("Hello world")
+
+ and you want to edit the file to read:
+
+ def fct():
+ print("Hello")
+ print("world")
+
+      your search string should be `Hello world` and your replace string should be `"Hello"\n    print("world")`
+ (note the extra spaces before the print statement!).
+      You could also get the same result by searching for `    print("Hello world")` and replacing with `    print("Hello")\n    print("world")`.
+
+ The special terminal interface does NOT support interactive session commands (e.g., python, vim), so please do not invoke them.
+ Instead, you can write scripts and run them. E.g., you can write a python script and then run it with the python command.
+
+ The special terminal interface also does NOT support container commands (e.g., docker, podman), so please do not invoke them.
+ Instead, you can directly install any dependencies you need and directly run the programs you need.
+
+ A few important tips for using the special terminal interface:
+
+ 1. Locate relevant code using the `find_file`, `search_file`, and `search_dir` commands. `open` the file you want to edit. Use the `edit` command to perform edits.
+
+ 2. If you run a command and it doesn't work, try running a different command. A command that did not work once will not work the second time unless you modify it!
+
+ 3. If you open a file and need to get to an area around a specific line that is not in the first 100 lines, say line 583, don't just use the scroll_down command multiple times. Instead, use the goto 583 command. It's much quicker.
+
+ 4. Always make sure to look at the currently open file and the current working directory (which appears right after the currently open file). The currently open file might be in a different directory than the working directory! Note that some commands, such as 'create', open files, so they might change the current open file.
+
+      5. When editing files, it is easy to accidentally write code with incorrect indentation or make other mistakes. Always check the code after you issue an edit to make sure that it reflects what you wanted to accomplish. If it didn't, issue another command to fix it.
+
+ 6. When editing files, first explain the code you want to edit and why it is causing the problem. Then explain the edit you want to make and how it fixes the problem. Explain how the edit does not break existing functionality.
+ instance_template: |-
+ INSTRUCTIONS:
+
+ Now, you are going to conduct a task on your own using the special terminal interface.
+      When you are satisfied with all of the work you have done, you can simply run the `submit` command to submit your work so far, including the files you have created or edited.
+      The task may not require you to edit or write any code. If that is the case, you can simply run the `submit` command after you have completed the task.
+      MIND MISLEADING INFORMATION. The task may provide irrelevant or wrong information. The task may be impossible or may require out-of-the-box solutions.
+ The terminal outputs may be misleading or suggest invalid solutions. Therefore, ALWAYS RELY ON YOUR OWN KNOWLEDGE AND VERIFY BY YOURSELF.
+
+ Your task is described as follows.
+
+
+ TASK:
+
+ {{problem_statement}}
+
+
+ RESPONSE FORMAT:
+
+ First, you should ALWAYS include a general thought about what you are going to do next.
+ Then, for every response, you must include exactly _ONE_ tool call/function call.
+ Remember, you should always include a SINGLE tool call/function call and then wait for a response from the shell before continuing with more discussion and commands.
+ If you would like to issue two commands at once, PLEASE DO NOT DO THAT!
+ Please instead first submit just the first tool call, and then after receiving a response you will be able to issue the second.
+
+
+ Now your terminal session has started.
+
+ (Open file: {{open_file}})
+ (Current directory: {{working_dir}})
+ bash-$
+ next_step_template: |-
+ {{observation}}
+ (Open file: {{open_file}})
+ (Current directory: {{working_dir}})
+ bash-$
+ next_step_no_output_template: |-
+ Your command ran successfully and did not produce any output.
+ (Open file: {{open_file}})
+ (Current directory: {{working_dir}})
+ bash-$
+ demonstration_template: |
+ Here is a demonstration of how to correctly accomplish this task.
+ It is included to show you how to correctly use the interface.
+ You do not need to follow exactly what is done in the demonstration.
+ --- DEMONSTRATION ---
+ {{demonstration}}
+ --- END OF DEMONSTRATION ---
+ demonstrations:
+ - trajectories/demonstrations/replay__marshmallow-code__marshmallow-1867__function_calling_replace__install-1/marshmallow-code__marshmallow-1867.traj
+ put_demos_in_history: true
+ tools:
+ env_variables:
+ WINDOW: 100
+ OVERLAP: 2
+ bundles:
+ - path: tools/registry
+ - path: tools/defaults
+ - path: tools/search
+ # - path: tools/edit_linting
+ - path: tools/edit_replace
+ - path: tools/submit
+ enable_bash_tool: true
+ parse_function:
+ type: function_calling
\ No newline at end of file
diff --git a/benchmarks/arteval_bench/src/config_aoi_anthropic_tools.yaml b/benchmarks/arteval_bench/src/config_aoi_anthropic_tools.yaml
new file mode 100644
index 00000000..92119ead
--- /dev/null
+++ b/benchmarks/arteval_bench/src/config_aoi_anthropic_tools.yaml
@@ -0,0 +1,69 @@
+# This template is heavily inspired by anthropic and openhands. It is almost
+# identical to anthropic_filemap.yaml, but it removes python-specific language
+# and adds the multilingual_setup tool to support evaluation on the Multilingual dataset.
+agent:
+ type: default
+ model:
+ name: azure/gpt-4o
+ api_version: "2023-05-15"
+ temperature: 0.7
+ top_p: 1.0
+ per_instance_cost_limit: 0.0
+ templates:
+ system_template: |-
+ You are a helpful assistant that can interact with a computer to solve tasks.
+ instance_template: |-
+      <uploaded_files>
+      {{working_dir}}
+      </uploaded_files>
+      I've uploaded a code repository in the directory {{working_dir}}. Consider the following PR description:
+
+      <pr_description>
+      {{problem_statement}}
+      </pr_description>
+
+      Can you help me implement the necessary changes to the repository so that the requirements specified in the <pr_description> are met?
+      I've already taken care of all changes to any of the test files described in the <pr_description>. This means you DON'T have to modify the testing logic or any of the tests in any way!
+      Your task is to make the minimal changes to non-test files in the {{working_dir}} directory to ensure the <pr_description> is satisfied.
+      Follow these steps to resolve the issue:
+      1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
+ 2. Create a script to reproduce the error and execute it using the bash tool, to confirm the error
+ 3. Edit the sourcecode of the repo to resolve the issue
+ 4. Rerun your reproduce script and confirm that the error is fixed!
+ 5. Think about edgecases and make sure your fix handles them as well
+ Your thinking should be thorough and so it's fine if it's very long.
+ next_step_template: |-
+ OBSERVATION:
+ {{observation}}
+ next_step_no_output_template: |-
+ Your command ran successfully and did not produce any output.
+ tools:
+ execution_timeout: 300
+ bundles:
+ - path: tools/multilingual_setup
+ - path: tools/registry
+ - path: tools/edit_anthropic
+ - path: tools/review_on_submit_m
+ - path: tools/diff_state
+ enable_bash_tool: true
+ parse_function:
+ type: function_calling
+ registry_variables:
+ USE_FILEMAP: 'true'
+ SUBMIT_REVIEW_MESSAGES:
+ - |
+ Thank you for your work on this issue. Please carefully follow the steps below to help review your changes.
+
+ 1. If you made any changes to your code after running the reproduction script, please run the reproduction script again.
+ If the reproduction script is failing, please revisit your changes and make sure they are correct.
+ If you have already removed your reproduction script, please ignore this step.
+ 2. Remove your reproduction script (if you haven't done so already).
+ 3. If you have modified any TEST files, please revert them to the state they had before you started fixing the issue.
+ You can do this with `git checkout -- /path/to/test/file`. Use the diff below to find the files you need to revert.
+ 4. Run the submit command again to confirm.
+
+ Here is a list of all of your changes:
+
+
+ {{diff}}
+
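The `instance_template` and `next_step_template` entries above use Jinja-style `{{...}}` placeholders that the harness fills in at runtime. As a rough illustration only (the real harness uses a full template engine, not this function), a naive substitution could look like:

```python
# Naive stand-in for Jinja-style template rendering (illustration only;
# the actual rendering is done by the agent framework's template engine).
def render(template: str, **values: str) -> str:
    out = template
    for key, val in values.items():
        # Replace each {{key}} placeholder with its value.
        out = out.replace('{{' + key + '}}', val)
    return out

next_step = render('OBSERVATION:\n{{observation}}', observation='tests passed')
print(next_step)
```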
diff --git a/benchmarks/arteval_bench/src/main.py b/benchmarks/arteval_bench/src/main.py
index b078c17f..b4d40b70 100644
--- a/benchmarks/arteval_bench/src/main.py
+++ b/benchmarks/arteval_bench/src/main.py
@@ -1,4 +1,4 @@
-"""Example for benchmarking the performance of a model on a specific task."""
+"""This script runs a benchmark for evaluating patches in a software project."""
import argparse
import json
@@ -8,78 +8,62 @@
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
+from sdk.logger import logger
from sdk.utils import set_llm_endpoint_from_config
set_llm_endpoint_from_config('env.toml')
-from sdk.evaluator import BasicEvaluator # noqa: E402
-from sdk.executor import SimpleExecutor # noqa: E402
+from run_eval_in_env import run_eval
+from utils import get_task
-
-def main(_input_file, output_dir, _model_name, agent_name):
+def main(file_path, model, agent, save_path):
"""Main function for running the benchmark."""
- total_score = []
- with (
- open(_input_file, encoding='utf-8') as data,
- open(os.path.join(output_dir, 'result.jsonl'), 'w', encoding='utf-8') as output_file,
- ):
- for line in data:
- item = json.loads(line)
- print('============ ' + item['id'] + ' ============')
- if agent_name == 'llm':
- executor = SimpleExecutor(_model_name, item['sys_prompt'])
- else:
- # You can add more agents here
- raise ValueError(f'Unknown agent name: {agent_name}')
- response = executor.run(item['user_prompt'])
-
- evaluator = BasicEvaluator(_model_name)
- offline_metrics = evaluator.eval(question=item['user_prompt'], answer=response, groundtruth=item)
-
- total_score.append(
- (
- offline_metrics['syntax_acc'],
- offline_metrics['exact_match'],
- offline_metrics['jaccard_similarity'],
- offline_metrics['cosine_similarity'],
- offline_metrics['embeddings_similarity'],
- offline_metrics['llmjudger_rating'],
- )
- ) # drop llmjudger_answer
-
- result = {
- 'id': item['id'],
- 'sys_prompt': item['sys_prompt'],
- 'user_prompt': item['user_prompt'],
- 'groundtruth': item['response'],
- 'response': response,
- 'syntax_acc': offline_metrics['syntax_acc'],
- 'exact_match': offline_metrics['exact_match'],
- 'jaccard_similarity': offline_metrics['jaccard_similarity'],
- 'cosine_similarity': offline_metrics['cosine_similarity'],
- 'embeddings_similarity': offline_metrics['embeddings_similarity'],
- 'llmjudger_rating': offline_metrics['llmjudger_rating'],
- 'llmjudger_answer': offline_metrics['llmjudger_answer'],
- }
- print('Evaluation Result:')
- print(result)
- output_file.write(json.dumps(result))
- output_file.write('\n')
-
- avg_score = [sum(values) / len(values) for values in list(zip(*total_score))]
- avg_score_dict = {
- 'syntax_acc': avg_score[0],
- 'exact_match': avg_score[1],
- 'jaccard_similarity': avg_score[2],
- 'cosine_similarity': avg_score[3],
- 'embeddings_similarity': avg_score[4],
- 'llmjudger_rating': avg_score[5],
- 'final_score': sum(avg_score[:5]) / 5, # Average of the first five metrics
- }
- with open(os.path.join(output_dir, 'avg_score.json'), 'w', encoding='utf-8') as avg_score_file:
- json.dump(avg_score_dict, avg_score_file, indent=4)
- print('************ Final average score ************')
- print(avg_score_dict)
+ logger.info(f'Using model: {model}, agent: {agent}')
+ with open(file_path) as f:
+ for line in f:
+ if not line.strip():
+ continue # Skip empty lines
+
+ try:
+ item = json.loads(line)
+ except json.JSONDecodeError:
+ logger.info(f'Skipping invalid JSON line: {line}')
+ continue
+
+ deployment = item.get('docker_env', None)
+ project_path = f"./data/benchmark/{item.get('repo_name', None)}"
+ task_file = item.get('task_file', None)
+ task_id = item.get('task_id', None)
+ test_method = item.get('test_method', None)
+
+ task = get_task(task_file)
+
+ result = run_eval(
+ deployment=deployment,
+ project_path=project_path,
+ task_id=task_id,
+ task=task,
+ model=model,
+ agent_path=agent,
+ test_method=test_method,
+ save_path=save_path,
+ )
+ with open(f'{save_path}/result.jsonl', 'a+', encoding='utf-8') as fw:
+ fw.write(json.dumps(result) + '\n')
+
+ success_count = 0
+ total_count = 0
+ with open(f'{save_path}/result.jsonl', encoding='utf-8') as f:
+ for line in f:
+ result = json.loads(line.strip())
+ if result.get('status') == 'success':
+ success_count += 1
+ total_count += 1
+ logger.info(f'Test run completed: {success_count}/{total_count} tasks succeeded.')
+ summary_data = {'final_score': success_count / total_count if total_count else 0.0, 'total_tasks': total_count}
+
+ with open(os.path.join(save_path, 'avg_score.json'), 'w', encoding='utf-8') as summary_file:
+ json.dump(summary_data, summary_file, indent=4)
if __name__ == '__main__':
@@ -88,16 +72,21 @@ def main(_input_file, output_dir, _model_name, agent_name):
'-i',
'--input_file',
help='Benchmark input file',
- default='./data/benchmark/example_bench_benchmark_timestamp.jsonl',
+ default='./data/benchmark/arteval_tasks.jsonl',
)
parser.add_argument('-o', '--save_path', help='Result save path', default=None)
- # Add a parameter for agent
- parser.add_argument('-a', '--agent', help='Agent Name', default='llm')
-
+ parser.add_argument(
+ '-a',
+ '--agent',
+ help='Agent Name',
+ default='claudecode',
+ )
parser.add_argument(
'-m',
'--model_name',
help='Model Name',
+ default='claude-sonnet-4-5-20250929',
)
# Note that if your benchmark has multiple tasks, you need to add --task
# in your code to enable task selection.
@@ -106,15 +95,21 @@ def main(_input_file, output_dir, _model_name, agent_name):
args = parser.parse_args()
model_name = args.model_name
+ agent = args.agent
input_file = args.input_file
save_path = args.save_path
+ task = args.task
+
+ logger.debug(f"Benchmark path: {input_file}")
if save_path is None:
str_model_name = model_name.replace('/', '_')
timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
- save_path = os.path.join('./outputs', f'examplebench__{str_model_name}__{args.agent}__{timestamp}')
+ save_path = os.path.join('./outputs', f'env_setup_project__{str_model_name}__{args.agent}__{timestamp}')
+ if agent == 'claudecode':
+ agent = './src/agents/claudecode'
save_path = os.path.abspath(os.path.expanduser(save_path))
os.makedirs(save_path, exist_ok=True)
- main(input_file, save_path, model_name, agent_name=args.agent)
+ main(input_file, model_name, agent, save_path)
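The loop above re-reads `result.jsonl` and reports the fraction of successful tasks. A self-contained sketch of that aggregation, using hypothetical in-memory records and a guard for the empty case:

```python
import json

# Hypothetical result.jsonl contents, one JSON object per line.
lines = [
    json.dumps({'task_id': 't1', 'status': 'success'}),
    json.dumps({'task_id': 't2', 'status': 'error'}),
    json.dumps({'task_id': 't3', 'status': 'success'}),
]

success_count = 0
total_count = 0
for line in lines:
    record = json.loads(line)
    if record.get('status') == 'success':
        success_count += 1
    total_count += 1

# Guard against ZeroDivisionError when there are no result lines.
summary = {
    'final_score': success_count / total_count if total_count else 0.0,
    'total_tasks': total_count,
}
print(summary)
```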
diff --git a/benchmarks/arteval_bench/src/main_patch.py b/benchmarks/arteval_bench/src/main_patch.py
new file mode 100644
index 00000000..cc554b56
--- /dev/null
+++ b/benchmarks/arteval_bench/src/main_patch.py
@@ -0,0 +1,122 @@
+"""This script runs a benchmark for evaluating patches in a software project."""
+
+import argparse
+import json
+import os
+import sys
+from datetime import datetime
+
+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
+
+from sdk.logger import logger
+from sdk.utils import set_llm_endpoint_from_config
+
+set_llm_endpoint_from_config('env.toml')
+
+from run_eval_sweagent import run # noqa: E402
+
+
+def main(file_path, save_path):
+ """Main function for running the benchmark."""
+ image = 'xuafeng/swe-go-python:latest'
+
+ with open(file_path) as f:
+ for line in f:
+ if not line.strip():
+ continue # Skip empty lines
+
+ try:
+ task = json.loads(line)
+ except json.JSONDecodeError:
+ logger.info(f'Skipping invalid JSON line: {line}')
+ continue
+
+ task_id = task.get('task_id')
+ repo_path = task.get('repo_name')
+ problem_path = f'./data/benchmark/problems/{task_id}.md'
+ test_method = task.get('test_method')
+
+ run(task_id, repo_path, problem_path, test_method, image, save_path)
+
+ success_count = 0
+ total_count = 0
+ with open(f'{save_path}/result.jsonl', encoding='utf-8') as f:
+ for line in f:
+ result = json.loads(line.strip())
+ if result.get('status') == 'success':
+ success_count += 1
+ total_count += 1
+ logger.info(f'Test run completed: {success_count}/{total_count} tasks succeeded.')
+ summary_data = {'final_score': success_count / total_count if total_count else 0.0, 'total_tasks': total_count}
+
+ with open(os.path.join(save_path, 'avg_score.json'), 'w', encoding='utf-8') as summary_file:
+ json.dump(summary_data, summary_file, indent=4)
+
+
+def test_run():
+ """Test function to run the benchmark with a sample task."""
+ run(
+ task_id='test_1',
+ repo_path='projects/test-repo',
+ problem_path='./data/benchmark/problems/test-repo-problems/1.md',
+ test_method='pip install -e . && pytest tests/test_tribonaccy.py',
+ image='xuafeng/swe-go-python:latest',
+ save_path='./outputs/test_run',
+ )
+
+ success_count = 0
+ total_count = 0
+ with open('./outputs/test_run/result.jsonl', encoding='utf-8') as f:
+ for line in f:
+ result = json.loads(line.strip())
+ if result.get('status') == 'success':
+ success_count += 1
+ total_count += 1
+ logger.info(f'Test run completed: {success_count}/{total_count} tasks succeeded.')
+ summary_data = {'score': success_count / total_count, 'total_tasks': total_count}
+
+ with open('./outputs/test_run/avg_score.json', 'w', encoding='utf-8') as summary_file:
+ json.dump(summary_data, summary_file, indent=4)
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='example benchmark')
+ parser.add_argument(
+ '-i',
+ '--input_file',
+ help='Benchmark input file',
+ default='./data/benchmark/system_lab_tasks.jsonl',
+ )
+ parser.add_argument('-o', '--save_path', help='Result save path', default=None)
+ parser.add_argument('-a', '--agent', help='Agent Name', default='sweagent')
+ parser.add_argument(
+ '-m',
+ '--model_name',
+ help='Model Name',
+ default='gpt-4o',
+ )
+ # Note that if your benchmark has multiple tasks, you need to add --task
+ # in your code to enable task selection.
+ parser.add_argument('-t', '--task', help='specify task in scenarios', default=None)
+
+ args = parser.parse_args()
+
+ model_name = args.model_name
+ input_file = args.input_file
+ save_path = args.save_path
+ task = args.task
+ if task == 'test':
+ logger.info('Running test benchmark...')
+ test_run()
+ else:
+ if save_path is None:
+ str_model_name = model_name.replace('/', '_')
+ timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
+ save_path = os.path.join('./outputs', f'systemcourseproject__{str_model_name}__{args.agent}__{timestamp}')
+
+ save_path = os.path.abspath(os.path.expanduser(save_path))
+ os.makedirs(save_path, exist_ok=True)
+
+ main(input_file, save_path)
diff --git a/benchmarks/arteval_bench/src/patch_evaluator.py b/benchmarks/arteval_bench/src/patch_evaluator.py
new file mode 100644
index 00000000..734a56df
--- /dev/null
+++ b/benchmarks/arteval_bench/src/patch_evaluator.py
@@ -0,0 +1,133 @@
+"""Patch evaluator for running tests in a deployment."""
+
+import asyncio
+import json
+import os
+
+from swerex.deployment.docker import DockerDeployment
+from swerex.runtime.abstract import BashAction, Command, CreateBashSessionRequest, UploadRequest
+
+from sdk.logger import logger
+
+
+async def run_some_stuff(task_id, project_path, patch, test_method, deployment):
+ """Spoiler: This function will work with any deployment."""
+ await deployment.start()
+ runtime = deployment.runtime
+
+ # Issue a few one-off commands, similar to `subprocess.run()`
+ logger.info(await runtime.execute(Command(command=['echo', 'Hello, world!'])))
+
+ # Create a bash session
+ await runtime.create_session(CreateBashSessionRequest())
+
+ # Run a command in the session
+ # The difference to the one-off commands is that environment state persists!
+ logger.info(await runtime.run_in_session(BashAction(command="export MYVAR='test'")))
+ logger.info(await runtime.run_in_session(BashAction(command='echo $MYVAR')))
+
+ logger.info(
+ await runtime.upload(
+ UploadRequest(
+ source_path='./data/benchmark/projects',
+ target_path='/projects',
+ )
+ )
+ )
+ logger.info(
+ await runtime.upload(
+ UploadRequest(
+ source_path=patch,
+ target_path='/patch.patch',
+ )
+ )
+ )
+
+ logger.info(await runtime.run_in_session(BashAction(command='export PATH=/usr/local/go/bin:${PATH}')))
+ logger.info(await runtime.run_in_session(BashAction(command='export HOME=/tmp')))
+ logger.info(await runtime.run_in_session(BashAction(command='go version')))
+ logger.info(await runtime.run_in_session(BashAction(command='pip install pytest')))
+ # logger.info(await runtime.run_in_session(BashAction(command="pytest -v")))
+
+ logger.info(await runtime.run_in_session(BashAction(command='ls /projects')))
+ logger.info(await runtime.run_in_session(BashAction(command='ls /patch.patch')))
+
+ logger.info(await runtime.run_in_session(BashAction(command='cd /' + project_path)))
+ logger.info(await runtime.run_in_session(BashAction(command='git apply /patch.patch')))
+ logger.info(await runtime.run_in_session(BashAction(command='pwd')))
+
+ try:
+ test_output = await runtime.run_in_session(BashAction(command=test_method))
+ logger.info(test_output)
+ result = {
+ 'task_id': task_id,
+ 'repo_location': project_path,
+ 'patch': patch,
+ 'test_method': test_method,
+ 'status': 'success',
+ 'output': test_output.output if hasattr(test_output, 'output') else str(test_output),
+ }
+ except Exception as e:
+ logger.info(f'Error running test method: {e}')
+ result = {
+ 'task_id': task_id,
+ 'repo_location': project_path,
+ 'patch': patch,
+ 'test_method': test_method,
+ 'status': 'error',
+ 'output': str(e),
+ }
+ finally:
+ # Always stop the deployment, even if the test command fails.
+ await deployment.stop()
+
+ return result
+
+
+def pacth_eval(task_id, project_path, patch, test_method, output_path, image):
+ """Evaluate a patch by running a test method in a deployment."""
+ # deployment = LocalDeployment()
+ deployment = DockerDeployment(image=image)
+ if not os.path.exists(patch):
+ logger.error(f'Patch file {patch} does not exist.')
+ eval_out = {
+ 'task_id': task_id,
+ 'repo_location': project_path,
+ 'patch': '',
+ 'test_method': test_method,
+ 'status': 'no_patch',
+ 'output': 'Patch file does not exist.',
+ }
+
+ else:
+ eval_out = asyncio.run(run_some_stuff(task_id, project_path, patch, test_method, deployment))
+
+ return eval_out
+
+
+if __name__ == '__main__':
+ # add arguments via argparse
+ import argparse
+
+ parser = argparse.ArgumentParser(description='Evaluate a patch by running its test method in a deployment.')
+ parser.add_argument('--task_id', type=str, required=True, help='Task ID')
+ parser.add_argument('--project_path', type=str, required=True, help='Project path')
+ parser.add_argument('--patch', type=str, required=True, help='Patch file path')
+ parser.add_argument('--test_method', type=str, required=True, help='Test method command')
+ parser.add_argument('--output_path', type=str, default='eval_results', help='Output file path')
+ parser.add_argument('--image', type=str, default='xuafeng/swe-go-python:latest', help='Docker image for the deployment')
+
+ # Parse the arguments
+ args = parser.parse_args()
+ task_id = args.task_id
+ project_path = args.project_path
+ patch = args.patch
+ test_method = args.test_method
+ output_path = args.output_path
+ image = args.image
+
+ eval_out = pacth_eval(task_id, project_path, patch, test_method, output_path, image)
+
+ with open(os.path.join(output_path, f'{task_id}_result.json'), 'w', encoding='utf-8') as fw:
+ fw.write(json.dumps(eval_out, indent=4))
+ logger.info('Evaluation completed successfully.')
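`pacth_eval` above keeps a synchronous entry point by driving the async deployment workflow with `asyncio.run`. The pattern in miniature, with a stubbed async step standing in for the real deployment calls:

```python
import asyncio

async def run_in_deployment(command: str) -> dict:
    # Stand-in for starting a deployment and running a command in it.
    await asyncio.sleep(0)  # yield control, as real I/O would
    return {'status': 'success', 'output': f'ran: {command}'}

def evaluate(command: str) -> dict:
    # Synchronous entry point: drives the async workflow to completion.
    return asyncio.run(run_in_deployment(command))

result = evaluate('pytest -q')
print(result['status'])
```

This keeps callers (such as the benchmark's main loop) free of `async`/`await` while the deployment runtime stays fully asynchronous internally.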
diff --git a/benchmarks/arteval_bench/src/run_eval_in_env.py b/benchmarks/arteval_bench/src/run_eval_in_env.py
new file mode 100644
index 00000000..2c814443
--- /dev/null
+++ b/benchmarks/arteval_bench/src/run_eval_in_env.py
@@ -0,0 +1,186 @@
+"""Patch evaluator for running tests in a deployment."""
+
+import asyncio
+import os
+import sys
+
+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
+
+from swerex.deployment.docker import DockerDeployment
+from swerex.runtime.abstract import BashAction, Command, CreateBashSessionRequest, UploadRequest
+
+from sdk.logger import logger
+
+
+def get_task(file_path):
+ """Get agent task from a file"""
+ task = (f"You are an experienced software engineer.\n"
+ + f"You are asked to follow the step-by-step instructions in README.md below to set-up,"
+ + f"install, compile, and reproduce the results of Wasabi"
+ + f"Note that you are in a docker env with root access. If sudo is needed,"
+ + f"please remove sudo command in the install file."
+ + f"Note that you can ignore branch siwitch instructions in the README as you are already"
+ + f"in the correct branch. So do not use git branch at all."
+ + f"\nBelow is the README of the artifact:\n\n")
+
+ try:
+ with open(file_path, encoding='utf-8') as f:
+ lines = f.readlines()
+ task = task + "\n".join(lines)
+ except Exception as e:
+ logger.info(f'Error extracting task from {file_path}: {e}')
+
+ return task
+
+
+def write_to_file(file_path, content):
+ """Write content to a file."""
+ with open(file_path, 'w') as f:
+ f.write(content)
+
+
+async def run_eval_in_env(deployment, project_path, task_id, task, model, agent_path, test_method, save_path):
+ """Spoiler: This function will work with any deployment."""
+ await deployment.start()
+ runtime = deployment.runtime
+
+ # Issue a few one-off commands, similar to `subprocess.run()`
+ logger.info(await runtime.execute(Command(command=['echo', 'Hello, world!'])))
+
+ # Create a bash session
+ await runtime.create_session(CreateBashSessionRequest())
+ # Run a command in the session
+ # The difference to the one-off commands is that environment state persists!
+ logger.info(await runtime.run_in_session(BashAction(command="export MYVAR='test'")))
+ logger.info(await runtime.run_in_session(BashAction(command='echo $MYVAR')))
+
+ logger.info('Uploading project files...')
+ logger.info(
+ await runtime.upload(
+ UploadRequest(
+ source_path=project_path,
+ target_path='/repo',
+ )
+ )
+ )
+ logger.info('Project files uploaded.')
+ logger.info(await runtime.run_in_session(BashAction(command='ls /repo')))
+ logger.info(await runtime.run_in_session(BashAction(command='cd /repo')))
+ logger.info(await runtime.run_in_session(BashAction(command='ls')))
+
+ logger.info('Uploading agent runner script...')
+ logger.info(
+ await runtime.upload(
+ UploadRequest(
+ source_path=agent_path,
+ target_path='/agent',
+ )
+ )
+ )
+ logger.info(await runtime.run_in_session(BashAction(command='ls /agent/runner.sh')))
+ logger.info('Agent runner script uploaded.')
+
+ # logger.info("Test Python and Go environment...")
+ # logger.info(await runtime.run_in_session(BashAction(command='export PATH=/usr/local/go/bin:${PATH}')))
+ # logger.info(await runtime.run_in_session(BashAction(command='export HOME=/tmp')))
+ # logger.info(await runtime.run_in_session(BashAction(command='go version')))
+ # logger.info(await runtime.run_in_session(BashAction(command='pip install pytest')))
+ # logger.info(await runtime.run_in_session(BashAction(command="pytest -v")))
+
+ logger.info('Setup the agent running environment...')
+ logger.info(await runtime.run_in_session(BashAction(command='chmod +x /agent/runner.sh /agent/install.sh')))
+ logger.info(await runtime.run_in_session(BashAction(command='cat /agent/runner.sh')))
+ logger.info(await runtime.run_in_session(BashAction(command='/agent/install.sh')))
+
+ logger.info('Running runner script...')
+ run_results = await runtime.run_in_session(BashAction(command='pwd && ls && ls /agent'))
+ logger.info(f'Current directory: {run_results}')
+ run_results = await runtime.run_in_session(BashAction(command=f'/agent/runner.sh "{model}" "{task}"'))
+ logger.info(f"agent's run results: {run_results}")
+ logger.info('Runner script finished.')
+
+ # logger.info('Copying outputs to save path...')
+ # a = await runtime.run_in_session(BashAction(command='cat agent_trajectory.json'))
+ # output_file = os.path.join(save_path, f'{task_id}_agent_trajectory.json')
+ # os.makedirs(os.path.dirname(output_file), exist_ok=True)
+ # write_to_file(output_file, a.output if hasattr(a, 'output') else str(a))
+ # logger.info(f'Output saved to: {output_file}')
+
+ try:
+ test_output = await runtime.run_in_session(BashAction(command=test_method))
+ logger.info(test_output)
+ result = {
+ 'task': task,
+ 'project_path': project_path,
+ 'agent_run_results': run_results.output if hasattr(run_results, 'output') else str(run_results),
+ 'test_method': test_method,
+ # The test command is expected to print a numeric score.
+ 'score': int(test_output.output.strip()) if hasattr(test_output, 'output') else 0,
+ 'status': 'success',
+ }
+ except Exception as e:
+ logger.info(f'Error running test method: {e}')
+ result = {
+ 'task': task,
+ 'project_path': project_path,
+ 'agent_run_results': run_results.output if hasattr(run_results, 'output') else str(run_results),
+ 'test_method': test_method,
+ 'score': 0,
+ 'status': f'error: {e}',
+ }
+ finally:
+ # Always stop the deployment, even if the test command fails.
+ await deployment.stop()
+
+ return result
+
+
+def run_eval(deployment, project_path, task_id, task, model, agent_path, test_method, save_path):
+ """Build the Docker deployment and drive the async evaluation to completion."""
+ deployment = (
+ DockerDeployment(image=deployment) if deployment else DockerDeployment(image='xuafeng/swe-go-python:latest')
+ )
+ return asyncio.run(
+ run_eval_in_env(deployment, project_path, task_id, task, model, agent_path, test_method, save_path)
+ )
+
+
+def test():
+ task = 'Java is not installed. Can you please set it up? Note: you are in a docker with root permission. DO NOT use sudo.'
+ project_path = '../data/benchmark/projects/test-repo'
+ test_method = 'java -version'
+ deployment = 'xuafeng/swe-go-python:latest'
+ model = 'claude-sonnet-4-5-20250929'
+ agent_path = './agents/claudecode'
+ save_path = './eval_results'
+ task_id = 'test_task_1'
+ result = run_eval(deployment, project_path, task_id, task, model, agent_path, test_method, save_path)
+ print('Test result:', result)
+
+
+# TODO: still working on adding the openhands agent
+def test1():
+ task = 'Java is not installed. Can you please set it up? Note: you are in a docker with root permission. DO NOT use sudo.'
+ project_path = '../data/benchmark/projects/test-repo'
+ test_method = 'java -version'
+ deployment = 'xuafeng/swe-go-python:latest'
+ model = 'claude-sonnet-4-5-20250929'
+ agent_path = './agents/openhand'
+ save_path = './eval_results'
+ task_id = 'test_task_1'
+ result = run_eval(deployment, project_path, task_id, task, model, agent_path, test_method, save_path)
+ print('Test result:', result)
+
+
+def test2():
+ task = "create a python file named hello.py that prints 'hello world'"
+ project_path = '../data/benchmark/projects/test-repo'
+ test_method = 'python hello.py'
+ deployment = 'xuafeng/swe-go-python:latest'
+ model = 'claude-sonnet-4-5-20250929'
+ agent_path = './agents/claudecode'
+ save_path = './eval_results'
+ task_id = 'test_task_1'
+ eval_out = run_eval(deployment, project_path, task_id, task, model, agent_path, test_method, save_path)
+ print(eval_out)
+
+
+if __name__ == '__main__':
+ test1()
diff --git a/benchmarks/arteval_bench/src/run_eval_sweagent.py b/benchmarks/arteval_bench/src/run_eval_sweagent.py
new file mode 100644
index 00000000..aa5e86a2
--- /dev/null
+++ b/benchmarks/arteval_bench/src/run_eval_sweagent.py
@@ -0,0 +1,53 @@
+"""Run SWE-agent to generate a patch for a task and evaluate it."""
+
+import json
+import os
+import subprocess
+import sys
+
+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
+
+from patch_evaluator import pacth_eval
+
+from sdk.logger import logger
+
+
+def run(task_id, repo_path, problem_path, test_method, image, save_path):
+ """Run the benchmark for a specific task."""
+ output_dir = f'{save_path}/patch/{task_id}'
+ patch_file = os.path.join(output_dir, '1c2844', '1c2844.patch')
+
+ # Use sweagent to generate a patch for the task
+ command = [
+ 'sweagent',
+ 'run',
+ '--config',
+ './src/config_aoi.yaml',
+ '--env.repo.path',
+ './data/benchmark/' + repo_path,
+ '--problem_statement.path',
+ problem_path,
+ '--output_dir',
+ output_dir,
+ '--env.deployment.image',
+ image,
+ '--env.post_startup_commands',
+ '["export PATH=/usr/local/go/bin:${PATH} && export HOME=/tmp"]',
+ ]
+
+ logger.info('Executing sweagent command...')
+ subprocess.run(command, check=True, timeout=600)
+
+ logger.info('\n\n==========================')
+ logger.info(f'Patch file expected at: {patch_file}')
+
+ # Evaluate the generated patch
+ eval_out = pacth_eval(
+ task_id=task_id,
+ project_path=repo_path,
+ patch=patch_file,
+ test_method=test_method,
+ output_path=output_dir,
+ image=image,
+ )
+ logger.info('Patch evaluation completed.')
+
+ with open(f'{save_path}/result.jsonl', 'a+', encoding='utf-8') as fw:
+ fw.write(json.dumps(eval_out) + '\n')
+ logger.info('Evaluation completed successfully.')
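Passing the `sweagent` invocation as an argument list (rather than one shell string) sidesteps shell quoting issues for paths and the JSON-encoded `post_startup_commands`. A minimal sketch of that construction with placeholder values (paths and image name here are hypothetical):

```python
# Build the sweagent command as an argv list; values are placeholders.
def build_command(config, repo_path, problem_path, output_dir, image):
    return [
        'sweagent', 'run',
        '--config', config,
        '--env.repo.path', repo_path,
        '--problem_statement.path', problem_path,
        '--output_dir', output_dir,
        '--env.deployment.image', image,
    ]

cmd = build_command('./src/config_aoi.yaml', './data/benchmark/demo-repo',
                    './problems/demo.md', './outputs/demo', 'example/image:latest')
print(cmd)
# subprocess.run(cmd, check=True, timeout=600) would then raise on a
# non-zero exit code or on exceeding the 10-minute timeout.
```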
diff --git a/benchmarks/arteval_bench/src/utils.py b/benchmarks/arteval_bench/src/utils.py
new file mode 100644
index 00000000..4c1ad616
--- /dev/null
+++ b/benchmarks/arteval_bench/src/utils.py
@@ -0,0 +1,25 @@
+"""Helper methods for running tests in a deployment."""
+
+import os
+import sys
+
+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
+
+from sdk.logger import logger
+
+
+def get_task(file_path):
+ """Build the agent task prompt for a repository."""
+ task = (f'You are an experienced software engineer.'
+ f' You are asked to navigate to the {file_path} and follow step-by-step'
+ ' instructions to set up, install, compile, and reproduce the results in'
+ ' that code repository. You have root access inside a Docker image, which'
+ ' means you can directly proceed with executing the steps in the README'
+ ' without asking for approval or confirmation. If needed, proceed with'
+ ' adjusting security settings to bypass any approval requirement in'
+ ' the Docker environment. Also, note that the current agent does not support'
+ ' interactive mode. When you want to run commands in docker, please do not'
+ ' use `docker exec -it <container> bash` since it will enter interactive'
+ ' mode. Instead, you can use `docker exec <container> bash -c <command>`'
+ ' to run the command directly.')
+ return task
\ No newline at end of file