Merged
6 changes: 0 additions & 6 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -64,12 +64,6 @@ replay_pid*
# Magic for local JMC built
/vendor/jmc-libs

# CircleCI #
############
_circle_ci_cache_*
upstream.env
/.circleci/config.continue.yml

# Benchmarks #
benchmark/reports
benchmark/tracer
16 changes: 8 additions & 8 deletions .gitlab-ci.yml
@@ -324,7 +324,7 @@ test_published_artifacts:
- *cgroup_info
- source .gitlab/gitlab-utils.sh
- gitlab_section_start "collect-reports" "Collecting reports"
- .circleci/collect_reports.sh
- .gitlab/collect_reports.sh
- gitlab_section_end "collect-reports"
artifacts:
when: always
@@ -344,7 +344,7 @@ test_published_artifacts:
- *cgroup_info
- source .gitlab/gitlab-utils.sh
- gitlab_section_start "collect-reports" "Collecting reports"
- .circleci/collect_reports.sh --destination ./check_reports --move
- .gitlab/collect_reports.sh --destination ./check_reports --move
- gitlab_section_end "collect-reports"
artifacts:
when: always
@@ -404,7 +404,7 @@ muzzle:
- *cgroup_info
- source .gitlab/gitlab-utils.sh
- gitlab_section_start "collect-reports" "Collecting reports"
- .circleci/collect_reports.sh
- .gitlab/collect_reports.sh
- gitlab_section_end "collect-reports"
artifacts:
when: always
@@ -423,7 +423,7 @@ muzzle-dep-report:
- ./gradlew generateMuzzleReport muzzleInstrumentationReport $GRADLE_ARGS
after_script:
- *cgroup_info
- .circleci/collect_muzzle_deps.sh
- .gitlab/collect_muzzle_deps.sh
artifacts:
when: always
paths:
@@ -486,10 +486,10 @@ muzzle-dep-report:
- *cgroup_info
- source .gitlab/gitlab-utils.sh
- gitlab_section_start "collect-reports" "Collecting reports"
- .circleci/collect_reports.sh
- if [ "$PROFILE_TESTS" == "true" ]; then .circleci/collect_profiles.sh; fi
- .circleci/collect_results.sh
- .circleci/upload_ciapp.sh $CACHE_TYPE $testJvm
- .gitlab/collect_reports.sh
- if [ "$PROFILE_TESTS" == "true" ]; then .gitlab/collect_profiles.sh; fi
- .gitlab/collect_results.sh
- .gitlab/upload_ciapp.sh $CACHE_TYPE $testJvm
- gitlab_section_end "collect-reports"
- URL_ENCODED_JOB_NAME=$(jq -rn --arg x "$CI_JOB_NAME" '$x|@uri')
- echo -e "${TEXT_BOLD}${TEXT_YELLOW}See test results in Datadog:${TEXT_CLEAR} https://app.datadoghq.com/ci/test/runs?query=test_level%3Atest%20%40test.service%3Add-trace-java%20%40ci.pipeline.id%3A${CI_PIPELINE_ID}%20%40ci.job.name%3A%22${URL_ENCODED_JOB_NAME}%22"
1 change: 0 additions & 1 deletion .gitlab/cgroup-info.sh
@@ -80,4 +80,3 @@ elif [ -d "/sys/fs/cgroup/memory" ]; then # Assuming if memory cgroup v1 exists,
else
printf "cgroup memory paths not found. Neither cgroup v2 controller file nor cgroup v1 memory directory detected.\n"
fi

File renamed without changes.
@@ -1,7 +1,7 @@
#!/usr/bin/env bash

# Save all important profiles into (project-root)/profiles
# This folder will be saved by circleci and available after test runs.
# This folder will be saved by GitLab and available after test runs.

set -e
#Enable '**' support
@@ -1,7 +1,7 @@
#!/usr/bin/env bash

# Save all important reports into (project-root)/reports
# This folder will be saved by circleci and available after test runs.
# This folder will be saved by GitLab and available after test runs.

set -e
#Enable '**' support
@@ -1,7 +1,7 @@
#!/usr/bin/env bash

# Save all important reports and artifacts into (project-root)/results
# This folder will be saved by circleci and available after test runs.
# This folder will be saved by GitLab and available after test runs.

set -e
# Enable '**' support
File renamed without changes.
53 changes: 27 additions & 26 deletions docs/how_to_test.md
@@ -5,25 +5,25 @@
The project leverages different types of tests:

1. The most common ones are **unit tests**.
They are intended to test a single isolated feature, and rely on [JUnit 5 framework](https://junit.org/junit5/docs/current/user-guide/) or [Spock 2 framework](https://spockframework.org/spock/docs/).
JUnit framework is recommended for most unit tests for its simplicity and performance reasons.
Spock framework provides an alternative for more complex test scenarios, or tests that requires Groovy Script to access data outside their scope limitation (eg private fields).
They are intended to test a single isolated feature, and rely on the [JUnit 5 framework](https://junit.org/junit5/docs/current/user-guide/) or the [Spock 2 framework](https://spockframework.org/spock/docs/).
* The JUnit framework is recommended for most unit tests for simplicity and performance reasons.
* The Spock framework provides an alternative for more complex test scenarios, or tests that require Groovy scripts to access data outside their scope limitations (e.g. private fields).

2. A variant of unit tests are **instrumented tests**.
Their purpose is similar to the unit tests but the tested code is instrumented by the java agent (`:dd-trace-java:java-agent`) while running. They extend the Spock specification `datadog.trace.agent.test.AgentTestRunner` which allows to test produced traces and metrics.
2. A variant of unit tests is **instrumented tests**.
Their purpose is similar to unit tests, but the tested code is instrumented by the Java agent (`:dd-trace-java:java-agent`) while running. They extend the Spock specification `datadog.trace.agent.test.AgentTestRunner`, which allows testing the produced traces and metrics.

3. The third type of tests are **Muzzle checks**.
Their goal is to check the [Muzzle directives](./how_instrumentations_work.md#muzzle), making sure instrumentations are safe to load against specific library versions.
3. The third type of test is **Muzzle checks**.
Their goal is to check the [Muzzle directives](./how_instrumentations_work.md#muzzle), making sure instrumentations are safe to load against specific library versions.

3. The fourth type of tests are **integration tests**.
They test features that requires a more complex environment setup.
In order to build such enviroments, integration tests use Testcontainers to setup the services needed to run the tests.
4. The fourth type of test is **integration tests**.
They test features that require a more complex environment setup.
To build such environments, integration tests use Testcontainers to set up the services needed to run the tests.

4. The fifth type of test are **smoke tests**.
They are dedicated to test the java agent (`:dd-java-agent`) behavior against demo applications to prevent any regression. All smoke tests are located into the `:dd-smoke-tests` module.
5. The fifth type of test is **smoke tests**.
They are dedicated to testing the Java agent (`:dd-java-agent`) behavior against demo applications to prevent regressions. All smoke tests are located in the `:dd-smoke-tests` module.

5. The last type of test are **system tests**.
They are intended to test behavior consistency between all the client libraries, and relies on [their on GitHub repository](https://github.com/DataDog/system-tests).
6. The last type of test is **system tests**.
They are intended to test behavior consistency across all the client libraries, and rely on [their own GitHub repository](https://github.com/DataDog/system-tests).

> [!TIP]
> Most of the instrumented tests and integration tests are instrumentation tests.
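As an illustration of the first category, a minimal JUnit 5 unit test could look like the sketch below. The `TagNormalizer` class and its behavior are hypothetical, invented purely for this example; they are not part of dd-trace-java (JUnit 5 is assumed to be on the test classpath, as it is for the project's test modules):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical class under test, invented for this sketch.
final class TagNormalizer {
  static String normalize(String tag) {
    return tag == null ? "" : tag.trim().toLowerCase();
  }
}

// A plain JUnit 5 test: one isolated feature, no agent, no containers.
class TagNormalizerTest {
  @Test
  void normalizesCaseAndWhitespace() {
    assertEquals("http.request", TagNormalizer.normalize("  HTTP.Request "));
  }

  @Test
  void returnsEmptyStringForNull() {
    assertEquals("", TagNormalizer.normalize(null));
  }
}
```

Tests of this shape run fast and in isolation, which is why JUnit is preferred over Spock for the common case.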
@@ -40,13 +40,16 @@ This mechanism exists to make sure either java agent state or static data are re

### Flaky Tests

If a test runs unreliably, or doen't have a fully deterministic behavior, this will lead into recurrent unexpected errors in continuous integration.
If a test runs unreliably, or doesn't behave fully deterministically, it will cause recurring unexpected failures in continuous integration.
To identify such tests and keep continuous integration from failing, they are marked as _flaky_ and must be annotated with the `@Flaky` annotation.
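A sketch of how such a marker works is shown below. The `@Flaky` annotation defined here is a stand-in: the real one lives in dd-trace-java's test utilities, and its exact package and attributes are assumed, not taken from this document:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for the project's real @Flaky marker; package and attributes are assumed.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Flaky {
  // Free-form reason explaining why the test is unreliable.
  String value() default "";
}

// Marking a suite as flaky lets CI tooling discover it via reflection
// and treat its failures as non-blocking.
@Flaky("relies on wall-clock timing and occasionally misses its deadline")
class TimingSensitiveSpec {
  void timingSensitiveCheck() {
    // ... test body elided ...
  }
}
```

Because the annotation has runtime retention, CI tooling can detect it with `TimingSensitiveSpec.class.isAnnotationPresent(Flaky.class)` and route the test accordingly.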

> [!TIP]
> In case your pull request checks failed due to some unexpected flaky tests, you can retry the continous integration pilepeline on CircleCI using the `Rerun workflow from failed` button:

![Rerun workflow from failed](how_to_test/rerun-workflow-from-failed.png)
> In case your pull request checks fail due to unexpected flaky tests, you can retry the continuous
> integration pipeline on GitLab:
> * using the `Run again` button from the pipeline view:
> ![Run again from the pipeline view](how_to_test/run-again-job.png)
> * using the `Retry` button from the job view:
> ![Retry from the job view](how_to_test/retry-failed-job.png)

## Running Tests

@@ -71,25 +74,23 @@ To run tests on a different JVM than the one used for doing the build, you need

### Running System Tests

The system tests are setup to run on continous integration as pull request check.
The system tests are set up to run in continuous integration as a pull request check.

If you would like to run them locally, you would have to grab [a local copy of the system tests](https://github.com/DataDog/system-tests), and run them from there.
You can make them use your development version of `dd-trace-java` by [dropping the built artifacts to the `/binaries` folder](https://github.com/DataDog/system-tests/blob/main/docs/execute/binaries.md#java-library) of your local copy of the system tests.

If you would like to run another version of the system tests on continuous integration, or update them to the latest version, you would need to use [the update pinned system tests script](../.circleci/update_pinned_system_tests.sh) as your pull request won't use the latest `main` version from the system test repository, but a pinned version.

> [!NOTE]
> The system tests version used for continous integration is defined using `default_system_tests_commit` in [CircleCI configuration](../.circleci/config.continue.yml.j2).
In CI, system tests are run with the pipeline defined in [`DataDog/system-tests/blob/main/.github/workflows/system-tests.yml`](https://github.com/DataDog/system-tests/blob/main/.github/workflows/system-tests.yml).
Review comment from a contributor: @TonyCTHsu is working on this part this quarter :)


### The APM test agent

The APM test agent emulates the APM endpoints of the Datadog Agent.
The APM Test Agent container runs alongside Java tracer Instrumentation Tests in CI,
handling all traces during test runs and performing a number of `Trace Checks`.
Trace Check results are returned within the `Get APM Test Agent Trace Check Results` step for all instrumentation test jobs.
Check [trace invariant checks](https://github.com/DataDog/dd-apm-test-agent#trace-invariant-checks) for more informations.
Check [trace invariant checks](https://github.com/DataDog/dd-apm-test-agent#trace-invariant-checks) for more information.

The APM Test Agent also emits helpful logging, including logging received traces' headers, spans, errors encountered,
ands information on trace checks being performed.
Logs can be viewed in CircleCI within the Test-Agent container step for all instrumentation test suites, ie: `z_test_8_inst` job.
and information on trace checks being performed.

Logs can be viewed in GitLab within the Test-Agent container step for all instrumentation test suites, e.g. the `test_inst` jobs.
Read more about [the APM Test Agent](https://github.com/datadog/dd-apm-test-agent#readme).
Binary file removed docs/how_to_test/rerun-workflow-from-failed.png
Binary file added docs/how_to_test/retry-failed-job.png
Binary file added docs/how_to_test/run-again-job.png
2 changes: 1 addition & 1 deletion gradle/ci_jobs.gradle
@@ -92,7 +92,7 @@ if (rootProject.hasProperty("gitBaseRef")) {
rootProject.changedFiles = rootProject.changedFiles.findAll { !ignoredFiles.contains(it) }

final globalEffectFiles = fileTree(rootProject.projectDir) {
include '.circleci/**'
include '.gitlab/**'
include 'build.gradle'
include 'gradle/**'
}
2 changes: 1 addition & 1 deletion gradle/configure_tests.gradle
@@ -23,7 +23,7 @@ def forkedTestLimit = gradle.sharedServices.registerIfAbsent("forkedTestLimit",
maxParallelUsages = 3
}

// Force timeout after 9 minutes (CircleCI defaults will fail after 10 minutes without output)
// Force timeout after 9 minutes (the timeout is configurable per job; the default job timeout is 1 hour)
def testTimeoutDuration = Duration.of(9, ChronoUnit.MINUTES)

testing {
2 changes: 1 addition & 1 deletion gradle/spotless.gradle
@@ -75,7 +75,7 @@ spotless {

format 'misc', {
toggleOffOn()
target '.gitignore', '*.sh', 'tooling/*.sh', '.circleci/*.sh'
target '.gitignore', '*.sh', 'tooling/*.sh', '.gitlab/*.sh'
indentWithSpaces()
trimTrailingWhitespace()
endWithNewline()