
Conversation

@ivandika3
Contributor

@ivandika3 ivandika3 commented Nov 7, 2025

Description of PR

JIRA: https://issues.apache.org/jira/browse/HADOOP-19217

A Hadoop FileSystem client can be aware of multiple FileSystem implementations at once (e.g. a single client that handles both the hdfs:// and ofs:// schemes).

However, the TrashPolicy is currently global: it is selected by the "fs.trash.classname" configuration and stays the same regardless of the URI scheme and FileSystem implementation. For example, HDFS defaults to TrashPolicyDefault and Ozone defaults to TrashPolicyOzone, but only one of them is ever picked, since whichever configuration is loaded last overwrites the other.

Therefore, I propose to tie the TrashPolicy implementation to each FileSystem implementation by introducing a new FileSystem#getTrashPolicy interface. TrashPolicy#getInstance can call FileSystem#getTrashPolicy to get the appropriate TrashPolicy.
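The proposed dispatch can be sketched roughly as follows. All names here (FileSystemSketch, HdfsLikeFileSystem, OzoneLikeFileSystem) are illustrative stand-ins rather than the real Hadoop classes; the point is only that each FileSystem subclass can override the hook:

```java
// Minimal stand-ins for the real Hadoop types; illustrative only.
interface TrashPolicy {}

class DefaultTrashPolicy implements TrashPolicy {}
class OzoneTrashPolicy implements TrashPolicy {}

// Stand-in for org.apache.hadoop.fs.FileSystem with the proposed hook.
abstract class FileSystemSketch {
  // Default implementation keeps today's behaviour: one shared policy.
  TrashPolicy getTrashPolicy() {
    return new DefaultTrashPolicy();
  }
}

class HdfsLikeFileSystem extends FileSystemSketch {}

// A store with incompatible trash semantics overrides the hook.
class OzoneLikeFileSystem extends FileSystemSketch {
  @Override
  TrashPolicy getTrashPolicy() {
    return new OzoneTrashPolicy();
  }
}

public class TrashDispatchSketch {
  public static void main(String[] args) {
    // Each FileSystem now yields its own policy instead of sharing
    // one global "fs.trash.classname" value.
    System.out.println(new HdfsLikeFileSystem().getTrashPolicy().getClass().getSimpleName());
    System.out.println(new OzoneLikeFileSystem().getTrashPolicy().getClass().getSimpleName());
  }
}
```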

How was this patch tested?

Unit and contract tests (HDFS and LocalFS).

Disclosure: the FileSystem.md part was initially generated by AI, but the majority of it was subsequently rewritten. The other implementations are hand-coded.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 26s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 5 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 6m 51s Maven dependency ordering for branch
+1 💚 mvninstall 17m 38s trunk passed
+1 💚 compile 10m 6s trunk passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 compile 9m 57s trunk passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
+1 💚 checkstyle 1m 41s trunk passed
+1 💚 mvnsite 1m 59s trunk passed
+1 💚 javadoc 1m 33s trunk passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 22s trunk passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
-1 ❌ spotbugs 1m 40s /branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html hadoop-common-project/hadoop-common in trunk has 448 extant spotbugs warnings.
-1 ❌ spotbugs 2m 12s /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html hadoop-hdfs-project/hadoop-hdfs in trunk has 291 extant spotbugs warnings.
+1 💚 shadedclient 18m 1s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 22s Maven dependency ordering for patch
+1 💚 mvninstall 1m 30s the patch passed
+1 💚 compile 9m 47s the patch passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 javac 9m 47s the patch passed
+1 💚 compile 9m 56s the patch passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
+1 💚 javac 9m 56s the patch passed
-1 ❌ blanks 0m 1s /blanks-eol.txt The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
-0 ⚠️ checkstyle 1m 44s /results-checkstyle-root.txt root: The patch generated 28 new + 248 unchanged - 0 fixed = 276 total (was 248)
+1 💚 mvnsite 1m 59s the patch passed
+1 💚 javadoc 1m 26s the patch passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 24s the patch passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
+1 💚 spotbugs 4m 1s the patch passed
+1 💚 shadedclient 17m 38s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 16m 29s /patch-unit-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ unit 168m 25s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
+1 💚 asflicense 0m 36s The patch does not generate ASF License warnings.
308m 15s
Reason Tests
Failed junit tests hadoop.fs.contract.localfs.TestLocalFSContractTrash
hadoop.security.ssl.TestDelegatingSSLSocketFactory
hadoop.fs.contract.hdfs.TestHDFSContractTrash
hadoop.hdfs.tools.TestDFSAdmin
Subsystem Report/Notes
Docker ClientAPI=1.51 ServerAPI=1.51 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8063/2/artifact/out/Dockerfile
GITHUB PR #8063
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 438b5dd4e148 5.15.0-156-generic #166-Ubuntu SMP Sat Aug 9 00:02:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 29c99cb
Default Java Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
Multi-JDK versions /usr/lib/jvm/java-21-openjdk-amd64:Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-17-openjdk-amd64:Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8063/2/testReport/
Max. process+thread count 3971 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8063/2/console
versions git=2.25.1 maven=3.9.11 spotbugs=4.9.7
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 25s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 5 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 7m 19s Maven dependency ordering for branch
+1 💚 mvninstall 16m 34s trunk passed
+1 💚 compile 8m 40s trunk passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 compile 8m 46s trunk passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
+1 💚 checkstyle 1m 43s trunk passed
+1 💚 mvnsite 2m 3s trunk passed
+1 💚 javadoc 1m 27s trunk passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 27s trunk passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
-1 ❌ spotbugs 1m 39s /branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html hadoop-common-project/hadoop-common in trunk has 448 extant spotbugs warnings.
-1 ❌ spotbugs 2m 0s /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html hadoop-hdfs-project/hadoop-hdfs in trunk has 291 extant spotbugs warnings.
+1 💚 shadedclient 16m 31s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 17s Maven dependency ordering for patch
+1 💚 mvninstall 1m 26s the patch passed
+1 💚 compile 8m 14s the patch passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 javac 8m 14s the patch passed
+1 💚 compile 8m 37s the patch passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
+1 💚 javac 8m 37s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 1m 34s /results-checkstyle-root.txt root: The patch generated 28 new + 248 unchanged - 0 fixed = 276 total (was 248)
+1 💚 mvnsite 2m 2s the patch passed
+1 💚 javadoc 1m 28s the patch passed with JDK Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 27s the patch passed with JDK Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
+1 💚 spotbugs 3m 51s the patch passed
+1 💚 shadedclient 16m 20s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 16m 27s /patch-unit-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ unit 174m 4s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
+1 💚 asflicense 0m 38s The patch does not generate ASF License warnings.
305m 17s
Reason Tests
Failed junit tests hadoop.security.ssl.TestDelegatingSSLSocketFactory
hadoop.hdfs.tools.TestDFSAdmin
Subsystem Report/Notes
Docker ClientAPI=1.51 ServerAPI=1.51 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8063/4/artifact/out/Dockerfile
GITHUB PR #8063
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux c4eea5429b3b 5.15.0-156-generic #166-Ubuntu SMP Sat Aug 9 00:02:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 5af2d3b
Default Java Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
Multi-JDK versions /usr/lib/jvm/java-21-openjdk-amd64:Ubuntu-21.0.7+6-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-17-openjdk-amd64:Ubuntu-17.0.15+6-Ubuntu-0ubuntu120.04
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8063/4/testReport/
Max. process+thread count 4212 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-8063/4/console
versions git=2.25.1 maven=3.9.11 spotbugs=4.9.7
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@ivandika3 ivandika3 marked this pull request as ready for review November 8, 2025 11:48
@slfan1989
Contributor

@ayushtkn @KeeProMise Could you please review this PR? Thank you very much!

@ivandika3
Contributor Author

@steveloughran Please take a look when you are available. Thanks in advance.

@ayushtkn
Member

Curious where we will use it; adding this as a FileSystem API looks like overkill to me. If it is required for some particular use case, maybe make it part of getServerDefaults()?

@ivandika3
Contributor Author

ivandika3 commented Nov 17, 2025

@ayushtkn Thanks for taking a look.

curious where we will use it

The problem we encountered is that we have a client that can access both hdfs:// and ofs://, but TrashPolicyDefault and OzoneTrashPolicy are not compatible (TrashPolicyDefault was changed in a way that causes issues when applied to Ozone). Despite setting "fs.trash.classname" in both the HDFS and Ozone configurations, only one will be picked (usually TrashPolicyDefault), which leaves users unable to move files to trash in Ozone.

Other related work, such as HADOOP-18013 and HADOOP-18893, approached this with per-scheme configuration keys (e.g. fs.s3a.trash.classname).

adding this as a FileSystem API looks like overkill to me; if it is required for some particular use case, maybe part of getServerDefaults()?

I was not aware of getServerDefaults; it seems we could expose the TrashPolicy through getServerDefaults if needed. However, I feel adding a new FileSystem API is more explicit, and it also allows us to specify the trash behavior in FileSystem.md. The default behavior is backward compatible and will be transparent to users.
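To illustrate the clash the per-scheme approach avoids: with a single global key, whichever value is loaded last in the merged client configuration wins for every scheme. The fs.ofs.trash.classname key and the Ozone class path below are hypothetical, modeled on the fs.s3a.trash.classname idea from HADOOP-18013:

```xml
<!-- Today: one global key; the last value loaded wins for ALL schemes. -->
<property>
  <name>fs.trash.classname</name>
  <value>org.apache.hadoop.fs.TrashPolicyDefault</value>
</property>

<!-- Per-scheme alternative (hypothetical key and class names): -->
<property>
  <name>fs.ofs.trash.classname</name>
  <value>org.apache.hadoop.ozone.TrashPolicyOzone</value>
</property>
```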

@ivandika3
Contributor Author

cc: @sadanand48

@steveloughran
Contributor

just noticed this. Having it per FS really matters, as it allows people to have an HDFS filesystem which uses trash and an S3 store where you want fast deletions

Contributor

@steveloughran steveloughran left a comment

sorry, I'd missed this. good work.

FWIW, two earlier iterations on the problem

Mehakmeet's PR has code you want to pull in, specifically the "EmptyTrashPolicy" which deletes files outright; my DeleteFilesTrashPolicy is similar.

TrashPolicy trash = ReflectionUtils.newInstance(trashClass, conf);
trash.initialize(conf, fs); // initialize TrashPolicy
return trash;
TrashPolicy trashPolicy = fs.getTrashPolicy(conf);
Contributor

for this (legacy) static call, pass in a path of "/" to the getTrashPolicy() call

Contributor Author

Thanks, updated.

*/
@InterfaceAudience.Public
@InterfaceStability.Unstable
public TrashPolicy getTrashPolicy(Configuration conf) {
Contributor

have it take a Path.

this'll allow viewfs to resolve through the mount points to the final value

Contributor Author

@ivandika3 ivandika3 Jan 17, 2026

Good point, updated.

However, the reason I didn't take a Path is that the FileSystem resolution logic is already done in Trash#moveToAppropriateTrash, so we might not need a Path here. This is also the reason I mentioned in filesystem.md that

FileSystem implementations with multiple child file systems (e.g. ViewFileSystem) should NOT implement this method since the Hadoop trash mechanism should resolve to the underlying filesystem before invoking getTrashPolicy.

Please let me know what you think.

@InterfaceStability.Unstable
public TrashPolicy getTrashPolicy(Configuration conf) {
Class<? extends TrashPolicy> trashClass = conf.getClass(
"fs.trash.classname", TrashPolicyDefault.class, TrashPolicy.class);
Contributor

  • make this a constant somewhere.
  • log the loaded trash policy at debug level, e.g. "default filesystem trash policy loaded policy "

Contributor Author

Thanks, updated.

trash.initialize(conf, fs); // initialize TrashPolicy
return trash;
TrashPolicy trashPolicy = fs.getTrashPolicy(conf);
trashPolicy.initialize(conf, fs); // initialize TrashPolicy
Contributor

what about requiring the initialize to be done in the fs instance, rather than here? That would support directly accessing the policy in newer code

Contributor Author

Hm, I'm not entirely sure what you meant. Do you mean:

  1. Invoke TrashPolicy#initialize in FileSystem#getTrashPolicy
  2. Introduce a new API to initialize the TrashPolicy
  3. Initialize a TrashPolicy in FileSystem#initialize

IIRC I tried to do (1), but I think it caused some regressions, and I reverted it.

// Test plugged TrashPolicy
conf.setClass("fs.trash.classname", TestTrashPolicy.class, TrashPolicy.class);
Trash trash = new Trash(conf);
assertInstanceOf(TestTrashPolicy.class, trash.getTrashPolicy());
Contributor

can you use assertJ asserts , as these we can backport to older branches without problems

Assertions.assertThat(trash.getTrashPolicy()).isInstanceOf(TestTrashPolicy.class);

Contributor Author

Updated.

if (checkpoints.size() == 4) {
// The actual contents should be smaller since the last checkpoint
// should've been deleted and Current might not have been recreated yet
assertTrue(checkpoints.size() > files.length);
Contributor

use assertj assert on size

Contributor Author

Updated.

ContractTestUtils.writeTextFile(fs, myFile, "myFileContent", false);

// Verify that we succeed in removing the file we re-created
assertTrue(trash.moveToTrash(myFile));
Contributor

how about factoring this out into an assertMovedToTrash(trash,Path) so the assertTrue and error message can be used everywhere
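The suggested helper might look something like this minimal sketch. The Predicate stand-in replaces the real org.apache.hadoop.fs.Trash and Path types, which are assumed here and not shown:

```java
import java.util.function.Predicate;

public class TrashAssertions {

  // Hypothetical helper; the real version would take Hadoop's Trash
  // and Path types rather than a Predicate and a generic path value.
  static <P> void assertMovedToTrash(Predicate<P> moveToTrash, P path) {
    if (!moveToTrash.test(path)) {
      // One shared, descriptive failure message for every call site.
      throw new AssertionError("Failed to move " + path + " to trash");
    }
  }

  public static void main(String[] args) {
    // A simulated moveToTrash that always succeeds.
    assertMovedToTrash(p -> true, "/user/alice/file.txt");
  }
}
```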

Contributor Author

Updated.

* </ol>
* </p>
*/
public abstract class AbstractContractTrashTest extends AbstractFSContractTestBase {
Contributor

there's a risk here that recycled filesystems have the older trash policy. Have setup invoke a FileSystem.closeAll()

Contributor Author

Thanks, updated.

@AfterEach
@Override
public void teardown() throws Exception {
final FileSystem fs = getFileSystem();
Contributor

add a try/catch here so if there's a failure in the test and teardown, the teardown failure doesn't hide the test failure

Contributor Author

Thanks, updated.

* The path returned is a directory


### `TrashPolicy getTrashPolicy(Configuration conf)`
Contributor

you need to define what a trash policy is/does. Maybe add a new file trashpolicy.md

trash policies are part of the public API (Hive and iceberg use it)

Contributor Author

@ivandika3 ivandika3 left a comment

Thanks @steveloughran for the review. I have addressed some of the comments (mostly the low-hanging ones). Regarding a separate document about trash and trash policy behavior, I'll work on it when I'm available.

Mehakmeet's has code you want to pull in, specifically the "EmptyTrashPolicy" which deletes files; my DeleteFilesTrashPolicy is similar.

Regarding this, I'll incorporate it into this PR later. The idea is that the S3A and ABFS getTrashPolicy will check fs.s3a.trash.classname and fs.abfs.trash.classname respectively, and both will default to EmptyTrashPolicy. Let me know if this approach is agreeable.
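The "EmptyTrashPolicy" idea can be sketched like this: moveToTrash becomes a plain delete, so nothing accumulates in a trash directory. The interface and class names below are stand-ins, not Hadoop's actual org.apache.hadoop.fs.TrashPolicy API, and local-file deletion stands in for an object-store delete:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Stand-in for the TrashPolicy contract; illustrative only.
interface TrashPolicySketch {
  boolean moveToTrash(Path path) throws IOException;
}

// Sketch of an "EmptyTrashPolicy"-style policy for object stores:
// stores like S3A/ABFS favour fast deletion over rename-to-trash,
// so "move to trash" is implemented as a direct delete.
class EmptyTrashPolicySketch implements TrashPolicySketch {
  @Override
  public boolean moveToTrash(Path path) throws IOException {
    return Files.deleteIfExists(path);
  }
}

public class EmptyTrashDemo {
  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("trash-demo", ".txt");
    boolean moved = new EmptyTrashPolicySketch().moveToTrash(tmp);
    System.out.println(moved && !Files.exists(tmp)); // prints true
  }
}
```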


*/
package org.apache.hadoop.fs.contract;

import org.apache.hadoop.conf.Configuration;
Contributor Author

Updated, thanks. It would be nice if this could be enforced by the CI.

fileIndex++;

// Move the files to trash
assertTrue(trashPolicy.moveToTrash(myFile));
Contributor Author

Updated.

* following order should not result in unexpected results such as files in trash that
* will never be deleted by trash mechanism.
* <ol>
* <li>
Contributor Author

Updated.
