Conversation

@plusplusjiajia
Member

Purpose

When using Paimon with a DLF (Data Lake Formation) server, the server passes default OSS endpoint values (e.g., fs.oss.endpoint=oss-cn-hangzhou-internal.aliyuncs.com) through tokens. However, these default internal endpoints are not reachable from every client environment, which causes connectivity issues. Client-provided options should take precedence over the server defaults, but the previous merge logic did not handle this correctly. This PR introduces the DLF_OSS_ENDPOINT configuration option in RESTCatalogOptions so that clients can specify their preferred OSS endpoint.
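
For illustration, a minimal sketch of the intended precedence using plain maps and the keys mentioned above ('dlf.oss-endpoint' on the client side, 'fs.oss.endpoint' inside the token); the class name and endpoint values are illustrative, not Paimon internals:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; this is not the Paimon implementation.
public class EndpointPrecedenceSketch {
    public static void main(String[] args) {
        // Default carried in the DLF token; internal endpoints like this may be
        // unreachable from environments outside Alibaba Cloud's internal network.
        Map<String, String> tokenOptions = new HashMap<>();
        tokenOptions.put("fs.oss.endpoint", "oss-cn-hangzhou-internal.aliyuncs.com");

        // Endpoint configured by the client via 'dlf.oss-endpoint' (DLF_OSS_ENDPOINT).
        String clientEndpoint = "oss-cn-hangzhou.aliyuncs.com";

        // The client-provided endpoint takes precedence over the server default.
        Map<String, String> effective = new HashMap<>(tokenOptions);
        effective.put("fs.oss.endpoint", clientEndpoint);

        System.out.println(effective.get("fs.oss.endpoint"));
    }
}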

Tests

org.apache.paimon.utils.RESTUtilTest


The review comments below refer to this diff context in RESTUtil.merge:

return builder.build();
// Handle special case: dlf.oss-endpoint should override fs.oss.endpoint
String dlfOssEndpoint = result.get(DLF_OSS_ENDPOINT.key());
Contributor

@JingsongLi Sep 10, 2025


Can you just modify RESTTokenFileIO? RESTUtil.merge should stay a very simple method that just merges the updates into the targets.

@plusplusjiajia (Member Author)


> Can you just modify RESTTokenFileIO? RESTUtil.merge should stay a very simple method that just merges the updates into the targets.
Sure, updated.
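
For context, a rough sketch of the direction suggested above: keep the generic merge trivial and apply the DLF-specific endpoint override where the token is consumed. All names here are illustrative and do not mirror the actual RESTUtil or RESTTokenFileIO code:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the suggested split; not the actual Paimon classes.
public class TokenFileIoSketch {

    // Keep the generic merge "very simple": updates just overwrite the targets.
    static Map<String, String> merge(Map<String, String> targets, Map<String, String> updates) {
        Map<String, String> merged = new HashMap<>(targets);
        merged.putAll(updates);
        return merged;
    }

    // The DLF-specific override lives with the token consumer instead: when the
    // client configured 'dlf.oss-endpoint', it replaces the token's 'fs.oss.endpoint'.
    static Map<String, String> applyClientEndpoint(
            Map<String, String> tokenOptions, String clientDlfOssEndpoint) {
        Map<String, String> options = new HashMap<>(tokenOptions);
        if (clientDlfOssEndpoint != null) {
            options.put("fs.oss.endpoint", clientDlfOssEndpoint);
        }
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> token = new HashMap<>();
        token.put("fs.oss.endpoint", "oss-cn-hangzhou-internal.aliyuncs.com");

        Map<String, String> effective =
                applyClientEndpoint(merge(token, new HashMap<>()), "oss-cn-hangzhou.aliyuncs.com");
        System.out.println(effective.get("fs.oss.endpoint"));
    }
}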

Contributor

@JingsongLi left a comment


+1

@JingsongLi merged commit 351f8b5 into apache:master Sep 11, 2025
23 checks passed
jerry-024 added a commit to jerry-024/paimon that referenced this pull request Sep 15, 2025
* upstream/master: (23 commits)
  [flink] Use Paimon format table read for flink (apache#6246)
  [core] Ensure system tables use the correct identifier for loadTableToken in RESTTokenFileIO. (apache#6247)
  [spark] Enhance v1 write merge schema test coverage (apache#6249)
  [spark] Eliminate duplicate convertLiteral invocations (apache#6250)
  [python] Fix failing to read 1000cols (apache#6244)
  [python] Expose CatalogFactory and Schema directly (apache#6243)
  [doc] Modify Python API to JVM free (apache#6242)
  [python] Fix multiple write brefore once commit  (apache#6241)
  [core] Support push down branchesTable by branchName (apache#6231)
  [cdc] Fix PostgreSQL DECIMAL type conversion issue (apache#6239)
  [arrow] Optimize Arrow string write performance (apache#6240)
  [core] Fix checkpoint recovery failure for compacted changelog files (apache#6173)
  [core] RESTCatalog: add DLF OSS endpoint support and improve configuration merge (apache#6232)
  [core] fix RESTCatalog#listViews for system database (apache#6233)
  [core] Introduce 'ignore-update-before' to ignore UD only (apache#6235)
  [python] Fix DLF partition statistical error (apache#6237)
  [python] Add _VALUE_STATS_COLS param to fix parse wrong bytes (apache#6234)
  [ci] Rename to Python Check Code Style and Test
  [python] Rename binary row to generic row
  [hotfix] Remove methods in SchemaManager for SchemasTable
  ...
zhuyufeng0809 pushed a commit to zhuyufeng0809/flink-table-store that referenced this pull request Sep 18, 2025