KAFKA-2752: Follow up to fix checkstyle #492
Conversation
Sorry about that @granthenke, not sure how I missed that. LGTM.
Cherry-picking to 0.9.0 failed with some conflicts, will do a manual commit. EDIT: since 2752 is not committed to 0.9.0, we do not need to cherry-pick it either.
@ewencp not a problem at all. The builds are taking longer and longer, and on Jenkins they are piling up. I have added fail-fast behavior to the build refactoring work I am doing for KAFKA-2787. See the initial work in #477. That way waiting for the build to commit won't be as painful.
@granthenke Awesome, those will be very welcome improvements. Moving rat up to run earlier probably isn't too bad since people aren't adding new files that often. I'd be wary about making too much stuff fail fast though -- checkstyle in particular is really annoying if it runs before tests. |
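The ordering trade-off described above can be sketched in Gradle roughly as follows. This is an illustrative fragment, not the actual Kafka build script: the task names (`rat`, `test`, `checkstyleMain`) and wiring are assumptions about how such an ordering could be expressed.

```groovy
// Illustrative sketch of the ordering discussed above: run rat early
// (it is cheap and new files are rare, so it can fail fast), but let
// tests run before checkstyle so a style violation does not hide the
// test results. Task names here are assumptions, not Kafka's build.
tasks.named('check') {
    dependsOn 'rat', 'test', 'checkstyleMain'
}
tasks.named('test') {
    shouldRunAfter 'rat'        // rat fails fast before the long test run
}
tasks.named('checkstyleMain') {
    shouldRunAfter 'test'       // tests first; checkstyle reported last
}
```

`shouldRunAfter` only constrains ordering when both tasks are scheduled, which matches the intent here: neither check is skipped, they just run in a less painful order.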
@guozhangwang 2752 didn't go into 0.9.0, so it shouldn't need cherry-picking.
Author: Grant Henke <granthenke@gmail.com> Reviewers: Ewen Cheslack-Postava Closes #492 from granthenke/fix
…cs (apache#492)

* test(metadata:diskless): improve test coverage for broker fencing and unregister scenarios

  - Add _noRacks and _withRacks test variants for consistent coverage
  - Fix tests that assumed broker 0 was always the leader
  - Get actual leader from partition registration before fencing/unregistering
  - Use dynamic assertions based on actual partition state
  - Improve assertion error messages for clarity

* feat(controller:diskless): add server config for managed replicas

  Add diskless.managed.rf.enable config (default: false) to control whether diskless topics use managed replicas with RF=rack_count or legacy RF=1.

  This config only affects topic creation. When enabled, new diskless topics will be created with one replica per rack using standard KRaft placement.

  Part of Phase 1: Diskless Managed Replicas
  (See apache#478 docs/inkless/ts-unification/DISKLESS_MANAGED_RF.md)

  # Conflicts:
  #   core/src/main/scala/kafka/server/ControllerServer.scala
  #   core/src/main/scala/kafka/server/KafkaConfig.scala
  #   metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
  #   server-common/src/main/java/org/apache/kafka/server/config/ServerConfigs.java

* feat(metadata:diskless): implement managed replicas for diskless topics

  When diskless.managed.rf.enable=true, new diskless topics are created with RF=rack_count using standard KRaft replica placement instead of legacy RF=1.

  Changes:
  - Compute RF from rack cardinality via rackCardinality()
  - Use standard replicaPlacer.place() for rack-aware assignment
  - Allow manual replica assignments when managed replicas enabled
  - Add checkstyle suppression for extended createTopic method

  Phase 1 limitations:
  - Add Partitions inherits RF from existing partitions (Phase 3)
  - Transformer not updated, uses legacy routing (Phase 2)
  - Integration tests deferred to Phase 2
  (See apache#478 docs/inkless/ts-unification/DISKLESS_MANAGED_RF.md)

  # Conflicts:
  #   metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
  #   metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java
  #   metadata/src/test/java/org/apache/kafka/controller/ReplicationControlManagerTest.java

* fixup! feat(controller:diskless): add server config for managed replicas

* fixup! feat(metadata:diskless): implement managed replicas for diskless topics

(cherry picked from commit 09ba4d1b3b2c4b4b4a61592f2e9988a37a78f189)
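The "RF = rack_count" idea in the commit above can be sketched in a few lines of Java. This is a hypothetical illustration only: `rackCardinality` and the broker-to-rack map stand in for the real controller metadata, and none of these names are the actual Kafka APIs.

```java
import java.util.Map;

public class RackCardinality {
    // Hypothetical sketch: the replication factor is the number of
    // distinct racks among the registered brokers, so that each rack
    // gets exactly one replica. The Map<brokerId, rack> stands in for
    // the controller's broker registrations.
    static int rackCardinality(Map<Integer, String> brokerRacks) {
        return (int) brokerRacks.values().stream().distinct().count();
    }

    public static void main(String[] args) {
        Map<Integer, String> brokers = Map.of(
            0, "rack-a",
            1, "rack-a",
            2, "rack-b",
            3, "rack-c");
        // Four brokers across three racks => RF 3, one replica per rack.
        System.out.println(rackCardinality(brokers)); // prints 3
    }
}
```

With the RF computed this way, the commit then hands placement to the standard rack-aware replica placer rather than the legacy RF=1 path.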