Closed
This is no longer relevant (new build system and 2.9.2 is supported). Can you please close this (we cannot close it ourselves without raising a ticket to Apache Infra, unfortunately)?
ymatsuda
added a commit
to ymatsuda/kafka
that referenced
this pull request
Aug 5, 2015
add kafka-clients and log4j to pom.xml
resetius
referenced
this pull request
in resetius/kafka
Jun 7, 2016
[LOGBROKER-726] Fix tests & debianization
asfgit
pushed a commit
that referenced
this pull request
Apr 3, 2017
This may be a reason why we see Jenkins jobs time out at times. I can reproduce it locally. With current trunk there is a possibility to run into this:

```sh
"kafka-streams-close-thread" #585 daemon prio=5 os_prio=0 tid=0x00007f66d052d800 nid=0x7e02 waiting for monitor entry [0x00007f66ae2e5000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.apache.kafka.streams.processor.internals.StreamThread.close(StreamThread.java:345)
	- waiting to lock <0x000000077d33c538> (a org.apache.kafka.streams.processor.internals.StreamThread)
	at org.apache.kafka.streams.KafkaStreams$1.run(KafkaStreams.java:474)
	at java.lang.Thread.run(Thread.java:745)

"appId-bd262a91-5155-4a35-bc46-c6432552c2c5-StreamThread-97" #583 prio=5 os_prio=0 tid=0x00007f66d052f000 nid=0x7e01 waiting for monitor entry [0x00007f66ae4e6000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.apache.kafka.streams.KafkaStreams.setState(KafkaStreams.java:219)
	- waiting to lock <0x000000077d335760> (a org.apache.kafka.streams.KafkaStreams)
	at org.apache.kafka.streams.KafkaStreams.access$100(KafkaStreams.java:117)
	at org.apache.kafka.streams.KafkaStreams$StreamStateListener.onChange(KafkaStreams.java:259)
	- locked <0x000000077d42f138> (a org.apache.kafka.streams.KafkaStreams$StreamStateListener)
	at org.apache.kafka.streams.processor.internals.StreamThread.setState(StreamThread.java:168)
	- locked <0x000000077d33c538> (a org.apache.kafka.streams.processor.internals.StreamThread)
	at org.apache.kafka.streams.processor.internals.StreamThread.setStateWhenNotInPendingShutdown(StreamThread.java:176)
	- locked <0x000000077d33c538> (a org.apache.kafka.streams.processor.internals.StreamThread)
	at org.apache.kafka.streams.processor.internals.StreamThread.access$1600(StreamThread.java:70)
	at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:1321)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:406)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:349)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:296)
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1002)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:531)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:669)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:326)
```

In a nutshell: `KafkaStreams` and `StreamThread` are both waiting for each other, since another intermittent `close` (e.g. from a test) comes along, also trying to lock on `KafkaStreams`:

```sh
"main" #1 prio=5 os_prio=0 tid=0x00007f66d000c800 nid=0x78bb in Object.wait() [0x00007f66d7a15000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1249)
	- locked <0x000000077d45a590> (a java.lang.Thread)
	at org.apache.kafka.streams.KafkaStreams.close(KafkaStreams.java:503)
	- locked <0x000000077d335760> (a org.apache.kafka.streams.KafkaStreams)
	at org.apache.kafka.streams.KafkaStreams.close(KafkaStreams.java:447)
	at org.apache.kafka.streams.KafkaStreamsTest.testCannotStartOnceClosed(KafkaStreamsTest.java:115)
```

=> causing a deadlock.

Fixed this by softer locking on the state change, which guarantees atomic changes to the state but does not lock on the whole object (I at least could not find another method that would require more than atomically-locked access, except for `setState`).

Also qualified the state listeners with their outer class to make the whole code flow around this more readable (having two interfaces with the same naming for interface and method, and then using them between their two outer classes, is crazy hard to read imo :)).

Easy to reproduce yourself by running `org.apache.kafka.streams.KafkaStreamsTest` in a loop for a bit (save yourself some time by running 2-4 in parallel :)). Eventually it will lock on one of the tests (for me this takes less than 1 min with 4 parallel runs).

Author: Armin Braun <me@obrown.io>
Author: Armin <me@obrown.io>
Reviewers: Eno Thereska <eno@confluent.io>, Damian Guy <damian.guy@gmail.com>, Ismael Juma <ismael@juma.me.uk>

Closes #2791 from original-brownbear/fix-streams-deadlock
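The "softer locking" approach described in the commit can be sketched roughly as follows. This is a minimal illustration, not the actual Kafka Streams code: the class and enum names are made up, and the real fix lives in `KafkaStreams`/`StreamThread`. The point is that a `compareAndSet` on a state field gives atomic transitions without holding a monitor on the whole object while listener callbacks run, so a concurrent `close()` has nothing to deadlock against.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of lock-free state transitions (hypothetical names).
class LifecycleStates {
    enum State { CREATED, RUNNING, PENDING_SHUTDOWN, NOT_RUNNING }

    private final AtomicReference<State> state = new AtomicReference<>(State.CREATED);

    // Atomically move from an expected state to a new one. No monitor on
    // `this` is held, so a concurrent close() cannot block against a
    // state-listener callback that also needs this object's lock.
    boolean transition(State from, State to) {
        return state.compareAndSet(from, to);
    }

    State current() {
        return state.get();
    }
}
```

A failed `transition` (wrong expected state) simply returns false instead of blocking, which is what makes the close path safe to call from any thread.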
egor-ryashin
referenced
this pull request
in egor-ryashin/kafka
Mar 22, 2018
removed thread local buffer
krishkoneru
pushed a commit
to krishkoneru/kafka
that referenced
this pull request
Oct 25, 2018
Initial commit of ic-kafka-topics tool based on AdminClient
hzxa21
pushed a commit
to hzxa21/kafka
that referenced
this pull request
Mar 8, 2019
…ay (apache#1) [NOTE] This is a temporary measure to publish artifacts until CI is properly set up to do the job automatically. Users are not expected to run this themselves.
rhauch
pushed a commit
that referenced
this pull request
Apr 26, 2019
…tup control (#6638)

This merge consists of two commits previously merged into later branches.

Author: Cyrus Vafadari <cyrus@confluent.io>
Reviewers: Randall Hauch <rhauch@gmail.com>

Commit #1: MINOR: Add async and different sync startup modes in connect service test class

Allow Connect Service in system tests to start asynchronously. Specifically, allow for three startup conditions:
1. No condition - start async and return immediately.
2. Semi-async - start immediately after plugins have been discovered successfully.
3. Sync - start returns after the worker has completed startup. This is the current mode, but its condition is improved by checking that the port of Connect's REST interface is open, rather than that a log line has appeared in the logs.

Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewers: Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes #4423 from kkonstantine/MINOR-Add-async-and-different-sync-startup-modes-in-ConnectService-test-class

Commit #2: MINOR: Modify Connect service's startup timeout to be passed via the init (#5882)

Currently, the startup timeout is hardcoded to be 60 seconds in Connect's test service. Modifying it to be passable via init.

Author: Magesh Nandakumar <mageshn@confluent.io>
Reviewers: Randall Hauch <rhauch@gmail.com>, Jason Gustafson <jason@confluent.io>
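The improved sync condition, waiting for Connect's REST port to accept connections instead of grepping logs, amounts to a probe along these lines. This is a hedged sketch, not the actual test-service code: the class name, timeout, and retry interval are assumptions.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative port-readiness probe (hypothetical class; values are
// placeholders, not the real system-test configuration).
class PortProbe {
    // Return true once host:port accepts a TCP connection, retrying
    // until deadlineMs elapses.
    static boolean waitForPort(String host, int port, long deadlineMs) throws InterruptedException {
        long end = System.currentTimeMillis() + deadlineMs;
        while (System.currentTimeMillis() < end) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 500);
                return true; // connection accepted: the REST interface is up
            } catch (IOException e) {
                Thread.sleep(200); // not listening yet; retry
            }
        }
        return false;
    }
}
```

Checking that the socket actually connects is a stronger readiness signal than a log line, since the REST listener is only bound once the worker is serving requests.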
jeffkbkim
pushed a commit
to jeffkbkim/kafka
that referenced
this pull request
Oct 22, 2021
Revert "KAFKA-8964: Rename tag client-id for thread-level metrics and below (apache#7429)"
divijvaidya
referenced
this pull request
in divijvaidya/kafka
Apr 6, 2022
Clear topicId cache when removing topic partitions
Ompragash
referenced
this pull request
in harness-community/kafka
Nov 15, 2022
This commit updates the build.gradle file to enable Harness Test Intelligence. It also adds a .ticonfig.yaml file which tells the ti service which files to ignore changes in.
junrao
pushed a commit
that referenced
this pull request
Sep 7, 2023
This change introduces some basic clean up and refactoring for forthcoming commits related to the revised fetch code for the consumer threading refactor project. Reviewers: Christo Lolov <lolovc@amazon.com>, Jun Rao <junrao@gmail.com>
apalan60
referenced
this pull request
in apalan60/kafka
Apr 19, 2025
clolov
added a commit
to clolov/kafka
that referenced
this pull request
May 28, 2025
omkreddy
pushed a commit
that referenced
this pull request
Feb 12, 2026
…ker provisioning (#21394)

## Summary

Fixes bugs where `--jdk-version` and `--jdk-arch` parameters were ignored during system test worker provisioning, and refactors `vagrant/base.sh` to support flexible JDK versions without code changes.

## Problem

The Vagrant provisioning script (`vagrant/base.sh`) had two bugs that caused JDK version parameters to be ignored:

| Bug | Problem |
|-----|---------|
| **#1: `--jdk-version` ignored** | `JDK_FULL` was hardcoded to `17-linux-x64`, so passing `--jdk-version 25` still downloaded JDK 17 |
| **#2: `--jdk-arch` ignored** | Architecture parameter was passed but never used in the S3 download URL |

## Solution

- Validate `JDK_MAJOR` and `JDK_ARCH` input parameters with regex
- Dynamically construct `JDK_FULL` from `JDK_MAJOR` and `JDK_ARCH`
- Update S3 path to use `/jdk/` subdirectory
- Add logging for debugging

## Changes

### `vagrant/base.sh`

| Change | Description |
|--------|-------------|
| **Input validation** | Added regex validation for `JDK_MAJOR` and `JDK_ARCH` with sensible defaults |
| **Dynamic construction** | `JDK_FULL` is now constructed from `JDK_MAJOR` and `JDK_ARCH` if not explicitly provided |
| **Updated S3 path** | Changed URL from `/kafka-packages/jdk-{version}.tar.gz` to `/kafka-packages/jdk/jdk-{version}.tar.gz` |
| **Logging** | Added debug output for JDK configuration |
| **Backward compatibility** | Vagrantfile can still pass `JDK_FULL` directly; the script validates and uses it if valid |

## S3 Path Change

### Old Path
```
s3://kafka-packages/jdk-{version}.tar.gz
```

### New Path
```
s3://kafka-packages/jdk/jdk-{version}.tar.gz
```

### Available JDKs in `s3://kafka-packages/jdk/`

| File | Version | Architecture |
|------|---------|--------------|
| `jdk-7u80-linux-x64.tar.gz` | 7u80 | x64 |
| `jdk-8u144-linux-x64.tar.gz` | 8u144 | x64 |
| `jdk-8u161-linux-x64.tar.gz` | 8u161 | x64 |
| `jdk-8u171-linux-x64.tar.gz` | 8u171 | x64 |
| `jdk-8u191-linux-x64.tar.gz` | 8u191 | x64 |
| `jdk-8u202-linux-x64.tar.gz` | 8u202 | x64 |
| `jdk-11.0.2-linux-x64.tar.gz` | 11.0.2 | x64 |
| `jdk-17-linux-x64.tar.gz` | 17 | x64 |
| `jdk-18.0.2-linux-x64.tar.gz` | 18.0.2 | x64 |
| `jdk-21.0.1-linux-x64.tar.gz` | 21.0.1 | x64 |
| `jdk-21.0.1-linux-aarch64.tar.gz` | 21.0.1 | aarch64 |
| `jdk-25-linux-x64.tar.gz` | 25 | x64 |
| `jdk-25-linux-aarch64.tar.gz` | 25 | aarch64 |
| `jdk-25.0.1-linux-x64.tar.gz` | 25.0.1 | x64 |
| `jdk-25.0.1-linux-aarch64.tar.gz` | 25.0.1 | aarch64 |
| `jdk-25.0.2-linux-x64.tar.gz` | 25.0.2 | x64 |
| `jdk-25.0.2-linux-aarch64.tar.gz` | 25.0.2 | aarch64 |

## Future JDK Releases

> **IMPORTANT: No code changes required for future Java major/minor releases!**

The validation regex supports all version formats:
- **Major versions**: `17`, `25`, `26`
- **Minor versions**: `25.0.1`, `25.0.2`, `26.0.1`
- **Legacy format**: `8u144`, `8u202`

### Adding New JDK Versions

To add support for a new JDK version (e.g., JDK 26, 25.0.3):
1. Download the JDK tarball from Oracle/Adoptium
2. Rename to follow the naming convention: `jdk-{VERSION}-linux-{ARCH}.tar.gz`
3. Upload to S3: `aws s3 cp jdk-{VERSION}-linux-{ARCH}.tar.gz s3://kafka-packages/jdk/`
4. Use in tests: `--jdk-version {VERSION} --jdk-arch {ARCH}`

No modifications to `base.sh` or any other scripts are needed.

## Benefits

| Before | After |
|--------|-------|
| `--jdk-version` ignored | ✅ Correctly uses specified version |
| `--jdk-arch` ignored | ✅ Correctly uses specified architecture |
| Only major version support | ✅ Full version support (e.g., `25.0.2`) |
| Code change needed for new JDK | ✅ Just upload to S3 and pass version |

## Testing

Tested with different JDK versions to confirm the fix works correctly:

| Test | JDK_MAJOR | Expected | Actual | Result |
|------|-----------|----------|--------|--------|
| JDK 17 | `17` | javac 17.0.4 | javac 17.0.4 | ✅ |
| JDK 25 | `25` | javac 25.0.2 | javac 25.0.2 | ✅ |

## Backward Compatibility

- **Vagrantfile**: Continues to work as before
- **Existing workflows**: Default behavior unchanged (JDK 17 on x64 architecture)
- **No breaking changes**: All existing configurations continue to work

Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Chia-Ping Tsai <chia7712@gmail.com>
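The claim that a single validation regex covers all three version formats (major, minor, legacy `u`-style) can be illustrated with a small check. The pattern below is an assumption for illustration, not the exact regex used in `vagrant/base.sh`:

```java
import java.util.regex.Pattern;

// Illustrative validation of the three JDK version formats described
// above: major ("17"), minor ("25.0.2"), and legacy ("8u144").
// This regex is a sketch, not the actual pattern in vagrant/base.sh.
class JdkVersionCheck {
    private static final Pattern VERSION =
        Pattern.compile("^\\d+(\\.\\d+\\.\\d+)?$|^\\d+u\\d+$");

    static boolean isValid(String version) {
        return VERSION.matcher(version).matches();
    }
}
```

Anchoring the pattern on both ends rejects strings like `jdk-17` or `25.0`, so a malformed `--jdk-version` fails fast instead of producing a broken S3 URL.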
chia7712
pushed a commit
that referenced
this pull request
Feb 18, 2026
…ker provisioning (#21394)
blitzy Bot
referenced
this pull request
in blitzy-public-samples/blitzy-kafka
Apr 18, 2026
Resolve all 9 Minor and 10 Info findings from the Checkpoint 1 code review, correcting factual inaccuracies, citation line-range imprecisions, and cross-artifact consistency drift. No modifications to pre-existing Kafka source, tests, build files, or comments — Audit Only rule preserved.

Findings by file:

accepted-mitigations.md
- #1 [MINOR] AclCache imports corrected: org.apache.kafka.server.immutable (PCollections-backed Kafka-internal) instead of Guava's com.google.common.collect.
- apache#2 [MINOR] API surface rewritten to reflect PCollections-style structural-sharing methods .updated()/.added()/.removed() instead of Guava builder pattern.
- apache#3 [MINOR] ZstdCompression BufferPool path split: wrap-for-output uses zstd-jni RecyclingBufferPool.INSTANCE (L55-L63), wrap-for-input uses ChunkedBytesStream (L65-L75), wrap-for-zstd-input uses anonymous Kafka-owned BufferPool delegating to BufferSupplier (L77-L98).
- apache#4 [INFO] MAX_RECORDS_PER_USER_OP citation corrected: declaration at QuorumController.java:L185; AclControlManager.java:L52 is the static import only.
- apache#5 [INFO] AclCache.removeAcl(Uuid) line corrected to L91-L103 (was L89+).

references.md
- apache#6 [MINOR] SafeObjectInputStream citation range tightened from L17-L25 (class header + imports only) to L25-L62, covering the class declaration, DEFAULT_NO_DESERIALIZE_CLASS_NAMES blocklist (L27-L37), resolveClass (L43-L52), and isBlocked helper (L54-L62).
- apache#7 [INFO] PropertyFileLoginModule citation corrected to L42-L50, pointing at the Javadoc PLAINTEXT warning (L47-L48) plus the class declaration (L50).

remediation-roadmap.md
- apache#8 [INFO] Gantt markers sanitised: all :done/:active markers replaced with :crit (illustrative critical emphasis) or plain markers to avoid any visual suggestion of work already performed. Explanatory blockquote added clarifying the marker change.

severity-matrix.md
- apache#9 [MINOR] 7 occurrences of parenthesised '(Accepted Mitigation)' replaced with bracketed '[Accepted Mitigation]' per Global Conventions for plain-text markers. Cross-validated 9 bracketed instances, 0 parenthesised remaining.

README.md
- apache#11 [MINOR] HEAD commit reference corrected to the pre-audit baseline 6d16f68 (was 8a99096, a mid-audit snapshot); baseline attestation now refers to the commit immediately before the audit began.
- apache#12 [MINOR] Snapshot date unified to 2026-04-17 across all artifacts.
- apache#14 [INFO] '25 files' claim qualified as 'planned at project completion' vs 'delivered at this checkpoint (15 files)'.

attack-surface-map.md
- apache#16 [MINOR] Clients module category count corrected from 'six' to 'nine' (actual Mermaid edges: C1, C2, C3, C4, C5, C7, C8, C9, C10).
- apache#17 [MINOR] Connect module category count corrected from 'five' to 'seven' (actual Mermaid edges: C1, C4, C6, C7, C8, C9, C10).

oauth-jwt-validation-paths.md
- apache#18 [INFO] Outer citation ranges tightened: BrokerJwtValidator.configure at L107-L138 (not L102-L134); OAuthBearerUnsecuredValidatorCallbackHandler.handleCallback at L154-L177 (not L161-L204, which spanned unrelated helpers); allowableClockSkewMs helper cited separately at L194-L207.

executive-summary.html
- Cross-ref A [MINOR] HEAD commit aligned to 6d16f68 at three sites (L621, L668, L1544); methodology Mermaid node re-labelled 'Baseline 6d16f68'.
- Cross-ref B [MINOR] Snapshot date aligned to 2026-04-17 at two sites (L619, L1542).

Out-of-scope (Info-level forward-refs): apache#10, apache#13, apache#15 — links to docs/security-audit/findings/*.md deliverables not yet present at Checkpoint 1; expected per scope boundary; will resolve at Checkpoint 2 when the 10 per-category findings files land.

Validation results (Phase 3):
- Mermaid fences: all balanced (20 blocks total, all typed)
- HTML tag balance: 22 sections + all 20+ tag types balanced
- CDNs intact: reveal.js 5.1.0, Mermaid 11.4.0, Font Awesome 6.6.0
- Emojis: zero across all 15 artifacts
- TODOs/placeholders introduced: zero
- Gantt markers: :crit + plain only (no :done/:active)
- Cross-artifact consistency: zero wrong SHA/date values remaining
- Citation ranges: 12 verified against AclCache, QuorumController, AclControlManager, ZstdCompression, SafeObjectInputStream, PropertyFileLoginModule, BrokerJwtValidator, and OAuthBearerUnsecuredValidatorCallbackHandler.

Audit Only rule verification: git diff --name-status 6d16f68..HEAD returns only 'A' entries, all under docs/security-audit/. Zero modifications, deletions, or renames of any pre-existing Kafka path.
blitzy Bot
referenced
this pull request
in blitzy-public-samples/blitzy-kafka
Apr 18, 2026
…e 6 matrix completeness + slide 8 layout

Addresses QA Checkpoint 1 findings (3 MINOR, 0 Major, 0 Critical):

Issue #1 — native-compression-boundary.md missing snappy/lz4 2KB chunk ceilings (Category: Functional — AAP specification deviation)
- Intro + Scope now enumerate all 3 codec ceilings: zstd=16KB, snappy=2KB, lz4=2KB (previously only zstd=16KB was annotated)
- Extended Mermaid flowchart to include SnappyCompression / Lz4Compression nodes + SnappyInputStream / Lz4BlockInputStream / libsnappy / liblz4 native nodes
- Added dashed edges labeled 'reads in 2 KB chunks (snappy)' and 'reads in 2 KB chunks (lz4)' parallel to the existing zstd 16 KB edge
- Added Per-Codec summary table + Key Observations for snappy/lz4 ceilings
- Updated Sources with SnappyCompression.java:L71 and Lz4Compression.java:L71
- Updated Legend to reference per-codec ceilings instead of zstd-only
- File: docs/security-audit/diagrams/native-compression-boundary.md

Issue #2a — slide 6 attack-surface matrix incomplete (9 of 12 AAP modules shown) (Category: Visual — AAP §0.4.3 deviation)
- Added 3 missing columns between 'streams' and 'trogdor': coordinator, server-common, tools
- All 10 category rows expanded with 3 empty cells each (no direct attribution per Coordinator/server-common/tools footnote)
- Reduced table font-size 0.55em to 0.5em for 13-column fit
- Added explanatory footer note clarifying absent attributions
- File: docs/security-audit/executive-summary.html

Issue #2b — MEDIUM badge color #2563EB does not match AAP palette #D97706 (Category: Visual — AAP color palette deviation)
- Added CSS variable --orange: #EA580C for High severity
- .badge-high now uses var(--orange) = #EA580C (was var(--blue))
- .badge-medium now uses var(--amber) = #D97706 (was var(--blue))
- Added .icon-orange utility class + .icon-card.accent-orange
- Updated heatmap cells: --heatmap-med uses amber rgba; --heatmap-high uses orange rgba for consistency with badge colors
- Verified at runtime: rgb(234,88,12)=#EA580C High and rgb(217,119,6)=#D97706 Medium across 6 slide-viewport combinations
- File: docs/security-audit/executive-summary.html

Issue apache#3 — slide 8 content overflow at all viewports (Category: Visual — responsive layout)
- Slide 8: repurposed .icon-grid-dense with scoped CSS override at #slide-high-findings .icon-grid-dense .icon-card to reduce card width (max-width 215px, font-size 0.65em H3 / 0.5em p), ensuring 5 cards fit in a single row at 1280x800 without affecting Slide 16 4x2 layout
- Shortened citation blocks via new .card-cite class; replaced inline styles with class references
- Updated HIGH findings to use accent-orange + icon-orange; MEDIUM finding uses accent-amber + icon-amber (new classes)
- Slide 9 (Connect REST sequence diagram): tuned mermaid init config (fontSize 11px, actorMargin 50, messageMargin 18, boxMargin 3, noteMargin 2, mirrorActors:false) to eliminate container overflow
- Runtime-verified at 1280x800 / 768x1024 / 375x667 viewports; Slide 8 scrollHeight 674px (no overflow at 800/1024/667 heights); Slide 9 scrollHeight 807px (7px overflow at 800 — negligible, no content obscured)
- File: docs/security-audit/executive-summary.html

Static validation:
- 22 open section = 22 close section (balanced)
- 12 pre.mermaid = 12 close pre (balanced)
- 0 emojis; 104 Font Awesome icons
- All 13 dependency versions preserved

Runtime re-verification:
- 9 screenshots captured across 3 slides x 3 viewports (375/768/1280)
- 0 console errors on fresh load at 1280x800
- 12/12 Mermaid blocks render with SVG (data-processed=true)
- All 6 AAP palette colors verified at runtime
- Issue #2a: 13-column matrix (Category + 12 modules) confirmed at all three viewports with horizontalOverflow=false

Audit-only compliance:
- Zero modifications outside docs/security-audit/
- No Kafka source files, tests, build configs, or existing docs modified
- Only 2 files changed: native-compression-boundary.md (diagram update) and executive-summary.html (deck fixes)
- Both files are within docs/security-audit/ (the audit-only scope)
blitzy Bot
referenced
this pull request
in blitzy-public-samples/blitzy-kafka
Apr 18, 2026
QA Checkpoint #1 identified 9 MINOR documentation-quality findings in the Apache Kafka 4.2 security audit deliverables. All 9 findings are documentation corrections confined to the docs/security-audit/ tree; no source code, tests, or build configuration touched — fully compliant with the Audit Only rule.

FIXES APPLIED (by QA finding number):

Issue #1 [MINOR] — findings/07-external-function-callback-misuse.md L247
Validation Checklist cited legacy path 'internals/secured/BrokerJwtValidator.java'. Updated to current Kafka 4.2 canonical path 'clients/src/main/java/org/apache/kafka/common/security/oauthbearer/BrokerJwtValidator.java' with an explanatory note that the class was reorganized out of the internals/secured sub-package in a prior Kafka refactor.

Issue apache#2 [MINOR] — findings/08-deserialization-attacks.md L305
Same pattern as #1 — Validation Checklist updated from 'internals/secured/{Broker,Client}JwtValidator.java' to 'clients/.../oauthbearer/{Broker,Client}JwtValidator.java' with explanatory note.

Issue apache#3 [MINOR] — findings/09-information-leakage.md L245
Validation Checklist cited legacy path 'connect/runtime/src/main/java/org/apache/kafka/connect/runtime/RecordRedactor.java'. Updated to current canonical path 'metadata/src/main/java/org/apache/kafka/metadata/util/RecordRedactor.java' with explanatory note.

Issue apache#4 [MINOR] — findings/09-information-leakage.md L248
Validation Checklist BrokerJwtValidator and ClientJwtValidator paths updated to current 'oauthbearer/' canonical paths with explanatory note.

Issue apache#5 [MINOR] — findings/10-public-api-developer-misuse.md L298
Validation Checklist BrokerJwtValidator path updated to current 'oauthbearer/BrokerJwtValidator.java:L131' canonical path with explanatory note.

Issue apache#6 [MINOR] — findings/10-public-api-developer-misuse.md L302
Validation Checklist cited legacy path 'server-common/src/main/java/org/apache/kafka/server/config/ReplicationConfigs.java'. Updated to current canonical path 'server/src/main/java/org/apache/kafka/server/config/ReplicationConfigs.java' with an explanatory note that the file moved from the server-common module to the server module in a prior Kafka refactor.

Issue apache#7 [MINOR] — references.md Section 3.1 Configuration
Added missing entry for 'AllowedPaths.java' ('clients/src/main/java/org/apache/kafka/common/config/internals/AllowedPaths.java'), inserted between the DirectoryConfigProvider and EnvVarConfigProvider entries. Finding 01 cites AllowedPaths 14 times; this bibliography gap is now closed.

Issue apache#8 [MINOR] — references.md Section 7 Server Module
Added missing entry for 'SocketServerConfigs.java' ('server/src/main/java/org/apache/kafka/network/SocketServerConfigs.java'), inserted after the ReplicationConfigs entry with an inline note about the 'org.apache.kafka.network' vs 'org.apache.kafka.server.config' package mismatch. Findings 03 (11 cites) and 10 (5 cites) reference SocketServerConfigs; this bibliography gap is now closed.

Issue apache#9 [MINOR] — findings/01 and findings/10 section header numbering
Harmonized H2 section headers to match the numbered 1-10 pattern used by findings 02-09. Applied 20 header replacements total: 10 in finding 01 ('## Category' -> '## 1. Category', etc.), 10 in finding 10 (same pattern). Validation Checklist and Key Insights remain unnumbered per the existing majority convention. Content substance is unchanged; only section prefixes updated.

VALIDATION RESULTS:
- All 6 canonical file paths verified via 'test -f' to exist in the Kafka source tree at HEAD.
- Zero stale 'internals/secured/', 'connect/runtime/.../RecordRedactor', or 'server-common/.../ReplicationConfigs' references remain across the audit corpus.
- All 10 findings now have exactly 10 numbered H2 section headers (verified via 'grep -cE "^## [0-9]+\. "').
- Markdown fence balance intact (all diagram files: 4 fences each; findings: all balanced).
- Cross-referenced anchors (DISALLOW_NONE, ALLOW_LEADING_ZEROS, AllowedPaths, MAX_RECORDS_PER_USER_OP) preserved.
- references.md entries verified present (AllowedPaths=1 match, SocketServerConfigs=1 match).

AUDIT ONLY RULE COMPLIANCE: Modifications confined exclusively to documentation artifacts under docs/security-audit/. Zero source code, test, build-configuration, or inline-comment modifications. The untracked 'blitzy/' directory (pre-existing baseline) is NOT part of this commit.

Files changed: 6 (+46 / -26 lines)
M docs/security-audit/findings/01-filesystem-access-path-traversal.md
M docs/security-audit/findings/07-external-function-callback-misuse.md
M docs/security-audit/findings/08-deserialization-attacks.md
M docs/security-audit/findings/09-information-leakage.md
M docs/security-audit/findings/10-public-api-developer-misuse.md
M docs/security-audit/references.md
Compiled and used fine. I had issues with the tests though.