
Added changes so that bin/*.sh files can work with CYGWIN under windows,... #13

Closed
aloklal99 wants to merge 1 commit into apache:0.8 from aloklal99:0.8

Conversation

@aloklal99

Background

The script files to run Kafka under Windows don't work as is: one needs to hand-tweak them, since they live under bin/windows rather than bin, and the script files under bin/windows are not a complete replica of those under bin. To be sure, this isn't a complaint; on the contrary, most projects nowadays don't bother to support running on Windows, or do so very late. But because of these limitations it may be more prudent to make the script files under bin themselves run under Windows, rather than trying to make the files under bin/windows work or to make them complete.

Change Summary

The most common Unix-like shell on Windows is bash, which is part of the Cygwin project. Out of the box the scripts don't work, mostly because of peculiarities of directory paths and classpath separators. This change set makes a focused change to a single file under bin so that all of the script files under bin work as is on the Windows platform when using Cygwin's bash.
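The kind of guard such a change involves can be sketched as below. This is an illustrative sketch, not the actual patch: the variable names and jar paths are placeholders, and the real scripts handle more than a classpath. The key idea is that under Cygwin the JVM is a native Windows program, so path lists must be converted with `cygpath` and joined with `;` instead of `:`.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a Cygwin guard in a bin/ script (paths illustrative).
CLASSPATH="/opt/kafka/libs/core.jar"

case "$(uname -s)" in
  CYGWIN*)
    # cygpath --path converts a whole ':'-separated POSIX path list
    # into a ';'-separated Windows one (e.g. /c/kafka -> C:\kafka).
    CLASSPATH=$(cygpath --windows --path "$CLASSPATH")
    SEP=';'
    ;;
  *)
    # On Linux/macOS the classpath separator stays ':'.
    SEP=':'
    ;;
esac

# Append another jar using the platform-appropriate separator.
CLASSPATH="${CLASSPATH}${SEP}/opt/kafka/libs/extra.jar"
echo "$CLASSPATH"
```

Keeping the conversion in one shared helper file is what lets every script under bin work unchanged, which matches the PR's description of a focused change to a single file.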

Motivation

Acceptance of this change would let the large body of developers who use (or have to use) Windows as their development/testing/production platform run Kafka with ease. More importantly, making the examples run smoothly on Windows with Cygwin bash would make evaluating Kafka simpler and smoother, and potentially tip an evaluation in Kafka's favor, since it would show the Kafka team's commitment to supporting deployments on Windows (albeit only under Cygwin). Further, as the number of people who use Kafka on Windows grows, the project would attract contributors who can eventually fix the script files under bin/windows themselves, so that the need to run under Cygwin would go away, too.

Testing details

The change has been tested under GNU bash, version 4.1.11(2)-release (x86_64-unknown-cygwin) running on Windows 7 Enterprise.

@junrao
Contributor

junrao commented Jan 22, 2014

Could you open an Apache Kafka jira and attach the patch there? This will take care of Apache licensing issues.

Thanks,

Jun

@aloklal99
Author

Done. I created JIRA ticket https://issues.apache.org/jira/browse/KAFKA-1230 for it.
Please advise if this is enough or whether you would like the patch
sent via some other means.
Best,


@ijuma
Member

ijuma commented Jul 20, 2015

@aloklal99, can you please close this PR then? (we can't do it ourselves without asking Apache Infra via a ticket)

@aloklal99
Author

Done.

@aloklal99 aloklal99 closed this Jul 20, 2015
ymatsuda referenced this pull request in confluentinc/kafka Aug 5, 2015
ymatsuda pushed a commit to ymatsuda/kafka that referenced this pull request Aug 27, 2015
resetius pushed a commit to resetius/kafka that referenced this pull request Jun 7, 2016
[LOGBROKER-897] copy jars before start
lentztopher pushed a commit to lentztopher/kafka that referenced this pull request Aug 28, 2018
xiowu0 pushed a commit to xiowu0/kafka that referenced this pull request Apr 19, 2019
…d clean up partitionState in PartitionStateMachine after topic deletion is done (apache#13)

Exclude topics being deleted from the offlinePartitionCount metric and clean up partitionState in PartitionStateMachine after topic deletion is done

Currently the offlinePartitionCount metric also reports the partitions of the topic
that has already been queued for deletion, which creates noise for the alerting
system, especially for the cluster that has frequent topic deletion operation. This
patch adds a mechanism to exclude partitions already been queued for deletion from
the offlinePartitionCount metric and also remove the in-memory topicsWithDeletionStarted
in TopicDeletionManager since we no longer use it to update the metric.

This patch also addresses a potential memory pressure issue of not cleaning up the in-memory
partition states in PartitionStateMachine even after the topic has already been deleted.
abhishekmendhekar pushed a commit to abhishekmendhekar/kafka that referenced this pull request Jun 12, 2019
hzxa21 added a commit to hzxa21/kafka that referenced this pull request Jul 12, 2019
xiowu0 pushed a commit to xiowu0/kafka that referenced this pull request Aug 23, 2019
efeg pushed a commit to efeg/kafka that referenced this pull request Jan 29, 2020
smccauliff pushed a commit to smccauliff/kafka that referenced this pull request Oct 8, 2020
wyuka pushed a commit to wyuka/kafka that referenced this pull request Jan 7, 2022
wyuka pushed a commit to wyuka/kafka that referenced this pull request Mar 4, 2022
wyuka pushed a commit to wyuka/kafka that referenced this pull request Mar 28, 2022
wyuka pushed a commit to wyuka/kafka that referenced this pull request Jun 16, 2022
mjsax pushed a commit to mjsax/kafka that referenced this pull request Jul 21, 2024
brandboat pushed a commit to brandboat/kafka that referenced this pull request Sep 15, 2025
Create an internal __cluster_link topic to store the source cluster metadata
a. By default it should be a "compact" topic with retention.ms/retention.bytes=-1, because we only need to store the latest record, and the key for a given cluster link should always be the same (i.e. the record key will be the cluster-link name (or UUID?)).


New Config
- cluster.link.topic.num.partitions
- cluster.link.topic.replication.factor

```
./bin/kafka-configs.sh --bootstrap-server localhost:9092   --entity-type topics   --entity-name __cluster_link   --describe  --all | grep retention
  delete.retention.ms=86400000 sensitive=false synonyms={DEFAULT_CONFIG:log.cleaner.delete.retention.ms=86400000}
  local.retention.bytes=-2 sensitive=false synonyms={DEFAULT_CONFIG:log.local.retention.bytes=-2}
  local.retention.ms=-2 sensitive=false synonyms={DEFAULT_CONFIG:log.local.retention.ms=-2}
  retention.bytes=-1 sensitive=false synonyms={DEFAULT_CONFIG:log.retention.bytes=-1}
  retention.ms=-1 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:retention.ms=-1}

```
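Under the defaults described above, creating the internal topic could look roughly like the following. This is a sketch only: the partition count and replication factor are placeholder values (the proposal is to source them from the new `cluster.link.topic.num.partitions` and `cluster.link.topic.replication.factor` configs), and it assumes a broker at localhost:9092.

```shell
# Sketch: create the internal topic with the compaction/retention settings
# described above. Requires a running broker; values are illustrative.
./bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic __cluster_link \
  --partitions 1 --replication-factor 3 \
  --config cleanup.policy=compact \
  --config retention.ms=-1 \
  --config retention.bytes=-1
```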
haianh1233 added a commit to haianh1233/kafka that referenced this pull request Apr 17, 2026
New tests:
- apache#8  Port conflict (HTTP == PLAINTEXT port) → rejected
- apache#9  HTTP port=0 (random) works
- apache#10 HTTP + HTTPS coexist on same broker
- apache#11 advertised.listeners with HTTP parsed correctly
- apache#12 HTTP without httpAcceptorFactory → IllegalStateException
- apache#13 inter.broker.listener=HTTPS also rejected (not just HTTP)
- apache#14 Custom listener name mapped to HTTP protocol (MY_REST_API:HTTP)
- apache#15 HTTPS with valid SSL config succeeds

All 15 SocketServerHttpTest pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
blitzy Bot pushed a commit to blitzy-public-samples/blitzy-kafka that referenced this pull request Apr 18, 2026
Resolve all 9 Minor and 10 Info findings from the Checkpoint 1 code review,
correcting factual inaccuracies, citation line-range imprecisions, and cross-
artifact consistency drift. No modifications to pre-existing Kafka source,
tests, build files, or comments — Audit Only rule preserved.

Findings by file:

accepted-mitigations.md
  #1 [MINOR] AclCache imports corrected: org.apache.kafka.server.immutable
              (PCollections-backed Kafka-internal) instead of Guava's
              com.google.common.collect.
  apache#2 [MINOR] API surface rewritten to reflect PCollections-style structural-
              sharing methods .updated()/.added()/.removed() instead of
              Guava builder pattern.
  apache#3 [MINOR] ZstdCompression BufferPool path split: wrap-for-output uses
              zstd-jni RecyclingBufferPool.INSTANCE (L55-L63), wrap-for-
              input uses ChunkedBytesStream (L65-L75), wrap-for-zstd-input
              uses anonymous Kafka-owned BufferPool delegating to
              BufferSupplier (L77-L98).
  apache#4 [INFO]  MAX_RECORDS_PER_USER_OP citation corrected: declaration at
              QuorumController.java:L185; AclControlManager.java:L52 is
              the static import only.
  apache#5 [INFO]  AclCache.removeAcl(Uuid) line corrected to L91-L103 (was L89+).

references.md
  apache#6 [MINOR] SafeObjectInputStream citation range tightened from L17-L25
              (class header + imports only) to L25-L62 covering the class
              declaration, DEFAULT_NO_DESERIALIZE_CLASS_NAMES blocklist
              (L27-L37), resolveClass (L43-L52), and isBlocked helper
              (L54-L62).
  apache#7 [INFO]  PropertyFileLoginModule citation corrected to L42-L50,
              pointing at the Javadoc PLAINTEXT warning (L47-L48) plus
              the class declaration (L50).

remediation-roadmap.md
  apache#8 [INFO]  Gantt markers sanitised: all :done/:active markers replaced
              with :crit (illustrative critical emphasis) or plain markers
              to avoid any visual suggestion of work already performed.
              Explanatory blockquote added clarifying the marker change.

severity-matrix.md
  apache#9 [MINOR] 7 occurrences of parenthesised '(Accepted Mitigation)'
              replaced with bracketed '[Accepted Mitigation]' per Global
              Conventions for plain-text markers. Cross-validated 9
              bracketed instances, 0 parenthesised remaining.

README.md
  apache#11 [MINOR] HEAD commit reference corrected to the pre-audit baseline
               6d16f68 (was 8a99096, a
               mid-audit snapshot); baseline attestation now refers to the
               commit immediately before the audit began.
  apache#12 [MINOR] Snapshot date unified to 2026-04-17 across all artifacts.
  apache#14 [INFO]  '25 files' claim qualified as 'planned at project completion'
               vs 'delivered at this checkpoint (15 files)'.

attack-surface-map.md
  apache#16 [MINOR] Clients module category count corrected from 'six' to 'nine'
               (actual Mermaid edges: C1, C2, C3, C4, C5, C7, C8, C9, C10).
  apache#17 [MINOR] Connect module category count corrected from 'five' to
               'seven' (actual Mermaid edges: C1, C4, C6, C7, C8, C9, C10).

oauth-jwt-validation-paths.md
  apache#18 [INFO]  Outer citation ranges tightened:
               BrokerJwtValidator.configure at L107-L138 (not L102-L134);
               OAuthBearerUnsecuredValidatorCallbackHandler.handleCallback
               at L154-L177 (not L161-L204, which spanned unrelated
               helpers); allowableClockSkewMs helper cited separately at
               L194-L207.

executive-summary.html
  Cross-ref A [MINOR] HEAD commit aligned to 6d16f68 at three sites
                       (L621, L668, L1544); methodology Mermaid node
                       re-labelled 'Baseline 6d16f68'.
  Cross-ref B [MINOR] Snapshot date aligned to 2026-04-17 at two sites
                       (L619, L1542).

Out-of-scope (Info-level forward-refs):
  apache#10, apache#13, apache#15 — Links to docs/security-audit/findings/*.md deliverables
                   not yet present at Checkpoint 1; expected per scope
                   boundary; will resolve at Checkpoint 2 when the 10
                   per-category findings files land.

Validation results (Phase 3):
  - Mermaid fences: all balanced (20 blocks total, all typed)
  - HTML tag balance: 22 sections + all 20+ tag types balanced
  - CDNs intact: reveal.js 5.1.0, Mermaid 11.4.0, Font Awesome 6.6.0
  - Emojis: zero across all 15 artifacts
  - TODOs/placeholders introduced: zero
  - Gantt markers: :crit + plain only (no :done/:active)
  - Cross-artifact consistency: zero wrong SHA/date values remaining
  - Citation ranges: 12 verified against AclCache, QuorumController,
                     AclControlManager, ZstdCompression,
                     SafeObjectInputStream, PropertyFileLoginModule,
                     BrokerJwtValidator, and
                     OAuthBearerUnsecuredValidatorCallbackHandler.

Audit Only rule verification:
  git diff --name-status 6d16f68..HEAD returns only 'A' entries,
  all under docs/security-audit/. Zero modifications, deletions, or
  renames of any pre-existing Kafka path.
