Conversation

@Renizmy Renizmy commented Nov 21, 2025

Related to: #13372


Renizmy commented Nov 21, 2025

Moved here @Megafredo

@Megafredo Megafredo assigned Megafredo and unassigned Megafredo Nov 21, 2025
@Megafredo Megafredo self-requested a review November 21, 2025 13:23
@Megafredo (Member) commented:

Hello @Renizmy, thank you for the switch!
There is just one issue with the linter. It seems to be a matter of indentation in the linter configuration.


Renizmy commented Nov 21, 2025

Fixed, sorry

codecov bot commented Nov 21, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 30.84%. Comparing base (bc3f6d9) to head (84b0e1c).
⚠️ Report is 785 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff             @@
##           master   #13261       +/-   ##
===========================================
+ Coverage   16.26%   30.84%   +14.57%     
===========================================
  Files        2846     2913       +67     
  Lines      412135   192309   -219826     
  Branches    11512    39176    +27664     
===========================================
- Hits        67035    59317     -7718     
+ Misses     345100   132992   -212108     
Flag              Coverage      Δ
opencti           30.84% <ø>    (+14.57%) ⬆️
opencti-front      2.45% <ø>    (-1.44%)  ⬇️
opencti-graphql   68.20% <ø>    (+0.96%)  ⬆️

Flags with carried forward coverage won't be shown.


@Megafredo Megafredo left a comment (Member)

Hello @Renizmy, thanks for your work!
This new method for streams that allows batch processing will make a lot of people happy!

@Gwendoline-FAVRE-FELIX Gwendoline-FAVRE-FELIX added the community label Dec 5, 2025
@helene-nguyen (Member) commented:

@Renizmy FYI, we'd like to improve and refactor the code a bit before merging! :)

@xfournet (Member) commented:

Hi @Renizmy,

Thank you for your contribution. As @helene-nguyen mentioned, we'd like the code to be refactored before merging. The main concern is that the new class (ListenStreamBatch) and method (listen_stream_batch) duplicate existing code.

Instead of creating a new class and method, we suggest implementing a message_callback wrapper that adapts the existing listen_stream function from a per-message callback to a batched callback. You should be able to reuse the code you've already introduced to create this adapter.

Then each batch-capable connector (with regard to its targeted API) could use this adapter to receive batches of messages instead of individual messages.

Usage (assuming the wrapper is named create_batch_callback and the connector's process_message becomes process_message_batch) would be something like this:

    self.helper.listen_stream(message_callback=self.process_message)

--->

    batch_callback = self.helper.create_batch_callback(
        self.process_message_batch,
        self.batch_size,
        self.batch_timeout,
        self.max_batches_per_minute,
    )
    self.helper.listen_stream(message_callback=batch_callback)
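
For illustration only, here is a minimal sketch of what such an adapter could look like (the _BatchCallback internals, the defaults, and the 1-second check interval are assumptions, not the actual pycti implementation; rate limiting and state handling are omitted):

    import threading
    import time
    from typing import Any, Callable, List, Optional


    class _BatchCallback:
        """Adapts a batched callback into a per-message callback usable by listen_stream."""

        def __init__(
            self,
            batch_callback: Callable[[List[Any]], None],
            batch_size: int,
            batch_timeout: float,
        ) -> None:
            self.batch_callback = batch_callback
            self.batch_size = batch_size
            self.batch_timeout = batch_timeout
            self._batch: List[Any] = []
            self._batch_start: Optional[float] = None
            self._lock = threading.Lock()
            self._stop_event = threading.Event()
            # Background timer flushes a partial batch once batch_timeout has elapsed
            threading.Thread(target=self._timer_loop, daemon=True).start()

        def __call__(self, msg: Any) -> None:
            # Called by listen_stream for every single message
            with self._lock:
                if not self._batch:
                    self._batch_start = time.time()
                self._batch.append(msg)
                if len(self._batch) < self.batch_size:
                    return
                batch = self._extract_batch()
            self.batch_callback(batch)  # process outside the lock

        def _extract_batch(self) -> List[Any]:
            batch, self._batch, self._batch_start = self._batch, [], None
            return batch

        def _timer_loop(self) -> None:
            while not self._stop_event.is_set():
                time.sleep(1)
                with self._lock:
                    expired = (
                        self._batch
                        and self._batch_start is not None
                        and time.time() - self._batch_start >= self.batch_timeout
                    )
                    batch = self._extract_batch() if expired else []
                if batch:
                    self.batch_callback(batch)


    def create_batch_callback(batch_callback, batch_size=100, batch_timeout=5.0):
        return _BatchCallback(batch_callback, batch_size, batch_timeout)

The connector's process_message_batch then receives a list of messages at once, while listen_stream keeps calling the wrapper one message at a time.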

Would you be open to making this change?

@xfournet xfournet self-requested a review December 15, 2025 16:16
@xfournet xfournet left a comment (Member)

Thanks for the update! I made some comments; I will resume the review of the PR after this first round of feedback has been processed.

self.batch_timestamps = None

# Timer thread for timeout-based batch processing
self._stop_event = threading.Event()

xfournet (Member):

I don't see where _stop_event is set?

"""

def timer_loop():
    while not self._stop_event.is_set():

xfournet (Member):

After we exit the while loop, we should process the remaining messages via _process_batch, otherwise they will be lost?
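
A sketch of the suggested fix (the _batch_timeout_elapsed helper and the 1-second sleep are placeholders for the existing timeout check):

    def timer_loop():
        while not self._stop_event.is_set():
            time.sleep(1)  # placeholder for the existing check interval
            if self._batch_timeout_elapsed():  # placeholder for the existing timeout check
                self._process_batch()
        # Stop requested: flush whatever is still buffered so the final
        # partial batch is not lost.
        self._process_batch()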

if last_msg_id is not None:
    state = self.helper.get_state()
    if state is not None:
        state["start_from"] = str(last_msg_id)

xfournet (Member):

You could rather rely on the last batch_data msg.id instead of managing a separate variable for that, which can be error-prone.
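
Something along these lines (sketch; batch_data is assumed to be the list of stream messages just handed to the callback):

    # Derive the resume point from the batch itself instead of a separate variable
    if batch_data:
        state = self.helper.get_state() or {}
        state["start_from"] = str(batch_data[-1].id)
        self.helper.set_state(state)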

self._start_timeout_timer()

# Heartbeat queue for rate limit waiting
self._heartbeat_queue: Optional[Queue] = None

xfournet (Member):

What is the purpose of _heartbeat_queue, since it's set but never used?

self.batch_callback(batch_data)
if last_msg_id is not None:
    state = self.helper.get_state()
    if state is not None:

xfournet (Member):

If there is no state, it will never be created?

finally:
    self._lock.acquire()

def _wait_for_rate_limit(self) -> float:

xfournet (Member):

There is no rate limiting in the non-batch mode, so why should we have it only for batch mode?

If needed, it should be separated from BatchCallbackWrapper, so we can set a rate limit for both the batch and non-batch cases. It would also simplify this class, which currently handles many concerns.
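
For instance, rate limiting could live in its own small wrapper (hypothetical sketch, not an existing helper) that composes with either a batched or a per-message callback:

    import time

    def rate_limited(callback, max_calls_per_minute):
        # Enforce a minimum interval between two invocations of the callback
        interval = 60.0 / max_calls_per_minute
        last_call = [0.0]

        def wrapper(msg):
            wait = interval - (time.time() - last_call[0])
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.time()
            return callback(msg)

        return wrapper

    # non-batch: listen_stream(message_callback=rate_limited(self.process_message, 60))
    # batch: wrap the batch callback, e.g. rate_limited(self.process_message_batch, max_batches_per_minute)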


# Reset batch state (still under _lock)
self.batch = []
self.batch_start_time = time.time()

xfournet (Member):

This will prevent batch_start_time from being initialized at the first message of the next batch, potentially leading to premature execution of the next batch. It should rather be set to None here.
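
For example:

    # Reset batch state (still under _lock); the start time will be set again
    # when the first message of the next batch arrives.
    self.batch = []
    self.batch_start_time = None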

self.batch_start_time = time.time()

# Release _lock, acquire _processing_lock for callback
self._lock.release()

xfournet (Member):

This is an unusual lock strategy, so it looks fragile with regard to future changes.
You should rather separate the 'batch extraction' from the 'batch processing' so the locks can be used in a conventional way.
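
A sketch of the conventional pattern: extract the batch while holding _lock, then process it outside of it:

    # Extraction: take ownership of the current batch under the lock
    with self._lock:
        batch_data = self.batch
        self.batch = []
        self.batch_start_time = None

    # Processing: run the (potentially slow) callback without holding _lock
    if batch_data:
        with self._processing_lock:
            self.batch_callback(batch_data)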


Renizmy commented Dec 29, 2025

Hi @xfournet,

Thanks for the review! All points addressed:

The changes to the rate limiter have led to simplifications. I haven't implemented any rate-limiting code for basic (non-batch) stream consumption (out of scope?).


Labels

community (use to identify PR from community)


Development

Successfully merging this pull request may close these issues.

Add bulk consumption helper/method for stream processing in client python

5 participants