Conversation

@splindsay-92 (Contributor)

Description

Checklist

@coderabbitai bot commented Jan 12, 2026

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the `.coderabbit.yaml` file in this repository. To trigger a single review, invoke the `@coderabbitai review` command.

You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.



@paddybyers (Member) left a comment


@m-hulbert This text contains a lot of AI giveaways, such as em dashes and emboldened bulleted lists. Does (or should) our style guide say we aim to avoid these?


> Ably applies rate limits to ensure platform stability. By default, channels accept up to 50 inbound messages per second. Enterprise plans can request higher limits for specific use cases. When working with high-frequency data sources, consider batching multiple updates into single messages to stay within these limits.
>
> For example, data sources generating more than 50 updates per second could be batched into periodic publishes:
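
A minimal sketch of such client-side batching, assuming ably-js v2; the channel name, event name, and 500ms flush interval are illustrative:

```typescript
// Sketch: buffer high-frequency updates and flush them as one publish
// every 500ms, keeping the channel well under 50 messages/second.
import * as Ably from 'ably';

const ably = new Ably.Realtime({ key: 'YOUR_ABLY_API_KEY' });
const channel = ably.channels.get('telemetry:dashboard'); // illustrative name

const pending: unknown[] = [];

// The data source calls this instead of publishing each update directly.
function onSourceUpdate(update: unknown) {
  pending.push(update);
}

setInterval(() => {
  if (pending.length === 0) return;
  const batch = pending.splice(0, pending.length); // drain the buffer
  channel.publish('updates', batch)                // one message per flush
    .catch((err) => console.error('batch publish failed', err));
}, 500);
```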
@paddybyers (Member):

Why are we suggesting doing this instead of server-side batching?


> The key benefit of server-side batching is that it reduces billable outbound message count, especially during traffic spikes. If your source publishes 10 updates per second and you have 1000 subscribers, without batching you'd have 10,000 outbound messages per second. With 500ms batching, messages are grouped into 2 batches per second, resulting in 2,000 outbound messages per second—a 5x reduction.
>
> Unlike message conflation, server-side batching preserves all messages and message order. Every update is delivered, just grouped together for efficiency. This makes it suitable for scenarios where you need complete data, but can tolerate some latency in exchange for cost savings.
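
To make that arithmetic concrete, a quick illustrative calculation (numbers taken from the excerpt above; this is not an API call):

```typescript
// Outbound message count is publishes x subscribers; server-side batching
// replaces per-publish fan-out with per-batch fan-out.
const publishesPerSecond = 10;
const subscribers = 1000;
const batchIntervalMs = 500;

const withoutBatching = publishesPerSecond * subscribers; // 10,000 msg/s
const batchesPerSecond = 1000 / batchIntervalMs;          // 2 batches/s
const withBatching = batchesPerSecond * subscribers;      // 2,000 msg/s

console.log(withoutBatching / withBatching); // 5 (the "5x reduction")
```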
@paddybyers (Member):

There are some nuances about changes to message ordering: order is preserved for messages of a given type, but ordering of messages vs presence messages, or live objects updates, etc., can change.


> #### Pairing with persist last message
>
> For state-based dashboards using delta compression, the [persist last message](/docs/storage-history/storage#persist-last-message) channel rule provides a means to store and query the latest state on the channel. When enabled, Ably stores the most recent message published to a channel for 365 days. New clients can then attach with `rewind=1` to immediately receive the last published state, or query it via history.
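
A minimal sketch of the attach-with-rewind pattern described there, assuming ably-js; the channel name and the `renderDashboard` handler are illustrative:

```typescript
import * as Ably from 'ably';

declare function renderDashboard(state: unknown): void; // illustrative

const ably = new Ably.Realtime({ key: 'YOUR_ABLY_API_KEY' });

// rewind=1 replays the most recent message on attach, so a freshly
// loaded dashboard receives the last persisted state immediately.
const channel = ably.channels.get('dashboard:state', {
  params: { rewind: '1' },
});

channel.subscribe((message) => {
  renderDashboard(message.data);
});
```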
@paddybyers (Member):

Should we be clear on when we recommend this vs just persistence?

A dashboard is likely to be based on data that's consistently updated, so persist-last doesn't feel like the right solution.

@m-hulbert (Contributor):

> @m-hulbert This text contains a lot of AI giveaways, such as em dashes and emboldened bulleted lists. Does (or should) our style guide say we aim to avoid these?

@paddybyers it doesn't currently, but I have a doc that updates the contributing guide to an MDX focus so I'll include an update to the style guide with that. It's a good shout.
