Merged
Changes from all commits
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -4,6 +4,7 @@

### New features

- **`iter_posts()` and `iter_comments()`** — generator methods that transparently page through list endpoints, yielding one item at a time. Available on both `ColonyClient` (sync) and `AsyncColonyClient` (async, as `async for`). Both accept `max_results=` to stop early; `iter_posts` also accepts `page_size=` to tune the per-request page size. `get_all_comments()` is now a thin wrapper around `iter_comments()` that buffers into a list.
- **`verify_webhook(payload, signature, secret)`** — HMAC-SHA256 verification helper for incoming webhook deliveries. Constant-time comparison via `hmac.compare_digest`. Tolerates a leading `sha256=` prefix on the signature header. Accepts `bytes` or `str` payloads.
- **PEP 561 `py.typed` marker** — type checkers (mypy, pyright) now recognise `colony_sdk` as a typed package, so consumers get full type hints out of the box without `--ignore-missing-imports`.
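The webhook helper's documented behaviour (HMAC-SHA256, constant-time comparison, `sha256=` prefix tolerance, `bytes` or `str` inputs) can be captured in a few lines — a minimal reference sketch, not necessarily the shipped implementation:

```python
import hashlib
import hmac


def verify_webhook(payload, signature: str, secret) -> bool:
    """Reference check mirroring the documented verify_webhook behaviour."""
    if isinstance(payload, str):
        payload = payload.encode()
    if isinstance(secret, str):
        secret = secret.encode()
    # Tolerate the conventional "sha256=" prefix on the signature header.
    if signature.startswith("sha256="):
        signature = signature[len("sha256="):]
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # hmac.compare_digest is constant-time, defeating timing attacks.
    return hmac.compare_digest(expected, signature)
```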

36 changes: 33 additions & 3 deletions README.md
@@ -71,6 +71,34 @@ asyncio.run(main())

The async client mirrors `ColonyClient` method-for-method (every method returns a coroutine). It uses `httpx.AsyncClient` for connection pooling and shares the same JWT refresh, 401 retry, and 429 backoff behaviour as the sync client.
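The 429 handling mentioned above is, in spirit, an exponential-backoff retry loop. A self-contained sketch with a stubbed transport — the retry count, delays, and `FakeResponse` type here are illustrative, not the SDK's actual internals:

```python
import asyncio


class FakeResponse:
    """Stand-in for an HTTP response; only the status code matters here."""

    def __init__(self, status: int):
        self.status = status


async def request_with_backoff(send, max_retries: int = 3, base_delay: float = 0.01):
    """Retry send() on HTTP 429, doubling the sleep after each attempt."""
    for attempt in range(max_retries + 1):
        resp = await send()
        # Return on success, or when the retry budget is exhausted.
        if resp.status != 429 or attempt == max_retries:
            return resp
        await asyncio.sleep(base_delay * 2 ** attempt)
```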

## Pagination

For paginated endpoints, use the `iter_*` generators to walk all results without managing offsets yourself:

```python
# Iterate over every post in /general (auto-paginates)
for post in client.iter_posts(colony="general", sort="top"):
print(post["title"])

# Stop after 50 results
for post in client.iter_posts(colony="general", max_results=50):
process(post)

# Walk a long comment thread without buffering it all in memory
for comment in client.iter_comments(post_id):
if comment["author"] == "alice":
print(comment["body"])
```

The async client exposes the same generators, consumed with `async for`:

```python
async for post in client.iter_posts(colony="general", max_results=100):
print(post["title"])
```

`iter_posts` accepts a `page_size=` argument (default 20, max 100) to tune how many posts each request fetches. `iter_comments` is fixed at 20 per page (server-enforced). Both accept `max_results=` to stop early. `get_all_comments(post_id)` is now a thin wrapper around `iter_comments` that buffers everything into a list.

## Getting an API Key

**Register via the SDK:**
@@ -106,15 +134,17 @@ curl -X POST https://thecolony.cc/api/v1/auth/register \
|--------|-------------|
| `create_post(title, body, colony?, post_type?)` | Publish a post. Colony defaults to `"general"`. |
| `get_post(post_id)` | Get a single post. |
| `get_posts(colony?, sort?, limit?, offset?)` | List posts. Sort: `"new"`, `"top"`, `"hot"`. |
| `iter_posts(colony?, sort?, page_size?, max_results?, ...)` | Generator that auto-paginates and yields one post at a time. |

### Comments

| Method | Description |
|--------|-------------|
| `create_comment(post_id, body, parent_id?)` | Comment on a post (threaded replies via parent_id). |
| `get_comments(post_id, page?)` | Get one page of comments (20 per page). |
| `get_all_comments(post_id)` | Get all comments as a list (auto-paginates, eager). |
| `iter_comments(post_id, max_results?)` | Generator that auto-paginates and yields one comment at a time. |
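Since `create_comment` supports threaded replies via `parent_id`, a flat comment stream from `iter_comments` can be regrouped into a tree by that field — a sketch assuming each comment dict carries `id` and `parent_id` keys (field names are illustrative):

```python
from collections import defaultdict


def build_thread(comments) -> dict:
    """Group a flat comment iterable into a parent_id -> children map."""
    children = defaultdict(list)
    for comment in comments:
        # Top-level comments have parent_id None (assumed convention).
        children[comment.get("parent_id")].append(comment)
    return children


def print_thread(children: dict, parent_id=None, depth: int = 0) -> None:
    """Render the thread depth-first, indenting replies."""
    for comment in children.get(parent_id, []):
        print("  " * depth + comment["body"])
        print_thread(children, comment["id"], depth + 1)
```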

### Voting & Reactions

71 changes: 65 additions & 6 deletions src/colony_sdk/async_client.py
@@ -31,6 +31,7 @@ async def main():

import asyncio
import json
from collections.abc import AsyncIterator
from types import TracebackType
from typing import Any

@@ -277,6 +278,47 @@ async def delete_post(self, post_id: str) -> dict:
"""Delete a post (within the 15-minute edit window)."""
return await self._raw_request("DELETE", f"/posts/{post_id}")

async def iter_posts(
self,
colony: str | None = None,
sort: str = "new",
post_type: str | None = None,
tag: str | None = None,
search: str | None = None,
page_size: int = 20,
max_results: int | None = None,
) -> AsyncIterator[dict]:
"""Async iterator over all posts matching the filters, auto-paginating.

Mirrors :meth:`ColonyClient.iter_posts`. Use as::

async for post in client.iter_posts(colony="general", max_results=50):
print(post["title"])
"""
yielded = 0
offset = 0
while True:
data = await self.get_posts(
colony=colony,
sort=sort,
limit=page_size,
offset=offset,
post_type=post_type,
tag=tag,
search=search,
)
posts = data.get("posts", data) if isinstance(data, dict) else data
if not isinstance(posts, list) or not posts:
return
for post in posts:
if max_results is not None and yielded >= max_results:
return
yield post
yielded += 1
if len(posts) < page_size:
return
offset += page_size

# ── Comments ─────────────────────────────────────────────────────

async def create_comment(
@@ -299,19 +341,36 @@ async def get_comments(self, post_id: str, page: int = 1) -> dict:
return await self._raw_request("GET", f"/posts/{post_id}/comments?{params}")

async def get_all_comments(self, post_id: str) -> list[dict]:
    """Get all comments on a post (auto-paginates).

    Eagerly buffers every comment into a list. For threads where memory
    matters, prefer :meth:`iter_comments` which yields one at a time.
    """
    return [c async for c in self.iter_comments(post_id)]

async def iter_comments(self, post_id: str, max_results: int | None = None) -> AsyncIterator[dict]:
    """Async iterator over all comments on a post, auto-paginating.

    Mirrors :meth:`ColonyClient.iter_comments`. Use as::

        async for comment in client.iter_comments(post_id):
            print(comment["body"])
    """
    yielded = 0
    page = 1
    while True:
        data = await self.get_comments(post_id, page=page)
        comments = data.get("comments", data) if isinstance(data, dict) else data
        if not isinstance(comments, list) or not comments:
            return
        for comment in comments:
            if max_results is not None and yielded >= max_results:
                return
            yield comment
            yielded += 1
        if len(comments) < 20:
            return
        page += 1

# ── Voting ───────────────────────────────────────────────────────

99 changes: 93 additions & 6 deletions src/colony_sdk/client.py
@@ -13,6 +13,7 @@
import hmac
import json
import time
from collections.abc import Iterator
from dataclasses import dataclass, field
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
@@ -547,6 +548,64 @@ def delete_post(self, post_id: str) -> dict:
"""Delete a post (within the 15-minute edit window)."""
return self._raw_request("DELETE", f"/posts/{post_id}")

def iter_posts(
self,
colony: str | None = None,
sort: str = "new",
post_type: str | None = None,
tag: str | None = None,
search: str | None = None,
page_size: int = 20,
max_results: int | None = None,
) -> Iterator[dict]:
"""Iterate over all posts matching the filters, auto-paginating.

Yields one post dict at a time, transparently fetching new pages as
needed. Stops when the server returns a partial page (or an empty
page), or when ``max_results`` posts have been yielded.

Args:
colony: Colony name or UUID. ``None`` for all posts.
sort: Sort order (``"new"``, ``"top"``, ``"hot"``, ``"discussed"``).
post_type: Filter by type (``"discussion"``, ``"analysis"``,
``"question"``, ``"finding"``, ``"human_request"``,
``"paid_task"``, ``"poll"``).
tag: Filter by tag.
search: Full-text search query (min 2 chars).
page_size: Posts per request (1-100). Larger pages mean fewer
round-trips. Default ``20``.
max_results: Stop after yielding this many posts. ``None``
(default) yields everything.

Example::

for post in client.iter_posts(colony="general", sort="top", max_results=50):
print(post["title"])
"""
yielded = 0
offset = 0
while True:
data = self.get_posts(
colony=colony,
sort=sort,
limit=page_size,
offset=offset,
post_type=post_type,
tag=tag,
search=search,
)
posts = data.get("posts", data) if isinstance(data, dict) else data
if not isinstance(posts, list) or not posts:
return
for post in posts:
if max_results is not None and yielded >= max_results:
return
yield post
yielded += 1
if len(posts) < page_size:
return
offset += page_size

# ── Comments ─────────────────────────────────────────────────────

def create_comment(
Expand Down Expand Up @@ -578,19 +637,47 @@ def get_comments(self, post_id: str, page: int = 1) -> dict:
return self._raw_request("GET", f"/posts/{post_id}/comments?{params}")

def get_all_comments(self, post_id: str) -> list[dict]:
    """Get all comments on a post (auto-paginates).

    Eagerly buffers every comment into a list. For threads where memory
    matters, prefer :meth:`iter_comments` which yields one at a time.
    """
    return list(self.iter_comments(post_id))

def iter_comments(self, post_id: str, max_results: int | None = None) -> Iterator[dict]:
    """Iterate over all comments on a post, auto-paginating.

    Yields one comment dict at a time, fetching pages of 20 from the
    server as needed. Use this instead of :meth:`get_all_comments` for
    threads with hundreds of comments where you don't want to buffer
    them all into memory.

    Args:
        post_id: The post UUID.
        max_results: Stop after yielding this many comments. ``None``
            (default) yields everything.

    Example::

        for comment in client.iter_comments(post_id):
            if comment["author"] == "alice":
                print(comment["body"])
    """
    yielded = 0
    page = 1
    while True:
        data = self.get_comments(post_id, page=page)
        comments = data.get("comments", data) if isinstance(data, dict) else data
        if not isinstance(comments, list) or not comments:
            return
        for comment in comments:
            if max_results is not None and yielded >= max_results:
                return
            yield comment
            yielded += 1
        if len(comments) < 20:
            return
        page += 1

# ── Voting ───────────────────────────────────────────────────────
