
Fix: fix inconsistent chunk sizes in aiohttp streaming responses #3919

Merged
yanyongyu merged 4 commits into nonebot:master from KeepingRunning:fix_chunk_size
Mar 27, 2026

Conversation

@KeepingRunning
Contributor

@KeepingRunning KeepingRunning commented Mar 26, 2026

Environment

Python version: 3.12.12
OS: macOS 26.2 25C56 arm64

Reproduction

I cloned the repo and ran pytest.
[screenshot: failing pytest output]
A test failed, reporting that the size of a returned chunk is not always equal to chunk_size.

Root cause

The original code was:

response_headers = response.headers.copy()
async for chunk in response.content.iter_chunked(chunk_size):
    yield Response(
        response.status,
        headers=response_headers,
        content=chunk,
        request=setup,
    )

Under the hood, iter_chunked calls _read_nowait, whose docstring says "Read not more than n bytes", so a chunk may be smaller than chunk_size:

def _read_nowait(self, n: int) -> bytes:
    """Read not more than n bytes, or whole buffer if n == -1"""
    self._timer.assert_timeout()

    chunks = []
    while self._buffer:
        chunk = self._read_nowait_chunk(n)
        chunks.append(chunk)
        if n != -1:
            n -= len(chunk)
            if n == 0:
                break

    return b"".join(chunks) if chunks else b""
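To make the under-read behavior concrete, here is a minimal synchronous simulation; `read_nowait` and its deque-based buffer are hypothetical stand-ins sketching aiohttp's internal logic, not its actual API:

```python
from collections import deque

def read_nowait(buffer: deque, n: int) -> bytes:
    """Read not more than n bytes from the chunks buffered so far."""
    chunks = []
    while buffer:
        chunk = buffer[0]
        if len(chunk) > n:
            # More data buffered than requested: take a prefix only.
            chunks.append(chunk[:n])
            buffer[0] = chunk[n:]
            n = 0
        else:
            chunks.append(buffer.popleft())
            n -= len(chunk)
        if n == 0:
            break
    return b"".join(chunks)

# Only 5 bytes have arrived so far, so a 16-byte read returns just 5 bytes.
print(len(read_nowait(deque([b"hel", b"lo"]), 16)))  # 5
```

This mirrors why `iter_chunked(chunk_size)` only guarantees an upper bound: it returns whatever is currently buffered, up to n bytes, rather than waiting for a full chunk.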

Solution

The fix maintains a buffer manually: whenever the buffered data reaches the requested chunk_size, slice off exactly chunk_size bytes and yield them; once the stream ends, yield whatever remains in the buffer.

buffer = bytearray()
async for chunk in response.content.iter_chunked(chunk_size):
    if not chunk:
        continue
    buffer.extend(chunk)
    while len(buffer) >= chunk_size:
        out = bytes(buffer[:chunk_size])
        del buffer[:chunk_size]
        yield Response(
            response.status,
            headers=response_headers,
            content=out,
            request=setup,
        )
if buffer:
    yield Response(
        response.status,
        headers=response_headers,
        content=bytes(buffer),
        request=setup,
    )

I did not handle the case where the last chunk may be smaller than chunk_size; if needed, I can pad it up to chunk_size.
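For illustration, the re-buffering logic above can be factored into a small synchronous generator; `rechunk` is a hypothetical helper name, not part of this PR:

```python
from collections.abc import Iterable, Iterator

def rechunk(chunks: Iterable[bytes], chunk_size: int) -> Iterator[bytes]:
    """Re-slice arbitrarily sized chunks into fixed-size blocks.

    Every yielded block is exactly chunk_size bytes, except possibly
    the last one, which carries whatever remains in the buffer.
    """
    buffer = bytearray()
    for chunk in chunks:
        if not chunk:
            continue
        buffer.extend(chunk)
        while len(buffer) >= chunk_size:
            yield bytes(buffer[:chunk_size])
            del buffer[:chunk_size]
    if buffer:
        yield bytes(buffer)

# Variable-size input is normalized to 4-byte blocks plus a short tail.
print(list(rechunk([b"ab", b"cdefg", b"", b"hi"], 4)))
# [b'abcd', b'efgh', b'i']
```

The async version in the PR is the same algorithm, with the `for` loop replaced by `async for` over `iter_chunked` and each block wrapped in a `Response`.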

Tests

All tests now pass.
[screenshot: passing pytest output]

@yanyongyu yanyongyu added the bug Something isn't working label Mar 26, 2026
@codecov

codecov bot commented Mar 26, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.78%. Comparing base (4f1c590) to head (1df3a1b).
⚠️ Report is 1 commits behind head on master.
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3919      +/-   ##
==========================================
+ Coverage   93.77%   93.78%   +0.01%     
==========================================
  Files          48       48              
  Lines        4351     4360       +9     
==========================================
+ Hits         4080     4089       +9     
  Misses        271      271              
Flag Coverage Δ
unittests 93.78% <100.00%> (+0.01%) ⬆️


@github-actions
Contributor

🚀 Deployed to https://deploy-preview-3919--nonebot2.netlify.app

Member

NCBM commented Mar 26, 2026

Personally I think this is most likely an issue with the test code; guaranteeing only that chunks do not exceed the maximum size is widely-expected behavior.

@yanyongyu yanyongyu changed the title fix: fix inconsistent chunk sizes in aiohttp streaming responses → Fix: fix inconsistent chunk sizes in aiohttp streaming responses Mar 26, 2026
yanyongyu
yanyongyu previously approved these changes Mar 26, 2026
@KeepingRunning
Contributor Author

Should I change the test to assert less-than-or-equal instead?

Member

NCBM commented Mar 26, 2026

Probably no need, though fixing it this way works too.

@yanyongyu yanyongyu merged commit eea5257 into nonebot:master Mar 27, 2026
33 checks passed
@KeepingRunning KeepingRunning deleted the fix_chunk_size branch March 27, 2026 03:41

Labels

bug Something isn't working

Development

Successfully merging this pull request may close these issues.

3 participants