
feat: add some unit tests #68

Merged
aturret merged 7 commits into main from test-refactor
Mar 21, 2026

Conversation

@aturret (Owner) commented Mar 21, 2026

Summary by CodeRabbit

Release Notes

  • Tests

    • Established comprehensive unit test coverage across scraper modules and core utilities.
    • Added test fixtures and configuration for consistent test execution.
  • Chores

    • Introduced code coverage tracking and enforcement with Codecov integration.
    • Implemented automated pull request testing workflow with coverage reporting.

gitguardian bot commented Mar 21, 2026

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secret in your pull request
  • GitGuardian id: 29070475
  • Status: Triggered
  • Secret: Generic Password
  • Commit: 74a9a0e
  • Filename: tests/unit/test_scraper_config.py
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely, following best practices for secrets management.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
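For step 2, a common remediation for a hardcoded test password is to read it from the environment with a harmless default. A minimal sketch; the variable name and fallback value are illustrative assumptions, not the names used in tests/unit/test_scraper_config.py:

```python
# Minimal sketch: read a test credential from the environment instead of
# hardcoding it. TEST_SCRAPER_PASSWORD and the fallback are hypothetical.
import os

def get_test_password() -> str:
    # A placeholder default keeps CI green without a real secret in git.
    return os.environ.get("TEST_SCRAPER_PASSWORD", "dummy-password")
```

With this pattern, the real credential lives only in CI secrets or a local untracked .env, and the committed code never contains it.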

🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

coderabbitai bot (Contributor) commented Mar 21, 2026

Caution

Review failed

Pull request was closed or merged during review

📝 Walkthrough

Restructures testing infrastructure by establishing Codecov coverage enforcement and GitHub Actions CI workflow. Removes legacy router/integration tests while introducing comprehensive unit test suite for scraper modules. Adds pytest-cov dependency and configures test discovery paths.

Changes

Cohort / File(s) Summary
CI and Coverage Configuration
.codecov.yml, .github/workflows/pr-gate.yml
Adds Codecov configuration with global coverage enforcement (target: auto, 1% threshold) and patch coverage requirement (80%). Introduces GitHub Actions workflow for testing on pull requests to main, running pytest with coverage reporting and uploading to Codecov.
Project Configuration
pyproject.toml
Adds pytest-cov dependency and configures pytest to discover tests in the tests directory.
Deleted Router Tests
tests/routers/test_scraper.py, tests/routers/test_telegram_bot.py, tests/routers/test_twitter.py
Removes endpoint router tests that covered scraper, telegram bot, and twitter API routes including authentication and request validation.
Deleted Legacy App Tests
apps/telegram-bot/tests/conftest.py, apps/telegram-bot/tests/test_webhook.py
Removes telegram bot test fixtures and webhook endpoint tests previously used for integration testing.
Deleted Legacy Package Tests
packages/shared/tests/test_user_setting.py, tests/test_bluesky.py, tests/test_weibo.py, tests/test_zhihu_content_processing.py
Removes older integration/module-level tests for user settings, platform-specific scrapers, and content processing utilities.
Unit Test Infrastructure
tests/unit/conftest.py, tests/unit/test_telegraph.py
Adds pytest fixtures for unit tests (metadata factories, scraper manager reset, mocked network/dependencies) and comprehensive Telegraph service tests covering initialization, async posting, and image preprocessing.
Scraper Unit Tests
tests/unit/scrapers/test_scraper_abc.py, tests/unit/scrapers/test_scraper_config.py, tests/unit/scrapers/test_scraper_manager.py, tests/unit/scrapers/test_common.py
Adds tests for scraper base classes, configuration loading under multiple environment scenarios, manager initialization flow, and InfoExtractService dispatching logic.
Platform-Specific Scraper Tests
tests/unit/scrapers/test_bluesky.py, tests/unit/scrapers/test_weibo.py, tests/unit/scrapers/test_twitter.py, tests/unit/scrapers/test_threads.py, tests/unit/scrapers/test_wechat.py, tests/unit/scrapers/test_instagram.py, tests/unit/scrapers/test_reddit.py, tests/unit/scrapers/test_xiaohongshu.py
Adds comprehensive unit tests for eight platform scrapers covering dataclass initialization, URL parsing, data extraction, media handling, and content processing with mocked dependencies.
General Scraper Tests
tests/unit/scrapers/test_general_base.py, tests/unit/scrapers/test_general_scraper.py, tests/unit/scrapers/test_general_firecrawl.py, tests/unit/scrapers/test_general_zyte.py, tests/unit/scrapers/test_general_init.py, tests/unit/scrapers/test_general_firecrawl_schema.py
Adds unit tests for the general-purpose scraping system covering base processor logic, scraper registry, Firecrawl/Zyte client implementations, truncation detection, and Pydantic schema validation.
Content Processing Tests
tests/unit/scrapers/test_douban.py, tests/unit/scrapers/test_zhihu_content_processing.py
Adds tests for Douban and Zhihu content processing utilities including HTML transformations, media extraction, template rendering, and link/reference normalization.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 Hoppy tests now bloom in /unit/scrapers,
Old routers fade, new fixtures caper,
Coverage gates the PR so tight,
Each scraper mocked—what a delight! 🎉

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 21.38%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): The title correctly notes that the PR adds unit tests but is otherwise vague; it does not specify which components, modules, or systems are being tested, making the scope hard to gauge when scanning history. Resolution: provide a more specific title that indicates the scope of testing, e.g. 'feat: add comprehensive unit tests for scrapers and coverage configuration' or 'feat: refactor tests with Codecov integration and unit test suite'.
✅ Passed checks (1 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 14

🧹 Nitpick comments (8)
tests/unit/test_general_firecrawl_schema.py (1)

3-3: Consider if pytest import is necessary.

The pytest module is imported but not explicitly used in this file. If no pytest fixtures or markers are used, this import could be removed. However, keeping it is harmless if the file is expected to grow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_general_firecrawl_schema.py` at line 3, The file currently
imports the pytest module but never uses it; remove the unused import statement
for pytest (the top-level "import pytest") from
tests/unit/test_general_firecrawl_schema.py to clean up the code unless you plan
to add pytest fixtures or markers in this test file.
tests/unit/test_bluesky.py (2)

10-12: Use ASCII hyphen-minus instead of EN DASH.

Line 11 contains an EN DASH character (–) instead of the standard hyphen-minus (-). While this doesn't affect functionality, it's best to use ASCII characters in comments for consistency.

🧹 Proposed fix
 # ---------------------------------------------------------------------------
-# Helpers – lightweight fakes for atproto types
+# Helpers - lightweight fakes for atproto types
 # ---------------------------------------------------------------------------
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_bluesky.py` around lines 10 - 12, Replace the EN DASH
character in the comment "Helpers – lightweight fakes for atproto types" with a
standard ASCII hyphen-minus so the line reads "Helpers - lightweight fakes for
atproto types"; edit the comment in tests/unit/test_bluesky.py (the header
comment containing "Helpers – lightweight fakes for atproto types") and save
with the ASCII hyphen-minus character.

5-5: Unused import: dataclass.

The dataclass import from the dataclasses module is not used anywhere in this file.

🧹 Proposed fix
 import pytest
 from unittest.mock import AsyncMock, MagicMock, patch, PropertyMock
-from dataclasses import dataclass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_bluesky.py` at line 5, Remove the unused dataclass import:
delete the "from dataclasses import dataclass" statement so unused symbol
`dataclass` is no longer imported in this test module (keep other imports
intact).
tests/unit/test_general_scraper.py (1)

89-113: Consider an autouse fixture to reset SCRAPER_REGISTRY.

The manual try/finally blocks work but are fragile—if assertions fail before cleanup, or if new tests forget this pattern, registry mutations could leak. Since conftest.py already has reset_scraper_manager as an autouse fixture, consider extending it or adding a similar fixture for GeneralScraper.SCRAPER_REGISTRY.

♻️ Proposed addition to conftest.py
@pytest.fixture(autouse=True)
def reset_general_scraper_registry():
    """Reset GeneralScraper registry after each test."""
    from fastfetchbot_shared.services.scrapers.general.scraper import GeneralScraper
    from fastfetchbot_shared.services.scrapers.general.firecrawl import FirecrawlScraper
    from fastfetchbot_shared.services.scrapers.general.zyte import ZyteScraper
    
    original_registry = dict(GeneralScraper.SCRAPER_REGISTRY)
    yield
    GeneralScraper.SCRAPER_REGISTRY = {
        "FIRECRAWL": FirecrawlScraper,
        "ZYTE": ZyteScraper,
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_general_scraper.py` around lines 89 - 113, Tests mutate
GeneralScraper.SCRAPER_REGISTRY directly in TestRegisterAndGetAvailable which
relies on try/finally cleanup and can leak state; add an autouse fixture in
conftest.py (similar to reset_scraper_manager) that captures/restores or
reinitializes GeneralScraper.SCRAPER_REGISTRY around each test so tests like
TestRegisterAndGetAvailable.test_register_scraper and
test_register_scraper_uppercases_name no longer need manual try/finally; the
fixture should import GeneralScraper and the canonical scrapers (e.g.,
FirecrawlScraper, ZyteScraper) and reset GeneralScraper.SCRAPER_REGISTRY to the
expected dictionary after each test.
.github/workflows/pr-gate.yml (1)

35-36: Consider specifying the coverage source directory.

The --cov flag without a source path may result in inaccurate coverage measurements. Specify the source package to ensure proper coverage collection.

♻️ Proposed fix
     - name: Run tests with coverage
-      run: uv run pytest --cov --cov-report=xml
+      run: uv run pytest --cov=packages/shared --cov-report=xml

Adjust the path to match your actual source directory structure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/pr-gate.yml around lines 35 - 36, The pytest coverage step
("Run tests with coverage") currently uses "--cov" without a source, which can
yield incorrect metrics; update the run command (the line containing "uv run
pytest --cov --cov-report=xml") to specify your source package or source
directory (for example add "--cov=your_package" or "--cov=src") so pytest
measures coverage only for the intended codebase and the report is accurate.
tests/unit/test_wechat.py (1)

227-258: Avoid a coverage-only harness for a branch this file already describes as unreachable.

The docstring here says this path is “normally unreachable”, and the test forces it by overriding internals. That makes coverage look better without validating a real parser path. Prefer extracting the paragraph-splitting logic into a helper that can be tested directly, or deleting the dead branch instead of pinning it with this harness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_wechat.py` around lines 227 - 258, The test class
TestProcessWechatBrPairCoverage forces an unreachable branch in
Wechat._process_wechat by monkeypatching bs4 Tag.extract/decompose to preserve
sibling state; instead refactor the paragraph-splitting/br-pair detection into a
testable helper (e.g., Wechat._split_br_pair_paragraphs or similar) that
encapsulates the logic currently exercised under the extract/decompose hack, add
unit tests that call that helper with realistic HTML inputs (covering both
"content before" and "no content before" cases) without patching bs4 internals,
and then remove or replace these coverage-only tests; alternatively, if the
branch is truly dead, delete the unreachable br-pair branch in _process_wechat
and delete these tests. Ensure references to _process_wechat, extract,
decompose, and the br-pair logic are updated accordingly.
tests/unit/test_common.py (1)

102-104: Test doesn't verify **kwargs propagation to scraper class.

Per the relevant code snippet (common.py:69-72), scraper classes are instantiated with url=self.url, data=self.data, **self.kwargs. This test only verifies url and data but not kwargs. Consider adding a test case with kwargs to ensure they're passed through.

💡 Suggested test addition
@pytest.mark.asyncio
async def test_get_item_twitter_category_with_kwargs(self, make_url_metadata):
    mock_scraper_instance = MagicMock()
    mock_scraper_instance.get_item = AsyncMock(
        return_value={"title": "Twitter Post", "content": "hello"}
    )
    mock_scraper_class = MagicMock(return_value=mock_scraper_instance)

    svc = InfoExtractService(
        url_metadata=make_url_metadata(source="twitter", url="https://twitter.com/x/1"),
        data={"some": "data"},
        extra_kwarg="extra_value",
    )

    with patch.dict(svc.service_classes, {"twitter": mock_scraper_class}):
        await svc.get_item()

    mock_scraper_class.assert_called_once_with(
        url="https://twitter.com/x/1", data={"some": "data"}, extra_kwarg="extra_value"
    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_common.py` around lines 102 - 104, Add a test that verifies
**kwargs are forwarded to the scraper class: instantiate InfoExtractService with
an extra keyword (e.g., extra_kwarg="extra_value"), patch svc.service_classes to
return a mock scraper class (mock_scraper_class → returns mock_scraper_instance
with get_item AsyncMock), call svc.get_item(), and assert
mock_scraper_class.assert_called_once_with(url="https://twitter.com/x/1",
data={"some": "data"}, extra_kwarg="extra_value") so the InfoExtractService
constructor call (url=self.url, data=self.data, **self.kwargs) is covered;
reference InfoExtractService, svc.service_classes, and get_item when adding the
new test.
tests/unit/test_weibo.py (1)

209-287: Use underscore prefix for unused pics return value.

Several tests unpack result, pics but don't use pics. Static analysis flagged these as unused variables. Use _ or _pics to indicate intentionally ignored values.

💅 Example fix
-        result, pics = WeiboDataProcessor._weibo_html_text_clean("<p>Hello</p>", method="bs4")
+        result, _ = WeiboDataProcessor._weibo_html_text_clean("<p>Hello</p>", method="bs4")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_weibo.py` around lines 209 - 287, Several tests unpack two
return values from WeiboDataProcessor._weibo_html_text_clean(_bs4) but never use
the second value (pics); change those unpackings to use an underscore-prefixed
name (e.g., result, _ or result, _pics) to mark the variable as intentionally
unused. Update occurrences in test_bs4_method, the bs4/lxml method tests that
currently do "result, pics = ...", and in TestWeiboHtmlTextCleanBs4 methods that
unpack but don't inspect pics (e.g., test_img_replaced_with_alt,
test_image_tag_timeline_card_removed, test_search_link_unwrapped,
test_usercard_link_updated, test_span_unwrapped, test_href_slash_slash_fixed,
test_href_n_slash_fixed) so only tests that assert on pics (like
test_view_image_link_extracted and test_image_tag_non_matching_src_kept) keep
the pics variable name.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/pr-gate.yml:
- Around line 32-33: The "Install dependencies" workflow step currently runs "uv
sync" which omits the dev dependency group (so pytest and friends aren't
installed); update that step to run "uv sync --group dev" (or otherwise ensure
the dev extras are installed) so the dev/test packages (pytest, pytest-asyncio,
pytest-cov) are present for the "Run tests" job.

In `@pyproject.toml`:
- Line 58: Update the pytest-cov dependency constraint in pyproject.toml:
replace the non-existent "pytest-cov>=7.1.0" entry with a valid version range
such as "pytest-cov>=7.0.0,<8.0.0" (or a specific existing version like
"pytest-cov==7.0.0") so the dependency resolves correctly; edit the line
referencing pytest-cov in pyproject.toml accordingly.

In `@tests/unit/test_douban.py`:
- Around line 546-554: The test test_view_original_link_decomposed currently
only asserts the title text is gone, which can pass if the <a> tag remains;
update the assertion to ensure the anchor itself and its href are removed by
checking the resulting HTML from Douban._douban_short_text_process() does not
contain the anchor tag or the original URL (e.g., assert '<a' not in result or
assert 'https://img.douban.com/big.jpg' not in result) so the link is fully
decomposed.
- Around line 209-216: The test test_m_douban_non_review currently only checks
that "douban.com" is in d.url, which also passes for the original m.douban.com
and doesn't verify the host was rewritten; update the assertion in this test
(which constructs Douban and calls d.check_douban_type()) to explicitly assert
the host rewrite — for example assert "m.douban.com" not in d.url and/or assert
d.url.startswith("https://douban.com") or the expected desktop host string — so
the test guarantees the mobile host was replaced by the desktop host after
check_douban_type() on the Douban instance.
- Around line 454-473: The test currently only asserts removal of the blockquote
tags; extend it to verify the carriage-return entity is cleaned up by
Douban._get_douban_status as well: after calling d._get_douban_status() assert
that the raw_content on the Douban instance contains no "&#13;" and no literal
carriage return ("\r") (or assert the expected concatenated string "TextMore"
appears) to ensure the &#13; was removed, not just the tags.
- Around line 301-330: The test computes a boolean `found` by scanning
`call_args` for a template render call where `short_text` equals the stripped
value but never asserts it; add an assertion at the end of
`test_short_text_ending_with_newline_stripped` (after the loop that sets
`found`) to assert that `found` is True so the test fails if
`_douban_short_text_process` output with trailing newline was not stripped
before rendering; reference the test function name and the `found` variable and
the mocked template render call args to locate where to add this assertion.

In `@tests/unit/test_general_base.py`:
- Around line 21-23: The DEFAULT_OPENAI_MODEL constant is set to an invalid
model name "gpt-5-nano"; update the constant in
fastfetchbot_shared.services.scrapers.general.base (symbol:
DEFAULT_OPENAI_MODEL) to a valid OpenAI model such as "gpt-4-turbo" (or
"gpt-4o"/"gpt-3.5-turbo") and update the test expectation in
tests/unit/test_general_base.py (the assertion in test_default_openai_model) to
match the chosen valid model string so runtime API calls use a supported model.

In `@tests/unit/test_reddit.py`:
- Around line 350-365: The test test_removed_link_decomposition documents a bug
where a link tag is decomposed then still accessed, causing AttributeError; fix
the source in Reddit._process_reddit_data so that when you call a.decompose() on
a removed link you immediately skip further processing of that loop iteration
(e.g., use continue or otherwise avoid calling a.get("href") after decompose),
ensuring the loop does not access the decomposed tag variable.
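The decompose-then-skip pattern this fix calls for can be sketched like so; the "[removed]" marker and the link rewriting are illustrative assumptions, not the real _process_reddit_data logic:

```python
# Sketch of the fix: skip the loop iteration immediately after decomposing
# a removed link, so the detached tag is never touched again. The
# "[removed]" marker and href rewriting below are illustrative assumptions.
from bs4 import BeautifulSoup

def strip_removed_links(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a"):
        if a.get_text(strip=True) == "[removed]":
            a.decompose()   # tag is now detached and must not be reused
            continue        # must not fall through to a.get("href") below
        href = a.get("href")
        if href and href.startswith("/"):
            a["href"] = "https://www.reddit.com" + href
    return str(soup)
```

Without the continue, the loop body would call a.get("href") on a decomposed tag, which is exactly the AttributeError the test documents.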

In `@tests/unit/test_scraper_manager.py`:
- Around line 41-47: The test currently only asserts that init_bluesky_scraper
was not awaited but doesn't verify the public lookup map was populated; update
the test (and the other cached-branch tests at the mentioned ranges) to also
assert that ScraperManager.scrapers["bluesky"] (and the equivalent keys in the
other tests) is set to the existing instance (or is not None) after calling
init_scraper; locate the test using the symbols ScraperManager, init_scraper,
ScraperManager.bluesky_scraper and add an assertion that ScraperManager.scrapers
contains the correct scraper entry so the cached-path is fully covered.

In `@tests/unit/test_telegraph.py`:
- Around line 102-116: The test currently doesn't verify that the image
preprocessing branch is skipped; patch the DocumentPreprocessor used by
Telegraph.get_telegraph (e.g., patch the DocumentPreprocessor class or factory
in the fastfetchbot_shared.services.telegraph module) in
test_upload_images_false and assert it was not instantiated or that its
preprocess method was not awaited; keep the existing AsyncTelegraphPoster patch
(mock_poster_cls) and only change this test to add the DocumentPreprocessor
patch and a negative assertion (e.g., assert_not_called / assert_not_awaited) so
the upload_images=False path is actually enforced.

In `@tests/unit/test_threads.py`:
- Around line 20-25: Tests use SELF_CODE = "post" which asserts the wrong
identifier; update SELF_CODE to the actual post id (e.g., "ABC123") used in the
sample URL so it matches what Threads.__init__ should extract (the final path
segment) and adjust the other duplicated constants/expectations at the other
occurrences (around the blocks referenced) to use the same correct post id
instead of "post" so author-thread matching tests assert against the real post
code.

In `@tests/unit/test_twitter.py`:
- Around line 206-211: The test currently mocks both helpers
(_rapidapi_get_response_tweet_data and _api_client_get_response_tweet_data) with
the same return value so it can't detect which branch _get_response_tweet_data
selected; change the test to capture the two AsyncMock instances (e.g., assign
them to mock_rapidapi and mock_api_client) and after calling await
tw._get_response_tweet_data() assert that mock_api_client was awaited
(mock_api_client.assert_awaited_once() or assert_awaited) and mock_rapidapi was
not awaited (mock_rapidapi.assert_not_awaited()), ensuring the test verifies the
correct helper was used.
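The branch assertion described above can be sketched with two distinct AsyncMocks; the Tw class here is a stand-in for the real Twitter scraper, and the dispatch flag is an assumed simplification:

```python
# Sketch: give each helper its own AsyncMock so the test can tell which
# branch _get_response_tweet_data took. Tw and use_api_client are stand-ins.
import asyncio
from unittest.mock import AsyncMock

class Tw:
    def __init__(self, use_api_client: bool):
        self._rapidapi = AsyncMock(return_value={"id": 1})
        self._api_client = AsyncMock(return_value={"id": 1})
        self.use_api_client = use_api_client

    async def _get_response_tweet_data(self):
        # Dispatch mirrors the branch under test.
        if self.use_api_client:
            return await self._api_client()
        return await self._rapidapi()

async def check():
    tw = Tw(use_api_client=True)
    await tw._get_response_tweet_data()
    tw._api_client.assert_awaited_once()   # correct helper awaited
    tw._rapidapi.assert_not_awaited()      # other helper untouched

asyncio.run(check())
```

Because the two mocks are separate objects, identical return values no longer mask which code path actually ran.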

In `@tests/unit/test_weibo.py`:
- Around line 409-425: The test shows _get_pictures creates a MediaFile by
passing pic["large"]["url"] positionally which ends up in the media_type
parameter; update WeiboDataProcessor._get_pictures to call MediaFile using
explicit keyword arguments (e.g. url=pic["large"]["url"]) and, where appropriate
for livephoto, create proper MediaFile entries for the image and the video (use
url=... and set media_type='video' for the video) so the URL is passed correctly
instead of as a positional media_type.
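The positional-versus-keyword pitfall behind this finding can be shown with a toy model; MediaFile here is a hypothetical dataclass mirroring the described signature (media_type first), not the project's real model:

```python
# Sketch of the bug shape: when media_type is the first parameter, passing
# the URL positionally lands it in the wrong field. MediaFile below is a
# hypothetical stand-in for the real model.
from dataclasses import dataclass

@dataclass
class MediaFile:
    media_type: str = "image"
    url: str = ""

pic = {"large": {"url": "https://example.com/p.jpg"}}

# Bug shape: positional argument fills media_type, leaving url empty.
wrong = MediaFile(pic["large"]["url"])
# Fix: pass the URL by keyword, as the review suggests.
right = MediaFile(url=pic["large"]["url"], media_type="image")
```

Keyword arguments make the call site robust to parameter order, which is the whole point of the suggested change to _get_pictures.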

In `@tests/unit/test_xiaohongshu.py`:
- Around line 1691-1692: The test's assertion contradicts its comment:
raw_content must be normalized to an empty string, but the current assertion
allows None; update the assertion in tests/unit/test_xiaohongshu.py to assert
xhs.raw_content == "" so it fails when raw_content is None and enforces the
normalization contract for the xhs.raw_content attribute.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: dbbf700c-fbce-4627-be97-dd15c3ad92d6

📥 Commits

Reviewing files that changed from the base of the PR and between 61d1aac and 51a0dc2.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (31)
  • .codecov.yml
  • .github/workflows/pr-gate.yml
  • pyproject.toml
  • tests/integration/__init__.py
  • tests/routers/test_scraper.py
  • tests/routers/test_telegram_bot.py
  • tests/routers/test_twitter.py
  • tests/unit/__init__.py
  • tests/unit/conftest.py
  • tests/unit/test_bluesky.py
  • tests/unit/test_common.py
  • tests/unit/test_douban.py
  • tests/unit/test_general_base.py
  • tests/unit/test_general_firecrawl.py
  • tests/unit/test_general_firecrawl_schema.py
  • tests/unit/test_general_init.py
  • tests/unit/test_general_scraper.py
  • tests/unit/test_general_zyte.py
  • tests/unit/test_instagram.py
  • tests/unit/test_reddit.py
  • tests/unit/test_scraper_abc.py
  • tests/unit/test_scraper_config.py
  • tests/unit/test_scraper_manager.py
  • tests/unit/test_telegraph.py
  • tests/unit/test_threads.py
  • tests/unit/test_twitter.py
  • tests/unit/test_wechat.py
  • tests/unit/test_weibo.py
  • tests/unit/test_xiaohongshu.py
  • tests/unit/test_zhihu.py
  • tests/unit/test_zhihu_content_processing.py
💤 Files with no reviewable changes (3)
  • tests/routers/test_telegram_bot.py
  • tests/routers/test_twitter.py
  • tests/routers/test_scraper.py

Comment on lines +32 to +33
    - name: Install dependencies
      run: uv sync

⚠️ Potential issue | 🔴 Critical

uv sync will not install dev dependencies including pytest.

The dev dependency group (containing pytest, pytest-asyncio, pytest-cov) is not included by default with uv sync. Without the --group dev flag, pytest won't be available and the workflow will fail at the "Run tests" step.

🐛 Proposed fix
     - name: Install dependencies
-      run: uv sync
+      run: uv sync --group dev
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/pr-gate.yml around lines 32 - 33, The "Install
dependencies" workflow step currently runs "uv sync" which omits the dev
dependency group (so pytest and friends aren't installed); update that step to
run "uv sync --group dev" (or otherwise ensure the dev extras are installed) so
the dev/test packages (pytest, pytest-asyncio, pytest-cov) are present for the
"Run tests" job.

"pytest>=8.3.5,<9.0.0",
"pytest-asyncio>=0.26.0,<0.27.0",
"celery-types>=0.24.0",
"pytest-cov>=7.1.0",
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

What is the latest version of pytest-cov package on PyPI?

💡 Result:

The latest version of the pytest-cov package on PyPI is 7.0.0.

Change pytest-cov version constraint from 7.1.0 to 7.0.0 or lower.

The version 7.1.0 does not exist on PyPI; the latest available version is 7.0.0. Update the constraint to pytest-cov>=7.0.0,<8.0.0 or specify an appropriate existing version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 58, Update the pytest-cov dependency constraint in
pyproject.toml: replace the non-existent "pytest-cov>=7.1.0" entry with a valid
version range such as "pytest-cov>=7.0.0,<8.0.0" (or a specific existing version
like "pytest-cov==7.0.0") so the dependency resolves correctly; edit the line
referencing pytest-cov in pyproject.toml accordingly.
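The resolution failure is easy to reproduce in miniature. As a sketch (toy version parser, not full PEP 440; the version list follows the review's finding that 7.0.0 is the newest release):

```python
# Minimal sketch: does any released version satisfy the constraint ">=7.1.0"?
# The parser only handles simple X.Y.Z strings, which is enough here.

def parse(v):
    return tuple(int(part) for part in v.split("."))

released = ["6.2.1", "7.0.0"]   # illustrative pytest-cov releases, per the review
floor = parse("7.1.0")          # the lower bound from ">=7.1.0,<8.0.0"

satisfying = [v for v in released if parse(v) >= floor]
print(satisfying)  # -> []: no release meets the bound, so the resolver errors out
```

With the corrected bound `>=7.0.0,<8.0.0`, the same check would return `["7.0.0"]`.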

Comment on lines +209 to +216
def test_m_douban_non_review(self):
"""m.douban.com with non-review path should still rewrite host."""
from fastfetchbot_shared.services.scrapers.douban import Douban, DoubanType

d = Douban("https://m.douban.com/group/topic/12345/")
d.check_douban_type()
assert d.douban_type == DoubanType.GROUP
assert "douban.com" in d.url
⚠️ Potential issue | 🟡 Minor

Assert the desktop-host rewrite explicitly.

The check on Line 216 passes for the original m.douban.com URL too, so this case does not actually verify the rewrite described by the docstring.

Proposed tightening
         d = Douban("https://m.douban.com/group/topic/12345/")
         d.check_douban_type()
         assert d.douban_type == DoubanType.GROUP
-        assert "douban.com" in d.url
+        assert d.url.startswith("https://www.douban.com/")
+        assert "m.douban.com" not in d.url
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_douban.py` around lines 209 - 216, The test
test_m_douban_non_review currently only checks that "douban.com" is in d.url,
which also passes for the original m.douban.com and doesn't verify the host was
rewritten; update the assertion in this test (which constructs Douban and calls
d.check_douban_type()) to explicitly assert the host rewrite — for example
assert "m.douban.com" not in d.url and/or assert
d.url.startswith("https://www.douban.com/") or the expected desktop host string — so
the test guarantees the mobile host was replaced by the desktop host after
check_douban_type() on the Douban instance.
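The weakness of the substring check is easy to demonstrate: `"douban.com" in url` is true for the mobile URL as well, so it cannot detect a missed rewrite. A hostname comparison can:

```python
from urllib.parse import urlparse

mobile = "https://m.douban.com/group/topic/12345/"

# The weak assertion from the test passes even before any rewrite:
assert "douban.com" in mobile          # true for the mobile URL too

# A host-level check distinguishes the two forms:
assert urlparse(mobile).hostname == "m.douban.com"

rewritten = mobile.replace("m.douban.com", "www.douban.com")
assert urlparse(rewritten).hostname == "www.douban.com"
assert "m.douban.com" not in rewritten
```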

Comment on lines +301 to +330
async def test_short_text_ending_with_newline_stripped(self, _patch_get_selector, _patch_douban_templates):
"""If short_text ends with newline, it should be stripped."""
from fastfetchbot_shared.services.scrapers.douban import Douban, DoubanType

html = """
<html><body>
<h1>Test Note</h1>
<div class="content"><a href="/people/123/">Author</a></div>
<div id="link-report"><p>Content</p></div>
</body></html>
"""
selector = etree.HTML(html)
_patch_get_selector.return_value = selector

d = Douban("https://www.douban.com/note/12345/")
d.douban_type = DoubanType.NOTE
# Patch _douban_short_text_process to return text ending with \n
with patch.object(d, "_douban_short_text_process", return_value="text\n"):
await d.get_douban_item()
# The template should receive short_text without trailing newline
call_args = _patch_douban_templates.render.call_args_list
# Find the call where short_text was passed
found = False
for c in call_args:
if c.kwargs.get("data", {}).get("short_text") == "text":
found = True
break
if c.args and isinstance(c.args[0], dict) and c.args[0].get("short_text") == "text":
found = True
break
⚠️ Potential issue | 🟠 Major

This test never asserts that the newline was stripped.

Lines 323-330 compute found, but without a final assertion the test stays green even if short_text is still "text\n".

Proposed fix
             found = False
             for c in call_args:
                 if c.kwargs.get("data", {}).get("short_text") == "text":
                     found = True
                     break
                 if c.args and isinstance(c.args[0], dict) and c.args[0].get("short_text") == "text":
                     found = True
                     break
+            assert found, "expected stripped short_text to be passed to the template"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_douban.py` around lines 301 - 330, The test computes a
boolean `found` by scanning `call_args` for a template render call where
`short_text` equals the stripped value but never asserts it; add an assertion at
the end of `test_short_text_ending_with_newline_stripped` (after the loop that
sets `found`) to assert that `found` is True so the test fails if
`_douban_short_text_process` output with trailing newline was not stripped
before rendering; reference the test function name and the `found` variable and
the mocked template render call args to locate where to add this assertion.
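The failure mode here is a general one: a test body that computes a flag but never asserts it is green no matter what. A minimal illustration of the pattern:

```python
# A test body with no final assert always "passes", even when the
# condition it computed is False.
def test_without_final_assert():
    calls = [{"short_text": "text\n"}]   # pretend the newline was NOT stripped
    found = any(c.get("short_text") == "text" for c in calls)
    # found is False here, but without `assert found` pytest reports success.
    return found

print(test_without_final_assert())  # -> False
```

Adding `assert found` at the end turns the silent miss into a real failure.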

Comment on lines +454 to +473
async def test_status_replaces_special_chars(self, _patch_get_selector):
"""Status should replace blockquote tags, >+<, and &#13;."""
from fastfetchbot_shared.services.scrapers.douban import Douban

html = """
<html><body>
<div class="content"><a href="/people/123/">Author</a></div>
<div class="status-saying"><blockquote>Text&#13;More</blockquote></div>
</body></html>
"""
selector = etree.HTML(html)
_patch_get_selector.return_value = selector

d = Douban("https://www.douban.com/status/12345/")
d.check_douban_type()
await d._get_douban_status()

assert "<blockquote>" not in d.raw_content
assert "</blockquote>" not in d.raw_content

⚠️ Potential issue | 🟡 Minor

Cover the &#13; cleanup, not just tag removal.

The assertions on Lines 471-472 only prove the blockquote wrapper was removed. A regression that leaves the carriage return intact would still pass.

Proposed tightening
         await d._get_douban_status()

         assert "<blockquote>" not in d.raw_content
         assert "</blockquote>" not in d.raw_content
+        assert "\r" not in d.raw_content
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_douban.py` around lines 454 - 473, The test currently only
asserts removal of the blockquote tags; extend it to verify the carriage-return
entity is cleaned up by Douban._get_douban_status as well: after calling
d._get_douban_status() assert that the raw_content on the Douban instance
contains no "&#13;" and no literal carriage return ("\r") (or assert the
expected concatenated string "TextMore" appears) to ensure the &#13; was
removed, not just the tags.
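For context, `&#13;` is the HTML numeric entity for a carriage return, which the standard library confirms:

```python
import html

raw = "Text&#13;More"
decoded = html.unescape(raw)
print(repr(decoded))  # -> 'Text\rMore': &#13; decodes to a carriage return

# So an assertion on cleaned content should reject both representations:
cleaned = decoded.replace("\r", "")
assert "&#13;" not in cleaned and "\r" not in cleaned
assert cleaned == "TextMore"
```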

Comment on lines +102 to +116
@pytest.mark.asyncio
@patch("fastfetchbot_shared.services.telegraph.TELEGRAPH_TOKEN_LIST", ["tok1"])
@patch("fastfetchbot_shared.services.telegraph.AsyncTelegraphPoster")
async def test_upload_images_false(self, mock_poster_cls):
mock_poster = AsyncMock()
mock_poster_cls.return_value = mock_poster
mock_poster.post.return_value = {"url": "https://telegra.ph/page"}

t = Telegraph("T", "https://ex.com", "Auth", "https://ex.com/a", "cat", "<p>c</p>")
result = await t.get_telegraph(upload_images=False)

assert result == "https://telegra.ph/page"
# DocumentPreprocessor should NOT have been called
mock_poster.post.assert_awaited_once()

⚠️ Potential issue | 🟡 Minor

This test does not actually prove the upload_images=False branch.

The comment says image preprocessing should not run, but nothing is patched or asserted for that path. If preprocessing starts being constructed unconditionally, this test still stays green.

🧪 Suggested fix
     @pytest.mark.asyncio
     @patch("fastfetchbot_shared.services.telegraph.TELEGRAPH_TOKEN_LIST", ["tok1"])
     @patch("fastfetchbot_shared.services.telegraph.AsyncTelegraphPoster")
-    async def test_upload_images_false(self, mock_poster_cls):
+    @patch("fastfetchbot_shared.services.telegraph.DocumentPreprocessor")
+    async def test_upload_images_false(self, mock_doc_pre_cls, mock_poster_cls):
+    async def test_upload_images_false(self, mock_doc_pre_cls, mock_poster_cls):
         mock_poster = AsyncMock()
         mock_poster_cls.return_value = mock_poster
         mock_poster.post.return_value = {"url": "https://telegra.ph/page"}

         t = Telegraph("T", "https://ex.com", "Auth", "https://ex.com/a", "cat", "<p>c</p>")
         result = await t.get_telegraph(upload_images=False)

         assert result == "https://telegra.ph/page"
-        # DocumentPreprocessor should NOT have been called
+        mock_doc_pre_cls.assert_not_called()
         mock_poster.post.assert_awaited_once()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_telegraph.py` around lines 102 - 116, The test currently
doesn't verify that the image preprocessing branch is skipped; patch the
DocumentPreprocessor used by Telegraph.get_telegraph (e.g., patch the
DocumentPreprocessor class or factory in the
fastfetchbot_shared.services.telegraph module) in test_upload_images_false and
assert it was not instantiated or that its preprocess method was not awaited;
keep the existing AsyncTelegraphPoster patch (mock_poster_cls) and only change
this test to add the DocumentPreprocessor patch and a negative assertion (e.g.,
assert_not_called / assert_not_awaited) so the upload_images=False path is
actually enforced.
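The negative-assertion pattern can be shown in isolation. This is a toy stand-in (the `Preprocessor` class and `render` function are hypothetical, not the real Telegraph API), but the mock mechanics are the same:

```python
from unittest.mock import MagicMock

Preprocessor = MagicMock()   # stands in for the real DocumentPreprocessor

def render(upload_images):
    if upload_images:
        Preprocessor()       # only the True branch should construct it
    return "https://telegra.ph/page"

assert render(upload_images=False) == "https://telegra.ph/page"
Preprocessor.assert_not_called()   # fails if preprocessing starts running unconditionally

render(upload_images=True)
Preprocessor.assert_called_once()  # and the True branch still works
```

Without the `assert_not_called()`, the False-branch test would stay green even if the guard were deleted.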

Comment on lines +20 to +25
# NOTE: Threads.__init__ extracts code via urlparse(url).path.split("/")[2]
# For URL "https://www.threads.net/@user/post/ABC123", path is "/@user/post/ABC123"
# split("/") = ["", "@user", "post", "ABC123"] -> index 2 = "post"
# So self.code == "post" for standard Threads URLs.

SELF_CODE = "post" # What self.code resolves to for standard URLs
⚠️ Potential issue | 🟠 Major

These expectations lock in the wrong identifier for standard Threads URLs.

For https://www.threads.net/@user/post/ABC123, the post code is ABC123, not post. Keeping SELF_CODE = "post" makes this suite green only if Threads.__init__ keeps misparsing canonical URLs, and it skews all of the author-thread matching tests that reuse SELF_CODE.

🧪 Suggested fix
-# NOTE: Threads.__init__ extracts code via urlparse(url).path.split("/")[2]
-# For URL "https://www.threads.net/@user/post/ABC123", path is "/@user/post/ABC123"
-# split("/") = ["", "@user", "post", "ABC123"] -> index 2 = "post"
-# So self.code == "post" for standard Threads URLs.
-
-SELF_CODE = "post"  # What self.code resolves to for standard URLs
+# NOTE: For URL "https://www.threads.net/@user/post/ABC123",
+# the post code is the last path segment: "ABC123".
+SELF_CODE = "ABC123"

     def test_default_init(self):
         t = Threads(url="https://www.threads.net/@user/post/ABC123")
         assert t.url == "https://www.threads.net/@user/post/ABC123"
-        assert t.code == "post"
+        assert t.code == "ABC123"

Also applies to: 69-72, 88-92

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_threads.py` around lines 20 - 25, Tests use SELF_CODE =
"post" which asserts the wrong identifier; update SELF_CODE to the actual post
id (e.g., "ABC123") used in the sample URL so it matches what Threads.__init__
should extract (the final path segment) and adjust the other duplicated
constants/expectations at the other occurrences (around the blocks referenced)
to use the same correct post id instead of "post" so author-thread matching
tests assert against the real post code.
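The indexing the NOTE describes is easy to verify directly — index 2 of the split path really is the literal segment `"post"`, while the post code is the last segment:

```python
from urllib.parse import urlparse

url = "https://www.threads.net/@user/post/ABC123"
parts = urlparse(url).path.split("/")
print(parts)                 # -> ['', '@user', 'post', 'ABC123']

assert parts[2] == "post"    # what .split("/")[2] actually yields
assert parts[-1] == "ABC123" # the real post code is the last path segment
```

So a fix in `Threads.__init__` would take the final segment (e.g. `path.rstrip("/").split("/")[-1]`) rather than index 2.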

Comment on lines +206 to +211
with patch.object(tw, "_rapidapi_get_response_tweet_data", new_callable=AsyncMock, return_value={"ok": True}):
# ALL_SCRAPER starts with "api-client" which triggers _api_client branch
# but since it starts with "api-client" (not "Twitter"), it falls into elif
with patch.object(tw, "_api_client_get_response_tweet_data", new_callable=AsyncMock, return_value={"ok": True}):
result = await tw._get_response_tweet_data()
assert result == {"ok": True}
⚠️ Potential issue | 🟡 Minor

Differentiate the api-client branch in this routing test.

Both mocked branches return the same payload here, so the test still passes if _get_response_tweet_data() picks the wrong helper for the first ALL_SCRAPER entry. Assert which helper was awaited, and that the other one was not.

🧪 Suggested fix
-        with patch.object(tw, "_rapidapi_get_response_tweet_data", new_callable=AsyncMock, return_value={"ok": True}):
+        with patch.object(tw, "_rapidapi_get_response_tweet_data", new_callable=AsyncMock) as mock_rapidapi:
             # ALL_SCRAPER starts with "api-client" which triggers _api_client branch
             # but since it starts with "api-client" (not "Twitter"), it falls into elif
-            with patch.object(tw, "_api_client_get_response_tweet_data", new_callable=AsyncMock, return_value={"ok": True}):
+            with patch.object(
+                tw,
+                "_api_client_get_response_tweet_data",
+                new_callable=AsyncMock,
+                return_value={"ok": True},
+            ) as mock_api_client:
                 result = await tw._get_response_tweet_data()
         assert result == {"ok": True}
+        mock_api_client.assert_awaited_once()
+        mock_rapidapi.assert_not_awaited()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_twitter.py` around lines 206 - 211, The test currently mocks
both helpers (_rapidapi_get_response_tweet_data and
_api_client_get_response_tweet_data) with the same return value so it can't
detect which branch _get_response_tweet_data selected; change the test to
capture the two AsyncMock instances (e.g., assign them to mock_rapidapi and
mock_api_client) and after calling await tw._get_response_tweet_data() assert
that mock_api_client was awaited (mock_api_client.assert_awaited_once() or
assert_awaited) and mock_rapidapi was not awaited
(mock_rapidapi.assert_not_awaited()), ensuring the test verifies the correct
helper was used.
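The awaited/not-awaited pair is what makes a routing test meaningful when both branches return identical payloads. A self-contained sketch (the dispatcher here is a toy mirroring the branch shape, not the real `_get_response_tweet_data`):

```python
import asyncio
from unittest.mock import AsyncMock

rapidapi = AsyncMock(return_value={"ok": True})
api_client = AsyncMock(return_value={"ok": True})

async def route(scraper_name):
    # toy dispatcher: same payload from both helpers, so only the
    # awaited-ness of the mocks reveals which branch ran
    if scraper_name.startswith("Twitter"):
        return await rapidapi()
    return await api_client()

result = asyncio.run(route("api-client"))
assert result == {"ok": True}          # passes for either branch — not enough
api_client.assert_awaited_once()       # the branch we expect
rapidapi.assert_not_awaited()          # this is what catches a misroute
```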

Comment on lines +409 to +425
def test_pic_infos_live_photo_no_original(self):
"""When original is None, falls back to MediaFile(pic["large"]["url"]) which
raises TypeError (source code bug: positional arg goes to media_type, missing url)."""
from fastfetchbot_shared.services.scrapers.weibo.scraper import WeiboDataProcessor
weibo_info = {
"pic_num": 1,
"pic_infos": {
"abc": {
"type": "livephoto",
"original": None,
"large": {"url": "https://img.com/large.jpg"},
"video": {"url": "https://video.com/live.mp4"},
}
},
}
with pytest.raises(TypeError):
WeiboDataProcessor._get_pictures(weibo_info)
⚠️ Potential issue | 🟡 Minor

Test documents a bug in _get_pictures for livephoto without original.

This test expects a TypeError when original is None for a livephoto type, indicating a bug in the source code where MediaFile(pic["large"]["url"]) is called incorrectly (positional instead of keyword argument).

Consider opening an issue to fix this bug.

Would you like me to help propose a fix for this bug in the source code?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_weibo.py` around lines 409 - 425, The test shows
_get_pictures creates a MediaFile by passing pic["large"]["url"] positionally
which ends up in the media_type parameter; update
WeiboDataProcessor._get_pictures to call MediaFile using explicit keyword
arguments (e.g. url=pic["large"]["url"]) and, where appropriate for livephoto,
create proper MediaFile entries for the image and the video (use url=... and set
media_type='video' for the video) so the URL is passed correctly instead of as a
positional media_type.
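The bug class is a common one: a positional argument lands in the wrong parameter. A minimal reproduction with a hypothetical `MediaFile` signature matching what the review describes (the real class may differ):

```python
# Hypothetical signature with media_type first, as the review describes.
class MediaFile:
    def __init__(self, media_type, url):
        self.media_type = media_type
        self.url = url

try:
    # Positional call: the URL fills media_type and url goes unsupplied.
    MediaFile("https://img.com/large.jpg")
except TypeError as e:
    print("TypeError:", e)   # missing required argument: 'url'

# Keyword arguments make the call order-independent and correct:
ok = MediaFile(media_type="image", url="https://img.com/large.jpg")
assert ok.url == "https://img.com/large.jpg"
```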

Comment on lines +1691 to +1692
# raw_content should be empty string, not None
assert xhs.raw_content is None or xhs.raw_content == ""
⚠️ Potential issue | 🟡 Minor

Make this assertion match the contract in the comment.

The comment says raw_content should be normalized to "", but the assertion also allows None, so this test will not catch the regression it is describing.

🧪 Suggested fix
-        assert xhs.raw_content is None or xhs.raw_content == ""
+        assert xhs.raw_content == ""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_xiaohongshu.py` around lines 1691 - 1692, The test's
assertion contradicts its comment: raw_content must be normalized to an empty
string, but the current assertion allows None; update the assertion in
tests/unit/test_xiaohongshu.py to assert xhs.raw_content == "" so it fails when
raw_content is None and enforces the normalization contract for the
xhs.raw_content attribute.

@codecov-commenter

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

ℹ️ You can also turn on project coverage checks and project coverage reporting in pull request comments

Thanks for integrating Codecov - We've got you covered ☂️

@aturret aturret merged commit 320b7c6 into main Mar 21, 2026
2 of 4 checks passed