
refactor: move divisor calculation from analysis backends to service layer #98

Merged
Limitex merged 4 commits into main from refactor/separate-score-from-divisor on Apr 17, 2026

Conversation

Limitex (Owner) commented Apr 14, 2026

Summary by CodeRabbit

  • Refactor
    • Simplified the texture analysis backend and internal data flow, reducing internal coupling and returning simpler raw complexity scores internally.
    • Cache and result assembly logic updated so end-user visible analysis results, previews, and compressor behavior remain unchanged.
    • Tests updated to reflect the new internal result shape and preserve parity between CPU/GPU analysis.

@Limitex Limitex self-assigned this Apr 14, 2026
coderabbitai Bot commented Apr 14, 2026

Warning

Rate limit exceeded

@Limitex has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 54 minutes and 51 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 54 minutes and 51 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 142166aa-c1fa-4b71-ac9d-448668eaf8fd

📥 Commits

Reviewing files that changed from the base of the PR and between 76e5849 and bb79c0d.

📒 Files selected for processing (1)
  • CHANGELOG.md
📝 Walkthrough

Backends no longer accept or propagate a ComplexityCalculator. AnalyzeBatch now returns raw float complexity scores (0–1) instead of TextureAnalysisResult objects; higher-level services reconstruct TextureAnalysisResult objects where needed and adjust constructor calls accordingly.
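The walkthrough above implies a backend contract shaped roughly like the following. This is a sketch inferred from the summary, not the actual file; the exact parameter list of AnalyzeBatch in ITextureAnalysisBackend.cs may differ.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the post-refactor backend contract (assumed shape).
// Before the refactor, AnalyzeBatch returned Dictionary<Texture2D, TextureAnalysisResult>
// and backends held a ComplexityCalculator to derive divisors themselves.
public interface ITextureAnalysisBackend
{
    // Returns a raw complexity score in [0, 1] per texture; the recommended
    // divisor and resolution are now reconstructed by the service layer.
    Dictionary<Texture2D, float> AnalyzeBatch(Dictionary<Texture2D, TextureInfo> textures);
}
```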

Changes

Cohort / File(s) — Summary

• Interface & Factory
  Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs, Editor/TextureCompressor/Analysis/Backends/AnalysisBackendFactory.cs
  ITextureAnalysisBackend.AnalyzeBatch now returns Dictionary<Texture2D, float>; the ComplexityCalculator parameter was removed from the AnalysisBackendFactory.Create(...) signature.
• CPU Backend
  Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs
  Removed the ComplexityCalculator field and constructor parameter; AnalyzeBatch returns Dictionary<Texture2D, float>; eliminated the AnalysisResultHelper.BuildResult usage and related fields from work items.
• GPU Backend
  Editor/TextureCompressor/Analysis/Backends/GpuAnalysisBackend.cs
  Removed the TextureProcessor and ComplexityCalculator constructor parameters and fields; AnalyzeBatch returns Dictionary<Texture2D, float>; stores single clamped scores (or a default score) instead of full result objects.
• Analyzer & Services
  Editor/TextureCompressor/Core/Services/TextureAnalyzer.cs, Editor/TextureCompressor/Core/Services/TextureCompressorService.cs
  The TextureAnalyzer constructor no longer requires a ComplexityCalculator; AnalyzeBatch now works with raw float scores, and TextureCompressorService maps those floats into full TextureAnalysisResult objects via AnalysisResultHelper.BuildResult.
• UI / Preview
  Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs
  No longer passes complexityCalc into TextureAnalyzer; when uncached, builds a TextureAnalysisResult per input texture from raw scores and writes the constructed results to the cache.
• Tests
  Tests/Editor/Analysis/Backends/AnalysisBackendFactoryTests.cs, Tests/Editor/Analysis/Backends/CpuAnalysisBackendTests.cs, Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs, Tests/Editor/Core/Services/TextureAnalyzerTests.cs
  Removed _complexityCalc from setups; updated backend/analyzer test expectations to assert raw float scores (0–1) rather than TextureAnalysisResult properties; removed tests asserting RecommendedDivisor/RecommendedResolution.
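Taken together, the service-layer side of the change follows one pattern: take the raw score map from the backend and rebuild full results. A minimal sketch, assuming the field names _backend, _complexityCalc, and _processor and the BuildResult argument order shown here:

```csharp
// Service layer: rebuild TextureAnalysisResult objects from raw [0, 1] scores.
// Sketch only; field names and BuildResult's parameter order are assumptions.
Dictionary<Texture2D, float> rawScores = _backend.AnalyzeBatch(textures);

var analysisResults = new Dictionary<Texture2D, TextureAnalysisResult>();
foreach (var kvp in rawScores)
{
    if (!textures.TryGetValue(kvp.Key, out var info))
        continue; // texture no longer tracked; skip

    analysisResults[kvp.Key] = AnalysisResultHelper.BuildResult(
        kvp.Value,          // raw complexity score in [0, 1]
        kvp.Key.width,
        kvp.Key.height,
        info.IsEmission,    // emission textures may receive a score boost
        info.IsNormalMap,
        _complexityCalc,    // maps score to RecommendedDivisor/RecommendedResolution
        _processor
    );
}
```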

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through code with tiny paws,
Took out a calculator's cause,
Backends now hum a single score,
Services stitch results once more,
A joyful thump — simpler laws! 🎶

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

• Description check — ⚠️ Warning: No pull request description was provided by the author, although the repository template requires sections such as PR type, description summary, related issues, changes list, breaking changes, testing confirmation, and context. Resolution: add a description following the template, including the PR type (Refactoring), a summary of changes, breaking-changes confirmation, testing verification, and any relevant context for reviewers.
• Docstring Coverage — ⚠️ Warning: Docstring coverage is 9.38%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (1 passed)

• Title check — ✅ Passed: The title clearly and concisely summarizes the main refactoring objective (moving divisor calculation from analysis backends to the service layer), which aligns with the substantial changes across backend interfaces and service implementations.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai Bot left a comment

🧹 Nitpick comments (4)
Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs (1)

12-17: Document that the backend contract is now a raw score map.

The XML comment still reads like AnalyzeBatch() returns full analysis results. Since the signature is now Dictionary<Texture2D, float>, make the [0, 1] raw-score semantics explicit.

📝 Suggested doc update
-        /// Analyzes a batch of textures and returns per-texture results.
+        /// Analyzes a batch of textures and returns raw per-texture complexity scores in [0, 1].
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs` around
lines 12 - 17, Summary: Update the XML doc for
ITextureAnalysisBackend.AnalyzeBatch to state it returns a raw score map in
[0,1]. Change the comment on AnalyzeBatch to explicitly document that the method
returns a Dictionary<Texture2D, float> containing per-texture raw scores
normalized to the [0,1] range (where 0 means lowest relevance/quality and 1
means highest), and note that implementations still control pixel access
strategy; reference the interface ITextureAnalysisBackend, method AnalyzeBatch,
and types Texture2D/TextureInfo so callers understand the contract.
Editor/TextureCompressor/Core/Services/TextureCompressorService.cs (1)

183-202: Centralize raw-score reconstruction in one helper.

This loop is now duplicated here and in Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs, Lines 167-199. Extracting a batch helper from AnalysisResultHelper would keep emission boosting and divisor/resolution reconstruction from drifting between build and preview paths.

♻️ Proposed extraction
-            var analysisResults = new Dictionary<Texture2D, TextureAnalysisResult>();
-            foreach (var kvp in rawScores)
-            {
-                var texture = kvp.Key;
-                if (!textures.TryGetValue(texture, out var info))
-                    continue;
-
-                analysisResults[texture] = AnalysisResultHelper.BuildResult(
-                    kvp.Value,
-                    texture.width,
-                    texture.height,
-                    info.IsEmission,
-                    info.IsNormalMap,
-                    _complexityCalc,
-                    _processor
-                );
-            }
+            var analysisResults = AnalysisResultHelper.BuildResults(
+                rawScores,
+                textures,
+                _complexityCalc,
+                _processor
+            );
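For reference, the suggested batch helper could be sketched as follows. The method name BuildResults, the TextureInfo lookup, and the parameter order are assumptions extrapolated from the diff above, not code from the repository:

```csharp
// Hypothetical extraction into AnalysisResultHelper, centralizing the
// raw-score -> TextureAnalysisResult reconstruction for build and preview paths.
public static Dictionary<Texture2D, TextureAnalysisResult> BuildResults(
    Dictionary<Texture2D, float> rawScores,
    Dictionary<Texture2D, TextureInfo> textures,
    ComplexityCalculator complexityCalc,
    TextureProcessor processor)
{
    var results = new Dictionary<Texture2D, TextureAnalysisResult>();
    foreach (var kvp in rawScores)
    {
        if (!textures.TryGetValue(kvp.Key, out var info))
            continue; // no metadata for this texture; skip

        results[kvp.Key] = BuildResult(
            kvp.Value,
            kvp.Key.width,
            kvp.Key.height,
            info.IsEmission,
            info.IsNormalMap,
            complexityCalc,
            processor);
    }
    return results;
}
```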
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Editor/TextureCompressor/Core/Services/TextureCompressorService.cs` around
lines 183 - 202, The duplicated loop that reconstructs raw-scores and builds
TextureAnalysisResult exists in TextureCompressorService (the foreach over
rawScores using AnalysisResultHelper.BuildResult) and in PreviewGenerator;
extract a single batch helper on AnalysisResultHelper (e.g., a new method like
BuildResultsFromRawScores or ReconstructBatchResults) that takes the rawScores
dictionary plus textures metadata (width, height, IsEmission, IsNormalMap) and
dependencies (_complexityCalc, _processor) and returns Dictionary<Texture2D,
TextureAnalysisResult>; replace the loop in TextureCompressorService and the
similar code in Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs to call
this new helper so emission boosting and divisor/resolution logic are
centralized.
Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs (1)

359-374: Drive parity with identical combined weights.

CreateGpuBackend() pins AnalysisConstants.CombinedDefault*Weight, while CreateCpuBackend() still relies on AnalyzerFactory.Create(strategy) defaults. For AnalysisStrategyType.Combined, the parity assertions are no longer guaranteed to compare the same configuration on both backends.

♻️ Suggested tweak
 private CpuAnalysisBackend CreateCpuBackend(AnalysisStrategyType strategy)
 {
-    var standardAnalyzer = AnalyzerFactory.Create(strategy);
+    var standardAnalyzer =
+        strategy == AnalysisStrategyType.Combined
+            ? AnalyzerFactory.Create(
+                strategy,
+                AnalysisConstants.CombinedDefaultFastWeight,
+                AnalysisConstants.CombinedDefaultHighAccuracyWeight,
+                AnalysisConstants.CombinedDefaultPerceptualWeight
+            )
+            : AnalyzerFactory.Create(strategy);
     var normalMapAnalyzer = AnalyzerFactory.CreateNormalMapAnalyzer();
     return new CpuAnalysisBackend(standardAnalyzer, normalMapAnalyzer, _processor);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs` around lines 359 - 374,
CreateCpuBackend and CreateGpuBackend are no longer constructing equivalent
analyzers for AnalysisStrategyType.Combined: CreateGpuBackend pins
AnalysisConstants.CombinedDefaultFast/HighAccuracy/PerceptualWeight while
CreateCpuBackend calls AnalyzerFactory.Create(strategy) with defaults. Update
CreateCpuBackend so when strategy == AnalysisStrategyType.Combined it constructs
the CPU standard analyzer using the same combined weights (e.g. call the
AnalyzerFactory method that accepts weights or a CreateCombined/
CreateWithWeights overload and pass AnalysisConstants.CombinedDefaultFastWeight,
CombinedDefaultHighAccuracyWeight, CombinedDefaultPerceptualWeight); otherwise
keep AnalyzerFactory.Create(strategy) for non-combined strategies, then return
the CpuAnalysisBackend as before.
Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs (1)

123-126: Clamp the CPU score before publishing it.

TextureAnalyzer now advertises 0–1 raw scores on Editor/TextureCompressor/Core/Services/TextureAnalyzer.cs Line 34, and the GPU backend enforces that on Editor/TextureCompressor/Analysis/Backends/GpuAnalysisBackend.cs Line 209. Clamping here keeps the two backends interchangeable if an analyzer ever returns a slightly under/over-normalized value.

🛠️ Proposed fix
                         else
                         {
                             score = item.Analyzer.Analyze(item.Data).Score;
                         }
 
+                        if (score < 0f)
+                            score = 0f;
+                        else if (score > 1f)
+                            score = 1f;
+
                         results[item.Texture] = score;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs` around
lines 123 - 126, Clamp the score produced by
item.Analyzer.Analyze(item.Data).Score to the 0–1 range before assigning into
results to match the GPU backend behavior; in CpuAnalysisBackend (where score
and results[item.Texture] are set) apply a clamp (e.g., Math.Clamp or
equivalent) to the computed score so results only receives values between 0f and
1f, keeping CpuAnalysisBackend interchangeable with
TextureAnalyzer/GpuAnalysisBackend.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6e4261ff-542d-489f-a2d7-667b1ecb77a9

📥 Commits

Reviewing files that changed from the base of the PR and between 2dcea66 and c5f8bdc.

📒 Files selected for processing (13)
  • Editor/TextureCompressor/Analysis/Backends/AnalysisBackendFactory.cs
  • Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs
  • Editor/TextureCompressor/Analysis/Backends/GpuAnalysisBackend.cs
  • Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs
  • Editor/TextureCompressor/Core/Services/AnalysisResultHelper.cs
  • Editor/TextureCompressor/Core/Services/AnalysisResultHelper.cs.meta
  • Editor/TextureCompressor/Core/Services/TextureAnalyzer.cs
  • Editor/TextureCompressor/Core/Services/TextureCompressorService.cs
  • Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs
  • Tests/Editor/Analysis/Backends/AnalysisBackendFactoryTests.cs
  • Tests/Editor/Analysis/Backends/CpuAnalysisBackendTests.cs
  • Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs
  • Tests/Editor/Core/Services/TextureAnalyzerTests.cs

@Limitex Limitex merged commit 6e262ab into main Apr 17, 2026
5 checks passed
@Limitex Limitex deleted the refactor/separate-score-from-divisor branch April 17, 2026 14:52