⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 54 minutes and 51 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the … We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

📝 Walkthrough

Backends no longer accept or propagate a …
🎯 Estimated code review effort: 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🧹 Nitpick comments (4)
Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs (1)

12-17: Document that the backend contract is now a raw score map.

The XML comment still reads as if `AnalyzeBatch()` returns full analysis results. Since the signature is now `Dictionary<Texture2D, float>`, make the `[0, 1]` raw-score semantics explicit.

📝 Suggested doc update

```diff
- /// Analyzes a batch of textures and returns per-texture results.
+ /// Analyzes a batch of textures and returns raw per-texture complexity scores in [0, 1].
```
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs around
lines 12-17: update the XML doc for ITextureAnalysisBackend.AnalyzeBatch to
state it returns a raw score map in [0, 1]. Change the comment on AnalyzeBatch
to explicitly document that the method returns a Dictionary<Texture2D, float>
containing per-texture raw scores normalized to the [0, 1] range (where 0 means
lowest relevance/quality and 1 means highest), and note that implementations
still control pixel access strategy; reference the interface
ITextureAnalysisBackend, method AnalyzeBatch, and types Texture2D/TextureInfo
so callers understand the contract.
```

Editor/TextureCompressor/Core/Services/TextureCompressorService.cs (1)
183-202: Centralize raw-score reconstruction in one helper.

This loop is now duplicated here and in Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs, lines 167-199. Extracting a batch helper from `AnalysisResultHelper` would keep emission boosting and divisor/resolution reconstruction from drifting between the build and preview paths.

♻️ Proposed extraction

```diff
- var analysisResults = new Dictionary<Texture2D, TextureAnalysisResult>();
- foreach (var kvp in rawScores)
- {
-     var texture = kvp.Key;
-     if (!textures.TryGetValue(texture, out var info))
-         continue;
-
-     analysisResults[texture] = AnalysisResultHelper.BuildResult(
-         kvp.Value,
-         texture.width,
-         texture.height,
-         info.IsEmission,
-         info.IsNormalMap,
-         _complexityCalc,
-         _processor
-     );
- }
+ var analysisResults = AnalysisResultHelper.BuildResults(
+     rawScores,
+     textures,
+     _complexityCalc,
+     _processor
+ );
```
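A sketch of what the proposed batch helper could look like. The name `BuildResults`, the `TextureInfo` fields, and the `IComplexityCalc`/`IProcessor` parameter types are assumptions inferred from the call site, not the actual codebase:

```csharp
// Hypothetical AnalysisResultHelper.BuildResults batch overload.
// IComplexityCalc and IProcessor stand in for whatever types
// _complexityCalc and _processor actually have in the project.
public static Dictionary<Texture2D, TextureAnalysisResult> BuildResults(
    Dictionary<Texture2D, float> rawScores,
    Dictionary<Texture2D, TextureInfo> textures,
    IComplexityCalc complexityCalc,
    IProcessor processor)
{
    var results = new Dictionary<Texture2D, TextureAnalysisResult>(rawScores.Count);
    foreach (var kvp in rawScores)
    {
        // Same guard as the original loops: skip textures without metadata.
        if (!textures.TryGetValue(kvp.Key, out var info))
            continue;

        results[kvp.Key] = BuildResult(
            kvp.Value,
            kvp.Key.width,
            kvp.Key.height,
            info.IsEmission,
            info.IsNormalMap,
            complexityCalc,
            processor);
    }
    return results;
}
```

With a helper of this shape, both `TextureCompressorService` and `PreviewGenerator` reduce to a single call, so any future change to emission boosting or divisor/resolution logic lands in one place.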
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Editor/TextureCompressor/Core/Services/TextureCompressorService.cs around
lines 183-202: the duplicated loop that reconstructs raw scores and builds
TextureAnalysisResult exists in TextureCompressorService (the foreach over
rawScores using AnalysisResultHelper.BuildResult) and in PreviewGenerator;
extract a single batch helper on AnalysisResultHelper (e.g., a new method like
BuildResultsFromRawScores or ReconstructBatchResults) that takes the rawScores
dictionary plus textures metadata (width, height, IsEmission, IsNormalMap) and
dependencies (_complexityCalc, _processor) and returns Dictionary<Texture2D,
TextureAnalysisResult>; replace the loop in TextureCompressorService and the
similar code in Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs to call
this new helper so emission boosting and divisor/resolution logic are
centralized.
```

Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs (1)
359-374: Drive parity with identical combined weights.

`CreateGpuBackend()` pins the `AnalysisConstants.CombinedDefault*Weight` values, while `CreateCpuBackend()` still relies on the `AnalyzerFactory.Create(strategy)` defaults. For `AnalysisStrategyType.Combined`, the parity assertions are therefore no longer guaranteed to compare the same configuration on both backends.

♻️ Suggested tweak

```diff
  private CpuAnalysisBackend CreateCpuBackend(AnalysisStrategyType strategy)
  {
-     var standardAnalyzer = AnalyzerFactory.Create(strategy);
+     var standardAnalyzer =
+         strategy == AnalysisStrategyType.Combined
+             ? AnalyzerFactory.Create(
+                 strategy,
+                 AnalysisConstants.CombinedDefaultFastWeight,
+                 AnalysisConstants.CombinedDefaultHighAccuracyWeight,
+                 AnalysisConstants.CombinedDefaultPerceptualWeight
+             )
+             : AnalyzerFactory.Create(strategy);
      var normalMapAnalyzer = AnalyzerFactory.CreateNormalMapAnalyzer();
      return new CpuAnalysisBackend(standardAnalyzer, normalMapAnalyzer, _processor);
  }
```
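The underlying risk is easy to see in isolation: even when every component score matches between backends, the combined score diverges as soon as the weight vectors differ. A minimal illustration (the weight values below are made up, not the project's actual constants):

```csharp
// Illustrative only: same component scores, different weight vectors.
static float Combined(float fast, float high, float perceptual,
                      float wFast, float wHigh, float wPerceptual) =>
    fast * wFast + high * wHigh + perceptual * wPerceptual;

// If the GPU backend pins, say, (0.5f, 0.3f, 0.2f) while the CPU
// factory defaults to (0.4f, 0.4f, 0.2f), the two weighted sums
// differ even with identical inputs -- the parity test then compares
// configurations rather than backends.
```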
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs around lines 359-374:
CreateCpuBackend and CreateGpuBackend are no longer constructing equivalent
analyzers for AnalysisStrategyType.Combined: CreateGpuBackend pins
AnalysisConstants.CombinedDefaultFast/HighAccuracy/PerceptualWeight while
CreateCpuBackend calls AnalyzerFactory.Create(strategy) with defaults. Update
CreateCpuBackend so when strategy == AnalysisStrategyType.Combined it
constructs the CPU standard analyzer using the same combined weights (e.g. call
the AnalyzerFactory method that accepts weights or a CreateCombined/
CreateWithWeights overload and pass AnalysisConstants.CombinedDefaultFastWeight,
CombinedDefaultHighAccuracyWeight, CombinedDefaultPerceptualWeight); otherwise
keep AnalyzerFactory.Create(strategy) for non-combined strategies, then return
the CpuAnalysisBackend as before.
```

Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs (1)
123-126: Clamp the CPU score before publishing it.

`TextureAnalyzer` now advertises 0–1 raw scores (Editor/TextureCompressor/Core/Services/TextureAnalyzer.cs, line 34), and the GPU backend enforces that range (Editor/TextureCompressor/Analysis/Backends/GpuAnalysisBackend.cs, line 209). Clamping here keeps the two backends interchangeable if an analyzer ever returns a slightly under- or over-normalized value.

Proposed fix

```diff
  else
  {
      score = item.Analyzer.Analyze(item.Data).Score;
  }
+ if (score < 0f)
+     score = 0f;
+ else if (score > 1f)
+     score = 1f;
+
  results[item.Texture] = score;
```
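Since this is Unity editor code, the same guard can also be written with Unity's built-in helper, assuming `UnityEngine` is already in scope in this file:

```csharp
// Equivalent one-line clamp: Mathf.Clamp01 maps negative values to 0f
// and values above one to 1f, leaving in-range scores untouched.
score = Mathf.Clamp01(score);
results[item.Texture] = score;
```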
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
@Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs around lines
123-126: clamp the score produced by item.Analyzer.Analyze(item.Data).Score to
the 0–1 range before assigning into results to match the GPU backend behavior;
in CpuAnalysisBackend (where score and results[item.Texture] are set) apply a
clamp (e.g., Math.Clamp or equivalent) to the computed score so results only
receives values between 0f and 1f, keeping CpuAnalysisBackend interchangeable
with TextureAnalyzer/GpuAnalysisBackend.
```
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6e4261ff-542d-489f-a2d7-667b1ecb77a9
📒 Files selected for processing (13)

- Editor/TextureCompressor/Analysis/Backends/AnalysisBackendFactory.cs
- Editor/TextureCompressor/Analysis/Backends/CpuAnalysisBackend.cs
- Editor/TextureCompressor/Analysis/Backends/GpuAnalysisBackend.cs
- Editor/TextureCompressor/Core/Interfaces/ITextureAnalysisBackend.cs
- Editor/TextureCompressor/Core/Services/AnalysisResultHelper.cs
- Editor/TextureCompressor/Core/Services/AnalysisResultHelper.cs.meta
- Editor/TextureCompressor/Core/Services/TextureAnalyzer.cs
- Editor/TextureCompressor/Core/Services/TextureCompressorService.cs
- Editor/TextureCompressor/UI/Preview/PreviewGenerator.cs
- Tests/Editor/Analysis/Backends/AnalysisBackendFactoryTests.cs
- Tests/Editor/Analysis/Backends/CpuAnalysisBackendTests.cs
- Tests/Editor/Analysis/Backends/GpuCpuParityTests.cs
- Tests/Editor/Core/Services/TextureAnalyzerTests.cs