Conversation

@AndyAyersMS
Member

Alternative take to #215 where jit-analyze is updated to look for
and process multiple bits of metric data.

With the pattern shown here it should be simple to add additional
metrics like the ones we already have.

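For a concrete picture of the pattern: the analysis keys off a small metric class hierarchy (the Metric base class and a PrologSizeMetric subclass appear in the diff hunk commented on below). The sketch here is illustrative only; the member names are assumptions, not the actual jit-analyze code.

public abstract class Metric
{
    // Illustrative sketch; the real member names in jit-analyze may differ.
    public abstract string Name { get; }          // e.g. "PerfScore", "CodeSize"
    public abstract string Unit { get; }          // e.g. "PerfScoreUnit", "byte"
    public abstract bool LowerIsBetter { get; }   // how a positive diff should be classified
    public double Value { get; set; }             // accumulated value for a file or method
}

public class PerfScoreMetric : Metric
{
    public override string Name => "PerfScore";
    public override string Unit => "PerfScoreUnit";
    public override bool LowerIsBetter => true;
}

With a structure like this, adding another metric is just one more small subclass.
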
@AndyAyersMS
Member Author

cc @dotnet/jit-contrib

Might be a bit slower, but I think processing time is dominated by file IO and text handling. It doesn't fully handle the "higher is better" case yet, but we don't have any metrics like that so far; see the sketch after the sample output below.

Sample output for perf score:

jit-analyze --base C:\repos\coreclr3\bin\diffs\XX\base --diff C:\repos\coreclr3\bin\diffs\XX\diff  -m PerfScore
Found 4 files with textual diffs.

Summary of Perf Score diffs:
(Lower is better)

Total PerfScoreUnits of diff: 7.10 (1.00% of base)
    diff is a regression.

Top file regressions (PerfScoreUnit):
        6.05 : s.dasm (14.34% of base)
        1.00 : ex.dasm (0.24% of base)
        0.05 : a.dasm (0.03% of base)

3 total files with Perf Score differences (0 improved, 3 regressed), 1 unchanged.

Top method regressions (PerfScoreUnit):
        6.05 (14.73% of base) : s.dasm - X:Main(ref):int
        1.00 ( 3.17% of base) : ex.dasm - X:Swap2(int)
        0.05 ( 0.18% of base) : a.dasm - X:Swap2(ref,int,int)

Top method regressions (percentage):
        6.05 (14.73% of base) : s.dasm - X:Main(ref):int
        1.00 ( 3.17% of base) : ex.dasm - X:Swap2(int)
        0.05 ( 0.18% of base) : a.dasm - X:Swap2(ref,int,int)

3 total methods with Perf Score differences (0 improved, 3 regressed), 12 unchanged.

1 files had text diffs but no metric diffs.
str.dasm had 2 diffs
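
As a sketch of how the not-yet-handled "higher is better" case could eventually plug in, classification would key off a per-metric flag. The helper below is hypothetical and reuses the illustrative Metric sketch from the description; it is not the committed jit-analyze code.

// Hypothetical helper, not the committed jit-analyze code.
static bool IsRegression(Metric metric, double baseValue, double diffValue)
{
    double delta = diffValue - baseValue;
    // For lower-is-better metrics (PerfScore, CodeSize) a positive delta is a regression;
    // a higher-is-better metric would invert the test.
    return metric.LowerIsBetter ? delta > 0 : delta < 0;
}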

@BruceForstall left a comment
Contributor

LGTM!

@BruceForstall
Contributor

@briansull This should make your change #215 unnecessary.

@briansull
Contributor

briansull commented Nov 7, 2019

I believe there are still parts of my change in #215 that are needed.
In particular, the changes to pass options on from jit-diff:

  • --count N
  • --perfscore
  • --codesize

@briansull left a comment
Contributor

Thank you Andy!

}

public class PrologSizeMetric : Metric
{
Contributor

I'm not sure if we care about PrologSizes; in my change I just removed it.

Member Author

Seems harmless to keep it, though. I expect going forward we will have lots of metrics.

@AndyAyersMS
Member Author

I can add your jit-diffs changes, though I will probably let count default to the jit-analyze default (which is 5) and use --metric instead of hard-coded metric options.

@briansull
Contributor

can add your jit-diffs changes,

OK, that would be fine with me, although I find that 5 is too small.

@AndyAyersMS
Member Author

We'll give 20 a try.

Fixed a few small issues in jit-analyze that I overlooked, including a build break. Not sure what the CI does here but it evidently doesn't try to build...?

@CarolEidt left a comment
Contributor

Overall, LGTM. I might have simply omitted the boolean for higher-is-better vs lower, since it's not actually supported (or currently needed) and then added it as needed, but I don't feel strongly about it.

@BruceForstall
Contributor

@AndyAyersMS The CI does build, but it apparently can't tell whether a build succeeds or not (you can see your failure in the build logs: https://dev.azure.com/dnceng/public/_build/results?buildId=416893)

cc @echesakovMSFT

@AndyAyersMS
Member Author

Ah, probably just using the repo build scripts, which don't propagate errors.

@AndyAyersMS
Member Author

Just a reminder on the new --metric option for jit-diff (it also works with jit-analyze):

jit-diff diff --metric PerfScore --pmi --base --base_root c:\repos\coreclr2 --diff --assembly c:\bugs\14574 
... 
Beginning PMI PerfScore Diffs for c:\bugs\14574
\ Finished 4/4 Base 4/4 Diff [5.3 sec]
Completed PMI PerfScore Diffs for c:\bugs\14574 in 5.36s
Diffs (if any) can be viewed by comparing: C:\repos\coreclr3\bin\diffs\dasmset_10\base C:\repos\coreclr3\bin\diffs\dasmset_10\diff
Analyzing PerfScore diffs...
Found 4 files with textual diffs.
PMI PerfScore Diffs for c:\bugs\14574 for x64 default jit
Summary of Perf Score diffs:
(Lower is better)
Total PerfScoreUnits of diff: 7.10 (1.00% of base)
    diff is a regression.
Top file regressions (PerfScoreUnits):
        6.05 : s.dasm (14.34% of base)
        1.00 : ex.dasm (0.24% of base)
        0.05 : a.dasm (0.03% of base)
3 total files with Perf Score differences (0 improved, 3 regressed), 1 unchanged.
Top method regressions (PerfScoreUnits):
        6.05 (14.73% of base) : s.dasm - X:Main(ref):int
        1.00 ( 3.17% of base) : ex.dasm - X:Swap2(int)
        0.05 ( 0.18% of base) : a.dasm - X:Swap2(ref,int,int)
Top method regressions (percentages):
        6.05 (14.73% of base) : s.dasm - X:Main(ref):int
        1.00 ( 3.17% of base) : ex.dasm - X:Swap2(int)
        0.05 ( 0.18% of base) : a.dasm - X:Swap2(ref,int,int)
3 total methods with Perf Score differences (0 improved, 3 regressed), 12 unchanged.
1 files had text diffs but no metric diffs.
str.dasm had 2 diffs
Completed analysis in 0.54s

@AndyAyersMS AndyAyersMS merged commit 1cbe62c into dotnet:master Nov 8, 2019
@AndyAyersMS AndyAyersMS deleted the RefactorAnalyzeForMultipleMetrics branch November 8, 2019 23:47