new_audit(bootup-time): Bootup time per script #3563
Conversation
DevTools is not giving me parse time of scripts :( Also, when scripts are not moved to v8.parseOnBackground, I can't seem to find any parse event. Any idea @paulirish?
paulirish
left a comment
generally the correct model, but some small suggestions. holler on IM if you have questions
const timelineModel = new DevtoolsTimelineModel(trace);
const bottomUpByName = timelineModel.bottomUpGroupBy('URL');
const result = new Map();
bottomUpByName.children.forEach((value, url) => {
a bit of bikeshedding?
value ==> perUrlNode ?
return;
}

const evaluateTime = value.children.get('EvaluateScript:@' + url) || {};
we could do this but i'd rather iterate over the perUrlNode children (which would be perTaskPerUrlNode i guess)...
so iterate over them.. then grab perTaskPerUrlNode.event.name. This should give you v8.compile and EvaluateScript. We can then reuse this stuff:
lighthouse/lighthouse-core/audits/page-execution-timings.js
Lines 17 to 98 in 5fbff87
and we'll want any children with events whose name is in the 3 script* categories. (also note: many of those task names are for instant events, so they have 0 duration. ;)
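A minimal sketch of that iteration, with hypothetical node shapes standing in for the real DevtoolsTimelineModel bottom-up nodes (the mock data, the `timingsForUrlNode` helper, and the exact set of script event names are illustrative, not the real API):

```javascript
// Hypothetical: bucket self time per trace-event name for one URL's
// bottom-up node. Only script-related events are kept, so 0-duration
// instant events and unrelated tasks are skipped.
const SCRIPT_EVENT_NAMES = new Set(['EvaluateScript', 'v8.compile', 'FunctionCall']);

function timingsForUrlNode(perUrlNode) {
  const timings = {};
  perUrlNode.children.forEach(perTaskPerUrlNode => {
    const name = perTaskPerUrlNode.event.name;
    if (!SCRIPT_EVENT_NAMES.has(name)) return; // not a scripting task
    timings[name] = (timings[name] || 0) + perTaskPerUrlNode.selfTime;
  });
  return timings;
}

// Mock node mimicking the children-as-Map shape used above:
const mockNode = {
  children: new Map([
    ['EvaluateScript', {event: {name: 'EvaluateScript'}, selfTime: 40.1}],
    ['v8.compile', {event: {name: 'v8.compile'}, selfTime: 9.6}],
    ['UpdateLayoutTree', {event: {name: 'UpdateLayoutTree'}, selfTime: 0.2}],
  ]),
};
console.log(timingsForUrlNode(mockNode)); // { EvaluateScript: 40.1, 'v8.compile': 9.6 }
```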
we can exclude GC from the report for now, but might as well collect it here and put it in extendedInfo? or ignore it and leave a comment saying so..
perhaps they are on a different thread?
Not all scripts are parsed on the parser thread (only the ones that have async/defer). When they're not, it seems like no parse time is registered in the trace; it's probably included in the compile time of the script?
const tableDetails = BootupTime.makeTableDetails(headings, results);

return {
  score: totalBootupTime < 4000,
4s is quite generous, did we consider using a numeric instead of binary score to allow nuance?
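One way a numeric score could look, sketched as a clamped linear ramp instead of the binary `totalBootupTime < 4000` check (the `computeBootupScore` name and the 1000ms/6000ms control points are made up for illustration, not Lighthouse's actual scoring curve):

```javascript
// Hypothetical numeric score: 1 at or below 1000ms of bootup time,
// 0 at or above 6000ms, interpolating linearly in between.
function computeBootupScore(totalBootupTimeMs) {
  const best = 1000;  // illustrative "full credit" threshold
  const worst = 6000; // illustrative "zero credit" threshold
  const score = (worst - totalBootupTimeMs) / (worst - best);
  return Math.max(0, Math.min(1, score)); // clamp to [0, 1]
}

console.log(computeBootupScore(4000)); // 0.4
```

The binary check gives a 4000ms page the same score as a 100ms page; a curve like this lets nearby pages be distinguished.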
 * @return {!Map<string, Number>}
 */
static getExecutionTimingsByURL(trace) {
  const timelineModel = new DevtoolsTimelineModel(trace);
are we doing this in a lot of places/should it be a computed artifact?
Are you referring to making a computed artifact requestExecutionTimingsByUrl(trace), or rather having a computed artifact requestExecutionTimings(trace, 'URL')?
the idea is to make DTM a computed artifact.
we were also discussing doing a deepFreeze (from https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) on the trace data before sending it into devtools, so it won't mutate.
if there are problems with this we could just deepFreeze each event in the traceEvents array. should be fine.
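A minimal deepFreeze sketch along the lines of the MDN Object.freeze docs (the trace literal here is a made-up stand-in; real traces are much larger and this naive recursion assumes no circular references):

```javascript
// Recursively freeze an object graph so downstream code can't mutate it.
// Note: does not handle cycles, which is fine for plain trace-event JSON.
function deepFreeze(obj) {
  for (const name of Object.getOwnPropertyNames(obj)) {
    const value = obj[name];
    if (value && typeof value === 'object') {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const trace = {traceEvents: [{name: 'EvaluateScript', dur: 12}]};
deepFreeze(trace);
try {
  trace.traceEvents[0].dur = 99; // silently ignored, or throws in strict mode
} catch (e) { /* TypeError under strict mode */ }
console.log(trace.traceEvents[0].dur); // 12
```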
should i do it in this PR? or just a followup?
bb663e2 to 8ec5fb3
const WebInspector = require('../lib/web-inspector');
const Util = require('../report/v2/renderer/util.js');

const group = {
could pull these two objects into a lib/traces/trace-event-groups.js or something so that they can be used across audits
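A shared module along those lines could look roughly like this (file name from the comment above; the group names and event-name membership are illustrative, not Lighthouse's real groupings):

```javascript
// Hypothetical lib/traces/trace-event-groups.js: a single place mapping
// trace event names to display groups, so every audit uses the same buckets.
const EVENT_GROUPS = {
  scripting: ['EvaluateScript', 'v8.compile', 'FunctionCall'],
  styleLayout: ['RecalculateStyles', 'Layout'],
};

// Return the group a trace event name belongs to, or 'other'.
function groupForEvent(eventName) {
  for (const [group, names] of Object.entries(EVENT_GROUPS)) {
    if (names.includes(eventName)) return group;
  }
  return 'other';
}

if (typeof module !== 'undefined' && module.exports) {
  module.exports = {EVENT_GROUPS, groupForEvent};
}
```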
paulirish
left a comment
spoiler alert: I implemented all my suggestions in #3703
wdyt? can you review?
one of the major things is only showing the scripting categories, not every kind of task we can associate with the scriptURL. that means our header array is now fixed.
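With only scripting categories shown, the fixed header array might look something like this (keys and labels are purely illustrative; see #3703 for the actual implementation):

```javascript
// Hypothetical fixed table headings once only script categories are reported,
// rather than one column per arbitrary task type found in the trace.
const headings = [
  {key: 'url', itemType: 'url', text: 'URL'},
  {key: 'scripting', itemType: 'ms', text: 'Script Evaluation'},
  {key: 'scriptParseCompile', itemType: 'ms', text: 'Script Parse & Compile'},
];
```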
assert.equal(Math.round(output.rawValue), 176);

const valueOf = name => output.extendedInfo.value[name];
assert.deepEqual(valueOf('https://www.google-analytics.com/analytics.js'), {'Evaluate Script': 40.1, 'Compile Script': 9.6, 'Recalculate Style': 0.2});
It was kind of confusing that our extendedInfo reported tasks, but the UI reported taskGroups. I don't have a strong opinion on what should be exposed in extendedInfo. Maybe we just leave extendedInfo empty this time? In the tests we could assert against the table details instead.
it might be confusing indeed but isn't it handy to show the raw information somewhere in the json report so people who want to use it can use it? Or isn't that why extendedInfo exists?
];

// Group tasks per url
const groupsPerUrl = Array.from(bootupTimings).map(([url, durations]) => {
this loop groups individual tasks into taskGroups. I figured this could be a lot simpler if we just do the same thing up inside the inner getExecutionTimingsByURL loop. that's what you'll see in #3703
perUrlNode.children.forEach((perTaskPerUrlNode) => {
  const taskGroup = WebInspector.TimelineUIUtils.eventStyle(perTaskPerUrlNode.event);
  tasks[taskGroup.title] = tasks[taskGroup.title] || 0;
  tasks[taskGroup.title] += Number((perTaskPerUrlNode.selfTime || 0).toFixed(1));
i felt weird about this rounding here as we're summing, so i moved it to outside of the loop.
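A tiny illustration of why rounding each addend inside the loop can drift from rounding the final sum once (the self-time values are made up):

```javascript
// Two self times of 1.24... wait, use 1.25 to make the drift exact:
const selfTimes = [1.25, 1.25];

// Rounding inside the loop, as in the original diff: each 1.25 becomes 1.3
// before summing, so precision is discarded on every iteration.
let inside = 0;
for (const t of selfTimes) inside += Number(t.toFixed(1));
console.log(inside); // 2.6

// Rounding once, outside the loop: sum exactly, then round for display.
let outside = 0;
for (const t of selfTimes) outside += t;
outside = Number(outside.toFixed(1));
console.log(outside); // 2.5
```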
 */
static getExecutionTimingsByURL(trace) {
  const timelineModel = new DevtoolsTimelineModel(trace);
  const bottomUpByName = timelineModel.bottomUpGroupBy('URL');
show bootup time per script
Fixes part of #3105