@bartkusa

Something has been bugging me about our homepage's waterfall. Even though we use LAB.js to download our JS in parallel, sometimes the requests are delayed and/or serialized. I'm not sure it's a too-many-requests-to-our-CDN problem, and I'm not sure domain-sharding would fix it.

Here's a decent waterfall. The JS is downloading in parallel with the CSS, although, on connection 8, you can see that one JS file delays another. If you squint, you can see a delay between when connections 3 and 6 open up, and when the second JS file is requested on 8. But maybe that's just my imagination.

[Screenshot: waterfall 1]

Waterfall 2 is worse: there's an obvious missed opportunity to download JS files, even though plenty of connections are open.

[Screenshot: waterfall 2]

Waterfall 3 is the worst: JS isn't requested until the CSS is almost completely downloaded.
[Screenshot: waterfall 3]

I don't know what's causing this. Maybe I'm way wrong and this is solvable with domain-sharding. But maybe not. So why not try using <link rel=preload>? Maybe it'll magically make browsers smarter and eagerly request stuff sooner.
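For illustration only (this isn't the actual change in this PR), the idea boils down to emitting hint tags for the page's scripts in the `<head>`, so the browser can start fetching them before LAB.js (or any other loader) runs. A minimal sketch, with made-up helper names and URLs:

```js
// Minimal sketch, not this PR's implementation: emit <link rel="preload">
// hints for a page's scripts so the browser can start fetching them before
// LAB.js (or any other loader) executes. The helper name and URLs are
// illustrative.
function renderPreloadLinks(scriptUrls) {
  return scriptUrls
    .map(src => `<link rel="preload" href="${src}" as="script">`)
    .join('\n');
}

// renderPreloadLinks(['/static/common.js', '/static/page.js']) produces:
//   <link rel="preload" href="/static/common.js" as="script">
//   <link rel="preload" href="/static/page.js" as="script">
```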

In the long run, maybe this can replace LAB.js; LAB is a lot of bytes in every request, bytes that could instead be HTML or inline styles or something more valuable.

I made it a config thing in React-Server, so pages can opt in, and we can individually measure any perf boosts (or lack thereof).

@CLAassistant

CLA assistant check
All committers have accepted the CLA.

@bartkusa
Author

BTW, I signed the CLA. Twice. Dunno what the problem is.

@davidalber
Contributor

CLAassistant is looking at the usernames of the committers, not the username of the PR creator. You committed with the @redfin-andrew-bartkus account, so that's the one that needs to sign the CLA.

@gigabo
Contributor

gigabo commented Mar 15, 2016

Very nice.

The only thing I worry about is the PageConfig toggle. We'd like to be able to turn this on and off based on a value in an http response ("bouncer"). But PageConfig is synchronous, so we couldn't do that as this is currently set up.

I can think of a few other ways to toggle. Unfortunately, I don't think any would be as clean as what you've got here:

  • Add a lifecycle method to PageUtil (e.g. getPreloadScripts). Could return a boolean, but a list of scripts would be more consistent with other page methods. This would be pretty redundant with getScripts and getSystemScripts.
  • Add a key to the handleRoute response object (e.g. {code: 200, preloadScripts: true}). I'd rather not add a bunch of magical keys there.
  • Add a key to the script objects returned by the getScripts and getSystemScripts lifecycle methods (sketched below). This would allow us to have a middleware that waits for our HTTP response and modifies scripts on the way back up the stack.

That last one's not pretty, but maybe the least ugly of the bunch?
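To make that last option concrete, here's a rough sketch, with the caveat that the per-script `preload` key (and the exact descriptor shape) is hypothetical, not an existing React-Server API:

```js
// Hypothetical sketch of option 3: script descriptors returned from the
// getScripts() lifecycle method carry a per-script `preload` flag, which a
// middleware could flip once the async "bouncer" response comes back.
// The `preload` key and the descriptor shape are illustrative only.
class ExamplePage {
  getScripts() {
    return [
      { href: '/static/critical.js',  preload: true  }, // hint the browser early
      { href: '/static/secondary.js', preload: false }, // keep off the preload list
    ];
  }
}
```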

@roblg, @bartkusa, any better ideas?

@bartkusa
Author

Given that the goal is to tell the client ASAP what JS to start downloading, I don't think there's any value in waiting for async data to turn this on.

I was imagining that we'd enable this conditionally for some pages with a query param, e.g. ?preload=1. That'd let us measure it. If it looked good in controlled scenarios, we'd just hardcode it on for everybody on those pages.
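Something along these lines, assuming an Express-style request object (the function and param names are just for illustration, not React-Server API):

```js
// Illustrative sketch of the ?preload=1 toggle, assuming an Express-style
// request object. shouldPreload() and the query param are not part of
// React-Server; they're just a way to gate the feature for measurement.
function shouldPreload(req) {
  return req.query.preload === '1';
}
```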

I'm struggling to think of a scenario where this is only valuable sometimes. It seems like it's either always valuable or never valuable.

@gigabo
Contributor

gigabo commented Mar 15, 2016

> I don't think there's any value in waiting for async data to turn this on.

I place a pretty high value on the ability to dynamically control features at runtime using the aforementioned "bouncer" endpoint.

> the goal is to tell the client ASAP what JS to start downloading

To be clear, this is async data that the preloads will be waiting on anyway (at Redfin) since handleRoute won't resolve until it's available.

> I'm struggling to think of a scenario where this is only valuable sometimes.

How about a script that I don't care about loading quickly? Presumably omitting that from preload would free up resources for other more important scripts? We currently have a single LABjs chain, but we've been talking for a while about supporting branches. I don't want a preload from a secondary branch preempting a script in my critical path!

@doug-wade
Collaborator

Talked with @bartkusa offline and ran some WebPageTest runs against our details pages (specifically against this test page), and got the following results:

WebPageTest Performance Results (Median Run, First View):

| Metric | Without preload (Run 10) | With preload (Run 8) |
| --- | --- | --- |
| Load Time | 10.628s | 10.401s |
| First Byte | 0.306s | 0.330s |
| Start Render | 1.894s | 1.594s |
| User Time | 10.233s | 9.825s |
| Speed Index | 2038 | 2099 |
| DOM Elements | 3997 | 4094 |
| Document Complete: Time | 10.628s | 10.401s |
| Document Complete: Requests | 157 | 157 |
| Document Complete: Bytes In | 4,096 KB | 4,102 KB |
| Fully Loaded: Time | 11.185s | 14.859s |
| Fully Loaded: Requests | 164 | 165 |
| Fully Loaded: Bytes In | 4,119 KB | 4,125 KB |

That seems to imply we're going to improve our page load by about 220 ms (10.628s vs. 10.401s load time).

@gigabo
Contributor

gigabo commented Mar 17, 2016

200ms would be huge! 🚀

Still would like an interface that supports asynchronous opt-in and per-script opt-in. ;)

@bartkusa
Author

I need to make some adjustments. This shouldn't merge in as-is.
