core(script-elements): parallelize getting request contents #9713
Conversation
(note to others: reads a lot easier from https://github.com/GoogleChrome/lighthouse/pull/9713/files?w=1)
These serial requests come from when we used to be wary of doing things out of order, so it's probably OK to parallelize (ideally we could benchmark on a phone (e.g. WPT), since disk/memory contention will be much worse there, but we can hopefully trust the implementation to chunk appropriately; at least it's in afterPass). Looks like a good win on desktop, though!
cc @patrickhulce, since the reasoning for going serial isn't discussed in #3950.
@connorjclark do you want to add ScriptElements to the byte-efficiency expectations (since unminified-javascript is already using it) and/or one of the other smoke tests, so we can get coverage of it?
```
async: true,
defer: false,
source: 'body',
devtoolsNodePath: '2,HTML,1,BODY,1438,SCRIPT',
```
This alternated between 1438/1437 until I changed the script to use an 8s delay (there's a 7s timer that adds an element).
Done. I verified the smoke test worked in master too.
This was the primary concern and the reason for doing it serially: if you flood the phone all at once requesting all scripts, and it's trying to serialize all of that JSON for the protocol, it adds memory pressure. If it's a device with low memory and it needs to start evicting to handle the protocol traffic, then once a network file's contents are evicted from memory we basically lose out on it completely and get those Sentry errors ("Unable to read network response body" or whatever) that are otherwise very difficult to track down.

I don't have any data on how much more often this happens, and I imagine trace size with screenshots is a bigger memory-pressure concern that we don't do much about, so I won't protest too much here. But it'd be nice to keep some upper limit on the number of parallel requests whenever we try to read network file contents from the memory cache. Love the
@connorjclark In Sentry it's the family of errors that are all indicators something was evicted at some point due to memory pressure, all of which occur at a fairly high rate.

And to be clear, it's not the artifact sizes that are the memory-pressure concern. It's the entirety of network record bodies, decoded image data, parsed and compiled script, etc. stored by Chrome during page load that's at risk of being evicted when the totality of these things climbs. The errors occurring at all already indicate that Chrome decided it was under memory pressure in those situations, so if we increase our impact by 8MB, I would expect us to lose out on 8MB more of assets in other gatherers. Requesting a lot of things at once over the protocol is just another way to increase this incidence rate, so if we can avoid it, it'd be nice to keep it limited, that's all :)
If we could detect that we are running in a memory-limited environment, would it be OK to branch behavior based on that? Example...
If we implement the max-parallelism style approach I suggested, then we get the "branching" for free by controlling
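A minimal sketch of what a max-parallelism approach could look like (this is not the PR's actual implementation; the helper name, the limit values, and `fetchScriptSource` are all hypothetical): a generic mapper that keeps an upper bound on in-flight protocol requests, so a low-memory device never sees all the `getRequestContent` calls at once, and the "branching" falls out of choosing the limit.

```javascript
// Hypothetical sketch, not Lighthouse's implementation: run async tasks
// over `items` with at most `limit` tasks in flight at a time.
async function mapWithConcurrencyLimit(items, limit, asyncFn) {
  const results = new Array(items.length);
  let nextIndex = 0;

  // Each worker repeatedly claims the next unprocessed index until the
  // queue is drained. JS is single-threaded between awaits, so the
  // `nextIndex++` claim is race-free.
  async function worker() {
    while (nextIndex < items.length) {
      const i = nextIndex++;
      results[i] = await asyncFn(items[i]);
    }
  }

  // Spawn at most `limit` workers; fewer if there are fewer items.
  const workers = Array.from({length: Math.min(limit, items.length)}, worker);
  await Promise.all(workers);
  return results;
}
```

Usage might then look like `mapWithConcurrencyLimit(scriptIds, isLowMemoryDevice ? 1 : 8, id => fetchScriptSource(id))`, where a limit of 1 degrades back to the old serial behavior on constrained devices.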
@exterkamp can benchmark index be used reliably for this? I thought you concluded it was ... lacking
@connorjclark it most definitely is lacking for performance-metric accuracy (see my conclusions at #9085), but we're just scaling something totally nonessential here, and the scale of its inaccuracy is like something that should be
@paulirish pointed out
LGTM
@connorjclark still ready to land?
yeah seems cool
We don't need to request these resources in series.
Result: on average, cnn.com loads 1s faster.
Data:
Data is from this script I have: https://github.com/GoogleChrome/lighthouse/blob/timings-script/lighthouse-core/scripts/timings.js