scripts/bench/README.md (12 additions, 2 deletions)
@@ -2,11 +2,21 @@ Work-in-progress benchmarks.

## Running the suite

+You'll need two folders to compare, each of them containing `react.min.js` and `react-dom-server.min.js`. You can run `npm run build` at the repo root to get a `build` folder with these files.
> **Review comment (Collaborator):** It would be nice if there was an easier way to run against a release build without manually recreating the build folder.


+For example, if you want to compare a stable version against master, you can create folders called `build-stable` and `build-master` and use the benchmark scripts like this:

```
-$ ./measure.py react-a.min.js a.txt react-b.min.js b.txt
-$ ./analyze.py a.txt b.txt
+$ ./measure.py build-stable stable.txt build-master master.txt
+$ ./analyze.py stable.txt master.txt
```

+The test measurements (the second argument to `analyze`, `master.txt` in this example) will be compared to the control measurements (the first argument to `analyze`, `stable.txt` in this example).

+Changes with a `-` sign in the output mean `master` is faster than `stable`.

+You can name the folders any way you like; this was just an example.
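
If you need to produce those folders in the first place, one manual way (not part of these scripts; `<stable-tag>` below is a placeholder for whatever release you want as the control) is to build each revision and copy the output aside:

```
# build the control bundles from a release tag, then set them aside
git checkout <stable-tag>
npm run build
cp -r build build-stable

# build the test bundles from master, then set them aside
git checkout master
npm run build
cp -r build build-master
```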

## Running one
One thing you can do with them is benchmark initial render time for a realistic hierarchy:

scripts/bench/measure.py (9 additions, 4 deletions)
@@ -73,12 +73,14 @@ def _run_js_in_node(js, env):
def _measure_ssr_ms(engine, react_path, bench_name, bench_path, measure_warm):
return engine(
"""
-var reactCode = readFile(ENV.react_path);
+var reactCode = readFile(ENV.react_path + '/react.min.js');
+var reactDOMServerCode = readFile(ENV.react_path + '/react-dom-server.min.js');
var START = now();
globalEval(reactCode);
+globalEval(reactDOMServerCode);
var END = now();
-ReactDOMServer = React.__SECRET_DOM_SERVER_DO_NOT_USE_OR_YOU_WILL_BE_FIRED || React;
-if (typeof React !== 'object') throw new Error('React not laoded');
+if (typeof React !== 'object') throw new Error('React not loaded');
+if (typeof ReactDOMServer !== 'object') throw new Error('ReactDOMServer not loaded');
report('factory_ms', END - START);

globalEval(readFile(ENV.bench_path));
@@ -117,7 +119,7 @@ def _measure_ssr_ms(engine, react_path, bench_name, bench_path, measure_warm):

def _main():
if len(sys.argv) < 2 or len(sys.argv) % 2 == 0:
sys.stderr.write("usage: measure.py react.min.js out.txt react2.min.js out2.txt\n")
sys.stderr.write("usage: measure.py build-folder-a a.txt build-folder-b b.txt\n")
return 1
# [(react_path, out_path)]
react_paths = sys.argv[1::2]
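
As a reading aid for the diff above: the paired-argument convention in `_main` is unchanged; only the first item of each pair is now a build folder rather than a single `react.min.js` file. A quick sanity check and an invocation under the new usage, reusing the README's example folder names:

```
# each build folder is expected to contain both bundles
$ ls build-stable
react-dom-server.min.js  react.min.js

# arguments pair up positionally: (build-stable, stable.txt), (build-master, master.txt)
$ ./measure.py build-stable stable.txt build-master master.txt
```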
@@ -142,7 +144,10 @@ def _main():
sys.stderr.write("\n")
sys.stderr.flush()

+# You can set this to a number of trials you want to do with warm JIT.
+# They are disabled by default because they are slower.
trials = 0

sys.stderr.write("Measuring SSR for PE with warm JIT (%d slow trials)\n" % trials)
sys.stderr.write("_" * trials + "\n")
for i in range(trials):