build: Lower zlib compression level for tmp/repo #2888

Merged

cgwalters merged 1 commit into coreos:main from cgwalters:lowcompress-archive on Jun 1, 2022
Conversation

@cgwalters (Member) commented:

In openshift/release#29031 we are
debugging very slow build times. Of the approximately 3h build
time, 30 minutes is spent compressing all the files into the archive repo
in `tmp/repo`.

This is all essentially wasted time, because we now canonically represent
the ostree commit as an ociarchive, which is compressed again
separately anyway.

Eventually, we should drop `tmp/repo` and have `cache/repo-build`
be the canonical uncompressed cache.

In the short term though, ostree makes it easy to turn down the
zlib compression level, which can have a dramatic impact here.
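Concretely, this is a one-line repo config change. A minimal sketch, assuming ostree's `archive.zlib-level` key as documented in ostree.repo-config(5) (integer 1–9, default 6):

```
# tmp/repo/config — sketch; the [archive] zlib-level key is per ostree.repo-config(5)
[core]
repo_version=1
mode=archive
[archive]
zlib-level=2
```

Equivalently, something like `ostree --repo=tmp/repo config set archive.zlib-level 2` should set the same key from the command line.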

Locally on my desktop:

Before:

```
$ time sudo ostree --repo=tmp/repo pull-local cache/repo-build/ 988a1ffb47df4dda08df4d97d8e5f39f34c624d5c54b9c870f696203011758ef
3009 metadata, 19604 content objects imported; 1.3 GB content written

________________________________________________________
Executed in    8.33 secs    fish           external
   usr time   44.23 secs  836.00 micros   44.23 secs
   sys time    3.95 secs  108.00 micros    3.95 secs
```

After:

```
$ time sudo ostree --repo=tmp/repo pull-local cache/repo-build/ 988a1ffb47df4dda08df4d97d8e5f39f34c624d5c54b9c870f696203011758ef
3009 metadata, 19604 content objects imported; 1.3 GB content written

________________________________________________________
Executed in    6.09 secs    fish           external
   usr time   21.94 secs    0.00 micros   21.94 secs
   sys time    4.34 secs  955.00 micros    4.34 secs
```

The wall clock time isn't hugely different, but that's because
my desktop is a hyperthreaded, otherwise idle i9-9900k.  The actual
CPU time spent is notably lower.

In the Prow cluster where we're contending for CPU on slower processors,
and further we are limited by cpu shares, this should help.
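The tradeoff is easy to demonstrate outside of ostree. This is not part of the PR, just an illustration: Python's zlib module wraps the same zlib library, so level-vs-CPU behavior is directly comparable.

```
import random
import time
import zlib

# Illustration (not from the PR): generate ~1 MB of partially
# compressible data (6 bits of entropy per byte) and compare levels.
random.seed(42)
data = bytes(random.randrange(64) for _ in range(1_000_000))

for level in (2, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # Lower levels trade a somewhat larger output for much less CPU time;
    # the result still round-trips losslessly.
    assert zlib.decompress(compressed) == data
    print(f"level {level}: {len(compressed):>7} bytes in {elapsed:.3f}s")
```

Lower levels typically cut compression CPU time substantially versus the default (6) at a modest size cost, consistent with the usr-time drop in the transcripts above.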
@miabbott (Member) left a comment:


Great analysis! Thanks for helping the cause!
