Re: qemu CI & ccache: cache size is too small
From: Daniel P. Berrangé
Subject: Re: qemu CI & ccache: cache size is too small
Date: Mon, 3 Jun 2024 12:25:01 +0100
User-agent: Mutt/2.2.12 (2023-09-09)
On Mon, May 27, 2024 at 01:49:41PM +0300, Michael Tokarev wrote:
> Hi!
>
> Noticed today that a rebuild of basically the same tree (a few commits apart)
> in CI results in just an 11% ccache hit rate:
>
> https://gitlab.com/mjt0k/qemu/-/jobs/6947445337#L5054
>
> while it should be near 100%. What's interesting in there is:
>
> 1) cache size is close to max cache size,
> and more important,
> 2) cleanups performed 78
>
> so it had to remove old entries before the build finished.
>
> So effectively, our ccache usage is an extra burden, not a help.
I think this ends up being different per job. If I try the
'build-system-fedora' job, for example, I get a 99% cache
hit rate, and 0.2 GB of cache storage used:
https://gitlab.com/berrange/qemu/-/jobs/6876054586
$ ccache --show-stats
Cacheable calls:     3018 / 3208 (94.08%)
  Hits:                49 / 3018 ( 1.62%)
    Direct:             0 /   49 ( 0.00%)
    Preprocessed:      49 /   49 (100.0%)
  Misses:            2969 / 3018 (98.38%)
Uncacheable calls:    190 / 3208 ( 5.92%)
Local storage:
  Cache size (GB):    0.2 /  0.5 (30.55%)
  Hits:                49 / 3018 ( 1.62%)
  Misses:            2969 / 3018 (98.38%)
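(As an aside, those counters live inside the cache directory and accumulate
across runs, so when comparing individual runs it helps to reset them before
the build and dump them afterwards. A rough sketch only - I haven't checked
whether our CI scripts already do exactly this:

  # reset the counters so --show-stats reflects only this build
  ccache --zero-stats

  make -j"$(nproc)"      # or however the job drives the build

  # per-run hit/miss numbers plus current cache size
  ccache --show-stats

Since the cached directory is restored from the previous pipeline, skipping
the reset would otherwise mix several runs into one set of numbers.)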
If I compare the jobs, the big differences are the target lists:
CentOS: '--target-list=ppc64-softmmu or1k-softmmu s390x-softmmu
x86_64-softmmu rx-softmmu sh4-softmmu'
Fedora: '--target-list=microblaze-softmmu mips-softmmu xtensa-softmmu
m68k-softmmu riscv32-softmmu ppc-softmmu sparc64-softmmu'
And then a few minor things:
CentOS: '--disable-nettle' '--enable-gcrypt' '--enable-vfio-user-server'
'--enable-modules' '--enable-trace-backends=dtrace'
Fedora: '--disable-gcrypt' '--enable-nettle'
The crypto won't make a difference to caching. Modules ought not to make a
difference either, as that's just moving some .o files from the exe to a
.so, not adding many more exes.
The trace backends will add quite a few .o files, but I'm not sure that
will impact the cache.
IOW, I bet the target list makes the big difference in the amount of data
that needs to be cached, which would explain the different cache usage.
I wonder what the picture looks like for cache hits / cache disk usage
across all the other jobs. Is CentOS an outlier, or is Fedora an outlier?
We do want the cache hit rate to be in the 90+% range if possible, as it has
a big impact on build time.
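For anyone who wants to pull that picture together without clicking through
every job page, something along these lines against the GitLab API should do
it. This is only a sketch - the project and pipeline IDs are placeholders,
and GITLAB_TOKEN can be left unset for public projects:

  #!/bin/sh
  # Print the ccache summary lines from every job trace in one pipeline.
  PROJECT=12345678      # placeholder: numeric project id
  PIPELINE=87654321     # placeholder: pipeline id
  API=https://gitlab.com/api/v4

  curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
      "$API/projects/$PROJECT/pipelines/$PIPELINE/jobs?per_page=100" |
  jq -r '.[] | "\(.id) \(.name)"' |
  while read -r id name; do
      echo "== $name"
      curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
          "$API/projects/$PROJECT/jobs/$id/trace" |
      grep -E 'Hits:|Cache size' || echo "   (no ccache stats in trace)"
  done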
> It should be increased at least, I think. But it's actually difficult
> to say, really - is the cache shared between all builds, or is it unique
> for each build config? Because if it's the former, it shouldn't even
> work, since different ccache versions use different formats for the files
> in the cache.
It is unique per job, per buildtest-template.yml:
  cache:
    paths:
      - ccache
    key: "$CI_JOB_NAME"
    when: always
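If we do decide to bump the limit, the knob is ccache's max_size rather than
the GitLab cache stanza itself. A sketch of the two usual ways to set it -
whether our template currently uses CCACHE_MAXSIZE or something else is an
assumption on my part, so adjust to match the real buildtest-template.yml:

  # raise the cap from the 0.5 GB seen in the stats above to e.g. 2 GB,
  # for this process and its children only
  export CCACHE_MAXSIZE=2G

  # or persist it in the cache directory's own config (ccache.conf),
  # which then travels along with the cached ./ccache directory
  ccache --max-size=2G

The trade-off is simply more data for GitLab to save and restore on each
job, so it is worth watching how much the cache archive step slows down.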
> What's unique in my pipeline run - I ran just a single build job
> in two pipelines, nothing more.
In my test I ran a job, then re-ran it in the same pipeline.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|