if you have to pack a lot of things, the sheer power of modern multi-core/multi-threaded CPUs may come in handy. unless… the tools you’re using don’t enable it by default, and you end up running everything on a single core/thread.

as I’ve been spending most of my time recently with FreeBSD and macOS, the tools I typically use are command-line ones.

therefore, for every gzip - consider using pigz. and for every bzip2 - consider using pbzip2.
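
as a quick illustration of the drop-in part - a minimal sketch, where big_file.log is just a placeholder name; the familiar gzip-style flags work the same way:

pigz -9 big_file.log     # produces big_file.log.gz, just like gzip -9 would
pigz -d big_file.log.gz  # decompresses, just like gunzip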

example uses for pigz:

tar cf - src_files | pigz > OUTPUT_FILE.tar.gz
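
pigz picks a thread count on its own by default; if you don’t want it grabbing the whole machine, you can cap it with -p - the 8 below is just an example value:

tar cf - src_files | pigz -p 8 > OUTPUT_FILE.tar.gz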

or via tar’s built-in capability to launch external compression programs:

tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz src_files
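
for unpacking, tar -xf usually detects gzip on its own, but you can point it at the parallel tool explicitly - unpigz is the gunzip-style link that pigz installs. note that gzip decompression is mostly single-threaded even in pigz (extra threads only handle reading, writing and checksums), so don’t expect the same gain there:

tar --use-compress-program=unpigz -xf OUTPUT_FILE.tar.gz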

with pbzip2 it’s basically the same - and instead of the long --use-compress-program option you can simply use -I:

tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
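
pbzip2 also works standalone as a drop-in for bzip2, and takes -p# to pin the processor count - -p4 and the file name below are just example values. unlike gzip, bzip2 archives created by pbzip2 can also be decompressed in parallel:

pbzip2 -p4 -9 big_file.tar       # compress on 4 cores
pbzip2 -d -p4 big_file.tar.bz2   # parallel decompress (for pbzip2-made files)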

the time gain depends on the I/O subsystem and on the number of CPU cores/threads available (by default, both programs use only cores, not threads), but for an example set of 100+ binary files, I went from 3m53s down to 34s simply by switching gzip to pigz.
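
if you want to reproduce that kind of measurement on your own data, a rough sketch - assuming bash/zsh, where the time keyword covers the whole pipeline; file names are placeholders and the numbers will obviously differ per machine:

time tar cf - src_files | gzip > out_gzip.tar.gz   # single-threaded baseline
time tar cf - src_files | pigz > out_pigz.tar.gz   # multi-threaded run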

good luck compressing your data!