Comment by jacquesm

7 days ago

I've done some work on compression, though really long ago, and I am very far from an expert in the field; in fact, I'm not an expert in any field ;) The best I ever did was a way to compress video better than what was available at the time, but wavelets overtook that and I have not kept current.

I'm curious about two things:

- is it really that much better (if so, that would by itself be a publishable result), where "better" means:

  - not worse for other cases

  - always better for the cases documented

I think that's a fair challenge.

- is it correct?

And as a sidetrack to the latter: can it be understood to the point that you can prove it is correct? Unfortunately, I don't have experience with your toolchain, but that's a nice learning opportunity.

Question: are you familiar with these?

https://www.esa.int/Enabling_Support/Space_Engineering_Techn...

https://en.wikipedia.org/wiki/Calgary_corpus

https://corpus.canterbury.ac.nz/

As a black box, it works. It produces smaller binaries that, when extracted, match the original file bit by bit.

I tested it across 100 packages: better efficiency across the board.
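The bit-by-bit check described above can be sketched as a simple round-trip harness. This is a minimal, hypothetical version: the actual compressor isn't named, so `gzip` from the standard library stands in for it, and `check_package` is an illustrative helper, not part of any real tool.

```python
import gzip
import pathlib

def roundtrip(data: bytes) -> tuple[bool, float]:
    """Compress then decompress, returning (bit-exact?, compression ratio).

    gzip is a stand-in here; the real test would call the actual
    compressor and extractor under evaluation.
    """
    compressed = gzip.compress(data)
    restored = gzip.decompress(compressed)
    return restored == data, len(compressed) / max(len(data), 1)

def check_package(path: str) -> bool:
    """Verify one file survives a round trip bit for bit."""
    data = pathlib.Path(path).read_bytes()
    ok, ratio = roundtrip(data)
    print(f"{path}: exact={ok}, ratio={ratio:.2f}")
    return ok
```

Run across a corpus of packages, a harness like this gives exactly the two numbers that matter: whether extraction is lossless, and by how much the output shrinks.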

But I don't know if I (or anyone) would want to maintain software like this, where it's a complete black box.

It was a fun experiment, though. It proves that with a robust testing harness you can do interesting things with pure AI coding.