Let's say I have a set of 60 numbers in the range 0-100,000, like [81, 98, 115, 189, 254, ..., 97866, 98441, 99671], all unique and strictly increasing. Would it be mathematically possible to compress this sequence by 80-90%? So far I've tried gzip, which only compresses it by about 50%; of the other algorithms I tried, the best one I found was at about 0.68, almost 0.69. I haven't tried combining these algorithms yet, but is 80-90% even mathematically possible?
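To make the gzip figure concrete, here's roughly the kind of test I mean (a minimal sketch; the random stand-in data and the comma-separated text serialization are my assumptions, not my real setup):

```python
import gzip
import random

# Stand-in data: 60 unique, sorted integers in the range 0-100,000
values = sorted(random.sample(range(100001), 60))

# Serialize as comma-separated text (this serialization choice is an assumption,
# and it affects the measured ratio quite a bit)
raw = ",".join(str(v) for v in values).encode("ascii")

compressed = gzip.compress(raw)
print(f"raw: {len(raw)} bytes, gzip: {len(compressed)} bytes, "
      f"ratio: {len(compressed) / len(raw):.2f}")
```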
The target I'm currently trying to hit is a byte representation that is smaller than the number of elements in the set. So, for example, if this set [81, 98, 115, 189, 254, ..., 97866, 98441, 99671] could be encoded in something like 50 bytes, which is below 60, that would work great for me.
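Just to spell out the baseline I'm comparing that target against (my own arithmetic, nothing more): each value fits in 17 bits, so a plain fixed-width packing of 60 values is already around 128 bytes.

```python
import math

n_values = 60
max_value = 100_000

# Fixed-width packing: every value gets the same number of bits
bits_per_value = math.ceil(math.log2(max_value + 1))   # 17 bits
packed_bytes = math.ceil(n_values * bits_per_value / 8)

print(bits_per_value, "bits per value ->", packed_bytes, "bytes total")  # 17 -> 128
```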
I also know about delta encoding, i.e. replacing each value with its difference from the previous one, for example [81, 98, 115] → [81, 17, 17], but this by itself doesn't really achieve what I'm after. A sketch of what I mean is below.
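For clarity, this is the delta step I'm referring to (a quick sketch; the function names are just mine):

```python
def delta_encode(values):
    """Keep the first value; replace each later value with its gap from the previous one."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Undo the delta step by summing the gaps back up."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

print(delta_encode([81, 98, 115]))                 # [81, 17, 17]
print(delta_decode(delta_encode([81, 98, 115])))   # [81, 98, 115]
```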