Comparing changes
base repository: klauspost/compress
base: v1.16.0
head repository: klauspost/compress
compare: v1.16.1
- 17 commits
- 39 files changed
- 3 contributors
Commits on Feb 26, 2023
-
Commit 20f77ba
-
Commit adbe0c2
-
Commit 0d37eed
Commits on Feb 28, 2023
-
gzhttp: Add BREACH mitigation (#762)
See #761

## BREACH mitigation

[BREACH](http://css.csail.mit.edu/6.858/2020/readings/breach.pdf) is a specialized attack where attacker controlled data is injected alongside secret data in a response body. This can lead to sidechannel attacks, where observing the compressed response size can reveal if there are overlaps between the secret data and the injected data. For more information see https://breachattack.com/

It can be hard to judge if you are vulnerable to BREACH. In general, if you do not include any user provided content in the response body you are safe, but if you do, or you are in doubt, you can apply mitigations. `gzhttp` can apply [Heal the Breach](https://ieeexplore.ieee.org/document/9754554), or improved content aware padding.

```Go
// RandomJitter adds 1->n random bytes to output based on checksum of payload.
// Specify the amount of input to buffer before applying jitter.
// This should cover the sensitive part of your response.
// This can be used to obfuscate the exact compressed size.
// Specifying 0 will use a buffer size of 64KB.
// If a negative buffer is given, the amount of jitter will not be content dependent.
// This provides *less* security than applying content based jitter.
func RandomJitter(n, buffer int) option {
...
```

The jitter is added as a "Comment" field. This field has a 1 byte overhead, so actual extra size will be 2 -> n+1 (inclusive). A good option would be to apply 32 random bytes, with default 64KB buffer: `gzhttp.RandomJitter(32, 0)`.

Note that flushing the data forces the padding to be applied, which means that only data before the flush is considered for content aware padding.

### Examples

Adding the option `gzhttp.RandomJitter(32, 50000)` will apply from 1 up to 32 bytes of random data to the output. The number of bytes added depends on the content of the first 50000 bytes, or all of them if the output was less than that.

Adding the option `gzhttp.RandomJitter(32, -1)` will apply from 1 up to 32 bytes of random data to the output. Each call will apply a random amount of jitter. This should be considered less secure than content based jitter. This can be used if responses are very big, deterministic and the buffer size would be too big to cover where the mutation occurs.
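For orientation, here is a minimal sketch of wiring this option into a server. It assumes the two-argument `RandomJitter(n, buffer)` signature quoted above and the package's `gzhttp.NewWrapper` constructor; the handler, route, and port are hypothetical.

```Go
package main

import (
	"log"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	// Build a gzip wrapper that adds 1-32 bytes of content-dependent jitter,
	// buffering the default 64KB of output before the padding is decided.
	wrapper, err := gzhttp.NewWrapper(gzhttp.RandomJitter(32, 0))
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical handler: echoes user-provided input next to secret data,
	// which is exactly the situation BREACH mitigation is meant for.
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("secret=token123&echo=" + r.URL.Query().Get("q")))
	})

	log.Fatal(http.ListenAndServe(":8080", wrapper(h)))
}
```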
Commit aeed811
Commits on Mar 1, 2023
-
tests: Add CICD Fuzz testing (#763)
Run mutating fuzz tests. Expand zip fuzz tests and limit memory usage somewhat.
Commit 421a587
Commits on Mar 3, 2023
-
ci: set minimal permissions to GitHub Workflows (#765)
Closes #764

Signed-off-by: Diogo Teles Sant'Anna <diogoteles@google.com>
Commit d6408a8
Commits on Mar 7, 2023
-
gzhttp: Remove a few unneeded allocs (#768)
```
Before:
Benchmark2kJitter-32  55479  21371 ns/op  95.83 MB/s  3575 B/op  20 allocs/op

After:
Benchmark2kJitter-32  54948  21599 ns/op  94.82 MB/s  3451 B/op  16 allocs/op
```

Don't read too much into the speed numbers; GC is not included. Most of the allocs are in `httptest.ResponseRecorder`.
Commit cd2407a
-
gzhttp: Fix crypto/rand.Read usage (#770)
rand.Reader.Read(p) is allowed to return fewer than len(p) bytes with no error, and the macOS implementation sometimes does. I don't know if it will do that for len(p) == 4, but rand.Read is safer in any case.
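To illustrate the distinction (a sketch, not the gzhttp code): `rand.Reader.Read` follows the general `io.Reader` contract and may return a short read with a nil error, while the `crypto/rand.Read` helper fills the whole slice or returns an error.

```Go
package main

import (
	"crypto/rand"
	"fmt"
	"log"
)

func main() {
	buf := make([]byte, 4)

	// rand.Reader.Read(buf) may legally return n < len(buf) with err == nil,
	// so callers would need io.ReadFull(rand.Reader, buf) to be safe.
	// rand.Read(buf) guarantees the slice is fully populated or err != nil.
	if _, err := rand.Read(buf); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("4 random bytes: %x\n", buf)
}
```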
Commit 0ba0010
Commits on Mar 8, 2023
-
gzhttp: Add SHA256 as paranoid option (#769)
```
Benchmark2kJitter-32          67309  17580 ns/op  116.50 MB/s  3478 B/op  17 allocs/op
Benchmark2kJitterParanoid-32  54398  21564 ns/op  94.97 MB/s   3438 B/op  16 allocs
```

### Paranoid?

The padding size is determined by the remainder of a CRC32 of the content. Since the payload contains elements unknown to the attacker, there is no reason to believe they can derive any information from this remainder, or predict it. However, those who feel uncomfortable with a CRC32 being used for this can enable "paranoid" mode, which will use SHA256 for determining the padding. The hashing itself is about 2 orders of magnitude slower, but in overall terms will maybe only reduce speed by 10%.

Paranoid mode has no effect if the buffer is < 0 (non-content aware padding).
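The idea can be sketched as follows. This is only an illustration of deriving the padding length from a content hash; `paddingLength` and its parameters are hypothetical names, not the gzhttp internals.

```Go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// paddingLength sketches how 1..n bytes of padding could be derived from the
// buffered payload: hash the content, then take the remainder modulo n, plus 1.
// Paranoid mode swaps the CRC32 for SHA256; the mapping stays the same.
func paddingLength(payload []byte, n uint32, paranoid bool) uint32 {
	if paranoid {
		sum := sha256.Sum256(payload)
		return binary.LittleEndian.Uint32(sum[:4])%n + 1
	}
	return crc32.ChecksumIEEE(payload)%n + 1
}

func main() {
	body := []byte("response with secret data and user input")
	fmt.Println("crc32 padding:  ", paddingLength(body, 32, false))
	fmt.Println("sha256 padding: ", paddingLength(body, 32, true))
}
```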
Commit 0f734cf
-
zstd: Fix ineffective block size check (#771)
When falling back to Go decoding, block sizes were not checked correctly. Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=56755
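For context, a hedged sketch of the kind of guard such a fallback needs: the zstd format caps a single block at 128 KiB and at the frame's window size, so an oversized block should be rejected before decoding. The names below are illustrative, not the package's internals.

```Go
package main

import (
	"errors"
	"fmt"
)

// maxBlockSize is the zstd format's upper bound on a single block (128 KiB).
const maxBlockSize = 128 << 10

var errBlockTooLarge = errors.New("zstd: block larger than allowed maximum")

// checkBlockSize rejects blocks exceeding the smaller of the window size
// and the format maximum; one way such a check could look.
func checkBlockSize(blockSize, windowSize int) error {
	limit := maxBlockSize
	if windowSize < limit {
		limit = windowSize
	}
	if blockSize > limit {
		return errBlockTooLarge
	}
	return nil
}

func main() {
	fmt.Println(checkBlockSize(64<<10, 1<<20))  // <nil>
	fmt.Println(checkBlockSize(256<<10, 1<<20)) // error
}
```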
Commit 3588812
Commits on Mar 10, 2023
-
zstd: Check FSE init values (#772)
* zstd: Check FSE init values

If `br.init(s.br.unread())` fails, it may decode bogus data if the previous block returned without reading everything from the bit reader. This is used to feed the huff0 table for literal decoding. Return the error correctly.

Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=56870

Add parsing of the OSS-reported input.

* Don't use file (yet)

Fail on error nilness mismatch.

* Revert useless file change
Commit 2e5a973
-
zstd: Report EOF from byteBuf.readBig (#773)
This method was inconsistent with its cousin readerWrapper.readBig in that it returned 0, nil when the input is too short.
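A self-contained sketch of the behavioral change, using a hypothetical helper rather than the package's `byteBuf`: when the input is shorter than requested, report an error instead of silently returning a short result.

```Go
package main

import (
	"fmt"
	"io"
)

// readBig sketches the fixed behavior: return io.EOF when nothing is left
// and io.ErrUnexpectedEOF when the input is shorter than requested, instead
// of returning a short slice with a nil error.
func readBig(in []byte, n int) ([]byte, error) {
	if len(in) == 0 {
		return nil, io.EOF
	}
	if len(in) < n {
		return nil, io.ErrUnexpectedEOF
	}
	return in[:n], nil
}

func main() {
	_, err := readBig([]byte{1, 2, 3}, 8)
	fmt.Println(err) // unexpected EOF
}
```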
Commit 7501499
-
gzhttp: Use strings for randomJitter to skip a copy (#767)
* gzhttp: Use strings for randomJitter to skip a copy
* gzhttp: Use crc32.Update instead of crc32.New

Skips another allocation:

name old time/op new time/op delta
2kJitter-8 29.4µs ± 4% 29.0µs ± 3% -1.30% (p=0.006 n=24+24)

name old speed new speed delta
2kJitter-8 69.7MB/s ± 4% 70.6MB/s ± 3% +1.30% (p=0.006 n=24+24)

name old alloc/op new alloc/op delta
2kJitter-8 3.40kB ± 4% 3.34kB ± 4% -1.94% (p=0.001 n=25+25)

name old allocs/op new allocs/op delta
2kJitter-8 16.0 ± 0% 15.0 ± 0% -6.25% (p=0.000 n=25+25)
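For the second bullet, the difference between the two standard-library calls looks roughly like this (an illustrative sketch; the table choice and sample data are placeholders):

```Go
package main

import (
	"fmt"
	"hash/crc32"
)

func main() {
	data := []byte("payload to checksum")

	// crc32.New allocates a hash.Hash32 on the heap.
	h := crc32.New(crc32.IEEETable)
	h.Write(data)
	withAlloc := h.Sum32()

	// crc32.Update works on a plain uint32 and allocates nothing,
	// which is why switching to it removes an allocation per request.
	noAlloc := crc32.Update(0, crc32.IEEETable, data)

	fmt.Println(withAlloc == noAlloc) // true: both compute the same CRC32
}
```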
Commit d900f26
Commits on Mar 12, 2023
-
huff0: Speed up compress1xDo (#774)
A specialized encFourSymbols produces better code than two inlined encTwoSymbols calls in a row. Its arguments need to be different to make the inlining work. (A rough illustration of this restructuring follows the benchmark results below.)

Benchmark results on amd64:

name old speed new speed delta
Compress1XReuseNone/digits-8 438MB/s ± 1% 461MB/s ± 1% +5.18% (p=0.000 n=10+9)
Compress1XReuseNone/gettysburg-8 254MB/s ± 1% 254MB/s ± 1% ~ (p=0.412 n=10+9)
Compress1XReuseNone/twain-8 363MB/s ± 1% 367MB/s ± 0% +1.05% (p=0.000 n=10+10)
Compress1XReuseNone/low-ent.10k-8 466MB/s ± 0% 485MB/s ± 1% +4.01% (p=0.000 n=9+10)
Compress1XReuseNone/superlow-ent-10k-8 305MB/s ± 0% 313MB/s ± 1% +2.49% (p=0.000 n=8+9)
Compress1XReuseNone/crash2-8 11.4MB/s ± 0% 11.4MB/s ± 1% ~ (p=0.458 n=9+8)
Compress1XReuseNone/endzerobits-8 15.7MB/s ± 2% 15.8MB/s ± 1% ~ (p=0.085 n=10+8)
Compress1XReuseNone/endnonzero-8 7.64MB/s ± 1% 7.65MB/s ± 1% ~ (p=0.957 n=10+10)
Compress1XReuseNone/case1-8 14.6MB/s ± 1% 14.7MB/s ± 1% ~ (p=0.381 n=10+10)
Compress1XReuseNone/case2-8 12.3MB/s ± 1% 12.3MB/s ± 0% ~ (p=0.097 n=9+9)
Compress1XReuseNone/case3-8 13.2MB/s ± 1% 13.1MB/s ± 1% ~ (p=0.540 n=10+10)
Compress1XReuseNone/pngdata.001-8 302MB/s ± 0% 302MB/s ± 1% ~ (p=0.815 n=8+9)
Compress1XReuseNone/normcount2-8 34.9MB/s ± 0% 34.9MB/s ± 1% ~ (p=0.646 n=9+10)
Compress1XReuseAllow/digits-8 444MB/s ± 1% 465MB/s ± 1% +4.71% (p=0.000 n=10+10)
Compress1XReuseAllow/gettysburg-8 282MB/s ± 0% 283MB/s ± 1% +0.39% (p=0.002 n=9+8)
Compress1XReuseAllow/twain-8 366MB/s ± 1% 369MB/s ± 1% +1.01% (p=0.000 n=10+10)
Compress1XReuseAllow/low-ent.10k-8 470MB/s ± 1% 488MB/s ± 0% +3.82% (p=0.000 n=9+9)
Compress1XReuseAllow/superlow-ent-10k-8 308MB/s ± 1% 313MB/s ± 1% +1.83% (p=0.000 n=10+10)
Compress1XReuseAllow/crash2-8 16.0MB/s ± 1% 16.0MB/s ± 0% ~ (p=0.356 n=10+10)
Compress1XReuseAllow/endzerobits-8 17.0MB/s ± 0% 17.1MB/s ± 0% +0.43% (p=0.001 n=10+9)
Compress1XReuseAllow/endnonzero-8 12.0MB/s ± 1% 12.0MB/s ± 1% ~ (p=0.858 n=10+9)
Compress1XReuseAllow/case1-8 18.2MB/s ± 1% 18.2MB/s ± 1% ~ (p=0.724 n=10+10)
Compress1XReuseAllow/case2-8 15.5MB/s ± 1% 15.3MB/s ± 3% -1.02% (p=0.049 n=10+10)
Compress1XReuseAllow/case3-8 16.4MB/s ± 0% 16.4MB/s ± 1% ~ (p=0.887 n=9+10)
Compress1XReuseAllow/pngdata.001-8 303MB/s ± 0% 304MB/s ± 0% +0.35% (p=0.000 n=9+9)
Compress1XReuseAllow/normcount2-8 45.0MB/s ± 1% 45.1MB/s ± 0% ~ (p=0.075 n=9+10)
Compress1XReusePrefer/digits-8 447MB/s ± 1% 467MB/s ± 0% +4.42% (p=0.000 n=9+9)
Compress1XReusePrefer/gettysburg-8 425MB/s ± 1% 429MB/s ± 0% +0.87% (p=0.000 n=10+10)
Compress1XReusePrefer/twain-8 367MB/s ± 1% 371MB/s ± 0% +1.11% (p=0.000 n=10+10)
Compress1XReusePrefer/low-ent.10k-8 474MB/s ± 1% 494MB/s ± 0% +4.22% (p=0.000 n=10+10)
Compress1XReusePrefer/superlow-ent-10k-8 313MB/s ± 0% 320MB/s ± 0% +2.09% (p=0.000 n=10+9)
Compress1XReusePrefer/crash2-8 63.2MB/s ± 1% 62.9MB/s ± 1% ~ (p=0.159 n=10+10)
Compress1XReusePrefer/endzerobits-8 24.8MB/s ± 2% 24.9MB/s ± 1% ~ (p=0.674 n=9+10)
Compress1XReusePrefer/endnonzero-8 33.8MB/s ± 0% 33.9MB/s ± 0% +0.27% (p=0.004 n=10+9)
Compress1XReusePrefer/case1-8 150MB/s ± 7% 152MB/s ± 2% ~ (p=0.175 n=10+9)
Compress1XReusePrefer/case2-8 144MB/s ± 0% 146MB/s ± 1% +1.11% (p=0.000 n=10+9)
Compress1XReusePrefer/case3-8 160MB/s ± 0% 160MB/s ± 0% ~ (p=0.593 n=10+10)
Compress1XReusePrefer/pngdata.001-8 313MB/s ± 1% 314MB/s ± 1% ~ (p=0.110 n=10+10)
Compress1XReusePrefer/normcount2-8 212MB/s ± 1% 215MB/s ± 0% +1.24% (p=0.000 n=10+10)
Compress4XReuseNone/digits-8 444MB/s ± 0% 461MB/s ± 6% +3.99% (p=0.008 n=7+9)
Compress4XReuseNone/gettysburg-8 252MB/s ± 1% 251MB/s ± 2% ~ (p=0.604 n=10+9)
Compress4XReuseNone/twain-8 364MB/s ± 0% 367MB/s ± 1% ~ (p=0.243 n=9+10)
Compress4XReuseNone/low-ent.10k-8 469MB/s ± 0% 489MB/s ± 1% +4.18% (p=0.000 n=9+10)
Compress4XReuseNone/superlow-ent-10k-8 304MB/s ± 1% 315MB/s ± 1% +3.38% (p=0.000 n=10+10)
Compress4XReuseNone/case1-8 14.5MB/s ± 0% 14.4MB/s ± 3% ~ (p=0.619 n=9+9)
Compress4XReuseNone/case2-8 12.1MB/s ± 0% 11.9MB/s ± 2% -1.44% (p=0.004 n=10+10)
Compress4XReuseNone/case3-8 12.9MB/s ± 0% 12.9MB/s ± 3% ~ (p=0.827 n=9+10)
Compress4XReuseNone/pngdata.001-8 301MB/s ± 0% 300MB/s ± 2% ~ (p=1.000 n=10+10)
Compress4XReuseNone/normcount2-8 34.2MB/s ± 1% 34.0MB/s ± 4% ~ (p=0.698 n=10+10)
Compress4XReuseAllow/digits-8 445MB/s ± 0% 470MB/s ± 0% +5.43% (p=0.000 n=10+10)
Compress4XReuseAllow/gettysburg-8 278MB/s ± 0% 280MB/s ± 1% +0.48% (p=0.006 n=9+10)
Compress4XReuseAllow/twain-8 365MB/s ± 0% 368MB/s ± 1% +0.95% (p=0.000 n=10+10)
Compress4XReuseAllow/low-ent.10k-8 471MB/s ± 1% 497MB/s ± 0% +5.62% (p=0.000 n=10+8)
Compress4XReuseAllow/superlow-ent-10k-8 307MB/s ± 1% 316MB/s ± 1% +3.03% (p=0.000 n=10+10)
Compress4XReuseAllow/case1-8 17.8MB/s ± 1% 17.8MB/s ± 0% +0.36% (p=0.006 n=10+9)
Compress4XReuseAllow/case2-8 15.0MB/s ± 0% 15.0MB/s ± 1% -0.35% (p=0.032 n=8+9)
Compress4XReuseAllow/case3-8 15.9MB/s ± 0% 15.9MB/s ± 0% ~ (p=0.556 n=9+9)
Compress4XReuseAllow/pngdata.001-8 302MB/s ± 0% 303MB/s ± 1% +0.40% (p=0.003 n=10+10)
Compress4XReuseAllow/normcount2-8 42.3MB/s ± 7% 43.4MB/s ± 0% ~ (p=0.108 n=9+10)
Compress4XReusePrefer/digits-8 428MB/s ± 7% 472MB/s ± 0% +10.29% (p=0.000 n=10+8)
Compress4XReusePrefer/gettysburg-8 417MB/s ± 1% 421MB/s ± 1% +1.03% (p=0.000 n=9+9)
Compress4XReusePrefer/twain-8 362MB/s ± 4% 370MB/s ± 0% +2.14% (p=0.000 n=9+9)
Compress4XReusePrefer/low-ent.10k-8 470MB/s ± 1% 501MB/s ± 0% +6.67% (p=0.000 n=9+9)
Compress4XReusePrefer/superlow-ent-10k-8 307MB/s ± 3% 322MB/s ± 0% +4.79% (p=0.000 n=10+9)
Compress4XReusePrefer/case1-8 129MB/s ± 3% 134MB/s ± 1% +3.70% (p=0.000 n=10+10)
Compress4XReusePrefer/case2-8 120MB/s ± 2% 122MB/s ± 1% +1.65% (p=0.001 n=9+10)
Compress4XReusePrefer/case3-8 130MB/s ± 1% 131MB/s ± 0% +0.79% (p=0.005 n=10+7)
Compress4XReusePrefer/pngdata.001-8 312MB/s ± 0% 313MB/s ± 0% +0.34% (p=0.043 n=10+9)
Compress4XReusePrefer/normcount2-8 183MB/s ± 2% 184MB/s ± 0% +0.72% (p=0.011 n=10+10)
Compress1XSizes/digits-100-8 63.0MB/s ± 2% 63.2MB/s ± 2% ~ (p=0.684 n=10+10)
Compress1XSizes/digits-200-8 111MB/s ± 2% 112MB/s ± 1% +1.68% (p=0.000 n=10+10)
Compress1XSizes/digits-500-8 204MB/s ± 2% 207MB/s ± 1% +1.73% (p=0.002 n=9+9)
Compress1XSizes/digits-1000-8 287MB/s ± 3% 295MB/s ± 1% +2.66% (p=0.000 n=10+10)
Compress1XSizes/digits-5000-8 423MB/s ± 1% 441MB/s ± 1% +4.34% (p=0.000 n=9+10)
Compress1XSizes/digits-10000-8 443MB/s ± 1% 460MB/s ± 1% +3.96% (p=0.000 n=9+10)
Compress1XSizes/digits-50000-8 442MB/s ± 0% 461MB/s ± 0% +4.49% (p=0.000 n=8+10)
Compress4XSizes/digits-100-8 61.6MB/s ± 0% 61.5MB/s ± 1% ~ (p=0.310 n=9+8)
Compress4XSizes/digits-200-8 108MB/s ± 1% 108MB/s ± 1% +0.51% (p=0.033 n=8+10)
Compress4XSizes/digits-500-8 202MB/s ± 1% 206MB/s ± 1% +2.03% (p=0.000 n=9+9)
Compress4XSizes/digits-1000-8 280MB/s ± 2% 292MB/s ± 1% +4.47% (p=0.000 n=10+10)
Compress4XSizes/digits-5000-8 419MB/s ± 0% 448MB/s ± 1% +6.98% (p=0.000 n=8+10)
Compress4XSizes/digits-10000-8 442MB/s ± 1% 474MB/s ± 0% +7.24% (p=0.000 n=8+9)
Compress4XSizes/digits-50000-8 437MB/s ± 2% 471MB/s ± 0% +7.70% (p=0.000 n=10+10)
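A rough, self-contained illustration of the restructuring described above, not the huff0 code itself; the toy `bitWriter`, `encTwo`, and `encFour` below only mirror the shape of the change (one specialized four-symbol helper instead of two two-symbol calls).

```Go
package main

import "fmt"

// A toy bit writer, loosely modeled on the shape of huff0's writer.
type bitWriter struct {
	bitContainer uint64
	nBits        uint8
}

// addBits appends value using bitCount bits (flushing is omitted here).
func (b *bitWriter) addBits(value uint16, bitCount uint8) {
	b.bitContainer |= uint64(value) << (b.nBits & 63)
	b.nBits += bitCount
}

// encTwoSymbols-style helper: two symbols per call.
func encTwo(b *bitWriter, v0, v1 uint16, n0, n1 uint8) {
	b.addBits(v0, n0)
	b.addBits(v1, n1)
}

// encFourSymbols-style helper: one specialized call covering four symbols,
// giving the compiler a single inlining decision and better register use
// than two back-to-back encTwo calls.
func encFour(b *bitWriter, v0, v1, v2, v3 uint16, n0, n1, n2, n3 uint8) {
	b.addBits(v0, n0)
	b.addBits(v1, n1)
	b.addBits(v2, n2)
	b.addBits(v3, n3)
}

func main() {
	var a, b bitWriter
	encTwo(&a, 1, 2, 3, 3)
	encTwo(&a, 3, 4, 3, 3)
	encFour(&b, 1, 2, 3, 4, 3, 3, 3, 3)
	fmt.Println(a.bitContainer == b.bitContainer, a.nBits == b.nBits) // true true
}
```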
Commit 34dac29
-
tests: Remove fuzz printing (#775)
Only report mismatches in nil state, and make that an error.
Commit c73f008
Commits on Mar 13, 2023
-
zstd: Speed up + improve best encoder (#776)
name old speed new speed delta
Encoder_EncodeAllSimple/best-8 14.8MB/s ± 3% 20.7MB/s ± 3% +39.53% (p=0.000 n=17+19)
Encoder_EncodeAllSimple4K/best-8 11.8MB/s ± 1% 19.2MB/s ± 6% +62.17% (p=0.000 n=20+20)

name old alloc/op new alloc/op delta
Encoder_EncodeAllSimple/best-8 14.0B ± 0% 10.2B ± 8% -27.07% (p=0.000 n=16+19)
Encoder_EncodeAllSimple4K/best-8 1.00B ± 0% 0.00B -100.00% (p=0.000 n=20+19)

Also, compressing enwik9 takes 6.375% less wall clock time.

Output from the silesia corpus and enwik9 is about 0.05% bigger, due to the different order in which comparisons are done:

dickens 3222189 3220994 (× 0.99963)
enwik9 259699309 259846164 (× 1.00057)
mozilla 16912341 16912437 (× 1.00001)
mr 3505553 3502823 (× 0.99922)
nci 2289871 2306320 (× 1.00718)
ooffice 2896410 2896907 (× 1.00017)
osdb 3390871 3390548 (× 0.99990)
reymont 1656006 1657380 (× 1.00083)
samba 4326783 4329898 (× 1.00072)
sao 5416932 5416648 (× 0.99995)
webster 9966351 9972808 (× 1.00065)
xml 538378 542277 (× 1.00724)
x-ray 5733061 5733121 (× 1.00001)
total 319554055 319728325 (× 1.00055)

This is still smaller than before #705.
Commit 80b543b
-
Commit 134768b
The full comparison is too large to render on this page. To see it locally, run:
git diff v1.16.0...v1.16.1