Avoid slices in entries of decomposition tables #86
Conversation
For small arrays of data, slices are expensive:

- there's the obvious size of the array (`size_of::<char>() * length`)
- there's the size of the slice itself (`size_of::<(*const char, usize)>()`)
- there's the size of the relocation for the pointer in the slice. The worst case is 64-bit ELF, where it is `3 * size_of::<usize>()`(!).

Most entries in decomposition tables are 2 characters or less, so the overhead for each of these tables is incredibly large.

To give an idea, a "Hello, World" print executable (fresh from `cargo new`) built with `--release` on my machine has 17712 bytes of .rela.dyn (relocations) and 9520 bytes of .data.rel.ro (relocatable read-only data). Adding unicode-normalization as a dependency and changing the code to `println!("{}", String::from_iter("Hello, world!".nfc()));`, those jump to 156336 and 147968 bytes, respectively.

For comparison, with unicode-normalization 0.1.8 (the last release before the perfect hashes), they were 18168 and 9872 bytes. This is however compensated by .text (code) being larger (314607 bytes with 0.1.8 vs. 234639 with 0.1.19); likewise for .rodata (non-relocatable read-only data): 225979 bytes with 0.1.8 vs. 82523 with 0.1.19.

This can be alleviated by replacing slices with indexes into a single slice per decomposition table, saving 228K overall while barely adding to code size (160 bytes). This also makes the overall cost of unicode-normalization lower than it was in 0.1.8.

As far as performance is concerned, at least on my machine, it makes virtually no difference on `cargo bench`.

On master:

```
running 22 tests
test bench_is_nfc_ascii ... bench: 13 ns/iter (+/- 0)
test bench_is_nfc_normalized ... bench: 23 ns/iter (+/- 0)
test bench_is_nfc_not_normalized ... bench: 347 ns/iter (+/- 2)
test bench_is_nfc_stream_safe_ascii ... bench: 13 ns/iter (+/- 0)
test bench_is_nfc_stream_safe_normalized ... bench: 31 ns/iter (+/- 0)
test bench_is_nfc_stream_safe_not_normalized ... bench: 374 ns/iter (+/- 2)
test bench_is_nfd_ascii ... bench: 9 ns/iter (+/- 0)
test bench_is_nfd_normalized ... bench: 29 ns/iter (+/- 2)
test bench_is_nfd_not_normalized ... bench: 9 ns/iter (+/- 0)
test bench_is_nfd_stream_safe_ascii ... bench: 16 ns/iter (+/- 0)
test bench_is_nfd_stream_safe_normalized ... bench: 40 ns/iter (+/- 0)
test bench_is_nfd_stream_safe_not_normalized ... bench: 9 ns/iter (+/- 0)
test bench_nfc_ascii ... bench: 525 ns/iter (+/- 1)
test bench_nfc_long ... bench: 186,528 ns/iter (+/- 1,613)
test bench_nfd_ascii ... bench: 283 ns/iter (+/- 30)
test bench_nfd_long ... bench: 120,183 ns/iter (+/- 4,510)
test bench_nfkc_ascii ... bench: 513 ns/iter (+/- 1)
test bench_nfkc_long ... bench: 192,922 ns/iter (+/- 1,673)
test bench_nfkd_ascii ... bench: 276 ns/iter (+/- 30)
test bench_nfkd_long ... bench: 137,163 ns/iter (+/- 2,159)
test bench_streamsafe_adversarial ... bench: 323 ns/iter (+/- 5)
test bench_streamsafe_ascii ... bench: 25 ns/iter (+/- 0)
```

With patch applied:

```
running 22 tests
test bench_is_nfc_ascii ... bench: 13 ns/iter (+/- 0)
test bench_is_nfc_normalized ... bench: 23 ns/iter (+/- 0)
test bench_is_nfc_not_normalized ... bench: 347 ns/iter (+/- 7)
test bench_is_nfc_stream_safe_ascii ... bench: 13 ns/iter (+/- 0)
test bench_is_nfc_stream_safe_normalized ... bench: 36 ns/iter (+/- 1)
test bench_is_nfc_stream_safe_not_normalized ... bench: 377 ns/iter (+/- 14)
test bench_is_nfd_ascii ... bench: 9 ns/iter (+/- 0)
test bench_is_nfd_normalized ... bench: 29 ns/iter (+/- 3)
test bench_is_nfd_not_normalized ... bench: 10 ns/iter (+/- 0)
test bench_is_nfd_stream_safe_ascii ... bench: 16 ns/iter (+/- 0)
test bench_is_nfd_stream_safe_normalized ... bench: 39 ns/iter (+/- 1)
test bench_is_nfd_stream_safe_not_normalized ... bench: 10 ns/iter (+/- 0)
test bench_nfc_ascii ... bench: 545 ns/iter (+/- 2)
test bench_nfc_long ... bench: 186,348 ns/iter (+/- 1,660)
test bench_nfd_ascii ... bench: 281 ns/iter (+/- 2)
test bench_nfd_long ... bench: 124,720 ns/iter (+/- 5,967)
test bench_nfkc_ascii ... bench: 517 ns/iter (+/- 4)
test bench_nfkc_long ... bench: 194,943 ns/iter (+/- 1,636)
test bench_nfkd_ascii ... bench: 274 ns/iter (+/- 0)
test bench_nfkd_long ... bench: 127,973 ns/iter (+/- 1,161)
test bench_streamsafe_adversarial ... bench: 320 ns/iter (+/- 3)
test bench_streamsafe_ascii ... bench: 25 ns/iter (+/- 0)
```
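The layout change described above can be sketched as follows. The names and the linear lookup are illustrative only (the crate actually uses generated perfect-hash lookup functions), but the shape is the same: entries store plain integer offsets into one shared `char` array instead of fat pointers, so they need no relocations.

```rust
// Before: each entry carries a fat pointer (pointer + length), and the
// pointer requires a relocation in the final binary.
const ENTRIES_SLICES: &[(char, &[char])] = &[
    ('\u{C0}', &['\u{41}', '\u{300}']), // À -> A + combining grave accent
    ('\u{C1}', &['\u{41}', '\u{301}']), // Á -> A + combining acute accent
];

// After: one shared char array per table; entries store (offset, length)
// as plain integers, which are pure data and need no relocations.
const DECOMPOSED_CHARS: &[char] = &['\u{41}', '\u{300}', '\u{41}', '\u{301}'];
const ENTRIES_INDEXED: &[(char, u16, u16)] = &[
    ('\u{C0}', 0, 2),
    ('\u{C1}', 2, 2),
];

fn lookup(c: char) -> Option<&'static [char]> {
    ENTRIES_INDEXED
        .iter()
        .find(|&&(k, _, _)| k == c)
        .map(|&(_, off, len)| &DECOMPOSED_CHARS[off as usize..(off + len) as usize])
}

fn main() {
    // Both layouts answer the same query.
    assert_eq!(lookup('\u{C0}'), Some(ENTRIES_SLICES[0].1));
    assert_eq!(lookup('\u{FF}'), None);
    println!("ok");
}
```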
Forgot to mention in the commit: relocatable data also means it's not shareable between processes.
With a hacked-up cargo-bloat:

(output not captured)

with the patch applied:

(output not captured)
@Manishearth are you planning to spin a new release soon?
No, but I can if you make a PR that bumps the version.
", ".join("'\\u{%s}'" % hexify(c) for c in table[k]))) | ||
offsets = {} | ||
offset = 0 | ||
out.write("pub(crate) const %s_DECOMPOSED_CHARS: &[char] = &[\n" % name.upper()) |
Wouldn't a fixed array avoid the relocations altogether? As in, a `%s_DECOMPOSED_CHARS: [char; _] =` or so?
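For reference, a fixed-size array does store its elements inline, so there is no pointer to relocate; a minimal sketch (illustrative names, not the generator's actual output) comparing the footprint of the two shapes:

```rust
use std::mem::size_of;

// A fixed array stores the chars inline: no pointer, no relocation.
const A_GRAVE_DECOMP: [char; 2] = ['\u{41}', '\u{300}'];

fn main() {
    // A slice reference (&[char]) is a fat pointer: one word for the
    // data pointer plus one word for the length.
    assert_eq!(size_of::<&[char]>(), 2 * size_of::<usize>());
    // The inline array is just the chars themselves (4 bytes each).
    assert_eq!(size_of::<[char; 2]>(), 2 * size_of::<char>());
    assert_eq!(A_GRAVE_DECOMP.len(), 2);
    println!("ok");
}
```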
The tables are const. Because of that, the slices for the tables themselves are not stored in .data.rel.ro at all, so there are actually no relocations for those (also, that would only be one per table, which is not as dramatic as one per entry).