add bigram compression to makeqstrdata (save ~100 bytes on trinket m0 de_DE) #3370 · adafruit/circuitpython

Merged: 6 commits merged into adafruit:main from compression-bigrams on Sep 10, 2020

Conversation

@jepler commented Sep 2, 2020

Compress common Unicode bigrams by making code points in the range 0x80 - 0xbf (inclusive) represent them. Then, they can be greedily encoded and the substituted code points handled by the existing Huffman compression. Normally code points in the range 0x80-0xbf are not used in Unicode, so we stake our own claim. Using the arguably more correct "Private Use Area" (PUA) would mean that for scripts that only use code points under 256 we would use more memory for the "values" table.

"bigram" means "two letters", and is also sometimes called a "digram". It has nothing to do with "big RAM". For our purposes, a bigram represents two successive Unicode code points; for instance, in our English build for trinket m0 the most frequent are: ['t ', 'e ', 'in', 'd ', ...].
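A minimal sketch of the idea (illustrative only, not the actual makeqstrdata.py code; the helper names are made up, and it assumes a simple left-to-right greedy pass):

    from collections import Counter

    def choose_bigrams(texts, max_bigrams=32):
        # Rank two-code-point sequences by raw frequency in the corpus.
        counts = Counter()
        for t in texts:
            for a, b in zip(t, t[1:]):
                counts[a + b] += 1
        return [bg for bg, _ in counts.most_common(max_bigrams)]

    def substitute(text, bigrams):
        # Greedy left-to-right: replace each known bigram with a single
        # code point in the claimed 0x80+ range.  The result then goes
        # through the existing Huffman coder unchanged.
        out, i = [], 0
        while i < len(text):
            pair = text[i:i+2]
            if pair in bigrams:
                out.append(chr(0x80 + bigrams.index(pair)))
                i += 2
            else:
                out.append(text[i])
                i += 1
        return "".join(out)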

In Japanese the most frequent bigram is 'ませ', turning what would be 6 UTF-8 bytes into just 6 bits. After bigram encoding, a standalone 'ま' is 8 bits and 'せ' no longer appears at all, so this is an example of a very successful bigram compression. Before bigrams, 'ま' and 'せ' were each 6 bits. The second most frequent bigram, せん, is not even used in compression because of the overlap problem discussed below. (ません commonly appears at the end of a verb form, indicating a polite present negative, e.g., "cannot assign".)
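A back-of-envelope check of that accounting, using the bit counts quoted above:

    # Savings per occurrence of the 'ませ' bigram (bit counts as quoted).
    utf8_bits = len("ませ".encode("utf-8")) * 8   # 48 bits as raw UTF-8
    before = 6 + 6   # Huffman bits for 'ま' plus 'せ' before bigrams
    after = 6        # Huffman bits for the single bigram code point
    print(utf8_bits, before, after)   # each occurrence: 12 bits -> 6 bits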

The bigrams are selected based on frequency in the corpus, but the selection is not necessarily optimal, for these reasons I can think of:

  • Suppose the corpus was just "tea" repeated 100 times. The top bigrams would be "te" and "ea". However, due to overlap, "te" could never be used. Thus, some bigrams might actually waste space (see the demo after this list).
    • I assume this is why, e.g., bigram 0x86 "s " is more frequent than bigram 0x85 " a" in English for Trinket M0: sequences like "can't add" get the "t " bigram and are then unable to use the " a" bigram. It's definitely why, in the Japanese translation, the bigram 0x81 せん is not used at all!
  • If a bigram is frequent, then so are its constituents. Say that "i" and "n" both encode to just 5 or 6 bits; then the Huffman code for "in" had better compress to 10 or fewer bits or it's a net loss!
    • I checked, though! "i" is 5 bits, "n" is 6 bits (lucky guess), but the bigram 0x83 is also just 6 bits, so this one is a win of 5 bits for every "in" minus overhead. Yay, this round goes to team compression.
    • On the other hand, the least frequent bigram 0x9d " n" is 10 bits long and its constituent code points are 4+6 bits so there's no savings, but there is the cost of the table entry.
    • and somehow 0x9f "an" is never used at all!
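A standalone demo of the overlap problem from the first bullet (an illustrative sketch; whichever overlapping bigram is substituted first starves the other):

    from collections import Counter

    corpus = "tea" * 100
    counts = Counter(corpus[i:i+2] for i in range(len(corpus) - 1))
    print(counts.most_common(2))   # [('te', 100), ('ea', 100)]

    # Substitute "ea" everywhere first (say it won the tie): every "tea"
    # becomes "t\x80", and "te" now matches zero times despite its raw
    # count of 100 -- its table entry is pure overhead.
    substituted = corpus.replace("ea", "\x80")
    print(substituted.count("te"))   # 0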

With or without accounting for overlaps, there is some optimum number of bigrams. Adding one more bigram uses at least 2 bytes (for the entry in the bigram table; 4 bytes if code points >255 are in the source text) and also needs a slot in the Huffman dictionary, so adding bigrams beyond the optimum number makes compression worse again.

As long as it's an improvement, the fact that it's not guaranteed optimal doesn't seem to matter too much. It just leaves a little more fruit for the next sweep to pick up. Perhaps try adding the most frequent bigram not yet present until it no longer improves compression overall, as sketched below.
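A sketch of that incremental selection (hypothetical helper names, reusing the `substitute` helper from the first sketch; the entropy estimate stands in for actually rebuilding the Huffman tables, and the 16 bits per bigram matches the 2-byte table cost mentioned above):

    import math
    from collections import Counter

    def approx_size_bits(texts, bigrams):
        # Ideal (entropy) estimate of the Huffman-coded size, plus
        # 16 bits of bigram-table overhead per bigram.
        counts = Counter(ch for t in texts for ch in substitute(t, bigrams))
        total = sum(counts.values())
        data = sum(n * -math.log2(n / total) for n in counts.values())
        return data + 16 * len(bigrams)

    def pick_bigrams(texts, candidates):
        # Add the next most frequent bigram until it stops helping.
        chosen, best = [], approx_size_bits(texts, [])
        for bg in candidates:   # most frequent first
            trial = approx_size_bits(texts, chosen + [bg])
            if trial >= best:
                break
            chosen, best = chosen + [bg], trial
        return chosen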

Right now, de_DE is again the "fullest" build on trinket_m0. (It's reclaimed that spot from the ja translation somehow.) This change saves 104 bytes there, increasing free space by about 6.8%. In the larger (but not critically full) pyportal build it saves 324 bytes.

The specific number of bigrams used (32) was chosen as it is the max number that fits within the 0x80..0xbf range. Larger tables would require the use of 16-bit code points in the de_DE build, losing savings overall.

(Side note: The most frequent letters in English have been said to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus)

jepler and others added 3 commits September 1, 2020 17:12
Compress common unicode bigrams by making code points in the range 0x80 - 0xbf (inclusive) represent them.
These characters, at code point 0xa0, are unintended.
The previous range was unintentionally big and overlaps some characters
we'd like to use (and also 0xa0, which we don't intentionally use)
@jepler (Author) commented Sep 2, 2020

The single CI failure is a network failure during upload-artifact.

@tannewt tannewt self-requested a review September 2, 2020 22:55
@@ -46,10 +47,18 @@ STATIC int put_utf8(char *buf, int u) {
    if(u <= 0x7f) {
        *buf = u;
        return 1;
    } else if(MP_ARRAY_SIZE(ngrams) <= 64 && u <= 0xbf) {
@tannewt (Member) commented:
The switch from 0x80 to 0xe000 here is at 64 but above in encode and decode is 32. Shouldn't they be the same? Could we do it at compile time here with a macro?

@jepler (Author) replied:
The Python code works based on the count of bigrams (up to 32 of them) but the C array gets 2 entries for each bigram, so 64 is correct. However, it is true that we could place macros such as BIGRAM_CODE_POINT_START, BIGRAM_CODE_POINT_END in compression.generated.h and use them instead.
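To illustrate the correspondence (illustrative values only; the real table is generated into compression.generated.h):

    # The Python side works with up to 32 bigrams...
    bigrams = ["t ", "e ", "in", "d "]
    # ...but the generated C `ngrams` array stores each bigram as two
    # consecutive code points, hence up to 64 entries:
    ngrams = [ord(c) for bg in bigrams for c in bg]
    assert len(ngrams) == 2 * len(bigrams)

    def expand(u):
        # Decoding code point u recovers the bigram's two constituents.
        i = u - 0x80
        return chr(ngrams[2 * i]) + chr(ngrams[2 * i + 1])

    print(expand(0x82))   # 'in'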

@tannewt (Member) replied:
Ah! Still plan on updating this?

Now this gets filled in with values e.g., 128 (0x80) and 159 (0x9f).
Two problems: The lead byte for 3-byte sequences was wrong, and one
mid-byte was not even filled in due to a missing "++"!

Apparently this was broken ever since the first "Compress as unicode,
not bytes" commit, but I believed I'd "tested" it by running on the
Pinyin translation.

This rendered at least the Korean and Japanese translations completely
illegible, affecting 5.0 and all later releases.
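For reference, the 3-byte UTF-8 layout the fix restores (a Python sketch under my reading of the commit message; the actual fix is in the C encoder):

    def utf8_3byte(u):
        # Correct 3-byte UTF-8: 1110xxxx 10xxxxxx 10xxxxxx.
        assert 0x800 <= u <= 0xffff
        return bytes([
            0xE0 | (u >> 12),           # lead byte (this was wrong)
            0x80 | ((u >> 6) & 0x3F),   # mid byte (skipped by the missing "++")
            0x80 | (u & 0x3F),          # final continuation byte
        ])

    assert utf8_3byte(ord("ま")) == "ま".encode("utf-8")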
@jepler jepler requested a review from tannewt September 9, 2020 02:00
@jepler (Author) commented Sep 9, 2020

I also added to this a main-branch version of #3385 which is an important bugfix. I will split that out if there are any other things to revise on this PR, because I'd like to see it fixed now that I know it's there.

@tannewt (Member) left a review:
Looks great! Thank you!

@tannewt tannewt merged commit 1ba28b3 into adafruit:main Sep 10, 2020
@ciscorn commented Sep 12, 2020

@jepler @tannewt
FYI.
I was doing a similar experiment and found that choosing longer patterns could save more bytes:
https://gist.github.com/ciscorn/915ef9970c1b7f662af33e7679e4efba#file-results-txt

The computational cost for finding frequent patterns may be a bit high, though.

@jepler (Author) commented Sep 12, 2020

@ciscorn wow, that looks like another amazing improvement.

@jepler (Author) commented Sep 12, 2020

@ciscorn if you can code up the Python-side decompression part, I'll take a stab at translating it to "C". I'm not sure I immediately understood all that is going on.

@jepler jepler deleted the compression-bigrams branch September 12, 2020 14:00
jepler pushed a commit to jepler/circuitpython that referenced this pull request Aug 27, 2021