The PCG implementation provided by Numpy has significant, dangerous self-correlation #16313
@imneme, @bashtage, @rkern would be the authorities here, but I think we have gone over this and it is why we preferred the …
@mattip This has nothing to do with jumping.
I think in practice it is difficult to make wholesale changes, although improved documentation is always a good idea. I would probably recommend …
Can you quantify this? I can replicate the failures using the same increment, but we seed the increment as well as the state, and I have not yet observed a failure with two different random increments. If the increments also have to be carefully constructed, then that would affect the practical birthday collision frequency. https://gist.github.com/rkern/f46552e030e59b5f1ebbd3b3ec045759
OK, I'll try again. There are no multiple streams in an LCG with a power-of-2 modulus. Many believed it in the early days, and there are even old papers claiming to do interesting stuff with those "streams", but it has been known for decades that the orbits you obtain by changing the constants are all the same modulo an additive constant and possibly a sign change. The farthest back I can trace it is Mark J. Durst, "Using linear congruential generators for parallel random number generation" (1989). So I wrote another program, http://prng.di.unimi.it/corrpcgnumpy.c, in which you can set:
So this is exactly the setting of the first program, but you can also choose the constants.
So there are at least 2^72 correlated subsequences, no matter how you choose the "stream constants", exactly as in the same-constant case. And we're giving the generator a ridiculous amount of slack: even if instead of the exact starting point I'm calculating you were to use a state a little bit before or after, correlation would show up anyway. You can easily modify the program with an additional parameter to do that. I repeat, once again: no existing modern generator from the scientific literature has this misbehavior (of course, a power-of-2 LCG has this behavior, but, for God's sake, that's not a modern generator).
Sebastiano's critiques of PCG are addressed in this blog post from 2018. The short version is that if you're allowed to contrive specific seeds, you can show "bad looking" behavior out of pretty much any PRNG. Notwithstanding his claim that PCG is somehow unique, PCG is actually pretty conventional — PCG's streams are no worse than, say, SplitMix's, which is another widely used PRNG.
That is entirely false. To prove me wrong, show two correlated nonoverlapping sequences from MRG32k3a or xoshiro256++.
I never said non-overlapping. Show me a currently available test for xoshiro256++ that shows that two seeds avoid overlap.
In contrast, I do have a test for PCG that shows that the "correlations" you showed are essentially a form of overlap.
I can't fight FUD like "essentially" and "a form", but I modified http://prng.di.unimi.it/intpcgnumpy.c so that initially it iterates each PRNG 10 billion times, and exits with an error message if the generated sequence traverses the initial state of the other PRNG. This guarantees that the first 160GB of data into PractRand come from non-overlapping sequences:
This particular initialization data has just 56 lower fixed bits, so one can generate 2^72 correlated sequences by flipping the higher bits. The statistical failures happen after just 64GB of data, showing that overlaps are not responsible for the correlation. It is possible that with other specific targeted choices overlap happens before 64GB, of course—this is a specific example. But it is now easy to check that overlapping is not the problem—the generator simply has a lot of undesirable internal correlation.
Please respect the code of conduct. Try to keep your comments in tone with the directives to be "empathetic, welcoming, friendly, and patient" and "be careful in the words that we choose. We are careful and respectful in our communication".
Well, it's trivial: decide the length of the stream, iterate, and check that the two streams do not cross the initial state. It's the same code I used to show that the correlated PCG streams in the program http://prng.di.unimi.it/intpcgnumpy.c do not overlap.
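A minimal sketch of that check, assuming a hypothetical `next_state()` transition function and a generic 64-bit state (PCG's state is 128-bit, but the idea is the same): two length-n windows overlap exactly when one window reaches the other's starting state, since the state transition is injective.

```c
#include <stdint.h>

uint64_t next_state(uint64_t s); /* hypothetical: one step of the PRNG */

/* Returns 1 if the length-n streams starting at a0 and b0 overlap. */
int streams_overlap(uint64_t a0, uint64_t b0, uint64_t n) {
    if (a0 == b0)
        return 1;
    uint64_t a = a0, b = b0;
    for (uint64_t i = 0; i < n; i++) {
        a = next_state(a);
        b = next_state(b);
        if (a == b0 || b == a0)
            return 1; /* one stream crossed the other's starting state */
    }
    return 0;
}
```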
IMHO, the self-correlation within PCG is much worse. There is no result for the additive generator underlying a SplitMix instance analogous to Durst's dramatic 1989 results about LCGs. But the very mild problems of SplitMix are known, and JEP 356 will provide a new class of splittable generators, LXM, trying to address those problems. It would be time to move on and replace PCG, too, with something less flawed.

The underlying issue is known for both generators, and it is due to the lack of state mixing. If you change bit k of the state of one of those generators, the change will never propagate below bit k. This does not happen in LCGs with prime modulus, in F₂-linear generators, CMWC generators, etc. All other generators try to mix their state as quickly and as much as possible.

Equating the problems of PCG and SplitMix is a red herring. SplitMix has a very simple underlying generator, just additive, but on top of that there is a scrambling function that is very powerful: it is Appleby's 64-bit finalizer of the MurmurHash3 hash function, which has been widely used in a number of contexts and has been improved by Stafford (http://zimbry.blogspot.com/2011/09/better-bit-mixing-improving-on.html). The constants of the function have been trained to have specific, measurable avalanching properties. Even changes in a small number of bits tend to spread over all the output. In other words, SplitMix stands on the shoulders of giants.

On the contrary, the LCG underlying the PCG generators has the same lack-of-mixing problems, but the scrambling functions are just a simple sequence of arithmetic and logical operations assembled by the author without any theoretical or statistical guarantee. If they had been devised taking into account the fact that all sequences of the underlying LCG are the same modulo an additive constant and possibly a sign change, it would have been possible to address the problem. But the author had no idea that the sequences were so easily derivable one from another. This can be easily seen from this statement in Section 4.2.3 of the PCG technical report (https://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf): "Every choice of c results in a different sequence of numbers that has none of its pairs of successive outputs in common with another sequence." This is taken as proof that the sequences are different, that is, that the underlying LCG provides multiple streams. Durst's 1989 negative results on this topic do not appear anywhere in the paper. As I remarked earlier, by those results all such sequences are the same, modulo an additive constant and possibly a sign change (for LCGs with power-of-2 modulus of maximum potency, as is the case in PCG).

I'm sure not quoting Durst's results is a bona fide mistake, but the problem is that once you are convinced the underlying LCG you are using provides "streams" that are "different" in some sense, when they are not, you end up with a generator like PCG in which for each subsequence there are 2^72 non-overlapping, correlated subsequences, even if you change the "stream".
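For reference, the structure being described (a weak additive Weyl generator redeemed by a strong avalanching finalizer) is exactly what SplitMix64 looks like; this sketch follows Vigna's public-domain splitmix64.c:

```c
#include <stdint.h>

/* SplitMix64: a Weyl sequence (addition of a golden-ratio-derived odd
   constant) followed by a MurmurHash3-style finalizer with Stafford's
   trained avalanche constants. */
uint64_t splitmix64_next(uint64_t *state) {
    uint64_t z = (*state += 0x9e3779b97f4a7c15ULL); /* weak underlying generator */
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;    /* strong scrambler on top */
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}
```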
Thank you all for your input. For the moment, I am not interested in binary judgments like "PCG is good/bad". Please use your own forums for such discussions. What is on-topic here is what numpy will do, and that final judgment belongs to the numpy developers. We do appreciate the expertise that you all bring to this discussion, but I want to focus it on the underlying facts rather than the final judgment. I especially appreciate quantitative statements that give me an idea of the amount of headroom that we have. If my earlier judgments were wrong, it was because I jumped to the judgment too soon, so I would appreciate your assistance in avoiding that again. Thank you.
@vigna Can you walk me through this calculation? The birthday collision calculation that I am familiar with gives the 50% chance of an …

I am convinced that changing the increment doesn't affect the probability of collision. Further discussion of the merits of "PCG streams" is off-topic. Using that discussion as an excuse to repeatedly hammer on "the author" is especially unwelcome and treads on our code of conduct. Persisting will mean that we will have to proceed without your input. Thank you.

@imneme This seems to be related to the issues with jumping by a multiple of a large power of 2. When I construct a pair of …
OK, this is embarrassing: it was half a billion, not half a million. A single letter can make a big difference. I apologize for the slip. But, as I said earlier, this is the probability of hitting exactly the same starting state, not the probability of a significant overlap of correlated subsequences. Personally I prefer to use PRNGs without correlated subsequences, since there are plenty of them, but, as you rightly say, the decision is only yours. Fixing the scrambling function so that it has better mixing properties sounds like a perfectly reasonable solution. My post was just meant to be a clarification of the structural differences between PCG and SplitMix, since a previous post claimed they had similar problems, and I don't think that is a correct statement. You cannot write a program like http://prng.di.unimi.it/corrpcgnumpy.c for SplitMix.
@rkern, you asked:
Thanks for finding and linking back to the discussion thread from last year. Yes, as Sebastiano notes in his response, …
XSL-RR is at the weaker end of things. In contrast, both the original RXS-M output function from the PCG paper and the new DXSM output function do more in the way of scrambling, and so don't show these kinds of issues. DXSM (added to the PCG source in this commit last year) was specifically designed to be stronger than XSL-RR but have similar time performance (cf. RXS-M, which is slower). I tested DXSM fairly hard last year, but 67 days into the run we had an extended power outage that took down the server (the UPS battery drained) and ended the test run; by that point it had proven itself pretty well in both normal testing (128 TB of output tested) and jumps of 2^48 (64 TB of output tested, since that runs slower).

If, even without designing DXSM, RXS-M would have taken care of the issue, one question is why I ever used the weaker XSL-RR permutation instead — why not always use very strong bit-scrambling in the output function? The answer basically comes down to cycles. Speed matters to people, so you try to avoid doing a lot more scrambling than you need to.

This is an issue Sebastiano is familiar with, because his approach and mine have much in common. We each take a long-established approach that would fail modern statistical tests (LCGs in my case, Marsaglia's XorShift LFSRs in his) and add a scrambling output function to redeem it. We both strive to make that output function cheap, and we've both been caught out a little where deficiencies in the underlying generator that we're trying to mask with our output function nevertheless show through. In his case, it's linearity issues which have shown through. But in somewhat recent work that pleased me a lot, he's also shown how many LFSR-based designs have hamming-weight issues (reflecting a longstanding concern of my own) that are inadequately masked by their output functions. His own generators pass his test, so that's something, but back in 2018 when I was looking at his new xoshiro PRNG it seemed to me that hamming-weight issues from the underlying generator did make it through his output function. He's since revised xoshiro with a new output function, and I hope that does the trick (he made some other changes too, so perhaps that one also fixes the repeats issue, highlighted by this test program?).

As for his correlation programs, back in 2018 when he put a critique of PCG on his website that included programs he'd written with various issues (like contrived seeding, etc.), I wrote a response that contained a bunch of similar programs for other long-established PRNGs, including …
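For concreteness, the DXSM ("double xorshift multiply") output function being discussed looks roughly like this, paraphrased from the PCG sources (the exact helper name and layout here are a sketch, not a verbatim copy):

```c
#include <stdint.h>

/* DXSM output function: hi/lo are the two 64-bit halves of the 128-bit
   LCG state; scrambling uses the "cheap" 64-bit multiplier plus a
   data-dependent multiply by the (forced-odd) low word. */
static inline uint64_t pcg_output_dxsm(uint64_t hi, uint64_t lo) {
    lo |= 1;                     /* force the second multiplier to be odd */
    hi ^= hi >> 32;
    hi *= 0xda942042e4dd58b5ULL; /* cheap LCG multiplier */
    hi ^= hi >> 48;
    hi *= lo;
    return hi;
}
```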
Please forgive my lack of understanding - but could I ask someone for a summary conclusion at this stage? Are there realistic circumstances in which the default PCG is unsafe? What are the arguments for switching the default?
The easy path is to add some documentation about high (ultra-high?) -dimensional applications and the probability of correlated sequences.

A harder path would be to replace the output function, which would break the stream. I'm not sure how strong a promise …

The hardest path would be to find a new default rng. It wasn't easy the first time, and I don't think anything has changed to move the needle in any particular direction over the past 18 months.
I opened gh-16493 about the …

I do not claim to understand this discussion fully, but it seems there are two things to figure out:
The easiest thing to do right now would be to add a …

Incidentally, I prefer that it be a separate named …

I don't think we really settled the policy for making changes to what …
As part of adding the new BG it would be good to update the notes for PCG64 so that they start serving as a guide and provide a rationale for preferring the newer variant.
That's probably a little too glib. It depends on how many streams, how much data you pull from each stream, and what your risk tolerance for a birthday collision is. I haven't gotten around to crunching that math into a comprehensible paragraph with easy-to-follow recommendations yet, which is why I haven't revisited this Github issue in a while. I don't think it's a hair-on-fire issue that needs to be fixed right now, though.
I'll write something longer later, but as I see it, in this thread we've been retreading the ground we went over last year. Nothing has changed other than Sebastiano discovered that NumPy had shipped PCG. The analysis from the NumPy team last year was more in-depth, and considered more plausible scenarios.
My preference would be to upgrade the default as quickly as reasonably possible - just to reduce confusion. I mean, not wait for a deprecation cycle.
@imneme - thanks much - I'd find a longer thing very useful.
Probably the best blast from the past post about it is this one. I think it's certainly worth a read. You can scroll back up in the thread from there to see us talking about these issues. It was about PCG 32.
I had in my head what I wanted to say, but I find looking at the posts from a year ago, I've said it all already, both here and elsewhere (my blog (2017), reddit, my blog again (2018), and in NumPy discussions, etc.) About streams and their self-similarity (for which @rkern wrote a stream-dependence tester), I wrote last year:
(I then went into more depth in this comment and in this one.)

I'd say that the key take-away is that for almost any PRNG, with a bit of time and energy you can contrive pathological seedings. PCG is in a somewhat unusual position in that it has someone who enjoys working out plausible-looking seedings for PCG specifically that have a hand-crafted pathology (i.e., Sebastiano). As a result of that work, I've turned around and done the same for both his PRNGs and for other longstanding ones.

In general, you want to initialize the PRNG state with something that looks "kinda random". That's pretty universal, even if people want to do otherwise. For example, for any LFSR PRNG (XorShift-style, Mersenne Twister, etc.), you must not initialize it with an all-zeros state because it'll just stay stuck there. But even states that are mostly zero are often problematic for LFSRs (a phenomenon known as zeroland, and why the C++ folks made …).

If you're using a PCG generator that was initialized with reasonable entropy, it's fine. If you want to initialize it with junk like 1, 2, 3, that's actually problematic with any PRNG. And with PCG, 99.9% of the time, even using something kinda junky would be fine. It doesn't have anything like the kind of issues that LFSRs have. But DXSM is certainly stronger. I think it's better or I wouldn't have made it, but it's worth having some perspective and realizing that in practice users aren't going to run into problems with the classic PCG64 implementation.
I'd like to separate out the criticism/defense of PCG64's streams (via the LCG increment) from the present discussion. While there's a certain duality involved due to the mathematics of the LCG, it's not the core issue that was originally brought up here. Also, there's more detail here to be considered than was present in Sebastiano's original critique, your response, or the old mega-thread. Perhaps the connection is more obvious to the experts who have spent more time on the math, but the practical consequences are now clearer to me, at least.
Granted, but it's not the binary can/can't that drives the decision in front of me. If I draw too many numbers from a finite PRNG, eventually PractRand will suss it out. That binary fact doesn't invalidate that PRNG algorithm. Moving away from that binary and establishing the concept of headroom was one of the things that I really appreciated about the original PCG paper. Given an adversarially-generated pathology, we can take a look at how often that pathology could arise randomly from good entropy-seeding. I want to quantify that, and turn it into practical advice for users.

Given two states that share the lower 58 bits and the same increment (we'll put a pin in that), interleaving PCG64 XSL-RR instances from those states demonstrates practically observable failures in PractRand at around 32 GiB. I think it's reasonable to want to avoid that. So let's take that as our benchmark and look at how often that arises with good entropy-seeding. Fortunately, this adversarial scheme is amenable to probabilistic analysis (not all are so friendly). For …

I think it's also reasonable to want to avoid initial states whose subsequent draws will cross any of the lower-58-bit collision states of the other initial states. I'm still trying to think through the logic on that, but I think the length affects the probability linearly rather than quadratically. If I'm right, and I want to draw 16 GiB from each instance (…

Let's take that increment pin out and address it. I think it's fair to say that with the XSL-RR output function, also entropy-seeding the increment in addition to the state does not change this particular analysis. It seems that for any given pair of entropy-seeded increments, there's the same number of practically-colliding states. The concrete procedure for deliberately constructing those states looks more complicated than bashing in the same lower 58 bits, but since the number of colliding states appears to be the same, the probability calculations remain the same. This isn't intrinsic to the PCG scheme in general. The DXSM output function appears to be strong enough that changing the increment (even with a simple …

I want to end by reiterating what we all seem to be in perfect agreement about:
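As a rough sketch of the kind of calculation being described (my arithmetic; the specific numbers computed in the elided parts of this comment are not preserved): with n entropy-seeded instances, states uniform over 2^128, and a "bad pair" meaning two states sharing the lower 58 bits, the standard birthday bound gives

```latex
% N = 2^{58} equivalence classes; each pair collides with prob. 2^{-58}.
P(\text{some bad pair}) \;\approx\; 1 - \exp\!\left(-\binom{n}{2}\middle/2^{58}\right)
\;\approx\; \frac{n^2}{2^{59}}.
% For example, n = 10^6 instances give roughly
% 10^{12} / 5.8 \times 10^{17} \approx 2 \times 10^{-6}.
```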
Streams are still somewhat relevant because the issue only shows up if we have the generators on the same stream. But under what circumstances would they have the same lower 58 bits and be on the same stream? Is there a use case where this would happen? The one somewhat realistic case I know of is the one we talked about last year (when we talked about …)
BTW, to give a more tangible measure of how to improve the quality of the multipliers involved, I computed the spectral scores, from f₂ to f₈, of the current 64-bit multiplier used by PCG DXS and some alternatives. Spectral scores are the standard way to judge the goodness of a multiplier: 0 is bad, 1 is excellent. Each score describes how well distributed the pairs, triples, 4-tuples, etc., are in the output. These seven numbers can be summarized by the classic measure, the minimum, or by a weighted measure (the first score, plus the second divided by two, etc., normalized) that makes the first scores more important, as suggested by Knuth in TAoCP. These are the minimum and the weighted measure for the current multiplier:
There are much better 64-bit constants than that:
If you go to 65 bits, essentially at the same speed (at least for LCG128Mix it is the same speed), you get a better weighted measure:
The reason is that 64-bit multipliers have an intrinsic limit to their f₂ score (≤0.93), which, as Knuth notes, is the most relevant:
So the first multiplier has a mediocre f₂ score. The second multiplier gets very close to the optimum for a 64-bit multiplier. The 65-bit multiplier does not have these limitations and has a score very close to 1, the best possible in general. For completeness, here are all the scores:
You can recompute these scores, or look for your own multiplier, with the code Guy Steele and I distributed: https://github.com/vigna/CPRNG . The better multipliers are taken from the associated paper.
PCG will probably be a fine default prng for numpy, but I don't think it will stand the test of time, as there are more promising, but less tested, ways to do this. I propose one in the following. The half-chaotic SFC64 is one of the fastest of the statistically sound generators with a reasonably large minimum period. SFC64 has no jump functions, but it can, without speed overhead, be extended to support 2^63 guaranteed unique streams. Simply add a Weyl sequence with a user-chosen additive constant k (must be odd), instead of just incrementing the counter by one. Each odd k produces a unique full period. It requires an additional 64 bits of state to hold the Weyl constant (see the sketch below):
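A minimal sketch of the proposed variant, built on the standard SFC64 round (shifts 11 and 3, rotation 24, as in NumPy's sfc64.h); the only change from stock SFC64 is that the counter advances by a user-chosen odd constant k instead of by 1. The struct and function names here are placeholders:

```c
#include <stdint.h>

typedef struct {
    uint64_t a, b, c; /* chaotic state */
    uint64_t w;       /* Weyl counter */
    uint64_t k;       /* user-chosen odd Weyl increment (stream id) */
} sfc64w_state;

static inline uint64_t rotl64(uint64_t x, int r) {
    return (x << r) | (x >> (64 - r));
}

uint64_t sfc64w_next(sfc64w_state *s) {
    const uint64_t out = s->a + s->b + (s->w += s->k); /* Weyl step, not w++ */
    s->a = s->b ^ (s->b >> 11);
    s->b = s->c + (s->c << 3);
    s->c = rotl64(s->c, 24) + out;
    return out;
}
```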
A 320-bit state is sometimes undesirable, so I have tried to squeeze it down to using 256 bits again. Note the changed output function too, which better utilizes the Weyl sequence for bit mixing. It uses 128/128 bits of chaotic/structured state, which seems to strike a good balance:
This has currently passed 4 TB in PractRand testing without anomalies, and I briefly ran Vigna's Hamming-weight test without issues so far (although passing these tests is no guarantee of near-true-random output; rather, they test whether the PRNG is flawed or not). Note: it is supposedly a statistical advantage to use a (unique) random Weyl constant with roughly 50% of the bits set, but only further testing or analysis will reveal how significant this is. /Edits: cleanups.
@tylo-work SFC64 is already in NumPy, along with Philox; this is about the default generator.
Ok, I didn't know exactly which were implemented, so this is only about selecting the most suitable overall from those? Fair enough, and thanks for clarifying. I will try to test my proposed generator extensively to see how it stacks up against the others; so far it looks very good regarding speed, output quality, simplicity/size/portability, and massive parallel usage. But I would be happy if others tested it as well.
The version in NumPy is the standard variant; see numpy/numpy/random/src/sfc64/sfc64.h, lines 25 to 33 (at ad30b31).
I don't think we're reopening the discussion about the default PRNG from scratch. We have a very specific issue with our current PRNG and are looking at available, closely-related variants that address that specific issue. One of our concerns is that the current default PRNG exposes certain features, like jumpability, that the variant that replaces it must still expose. SFC64 (either ours or yours) doesn't have that feature. It's possible that @bashtage would be willing to accept a PR for randomgen to add your Weyl-stream variants of SFC64.
@tylo-work If you are interested in parallel execution you might want to look at NumPy's SeedSequence implementation.
Assuming you want something PCG-DXS-like, there are further improvements you can do with just better constants (and a very marginal slowdown). For example, PCG-DXS will soon fail two distinct types of tests on two interleaved, correlated subsequences with the same lower 112 bits of state:
Note that we're talking about just ≈65536 correlated sequences—nothing to be afraid of. But you can improve the generator by choosing a better multiplier, such as 0x1d605bbb58c8abbfd, and a better mixer, such as 0x9e3779b97f4a7c15. The first number is a 65-bit multiplier that has much better spectral scores. The second number is just the golden ratio in a 64-bit fixpoint representation, and this is known to have nice mixing properties (see Knuth TAoCP on multiplicative hashing); for example, it is used by the Eclipse Collections library to mix hash codes. As a result, you fail just FPF for the same amount of data:
In fact, if we go further, at 2TB PCG-DXS fails three types of tests for the same interleaved, correlated subsequences:
whereas PCG65-DXSϕ still fails just FPF:
Sooner or later, of course, PCG65-DXSϕ will also fail Gap and TMFn. But you need to see much more output than with PCG-DXS. This is the complete code for PCG65-DXSϕ, which is just PCG-DXS with better constants:
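The original code listing did not survive the extraction of this thread; what follows is a reconstruction from the constants given above (the 65-bit multiplier 0x1d605bbb58c8abbfd for the LCG step, and the golden ratio 0x9e3779b97f4a7c15 as the mixer inside a DXSM-shaped output function). Treat the exact structure, the increment, and the names as assumptions:

```c
#include <stdint.h>
typedef unsigned __int128 uint128_t;

static uint128_t state; /* must be seeded; stream increment omitted */

/* Low 64 bits of the 65-bit multiplier: 0x1d605bbb58c8abbfd = 2^64 + M_LO. */
static const uint64_t M_LO = 0xd605bbb58c8abbfdULL;

uint64_t pcg65_dxs_phi_next(void) {
    uint64_t hi = (uint64_t)(state >> 64);
    uint64_t lo = (uint64_t)state | 1;  /* force the second multiplier odd */
    /* DXSM-shaped scrambling, with the golden-ratio mixer */
    hi ^= hi >> 32;
    hi *= 0x9e3779b97f4a7c15ULL;
    hi ^= hi >> 48;
    hi *= lo;
    /* LCG step with the 65-bit multiplier:
       state * (2^64 + M_LO) = (state << 64) + state * M_LO  (mod 2^128) */
    state = (state << 64) + state * M_LO + 1; /* "+ 1": any odd increment */
    return hi;
}
```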
The marginal slowdown is due to an add instruction (caused by the 65-bit multiplier) and to having two 64-bit constants to load. I'm not endorsing generators of this kind in general, but PCG65-DXSϕ is measurably better than PCG-DXS at hiding correlation.
@vigna, FYI, I also did some interleaving testing, and noticed that xoshiro256** failed rather quickly when creating 128 interleaved streams or more. With 256 it failed quickly. The point of the test is to check how well the PRNGs behave when each stream is initialized with some linear dependencies. Essentially, the state is initialized to …

I realize this is not the recommended initialization for xoshiro, but it is still interesting - and a little worrying - that the tests seemed fine for xoshiro with few interleaved streams, but failed with many.
I also tried to weaken the initialization for SFC64 and TYLO64 to skip only 2 outputs; however, they still seemed OK.
I'll include some relevant header code:
@tylo-work I appreciate the analysis, but I really need this issue to stay focused. If you would like to continue that line of discussion, I encourage you to post your work in your own Github repo, and make one more post here inviting people here to it. Everyone else, please respond there. Thank you for your cooperation.
@rkern Looks like PCG64DXSM won't make it into 1.19.0; I'll be releasing this weekend. If you could write the note about our change policy/upcoming changes that you mentioned above, I would appreciate it.
Sorry, I've been dealing with some other unrelated matters. Based on our discussion, I don't think a small delay is a big issue, since PCG64DXSM was planned as an alternate option, not a new default (for now, at least).
Now that 1.20 is starting up, is it time to revisit this and move to DXSM?
We would still have some time to do the move before branching, but it may be good to get a start on it within the next week or so. @bashtage I guess you have the …

From what I see, it sounded like we should just do this for 1.20 if we have it readily available.
IIRC, we have been waiting for a reference that could be linked. But if the random number folks are happy with the change we should use it. Do we need any special code for windows?
It is just a different constant and a different scrambling function. Nothing more novel than what @rkern wrote for the original PCG64 implementation on Windows. I think the decision was to have a fully standalone PCG64DXSM rather than to share some code (for performance).
It would probably make sense to start from rkern's WIP branch.
I said I would write a blog post about it, which I think @rkern wanted, but I've been attending to some other matters and it hasn't happened yet (sorry). In the meantime, the DXSM permutation has been grinding away under test and continues to seem like an improvement over the original. From remarks earlier in the thread, I think @rkern might have liked an even stronger output permutation, but doing so either costs you speed or (if you cut corners to gain speed) adds trivial predictability.
@rkern, 1.21.0 will branch in mid-May; it would be nice if the PCG64DXSM variant was in it.
@charris I will work on it this weekend.
The PCG generator used by Numpy has a significant amount of self-correlation. That is, for each sequence generated from a seed there is a large number of correlated, nonoverlapping sequences starting from other seeds. By "correlated" I mean that when you interleave two such sequences and test the result, you obtain failures that did not appear in each sequence individually.
The probability that two generators out of a large set of terminals get two of those sequences is nonnegligible. Why this happens from a mathematical viewpoint is well known, but it is explained here in detail: http://prng.di.unimi.it/pcg.pgp (see "Subsequences within the same generator").
To show this problem directly, I wrote this simple C program reusing the Numpy code: http://prng.di.unimi.it/intpcgnumpy.c . The program takes two 128-bit states of two generators (with the same LCG constant or "stream") in the form of high and low bits, interleaves their output and writes it in binary form. Once we send it through PractRand, we should see no statistical failure, as the two streams should be independent. But if you start from two states with the same 64 lower bits, you get:
./intpcgnumpy 0x596d84dfefec2fc7 0x6b79f81ab9f3e37b 0x8d7deae980a64ab0 0x6b79f81ab9f3e37b | stdbuf -oL ~/svn/c/xorshift/practrand/RNG_test stdin -tf 2 -te 1 -tlmaxonly -multithreaded
RNG_test using PractRand version 0.94
RNG = RNG_stdin, seed = unknown
test set = expanded, folding = extra
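The core of the linked program, paraphrased as a sketch (the function name `pcg64_next` and the buffering are placeholders, not the program's actual code), is just interleaving the two generators' outputs into a binary stream that PractRand reads on stdin:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical: returns the next output of generator 0 or 1, where the
   two generators were seeded from the command-line states. */
extern uint64_t pcg64_next(int which);

int main(void) {
    uint64_t buf[1024];
    for (;;) {
        for (int i = 0; i < 1024; i += 2) {
            buf[i]     = pcg64_next(0); /* interleave stream 0 ... */
            buf[i + 1] = pcg64_next(1); /* ... with stream 1 */
        }
        fwrite(buf, sizeof buf[0], 1024, stdout); /* piped into PractRand */
    }
}
```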
You can even go lower—you just need the same 58 lower bits:
Note that to get more than a 50% probability that two generators start from two correlated seeds (chosen at random) you need just about half a million generators starting at random (birthday paradox). And if you consider the probability that they do not start from exactly the same state, but have significantly overlapping correlated sequences, you need far fewer.
Any sensible generator from the literature will not behave like that. You can choose adversarially any two starting states of MRG32k3a, SFC64, CMWC, xoshiro256++, etc., and as long as you generate nonoverlapping sequences you will not see the failures above. This is a major drawback that can pop up when a number of devices use the generator and one assumes (as one should) that pairwise those sequences do not show correlation. The correlation can induce unwanted behavior that is hard to detect.
Please at least document somewhere that the generator should not be used on multiple terminals or in a highly parallel environment.
The same can happen with different "streams", as the sequences generated by an LCG with different additive constants are all the same modulo a change of sign and an additive constant. You can see some discussion here: rust-random/rand#907 and a full mathematical discussion of the problem here: https://arxiv.org/abs/2001.05304 .
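A sketch of the direct half of that argument (the sign-change case and the full statement are in Durst's paper and the arXiv reference above): two power-of-2-modulus LCGs with the same odd multiplier a and constants c, c' are related by a fixed shift whenever a simple congruence is solvable.

```latex
x_{n+1} \equiv a\,x_n + c \pmod{2^k}, \qquad
y_{n+1} \equiv a\,y_n + c' \pmod{2^k}.
% Try y_n = x_n + d for a fixed shift d. Substituting into the second
% recurrence and comparing with the first:
x_{n+1} + d \equiv a\,(x_n + d) + c'
\quad\Longleftrightarrow\quad
(a - 1)\,d \equiv c - c' \pmod{2^k}.
% Whenever this congruence in d is solvable, the second "stream" is
% literally the first one shifted by the constant d.
```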