Script: Implement TextDecoderStream
#38112
Conversation
Would it be better if this PR were split into three smaller PRs because of the changes mentioned above?
Since this is adding a new interface, should
Great start, thank you.
A couple of general comments on the overall structure. I think there may have been a misunderstanding on Zulip re the use of generics. I'm open to counter-arguments, of course.
    Error::Type("Unable to convert chunk into ArrayBuffer or ArrayBufferView".to_string())
})?
};
let buffer_source = conversion_result.get_success_value().ok_or(Error::Type(
@gterzian I think this needs to use BufferSource from buffer_source.rs, right?
We made an effort to hide all unsafe code inside buffer_source.rs.
Yes looks like it should.
I'm not sure I fully understand this. The buffer_source is passed to TextDecoderCommon::decode immediately after, which is a function shared between TextDecoderStream and TextDecoder. The reason TextDecoderCommon::decode takes Option<&ArrayBufferViewOrArrayBuffer> as the argument type is that this is the type in the generated trait TextDecoderMethods.
Is the codegen supposed to generate traits with BufferSource as the arg type? Even though TextDecoder.webidl says BufferSource, the generated code in TextDecoderMethods is still Option<ArrayBufferViewOrArrayBuffer>.
The decode_and_enqueue_a_chunk function takes a SafeHandleValue because that's the type TransformStreamDefaultController::perform_transform uses. Also, I was not able to find a way to convert a SafeHandleValue into a BufferSource without unsafe code, nor was I able to find other places in the servo repo that perform this kind of conversion.
Would you mind providing some more guidance on this?
Right now, we’re using ArrayBufferViewOrArrayBuffer and calling unsafe { a.as_slice() } directly, which exposes raw JSObject* at every call.
Our HeapBufferSource solves all of this in one place:
- It roots the JSObject in a Heap<*mut JSObject> → GC-safe
- It has is_initialized() & is_detached_buffer() → prevents UB
- It provides typed_array_to_option() and acquire_data() → safe slice copying
- It aligns directly with WebIDL’s BufferSource union semantics, rather than ad hoc conversions
so I would do something like:
let buffer_source_union = conversion_result.get_success_value().ok_or(Error::Type(
    "Unable to convert chunk into ArrayBuffer or ArrayBufferView".to_string(),
))?;
let heap_buffer_source = match buffer_source_union {
    ArrayBufferViewOrArrayBuffer::ArrayBufferView(view) => {
        let view_heap = HeapBufferSource::<ArrayBufferViewU8>::new(
            BufferSource::ArrayBufferView(Heap::boxed(unsafe {
                (view.underlying_object()).get()
            })),
        );
        // normalize to its backing ArrayBuffer
        view_heap.get_array_buffer_view_buffer(cx)
    },
    ArrayBufferViewOrArrayBuffer::ArrayBuffer(buf) => {
        HeapBufferSource::<ArrayBufferU8>::new(BufferSource::ArrayBuffer(Heap::boxed(
            unsafe { (buf.underlying_object()).get() },
        )))
    },
};
and I would go further and have unsafe { (view.underlying_object()).get() } inside a function in HeapBufferSource.
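Something like the following, as a rough sketch only; the exact signature (the cx parameter and where the impl lives) is an assumption on my part, not existing API:

impl HeapBufferSource<ArrayBufferU8> {
    /// Sketch: build a HeapBufferSource from the generated union type,
    /// keeping the unsafe underlying_object() access in one place.
    fn from_array_buffer_view_or_array_buffer(
        cx: JSContext,
        source: &ArrayBufferViewOrArrayBuffer,
    ) -> HeapBufferSource<ArrayBufferU8> {
        match source {
            ArrayBufferViewOrArrayBuffer::ArrayBufferView(view) => {
                let view_heap = HeapBufferSource::<ArrayBufferViewU8>::new(
                    BufferSource::ArrayBufferView(Heap::boxed(unsafe {
                        (view.underlying_object()).get()
                    })),
                );
                // Normalize to the view's backing ArrayBuffer.
                view_heap.get_array_buffer_view_buffer(cx)
            },
            ArrayBufferViewOrArrayBuffer::ArrayBuffer(buf) => {
                HeapBufferSource::<ArrayBufferU8>::new(BufferSource::ArrayBuffer(
                    Heap::boxed(unsafe { (buf.underlying_object()).get() }),
                ))
            },
        }
    }
}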
Added HeapBufferSource::from_array_buffer_view_or_array_buffer in 0f3c8c7. Is this a reasonable place to hide the unsafe code?
Reverted because I found problems with other WPT subtests after switching to HeapBufferSource in TextDecoder.
As mentioned in Zulip, converting ArrayBufferViewOrArrayBuffer into HeapBufferSource<ArrayBufferU8> sometimes leads to an inconsistent byte length. One particular example is the WPT subtest https://wpt.live/referrer-policy/generic/unsupported-csp-referrer-directive.html, which gives a byte length of 489 before conversion and a byte length of 6084 after converting into HeapBufferSource. The extra space seems to be filled with nulls.
The byte length is given by two FFI functions in mozjs (JS_GetArrayBufferViewByteLength and GetArrayBufferByteLength), so troubleshooting and/or fixing the bug in mozjs is probably out of scope for this PR? Does it make sense if we split the inclusion of HeapBufferSource into another issue/PR (unless we make some ad-hoc workaround with the current HeapBufferSource)?
[additional writes should wait for backpressure to be relieved for class TextDecoderStream]
  expected: FAIL

[write() should not complete until read relieves backpressure for TextEncoderStream]
Any idea why this is still failing?
Important to fix since it goes to the core of the streams stuff.
No, actually this is nothing, since the PR implements TextDecoderStream, and this test is for the encoder.
TransformerType::Decoder(_) => {
    // `TextDecoderStream` does NOT specify a cancel algorithm.
    //
    // Step 4. Let cancelAlgorithm be an algorithm which returns a promise resolved with undefined.
How come TextDecoderStream does not specify a cancel algorithm but we still have:
Step 4. Let cancelAlgorithm be an algorithm which returns a promise resolved with undefined.
Same here,
// <https://streams.spec.whatwg.org/#transformstream-set-up>
// Let result be the result of running cancelAlgorithm given reason, if cancelAlgorithm was given, or null otherwise.
// Note: `TextDecoderStream` does not specify a cancel algorithm.
// If result is a Promise, then return result.
// Note: not applicable.
// Return a promise resolved with undefined.
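So, as a sketch, the Decoder arm could carry those steps directly (assuming the same Promise::new_resolved helper already used for the transform algorithm):

TransformerType::Decoder(_) => {
    // <https://streams.spec.whatwg.org/#transformstream-set-up>
    // Let result be the result of running cancelAlgorithm given reason,
    // if cancelAlgorithm was given, or null otherwise.
    // Note: `TextDecoderStream` does not specify a cancel algorithm.
    // If result is a Promise, then return result.
    // Note: not applicable.
    // Return a promise resolved with undefined.
    Promise::new_resolved(global, cx, (), can_gc)
},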
Added more spec steps in the documentation (b857c0c)
    // chunk argument and runs the decode and enqueue a chunk
    // algorithm with this and chunk.
    decode_and_enqueue_a_chunk(cx, global, chunk, decoder, self, can_gc)
        .map(|_| Promise::new_resolved(global, cx, (), can_gc))
Usually the spec explicitly says to return a Promise, but I am missing that part here.
Yes, this needs to implement, and document the steps of, the transformAlgorithmWrapper from https://streams.spec.whatwg.org/#transformstream-set-up, like:
// <https://streams.spec.whatwg.org/#transformstream-set-up>
// Let transformAlgorithmWrapper be an algorithm that runs these steps given a value chunk:
// Let result be the result of running transformAlgorithm given chunk.
let result = decode_and_enqueue_a_chunk
For the exception, please use the same as
// If result is an abrupt completion,
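Roughly like this, as a sketch only; whether the rejection goes through a Promise::new_rejected-style helper or by rejecting an existing promise is an assumption here, not a prescription:

// Let transformAlgorithmWrapper be an algorithm that runs these steps given a value chunk:
// Let result be the result of running transformAlgorithm given chunk.
match decode_and_enqueue_a_chunk(cx, global, chunk, decoder, self, can_gc) {
    // If result is a Promise, then return result.
    // Note: not applicable, decode and enqueue a chunk is synchronous.
    // Return a promise resolved with undefined.
    Ok(_) => Promise::new_resolved(global, cx, (), can_gc),
    // If result is an abrupt completion, reject with result.[[Value]]
    // (the rejected-promise helper name is assumed for illustration).
    Err(error) => Promise::new_rejected(global, cx, error, can_gc),
}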
Added more spec steps in the documentation (b857c0c)
🔨 Triggering try run (#16349517941) for Linux (WPT)
Test results for linux-wpt from try job (#16349517941): Flaky unexpected result (19)
Stable unexpected results that are known to be intermittent (19)
✨ Try run (#16349517941) succeeded.
A couple more comments on the algos.
self.transform_obj.set(*this_object);
if let TransformerType::Js { transform_obj, .. } = &self.transformer_type {
    transform_obj.set(*this_object)
}
Here we should assert that the type is Js.
It now panics in the else branch in b857c0c.
Is this still the case?
This has already been changed to unreachable!() in the else branch in commit 4d04d78. GitHub somehow didn't mark this as outdated.
Ok, aside from those few comments, this looks good overall. Haven't had time to review all the steps in detail; @Taym95 do you want to do it since you wrote transform stream?
🛠 These changes could not be applied onto the latest upstream WPT. Servo's copy of the Web Platform Tests may be out of sync.
        .extend_from_slice(unsafe { a.as_slice() });
    },
    None => {},
};
Would be good to avoid the copy (see the note in the spec). Not sure how, could be more of a follow-up, but we should file an issue and add a TODO note.
In the new implementation, this copying only happens if there are leftover bytes. Otherwise, it would just use the input slice directly. Any bytes that are not consumed will then be stored in the decoder's I/O queue
What do you mean by the new implementation? In the current code, the extend_from_slice call clones the data, right? I think to avoid the copy you'd have to store the buffer in the I/O queue, and pass slices of it to the decoder?
Good for a follow-up.
I meant the changes introduced in a5bc680. GitHub somehow didn't mark this as outdated
I think the confusion comes from the imperfect mapping between the encoder/decoder implementation we are using, i.e.
This I believe is already taken care of by
I believe this is also already handled within
Yes it should be broken up but not as per the above. I have moved
I think this is a misunderstanding. Isn't the "streaming" part referring to the
So I've had another look at your implementation and the spec. What you implemented--passing "do not flush" along as "last" for the text decoder, and passing "last" as true in the flush algo for the stream decoder--makes conceptual sense to me, and it also seems to match what the JS examples do. And it also matches how encoding_rs describes incremental decoding. But what I find confusing is that it doesn't seem to be how the spec is written. For example, the "decode and enqueue" algo only enqueues a chunk on the transform when "end of queue" is received, not each time an item is processed. Also, for the text decoder, a result is only returned when "end of queue" is encountered, not each time an item is processed. So the spec, to me at least, seems to imply that one should keep appending processed items to an output, but only return or enqueue the result of serialising the output when reaching end of queue. Also note that "end of queue" is a special item that means there will be no more items, so it's not the same thing as "the i/o queue is currently empty", which is indeed reached at the end of processing each item. In conclusion, I'd say we stick to your implementation, but you could consider opening an issue at the spec to point out the potential weirdness.
So, last update in the above discussion: my reading of "end of queue" was wrong; it isn't a special value enqueued to signal that no more items will be enqueued, rather it just means "the queue is currently empty". So for each call to decode the "end of queue" will be hit. The way "streaming" works is that the end of queue will only be processed when the stream option is false, or, in the streaming decoder, when the stream is closed and the flush algorithm is called. The point of that seems to be to allow more data to be received before "end of queue" is finally processed. So your implementation is right, and the spec is unambiguous.
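For reference, the incremental pattern on the encoding_rs side then looks roughly like this (a standalone sketch against encoding_rs 0.8, not the code in this PR): last stays false while the stream option is set, and only the flush passes true.

use encoding_rs::UTF_8;

fn main() {
    let mut decoder = UTF_8.new_decoder();
    // "café" split so that the é sequence straddles two chunks.
    let chunks: &[&[u8]] = &[b"caf", b"\xC3", b"\xA9"];
    let mut out = String::new();

    for chunk in chunks {
        out.reserve(decoder.max_utf8_buffer_length(chunk.len()).unwrap());
        // last = false: incomplete sequences are buffered, not reported.
        let _ = decoder.decode_to_string(chunk, &mut out, false);
    }
    // Flush (last = true): any still-incomplete sequence would now be handled.
    out.reserve(decoder.max_utf8_buffer_length(0).unwrap());
    let _ = decoder.decode_to_string(&[], &mut out, true);

    assert_eq!(out, "café");
}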
Ok so from my side we just need to throw the error.
@Taym95 how is it looking on the buffer use?
// <https://encoding.spec.whatwg.org/#dom-textdecoder-decode>
// Step 5.3.3 Otherwise, if result is error, throw a TypeError.
DecoderResult::Malformed(_, _) => {
    return Err(Error::Type("Decoding failed".to_owned()));
So here I think you need to throw the type error, as done here, but using Error::Type.
I didn't realize that throwing is different from just returning an Err. Would you mind providing some more context regarding why both are used?
The realm problems found in wpt/tests/encoding/streams/realms.window.js are actually solved by manually specifying the realm when creating the rejection Promise: b66a365#diff-3b8b0e94fec7d463a8c649b8679235a8b77cbf2663a934f3b1ba16a7244ffff5
There is still one TextDecoderStream test that fails in that subtest, which is the case of an invalid chunk, and it seems like the root cause is a bug in encoding_rs: encoding_rs::Decoder doesn't return an error with the invalid input [0xFF]. See my comment below for an example.
[TypeError for chunk with the wrong type should come from constructor realm of TextDecoderStream]
  expected: FAIL
Those failing tests should be fixed by throwing the type error.
  expected: ERROR

[decode-utf8.any.worker.html]
  expected: ERROR
Note to self: the error is from servo/tests/wpt/tests/common/sab.js, line 1 in 0e4b2bf (const createBuffer = (() => {): "sabConstructor is not a constructor".
[TypeError for chunk with the wrong type should come from constructor realm of TextDecoderStream]
  expected: FAIL

[TypeError for invalid chunk should come from constructor realm of TextDecoderStream]
This is still failing because encoding_rs::Decoder::decode_to_string_without_replacement fails to return DecoderResult::Malformed with invalid input [0xFF]. This can be reproduced with this: https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=8225345ee8ffedee5a4b6ecd0161beb2
Opened an issue in the encoding_rs repo: hsivonen/encoding_rs#122
This might be just me using it wrong though, because changing last to true would indeed give Malformed.
Actually, the problem is that when last is set to false (not flushing), the number of bytes read actually includes the malformed byte, so when it gets to flushing, i.e. last is set to true, it sees an empty slice because the malformed bytes have already been removed.
It seems that encoding_rs::Decoder only shows this problem when decoding an invalid input of length 1 with last set to false, which is exactly how the WPT test case is written.
Good for a follow-up.
Do we want an issue in Servo tracking this?
Solved in 3039820
I didn't realize that throwing is different from just returning an Err. Would you mind providing some more context regarding why both are used?

If you return an error from a method implementing WebIDL (like Decode), then the bindings will throw an exception.
But in the case where the transform algorithm is called, the error is not returned to WebIDL; we do propagate the error by rejecting the promise with it, but that is not the same thing as throwing an exception. I don't see any tests asserting that one is thrown, however (you are right to point out that realms.window.js is unrelated), so I would say we leave it as it is.
FYI, if you wanted to manually set the exception, you'd have to return Error::JSFailed, to prevent the bindings from setting an exception on top of the one already set, and then to propagate it to the promise you'd have to manually get the pending exception, as is done at
unsafe { assert!(JS_GetPendingException(*cx, rval.handle_mut())) };
LGTM, thanks and good to see another use of streams in the codebase.
Couple of comments to address and issues to file, and then it's ready for the merge queue.
// <https://encoding.spec.whatwg.org/#decode-and-enqueue-a-chunk>
// Step 2. Push a copy of bufferSource to decoder’s I/O queue.
//
// NOTE: try to avoid this copy unless there are bytes left
Is this a note or a todo? If it's a note, it would be good to explain how the copy is avoided.
This is a note. Here we only call extend_from_slice on the io_queue if we have leftover bytes from incomplete input. Otherwise, the input will be pointing to the input chunk.
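In other words, the shape is roughly this (illustrative only, with made-up names rather than the actual fields):

// Decode straight from the incoming chunk when the I/O queue is empty;
// only copy (extend_from_slice) when leftover bytes have to be prepended.
fn bytes_to_decode<'a>(io_queue: &'a mut Vec<u8>, chunk: &'a [u8]) -> &'a [u8] {
    if io_queue.is_empty() {
        chunk
    } else {
        io_queue.extend_from_slice(chunk);
        io_queue.as_slice()
    }
}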
Ok I see, the code is clear actually.
let decoder = if ignoreBOM {
    encoding.new_decoder_without_bom_handling()
} else {
    encoding.new_decoder_with_bom_removal()
It turns out that, because of my ignorance, I had been creating new decoder instances with new_decoder(), which enables BOM sniffing, thus making the decoder not return immediately when it sees 0xFF. We now pass all the decoder-related test cases in the realms.window.js WPT subtest.
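A minimal standalone illustration of the difference (a sketch based on my understanding of encoding_rs's BOM sniffing; the exact return values are best double-checked in the playground):

use encoding_rs::UTF_8;

fn main() {
    let mut out = String::with_capacity(16);

    // BOM-sniffing decoder: 0xFF could be the start of a UTF-16LE BOM,
    // so the byte is consumed and buffered instead of being reported.
    let mut sniffing = UTF_8.new_decoder();
    let (result, read) =
        sniffing.decode_to_string_without_replacement(&[0xFF], &mut out, false);
    println!("with BOM sniffing: {:?}, read = {}", result, read);

    // Without BOM handling: 0xFF can never start a valid UTF-8 sequence,
    // so a Malformed result shows up immediately even with last = false.
    let mut plain = UTF_8.new_decoder_without_bom_handling();
    let (result, read) =
        plain.decode_to_string_without_replacement(&[0xFF], &mut out, false);
    println!("without BOM handling: {:?}, read = {}", result, read);
}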
Ok awesome, thanks. Time to add this to the merge queue.
Updated the WPT expectations. Should be good now (see the try run: https://github.com/minghuaw/servo/actions/runs/16584399397)
This PR implements the `TextDecoderStream` interface. Other than introducing the necessary mod and webidl files corresponding to `TextDecoderStream`, this PR also involves some changes in `TextDecoder` and `TransformStream`:

- The common part that can be shared between `TextDecoder` and `TextDecoderStream` is extracted into a separate type, `script::dom::textdecodercommon::TextDecoderCommon`. This type could probably use a different name because there is an interface called `TextDecoderCommon` in the spec (https://encoding.spec.whatwg.org/#textdecodercommon) which just gets included in `TextDecoder` and `TextDecoderStream`.
- The three algorithms in `TransformStream` (`cancel`, `flush`, and `transform`) have all become enums with a `Js` variant for a JS function object and a `Native` variant for a Rust trait object. Whether the cancel algorithm needs this enum type is debatable, as I did not find any interface in the spec that explicitly sets the cancel algorithm.

Testing: Existing WPT tests `tests/wpt/tests/encoding/stream` should be sufficient

Fixes: #37723

Signed-off-by: minghuaw <michael.wu1107@gmail.com>
Signed-off-by: minghuaw <wuminghua7@huawei.com>
Signed-off-by: Minghua Wu <michael.wu1107@gmail.com>