[pytree] support collections.defaultdict type for Python pytree by XuehaiPan · Pull Request #113255 · pytorch/pytorch
Closed · wants to merge 13 commits
Changes from 1 commit
Update on "[pytree] support collections.defaultdict type for Python pytree"

cc zou3519

[ghstack-poisoned]
XuehaiPan committed Nov 30, 2023
commit 4ca0e385027a2b0486ea7b44dbac43cef52ccdb8
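For context: the change tracked by this stack registers collections.defaultdict as a pytree node type in torch.utils._pytree. The condensed diff below appears to consist largely of unrelated changes pulled in from main by this ghstack update, so the pytree edit itself is not visible here. A minimal usage sketch of the intended behavior (hypothetical example, not taken from the diff; it assumes the default_factory is recorded in the treespec so that unflattening restores the original container type):

from collections import defaultdict

import torch
import torch.utils._pytree as pytree

d = defaultdict(list, {"a": torch.zeros(2), "b": torch.ones(3)})

# Flatten into leaf tensors plus a treespec describing the container structure.
leaves, spec = pytree.tree_flatten(d)
print(leaves)  # [tensor([0., 0.]), tensor([1., 1., 1.])]

# Unflattening should rebuild a defaultdict with the same default_factory.
rebuilt = pytree.tree_unflatten(leaves, spec)
assert type(rebuilt) is defaultdict
assert rebuilt.default_factory is list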
2 changes: 1 addition & 1 deletion .ci/docker/ci_commit_pins/executorch.txt
@@ -1 +1 @@
5159de436ced71c78bc1c22e3c1d93654c429227
bed91223f660685325147a5027348356f11cdd17
2 changes: 1 addition & 1 deletion .ci/docker/common/install_onnx.sh
@@ -32,7 +32,7 @@ pip_install coloredlogs packaging
retry pip_install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ --no-cache-dir --no-input ort-nightly==1.17.0.dev20231005006

pip_install -i https://test.pypi.org/simple/ onnx==1.15.0rc2
pip_install onnxscript==0.1.0.dev20231114 --no-deps
pip_install onnxscript==0.1.0.dev20231128 --no-deps

# Cache the transformers model to be used later by ONNX tests. We need to run the transformers
# package to download the model. By default, the model is cached at ~/.cache/huggingface/hub/
2 changes: 1 addition & 1 deletion .ci/pytorch/macos-build.sh
@@ -43,7 +43,7 @@ cross_compile_arm64() {
compile_arm64() {
# Compilation for arm64
# TODO: Compile with OpenMP support (but this causes CI regressions as cross-compilation were done with OpenMP disabled)
USE_DISTRIBUTED=0 USE_OPENMP=0 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py bdist_wheel
USE_DISTRIBUTED=0 USE_OPENMP=1 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py bdist_wheel
}

compile_x86_64() {
2 changes: 2 additions & 0 deletions .ci/pytorch/test.sh
@@ -291,6 +291,8 @@ test_inductor_distributed() {
pytest test/inductor/test_torchinductor.py -k test_multi_gpu
pytest test/inductor/test_aot_inductor.py -k test_non_default_cuda_device
pytest test/inductor/test_aot_inductor.py -k test_replicate_on_devices
pytest test/distributed/_tensor/test_dtensor_compile.py
pytest test/distributed/tensor/parallel/test_fsdp_2d_parallel.py

# this runs on both single-gpu and multi-gpu instance. It should be smart about skipping tests that aren't supported
# with if required # gpus aren't available
2 changes: 1 addition & 1 deletion .github/ci_commit_pins/torchbench.txt
@@ -1 +1 @@
94126be68e0eed9ac6561f5fc53bbfe487be6d99
99944a2fb8624947f9c0e2edc898ff42a16124da
4 changes: 4 additions & 0 deletions .github/labeler.yml
@@ -29,6 +29,10 @@
- .github/ci_commit_pins/**
- c10/core/Sym*
- torch/fx/experimental/symbolic_shapes.py
- test/distributed/_tensor/test_dtensor_compile.py
- test/distributed/tensor/parallel/test_fsdp_2d_parallel.py
- torch/distributed/_tensor/**
- torch/distributed/fsdp/**

"module: cpu":
- aten/src/ATen/cpu/**
10 changes: 10 additions & 0 deletions .github/workflows/_mac-test.yml
@@ -68,6 +68,16 @@ jobs:
pkill "${PROCESS}" || true
done

- name: Clean up leftover local python3 site-packages on MacOS pet runner
continue-on-error: true
run: |
for dir in ~/.local/lib/python3.*/site-packages; do
echo "Cleaning up ${dir}"
rm -rf "${dir}"
done



- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main

8 changes: 4 additions & 4 deletions .github/workflows/inductor.yml
@@ -89,10 +89,10 @@ jobs:
test-matrix: |
{ include: [
{ config: "cpu_inductor_huggingface", shard: 1, num_shards: 1, runner: "linux.12xlarge" },
{ config: "cpu_inductor_timm", shard: 1, num_shards: 2, runner: "linux.4xlarge" },
{ config: "cpu_inductor_timm", shard: 2, num_shards: 2, runner: "linux.4xlarge" },
{ config: "cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "linux.4xlarge" },
{ config: "cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "linux.4xlarge" },
{ config: "cpu_inductor_timm", shard: 1, num_shards: 2, runner: "linux.12xlarge" },
{ config: "cpu_inductor_timm", shard: 2, num_shards: 2, runner: "linux.12xlarge" },
{ config: "cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "linux.12xlarge" },
{ config: "cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "linux.12xlarge" },
{ config: "dynamic_cpu_inductor_huggingface", shard: 1, num_shards: 1, runner: "linux.12xlarge" },
{ config: "dynamic_cpu_inductor_timm", shard: 1, num_shards: 2, runner: "linux.12xlarge" },
{ config: "dynamic_cpu_inductor_timm", shard: 2, num_shards: 2, runner: "linux.12xlarge" },
1 change: 1 addition & 0 deletions .gitignore
@@ -275,6 +275,7 @@ MANIFEST

# Atom/Watchman required file
.watchmanconfig
.watchman

# Files generated by CLion
cmake-build-debug
1 change: 0 additions & 1 deletion .watchman

This file was deleted.

1 change: 1 addition & 0 deletions RELEASE.md
@@ -47,6 +47,7 @@ Following is the Release Compatibility Matrix for PyTorch releases:

| PyTorch version | Python | Stable CUDA | Experimental CUDA |
| --- | --- | --- | --- |
| 2.2 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26 |
| 2.1 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26 |
| 2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84 |
| 1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96 |
38 changes: 38 additions & 0 deletions aten/src/ATen/FunctionalTensorWrapper.cpp
@@ -230,6 +230,39 @@ void FunctionalTensorWrapper::replace_(const Tensor& other) {
TORCH_INTERNAL_ASSERT(!value_.key_set().has(c10::DispatchKey::Functionalize));
}
mutation_counter_++;
if (!at::GradMode::is_enabled() || InferenceMode::is_enabled()) {
// This mutation happened under no_grad or inference_mode
mark_mutation_during_no_grad_or_inference_mode();
}
}

bool FunctionalTensorWrapper::has_data_mutation() {
// Current tensor's data was mutated if its storage saw any mutations.
return functional_storage_impl()->generation() > 0;
}

void FunctionalTensorWrapper::set__impl(const FunctionalTensorWrapper* other) {
// self.set_(src) will cause self to have all of the tensor properties of src.
value_ = other->value_;
generation_ = other->generation_;
view_metas_ = other->view_metas_;
// FREEZE the old storage, preventing mutations to it.
// this is a huge pain to handle properly in all cases, so we ban it.
functional_storage_impl()->freeze();
// Unsafely swap out the storage with other's storage,
// disconnecting `self` with its view chain
storage_ = other->storage_;
/// explicitly mark the tensor as having its storage changed from set_()
// Otherwise, we don't actually have a 100% accurate way to check this.
// (We could check if the updated value has a new storage than the original value,
// but this won't also let us uniquely determine if the tensor **also**
// experienced a data mutation).
was_storage_changed_ = true;

auto sizes_ = value_.sym_sizes();
auto strides_ = value_.sym_strides();
auto storage_offset_ = value_.sym_storage_offset();
set_sizes_and_strides(sizes_, strides_, storage_offset_);
}

void FunctionalTensorWrapper::maybe_replace_storage(const Tensor& other) {
@@ -545,6 +578,11 @@ bool are_all_mutations_hidden_from_autograd(const Tensor& functional_tensor) {
return unsafeGetFunctionalWrapper(functional_tensor)->are_all_mutations_hidden_from_autograd();
}

bool are_all_mutations_under_no_grad_or_inference_mode(const Tensor& functional_tensor) {
TORCH_CHECK(isFunctionalTensor(functional_tensor));
return unsafeGetFunctionalWrapper(functional_tensor)->are_all_mutations_under_no_grad_or_inference_mode();
}

bool isFunctionalTensor(const at::Tensor& tensor) {
return tensor.unsafeGetTensorImpl()->key_set().has(c10::DispatchKey::Functionalize);
}
28 changes: 28 additions & 0 deletions aten/src/ATen/FunctionalTensorWrapper.h
@@ -80,10 +80,20 @@ struct TORCH_API FunctionalTensorWrapper : public c10::TensorImpl {
void mark_mutation_hidden_from_autograd() {
mutation_hidden_from_autograd_counter_++;
}
void mark_mutation_during_no_grad_or_inference_mode() {
mutation_during_no_grad_or_inference_mode_++;
}
// Are all the mutations happening to the tensor hidden from autograd
bool are_all_mutations_hidden_from_autograd() const {
return mutation_hidden_from_autograd_counter_ == mutation_counter_;
}
// Did all mutations happen under no_grad or inference_mode
// (We also need to ignore mutations fully hidden from autograd here)
bool are_all_mutations_under_no_grad_or_inference_mode() const {
return mutation_hidden_from_autograd_counter_ +
mutation_during_no_grad_or_inference_mode_ ==
mutation_counter_;
}

// Sync's the underlying tensor with its alias, if it's out of date. This
// involves two steps: 1) Apply any pending updates/mutations to the alias 2)
@@ -122,6 +132,18 @@ struct TORCH_API FunctionalTensorWrapper : public c10::TensorImpl {
// tensor by replaying the views off of the alias.
void mutate_view_meta(at::functionalization::ViewMeta meta);

// Custom implementation of self.set_(src)
void set__impl(const FunctionalTensorWrapper* other);

// Returns whether the current tensor's data was ever mutated
bool has_data_mutation();
//
// Returns whether the current FunctionalTensorWrapper
// experienced a set_() call.
bool was_storage_changed() {
return was_storage_changed_;
}

// The functionalization pass can be used to remove mutations.
// It does so by replacing any mutation op with it's corresponding
// out-of-place op, followed by a call to replace_(). e.g:
@@ -193,8 +215,11 @@ struct TORCH_API FunctionalTensorWrapper : public c10::TensorImpl {
// the copy_() from autograd as well.
uint64_t mutation_counter_ = 0;
uint64_t mutation_hidden_from_autograd_counter_ = 0;
uint64_t mutation_during_no_grad_or_inference_mode_ = 0;
bool has_metadata_mutation_ = false;
bool is_multi_output_view_ = false;
// Did the tensor experience a set_() call.
bool was_storage_changed_ = false;

size_t generation_ = 0;
std::vector<at::functionalization::ViewMeta> view_metas_;
@@ -256,6 +281,9 @@ TORCH_API void mark_mutation_hidden_from_autograd(
TORCH_API bool are_all_mutations_hidden_from_autograd(
const Tensor& functional_tensor);

TORCH_API bool are_all_mutations_under_no_grad_or_inference_mode(
const Tensor& functional_tensor);

// These two methods are XLA-specific logic and are no-ops
// for the normal functionalization flow.
TORCH_API void propagate_xla_data(
25 changes: 25 additions & 0 deletions aten/src/ATen/FunctionalizeFallbackKernel.cpp
@@ -299,6 +299,28 @@ static at::Tensor _unsafe_view_functionalize(const at::Tensor & self, at::SymInt
return out;
}

static at::Tensor& set__functionalize(at::Tensor& self, const at::Tensor& src) {
// error case
TORCH_CHECK(at::functionalization::impl::isFunctionalTensor(self) || !at::functionalization::impl::isFunctionalTensor(src),
"set__functionalize: Tried to mutate a non-functional tensor with a functional tensor, which is not allowed");

TORCH_CHECK(at::functionalization::impl::isFunctionalTensor(src),
"set__functionalize: We do not currently support x.set_(y) where y is not a FunctionalTensor. Please file an issue");

// nop case
if (!at::functionalization::impl::isFunctionalTensor(self) && !at::functionalization::impl::isFunctionalTensor(src)) {
at::AutoDispatchSkipFunctionalize guard;
return self.set_(src);
}

TORCH_INTERNAL_ASSERT(at::functionalization::impl::isFunctionalTensor(self));
TORCH_INTERNAL_ASSERT(at::functionalization::impl::isFunctionalTensor(src));
auto self_impl = at::functionalization::impl::unsafeGetFunctionalWrapper(self);
auto src_impl = at::functionalization::impl::unsafeGetFunctionalWrapper(src);
self_impl->set__impl(src_impl);
return self;
}

TORCH_LIBRARY_IMPL(_, Functionalize, m) {
m.fallback(torch::CppFunction::makeFromBoxedFunction<&functionalizeFallback>());
}
@@ -310,4 +332,7 @@ TORCH_LIBRARY_IMPL(aten, Functionalize, m) {
m.impl("lift_fresh_copy", TORCH_FN(lift_fresh_functionalize_copy));
m.impl("_to_copy", TORCH_FN(_to_copy_functionalize));
m.impl("_unsafe_view", TORCH_FN(_unsafe_view_functionalize));
// The overloads of set_() that take in a storage should never
// appear with torch.compile, because dynamo graph breaks
m.impl("set_.source_Tensor", TORCH_FN(set__functionalize));
}
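A side note on the set_.source_Tensor registration above: this kernel routes the Tensor overload of set_() through FunctionalTensorWrapper::set__impl, freezing the old storage and adopting the source's. A rough sketch of the kind of program this is meant to let functionalization handle (hypothetical usage via torch.func.functionalize; illustrative only, not taken from this PR):

import torch
from torch.func import functionalize

def swap_then_add(y):
    x = torch.zeros_like(y)
    x.set_(y)      # in-place storage swap on an intermediate tensor
    return x + 1   # functionalization removes the mutation from the trace

y = torch.arange(3.0)
out = functionalize(swap_then_add)(y)
print(out)  # tensor([1., 2., 3.])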
4 changes: 2 additions & 2 deletions aten/src/ATen/NestedTensorImpl.cpp
@@ -179,8 +179,8 @@ NestedTensorImpl::NestedTensorImpl(
"in the near future.");
auto storage_device = storage_.device();
TORCH_INTERNAL_ASSERT(
storage_device.is_cpu() || storage_device.is_cuda() || storage_device.is_privateuseone(),
"NestedTensorImpl storage must be either CUDA, CPU or ", get_privateuse1_backend(), " but got ",
storage_device.is_cpu() || storage_device.is_cuda() || storage_device.is_xpu() || storage_device.is_privateuseone(),
"NestedTensorImpl storage must be either CUDA, CPU, XPU or ", get_privateuse1_backend(), " but got ",
storage_device);
validate_nested_tensor_metadata(nested_sizes_, nested_strides_, storage_offsets_);
refresh_dim();
7 changes: 4 additions & 3 deletions aten/src/ATen/TensorIndexing.h
@@ -224,7 +224,8 @@ static inline Tensor applySlice(
return self;
}
}
return self.slice_symint(dim, start, stop, std::move(step));
return self.slice_symint(
dim, std::move(start), std::move(stop), std::move(step));
}

static inline Tensor applySelect(
@@ -258,7 +259,7 @@ static inline Tensor applySelect(
// if the index is negative, do not normalize it because that would fix the
// index on the current tensor size in the tracer. aten::select also works on
// negative indices
return self.select_symint(dim, index);
return self.select_symint(dim, std::move(index));
}

static inline Tensor boolToIndexingTensorCPUOrCUDA(
@@ -534,7 +535,7 @@ static inline Tensor applySlicing(
/*original_tensor=*/self,
/*index=*/obj,
/*dim=*/&dim,
/*specified_dims=*/&specified_dims,
/*specified_dims_ptr=*/&specified_dims,
/*real_dim=*/i,
/*outIndices=*/outIndices,
/*disable_slice_optimization=*/disable_slice_optimization,
2 changes: 0 additions & 2 deletions aten/src/ATen/core/ATen_fwd.h
@@ -4,8 +4,6 @@
// Forward declarations of core ATen types used in dispatch functions
namespace c10 {

template<typename T>
class optional;
template<typename T>
class List;
template<typename T>
1 change: 1 addition & 0 deletions aten/src/ATen/core/GeneratorForPrivateuseone.cpp
@@ -26,6 +26,7 @@ at::Generator GetGeneratorForPrivateuse1(c10::DeviceIndex device_index) {
"Please register a generator to the PrivateUse1 dispatch key, \
using the REGISTER_GENERATOR_PRIVATEUSE1 macro.");

// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
return GetGeneratorPrivate().value()(device_index);
}

1 change: 1 addition & 0 deletions aten/src/ATen/core/IListRef_test.cpp
@@ -240,6 +240,7 @@ TEST(IOptTensorListRefTest, Boxed_Iterate) {
for (const auto t : list) {
EXPECT_EQ(boxed[i].has_value(), t.has_value());
if (t.has_value()) {
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
EXPECT_TRUE((*boxed[i]).is_same(*t));
}
i++;
2 changes: 2 additions & 0 deletions aten/src/ATen/core/List_test.cpp
@@ -1136,8 +1136,10 @@ TEST(ListTest, canAccessOptionalStringByReference) {
c10::optional<std::string> str2 = list[2];
decltype(auto) strRef1 = listRef[1];
decltype(auto) strRef2 = listRef[2];
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
EXPECT_EQ("two", str1.value());
EXPECT_FALSE(str2.has_value());
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
EXPECT_EQ("two", strRef1.value().get());
EXPECT_FALSE(strRef2.has_value());
}
1 change: 1 addition & 0 deletions aten/src/ATen/core/PythonFallbackKernel.cpp
@@ -31,6 +31,7 @@ constexpr c10::DispatchKeySet after_Python_keyset = c10::DispatchKeySet(c10::Dis
// This guard assumes that tls_on_entry has a value.
struct StashTLSOnEntryGuard {
public:
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
StashTLSOnEntryGuard(): saved_(tls_on_entry.value()) {
tls_on_entry = c10::nullopt;
}
5 changes: 3 additions & 2 deletions aten/src/ATen/core/adaption.cpp
@@ -3,10 +3,11 @@
namespace c10 {
namespace impl {

void common_device_check_failure(optional<Device>& common_device, const at::Tensor& tensor, at::CheckedFrom methodName, at::CheckedFrom argName) {
void common_device_check_failure(Device common_device, const at::Tensor& tensor, at::CheckedFrom methodName, at::CheckedFrom argName) {
TORCH_CHECK(false,
"Expected all tensors to be on the same device, but "
"found at least two devices, ", common_device.value(), " and ", tensor.device(), "! "
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
"found at least two devices, ", common_device, " and ", tensor.device(), "! "
"(when checking argument for argument ", argName, " in method ", methodName, ")");
}

5 changes: 5 additions & 0 deletions aten/src/ATen/core/class_type.cpp
@@ -87,9 +87,11 @@ std::string ClassType::getForwardPreHookErrorMessage(int pre_hook_idx) const {
pre_hook_name + "(self, input: Tuple[" + input_types + "])";
std::string return_string =
"This error occurred while scripting the forward pre-hook '" +
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
pre_hook_name + "' on module '" + name()->name() +
"'. If you did not want to script this pre-hook remove it from the "
"original NN module before scripting. Pre-hooks for module '" +
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
name()->name() + "' are expected to have the following signature: "
+ pre_hook_schema + " with a return type of either 'None'" +
single_output + " or 'Tuple[" + input_types + "]'.";
@@ -112,6 +114,7 @@ std::string ClassType::getForwardHookErrorMessage(int hook_idx) const {
input_types + "], output: " + output_types + ")";
std::string return_string =
"This error occurred while scripting the forward hook '"
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
+ hook_name + "' on module " + name()->name() +
". If you did not want to script this hook remove it from" +
" the original NN module before scripting. This hook was" +
@@ -191,6 +194,7 @@ void ClassType::checkForwardPreHookSchema(
const FunctionSchema& pre_hook_schema) const {
const torch::jit::Function* pre_hook = forward_pre_hooks_[pre_hook_idx];
std::string hook_id =
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
"Pre-hook '" + pre_hook->name() + "' on module '" + name()->name() + "' ";
std::string pre_hook_err_msg = getForwardPreHookErrorMessage(pre_hook_idx) + "\n";

@@ -287,6 +291,7 @@ void ClassType::checkForwardHookSchema(
const FunctionSchema& hook_schema) const {
const torch::jit::Function* hook = forward_hooks_[hook_idx];
std::string hook_id =
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
"Hook '" + hook->name() + "' on module '" + name()->name() + "' ";
std::string hook_err_msg = getForwardHookErrorMessage(hook_idx) + "\n";
// Hooks are expecting three inputs: self, a Tuple containing the non-self
2 changes: 2 additions & 0 deletions aten/src/ATen/core/custom_class.cpp
@@ -68,6 +68,7 @@ static std::unordered_map<std::string, at::ClassTypePtr>& customClasses() {

void registerCustomClass(at::ClassTypePtr class_type) {
TORCH_INTERNAL_ASSERT(class_type->name());
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
auto name = class_type->name()->qualifiedName();
TORCH_CHECK(
!customClasses().count(name),
@@ -96,6 +97,7 @@ const std::unordered_set<std::string> getAllCustomClassesNames() {

bool isCustomClass(const c10::IValue& v) {
return v.isObject() && v.toObject()->type()->name() &&
// NOLINTNEXTLINE(bugprone-unchecked-optional-access)
getCustomClass(v.toObject()->type()->name()->qualifiedName());
}

You are viewing a condensed version of this merge commit; the full diff is available on the pull request page.