[JIT] Compilation-induced discrepancy in F.instance_norm when passing input as running stats
#153315
Labels
module: correctness (silent) — issue that returns an incorrect result silently
oncall: jit
🐛 Bug Description
When scripting a model that calls F.instance_norm and passes a broadcasted view of the input as the running statistics, the JIT-compiled result differs from the eager-mode result.
🔍 Minimal Reproduction Code
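The original snippet did not survive in this report, so the following is a hypothetical sketch consistent with the title: the running statistics alias a broadcasted input tensor, and the scripted function is compared against eager mode. All tensor names and shapes here are assumptions, not the reporter's exact code.

```python
import torch
import torch.nn.functional as F

def fn(x: torch.Tensor, mean: torch.Tensor, var: torch.Tensor) -> torch.Tensor:
    # With use_input_stats=True (the default), instance_norm updates the
    # running stats in place, so stats derived from the input can expose
    # eager-vs-scripted differences.
    return F.instance_norm(x, running_mean=mean, running_var=var)

base = torch.randn(1, 3, 1, 1)
x = base.expand(2, 3, 4, 4)        # broadcasted input (zero-stride views)
mean = base.reshape(3).clone()     # hypothetical stats derived from the input
var = torch.ones(3)

eager = fn(x, mean.clone(), var.clone())
jit_fn = torch.jit.script(fn)
jit_out = jit_fn(x, mean.clone(), var.clone())
print("max abs diff:", (eager - jit_out).abs().max().item())
```

On an affected build the printed difference is nonzero even though both calls receive identical inputs; on a fixed build the two results match.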
Output: the scripted result silently diverges from the eager-mode result (the original output paste was not preserved).
Versions
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Could not collect
GCC version: Could not collect
Clang version: 20.1.2
CMake version: version 4.0.0
Libc version: N/A
Python version: 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] torch==2.0.1
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @chauhang @penguinwu