Description
I need to convert an image to grayscale on the fly within the `forward` method of my module. I noticed that the following code:
```python
import numpy as np

img = np.random.random((96, 96, 3))
np.dot(img, [0.29, 0.58, 0.11])  # sum-product over the last axis -> shape (96, 96)
```
works fine when done entirely in NumPy, but the equivalent in tensor form:
```python
import numpy as np
import torch

img = np.random.random((96, 96, 3))
torch.dot(torch.Tensor(img), torch.Tensor([0.29, 0.58, 0.11]))
```
it throws the error:
```
RuntimeError: 1D tensors expected, got 3D, 1D tensors at C:\w\1\s\tmp_conda_3.8_075429\conda\conda-bld\pytorch_1579852542185\work\aten\src\TH/generic/THTensorEvenMoreMath.cpp:733
```
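For reference, the purely 1-D case runs without error, so I assume `torch.dot` is strictly limited to 1-D inputs and does not broadcast at all (that's my reading of the error, not something I've confirmed in the docs):

```python
import torch

# 1-D against 1-D works fine
a = torch.Tensor([1.0, 2.0, 3.0])
w = torch.Tensor([0.29, 0.58, 0.11])
torch.dot(a, w)  # tensor(1.7800)
```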
I somewhat understand where this might come from, but I expected that, since the underlying array and tensor have the same dimensions, the broadcasting behavior would be the same. Can someone help me understand what's going on here?
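In case it's useful, here is the workaround I'm currently using. My understanding (which may be wrong) is that `torch.matmul` treats the trailing dimension as a matrix-vector product and broadcasts over the leading dimensions, which appears to reproduce the NumPy result:

```python
import numpy as np
import torch

img = torch.from_numpy(np.random.random((96, 96, 3))).float()
weights = torch.tensor([0.29, 0.58, 0.11])

# matmul contracts the last axis of img with the vector and
# broadcasts over the leading dimensions, giving shape (96, 96)
gray = torch.matmul(img, weights)  # equivalently: img @ weights
print(gray.shape)  # torch.Size([96, 96])
```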
cc @mruberry