
Description
When using direct assignment, if the tensor being assigned is a function of the parameters, those parameters don't seem to get a gradient; e.g. params.W1[i] = params.W2 * x results in a zero gradient for W2. The following code is a minimal test case:
grad = require 'autograd'
torch = require 'torch'

params = {
   W = torch.range(0, 8):view(3, 3),
   storage = torch.zeros(3, 3)
}

-- Directly assign one row of params.storage from a function of
-- params.W, then reduce everything to a scalar loss.
function f(params, x)
   params.storage[2] = params.W * x
   return torch.mean(params.storage)
end

-- grad(f) returns a function computing the gradient table and the
-- value of f.
grads, _ = grad(f)(params, torch.ones(3))
print(grads.W)
The gradient of W here is all zeros, while it should be torch.ones(3, 3) / 9: mean(params.storage) reduces to (1/9) * sum_i (W * x)[i], so d(mean)/dW[i][j] = x[j] / 9 = 1/9 for every entry when x = torch.ones(3).
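For comparison, here is a sketch of a possible workaround that avoids the in-place write, assuming torch.cat and torch.view are among the operations autograd tracks (f2 is a hypothetical variant of f, not part of the original report):

-- Hypothetical workaround sketch: build the storage tensor
-- functionally instead of writing into a pre-allocated one, so the
-- dependency on W stays visible to autograd. Assumes torch.cat and
-- torch.view are supported operations.
function f2(params, x)
   local row = torch.view(params.W * x, 1, 3)   -- the row we would have assigned
   local zeros = torch.zeros(1, 3)              -- the rows left untouched
   local storage = torch.cat(torch.cat(zeros, row, 1), zeros, 1)
   return torch.mean(storage)
end

With this version, grad(f2)(params, torch.ones(3)) should give the expected torch.ones(3, 3) / 9, since no in-place assignment hides the dependency on W.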
When I try to run the same thing with {optimize = true}, I get an error (full traceback below).
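For concreteness, this is the call that triggers it, using the documented grad(f, {optimize = true}) options form with the same f and params as above:

-- Optimized (code-generating) mode: autograd generates and compiles
-- code for f instead of interpreting the tape directly.
local df = grad(f, { optimize = true })
local grads = df(params, torch.ones(3))  -- this call errors out

The error: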
/home/vanmerb/torch/install/bin/luajit: ...re/lua/5.1/autograd/runtime/codegen/backend/lua/init.lua:550: attempt to index local 'node' (a nil value)
stack traceback:
...re/lua/5.1/autograd/runtime/codegen/backend/lua/init.lua:550: in function 'addNodeTargets'
...re/lua/5.1/autograd/runtime/codegen/backend/lua/init.lua:572: in function 'generateCode'
...re/lua/5.1/autograd/runtime/codegen/backend/lua/init.lua:748: in function 'generateFn'
.../install/share/lua/5.1/autograd/runtime/codegen/init.lua:140: in function <.../install/share/lua/5.1/autograd/runtime/codegen/init.lua:114>
autograd_subtensor.lua:14: in main chunk
[C]: in function 'dofile'
...merb/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405e90