Hi, I'm trying to do some linear operations on the outputs of two FFTs. Here is a minimal test case:
import torch
from torch.autograd import Variable
from torch.autograd.gradcheck import gradcheck
import pytorch_fft.fft.autograd as fft  # autograd FFT wrapper from this repo


class fft_autotest(torch.nn.Module):
    def __init__(self):
        super(fft_autotest, self).__init__()

    def forward(self, x1, x2):
        f = fft.Fft()
        # FFT of each real input, passing zeros as the imaginary parts
        x1_fre, x1_fim = f(x1, torch.zeros_like(x1))
        x2_fre, x2_fim = f(x2, torch.zeros_like(x2))
        return x1_fre + x2_fre


x1 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
x2 = Variable(torch.rand(3, 2).cuda(), requires_grad=True)
func = fft_autotest()
test = gradcheck(func, (x1, x2), eps=1e-2)
print(test)

which produces the following error:
RuntimeError: for output no. 0,
numerical:(
1.0000 1.0000 0.0000 0.0000 0.0000 0.0000
1.0000 -1.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 1.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 -1.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 1.0000 1.0000
0.0000 0.0000 0.0000 0.0000 1.0000 -1.0000
[torch.FloatTensor of size 6x6]
,
1.0000 1.0000 0.0000 0.0000 0.0000 0.0000
1.0000 -1.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 1.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 -1.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 1.0000 1.0000
0.0000 0.0000 0.0000 0.0000 1.0000 -1.0000
[torch.FloatTensor of size 6x6]
)
analytical:(
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
[torch.FloatTensor of size 6x6]
,
2 2 0 0 0 0
2 -2 0 0 0 0
0 0 2 2 0 0
0 0 2 -2 0 0
0 0 0 0 2 2
0 0 0 0 2 -2
[torch.FloatTensor of size 6x6]
)
The interesting observation is that the second analytical Jacobian equals the sum of the two numerical Jacobians. I tried with different output functions and this always holds. Any idea why this happens?
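For reference, here is a quick sanity check of what I would expect the analytical Jacobians to be (just my own check with numpy, assuming the usual unnormalized DFT convention): the real part of a length-2 FFT has Jacobian [[1, 1], [1, -1]] with respect to its input, which is exactly each 2x2 block of the numerical Jacobians above. Since the output is x1_fre + x2_fre, I would expect the analytical Jacobian with respect to each input to match the numerical one, rather than being all zeros for x1 and doubled for x2.

import numpy as np

# Real part of the length-2 DFT: X[0] = x[0] + x[1], X[1] = x[0] - x[1],
# so the Jacobian of the real part w.r.t. the input is [[1, 1], [1, -1]].
jac = np.fft.fft(np.eye(2), axis=-1).real  # row i is the DFT of the i-th unit vector
print(jac)
# [[ 1.  1.]
#  [ 1. -1.]]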
Thanks!