I am trying to write an iterative solver for a non-linear equation. Each iteration computes a large residual tensor `R2`, uses `R2` to update the amplitudes `T2`, and then uses the new amplitudes to compute new residuals; the process repeats until the norm of `R2` drops below a chosen threshold. Until now I had been using TensorOperations.jl, where the update of the `T2` tensor looked as follows:
```julia
using Serialization

function update_T2(T2::Array{Float64,4}, R2::Array{Float64,4}, Scaled_R2::Array{Float64,4})
    fvv = deserialize("fvv.jlbin")    # virtual-virtual Fock block
    foo = deserialize("foo.jlbin")    # occupied-occupied Fock block
    nv = deserialize("nv.jlbin")      # number of virtual orbitals
    nocc = deserialize("nocc.jlbin")  # number of occupied orbitals
    shiftp = 0.20                     # level shift added to the denominator
    # Scale each residual element by its orbital-energy denominator.
    for a in 1:nv, b in 1:nv, i in 1:nocc, j in 1:nocc
        Scaled_R2[a,b,i,j] = R2[a,b,i,j] / (fvv[a,a] + fvv[b,b] - foo[i,i] - foo[j,j] + shiftp)
    end
    # Update the amplitudes with the scaled residuals.
    for a in 1:nv, b in 1:nv, i in 1:nocc, j in 1:nocc
        T2[a,b,i,j] = T2[a,b,i,j] - Scaled_R2[a,b,i,j]
    end
    return T2
end
```
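For context, the surrounding iteration looks roughly like this. This is only a minimal sketch: `compute_T2` is not shown and `compute_R2`, `tol`, and `maxiter` are placeholder names and values, not my actual driver.

```julia
using LinearAlgebra

# Sketch of the outer loop; compute_R2, tol, and maxiter stand in for
# the real residual routine and convergence settings.
function solve_T2(T2::Array{Float64,4}; tol = 1e-8, maxiter = 100)
    Scaled_R2 = similar(T2)
    for iter in 1:maxiter
        R2 = compute_R2(T2)        # residuals from the current amplitudes
        norm(R2) < tol && break    # converged: Frobenius norm below threshold
        T2 = update_T2(T2, R2, Scaled_R2)
    end
    return T2
end
```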
I want to update `T2` the same way, but now `T2` has the type `ITensor`. So I tried to write the following code:
```julia
using ITensors, Serialization

function update_T2(T2::ITensor, R2::ITensor, Scaled_R2::ITensor)
    # a_1, a_2, i_1, i_2 are Index objects defined at global scope
    fvv = ITensor(deserialize("fvv.jlbin"), a_1, a_2)
    foo = ITensor(deserialize("foo.jlbin"), i_1, i_2)
    nv = deserialize("nv.jlbin")::Int64
    nocc = deserialize("nocc.jlbin")::Int64
    shiftp = 0.20
    # Same element-wise update as before, but using plain integer
    # indexing into the ITensors.
    for a in 1:nv, b in 1:nv, i in 1:nocc, j in 1:nocc
        Scaled_R2[a,b,i,j] = R2[a,b,i,j] / (fvv[a,a] + fvv[b,b] - foo[i,i] - foo[j,j] + shiftp)
    end
    for a in 1:nv, b in 1:nv, i in 1:nocc, j in 1:nocc
        T2[a,b,i,j] = T2[a,b,i,j] - Scaled_R2[a,b,i,j]
    end
    return T2
end
```
Here `a_1`, `a_2`, `i_1`, and `i_2` are the indices of the tensors `T2`, `Scaled_R2`, and `R2`. The ITensor version returns a different value of `T2` than the TensorOperations.jl version, even when given the same initial guess for `T2` and the same parameters. Is there any problem in the way I am updating the tensor element-wise?
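For reference, the indices and tensors are set up along these lines. This is a simplified sketch: the dimensions here are illustrative, since in the real code `nv` and `nocc` are deserialized from disk.

```julia
using ITensors

# Illustrative dimensions only.
nv, nocc = 4, 2
a_1 = Index(nv, "a_1")
a_2 = Index(nv, "a_2")
i_1 = Index(nocc, "i_1")
i_2 = Index(nocc, "i_2")
# All three four-index tensors share the same Index objects.
T2        = ITensor(zeros(nv, nv, nocc, nocc), a_1, a_2, i_1, i_2)
R2        = ITensor(zeros(nv, nv, nocc, nocc), a_1, a_2, i_1, i_2)
Scaled_R2 = ITensor(zeros(nv, nv, nocc, nocc), a_1, a_2, i_1, i_2)
```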