I need Float64 precision for high-accuracy computations on GPU, but I haven’t been able to find relevant documentation or discussions on how to switch from Float32 to Float64 in ITensor. For reference, here’s a minimal example:
```julia
using ITensors, ITensorMPS
using Random: Random
using CUDA
using cuTENSOR

device_id = 0
CUDA.device!(device_id)

function main(; dtype=Float64, device=identity)
    Random.seed!(1234)

    L = 2
    H = OpSum()
    H += 1.0, "Sz", 1, "Sz", 2
    sites = siteinds("S=1/2", L)

    H_MPO = device(MPO(dtype, H, sites))
    psi_init = device(random_mps(dtype, sites; linkdims=10))

    outputlevel = 0
    nsweeps = 50
    maxdim = [10]
    cutoff = [1e-10]

    E, psi = dmrg(H_MPO, psi_init; nsweeps, maxdim, cutoff, outputlevel)
    println("Energy data type: ", typeof(E))
    flush(stdout)

    GC.gc()
    CUDA.reclaim()
end

main(; dtype=Float64, device=cu)
```
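For reference, inspecting the element types right after the transfer might narrow down where the conversion happens. This is only a diagnostic sketch (I'm assuming `eltype` on an individual MPS/MPO tensor reports the storage element type, which is my reading of the ITensors API):

```julia
# Diagnostic sketch: check element types immediately after moving to the GPU,
# e.g. inside main() right after the device(...) calls above.
@show eltype(psi_init[1])  # element type of the first MPS tensor
@show eltype(H_MPO[1])     # element type of the first MPO tensor
```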
The printed energy data type is `Float32`, despite explicitly setting the data type of both `psi_init` and `H_MPO` to `Float64`. This suggests that the DMRG algorithm is actually operating in `Float32`. Could someone clarify how to properly configure ITensor/ITensorMPS to use `Float64` throughout the computation?