ITensorTDVP has issues with svd when using the new GPU backend

I am copying this over from a discussion on this post.

I am trying to use the new GPU backend (CUDA) with ITensorTDVP and am running into an issue with svd when I simply run tdvp on an MPS loaded onto the GPU. I believe it has something to do with the updates to the keyword arguments for svd, specifically the handling of the alg keyword. Do you know of any quick fixes for this issue?

Here is some minimal code that reproduces the issue:

using ITensors
using ITensorTDVP
using CUDA

N = 10
cutoff = 1e-12

s = siteinds("S=1/2", N)

# Heisenberg Hamiltonian on N sites, written as an OpSum
function heisenberg(N)
    os = OpSum()
    for j in 1:(N - 1)
        os += 0.5, "S+", j, "S-", j + 1
        os += 0.5, "S-", j, "S+", j + 1
        os += "Sz", j, "Sz", j + 1
    end

    return os
end

# Build the Hamiltonian MPO and transfer it to the GPU
H = cu(MPO(heisenberg(N), s))

# Random initial MPS, also transferred to the GPU
ψ0 = cu(randomMPS(s; linkdims=10))

# A single TDVP time step; this call triggers the svd error
ψ1 = tdvp(H, -0.1im, ψ0; cutoff)

This only seems to be a problem for two-site TDVP, however: if I set nsite=1 in the tdvp() call, there is no issue.
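
For reference, the one-site call that runs without error for me looks like this (same setup as the script above):

# One-site TDVP avoids the two-site truncation path and runs fine on the GPU
ψ1 = tdvp(H, -0.1im, ψ0; cutoff, nsite=1)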

Thanks for the report; we’ll look into it. Could you also post the full error message you get?

Thank you, and yes, here is the error message:

ERROR: LoadError: TypeError: in keyword argument alg, expected String, got a value of type LinearAlgebra.DivideAndConquer
Stacktrace:
  [1] svd(T::NDTensors.DenseTensor{ComplexF64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Dense{ComplexF64, CuArray{ComplexF64, 1, CUDA.Mem.DeviceBuffer}}}; mindim::Int64, maxdim::Int64, cutoff::Float64, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, alg::String, min_blockdim::Nothing)  
    @ NDTensors C:\Users\Kevin\.julia\packages\NDTensors\9WMtv\src\linearalgebra\linearalgebra.jl:88  
  [2] svd(A::ITensor, Linds::Tuple{Index{Int64}, Index{Int64}}; leftdir::Nothing, rightdir::Nothing, lefttags::TagSet, righttags::TagSet, mindim::Int64, maxdim::Int64, cutoff::Float64, alg::String, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing, utags::TagSet, vtags::TagSet)
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\tensor_operations\matrix_decomposition.jl:162
  [3] factorize_svd(A::ITensor, Linds::Tuple{Index{Int64}, Index{Int64}}; singular_values!::Nothing, ortho::String, alg::String, dir::Nothing, mindim::Int64, maxdim::Int64, cutoff::Float64, tags::TagSet, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing)
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\tensor_operations\matrix_decomposition.jl:614
  [4] factorize(A::ITensor, Linds::Tuple{Index{Int64}, Index{Int64}}; mindim::Int64, maxdim::Int64, cutoff::Float64, ortho::String, tags::TagSet, plev::Nothing, which_decomp::Nothing, eigen_perturbation::Nothing, svd_alg::String, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing, singular_values!::Nothing, dir::Nothing)
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\tensor_operations\matrix_decomposition.jl:808
  [5] replacebond!(M::MPS, b::Int64, phi::ITensor; normalize::Bool, swapsites::Nothing, ortho::String, which_decomp::Nothing, mindim::Int64, maxdim::Int64, cutoff::Float64, eigen_perturbation::Nothing, svd_alg::String, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing)     
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\mps\mps.jl:559
  [6] tdvp_site_update!(nsite_val::Val{2}, reverse_step_val::Val{true}, solver::ITensorTDVP.var"#solver#34"{ITensorTDVP.var"#solver#33#35"{Base.Pairs{Symbol, Float64, Tuple{Symbol}, NamedTuple{(:cutoff,), Tuple{Float64}}}}}, PH::ProjMPO, psi::MPS, b::Int64; current_time::Float64, outputlevel::Int64, time_step::ComplexF64, normalize::Bool, direction::Base.Order.ForwardOrdering, noise::Float64, which_decomp::Nothing, svd_alg::String, cutoff::Float64, maxdim::Int64, mindim::Int64, maxtruncerr::Float64)      
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:365
  [7] tdvp_site_update!(solver::ITensorTDVP.var"#solver#34"{ITensorTDVP.var"#solver#33#35"{Base.Pairs{Symbol, Float64, Tuple{Symbol}, NamedTuple{(:cutoff,), Tuple{Float64}}}}}, PH::ProjMPO, psi::MPS, b::Int64; nsite::Int64, reverse_step::Bool, current_time::Float64, outputlevel::Int64, time_step::ComplexF64, normalize::Bool, direction::Base.Order.ForwardOrdering, noise::Float64, which_decomp::Nothing, svd_alg::String, cutoff::Float64, maxdim::Int64, mindim::Int64, maxtruncerr::Float64)
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:157
  [8] tdvp_sweep(direction::Base.Order.ForwardOrdering, solver::Function, PH::ProjMPO, time_step::ComplexF64, psi::MPS; kwargs::Base.Pairs{Symbol, Real, NTuple{7, Symbol}, NamedTuple{(:current_time, :cutoff, :reverse_step, :sweep, :maxdim, :mindim, :noise), Tuple{Float64, Float64, Bool, Int64, Int64, Int64, Float64}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:80
  [9] tdvp_step(order::ITensorTDVP.TDVPOrder{2, Base.Order.ForwardOrdering()}, solver::Function, PH::ProjMPO, time_step::ComplexF64, psi::MPS; current_time::Float64, kwargs::Base.Pairs{Symbol, Real, NTuple{6, Symbol}, NamedTuple{(:cutoff, :reverse_step, :sweep, :maxdim, :mindim, :noise), Tuple{Float64, Bool, Int64, Int64, Int64, Float64}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:9
 [10] macro expansion
    @ C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:84 [inlined]
 [11] macro expansion
    @ .\timing.jl:382 [inlined]
 [12] tdvp(solver::Function, PH::ProjMPO, t::ComplexF64, psi0::MPS; kwargs::Base.Pairs{Symbol, Float64, Tuple{Symbol}, NamedTuple{(:cutoff,), Tuple{Float64}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:83
 [13] tdvp(solver::Function, H::MPO, t::ComplexF64, psi0::MPS; kwargs::Base.Pairs{Symbol, Float64, Tuple{Symbol}, NamedTuple{(:cutoff,), Tuple{Float64}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:150
 [14] #tdvp#41
    @ C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp.jl:47 [inlined]
 [15] top-level scope
    @ C:\Users\Kevin\Documents\VS Code\QSL\test.jl:25
 [16] include(fname::String)
    @ Base.MainInclude .\client.jl:476
 [17] top-level scope
    @ REPL[62]:1
 [18] top-level scope
    @ C:\Users\Kevin\.julia\packages\CUDA\rXson\src\initialization.jl:208
in expression starting at C:\Users\Kevin\Documents\VS Code\QSL\test.jl:25
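
Looking at the first frame, the svd method for the CuArray-backed tensor declares alg::String, but it is evidently being called with a LinearAlgebra.DivideAndConquer object rather than a string, which matches the type error at the top.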

Thanks. Could you also print the versions of the packages you are using?
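
For example, Pkg.status with explicit package names should print everything relevant:

using Pkg
# Report the versions of the packages that appear in the stacktrace
Pkg.status(["ITensors", "ITensorTDVP", "NDTensors", "CUDA"])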

Hi, thank you for the report. I have sent Matt a message about where the problem comes from and how to fix it in this package. In the meantime, you can use CUDA in the code you wrote by modifying the last line to look like this:

ψ1 = tdvp(H, -0.1im, ψ0; cutoff, svd_alg="qr_algorithm")
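
(If I understand correctly, this selects a QR-based SVD on the GPU instead of the divide-and-conquer algorithm named in the error message, which sidesteps the keyword mismatch.)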

This will be fixed by Refactor keyword argument processing by mtfishman · Pull Request #62 · ITensor/ITensorTDVP.jl · GitHub. I will register ITensorTDVP v0.2 tomorrow, which includes a fix for this issue.


Great, thanks everyone!

Should be fixed now in the latest ITensorTDVP version.
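
A regular package update should pull in the fix, for example:

using Pkg
Pkg.update("ITensorTDVP")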
