[ANN] Change to keyword argument behavior in ITensors.jl

As of ITensors.jl v0.3.49, functions like dmrg, factorize, and svd are stricter about how they handle keyword arguments: passing an unsupported keyword argument now raises an error instead of being silently ignored. For example:

julia> using ITensors

julia> i, j = Index.((2, 2))
((dim=2|id=219), (dim=2|id=59))

julia> A = randomITensor(i, j)
ITensor ord=2 (dim=2|id=219) (dim=2|id=59)
NDTensors.Dense{Float64, Vector{Float64}}

julia> svd(A, i; bad_kwarg=true)
ERROR: MethodError: no method matching svd(::ITensor, ::Index{Int64}; bad_kwarg::Bool)

Closest candidates are:
  svd(::ITensor, ::Any...; leftdir, rightdir, lefttags, righttags, mindim, maxdim, cutoff, alg, use_absolute_cutoff, use_relative_cutoff, min_blockdim, utags, vtags) got unsupported keyword argument "bad_kwarg"
   @ ITensors ~/.julia/dev/ITensors/src/tensor_operations/matrix_decomposition.jl:109
  svd(::ITensor; kwargs...)
   @ ITensors ~/.julia/dev/ITensors/src/tensor_operations/matrix_decomposition.jl:201

Stacktrace:
 [1] kwerr(::NamedTuple{(:bad_kwarg,), Tuple{Bool}}, ::Function, ::ITensor, ::Index{Int64})
   @ Base ./error.jl:165
 [2] top-level scope
   @ REPL[4]:1

Most users should not be affected, but if you update to that version you may see errors show up in your code, which you can fix by removing the unsupported keyword arguments.

This change has already helped us catch a few issues in downstream packages like ITensorGaussianMPS.jl and ITensorNetworks.jl, where unsupported keyword arguments were being passed to ITensors.jl functions; those have been fixed or are being fixed now.
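If you hit one of these errors, the fix is usually just to drop the offending keyword and keep only supported ones. A minimal sketch (the truncation keywords shown, maxdim and cutoff, are standard ITensors.jl options):

```julia
using ITensors

i, j = Index.((2, 2))
A = randomITensor(i, j)

# Before: errors on ITensors.jl >= v0.3.49,
# since bad_kwarg is not a supported keyword:
# U, S, V = svd(A, i; bad_kwarg=true)

# After: pass only supported keyword arguments
U, S, V = svd(A, i; maxdim=10, cutoff=1e-12)
```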

I am trying to use the new GPU backend (CUDA) with ITensorTDVP and am running into an issue with svd when I simply run tdvp on an MPS loaded onto the GPU. I believe it has something to do with these updates to the keyword arguments of svd, specifically the handling of the "alg" keyword. I had no trouble using ITensorTDVP with ITensorGPU in the past. Do you know of any quick fixes for this issue?

ERROR: TypeError: in keyword argument alg, expected String, got a value of type LinearAlgebra.DivideAndConquer
Stacktrace:
  [1] svd(T::NDTensors.DenseTensor{ComplexF64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Dense{ComplexF64, CuArray{ComplexF64, 1, CUDA.Mem.DeviceBuffer}}}; mindim::Int64, maxdim::Int64, cutoff::Float64, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, alg::String, min_blockdim::Nothing)  
    @ NDTensors C:\Users\Kevin\.julia\packages\NDTensors\9WMtv\src\linearalgebra\linearalgebra.jl:88  
  [2] svd(A::ITensor, Linds::Tuple{Index{Int64}, Index{Int64}}; leftdir::Nothing, rightdir::Nothing, lefttags::TagSet, righttags::TagSet, mindim::Int64, maxdim::Int64, cutoff::Float64, alg::String, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing, utags::TagSet, vtags::TagSet)
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\tensor_operations\matrix_decomposition.jl:162
  [3] factorize_svd(A::ITensor, Linds::Tuple{Index{Int64}, Index{Int64}}; singular_values!::Nothing, ortho::String, alg::String, dir::Nothing, mindim::Int64, maxdim::Int64, cutoff::Float64, tags::TagSet, 
use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing)
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\tensor_operations\matrix_decomposition.jl:614
  [4] factorize(A::ITensor, Linds::Tuple{Index{Int64}, Index{Int64}}; mindim::Int64, maxdim::Int64, cutoff::Float64, ortho::String, tags::TagSet, plev::Nothing, which_decomp::Nothing, eigen_perturbation::Nothing, svd_alg::String, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing, singular_values!::Nothing, dir::Nothing)
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\tensor_operations\matrix_decomposition.jl:808
  [5] replacebond!(M::MPS, b::Int64, phi::ITensor; normalize::Bool, swapsites::Nothing, ortho::String, which_decomp::Nothing, mindim::Int64, maxdim::Int64, cutoff::Float64, eigen_perturbation::Nothing, svd_alg::String, use_absolute_cutoff::Nothing, use_relative_cutoff::Nothing, min_blockdim::Nothing)     
    @ ITensors C:\Users\Kevin\.julia\packages\ITensors\7KqSL\src\mps\mps.jl:559
  [6] tdvp_site_update!(nsite_val::Val{2}, reverse_step_val::Val{true}, solver::ITensorTDVP.var"#solver#34"{ITensorTDVP.var"#solver#33#35"{Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, 
PH::ProjMPO, psi::MPS, b::Int64; current_time::Float64, outputlevel::Int64, time_step::ComplexF64, normalize::Bool, direction::Base.Order.ForwardOrdering, noise::Float64, which_decomp::Nothing, svd_alg::String, cutoff::Float64, maxdim::Int64, mindim::Int64, maxtruncerr::Float64)
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:365
  [7] tdvp_site_update!(solver::ITensorTDVP.var"#solver#34"{ITensorTDVP.var"#solver#33#35"{Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}}, PH::ProjMPO, psi::MPS, b::Int64; nsite::Int64, reverse_step::Bool, current_time::Float64, outputlevel::Int64, time_step::ComplexF64, normalize::Bool, 
direction::Base.Order.ForwardOrdering, noise::Float64, which_decomp::Nothing, svd_alg::String, cutoff::Float64, maxdim::Int64, mindim::Int64, maxtruncerr::Float64)
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:157
  [8] tdvp_sweep(direction::Base.Order.ForwardOrdering, solver::Function, PH::ProjMPO, time_step::ComplexF64, psi::MPS; kwargs::Base.Pairs{Symbol, Real, NTuple{7, Symbol}, NamedTuple{(:current_time, :reverse_step, :sweep, :maxdim, :mindim, :cutoff, :noise), Tuple{Float64, Bool, Int64, Int64, Int64, Float64, Float64}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:80
  [9] tdvp_step(order::ITensorTDVP.TDVPOrder{2, Base.Order.ForwardOrdering()}, solver::Function, PH::ProjMPO, time_step::ComplexF64, psi::MPS; current_time::Float64, kwargs::Base.Pairs{Symbol, Real, NTuple{6, Symbol}, NamedTuple{(:reverse_step, :sweep, :maxdim, :mindim, :cutoff, :noise), Tuple{Bool, Int64, Int64, Int64, Float64, Float64}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_step.jl:9
 [10] macro expansion
    @ C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:84 [inlined]
 [11] macro expansion
    @ .\timing.jl:382 [inlined]
 [12] tdvp(solver::Function, PH::ProjMPO, t::ComplexF64, psi0::MPS; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:83
 [13] tdvp
    @ C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:45 [inlined]
 [14] tdvp(solver::Function, H::MPO, t::ComplexF64, psi0::MPS; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:150
 [15] tdvp
    @ C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp_generic.jl:143 [inlined]
 [16] #tdvp#41
    @ C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp.jl:47 [inlined]
 [17] tdvp(H::MPO, t::ComplexF64, psi0::MPS)
    @ ITensorTDVP C:\Users\Kevin\.julia\packages\ITensorTDVP\V1Kco\src\tdvp.jl:46
 [18] top-level scope
    @ REPL[29]:1
 [19] top-level scope
    @ C:\Users\Kevin\.julia\packages\CUDA\7XnOO\src\initialization.jl:171

Can you try using the new package extension instead of ITensorGPU ([ANN] Initial release of new ITensor GPU backends)? You can use it by loading CUDA instead of loading ITensorGPU.
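For reference, the new extension is activated simply by loading CUDA alongside ITensors; a minimal sketch of a GPU tdvp run, assuming cu (from CUDA.jl, via Adapt) is used to move the state and Hamiltonian to the device (the model and parameters here are illustrative, not from the original post):

```julia
using ITensors, ITensorTDVP, CUDA  # note: no ITensorGPU

n = 10
s = siteinds("S=1/2", n)

# Illustrative Hamiltonian: nearest-neighbor Ising-like couplings
os = OpSum()
for b in 1:(n - 1)
  os += "Sz", b, "Sz", b + 1
end
H = MPO(os, s)
psi0 = randomMPS(s; linkdims=4)

# Move both operator and state to the GPU with cu
Hg = cu(H)
psig = cu(psi0)

# Imaginary/real time evolution on the GPU
psit = tdvp(Hg, -0.1im, psig; cutoff=1e-8, maxdim=50)
```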

Thanks for the quick reply. Sorry I wasn't clear about this, but I am no longer loading ITensorGPU when this problem occurs. This is what happens when I load CUDA directly.

I see. Could you start a new post and in that post include a minimal runnable example that reproduces the issue?

Sure, I can do that right now.