Hello ITensors team,
I would like to run the built-in DMRG on larger MPOs using an NVIDIA GPU.
My naive guess, following your documentation, was that it could be done as shown below.
Is the built-in DMRG supposed to be compatible with GPU use at all? If it is, could you point me to a solution for the error below?
Thank you very much,
Tibidabo
Minimal code example:
using ITensors
using CUDA

let
  # Define the simple model
  N = 10
  sites = siteinds("S=1/2", N)
  os = OpSum()
  for j in 1:(N - 1)
    os += "Sx", j, "Sx", j + 1
    os += "Sy", j, "Sy", j + 1
    os += "Sz", j, "Sz", j + 1
  end
  H = MPO(os, sites)
  Hcu = cu(H)

  # Perform DMRG calculation
  psi0 = randomMPS(sites, 10)
  energy, psi = dmrg(Hcu, psi0; nsweeps=5, maxdim=200)
  println("Ground state energy on NVIDIA GPU with CUDA: $energy")
end
This leads to the following error:
ERROR: MethodError: no method matching DenseArray{ComplexF64, 3}(::UndefInitializer, ::Tuple{Int64, Int64, Int64})
The type `DenseArray{ComplexF64, 3}` exists, but no method is defined for this combination of argument types when trying to construct it.
Closest candidates are:
(::Type{<:AbstractArray})(::NDTensors.AllocateData.UndefInitializer, ::Tuple)
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/lib/AllocateData/src/initializers.jl:16
(::Type{<:AbstractArray})(::NDTensors.NeverAlias, ::AbstractArray)
@ ITensors ~/.julia/packages/ITensors/gRmgt/src/itensor.jl:673
(::Type{<:AbstractArray})(::NDTensors.AllowAlias, ::AbstractArray)
@ ITensors ~/.julia/packages/ITensors/gRmgt/src/itensor.jl:677
Stacktrace:
[1] similar(arraytype::Type{DenseVector{ComplexF64}}, dims::Tuple{Index{Int64}, Index{Int64}, Index{Int64}})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/abstractarray/similar.jl:10
[2] similar(storagetype::Type{NDTensors.Dense{…}}, dims::Tuple{Index{…}, Index{…}, Index{…}})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/tensorstorage/similar.jl:40
[3] similar(tensortype::Type{NDTensors.DenseTensor{…}}, dims::Tuple{Index{…}, Index{…}, Index{…}})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/tensor/similar.jl:22
[4] contraction_output(tensor1::NDTensors.DenseTensor{…}, tensor2::NDTensors.DenseTensor{…}, indsR::Tuple{…})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/dense/tensoralgebra/contract.jl:3
[5] contraction_output(tensor1::NDTensors.DenseTensor{…}, labelstensor1::Tuple{…}, tensor2::NDTensors.DenseTensor{…}, labelstensor2::Tuple{…}, labelsoutput_tensor::Tuple{…})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/tensoroperations/generic_tensor_operations.jl:62
[6] contract(tensor1::NDTensors.DenseTensor{…}, labelstensor1::Tuple{…}, tensor2::NDTensors.DenseTensor{…}, labelstensor2::Tuple{…}, labelsoutput_tensor::Tuple{…})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/tensoroperations/generic_tensor_operations.jl:108
[7] contract(::Type{…}, tensor1::NDTensors.DenseTensor{…}, labels_tensor1::Tuple{…}, tensor2::NDTensors.DenseTensor{…}, labels_tensor2::Tuple{…})
@ NDTensors ~/.julia/packages/NDTensors/EHP8P/src/tensoroperations/generic_tensor_operations.jl:91
[8] contract
@ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
[9] _contract(A::NDTensors.DenseTensor{…}, B::NDTensors.DenseTensor{…})
@ ITensors ~/.julia/packages/ITensors/gRmgt/src/tensor_operations/tensor_algebra.jl:3
[10] _contract(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/gRmgt/src/tensor_operations/tensor_algebra.jl:9
[11] contract(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/gRmgt/src/tensor_operations/tensor_algebra.jl:74
[12] *
@ ~/.julia/packages/ITensors/gRmgt/src/tensor_operations/tensor_algebra.jl:61 [inlined]
[13] *
@ ./operators.jl:596 [inlined]
[14] _makeR!(P::ProjMPO, psi::MPS, k::Int64)
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/abstractprojmpo/abstractprojmpo.jl:177
[15] makeR!
@ ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/abstractprojmpo/abstractprojmpo.jl:186 [inlined]
[16] position!
@ ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/abstractprojmpo/abstractprojmpo.jl:204 [inlined]
[17]
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/dmrg.jl:202
[18] dmrg
@ ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/dmrg.jl:158 [inlined]
[19] #dmrg#509
@ ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/dmrg.jl:28 [inlined]
[20] dmrg
@ ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/dmrg.jl:21 [inlined]
[21] #dmrg#515
@ ~/.julia/packages/ITensors/gRmgt/src/lib/ITensorMPS/src/dmrg.jl:388 [inlined]
[22] top-level scope
@ ~/cqed-tensor-network-study/quick_tests/test_gpus/test_itensors_gpu.jl:19
Some type information was truncated. Use `show(err)` to see complete types.
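My guess from the stack trace is that the contraction in _makeR! ends up mixing the GPU-backed MPO tensors with the CPU-backed MPS tensors, since I only called cu on H and not on psi0. Below is a minimal variant I was planning to try next, assuming the initial state also has to be transferred with cu (psi0cu is just my own name for the transferred state); please correct me if this is not the intended usage.

using ITensors
using CUDA

let
  N = 10
  sites = siteinds("S=1/2", N)
  os = OpSum()
  for j in 1:(N - 1)
    os += "Sx", j, "Sx", j + 1
    os += "Sy", j, "Sy", j + 1
    os += "Sz", j, "Sz", j + 1
  end
  H = MPO(os, sites)
  psi0 = randomMPS(sites, 10)

  # This time, move both the Hamiltonian MPO and the initial MPS to the GPU
  Hcu = cu(H)
  psi0cu = cu(psi0)
  energy, psi = dmrg(Hcu, psi0cu; nsweeps=5, maxdim=200)
  println("Ground state energy on NVIDIA GPU with CUDA: $energy")
end

If that is not the right pattern either, any pointer to the correct one would be much appreciated.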
My Environment
NVIDIA GeForce RTX 4090
julia> versioninfo()
Julia Version 1.11.1
Commit 8f5b7ca12ad (2024-10-16 10:53 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 16 × Intel(R) Core(TM) i9-14900KF
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, alderlake)
Threads: 1 default, 0 interactive, 1 GC (on 16 virtual cores)
Environment:
JULIA_EDITOR = code
JULIA_NUM_THREADS =
(@v1.11) pkg> status
Status `~/.julia/environments/v1.11/Project.toml`
[13e28ba4] AppleAccelerate v0.4.0
[052768ef] CUDA v5.5.2
⌃ [9136182c] ITensors v0.6.22
[33e6dc65] MKL v0.7.0
[856f044c] MKL_jll v2024.2.0+0
[8e850b90] libblastrampoline_jll v5.11.0+0
Info Packages marked with ⌃ have new versions available and may be upgradable.