I am trying to speed up my code by using multiple GPUs at the same time, following Multiple GPUs · CUDA.jl (juliagpu.org). For that it is important to use unified memory, and usually it is enough to pass unified = true as an extra keyword argument. However, in my code I am using NDTensors.cu() instead of cu(), because cu() sometimes gives me errors like the one described here: TEBD with GPU - error with eigen - ITensor Julia Questions - ITensor Discourse
My problem is that when I pass unified = true to NDTensors.cu(), I get the following error:
```julia
using ITensors
using CUDA
using NDTensors

sites = siteinds("S=1/2", 50)

# cu(): this works without problems
A = cu(randomMPS(sites); unified = true)

# NDTensors.cu(): here I get
# MethodError: no method matching cu(::MPS; unified::Bool)
A = NDTensors.cu(randomMPS(sites); unified = true)
```
So, I am not sure how to use unified memory and NDTensors.cu() at the same time.
In the meantime, you could try using adapt(CuArray{Float64,<:Any,UnifiedMemory}, x), where x is your MPS. adapt is defined in the Adapt.jl package, so you'll have to install and load that package.
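A minimal sketch of that approach, assuming a recent CUDA.jl where the unified-memory type is spelled CUDA.UnifiedMemory (the exact name may differ across versions):

```julia
using ITensors
using CUDA
using Adapt

sites = siteinds("S=1/2", 50)
A = randomMPS(sites)

# Adapt the MPS storage to CuArrays backed by unified memory.
# CUDA.UnifiedMemory is assumed to be the memory-type parameter here.
A_gpu = adapt(CuArray{Float64,<:Any,CUDA.UnifiedMemory}, A)
```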
Also note that the latest syntax is random_mps, not randomMPS; we'll be deprecating randomMPS.
Hi @joacop16, we did change the implementation of our NDTensors.cu function, and using the keyword unified should throw an error because it is not an accepted keyword.
What version of ITensors/NDTensors are you using?
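For reference, one way to check the installed versions from the Julia REPL (a generic Pkg query, not specific to this issue):

```julia
using Pkg

# Print the installed versions of the relevant packages.
Pkg.status("ITensors")
Pkg.status("NDTensors")
```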
I ran your example with the code Matt suggests, and it does properly construct an MPS on GPU with UnifiedMemory.
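For what it's worth, one quick way to see where the adapted MPS (A_gpu from the sketch above) ended up is to display one of its tensors in the REPL; the storage type it prints should reference a unified-memory CuArray, though the exact output depends on your ITensors/NDTensors/CUDA.jl versions:

```julia
# Displaying a tensor of the adapted MPS shows its storage type, e.g. something like
# NDTensors.Dense{Float64, CuArray{Float64, 1, CUDA.UnifiedMemory}} (version-dependent).
A_gpu[1]
```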