I am trying to use the GPU for faster performance, but it is currently slower than the CPU.
Below is the relevant part of my code.
t_cTEBD = @elapsed begin
    for t in tau:tau:ttotal
        psi_t_cpu = apply(gates_cpu, psi_t_cpu; cutoff=1e-8, maxdim=400)
        normalize!(psi_t_cpu)
    end
end
println("TEBD CPU Time = ", t_cTEBD)
t_gTEBD = @elapsed begin
    for t in tau:tau:ttotal
        psi_t_gpu = apply(gates_gpu, psi_t_gpu; cutoff=1e-8, maxdim=400)
        normalize!(psi_t_gpu)
    end
end
println("TEBD GPU Time = ", t_gTEBD)
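One thing worth checking before comparing these numbers: in Julia, the first call on GPU data triggers kernel compilation, and CUDA operations launch asynchronously, so `@elapsed` can both over-count (compilation on the first iteration) and under-count (kernels still in flight when the timer stops). A fairer GPU timing warms up first and synchronizes at the end. A minimal sketch, assuming CUDA.jl is loaded and `gates_gpu`/`psi_t_gpu` already live on the device:

```julia
using CUDA

# Warm-up: the first apply on GPU data includes JIT/kernel compilation,
# which would otherwise be charged to the timed loop.
psi_warm = apply(gates_gpu, psi_t_gpu; cutoff=1e-8, maxdim=400)

# CUDA.@sync blocks until all queued GPU work is done before @elapsed stops.
t_gTEBD = @elapsed CUDA.@sync begin
    for t in tau:tau:ttotal
        psi_t_gpu = apply(gates_gpu, psi_t_gpu; cutoff=1e-8, maxdim=400)
        normalize!(psi_t_gpu)
    end
end
println("TEBD GPU Time = ", t_gTEBD)
```

This won't change the underlying SVD bottleneck discussed below, but it makes the CPU/GPU comparison itself trustworthy.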
Roughly speaking, you will only see good speedups running on GPU when your tensors are large (and dense) and the algorithm you are running is dominated by tensor contractions; tensor factorizations (particularly truncated SVD) generally aren't sped up very much on GPU.
Standard formulations of TEBD (like the one we use) aren't that well suited to running on GPU, since they are dominated by tensor factorizations such as truncated SVD. There is promising research circumventing that issue ([2212.09782] Fast Time-Evolution of Matrix-Product States using the QR decomposition) that replaces the truncated SVD with a clever series of QR decompositions (which run comparatively well on GPU), though we haven't implemented that in ITensor yet, so you would need to implement it yourself if you want to try it.

We plan to implement that alternative version of TEBD (and/or make it easier for users to implement it themselves in a minimal way), but we are in the middle of rewriting a lot of our code right now, so we aren't really adding new features like that until the rewrite is more complete. Alternatively, algorithms like TDVP may be more amenable to running on GPU.
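For reference, trying TDVP on GPU follows the same pattern as the TEBD code above. This is only a sketch, assuming the CUDA.jl backend where `cu` transfers ITensor objects to the device, and using hypothetical names `H` and `psi0` for the Hamiltonian MPO and initial MPS:

```julia
using ITensors, ITensorMPS, CUDA

# Build H (MPO) and psi0 (MPS) on the CPU as usual, then move them to the GPU.
H_gpu = cu(H)
psi0_gpu = cu(psi0)

# TDVP computes exp(t * H) * psi0; the factor -im gives real-time evolution
# exp(-im * ttotal * H) * psi0, split here into 20 steps.
psi_t = tdvp(H_gpu, -im * ttotal, psi0_gpu; nsteps=20, cutoff=1e-8, maxdim=400)
```

TDVP spends more of its time in contractions and local Krylov exponentiation relative to truncated SVDs, which is why it tends to map better onto the GPU than standard TEBD.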
ERROR: LoadError: MethodError: no method matching tdvp(::MPO, ::MPS; dt::Float64, nsweeps::Int64, maxdim::Int64)
The function tdvp exists, but no method is defined for this combination of argument types.
Here’s the docstring for the tdvp function. As you can see below, it expects the arguments in a different order from how you passed them.
tdvp(operator, t::Number, init::MPS; time_step, nsteps, kwargs...)

Use the time dependent variational principle (TDVP) algorithm to compute exp(t * operator) * init
using an efficient algorithm based on alternating optimization of the MPS tensors and local Krylov
exponentiation of operator.

Specify one of time_step or nsteps. If they are both specified, they must satisfy
time_step * nsteps == t. If neither are specified, the default is nsteps=1, which means that
time_step == t.

Returns:
• state::MPS - time-evolved MPS
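So, per the docstring, the evolution time is the second positional argument rather than keywords like `dt` and `nsweeps`. A sketch of a corrected call, where `H::MPO`, `psi::MPS`, and the step size `dt` are assumed to be the objects from the failing call:

```julia
nsteps = 10
dt = 0.1

# Total time t is the second positional argument; the per-step size goes in as
# time_step, and the two must satisfy time_step * nsteps == t.
psi_t = tdvp(H, -im * dt * nsteps, psi; time_step=-im * dt, maxdim=400)
```

Passing `nsteps=nsteps` instead of `time_step` would be equivalent here, since the docstring says either one (but consistently with `t`) may be specified.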