How to realize the parallel time-dependent variational principle (TDVP) algorithm for matrix product states?

Hello everyone!

Inspired by the TDVP parallelization first introduced in Phys. Rev. B 101, 235123 (2020) - Parallel time-dependent variational principle algorithm for matrix product states (aps.org), I want to ask whether it is possible to realize the parallel TDVP algorithm in ITensors and ITensorTDVP.

Thanks to Regarding Vidal’s representation of MPS - ITensor Julia Questions - ITensor Discourse, I already have some starting points.

Can anyone give me some suggestions or “sparks” to go on?

Thanks a lot!
Wang.

Maybe some others who have done this will be able to comment on things they have tried in Julia, but at a broad level, the answer is “yes,” it is possible to realize the parallel TDVP algorithm in ITensor. But if you are asking whether we have it implemented already in a way you can simply call by inputting a Hamiltonian, the answer is no.

There is some early work in the (experimental) ITensorParallel GitHub repo, which you might want to take a look at as a jumping-off point for working on your own code, since it may help you to learn some code patterns and Julia tools for distributing tensors and tensor calculations across multiple nodes.
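If it helps, the Julia tooling involved is mostly the standard Distributed/MPI machinery rather than anything ITensor-specific. Here is a minimal sketch of the generic Distributed.jl pattern for farming independent tensor contractions out to worker processes (this is not ITensorParallel's actual API, just the underlying idea):

```julia
using Distributed
addprocs(4)                # start 4 worker processes (e.g. one per node/core)
@everywhere using ITensors  # load ITensors on every worker

# Build a few independent contractions and distribute them with pmap.
i, j, k = Index(20, "i"), Index(20, "j"), Index(20, "k")
pairs = [(randomITensor(i, j), randomITensor(j, k)) for _ in 1:8]
results = pmap(t -> t[1] * t[2], pairs)  # each contraction runs on some worker
```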


Note that ITensorParallel.jl only implements parallelization over sums of MPOs, not real space parallelization.
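To make the "sums of MPOs" structure concrete, here is a rough sketch of splitting one OpSum into two partial Hamiltonians, H = H1 + H2 (the even/odd-bond split is an arbitrary illustrative choice, and `split_hamiltonian` is just a name for this example; distributing the resulting MPOs across workers is the part ITensorParallel handles separately):

```julia
using ITensors

# Sketch: split a Heisenberg-type OpSum into two partial Hamiltonians,
# so that H = H1 + H2. ITensorParallel.jl parallelizes over such sums of MPOs.
function split_hamiltonian(sites)
  L = length(sites)
  os1 = OpSum()
  os2 = OpSum()
  for i in 1:(L - 1)
    if iseven(i)  # put even bonds in H1, odd bonds in H2 (arbitrary choice)
      os1 += 0.5, "S+", i, "S-", i + 1
      os1 += 0.5, "S-", i, "S+", i + 1
      os1 += "Sz", i, "Sz", i + 1
    else
      os2 += 0.5, "S+", i, "S-", i + 1
      os2 += 0.5, "S-", i, "S+", i + 1
      os2 += "Sz", i, "Sz", i + 1
    end
  end
  return MPO(os1, sites), MPO(os2, sites)
end

sites = siteinds("S=1/2", 10)
H1, H2 = split_hamiltonian(sites)
```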


So, what I need to do is first transform the ordinary MPS into an inverse-canonical-gauge MPS with multiple orthogonality centers, and then use tdvp_site_update!() in ITensorTDVP to update the site tensors of every partition of the MPS in parallel within one sweep.
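As a first step, I am trying something like the rough sketch below for exposing the singular values S on a partition-boundary bond and building S^{-1}, which the parallel TDVP paper inserts between neighboring partitions so each one carries its own orthogonality center. `split_at_bond` is just a name I made up, and this is only the gauge manipulation, not the parallel update itself:

```julia
using ITensors

# Sketch: expose the singular values S on bond `b` of an MPS and build S^{-1}.
function split_at_bond(psi::MPS, b::Int)
  psi = orthogonalize(psi, b)
  # SVD the orthogonality-center tensor so that S lives on bond b
  U, S, V = svd(psi[b], uniqueinds(psi[b], psi[b + 1]))
  # Build S^{-1} on the same pair of bond indices
  # (in practice, small singular values should be truncated before inverting)
  Sinv = ITensor(inds(S)...)
  for n in 1:dim(inds(S)[1])
    Sinv[n, n] = 1 / S[n, n]
  end
  # Return the pieces: left partition ends with U (and S), right partition
  # starts with V absorbed into the next site tensor.
  return U, S, Sinv, V * psi[b + 1]
end
```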

I want to ask how to implement these processes in parallel in Julia or ITensors. How can I assign particular threads to do the local updates in each partition? To the best of my knowledge, one can run a Julia script with multiple threads via julia -t 8 xxx.jl.
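From what I have read, with julia -t 8 one does not pin work to particular threads; instead one spawns one task per partition and lets Julia's scheduler place the tasks on the available threads. Is something like the following rough sketch the right idea? Here `update_partition!` is just a placeholder name for the local TDVP updates inside one partition:

```julia
using Base.Threads

# Sketch: run the local updates of each MPS partition as its own task.
# The tasks are scheduled onto however many threads Julia was started with.
function parallel_half_sweep!(partitions, update_partition!)
  tasks = [Threads.@spawn update_partition!(p) for p in partitions]
  foreach(wait, tasks)  # block until every partition has finished its updates
  return partitions
end
```

(I suppose one also has to watch out for BLAS threads competing with these task-level threads.)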

Thanks a lot!

Hi!
I just want to ask whether there is any way to control the direction of a half sweep (left to right vs. right to left) by a keyword argument.
From Custom sweep direction in DMRG - DMRG and Numerical Methods - ITensor Discourse, one can pass order=1 to disable the reverse step.
But in the parallel 2-site TDVP algorithm, the sweep directions of two neighboring partitions of the MPS should be opposite.
Also, here I try to do an ordinary sweep of 2-site TDVP site by site, but the time-evolved MPS is not correct.
Are there mistakes in my code, or do I misunderstand the 2-site TDVP algorithm?
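As far as I understand, when writing the sweep by hand the direction is just the order in which the bonds are visited, together with the `ortho` keyword of `replacebond!`. A rough sketch of one forward two-site step, mirroring the manual updates in my code below:

```julia
using ITensors
using KrylovKit: exponentiate

# Rough sketch of one forward two-site TDVP step at bond b. The sweep direction
# is set by the order in which bonds are visited, with ortho="left" when going
# left to right and ortho="right" when going right to left.
function forward_step!(psi::MPS, PH, b::Int, dt; ortho="left")
  ITensors.orthogonalize!(psi, b)   # gauge so that position!(PH, psi, b) is valid
  ITensors.set_nsite!(PH, 2)
  position!(PH, psi, b)
  phi = psi[b] * psi[b + 1]
  phi, _ = exponentiate(PH, -im * dt, phi)  # forward time evolution by dt
  phi /= norm(phi)
  replacebond!(psi, b, phi; ortho=ortho)    # ortho sets where the center ends up
  return psi
end
```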

using ITensors
using ITensorTDVP
using KrylovKit: exponentiate  # exponentiate is provided by KrylovKit
using LinearAlgebra
using Statistics

L=3
sites=siteinds("S=1/2",L)
psi=randomMPS(sites;linkdims=10)

os=OpSum()
for i=1:L-1
    os += -1,"Splus",i,"Sminus",i+1
    os += -1,"Splus",i+1,"Sminus",i
    os += 1,"Sz",i,"Sz",i+1
end
os += 1,"Sz",L,"Sz",L
Hal=MPO(os,sites)

#time-evolved psi by tdvp()
psi_1=tdvp(Hal,-im*0.05,psi;maxdim=100,cutoff=1e-10,nsite=2,outputlevel=1,nsweeps=1)

###updating site tensors one by one

###from left to right
psi_t=copy(psi)
H_1=copy(Hal)

PH=ProjMPO(H_1)
##update site 1 and site 2, forward time-evolution
ITensors.orthogonalize!(psi_t,1)
ITensors.set_nsite!(PH,2)
position!(PH,psi_t,1)
phi1=psi_t[1]*psi_t[2]
phi1,info=exponentiate(PH, -im*0.05/2, phi1)
phi1 /= norm(phi1)
replacebond!(psi_t,1,phi1)

##update site 2, backward time-evolution
ITensors.orthogonalize!(psi_t,2)
ITensors.set_nsite!(PH,1)
position!(PH,psi_t,2)
phi1=psi_t[2]
phi1,info=exponentiate(PH, im*0.05/2, phi1)
phi1 /= norm(phi1)
psi_t[2]=phi1

##update site 2 and site 3, forward time-evolution
ITensors.orthogonalize!(psi_t,3)
ITensors.set_nsite!(PH,2)
position!(PH,psi_t,2)
phi1=psi_t[2]*psi_t[3]
phi1,info=exponentiate(PH, -im*0.05/2, phi1)
phi1 /= norm(phi1)
replacebond!(psi_t,2,phi1)

###from right to left
##update site 3 and site 2, forward time-evolution
ITensors.orthogonalize!(psi_t,3)
ITensors.set_nsite!(PH,2)
position!(PH,psi_t,2)
phi1=psi_t[2]*psi_t[3]
phi1,info=exponentiate(PH, -im*0.05/2, phi1)
phi1 /= norm(phi1)
replacebond!(psi_t,2,phi1;ortho="right")

##update site 2, backward time-evolution
ITensors.orthogonalize!(psi_t,2)
ITensors.set_nsite!(PH,1)
position!(PH,psi_t,2)
phi1=psi_t[2]
phi1,info=exponentiate(PH, im*0.05/2, phi1)
phi1 /= norm(phi1)
psi_t[2]=phi1

##update site 2 and site 1, forward time-evolution
ITensors.orthogonalize!(psi_t,1)
ITensors.set_nsite!(PH,2)
position!(PH,psi_t,1)
phi1=psi_t[1]*psi_t[2]
phi1,info=exponentiate(PH, -im*0.05/2, phi1)
phi1 /= norm(phi1)
replacebond!(psi_t,2,phi1)

The final updated MPS psi_t has the wrong indices, which are

MPS
[1] ((dim=2|id=886|"S=1/2,Site,n=1"), (dim=2|id=444|"Link,l=1"))
[2] ((dim=2|id=634|"Link,l=2"), (dim=2|id=305|"S=1/2,Site,n=2"), (dim=2|id=370|"Link,l=2"))
[3] ((dim=2|id=370|"Link,l=2"), (dim=2|id=886|"S=1/2,Site,n=1"))

Any suggestions would be greatly appreciated!

Hi, I combined the parallel DMRG code from ITensor with TDVP in C++ in this repo: GitHub - jansendavid/Parallel_tdvp_itensor: Code for running parallel TDVP using Itensor and local basis optimization.
It also contains different ways of incorporating local basis optimization.
Maybe it can be helpful!


Thank you very much! It helps a lot!!!
Wang.