Hamiltonian time evolution is taking too long to run

Hello!

I would like to calculate the time evolution under the Hamiltonian below and the von Neumann entropy of the evolved state.
But after a while the program becomes very slow.
What is the cause of this slowdown? Can I improve the running speed by changing the program itself?

using ITensors
using ITensors.HDF5


# von Neumann entanglement entropy across the bond between sites b-1 and b
function entropy_von_neumann(psi::MPS, b::Int)
  s = siteinds(psi)
  orthogonalize!(psi, b)
  _, S = svd(psi[b], (linkind(psi, b - 1), s[b]))
  SvN = 0.0
  for n in 1:dim(S, 1)
    p = S[n, n]^2
    p > 0.0 || continue  # skip zero singular values to avoid log(0)
    SvN -= p * log(p)
  end
  return SvN
end

N = 144
tau = 0.05
ttotal = 20
h = [2 * cos(2 * pi * (sqrt(5) - 1) / 2 * i) for i in 1:N]
J = 1.0
V = 4.0
s = siteinds("S=1/2", N; conserve_qns=true)
cutoff = 1E-8
hh = [1.75]  # field-strength prefactor(s) to loop over
L_matrix = zeros(Float64, Int(ttotal / tau) + 1, N)
for (k, h_val) in enumerate(hh)
    gates = ITensor[]
    for j in 1:(N - 1)
        s1 = s[j]
        s2 = s[j + 1]
        # bond Hamiltonian: hopping and interaction (hj1) plus the field term on site j (hj2)
        hj1 = J * op("S+", s1) * op("S-", s2) +
              J * op("S-", s1) * op("S+", s2) +
              V * (op("Sz", s1) + 0.5 * op("Id", s1)) * (op("Sz", s2) + 0.5 * op("Id", s2))
        hj2 = 1 / 2 * h_val * h[j] * (op("Sz", s1) * op("Id", s2) + op("Id", s1) * op("Id", s2))
        # half time step for a second-order Trotter decomposition (gates are reversed below)
        Gj = exp(-im * tau / 2 * (hj1 + hj2))
        push!(gates, Gj)
    end
    append!(gates, reverse(gates))
    psi = MPS(s, n -> isodd(n) ? "Up" : "Dn")
    psi_f_t = psi

    for (i, t) in enumerate(0:tau:ttotal)
        t ≈ ttotal && break
        psi_f_t = apply(gates, psi_f_t; cutoff)
        normalize!(psi_f_t)
        # Calculate entropy
        for b=2:N-1
          SvN = entropy_von_neumann(psi_f_t, b)
          L_matrix[i + 1, b] = SvN
        end
        println("$t")
    end

end

My Hamiltonian is as follows:
\hat{H}=\sum_{j=0}^{N-1}\left[h_j\hat{n}_j+J(\hat{c}_j^\dagger\hat{c}_{j+1}+\hat{c}_{j+1}^\dagger\hat{c}_j)+V\hat{n}_j\hat{n}_{j+1}\right]
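For reference, the same Hamiltonian can also be written directly as an MPO with OpSum on "Fermion" sites (a minimal sketch, equivalent to the spin-1/2 form in my gate code above; I have not benchmarked it):

using ITensors

# Sketch: the Hamiltonian above as an OpSum / MPO on fermionic sites.
# The TEBD code above uses the equivalent spin-1/2 form with n_j = Sz_j + 1/2.
function build_hamiltonian(N, J, V, h)
  sites = siteinds("Fermion", N; conserve_qns=true)
  os = OpSum()
  for j in 1:(N - 1)
    os += J, "Cdag", j, "C", j + 1   # hopping c†_j c_{j+1}
    os += J, "Cdag", j + 1, "C", j   # hopping c†_{j+1} c_j
    os += V, "N", j, "N", j + 1      # nearest-neighbor interaction
  end
  for j in 1:N
    os += h[j], "N", j               # on-site potential h_j n_j
  end
  return MPO(os, sites), sites
end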
Thank you!

When you evolve a pure state of a closed system under unitary time evolution (regular "real" time evolution) the entanglement generically grows, and often at a pretty fast rate. When starting from a state which is far from an eigenstate (e.g. a product state), the resulting growth of the MPS bond dimension is usually exponential. This is a well documented issue with time evolution using MPS and tensor networks, which is why in many papers you will see times only going up to about 10 or 20 or so (in appropriate units).
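You can confirm this in your own run by printing the MPS bond dimension after each step. Here is a minimal sketch, assuming the same `gates`, `tau`, `ttotal`, and `cutoff` as in your script; `maxlinkdim` is the ITensors function that reports the largest bond dimension, and the cost of each step grows roughly like its cube:

using ITensors

# Hedged sketch: same TEBD loop as in the question, but logging the bond
# dimension so the slowdown becomes visible.
function evolve_and_log(gates, psi, tau, ttotal, cutoff)
  for t in 0:tau:ttotal
    t ≈ ttotal && break
    psi = apply(gates, psi; cutoff)
    normalize!(psi)
    # maxlinkdim(psi) is the largest MPS bond dimension; each TEBD step costs
    # roughly O(maxlinkdim^3), so watch how fast it grows with t.
    println("t = $t  maxlinkdim = $(maxlinkdim(psi))")
  end
  return psi
end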

In the past few years, there have been some interesting algorithms developed to overcome this problem, but they are all new and some are fairly specialized and all are under heavy research and development. One that I am involved in working on which is among the simplest to implement is the idea of evolving in "complex time", which means mixing real and imaginary time steps to lower the entanglement. Then real-time properties have to be reconstructed afterward by various extrapolation procedures (a gate-level sketch of the basic idea follows the references below). References:
(1) [2311.10909] Dynamical correlation functions from complex time evolution
(2) [2312.11705] Complex Time Evolution in Tensor Networks
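To make the idea concrete, here is a hedged sketch (my own illustration, not the code used in those papers) of how the Trotter gates in your script would change: the real step tau is replaced by a complex step z = tau - i*eta with a small damping eta > 0, which slows the entanglement growth, and real-time quantities then have to be reconstructed from runs at several eta values as described in the references.

using ITensors

# Hedged sketch of "complex time" Trotter gates. The bond Hamiltonian is
# built exactly as in the question's code; only the time step changes.
# eta is a hypothetical damping parameter you would scan over.
function complex_time_gates(s, h, J, V, h_val, tau, eta)
  z = tau - im * eta  # complex time step; eta = 0 recovers ordinary real time
  gates = ITensor[]
  for j in 1:(length(s) - 1)
    s1, s2 = s[j], s[j + 1]
    hj1 = J * op("S+", s1) * op("S-", s2) +
          J * op("S-", s1) * op("S+", s2) +
          V * (op("Sz", s1) + 0.5 * op("Id", s1)) * (op("Sz", s2) + 0.5 * op("Id", s2))
    hj2 = 1 / 2 * h_val * h[j] * (op("Sz", s1) * op("Id", s2) + op("Id", s1) * op("Id", s2))
    push!(gates, exp(-im * z / 2 * (hj1 + hj2)))
  end
  append!(gates, reverse(gates))  # second-order Trotter ordering, as before
  return gates
end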

Other algorithms to get around this problem include using "influence functional" ideas and machine learning of families of Feynman diagrams.

Please let me know if that answers your question!


Thank you very much for your answer. It helped me a lot.
I read these two references, but due to my limited understanding of quantum mechanics, I am not quite clear on how to use imaginary time to compute the final state or the entanglement entropy in ITensor.
If it's convenient, could you provide me with some example code?

Glad that helped. I think I will need to know more clearly what you are asking by saying "final state". Do you mean final state as in the state reached after a long time evolution in real time? Or do you mean the final state reached after a long time evolution in imaginary time?

In imaginary time, an initial state \ket{\psi_0} is evolved as e^{-H\tau} \ket{\psi_0}, where if you take \tau to be large, then eventually the state will end up as the dominant eigenvector of H (the one with the most negative eigenvalue). This is known as the "ground state" of H.
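As a rough illustration (one of several ways to do this in ITensor), imaginary-time evolution can reuse almost the same TEBD loop as your code: build the gates with exp(-tau / 2 * (hj1 + hj2)) instead of exp(-im * tau / 2 * (hj1 + hj2)) and renormalize after every step, since e^{-H\tau} is not unitary.

using ITensors

# Minimal sketch of imaginary-time TEBD toward the ground state.
# `gates_imag` are assumed to be built like the gates in the question,
# but with exp(-tau / 2 * (hj1 + hj2)) in place of exp(-im * tau / 2 * (hj1 + hj2)).
function imaginary_time_evolve(gates_imag, psi, nsteps, cutoff)
  for _ in 1:nsteps
    psi = apply(gates_imag, psi; cutoff)
    normalize!(psi)  # e^{-H tau} shrinks the norm, so renormalize each step
  end
  return psi  # approaches the ground state as nsteps * tau becomes large
end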

In real time, an initial state \ket{\psi_0} is acted on by e^{-iH t} and there is a concept of a long-time "steady state" reached in the limit of t \rightarrow \infty. This state is not as easy to describe, and for a closed system (a system described by just a Hamiltonian and a "pure" initial state) the long-time steady state is formally a very complicated pure state (state vector) which can't be compressed well using tensor network methods. (However there is more one can say about this steady state, and if one is only interested in certain properties of it, such as local properties, there does exist a compressed description that captures those properties.)


Thank you for being so patient!

I mean the final state as in the state reached after a long time evolution in real time.
As you said, the initial state I chose was a "pure" state \left|\psi_{0}\right\rangle. I want to know the final state \left|\psi_{t}\right\rangle = e^{-iHt}\left|\psi_{0}\right\rangle of this pure state after a long evolution. Then I will use the final state and the initial state to calculate the von Neumann entropy or the time-dependent evolution of other operators. But the problem I faced was that it took too long to compute using ITensor.

According to the two papers you provided, I think that combining complex time evolution of tensor network states with a perturbative reconstruction of the real-time dynamics might be faster to compute. I'd like to ask whether this idea is workable. I would also like to ask how you used ITensor to implement these steps in that article.
References:
(1) [2311.10909] Dynamical correlation functions from complex time evolution
Could you please provide example code for these steps in the article?

Hi, yes, so I think the idea of using complex time is workable. But right now we are only guessing about the properties it may work for. I am not sure if entanglement is one of those properties, but it could be. It is just an open research question.

I don't have a code to provide to you about the steps in the article, since the code was done by someone else and was anyway tailored very narrowly to the fermionic impurity model applications pursued in those papers.

But it should be straightforward to implement it yourself using the TDVP method available through ITensor. The rest is mostly "post-processing" that you do in terms of what measurements you perform and how you combine them afterward. So if you have any questions about what the article is saying about those things, please let me know and I could provide some short explanations.
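For orientation only, here is a hedged sketch of what a complex-time TDVP call could look like, assuming the tdvp(H, t, psi0; ...) interface of the ITensorTDVP.jl package (keyword names can differ between versions, so check its documentation). H is the Hamiltonian as an MPO (for example built with OpSum as sketched earlier), psi0 the initial MPS, and tau, eta, and nsteps are assumed to be defined.

using ITensors
using ITensorTDVP  # assumption: the ITensorTDVP.jl package is installed

# Hedged sketch: evolve psi0 with TDVP using a complex time step.
z = -im * (tau - im * eta)       # complex "time" per step; eta = 0 is plain real time
psi_t = tdvp(H, nsteps * z, psi0;
             time_step=z,        # total time is split into steps of size z
             cutoff=1e-8,
             maxdim=200,
             outputlevel=1)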

