ODE updater for time independent H in TDVP

Hi,

I want to integrate a time-independent Hamiltonian using the tdvp function of ITensorMPS.jl.
I see that there are “exponentiate” and “applyexp” updaters defined inside the tdvp function (ITensorTDVP.jl/src/tdvp.jl at main · ITensor/ITensorTDVP.jl · GitHub).

However, in the example with time-dependent Hamiltonian, an ODE solver is also implemented for the projected equations of motion. Can you help me write an ODE updater for the basic case of the time-independent Hamiltonian?

I tried to borrow the function from ITensorTDVP.jl/examples/03_updaters.jl at main · ITensor/ITensorTDVP.jl · GitHub but got incorrect results.

Best

I’d be happy to discuss it with you. When you tried modifying the ode_updater in the file https://github.com/ITensor/ITensorTDVP.jl/blob/main/examples/03_updaters.jl what changes did you make? I would recommend deleting the two definitions of f in there and then writing a new one that just assumes a simpler, time-independent operator is passed to the ode_updater function. Otherwise I’m not personally too familiar with the ODEProblem interface – what kind of inputs does it expect? (I did not write the 03_tdvp_time_dependent.jl example or else I would remember.)


Thanks, Miles, and sorry for the long delay in my answer.

I added a bit of code to TDVP that performs the effective-Hamiltonian evolution with an ODE backend (using the Tsit5 algorithm from OrdinaryDiffEq.jl).
However, I currently get an incorrect ground-state energy in the TDVP imaginary-time evolution from example 1.

I simply added the following lines to TDVP:

  • using statements:
using ITensors: Algorithm, @Algorithm_str, ITensor, array, inds, itensor
using ITensorMPS: to_vec
using KrylovKit: exponentiate
using OrdinaryDiffEq: ODEProblem, Tsit5, solve
  • The function itself reads:
function ode_updater(operator, init; internal_kwargs, alg=Tsit5(), kwargs...)
    (; time_step) = internal_kwargs
    # Flatten the ITensor into a plain vector for OrdinaryDiffEq.jl.
    init_vec, to_itensor = to_vec(init)
    # Time-independent right-hand side: du/dt = H_eff * u.
    f(state::ITensor, p, t) = operator(state)
    f(state_vec::AbstractArray, p, t) = to_vec(f(to_itensor(state_vec), p, t))[1]
    # Integrate over an explicit time span (0, time_step) rather than relying
    # on a bare scalar being promoted to a span.
    prob = ODEProblem(f, init_vec, (zero(time_step), time_step))
    sol = solve(prob, alg; kwargs...)
    state_vec = sol.u[end]
    return to_itensor(state_vec), (;)
end

tdvp_updater(::Algorithm"ode") = ode_updater

One can change the algorithm’s tolerance with updater_kwargs.
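For intuition on how much those tolerances matter, here is a minimal dense-matrix sanity check (a Python/SciPy sketch rather than Julia, purely illustrative): each local TDVP update is at its core the linear ODE du/dt = -i H_eff u, whose exact solution over one step is expm(-i H_eff dt) u0. Here `solve_ivp` stands in for `solve`/`Tsit5`, `expm` for `exponentiate`, and the random matrix is a stand-in for an effective Hamiltonian:

```python
# Sanity check: an adaptive ODE integrator only matches the exact
# matrix-exponential step to within its tolerance settings.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                 # stand-in for a dense effective Hamiltonian
v0 = rng.normal(size=n) + 0j      # complex state vector
v0 /= np.linalg.norm(v0)
dt = 1.0

exact = expm(-1j * H * dt) @ v0   # what `exponentiate` targets

def f(t, u):
    return -1j * (H @ u)

# Loose (default-level) tolerances: visibly off from the exact step.
loose = solve_ivp(f, (0.0, dt), v0, rtol=1e-3, atol=1e-6).y[:, -1]
# Tight tolerances: agrees with the matrix exponential to high accuracy.
tight = solve_ivp(f, (0.0, dt), v0, rtol=1e-10, atol=1e-12).y[:, -1]

err_loose = np.linalg.norm(loose - exact)
err_tight = np.linalg.norm(tight - exact)
print(err_loose, err_tight)
```

The per-step error then accumulates over sweeps, so loose defaults can dominate the overall accuracy of the evolution.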

I can’t find the problem with this method; it does not converge even with very small tolerances and time steps.

Note that I’m using ITensorMPS and not ITensorTDVP.

Hi, thanks for your patience with the slow reply.

What kind of problem are you simulating? Is it a “quench” starting from a low-entanglement state? If so, then with TDVP it can be important to do a “basis expansion” and we have a new function you can call to do that.

If instead you are starting from e.g. a ground state with an operator acting on it (such as to compute a time-dependent correlation function), then the basis expansion may not be necessary.

Thanks for the comment,
In a private channel with @mtfishman, he answered that an ODE updater OridinaryDiffEq.jl is not expected to perform better on this linear ODE, also not on a GPU. I benchmarked both algorithms on systems with growing bond dimensions and saw that the ODE solver was faster than KrilovKit.jl’s “exponentiate.”
However,

  1. I might have worked with systems too small to overcome some overhead of “exponentiate” with the GPU backend (see the discussion in KrylovKit.jl: Significant overhead when using GPU and ITensors · Issue #101 · Jutho/KrylovKit.jl · GitHub) and
  2. The ODE solver did not return the correct answers… (from example 1).

My specific system does require subspace expansion, but in example 1, there is no need, as the interactions are nearest neighbors, right? In any case, both exponentiate and the solver get the same linear equation to solve from TDVP, so if one works without expansion, the other should also.

Yes, I think all your logic here is correct. You’re right that if exponentiate works without subspace expansion, then an ODE solver should too, because they are solving the same “core” problem, just using different algorithms to do it.
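To illustrate that point with a toy model (again a hedged Python/SciPy sketch, since at its core each local update is a dense linear ODE; the matrix and spectrum below are purely illustrative): in the convention of the thread, tdvp with a negative time applies expm(t*H), i.e. imaginary-time evolution that projects onto the ground state, and the ODE route and the matrix-exponential route agree on it:

```python
# Dense stand-in for the "core" problem both updaters solve in imaginary time:
# u(t) = expm(t*H) u0 with t < 0, which suppresses excited states of H.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))      # random orthogonal eigenbasis
H = Q @ np.diag(np.arange(n, dtype=float)) @ Q.T  # known spectrum 0, 1, ..., 5

c = rng.normal(size=n)
c[0] = 1.0                       # guarantee overlap with the ground state
v0 = Q @ c
v0 /= np.linalg.norm(v0)
t = -8.0                         # negative time step, as in example 1

# Route 1: the matrix exponential (what `exponentiate` computes).
u_expm = expm(t * H) @ v0
u_expm /= np.linalg.norm(u_expm)

# Route 2: an adaptive ODE integrator on du/ds = H u over the span (0, t).
# A decreasing time span is fine for the solver.
sol = solve_ivp(lambda s, u: H @ u, (0.0, t), v0, rtol=1e-10, atol=1e-12)
u_ode = sol.y[:, -1]
u_ode /= np.linalg.norm(u_ode)

energy = u_ode @ (H @ u_ode)     # Rayleigh quotient; ground-state energy is 0
print(np.linalg.norm(u_ode - u_expm), energy)
```

Both routes land on (numerically) the same state, which is consistent with the view that any remaining discrepancy in the MPS setting comes from how the updater is wired into TDVP, not from the choice of integrator.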

Regarding whether subspace expansion is needed or not, I would say while nearest-neighbor interactions are one consideration, another one is what initial state you are starting from. Even in the nearest neighbor case, if you start from a product state then a subspace expansion can still help. But if you start from something like a ground state with an operator acting on it, say, then subspace expansions aren’t usually needed, especially for short-range Hamiltonians. Basically if your initial state has a large bond dimension or is a very “rich” state then 2-site TDVP can often be enough. (All of this is fairly similar to the situation with DMRG too, where subspace expansions are primarily to help 1-site DMRG converge, though they can also be useful for certain ‘tough’ Hamiltonians or systems.)