Tree tensor network using ITensors (and DMRG)?

Hi,

I’m wondering if it is possible to perform (true) tree tensor network calculations in ITensor?

What I mean is, for example, whether I can define a siteinds structure that extends out in an ‘X’ fashion, with sites 1, 2, 3, 4, … on each ‘branch’, instead of somehow flattening it as 1 (branch 1, site 1), 2 (branch 1, site 2), …, N (branch 1, site N), N + 1 (branch 2, site 1), and so on.

AFAIK the built-in siteinds treats everything as 1D, which introduces unnecessary long-range interactions (for example, across the center of an X-shaped lattice).

So, you can build any type of tensor network you want using ITensor. Meaning, you can create collections of ITensors with any number of indices you want, and choose those indices so that the tensors form a tree tensor network. Likewise, the siteinds structure you’re referring to is just a regular Julia array of Index objects, so you can reinterpret it as having a tree structure if you’d like, though a better idea for a tree would be to make some other kind of container holding the indices which offers a tree-like interface.
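As a rough illustration (the per-branch containers, tags, and bond dimension below are my own ad hoc choices, not a built-in interface), you could organize the indices and tensors of an X-shaped tree like this:

```julia
using ITensors

# Sketch: physical indices for an X-shaped tree with 4 branches of 2 sites each,
# organized per branch instead of flattened into a single 1D list.
nbranch, nsite = 4, 2
s = [[Index(2, "S=1/2,b=$b,n=$n") for n in 1:nsite] for b in 1:nbranch]

# Link indices: one bond inside each branch and one joining each branch to the center.
χ = 4
bond = [Index(χ, "Link,b=$b,bond") for b in 1:nbranch]
to_center = [Index(χ, "Link,b=$b,center") for b in 1:nbranch]

# Any collection of ITensors whose indices contract in a tree pattern is a
# tree tensor network. (On older ITensors.jl versions use `randomITensor`.)
branches = [
  [random_itensor(s[b][1], bond[b]),
   random_itensor(s[b][2], bond[b], to_center[b])] for b in 1:nbranch
]
center = random_itensor(to_center...)
```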

Perhaps you are asking whether our algorithms, such as DMRG, work automatically for general tree networks? In that case, no, not the ones in the package ITensors.jl. But work is ongoing on the package ITensorNetworks.jl, which offers precisely this functionality.


Hi Miles,

A somewhat unrelated question, but I’m wondering if the ITensor TDVP function can be used with more than 2 sites (i.e., nsite > 2 in the arguments to tdvp())? Currently I’m getting

`tdvp` with `nsite=4` and `reverse_step=true` not implemented.

The reason being, I’m concerned that the way I flattened certain geometries might disconnect parts of the lattice by introducing long-range interactions that skip over some sites, and some initial testing indicates that where I placed these disconnected parts in the flattened lattice actually had an impact on the results. So I’m trying to see if it’s possible to use TDVP on more sites at a time.

(Or whether there are some simple modifications to the code I could make. For example, could nsite=4 just be two more QRs each time on the resulting tensors to reshape the effective result into 4 sites?)

In principle it wouldn’t be difficult to generalize to nsite > 2, but that isn’t implemented yet.
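For reference, a currently supported call with the two-site update looks roughly like the sketch below. I’m assuming the ITensorTDVP.jl-style signature `tdvp(H, t, ψ0; ...)` and its keyword names here, so adjust to whatever version you’re running:

```julia
using ITensors, ITensorTDVP

# Sketch of a two-site (nsite = 2) TDVP call; keyword and function names follow
# ITensorTDVP.jl at the time of writing and may differ in other versions.
let
  N = 10
  s = siteinds("S=1/2", N)

  os = OpSum()
  for j in 1:(N - 1)
    os += "Sz", j, "Sz", j + 1
  end
  H = MPO(os, s)

  ψ0 = randomMPS(s; linkdims=10)

  # Evolve to total time 0.5 over 10 sweeps; nsite = 4 with reverse_step = true
  # is the combination that currently throws the error quoted above.
  ψt = tdvp(H, -im * 0.5, ψ0; nsweeps=10, nsite=2, cutoff=1e-10)
end
```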

Thanks Matt.

On a side note, as of now should I expect algorithms such as tdvp and dmrg to work with tree TNs constructed with ITensorNetworks.jl? I ran some tests recently and it seems I couldn’t get a minimal example running.

Yes, DMRG and TDVP work on trees; you can look at the tests to see what is working: ITensorNetworks.jl/test/test_treetensornetworks/test_solvers at v0.6.0 · mtfishman/ITensorNetworks.jl · GitHub

Though please take note of the warning in the README: ITensorNetworks.jl/README.md at v0.6.0 · mtfishman/ITensorNetworks.jl · GitHub
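Very roughly, the tree DMRG tests at that tag follow a pattern like the sketch below. The specific helpers and keyword names (`named_comb_tree`, `siteinds` on a graph, `ttn`, `random_ttn`, `link_space`) are taken from the test suite at that point in time and may be spelled differently in other versions (e.g. `TTN` vs `ttn`), so treat this purely as a sketch and defer to the tests themselves:

```julia
using ITensors, ITensorNetworks
using NamedGraphs: named_comb_tree
using Graphs: edges, src, dst

# Rough sketch of tree DMRG following the linked tests; names and keywords are
# version-dependent and may have changed since v0.6.0.
let
  g = named_comb_tree(fill(2, 3))      # small comb tree: 3 teeth of length 2
  s = siteinds("S=1/2", g)             # physical indices attached to graph vertices

  os = OpSum()
  for e in edges(g)
    os += "Sz", src(e), "Sz", dst(e)   # nearest-neighbor coupling on every tree edge
  end
  H = ttn(os, s)                       # Hamiltonian as a tree tensor network operator

  ψ0 = random_ttn(s; link_space=10)    # random tree tensor network state
  ψ = dmrg(H, ψ0; nsweeps=5, maxdim=20, cutoff=1e-10)
  @show inner(ψ', H, ψ)                # energy of the optimized state
end
```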

Thanks for the update.

I was able to follow the test example and manually define a connected graph. Based on that I also defined a map from the ‘flattened’ 1D indices to the graph vertices. However, my current tests show that the DMRG calculations based on the 1D version (MPS) and on the TTN give different energies (about a 1% discrepancy, higher than the tolerance).

Does the indexing of the graph, i.e. how I map between the flattened and the tree indices, matter in this case? Also, the TTN version seems to run much slower than the MPS version; I don’t know if that’s also an indication of something not quite right.
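To be concrete, the mapping I’m using looks roughly like this (purely my own bookkeeping with illustrative labels, not anything provided by the packages), where tree vertices are (branch, site) tuples and the flattened MPS just enumerates them in order:

```julia
# My own bookkeeping (illustrative labels): tree vertices as (branch, site)
# tuples versus the flattened 1D ordering used for the MPS comparison.
branch_sites = [(b, n) for b in 1:3 for n in 1:2]
flat_to_vertex = Dict(j => v for (j, v) in enumerate(branch_sites))
vertex_to_flat = Dict(v => j for (j, v) in enumerate(branch_sites))

# A tree bond (b, n)–(b, n + 1) then corresponds to the flattened sites
# vertex_to_flat[(b, n)] and vertex_to_flat[(b, n + 1)].
```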

Things are moving and changing quite a lot in the package, so I don’t think we can reasonably provide user support right now; you may just have to investigate issues yourself. If you can identify specific problems with minimal reproducible code, issues on GitHub would be appreciated so we can keep track of things we should fix.