MPO out of a matrix

Mission:
I would love to try using DMRG to find extremal eigenvalues of any sparse, Hermitian matrix.

Question:
How could one write a (sparse) matrix as an MPO?

Attempt:
One can write any $2^N \times 2^N$ matrix as a sum of tensor products of $N$ $2 \times 2$ matrices,

$$
M_{2^N \times 2^N} = \sum_k c_k \, \bigotimes_{i=1}^{N} m^{(k)}_i ,
$$

where each $m^{(k)}_i$ is built out of $\mathrm{Id}, \sigma^x, \sigma^y, \sigma^z$, and the coefficients are $c_k = \mathrm{Tr}\big[\big(\bigotimes_i m^{(k)}_i\big) M\big] / 2^N$.
This is explained, for example, in arXiv:1909.12847, “Resource-efficient digital quantum simulation of $d$-level systems for photonic, vibrational, and spin-$s$ Hamiltonians”.
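
For small matrices the coefficients can be computed by brute force. A quick sketch of that (the function names are just illustrative, and the loop over all $4^N$ strings is exactly the scalability concern below):

```julia
using LinearAlgebra

pauli = Dict("Id" => [1 0; 0 1], "X" => [0 1; 1 0],
             "Y" => [0 -im; im 0], "Z" => [1 0; 0 -1])

# c = Tr(P' M) / 2^N: Pauli strings are orthogonal under the
# Hilbert-Schmidt inner product, with squared norm 2^N.
pauli_coefficient(M, labels) =
    tr(reduce(kron, (pauli[l] for l in labels))' * M) / size(M, 1)

# Brute force: enumerates all 4^N strings, so only feasible for small N.
function pauli_decompose(M; tol = 1e-12)
    N = Int(log2(size(M, 1)))
    terms = Tuple[]
    for labels in Iterators.product(ntuple(_ -> keys(pauli), N)...)
        c = pauli_coefficient(M, labels)
        abs(c) > tol && push!(terms, (c, labels...))
    end
    return terms
end
```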

Problem:

  • probably not scalable for large matrices: a generic matrix has up to $4^N$ nonzero coefficients in this basis
  • OpSum() only accepts single- or two-site operators

Hi @BurgerAndreas, it sounds like this can be handled by OpSum. Have you tried writing the particular Hamiltonian you are interested in as an OpSum and converting it to an MPO? To clarify, OpSum is not limited to single- or two-site operators. The main limitation is that it may be inefficient to convert certain OpSums to an MPO if they have many nonlocal terms, either because of the scaling of the algorithm we use or because the resulting MPO inherently has a large bond dimension. But please try it and see if it works for your use case.
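
For example, something along these lines works directly (the sites, operators, and coefficients here are just for illustration):

```julia
using ITensors, ITensorMPS  # on older ITensors versions, `using ITensors` alone suffices

sites = siteinds("S=1/2", 6)

os = OpSum()
os += 0.5, "Sz", 1, "Sz", 3, "Sz", 5   # a three-site term
os += 1.0, "S+", 2, "S-", 6            # a long-range two-site term
os += 1.0, "S-", 2, "S+", 6

H = MPO(os, sites)
@show maxlinkdim(H)
```

Each term is just a coefficient followed by (operator name, site) pairs, and the product can span any number of sites.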


Hi, to second what Matt says, the idea you have of representing the matrix as a sum of outer products (tensor products) of smaller matrices is ideal for our OpSum system.

Then indeed the DMRG algorithm should work quite well for getting the extremal eigenvalues and eigenvectors (as MPS).

I’d add that writing an arbitrary matrix as an MPO may be a more challenging task, or sort of an open research area, but if you have a way to do it based on that paper it sounds promising. Some other literature about how to write large matrices as MPOs goes under the name “tensor train matrix”: in that literature, as you may know, tensor train (TT) is an alternative name for MPS, and tensor train matrix is an alternative name for MPO. That literature comes out of the applied math community, which has been interested in finding examples of matrices that are compressible as tensor trains (or tensor train matrices): for example, the discretized Laplacian in 1D, 2D, or 3D is compressible that way if you order the indices in a certain pattern.
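
As a quick numerical illustration of that compressibility (plain Julia, no ITensor; this is just a textbook TT-SVD, and the interleaved index ordering is the “certain pattern” mentioned above):

```julia
using LinearAlgebra

"Bond dimensions of M (size 2^N x 2^N) viewed as a tensor train matrix."
function tt_matrix_ranks(M::AbstractMatrix, N::Int; cutoff = 1e-12)
    T = reshape(Matrix(M), ntuple(_ -> 2, 2N))
    # Interleave row and column bits so site i carries the pair (r_i, c_i).
    T = permutedims(T, vcat([[i, i + N] for i in 1:N]...))
    A = reshape(T, 4, :)
    χ, ranks = 1, Int[]
    for _ in 1:N-1
        F = svd(reshape(A, 4χ, :))
        χ = count(s -> s > cutoff * F.S[1], F.S)   # truncate tiny singular values
        push!(ranks, χ)
        A = Diagonal(F.S[1:χ]) * F.Vt[1:χ, :]
    end
    return ranks
end

N = 8
laplacian = SymTridiagonal(fill(2.0, 2^N), fill(-1.0, 2^N - 1))
@show tt_matrix_ranks(laplacian, N)
```

The printed bond dimensions stay small and essentially independent of N for the Laplacian, whereas a generic matrix would show ranks growing exponentially toward the middle cut.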


Thank you both!

Quick update:
I implemented the decomposition I mentioned under ‘Attempt’ using OpSum.
Thanks for the clarification!

I will fix some pre-factors and run some tests.
Thanks for the hint on tensor trains/MPS in applied maths; I will look into it.

Work in progress:
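
Roughly, the glue code looks like this; just a sketch of the current state, with the pre-factors being what I still need to verify (it reuses the `pauli_decompose` helper sketched above):

```julia
using ITensors, ITensorMPS
using LinearAlgebra, SparseArrays

# Assumes the `pauli_decompose` sketch from the first post is in scope.
function matrix_to_opsum(M)
    os = OpSum()
    for term in pauli_decompose(M)
        c, labels = real(term[1]), term[2:end]  # coefficients are real for Hermitian M
        factors = Any[c]
        for (i, l) in enumerate(labels)
            l == "Id" || append!(factors, (l, i))  # identities drop out of a product
        end
        length(factors) == 1 && append!(factors, ("Id", 1))  # pure-identity term
        os += Tuple(factors)
    end
    return os
end

A = sprandn(2^4, 2^4, 0.05)
M = Matrix(A + A')                  # small sparse Hermitian test matrix
N = Int(log2(size(M, 1)))

sites = siteinds("Qubit", N)
H = MPO(matrix_to_opsum(M), sites)

ψ0 = random_mps(sites; linkdims = 10)
energy, ψ = dmrg(H, ψ0; nsweeps = 10, maxdim = 100, cutoff = 1e-10)
@show energy                        # compare against eigmin(Hermitian(M))
```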

Cheers!

Thanks – looks really interesting! Please let us know how it goes.

You might also find this discussion interesting, where we tried out a method to take the dominant and sub-dominant eigenvectors (in this case excited states of a Hamiltonian), project the Hamiltonian into the basis formed by them, and re-diagonalize this “mini Hamiltonian” to get even better precision on the eigenvalues and to improve the orthogonality of the eigenvectors:

https://github.com/ITensor/ITensors.jl/issues/845

(see the code at the end of the discussion).
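
In condensed form the trick looks roughly like this (a sketch of the idea, not the actual code from that issue; `rediagonalize` is just an illustrative name):

```julia
using ITensors, ITensorMPS
using LinearAlgebra

# Given approximate eigenstates ψs (MPS) and H (MPO), project H into
# span{ψs} and solve the small generalized eigenproblem H x = λ S x.
function rediagonalize(H, ψs)
    n = length(ψs)
    Hm = [inner(ψs[i]', H, ψs[j]) for i in 1:n, j in 1:n]  # ⟨ψi|H|ψj⟩
    S = [inner(ψs[i], ψs[j]) for i in 1:n, j in 1:n]       # overlap matrix
    Hm = (Hm + Hm') / 2   # symmetrize away numerical noise
    S = (S + S') / 2
    return eigen(Hermitian(Hm), Hermitian(S))
end
```

The resulting values are improved eigenvalue estimates, and the eigenvectors give the linear combinations of the MPS that restore orthogonality.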