Hello,
I am trying to understand what OpSum is doing, and in particular, from a mathematical point of view, how best to construct arbitrary MPOs from sums of tensor products and how to compress them.
The way I understand it, a Hamiltonian like H = X_0 X_2 + Y_1 X_2 is built as follows: each extra site adds an MPO tensor (with boundary vectors on the ends), and each new term adds a row and a column to each of the tensors. Is this correct?
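To make this concrete, here is a small NumPy check of my picture (the block layout, with one row/column per term, is my own assumption about what the construction looks like):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
I2 = np.eye(2)
O = np.zeros((2, 2))

# H = X_0 X_2 + Y_1 X_2: one row/column per term, so bond dimension 2.
# Boundary tensors are operator-valued vectors, the bulk tensor is diagonal.
W0 = [X, I2]                # site 0: term 1 acts with X, term 2 with identity
W1 = [[I2, O], [O, Y]]      # site 1: diag(I, Y)
W2 = [X, X]                 # site 2: both terms act with X

# contract the operator-valued vector-matrix-vector product with kron
H_mpo = sum(
    np.kron(np.kron(W0[a], W1[a][b]), W2[b])
    for a in range(2) for b in range(2)
)

# dense reference: X ⊗ I ⊗ X + I ⊗ Y ⊗ X
H_exact = np.kron(np.kron(X, I2), X) + np.kron(np.kron(I2, Y), X)
print(np.allclose(H_mpo, H_exact))  # True
```

So at least for this toy example, the block picture reproduces the dense sum.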
Now I was wondering how you can compress them: if I constructed a TFIM with n spins, this formalism would give me O(n) terms and hence MPO bond dimension O(n). But I know there is a more efficient MPO construction for the TFIM with bond dimension 3 (or 4/5, not sure).
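For reference, the bond-dimension-3 construction I have in mind is the standard lower-triangular one (sign and coupling conventions here are my own choice, just for the check):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
O = np.zeros((2, 2))
J, h = 1.0, 0.5  # arbitrary couplings for the check
n = 4            # number of spins

# TFIM  H = -J sum_i Z_i Z_{i+1} - h sum_i X_i  as a bond-dimension-3 MPO:
# the same lower-triangular operator-valued matrix on every site
W = [[I2,     O,      O ],
     [Z,      O,      O ],
     [-h * X, -J * Z, I2]]

# contract left to right: v[j] accumulates the operator strings ending in state j
v = [np.zeros((1, 1)), np.zeros((1, 1)), np.eye(1)]  # left boundary (0, 0, 1)
for _ in range(n):
    v = [sum(np.kron(v[i], W[i][j]) for i in range(3)) for j in range(3)]
H_mpo = v[0]  # right boundary picks the first entry

# dense reference Hamiltonian
def kron_chain(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

H_exact = sum(-J * kron_chain([I2] * i + [Z, Z] + [I2] * (n - i - 2))
              for i in range(n - 1))
H_exact = H_exact + sum(-h * kron_chain([I2] * i + [X] + [I2] * (n - i - 1))
                        for i in range(n))
print(np.allclose(H_mpo, H_exact))  # True
```

So the bond dimension stays 3 independent of n, which is exactly the gap to the O(n) sum-of-terms construction that I don't understand how to close by compression.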
The way I would naively do the compression, and also what I understand ITensor to be doing here, is to take the tensor with legs (v1, v2, s1, s2) (virtual legs v1/v2, physical legs s1/s2), permute it to (v1, s1, v2, s2), reshape it to ((v1, s1), (v2, s2)), and then SVD. However, when I do that for an example with N = 10 terms, where 2 are non-trivial Pauli X and the remaining 8 are identities, I get a flat spectrum of all ones, so clearly something is not right?
Or is this expected with this construction? And if so, what is the better construction (essentially, what is OpSum doing?)
I am not too well-versed in Julia and found it easier to write a small MWE in Python; I hope there is no offense taken.
import numpy as np

paulix = np.array([[0., 1.], [1., 0.]])

dimMPO = 10  # one row/column per term in the sum

# MPO tensor with legs (v1, v2, s1, s2): identity on every diagonal entry...
arr = np.zeros((dimMPO, dimMPO, 2, 2), dtype=complex)
for i in range(dimMPO):
    arr[i, i] = np.eye(2)
# ...then make the first two diagonal entries non-trivial Pauli X
arr[0, 0] = paulix
arr[1, 1] = paulix

# permute to (v1, s1, v2, s2) and reshape to the ((v1, s1), (v2, s2)) matrix
arr = np.transpose(arr, [0, 2, 1, 3])
arr = np.reshape(arr, (dimMPO * 2, dimMPO * 2))

U, S, Vd = np.linalg.svd(arr)
print(S)
# [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]