Is a global sign in an MPO irrelevant?

Hi,

A naive question here, as my understanding of the objects in ITensors is still limited.

An MPO being (a possibly more efficient representation of) a matrix, I expect to be able to define -A whenever an MPO A is defined.

Looking at eigenvalues this does not seem to be the case:

using ITensors
using Random
Random.seed!(0)

N = 2
s = siteinds("Electron", N; conserve_qns=true)
H = randomMPO(s)

A = prod(H) 
ev = eigen(A)
println(ev.spec)

minus_A = prod(-H)
ev_bis = eigen(minus_A) 
println(ev_bis.spec)

twice_A = prod(2*H)
ev_ter = eigen(twice_A)
print(ev_ter.spec)

outputs

Spectrum{Vector{Float64}, Float64}([0.6109978717430119, 0.550991216139776, 0.3777881680204986, 0.34068524911062964, 0.15800631098015647, 0.11889916146348563, 0.09769741847149604, 0.07351694411033347, 0.053654309944501266, 0.04963932327936461, 0.048384871461378126, 0.04476420028113693, 0.013875203130138448, 0.01283691271877742, 0.010441038760256732, 0.009659729086607635], 0.0)
Spectrum{Vector{Float64}, Float64}([0.6109978717430119, 0.550991216139776, 0.3777881680204986, 0.34068524911062964, 0.15800631098015647, 0.11889916146348563, 0.09769741847149604, 0.07351694411033347, 0.053654309944501266, 0.04963932327936461, 0.048384871461378126, 0.04476420028113693, 0.013875203130138448, 0.01283691271877742, 0.010441038760256732, 0.009659729086607635], 0.0)
Spectrum{Vector{Float64}, Float64}([1.2219957434860238, 1.101982432279552, 0.7555763360409972, 0.6813704982212593, 0.31601262196031293, 0.23779832292697126, 0.1953948369429921, 0.14703388822066693, 0.10730861988900253, 0.09927864655872921, 0.09676974292275625, 0.08952840056227386, 0.027750406260276896, 0.02567382543755484, 0.020882077520513465, 0.01931945817321527], 0.0)

We see that the ‘minus MPO’ is isospectral to the original MPO, whereas the ‘twice MPO’ indeed has its eigenvalues multiplied by 2.

How am I to understand this behaviour?

Best regards,

SpSn

As you mentioned, an MPO is just a representation of a large matrix. So any statement that would be true about the matrix would generally be true about the MPO, as long as the MPO representation is exact (e.g. MPOs are often exact for matrices like Hamiltonians).

So if the matrix / Hamiltonian you are representing as an MPO is isospectral when multiplying the matrix by a minus sign, then the same statement will be true when representing the matrix as an MPO (because it is the same matrix, just represented a different way). Hope that helps.
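For instance (a quick check, reusing the MPO H from the code above), contracting both sides to a single ITensor shows that negating the MPO does represent exactly the negated matrix:

@show prod(-H) ≈ -prod(H)   # expected: true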

Hi Miles,

First, thank you very much for coming back to me!

For a matrix A and the matrix -A to have the same spectrum, the non-zero eigenvalues of A must appear in pairs -λ, +λ. That is not the case here (all of the eigenvalues of A are strictly positive), which is why I don’t understand the output of the code pasted above.
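To illustrate the point with plain matrices (a small standalone sketch using LinearAlgebra, unrelated to ITensors):

using LinearAlgebra

M = [1.0 0.0; 0.0 2.0]            # strictly positive spectrum {1, 2}
@show eigvals(M)                  # [1.0, 2.0]
@show eigvals(-M)                 # [-2.0, -1.0]: M and -M are not isospectral

P = [0.0 1.0; 1.0 0.0]            # eigenvalues come in the pair {-1, +1}
@show eigvals(P) ≈ eigvals(-P)    # true: P and -P are isospectral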

Thanks for your patience on this. I think you have hit on a possibly confusing behavior of ITensor here. Here is the explanation of what’s going on.

First of all, we have two implementations of eigen, one for Hermitian matrices and one for general matrices.

For the general implementation (which your code is correctly calling), we actually return the absolute values of the eigenvalues inside the Spectrum object. See the following line of code where this is done:
https://github.com/ITensor/ITensors.jl/blob/d8e365881964eef8fd1995d0e9ac9d682d0e695d/NDTensors/src/linearalgebra/linearalgebra.jl#L313
I think the reason for this is that Spectrum is really a report about what happened in the case that a truncation was done, and we only recommend doing truncations for non-negative matrices anyway.

So basically I’d say don’t think of the Spectrum object as a way to obtain the eigenvalues. I’d agree this is potentially quite confusing though.

Mainly we have been using eigen as a numerical tool, and the typical usage of it for ITensors is like this:

D, U = eigen(T, Linds, Rinds)

where U is the ITensor of right eigenvectors and D a diagonal ITensor of eigenvalues.
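For example, applied to the random MPO from the first code block (a sketch, assuming, as in the call above, that eigen splits the indices of prod(H) by prime level, and reading the eigenvalues off D with diag as in the Heisenberg example further down), the signed eigenvalues come from D rather than from the Spectrum object:

A = prod(H)           # the full matrix as a single ITensor, as above
D, U = eigen(A)
@show diag(D)         # the eigenvalues themselves (possibly complex for the general eigen)

Dm, Um = eigen(prod(-H))
@show diag(Dm)        # the negatives of the above (possibly in a different order)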

(@mtfishman mentioning you since you might find this interesting)

Quick note that I think it’s hard to understand because it’s a low-rank MPO (random_mpo, note the new-style name, only produces a link dimension of 1).
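A quick way to check that (reusing the H from the first code block):

@show linkdims(H)   # expected to be a single link of dimension 1 for N = 2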

On the other hand, consider a small Heisenberg example:

# Heisenberg chain Hamiltonian built as an OpSum
N = 4
os = OpSum()
for j = 1:N-1
  os += "Sz", j, "Sz", j+1
  os += 0.5, "S+", j, "S-", j+1
  os += 0.5, "S-", j, "S+", j+1
end

s = siteinds("S=1/2",N)
H = MPO(os,s)
D,U = eigen(prod(H); ishermitian=true)
D2,U2 = eigen(prod(-H); ishermitian=true)
ev = diag(D)
ev2 = diag(D2)
@show ev[1:4]
@show ev2[1:4]
@show ev ≈ ev2
@show abs.(ev) ≈ abs.(ev2)

and the output

ev[1:4] = [-1.6160254037844377, -0.9571067811865469, -0.9571067811865448, -0.9571067811865442]
ev2[1:4] = [1.6160254037844377, 0.9571067811865469, 0.9571067811865448, 0.9571067811865441]
ev ≈ ev2 = false
abs.(ev) ≈ abs.(ev2) = true

So the eigenvalues flip sign, as expected.