On computing Neutral Gap using OBC

Hello ITensor Team,

First, thanks for the effort and development of such a fantastic package.

I’m working on a 4-spin-cell ferrimagnetic model that presents two Ising-universality-class transitions. I’m having some issues computing the neutral gap, \Delta_n = E(1,0,N) - E(0,0,N), where E(p, S^z_{total}, N) is the energy of the p-th excited state in the magnetization sector S^z_{total} for a chain of N sites. The model presents a transition between an antiferromagnetic (AFM) phase and a Haldane phase, where we expect the neutral gap to be near zero deep in the AFM phase and to be finite, increasing with N, in the Haldane phase. I obtained the correct answer using periodic boundary conditions (PBC), but for open boundary conditions (OBC) there is a discontinuity in the neutral gap measurements that leads to incorrect behavior.

I tried some strategies that are well described in the discussion forum, such as smooth boundary conditions (which work perfectly for Gaussian transitions, for instance) and fixed boundary conditions, as I saw recently in the appendix of this excellent article. In both cases the behavior of the neutral gap is not only different from the PBC case but also incorrect. I tried fixing the boundary conditions while conserving the quantum numbers of the wave function with S^z_{total} = 0 and inputting an initial wave function with the desired directions of the edge spins (code attached). I also tried applying a pinning field only at the edges to force the edge spins to the desired direction and value, but it did not change the general behavior of the neutral gap at all.
Regarding convergence issues, I tried following all the recommendations on the ITensor page, such as:

  • considering extremely low cutoff values (down to 1E-16)
  • different strategies for increasing the maximum bond dimension, including doing 20 sweeps with a maximum bond dimension of 20, as I saw in previous discussions here
  • different values of the noise term
  • increasing the number of sweeps (60 sweeps for N = 80 spins with modified OBC)
  • increasing eigsolve_krylovdim up to 6 (see the sketch after this list)
  • increasing the link dimensions of the initial wave function (up to 10)
  • trying different initial states in the S^z_{total} = 0 sector ([UP, UP, DN, DN], [DN, DN, UP, UP], [UP, Z0, DN, Z0])
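
For reference, here is a stripped-down sketch of how I have been passing those options to dmrg (the chain below is just a placeholder Heisenberg model, not my actual Hamiltonian, and eigsolve_krylovdim is the ITensorMPS keyword for the Krylov-dimension setting mentioned above):

using ITensors, ITensorMPS

# Placeholder spin-1/2 chain, only to show the dmrg keyword arguments I have been varying
let
    N = 8
    sites = siteinds("S=1/2", N; conserve_qns=true)

    os = OpSum()
    for j in 1:N-1
        os += 0.5, "S+", j, "S-", j + 1
        os += 0.5, "S-", j, "S+", j + 1
        os += 1.0, "Sz", j, "Sz", j + 1
    end
    H = MPO(os, sites)

    init = [isodd(n) ? "Up" : "Dn" for n in 1:N]   # Sz_total = 0 product state
    psi_init = random_mps(sites, init; linkdims=4)

    energy, psi = dmrg(H, psi_init;
        nsweeps=20,
        maxdim=[20, 20, 40, 80, 160],     # ramping up the maximum bond dimension
        cutoff=[1E-16],                   # very low truncation cutoff
        noise=[1E-4, 1E-6, 1E-8, 0.0],    # noise term, switched off at the end
        eigsolve_krylovdim=6)             # larger Krylov subspace in the local eigensolver
    @show energy
end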

Here is the code I’m using, with Julia v1.10.2, ITensors v0.8.0, and ITensorMPS v0.3.8.
I would appreciate any tips on how to improve it.

using ITensors,ITensorMPS
using DelimitedFiles
using MKL
int = range(0.5, step=2E-1, stop=1.3)   # values of the coupling J to scan
Data_Gap = []
D = -1.0                                # single-ion anisotropy on the S=1 sites

for J in int   

    N = 80
    let
        # Alternate site types: spin-1/2 on odd sites, spin-1 on even sites
        sites_type(n::Int) = iseven(n) ? "S=1" : "S=1/2"

    sites = siteinds(n->sites_type(n), N; conserve_qns=true)
    J_1 = 1.0 # same-spin coupling
    J_2 = J   # different-spin coupling
    Jz_1 = 1.0 # (not used below)
    Jz_2 = 1.0 # (not used below)

    ampo = OpSum()
    # s-s interactions with coupling J_1
    for j=1:4:N
        ampo += 0.5*J_1,"S+",j,"S-",j+2
        ampo += 0.5*J_1,"S-",j,"S+",j+2
        ampo += J_1,"Sz",j,"Sz",j+2
    end

    # S-s interactions with coupling J_2
    for j=2:4:N
        ampo += 0.5*J_2,"S+",j,"S-",j+1
        ampo += 0.5*J_2,"S-",j,"S+",j+1
        ampo += J_2,"Sz",j,"Sz",j+1
    end
    # S-S interactions with coupling J_1
    for j=2:4:N
        ampo += 0.5*J_1,"S+",j,"S-",j+2
        ampo += 0.5*J_1,"S-",j,"S+",j+2
        ampo += J_1,"Sz",j,"Sz",j+2
    end
    # S-s interactions with coupling J_2
    for j=4:4:N-4
        ampo += 0.5*J_2,"S+",j,"S-",j+1
        ampo += 0.5*J_2,"S-",j,"S+",j+1
        ampo += J_2,"Sz",j,"Sz",j+1
    end
    # Single-ion anisotropy D*(Sz)^2 on the S=1 (even) sites
    for j=1:4:N
        #ampo += D,"Sz2",j
        ampo += D,"Sz2",j+1
        #ampo += D,"Sz2",j+2
        ampo += D,"Sz2",j+3
    end
    #ampo += -B,"Sz",1
    #ampo += B,"Sz",N
    #PBC interactions
    #ampo += 0.5*J_2,"S+",N,"S-",1
    #ampo += 0.5*J_2,"S-",N,"S+",1
    #ampo += J_2,"Sz",N,"Sz",1

    H = MPO(ampo,sites)

    # Product states in the Sz_total = 0 sector used to initialize both DMRG runs
    state0, state1 = [], []
    for n = 1:N÷4
        push!(state0, "Up", "Up", "Dn", "Dn")
    end
    for n = 1:N÷4
        push!(state1, "Up", "Up", "Dn", "Dn")
    end

    psi0_init = random_mps(sites, state0; linkdims=2)
    psi02_init = random_mps(sites, state1; linkdims=2)

    @show flux(psi0_init)
    @show flux(psi02_init)

    nsweeps = 40
    maxdim = [20,20,20,20,40,40,40,40,80,80,80,80,160,160,320,320,640,640,1080,1080]
    cutoff = [1E-10]
    noise = [1E-4, 1E-4, 1E-4, 1E-4, 1E-6, 1E-6, 1E-6, 1E-6, 1E-7, 1E-7, 1E-8, 1E-8, 0.0]

    # Ground state in the Sz_total = 0 sector
    energy0,psi0 = dmrg(H,psi0_init; nsweeps, maxdim, cutoff, noise)
    # Energy variance <H^2> - <H>^2 as a convergence check
    H0 = inner(H,psi0,H,psi0)
    E0 = inner(psi0',H,psi0)
    var0 = H0-E0^2
    @show var0

    println(" Next simulation")
    nsweeps = 60
    maxdim  = [20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,40,40,80,80,160,160,320,320,640,640,1080,1080]
    cutoff = [1E-16]
    noise = [1E-4, 1E-4, 1E-4, 1E-4, 1E-6, 1E-6, 1E-6, 1E-6, 1E-7, 1E-7, 1E-8, 1E-8, 0.0]

    # First excited state in the same sector, orthogonalized against psi0
    # (the `weight` keyword of dmrg controls the strength of the orthogonality penalty)
    energy1,psi1 = dmrg(H,[psi0],psi02_init; nsweeps, maxdim, cutoff, noise)
    H1 = inner(H,psi1,H,psi1)
    E1 = inner(psi1',H,psi1)
    var1 = H1-E1^2
    @show var1
    @show inner(psi1,psi0)
    gap1 = energy1-energy0
    sgap1 = N*gap1
    println("The Gap0 is =", gap1)
    println("The Scaled Gap0 is =", sgap1)
    push!(Data_Gap, ["$J" "$gap1" "$sgap1"  ])
    end

    open("Gap(N=$N).dat", "w") do f
        writedlm(f, Data_Gap, '\t')
    end
end

Below is a figure of the neutral gap behavior for three boundary conditions. MOBC stands for modified open boundary conditions with J_edges = 0.0.

Evaluating other measurements, I identified two transitions, at J_c ~ 0.69 and J_c ~ 0.95, for larger systems (N = 400). I would expect the neutral gap for OBC to start increasing continuously as we approach the transition point between the AFM and Haldane phases (J_c ~ 0.95), but as we can see, the neutral gap jumps to a finite value at J ~ 1.0. Another point is that this behavior does not happen at the first transition, between the topologically trivial and AFM phases.
Below is a figure of the scaled neutral gap for this transition for different system sizes:

Is there a more efficient strategy for computing the neutral gap with OBC? I’m trying not to fall back on the easy (though computationally very expensive) solution of using PBC for this case.

It’s not totally obvious what’s going wrong. I think your first strategy should be to collect some more information about the states themselves that you are finding. For example, how orthogonal are they? How good is their energy variance (i.e. are they very high-quality eigenstates)?
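
For example, something along these lines (a quick sketch, reusing the psi0, psi1, and H names from your script):

# Diagnostics for the two DMRG states and the Hamiltonian MPO
@show abs(inner(psi0, psi1))             # orthogonality: should be close to zero
E0 = inner(psi0', H, psi0)
E1 = inner(psi1', H, psi1)
@show inner(H, psi0, H, psi0) - E0^2     # energy variance <H^2> - <H>^2 of psi0
@show inner(H, psi1, H, psi1) - E1^2     # energy variance of psi1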

A related idea would be to plot the local energy of each state and then subtract these two arrays of data to see the local energy difference as a function of space. Is the extra energy of the excited state localized more in the middle of the system or at the edge?
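
One way to get that profile (a rough sketch, assuming all your two-site terms have the 0.5*Jxy*(S+S- + S-S+) + Jz*Sz*Sz form; the bond list here is hypothetical and should be built to mirror the loops in your OpSum):

using ITensors, ITensorMPS

# Per-bond energies of a state, for bonds given as (i, j, Jxy, Jz) tuples
function bond_energies(psi::MPS, bonds)
    Cpm = correlation_matrix(psi, "S+", "S-")   # <S+_i S-_j>
    Cmp = correlation_matrix(psi, "S-", "S+")   # <S-_i S+_j>
    Czz = correlation_matrix(psi, "Sz", "Sz")   # <Sz_i Sz_j>
    return [0.5 * Jxy * (Cpm[i, j] + Cmp[i, j]) + Jz * Czz[i, j] for (i, j, Jxy, Jz) in bonds]
end

# Schematic usage: build the bond list to match your Hamiltonian, then compare
# the two states bond by bond.
# bonds = [(j, j + 2, 1.0, 1.0) for j in 1:4:N]      # for example, the s-s bonds
# de = bond_energies(psi1, bonds) .- bond_energies(psi0, bonds)
# Plotting de against bond position shows whether the extra energy sits in the bulk or at the edges.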

If the extra energy is at the edge, that could be your problem, and you might need to handle the edge differently, like change boundary conditions or use “infinite boundaries”.

The idea of “infinite boundaries”, or in a more basic sense “restricted sweeping”, is to use a larger or even infinite DMRG calculation for the ground state, then use the “environments” (projections of the Hamiltonian into the left and right basis tensors) from that calculation as a kind of ‘correlated boundary condition’ for a DMRG calculation on a medium to large unit cell in the middle. This new DMRG calculation would ‘target’ an excited state, and this way the excitations cannot get stuck at the edge.

A less fancy idea, which I think you should try first, is to compute a few more excited states, maybe one or two more, then “rediagonalize” them. This means making a small matrix (like a 4x4 matrix) of the form h_{ij} = \langle \psi_i| H |\psi_j \rangle, as well as the overlap matrix S_{ij} = \langle \psi_i|\psi_j \rangle (which you should not assume is the identity), and then thinking of the |\psi_j\rangle as a non-orthogonal basis in which you want to work. Then you find an actually orthonormal basis made from these states which diagonalizes h_{ij}, and this can give you more accurate excited-state energies. It can also reveal issues such as finding the second excited state before the first, etc.
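
Here is a sketch of that rediagonalization step, assuming you have collected a few DMRG states into a vector such as psis = [psi0, psi1, psi2] together with the MPO H:

using ITensors, ITensorMPS
using LinearAlgebra

# Rediagonalize a small set of (generally non-orthogonal) MPS in the subspace they span
function rediagonalize(H::MPO, psis::Vector{MPS})
    n = length(psis)
    h = [inner(psis[i]', H, psis[j]) for i in 1:n, j in 1:n]   # h_ij = <psi_i|H|psi_j>
    S = [inner(psis[i], psis[j]) for i in 1:n, j in 1:n]       # S_ij = <psi_i|psi_j>, not assumed to be the identity
    # Generalized eigenvalue problem h c = E S c in the non-orthogonal basis
    # (symmetrize first to guard against small numerical asymmetries)
    F = eigen(Hermitian((h + h') / 2), Hermitian((S + S') / 2))
    return F.values
end

# energies = rediagonalize(H, [psi0, psi1, psi2])
# The difference of the two lowest entries of `energies` is the rediagonalized
# estimate of the neutral gap, and the ordering of the states can reveal whether
# DMRG found them out of order.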