I am fairly new to ITensor and have been working with ITensorNetworks.jl. I want to calculate the norm/inner product after adding two tensor network states, for example \psi = \psi_1 + \psi_2, and I get some behaviour that I do not understand.
When I do this on a 2D network with one loop, the norm of \psi is 1. But when I increase the network to more than one loop, the norm of \psi seems to be \frac{1}{\sqrt{2^{n-1}}}, where n is the number of loops in the network.
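To make the pattern concrete, here are the values 1/\sqrt{2^{n-1}} for the loop counts of the grids used below (plain Julia, just arithmetic):

```julia
# Claimed pattern: norm(ψ1 + ψ2) = 1/sqrt(2^(n-1)), where n = number of loops.
for n in (1, 2, 4)
    println("n = ", n, " loops: 1/sqrt(2^(n-1)) = ", 1 / sqrt(2^(n - 1)))
end
# n = 1 → 1.0, n = 2 → 0.7071…, n = 4 → 0.3536…
```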
Here is a minimal working example:
using ITensorNetworks: ITensorNetwork, siteinds
using LinearAlgebra: norm
using NamedGraphs.NamedGraphGenerators: named_grid

g1 = named_grid((2, 2)) # This has 1 loop
g2 = named_grid((3, 2)) # This has 2 loops
g3 = named_grid((3, 3)) # This has 4 loops

s = siteinds("S=1/2", g1)
ψ1 = ITensorNetwork(v -> "↑", s)
ψ2 = ITensorNetwork(v -> "↓", s)
ψ = ψ1 + ψ2
println("Norm when there is one loop = ", norm(ψ))

s = siteinds("S=1/2", g2)
ψ1 = ITensorNetwork(v -> "↑", s)
ψ2 = ITensorNetwork(v -> "↓", s)
ψ = ψ1 + ψ2
println("Norm when there are two loops = ", norm(ψ))

s = siteinds("S=1/2", g3)
ψ1 = ITensorNetwork(v -> "↑", s)
ψ2 = ITensorNetwork(v -> "↓", s)
ψ = ψ1 + ψ2
println("Norm when there are four loops = ", norm(ψ))
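For comparison, here is what I would expect from exact state addition: assuming \psi_1 and \psi_2 are orthonormal, the sum has norm \sqrt{2} regardless of the lattice geometry. This is a plain-Julia sketch with dense vectors standing in for the network states (the site counts 4, 6, 9 match the grids above):

```julia
using LinearAlgebra: norm, kron

# Dense single-site basis states for S=1/2.
up = [1.0, 0.0]
dn = [0.0, 1.0]

# Exact |↑↑…↑⟩ and |↓↓…↓⟩ product states on nsites sites.
all_up(nsites) = reduce(kron, fill(up, nsites))
all_dn(nsites) = reduce(kron, fill(dn, nsites))

# For any number of sites, the exact sum of these two
# orthonormal states has norm √2, independent of geometry.
for nsites in (4, 6, 9)
    ψ = all_up(nsites) + all_dn(nsites)
    println("nsites = ", nsites, ", norm = ", norm(ψ))  # ≈ 1.4142 each time
end
```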
Furthermore, let's say I normalize \psi by multiplying by \sqrt{2^{n-1}}. If I then add more states to it in a subsequent step, the norm remains 1 irrespective of the number of loops in the network. Here is an example that shows this:
using ITensorNetworks: ITensorNetwork, siteinds
using LinearAlgebra: norm
using NamedGraphs.NamedGraphGenerators: named_grid

g = named_grid((3, 3)) # This has 4 loops
s = siteinds("S=1/2", g)
ψ1 = ITensorNetwork(v -> "↑", s)
ψ2 = ITensorNetwork(v -> "↓", s)
ψ = ψ1 + ψ2
println("The norm without normalization = ", norm(ψ))
ψ = sqrt(8) * (ψ1 + ψ2) # sqrt(2^(4-1)) for the 4 loops
println("The norm after normalization = ", norm(ψ))
ψ = ψ + ψ2 + ψ1 # This does not require normalization
println("The norm after doing addition again = ", norm(ψ))
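Again for comparison, with exact dense vectors neither step would give norm 1: assuming orthonormal \psi_1 and \psi_2, \|\sqrt{8}(\psi_1 + \psi_2)\| = \sqrt{8}\cdot\sqrt{2} = 4, and adding \psi_1 and \psi_2 once more gives (\sqrt{8}+1)\sqrt{2} \approx 5.41. A plain-Julia sketch:

```julia
using LinearAlgebra: norm, kron

# Dense |↑…↑⟩ and |↓…↓⟩ on the 9 sites of the (3, 3) grid.
ψ1 = reduce(kron, fill([1.0, 0.0], 9))
ψ2 = reduce(kron, fill([0.0, 1.0], 9))

ψ = ψ1 + ψ2
println(norm(ψ))  # √2 ≈ 1.4142, not 1/√8

ψ = sqrt(8) * (ψ1 + ψ2)
println(norm(ψ))  # √8 · √2 = 4, not 1

ψ = ψ + ψ2 + ψ1
println(norm(ψ))  # (√8 + 1) · √2 ≈ 5.4142, not 1
```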
So, I am confused about two things:
(a) Is + indeed the simple addition of two states? That is, if I just want to add wavefunctions \psi_1 + \psi_2 on a network, is doing such an addition correct? Typically I would just put in a factor of 1/\sqrt{2} to normalize, but here that does not seem to be the case.
(b) Why does the norm/inner product depend on the number of loops of the underlying network after the states are added? And why are subsequent additions not affected?
Am I missing something?