Memory leak when using ITensors

Hello,

I’m currently writing a simple iDMRG code using ITensors core features. While the code works, there is a memory leak that I could not track down. I assume I’m using some feature of ITensors (permutation, contraction) in the wrong way, so that it results in the leak. Basically, I have a cycle of adding two sites in the iDMRG process, which I present below. For the parameter set I used, the maximum total size of the variables should be around 2 GB, whereas the total memory consumed by Julia is around 12 GB. Inserting the garbage collector call GC.gc() after the contractions/permutations already decreased it from 24 GB to 12 GB.
I’m using plain ITensor(LWinds) to nullify objects that will not be used further in the code. Any suggestion is welcome! I just want to be sure that I’m using all the features in the right way. Thank you!
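For clarity, here is a minimal, self-contained sketch of what I mean by “nullifying” (the indices and sizes are just placeholders, not from my actual code):

```julia
using ITensors

i, j = Index(500, "i"), Index(500, "j")
A = randomITensor(i, j)   # a large temporary tensor
B = A * dag(A)            # contract over both shared indices -> scalar ITensor

# "Nullify": rebind A to an empty ITensor over the same indices, so the old
# (large) storage has no remaining references and can be garbage collected.
A = ITensor(i, j)
GC.gc()                   # the forced collection I sprinkle through the loop
```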

```julia
for j = 1:Int64(L/2)
    mpo1 = mpo.W[j]       # self-written MPO tensor
    mpoL = mpo.W[L-j+1]

    println("Part A")
    inds_LeftTensor = inds(LeftTensor)   # LeftTensor = initial rank-3 tensor for the left block
    inds_RightTensor = inds(RightTensor)
    inds_mpo1 = inds(mpo1)
    inds_mpoL = inds(mpoL)
    replaceind!(LeftTensor, inds_LeftTensor[2], dag(inds_mpo1[1]))
    LW = LeftTensor * mpo1
    replaceind!(RightTensor, inds_RightTensor[2], dag(inds_mpoL[2]))
    LWWR = RightTensor * mpoL
    replaceind!(LW, inds(LW)[3], dag(inds(LWWR)[3]))

    println("Part A1")
    LWWR = LW * LWWR
    LWinds = inds(LW)
    LW = ITensor(LWinds)   # drop the reference to the old data
    LWWRinds = inds(LWWR)
    LWWR = permute(LWWR, LWWRinds[1], LWWRinds[5], LWWRinds[3], LWWRinds[7],
                   LWWRinds[2], LWWRinds[6], LWWRinds[4], LWWRinds[8]; allow_alias=true)
    tlower = combiner(LWWRinds[1], LWWRinds[5], LWWRinds[3], LWWRinds[7]; tags="c", dir=ITensors.In)
    tupper = combiner(LWWRinds[2], LWWRinds[6], LWWRinds[4], LWWRinds[8]; tags="cp", dir=ITensors.Out)
    Htensor = tlower * LWWR
    LWWR = ITensor(LWinds)
    Htensor = Htensor * tupper

    println("Part B")
    inds_Htensor = inds(Htensor)
    Htensor = permute(Htensor, inds_Htensor[2], inds_Htensor[1])

    println("Part B1")
    H = getMatrixWithGivenQn(Htensor, QN(0))   # self-written
    Htensor = ITensor(LWinds)

    println("Part B2")
    λ, ϕ = Arpack.eigs(H, nev=1; which=:SR)
    @show λ / L
    H = nothing
    psiTensor = writePsiToTensor(ϕ, dag(inds_Htensor[2]), QN(0))   # self-written

    println("Part C")
    pL = psiTensor * tlower
    tlower = nothing
    inds_pL = inds(pL)
    pL = permute(pL, inds_pL[1], inds_pL[3], inds_pL[2], inds_pL[4]; allow_alias=true)
    inds_pL = inds(pL)
    plc1 = combiner(inds_pL[1], inds_pL[2]; tags="p1", dir=ITensors.In)
    plc2 = combiner(inds_pL[3], inds_pL[4]; tags="p2", dir=ITensors.Out)
    pL1 = plc2 * pL * plc1
    pL = nothing
    inds_pL1 = inds(pL1)
    U, s, V = svd(pL1, inds_pL1[1]; maxdim=D)
    U = U * dag(plc1)
    V = V * dag(plc2)
    plc1 = nothing
    plc2 = nothing

    println("Part D")
    LUWUp = LeftTensor * U * mpo1 * dag(U)'
    LUWUp = replacetags(LUWUp, "Link,u", "a_0")
    RVWVp = RightTensor * V * mpoL * dag(V)'
    RVWVp = replacetags(RVWVp, "Link,v", "a_L")

    U = replacetags(U, "Link,u", "Link_" * string(j))
    U = replacetags(U, "a_0", "Link_" * string(j - 1))
    mps.T[j] = U
    V = permute(V, inds(V)[3], inds(V)[2], inds(V)[1]; allow_alias=true)
    V = replacetags(V, "Link,v", "Link_" * string(L - j))
    V = replacetags(V, "a_L", "Link_" * string(L - j + 1))
    mps.T[L-j+1] = V
    LeftTensor = LUWUp
    RightTensor = RVWVp
    push!(ϵ, λ[1] / (2 * j))
    LUWUp = ITensor(LWinds)
    RVWVp = ITensor(LWinds)
    GC.gc()
end
```
 

Hi, while Julia can sometimes consume large amounts of memory, it’s very unlikely to be due to a memory leak. Julia is a “garbage collected” language, which means that memory that is no longer used is not leaked but is scheduled to be freed at some arbitrary rate, which can sometimes be slower than the rate at which your code allocates memory in the first place. So the deepest fix here would be to refactor your code not to allocate as rapidly. Some of the allocation comes from parts of ITensor that we plan to improve over time.
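To make the “allocation rate” point concrete, here is a small, ITensor-free sketch: rebinding a variable in a loop allocates a fresh array on every iteration, while an in-place update reuses one buffer. The function names are just illustrative.

```julia
# Rebinding allocates a new 1000-element array on every iteration.
function alloc_heavy(n)
    x = zeros(1000)
    for _ in 1:n
        x = x .+ 1.0    # new array each time; old one waits for the GC
    end
    return x
end

# Broadcasting in place reuses the same buffer throughout the loop.
function alloc_light(n)
    x = zeros(1000)
    for _ in 1:n
        x .+= 1.0       # updates x in place, no per-iteration allocation
    end
    return x
end

alloc_heavy(1); alloc_light(1)   # warm up (compile) before measuring
@show @allocated alloc_heavy(10_000)
@show @allocated alloc_light(10_000)
```

The same principle applies to the tensor code: the fewer large temporaries created per sweep, the less work the garbage collector has to keep up with.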

For now, a good option for you may be to pass the --heap-size-hint argument to the julia program. For more information, see here:

https://julialang.org/blog/2023/04/julia-1.9-highlights/#memory_usage_hint_for_the_gc_with_--heap-size-hint
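For example, invoking your script like this asks the GC to try to keep the heap below roughly the given size, collecting more aggressively as usage approaches it (the 4G value and the script name are just placeholders; pick a limit that fits your machine):

```shell
julia --heap-size-hint=4G my_idmrg_script.jl

# Other units work too, e.g. megabytes:
julia --heap-size-hint=500M my_idmrg_script.jl
```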

Please let me know if that feature helps with the memory usage.