Cutoff relative to maximum singular value

I have a working implementation of the TEMPO algorithm using ITensor.jl. I would like to compare it to other implementations, apples to apples, but the relative SVD cutoff in ITensor is different from what is usually done in TEMPO.

The desired cutoff works by dividing the singular values by the largest one and then truncating any value smaller than the provided cutoff (rather than truncating based on the sum of squared singular values, i.e. the Frobenius norm, as ITensor does).
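As a standalone illustration (plain Julia, not ITensor code; the function name is just for this example), the rule amounts to:

```julia
# Keep singular values s_i satisfying s_i / maximum(s) > cutoff.
# This is the max-relative truncation described above.
function keep_relative_to_max(s::AbstractVector{<:Real}, cutoff::Real)
    return s[s ./ maximum(s) .> cutoff]
end
```

For example, `keep_relative_to_max([1.0, 0.5, 1e-8], 1e-6)` keeps only the first two values.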

I have had a go at implementing this in a performant way but I’ve got stuck. Currently, it either runs out of memory or I get an error in ITensor/NDTensors that I don’t understand. Are there any plans for a cutoff like this in ITensor, or a way to get my truncation to work?

function truncate_svd(U, S, Vt; cutoff)
  sing_vals = diag(array(S))

  # keep singular values above the max-relative cutoff
  s = sing_vals[sing_vals ./ maximum(sing_vals) .> cutoff]
  χ = length(s)
  if χ == 0
    χ = length(sing_vals)
    s = @view sing_vals[1:χ]
  end

  u_link = Index(χ, "cLink,u")
  v_link = Index(χ, "cLink,v")

  I_S_trunc = diagitensor(convert(Vector{ComplexF64}, s), u_link, v_link)

  U_indices = [inds(U)...]
  U_indices[end] = u_link
  # view into the raw storage, truncated along the last index
  U_trunc = @view reshape(U.tensor.storage.data, dim.(U.tensor.inds)...)[fill(:, length(U_indices) - 1)..., 1:χ]
  I_U_trunc = itensor(copy(U_trunc), U_indices...)

  V_indices = [inds(Vt)...]
  V_indices[end] = v_link
  # view into the raw storage, truncated along the last index
  V_trunc = @view reshape(Vt.tensor.storage.data, dim.(Vt.tensor.inds)...)[fill(:, length(V_indices) - 1)..., 1:χ]
  I_V_trunc = itensor(copy(V_trunc), V_indices...)

  return I_U_trunc, I_S_trunc, I_V_trunc
end

The error happens if the `copy` calls are removed:

function truncate_svd(U, S, Vt; cutoff)
  sing_vals = diag(array(S))

  # keep singular values above the max-relative cutoff
  s = sing_vals[sing_vals ./ maximum(sing_vals) .> cutoff]
  χ = length(s)
  if χ == 0
    χ = length(sing_vals)
    s = @view sing_vals[1:χ]
  end

  u_link = Index(χ, "cLink,u")
  v_link = Index(χ, "cLink,v")

  I_S_trunc = diagitensor(convert(Vector{ComplexF64}, s), u_link, v_link)

  U_indices = collect(inds(U))
  U_indices[end] = u_link

  U_trunc = @view reshape(U.tensor.storage.data, dim.(U.tensor.inds)...)[fill(:, length(U_indices) - 1)..., 1:χ]
  I_U_trunc = itensor(U_trunc, U_indices...)

  V_indices = collect(inds(Vt))
  V_indices[end] = v_link

  V_trunc = @view reshape(Vt.tensor.storage.data, dim.(Vt.tensor.inds)...)[fill(:, length(V_indices) - 1)..., 1:χ]
  I_V_trunc = itensor(V_trunc, V_indices...)

  return I_U_trunc, I_S_trunc, I_V_trunc
end

Hi Oliver,
I see what you mean about wanting a custom version, but it’s hard for us to support every variation that might exist on how to truncate a singular value spectrum. We do have something similar to what you want, but it also works with the squared singular values.

Here is a way you could get the behavior you want that may be easier than working directly with the tensors as you are doing. The idea is to simply “hijack” (overwrite) the NDTensors.truncate! method, which is called internally by the ITensor svd. The steps are:

  1. open the file NDTensors/src/truncate.jl which you can find here:
    ITensors.jl/truncate.jl at main · ITensor/ITensors.jl · GitHub

  2. copy the entire code for the function truncate! at the top of this file

  3. put this code into your code

  4. change the name of the function to NDTensors.truncate!, which tells Julia that it overrides the NDTensors implementation of this function

Now if you call the usual svd defined by ITensor and specify truncation parameters such as cutoff, it will call your custom NDTensors.truncate! function. You can verify this by adding a line like println("Calling my custom truncate function") and checking that you see the printout.

Now you can edit this function as much as you like, and I believe you’ll find it easy to understand. It takes a vector of p’s, which are the squares of the singular values (so you can take their square roots to recover the original singular values), and computes a “docut” value: a real number specifying that the svd code should keep all singular values s_n with s_n > \text{docut}.

You can already see in this code that we offer a use_absolute_cutoff mode, which is almost what you were asking for, except that it doesn’t normalize by the largest singular value and is defined in terms of the squared singular values (the elements of the P vector).
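Putting those pieces together, the body of the overridden function could look something like the sketch below. This is a hedged, standalone sketch, not the actual NDTensors code: the function name `truncate_rel_to_max!` is illustrative (in practice you would name it NDTensors.truncate! per step 4), and it assumes, per the description above, that P holds the squared singular values in decreasing order, is shrunk in place, and that the function returns a truncation error together with a docut threshold on the unsquared singular values.

```julia
# Max-relative truncation sketch. P: squared singular values, sorted
# in decreasing order. Keeps s_n while s_n / s_1 > cutoff, which in
# squared terms is P[n] / P[1] > cutoff^2. Shrinks P in place and
# returns (truncerr, docut). Illustrative only; the real
# NDTensors.truncate! signature and return values may differ.
function truncate_rel_to_max!(P::Vector{Float64};
                              cutoff=0.0, mindim=1, maxdim=length(P))
    origm = length(P)
    # last index still above the max-relative cutoff (0 if none)
    n = something(findlast(p -> p / P[1] > cutoff^2, P), 0)
    n = clamp(n, max(mindim, 1), maxdim)
    # sum of the discarded squared singular values
    truncerr = n < origm ? sum(@view P[(n + 1):origm]) : 0.0
    # threshold on the unsquared values: midpoint between kept and cut
    docut = n < origm ? sqrt((P[n] + P[n + 1]) / 2) : 0.0
    resize!(P, n)
    return truncerr, docut
end
```

The same mindim/maxdim clamping as in the shipped truncate! is kept so the usual keyword arguments still behave as expected.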

Thanks, this works great. In TEMPO this seems to provide a quadratic improvement in time complexity over the built-in relative cutoff.

The only issue I faced is that for a DenseTensor input you can’t extend the functionality easily, because truncate! isn’t passed all of the kwargs given when svd is called. This isn’t the case when svd is called with a BlockSparse input.

DenseTensor:

truncate!(
  P;
  mindim=mindim,
  maxdim=maxdim,
  cutoff=cutoff,
  use_absolute_cutoff=use_absolute_cutoff,
  use_relative_cutoff=use_relative_cutoff,
)

BlockSparse:

truncate!(d; kwargs...)

The two behave a bit differently, but if both behaved like the BlockSparse method then I could add functionality to truncate! without breaking existing/expected behavior.

Glad it works! And thanks for pointing out this asymmetry in how we handle the keyword args in the two cases. I’ve just submitted a PR to change the code to go ahead and pass kwargs... in both cases in the future.