Transverse Operators through ITensors

I have a tool that introduces and transforms a large number of transverse operators X, by which I here mean

X = X(a_1) \otimes \cdots \otimes X(a_m)

where each X(a_i) is a linear operator on the vector space carrying the index a_i. Is this natively implemented in ITensors.jl?

If not, below is the list of features I would need.

  1. I need access to the individual operators X[a] for any given a in the list of indices — imagine something like a Julia Dict from Index to ITensor, where each X[a] is an ITensor with exactly two indices.
  2. X must act on (contract with) a given ITensor T without changing inds(T).
  3. Unified encoding of the data that determines X. For example, X is often Hermitian and is the same operator on multiple indices, so its compact encoding is a vectorized copy of the upper-triangular portion stored once and simply fetched from the same memory location whenever needed.
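For context on feature 1, a plain Dict already gets close. Below is a minimal sketch; the index names, the matrix M, and the dimension 2 are my own placeholder assumptions, and the (a', a) index ordering is just the usual ITensors operator convention (for tagged physical indices built with siteinds, the op function produces such two-index operators natively):

```julia
using ITensors

# Hypothetical list of site indices (names "a1", "a2", ... are
# placeholders for whatever indices the tool actually carries)
inds_list = [Index(2, "a$i") for i in 1:3]

# A Hermitian matrix reused on every index (assumption for the example)
M = [1.0 2.0; 2.0 -1.0]

# Feature 1: a Dict from Index to a two-index ITensor, following the
# common ITensors convention that an operator carries inds (a', a)
X = Dict(a => ITensor(M, a', a) for a in inds_list)
```

Each X[a] then has exactly two indices, and X[a][a' => i, a => j] equals M[i, j].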

I’m comfortable packaging this up as a new type for my own private use but would prefer to learn what might already exist or if there are any “nearly-there” solutions.

My naive starting point is a struct TransverseOp that stores a dictionary of (Index, ITensor) pairs with inds(X[a]) = (a, a') (or (a, a_retagged)), uses ITensors.jl to construct their tensor product, and stores that as well. Then I would add a method

Base.:*(X::TransverseOp, T::ITensor)::ITensor

where X*T internally relabels the incident indices of T to match a' (or a_retagged) before applying the necessary contraction. I'd also add helper functions that let me create separate ITensor terms sharing the same pointer to the underlying data, to solve problem 3.
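That plan can be sketched in a few lines. This is a hypothetical illustration, not an existing ITensors type: the struct layout, the (a', a) convention, and the Pauli-X example matrix are my assumptions, and index relabeling is done with replaceind rather than retagging (note that randomITensor is spelled random_itensor in newer ITensors.jl versions):

```julia
using ITensors

# Hypothetical sketch of the struct from the post: one two-index
# ITensor per site index, with the convention inds(ops[a]) = (a', a)
struct TransverseOp
    ops::Dict{Index,ITensor}
end

function Base.:*(X::TransverseOp, T::ITensor)
    out = T
    for a in inds(T)
        haskey(X.ops, a) || continue
        # contract over a (the result carries a'), then relabel a'
        # back to a so inds(X * T) matches inds(T) up to ordering
        out = replaceind(X.ops[a] * out, a', a)
    end
    return out
end

# Example: a Pauli-X-like operator acting on index a only
a = Index(2, "a")
b = Index(2, "b")
Xop = TransverseOp(Dict{Index,ITensor}(a => ITensor([0.0 1.0; 1.0 0.0], a', a)))
T = randomITensor(a, b)
U = Xop * T
```

The loop only touches indices of T that appear in the dictionary, so an identity factor on the remaining indices never needs to be materialized.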

All of this happens inside iterative methods that loop and apply these contractions 10,000–100,000 times (rewriting the underlying data of the individual operators between iterations), so I'd prefer the minimum garbage generation and the maximum memory reuse the design can give me.
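For feature 3 and the allocation concern, one "nearly there" piece may be the lowercase itensor constructor: unlike ITensor(...), which copies its input, itensor(A, inds...) is documented to wrap the array data without copying when the element types allow it. That is my reading of the documented behavior and worth verifying on your version; if it holds, a sketch of the shared-storage idea looks like this:

```julia
using ITensors

a = Index(2, "a")
b = Index(2, "b")

# One shared backing array for an operator reused on several indices
H = ComplexF64[0 1; 1 0]

# itensor (lowercase) avoids copying when possible, so Xa and Xb
# should view the same memory as H
Xa = itensor(H, a', a)
Xb = itensor(H, b', b)

# Rewriting H in place inside the iterative loop then updates every
# wrapped ITensor at once, with no per-iteration allocations
H .= ComplexF64[0 -im; im 0]
```

For the Hermitian case, the upper-triangular vector would be unpacked into H once per update rather than once per index, keeping a single memory location as the source of truth.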