There is basically no advantage to using `swapind!` over `swapind`; in either case no new tensor data is allocated. In the case of `swapind`, a new ITensor is constructed, but it still points to the same original data, so the overhead is very small. In future versions, I plan to deprecate the in-place version (`swapind!`), since I think it is confusing to have multiple ways of doing nearly the same thing; it is a holdover from our port from C++ to Julia.
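For concreteness, here is a minimal sketch of the two variants (the index names and dimensions are just for illustration; the comments about shared data restate the behavior described above):

```julia
using ITensors

i = Index(2, "i")
j = Index(2, "j")
A = random_itensor(i, j)

# Non-mutating version: constructs a new ITensor with i and j swapped,
# but it wraps the same underlying data as A, so nothing is copied.
B = swapind(A, i, j)

# In-place version: swaps the indices directly on A. No new tensor
# data is allocated in either case.
swapind!(A, i, j)
```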
`A * delta(i, j)` does allocate new tensor data. I think it is difficult to avoid that without having surprising behavior, given the semantics of memory management in Julia. For example:
```julia
using ITensors

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")

A = random_itensor(j, k)
d = delta(i, j)
B = A * d
B .*= 2 # Does this modify `A` as well?

A = random_itensor(Float32, j, k)
d = delta(Float64, i, j)
B = A * d
B .*= 2 # What about now?
```
So the rule of thumb I use is: if it becomes difficult to reason about whether the output aliases one of the inputs, and that can change based on seemingly benign changes to the inputs (like the element types above), it is safer to make a copy.
We can probably add a “mode” of contraction where users can specify that they are OK with the output of a tensor contraction potentially being an alias of one of the inputs, i.e.:
```julia
@allow_alias begin
  B = A * d
end
```
That is similar to the `allow_alias` flag that we have in `ITensors.permute` (see the ITensors.jl documentation). (Miles will no doubt raise the point that if there were some form of reference counting or copy-on-write in Julia, we could handle this automatically to some extent, but that isn't available right now in the language.)
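Roughly, that existing flag is used like this (a minimal sketch with illustrative indices):

```julia
using ITensors

j, k = Index(2, "j"), Index(2, "k")
A = random_itensor(j, k)

# Default (allow_alias = false): the result is guaranteed not to
# alias A, even when the requested index order already matches A's.
B = permute(A, j, k)

# With allow_alias = true, permute may return an alias of A when no
# actual data movement is needed, avoiding a copy.
C = permute(A, j, k; allow_alias = true)
```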
Also, you should probably be using `replaceind` rather than `swapind` if you are just trying to replace one index with another; that is closer to what `delta` is doing.
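For example (a minimal sketch; `replaceind` requires the two indices to have the same dimension):

```julia
using ITensors

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
A = random_itensor(j, k)

# Replace the index j of A with i directly, instead of contracting
# with delta(i, j). Like swapind, this only changes index metadata.
B = replaceind(A, j, i)

# Compare with the delta contraction, which allocates new data:
# B = A * delta(i, j)
```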