unexpected behavior of norm(delta(i, j))

Dear ITensor team,

First of all, thank you for your efforts in maintaining the ITensor libraries.

This post is based on this tweet.

I am unsure if this is intentional or a bug.
Could you please clarify?

Minimal Working Example

using ITensors
i, j = Index(2), Index(2)
println(norm(delta(i,j)))

Actual Result

1.0

Expected Result

1.4142135623730951

Detailed Description

The issue is that norm(delta(i, j)) returns 1.0, where i, j = Index(2), Index(2).
Since delta (or δ) represents the Kronecker delta, and norm should compute the Frobenius norm, the expected result is 1.4142135623730951 ≈ √2.
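
For reference, the expected value is just the Frobenius norm of the 2×2 identity matrix, which can be checked with Julia's standard library (norm of a matrix in LinearAlgebra is the Frobenius norm):

julia> using LinearAlgebra

julia> norm([1.0 0.0; 0.0 1.0])
1.4142135623730951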

Additional Information

norm(delta(i, j)) is equivalent to norm(data(storage(tensor(delta(i, j))))), if my understanding is correct.
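
As a quick sanity check of this equivalence (assuming the internals behave as I describe), for dense storage the two sides should agree:

T = ITensor([1 0; 0 1], i, j)
norm(T) == norm(data(storage(tensor(T))))  # expected: true for dense storage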

The discrepancy appears to arise from the behavior of TensorStorage.data.
For instance, storage(ITensor([1 0; 0 1], i, j)).data yields:

julia> storage(ITensor([1 0; 0 1], i, j)).data
4-element Vector{Float64}:
 1.0
 0.0
 0.0
 1.0

On the other hand, data(storage(tensor(delta(i, j)))) returns just the single value 1.0, so norm also returns 1.0.
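
Concretely, in the same session as above:

julia> data(storage(tensor(delta(i,j))))
1.0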

Moreover, norm for delta always returns 1.0 regardless of the index dimensions, because .data only ever holds the single uniform value 1.0.

For example,

julia> i, j, k, l = Index(2), Index(2), Index(2), Index(2)
((dim=2|id=891), (dim=2|id=973), (dim=2|id=397), (dim=2|id=970))

julia> storage(delta(i,j,k,l))
Diag{Float64, Float64}
Diag storage with uniform diagonal value:
1.0

julia> storage(delta(i,j,k,l)).data
1.0

julia> norm(delta(i,j,k,l))
1.0
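
For comparison, converting to dense storage first appears to give the expected value, since the dense tensor then stores every nonzero entry explicitly (this also suggests norm(dense(...)) as a possible workaround):

julia> norm(dense(delta(i,j,k,l)))
1.4142135623730951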

Environment

Julia v1.8.5
ITensors v0.3.34
NDTensors v0.1.51

To reproduce the results, please note that some functions mentioned here require using NDTensors in addition to using ITensors.

@ultimatile Looks like a bug, could you make a bug report on GitHub? We'll probably have to special-case norm for tensors with uniform diagonal entries. Also, we are discussing moving to FillArrays.jl for the uniform diagonal storage, which should help make cases like this "just work" generically while still having minimal memory overhead. @kmp5
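
A minimal sketch of what such a special case could look like (a hypothetical helper, not actual ITensors/NDTensors code): for uniform diagonal storage with value λ and diagonal length d, the Frobenius norm is |λ|·√d, since the tensor has exactly d nonzero entries, all equal to λ.

# Hypothetical sketch, not the library implementation:
# Frobenius norm of a tensor whose only nonzero entries are a uniform
# value λ repeated along a diagonal of length d (the minimum index dimension).
uniform_diag_norm(λ, d) = abs(λ) * sqrt(d)

uniform_diag_norm(1.0, 2)  # 1.4142135623730951, the expected result above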

Thank you kindly for your response.
I have created an issue.