multiple dispatch depending on whether an MPS has its data on GPU

hi,
I have a quick question that's more about Julia than ITensors, but it's probably an easy answer for those of you working on the internals: is it possible to check the data type of an MPS (more specifically, whether it lives on the GPU) in a function signature, in order to profit from multiple dispatch? I'm thinking of something like

function myf(a::MPS{CUDA data})  # pseudocode: dispatch on GPU-backed storage
  b = cu(random_mps(siteinds(a)))
  inner(a, b)  # this should run on the GPU
end
function myf(a::MPS{non-CUDA})  # pseudocode: dispatch on CPU-backed storage
  b = random_mps(siteinds(a))
  inner(a, b)  # this runs on the CPU instead
end

Would something like this be possible?
Thanks!

This is what we would do internally:

using Adapt: adapt
using ITensorMPS: random_mps, siteinds
using Metal: mtl
using NDTensors: NDTensors

s = siteinds("S=1/2", 4)
x = mtl(random_mps(s))
# Move `y` to whatever storage type `x` uses (e.g. MtlArray here):
y = adapt(mapreduce(NDTensors.unwrap_array_type, promote_type, x), random_mps(s))

NDTensors.unwrap_array_type is an internal function that isn't really meant for external use yet, so it may change at any time, but this is the best I can suggest for now.
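Putting the two pieces together, the pattern above can be wrapped into a single generic method rather than one method per backend. This is only a sketch based on the snippet above: `myf` is a hypothetical function name, and it relies on the same internal `NDTensors.unwrap_array_type` caveat, so it may break in future versions.

```julia
using Adapt: adapt
using ITensorMPS: MPS, random_mps, siteinds, inner
using NDTensors: NDTensors

function myf(a::MPS)
  # Determine the storage array type backing `a`
  # (e.g. Array on CPU, MtlArray or CuArray on GPU)...
  arraytype = mapreduce(NDTensors.unwrap_array_type, promote_type, a)
  # ...and adapt `b` onto the same device before contracting,
  # so `inner` runs wherever `a`'s data lives.
  b = adapt(arraytype, random_mps(siteinds(a)))
  return inner(a, b)
end
```

With this, the same code path handles CPU and GPU inputs, which is usually preferable to writing separate dispatch methods by hand.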


ok I get the idea, thanks Matt!
