It looks like the C++ code in LibTorch directly manipulates Python objects to add the necessary references from the CUDA backend.
As far as I can tell, we'd have to link against the CUDA libraries directly in order to reach these APIs; libtorch doesn't seem to export anything we can use from TorchSharp's native interop layer.
haytham2597 added a commit to haytham2597/TorchSharp that referenced this issue on Oct 21, 2024.
PyTorch exposes this as `torch.cuda.get_device_properties` (https://pytorch.org/docs/stable/generated/torch.cuda.get_device_properties.html). The idea is to do the same for TorchSharp, with the goal of being able to query available memory on CUDA devices, etc.
I don't know exactly what needs to be changed to allow this, but it should be fairly straightforward.
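For reference, here is a minimal Python sketch of the PyTorch API that TorchSharp would mirror. It assumes a PyTorch install; `torch.cuda.mem_get_info` (which wraps CUDA's `cudaMemGetInfo`) is the call that reports free/total device memory, and the code degrades gracefully on CPU-only builds:

```python
try:
    import torch  # requires PyTorch; a CUDA build is optional
except ImportError:
    torch = None


def cuda_memory_info(device=0):
    """Return (free_bytes, total_bytes) for a CUDA device,
    or None if CUDA is unavailable (no GPU, or CPU-only build)."""
    if torch is None or not torch.cuda.is_available():
        return None
    # Static device properties: name, total memory, compute capability, ...
    props = torch.cuda.get_device_properties(device)
    # Dynamic memory state: free and total bytes as seen by the driver.
    free, total = torch.cuda.mem_get_info(device)
    print(f"{props.name}: {free} / {total} bytes free")
    return free, total
```

A TorchSharp equivalent would surface the same two pieces of information (static device properties plus current free/total memory) through the native interop layer, which is where the CUDA-linking question above comes in.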