torch.cuda.memory_summary
- torch.cuda.memory_summary(device=None, abbreviated=False)
Returns a human-readable printout of the current memory allocator statistics for a given device.
This can be useful to display periodically during training, or when handling out-of-memory exceptions.
- Parameters:
device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).
abbreviated (bool, optional) – whether to return an abbreviated summary (default: False).
- Return type: str
Note
See Memory management for more details about GPU memory management.
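A minimal usage sketch, assuming a CUDA-capable machine; the availability guard, the tensor allocations, and the choice of device index 0 are illustrative, not prescribed by this API.

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")

    # Allocate a few tensors so the summary has activity to report (illustrative sizes).
    tensors = [torch.randn(1024, 1024, device=device) for _ in range(4)]

    # Full report for the current device (device=None uses current_device()).
    print(torch.cuda.memory_summary())

    # Abbreviated report for an explicit device index.
    print(torch.cuda.memory_summary(device=0, abbreviated=True))
```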