torch.cuda.max_memory_reserved
- torch.cuda.max_memory_reserved(device=None)[source]
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.
By default, this returns the peak cached memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory of each iteration in a training loop, as in the sketch below.
- Parameters:
  device (torch.device or int, optional) – selected device. Returns the statistic for the current device, given by current_device(), if device is None (default).
- Return type:
  int
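A minimal sketch of per-iteration peak tracking; the linear model, optimizer, and loop below are illustrative assumptions, not part of this API:

    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda")
        # Hypothetical workload: a small linear model and SGD optimizer.
        model = torch.nn.Linear(1024, 1024).to(device)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        for step in range(3):
            # Reset peak stats so the maximum below reflects only this iteration.
            torch.cuda.reset_peak_memory_stats(device)

            x = torch.randn(64, 1024, device=device)
            loss = model(x).sum()
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

            # Peak memory reserved by the caching allocator during this
            # iteration, in bytes.
            peak = torch.cuda.max_memory_reserved(device)
            print(f"step {step}: peak reserved {peak / 2**20:.1f} MiB")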
Note
See Memory management for more details about GPU memory management.