torch.cuda.reset_max_memory_cached

torch.cuda.reset_max_memory_cached(device=None)[source]
Reset the starting point in tracking maximum GPU memory managed by the caching allocator for a given device.

See max_memory_cached() for details.

Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Warning
This function now calls reset_peak_memory_stats(), which resets *all* peak memory stats.

Note
See Memory management for more details about GPU memory management.
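As a minimal sketch of how this function fits into a measurement workflow, the hypothetical helper below reads the peak cached (reserved) memory with max_memory_cached() and then resets the tracker so the next measurement starts fresh. The helper name and the workload are illustrative, not part of the API; the code guards on torch.cuda.is_available() so it degrades gracefully without a GPU.

```python
import torch

def report_and_reset_peak_cached(device=None):
    """Illustrative helper: print the peak cached memory, then reset the tracker.

    Uses only documented calls: max_memory_cached() and
    reset_max_memory_cached(). Per the warning above, the reset clears
    *all* peak memory stats, not just the cached-memory peak.
    """
    if not torch.cuda.is_available():
        print("CUDA not available; nothing to measure.")
        return None
    peak = torch.cuda.max_memory_cached(device)
    print(f"peak cached memory: {peak} bytes")
    torch.cuda.reset_max_memory_cached(device)
    return peak

# Example workload (only runs when a GPU is present).
if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
report_and_reset_peak_cached()
```

Calling the helper between phases of a program gives a per-phase peak rather than a single peak over the whole run.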