torch._dynamo
Warning
This module is an early prototype and is subject to change.
- torch._dynamo.allow_in_graph(fn)
Customize which functions TorchDynamo will include in the generated graph. Similar to torch.fx.wrap().
torch._dynamo.allow_in_graph(my_custom_function)

@torch._dynamo.optimize(...)
def fn(a):
    x = torch.add(a, 1)
    x = my_custom_function(x)
    x = torch.add(x, 1)
    return x

fn(...)
Will capture a single graph containing my_custom_function().
- torch._dynamo.disallow_in_graph(fn)
Customize which functions TorchDynamo will exclude from the generated graph, forcing a graph break on them.
torch._dynamo.disallow_in_graph(torch.sub)

@torch._dynamo.optimize(...)
def fn(a):
    x = torch.add(a, 1)
    x = torch.sub(x, 1)
    x = torch.add(x, 1)
    return x

fn(...)
Will break the graph on torch.sub(), yielding two graphs, each containing a single torch.add() op.
- torch._dynamo.optimize(backend='inductor', *, nopython=False, guard_export_fn=None, guard_fail_fn=None, disable=False, dynamic=False)
The main entrypoint of TorchDynamo. Performs graph capture and calls backend() to optimize the extracted graphs.
- Parameters:
backend – One of two things:
- A function/callable taking a torch.fx.GraphModule and example_inputs and returning a Python callable that runs the graph faster. Additional context for the backend, such as torch.jit.fuser("fuser2"), can be provided by setting the backend_ctx_ctor attribute; see AOTAutogradMemoryEfficientFusionWithContext for usage. A sketch of such a backend appears after the example below.
- A string backend name in torch._dynamo.list_backends().
nopython – If True, graph breaks will be errors and there will be a single whole-program graph.
disable – If True, turn this decorator into a no-op.
dynamic – If True, turn on dynamic shapes support.
Example Usage:
@torch._dynamo.optimize()
def toy_example(a, b):
    ...
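A minimal sketch of a custom backend callable, assuming that returning gm.forward is enough to run the captured graph unchanged (the name my_backend and the debugging print are illustrative, not part of the API):

import torch
import torch._dynamo

def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # Inspect the ops TorchDynamo captured, then run the graph as-is.
    gm.graph.print_tabular()
    return gm.forward

@torch._dynamo.optimize(my_backend)
def toy_example(a, b):
    x = a / (torch.abs(a) + 1)
    return x * b

toy_example(torch.randn(10), torch.randn(10))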
- torch._dynamo.optimize_assert(backend, *, hooks=Hooks(guard_export_fn=None, guard_fail_fn=None), export=False, dynamic=False)
The same as torch._dynamo.optimize(backend, nopython=True).
- torch._dynamo.skip(fn=None)
Skip frames associated with the function code, but still process recursively invoked frames.
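A minimal sketch of using skip as a decorator (the helper function and the "eager" backend choice here are illustrative assumptions):

import torch
import torch._dynamo

@torch._dynamo.skip
def helper(x):
    # This frame is not traced by TorchDynamo, but frames it invokes
    # are still processed recursively.
    return torch.sin(x)

@torch._dynamo.optimize("eager")
def fn(x):
    return helper(x) + 1

fn(torch.randn(4))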
- class torch._dynamo.OptimizedModule(mod, dynamo_ctx)
Wraps the original nn.Module object and later patches its forward method with an optimized version.
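As a sketch, applying the optimize decorator to an nn.Module instance yields an OptimizedModule; calling it runs the patched forward (the printed type is an assumption about internals):

import torch
import torch._dynamo

mod = torch.nn.Linear(4, 4)
opt_mod = torch._dynamo.optimize("eager")(mod)

print(type(opt_mod))  # expected: an OptimizedModule wrapping mod
opt_mod(torch.randn(2, 4))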
- torch._dynamo.register_backend(compiler_fn=None, name=None, tags=())
Decorator to add a given compiler to the registry, allowing torch.compile to be called with a string shorthand. Note: for projects not imported by default, it might be easier to pass a function directly as a backend rather than use a string.
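A minimal sketch of registering a backend under a string name (the backend body, which runs the captured graph unchanged, is an illustrative assumption; the registered name defaults to the function name):

import torch
import torch._dynamo

@torch._dynamo.register_backend
def my_slow_backend(gm, example_inputs):
    # Runs the captured graph without any optimization.
    return gm.forward

def fn(x):
    return x + 1

compiled = torch.compile(fn, backend="my_slow_backend")
compiled(torch.ones(3))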