ConvertCustomConfig
- class torch.ao.quantization.fx.custom_config.ConvertCustomConfig
Custom configuration for convert_fx().

Example usage:

    convert_custom_config = ConvertCustomConfig() \
        .set_observed_to_quantized_mapping(ObservedCustomModule, QuantizedCustomModule) \
        .set_preserved_attributes(["attr1", "attr2"])
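Below is a minimal end-to-end sketch of where a ConvertCustomConfig is passed. The ToyModel, its "note" attribute, and the default qconfig mapping are assumptions made only for illustration; the custom-module mapping is omitted here because it also requires a matching PrepareCustomConfig (from the same module) on the prepare side.

    import torch
    from torch.ao.quantization import get_default_qconfig_mapping
    from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
    from torch.ao.quantization.fx.custom_config import ConvertCustomConfig, PrepareCustomConfig

    class ToyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)
            self.note = "kept after conversion"  # attribute we want to keep

        def forward(self, x):
            return self.linear(x)

    model = ToyModel().eval()
    example_inputs = (torch.randn(1, 4),)

    # Preserve "note" through both prepare and convert
    prepare_custom_config = PrepareCustomConfig().set_preserved_attributes(["note"])
    prepared = prepare_fx(model, get_default_qconfig_mapping(), example_inputs,
                          prepare_custom_config=prepare_custom_config)
    prepared(*example_inputs)  # calibration pass

    convert_custom_config = ConvertCustomConfig().set_preserved_attributes(["note"])
    quantized = convert_fx(prepared, convert_custom_config=convert_custom_config)
    print(quantized.note)  # "kept after conversion"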
- classmethod from_dict(convert_custom_config_dict)
Create a ConvertCustomConfig from a dictionary with the following items:

"observed_to_quantized_custom_module_class": a nested dictionary mapping from quantization mode to an inner mapping from observed module classes to quantized module classes, e.g.:

    {
        "static": {ObservedCustomModule: QuantizedCustomModule},
        "dynamic": {ObservedCustomModule: QuantizedCustomModule},
        "weight_only": {ObservedCustomModule: QuantizedCustomModule}
    }

"preserved_attributes": a list of attributes that persist even if they are not used in forward

This function is primarily for backward compatibility and may be removed in the future.

- Return type: ConvertCustomConfig
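For reference, a sketch of the equivalent dictionary form accepted by from_dict. ObservedCustomModule and QuantizedCustomModule are hypothetical placeholder classes defined only to make the snippet self-contained; real classes would follow the from_observed contract described under set_observed_to_quantized_mapping below.

    import torch
    from torch.ao.quantization.fx.custom_config import ConvertCustomConfig

    # Hypothetical placeholders for illustration only
    class ObservedCustomModule(torch.nn.Module):
        def forward(self, x):
            return x

    class QuantizedCustomModule(torch.nn.Module):
        def forward(self, x):
            return x

    convert_custom_config = ConvertCustomConfig.from_dict({
        "observed_to_quantized_custom_module_class": {
            "static": {ObservedCustomModule: QuantizedCustomModule},
        },
        "preserved_attributes": ["attr1", "attr2"],
    })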
- set_observed_to_quantized_mapping(observed_class, quantized_class, quant_type=QuantType.STATIC)
Set the mapping from a custom observed module class to a custom quantized module class.

The quantized module class must have a from_observed class method that converts the observed module class to the quantized module class.

- Return type: ConvertCustomConfig
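A minimal sketch of the from_observed contract described above. The observed and quantized classes are hypothetical; a real from_observed would typically transfer weights and read the observed module's observer statistics rather than return a fresh instance.

    import torch
    from torch.ao.quantization.quant_type import QuantType
    from torch.ao.quantization.fx.custom_config import ConvertCustomConfig

    class ObservedCustomModule(torch.nn.Module):
        def forward(self, x):
            return x

    class QuantizedCustomModule(torch.nn.Module):
        def forward(self, x):
            return x

        @classmethod
        def from_observed(cls, observed_module):
            # Construct the quantized module from the observed one; a real
            # implementation would copy weights and quantization params here.
            return cls()

    convert_custom_config = ConvertCustomConfig().set_observed_to_quantized_mapping(
        ObservedCustomModule, QuantizedCustomModule, quant_type=QuantType.STATIC
    )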