Searched defs:mixed_precision (Results 1 – 7 of 7) sorted by relevance
/aosp_15_r20/external/ComputeLibrary/src/dynamic_fusion/sketch/gpu/operators/

GpuPool2d.cpp
    52: GpuPool2dSettings &GpuPool2dSettings::mixed_precision(bool mixed_precision)  [in mixed_precision(), argument]
    58: bool GpuPool2dSettings::mixed_precision() const  [function in arm_compute::experimental::dynamic_fusion::GpuPool2dSettings]
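The two GpuPool2d.cpp hits are a setter/getter pair on a settings object: the setter takes a bool and returns the settings object so calls can be chained, and the const getter of the same name reads the flag back. A minimal Python stand-in for that pattern (illustrative only; `Pool2dSettings` here is not the real arm_compute class, which is C++):

```python
class Pool2dSettings:
    """Hypothetical stand-in for a fluent settings object: calling
    mixed_precision(value) sets the flag and returns self for chaining,
    while mixed_precision() with no argument reads the flag back."""

    def __init__(self):
        self._mixed_precision = False

    def mixed_precision(self, value=None):
        if value is None:
            # Getter form: no argument was passed, so report the flag.
            return self._mixed_precision
        # Setter form: store the flag (False is a valid value, only
        # omitting the argument selects the getter) and return self
        # so further settings can be chained onto this call.
        self._mixed_precision = bool(value)
        return self


settings = Pool2dSettings().mixed_precision(True)
```

In the C++ original the two forms are separate overloads distinguished by signature and constness; the single Python method with an optional argument is just a compact way to show the same chaining idiom.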
/aosp_15_r20/external/ComputeLibrary/tests/validation/fixtures/dynamic_fusion/gpu/cl/

Pool2dFixture.h
    55: …nsorShape input_shape, const Pool2dAttributes &pool_attr, DataType data_type, bool mixed_precision)  [in setup()]
    85: …ape input_shape, const Pool2dAttributes &pool_attr, const DataType data_type, bool mixed_precision)  [in compute_target()]
    168: …size, Padding2D pad, Size2D stride, bool exclude_padding, DataType data_type, bool mixed_precision)  [in setup()]
/aosp_15_r20/external/pytorch/test/distributed/fsdp/

test_fsdp_state_dict.py
    100: mixed_precision=False  [argument]
    693: self, state_dict_type, mixed_precision, state_dict_rank0_and_offload  [argument]
    1026: self, state_dict_type, prefix, ignore_inner, mixed_precision  [argument]

test_fsdp_comm_hooks.py
    40: def __init__(self, has_wrapping, sharding_strategy, mixed_precision=None)  [argument]
    174: def _init_model(self, core, sharding_strategy, mixed_precision=None)  [argument]

test_fsdp_core.py
    264: def test_param_change_after_init(self, mixed_precision)  [argument]
    388: def test_transformer_no_grad(self, mixed_precision)  [argument]

test_fsdp_sharded_grad_scaler.py
    58: mixed_precision = ["enable_mixed_precision", None]  [variable]
/aosp_15_r20/external/pytorch/test/inductor/

test_cutlass_backend.py
    248: mixed_precision=False  [argument]
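All of the hits above toggle the same underlying technique: run the expensive compute (convolution/pooling kernels in ComputeLibrary, forward/backward and gradient communication in FSDP, GEMMs in the CUTLASS backend) in a low-precision type while keeping a full-precision master copy for accumulation and updates. A minimal NumPy sketch of that pattern, not taken from either codebase (the function name and gradient are invented for illustration):

```python
import numpy as np


def mixed_precision_step(master_w, grad_fn, lr=0.1):
    """One hypothetical optimizer step in the classic mixed-precision
    pattern: gradients are computed in float16, but the update is
    applied to float32 "master" weights so repeated small updates are
    not lost to float16 rounding."""
    # Low-precision copy used for compute; this cast is what a
    # mixed_precision=True style flag typically enables.
    w16 = master_w.astype(np.float16)
    grad16 = grad_fn(w16)  # gradient evaluated entirely in float16
    # Upcast the gradient and update the float32 master weights.
    return master_w - lr * grad16.astype(np.float32)


master = np.array([1.0, -2.0], dtype=np.float32)
# Toy gradient of the quadratic loss w**2, i.e. 2*w.
updated = mixed_precision_step(master, lambda w: 2 * w)
```

The master-weights trick matters because a float16 parameter cannot absorb an update much smaller than its own magnitude; FSDP's `MixedPrecision` config and ComputeLibrary's pooling setting are two independent knobs for the same trade-off.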