
Searched defs:mixed_precision (Results 1 – 7 of 7) sorted by relevance

/aosp_15_r20/external/ComputeLibrary/src/dynamic_fusion/sketch/gpu/operators/
GpuPool2d.cpp:52 GpuPool2dSettings &GpuPool2dSettings::mixed_precision(bool mixed_precision) in mixed_precision() argument
GpuPool2d.cpp:58 bool GpuPool2dSettings::mixed_precision() const in mixed_precision() function in arm_compute::experimental::dynamic_fusion::GpuPool2dSettings
/aosp_15_r20/external/ComputeLibrary/tests/validation/fixtures/dynamic_fusion/gpu/cl/
Pool2dFixture.h:55 …nsorShape input_shape, const Pool2dAttributes &pool_attr, DataType data_type, bool mixed_precision) in setup()
Pool2dFixture.h:85 …ape input_shape, const Pool2dAttributes &pool_attr, const DataType data_type, bool mixed_precision) in compute_target()
Pool2dFixture.h:168 …size, Padding2D pad, Size2D stride, bool exclude_padding, DataType data_type, bool mixed_precision) in setup()
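The GpuPool2d.cpp hits pair a chainable setter (`GpuPool2dSettings &mixed_precision(bool)`) with a const getter (`bool mixed_precision() const`). A minimal sketch of that fluent setter/getter idiom, written in Python for illustration; the class below is hypothetical and is not the actual arm_compute API:

```python
class Pool2dSettings:
    """Illustrative settings object, not the real GpuPool2dSettings."""

    def __init__(self):
        self._mixed_precision = False

    def set_mixed_precision(self, enabled: bool) -> "Pool2dSettings":
        # The setter returns self so calls can be chained, mirroring the
        # C++ overload that returns `GpuPool2dSettings &`.
        self._mixed_precision = enabled
        return self

    def mixed_precision(self) -> bool:
        # Read-only accessor, mirroring `bool mixed_precision() const`.
        return self._mixed_precision


settings = Pool2dSettings().set_mixed_precision(True)
```

Returning the settings object from each setter lets callers configure several options in one expression before handing the settings to an operator.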
/aosp_15_r20/external/pytorch/test/distributed/fsdp/
test_fsdp_state_dict.py:100 mixed_precision=False, argument
test_fsdp_state_dict.py:693 self, state_dict_type, mixed_precision, state_dict_rank0_and_offload argument
test_fsdp_state_dict.py:1026 self, state_dict_type, prefix, ignore_inner, mixed_precision argument
test_fsdp_comm_hooks.py:40 def __init__(self, has_wrapping, sharding_strategy, mixed_precision=None): argument
test_fsdp_comm_hooks.py:174 def _init_model(self, core, sharding_strategy, mixed_precision=None): argument
test_fsdp_core.py:264 def test_param_change_after_init(self, mixed_precision): argument
test_fsdp_core.py:388 def test_transformer_no_grad(self, mixed_precision): argument
test_fsdp_sharded_grad_scaler.py:58 mixed_precision = ["enable_mixed_precision", None] variable
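The FSDP test hits take `mixed_precision` as either a config object or `None` (disabled). A hedged sketch of how such a flag typically resolves to a parameter dtype; `MixedPrecisionConfig` here is a hypothetical, torch-free stand-in for `torch.distributed.fsdp.MixedPrecision`, with dtypes as strings:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MixedPrecisionConfig:
    # Hypothetical stand-in for torch.distributed.fsdp.MixedPrecision;
    # dtype names are plain strings to keep the sketch self-contained.
    param_dtype: str = "float16"
    reduce_dtype: str = "float16"
    buffer_dtype: str = "float16"


def resolve_param_dtype(mixed_precision: Optional[MixedPrecisionConfig]) -> str:
    """Dtype parameters are cast to; full precision when the config is None."""
    if mixed_precision is None:
        return "float32"
    return mixed_precision.param_dtype
```

Passing `None` keeps everything in full precision, which is why the tests above parameterize over both a config value and `None`.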
/aosp_15_r20/external/pytorch/test/inductor/
test_cutlass_backend.py:248 mixed_precision=False, argument