
Searched full:operators (Results 1 – 25 of 8710) sorted by relevance


/aosp_15_r20/prebuilts/tools/common/m2/repository/io/reactivex/rxjava2/rxjava/2.2.9/
rxjava-2.2.9-sources.jar   META-INF/ META-INF/MANIFEST.MF io/ io/reactivex/ io/ ...
rxjava-2.2.9.jar           META-INF/ META-INF/MANIFEST.MF io/ io/reactivex/ io/ ...
/aosp_15_r20/external/tensorflow/tensorflow/python/ops/linalg/
linear_operator_composition.py
35 This operator composes one or more linear operators `[op1,...,opJ]`,
51 the defining operators' methods.
54 # Create a 2 x 2 linear operator composed of two 2 x 2 operators.
73 # Create a [2, 3] batch of 4 x 5 linear operators.
77 # Create a [2, 3] batch of 5 x 6 linear operators.
81 # Compose to create a [2, 3] batch of 4 x 6 operators.
93 the sum of the individual operators' operations.
112 operators, argument
120 `LinearOperatorComposition` is initialized with a list of operators
126 operators: Iterable of `LinearOperator` objects, each with
[all …]
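The docstring above maps onto the public `tf.linalg.LinearOperatorComposition` class. A minimal sketch; the operator values below are illustrative, not taken from the indexed file:

```python
import tensorflow as tf

# Two 2 x 2 operators; the composition acts like the matrix product op1 @ op2.
op1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
op2 = tf.linalg.LinearOperatorFullMatrix([[1., 0.], [0., -1.]])

composed = tf.linalg.LinearOperatorComposition(operators=[op1, op2])
print(composed.shape)              # (2, 2)
print(composed.matmul(tf.eye(2)))  # dense result of op1 @ op2
```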
linear_operator_kronecker.py
65 This operator composes one or more linear operators `[op1,...,opJ]`,
72 where the product is over all operators.
75 # Create a 4 x 4 linear operator composed of two 2 x 2 operators.
96 # Create a [2, 3] batch of 4 x 5 linear operators.
100 # Create a [2, 3] batch of 5 x 6 linear operators.
104 # Compose to create a [2, 3] batch of 20 x 30 operators.
116 the sum of the individual operators' operations.
134 operators, argument
142 `LinearOperatorKronecker` is initialized with a list of operators
146 operators: Iterable of `LinearOperator` objects, each with
[all …]
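For comparison, a short sketch of the Kronecker variant via the public `tf.linalg.LinearOperatorKronecker` API; the two 2 x 2 factors are made up:

```python
import tensorflow as tf

# The Kronecker product of two 2 x 2 operators is a 4 x 4 operator,
# matching the "4 x 4 ... composed of two 2 x 2 operators" example above.
op1 = tf.linalg.LinearOperatorFullMatrix([[1., 2.], [3., 4.]])
op2 = tf.linalg.LinearOperatorDiag([1., -1.])

kron = tf.linalg.LinearOperatorKronecker(operators=[op1, op2])
print(kron.shape)        # (4, 4)
print(kron.to_dense())
```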
linear_operator_block_diag.py
37 This operator combines one or more linear operators `[op1,...,opJ]`,
66 # Create a 4 x 4 linear operator combined of two 2 x 2 operators.
106 # Create a [2, 3] batch of 4 x 4 linear operators.
110 # Create a [1, 3] batch of 5 x 5 linear operators.
114 # Combine to create a [2, 3] batch of 9 x 9 operators.
131 the sum of the individual operators' operations.
150 operators, argument
158 `LinearOperatorBlockDiag` is initialized with a list of operators
162 operators: Iterable of `LinearOperator` objects, each with
175 operators names joined with `_o_`.
[all …]
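A small sketch of the block-diagonal case using `tf.linalg.LinearOperatorBlockDiag`, again with illustrative blocks:

```python
import tensorflow as tf

# A 2 x 2 block and a 3 x 3 block combine into a 5 x 5 block-diagonal operator.
block1 = tf.linalg.LinearOperatorIdentity(num_rows=2)
block2 = tf.linalg.LinearOperatorDiag([2., 3., 4.])

block_diag = tf.linalg.LinearOperatorBlockDiag(operators=[block1, block2])
print(block_diag.shape)      # (5, 5)
print(block_diag.to_dense())
```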
linear_operator_block_lower_triangular.py
40 This operator is initialized with a nested list of linear operators, which
71 operators' methods.
74 operators:
125 Create a [2, 3] batch of 4 x 4 linear operators:
129 Create a [1, 3] batch of 5 x 4 linear operators:
133 Create a [1, 3] batch of 5 x 5 linear operators:
137 Combine to create a [2, 3] batch of 9 x 9 operators:
163 operators is `N = D * (D + 1) // 2`.
166 complexities of the individual operators.
168 of the operators on the diagonal and the `matmul` complexities of the
[all …]
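The nested-list initialization and the `N = D * (D + 1) // 2` operator count can be seen in a minimal `tf.linalg.LinearOperatorBlockLowerTriangular` sketch (block values are illustrative):

```python
import tensorflow as tf

# The nested list holds one block row per entry, so a D x D blocking needs
# D * (D + 1) // 2 operators; here D = 2, i.e. three 2 x 2 blocks.
diag_1 = tf.linalg.LinearOperatorDiag([1., 2.])
below = tf.linalg.LinearOperatorFullMatrix([[3., 0.], [0., 3.]])
diag_2 = tf.linalg.LinearOperatorIdentity(num_rows=2)

blt = tf.linalg.LinearOperatorBlockLowerTriangular([[diag_1], [below, diag_2]])
print(blt.shape)             # (4, 4)
```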
linear_operator_addition.py
32 def add_operators(operators, argument
36 """Efficiently add one or more linear operators.
38 Given operators `[A1, A2,...]`, this `Op` returns a possibly shorter list of
39 operators `[B1, B2,...]` such that
43 The operators `Bk` result by adding some of the `Ak`, as allowed by
46 Example of efficient adding of diagonal operators.
75 operators: Iterable of `LinearOperator` objects with same `dtype`, domain
89 ValueError: If `operators` argument is empty.
97 check_ops.assert_proper_iterable(operators)
98 operators = list(reversed(operators))
[all …]
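A hedged sketch of `add_operators`; note that the import path is the private TensorFlow module indexed above, not a stable public API:

```python
import tensorflow as tf
# Private module; the path may change between TensorFlow versions.
from tensorflow.python.ops.linalg import linear_operator_addition

diag_a = tf.linalg.LinearOperatorDiag([1., 1.])
diag_b = tf.linalg.LinearOperatorDiag([2., 3.])

# Two diagonal operators fold into a single diagonal operator, so the
# returned list can be shorter than the input list.
summed = linear_operator_addition.add_operators([diag_a, diag_b])
print(len(summed), summed[0].to_dense())
```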
/aosp_15_r20/external/pytorch/test/
test_namedtuple_return_api.py
65 'operators are allowed to have named '
78 op = namedtuple('op', ['operators', 'input', 'names', 'hasout'])
79 operators = [
80 …op(operators=['max', 'min', 'median', 'nanmedian', 'mode', 'sort', 'topk', 'cummax', 'cummin'], in…
82 op(operators=['kthvalue'], input=(1, 0),
84 op(operators=['svd'], input=(), names=('U', 'S', 'V'), hasout=True),
85 … op(operators=['linalg_svd', '_linalg_svd'], input=(), names=('U', 'S', 'Vh'), hasout=True),
86 … op(operators=['slogdet', 'linalg_slogdet'], input=(), names=('sign', 'logabsdet'), hasout=True),
87 …op(operators=['_linalg_slogdet'], input=(), names=('sign', 'logabsdet', 'LU', 'pivots'), hasout=Tr…
88 op(operators=['qr', 'linalg_qr'], input=(), names=('Q', 'R'), hasout=True),
[all …]
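The test table above lists the field names these operators expose on their namedtuple returns. A small illustrative check (tensor values are arbitrary):

```python
import torch

t = torch.tensor([[1., 3.], [2., 0.]])

# Reduction ops such as max/min/median/mode/sort/topk return named tuples.
result = torch.max(t, dim=0)
print(result.values, result.indices)

# Linear-algebra ops use the field names from the table above,
# e.g. linalg_svd -> (U, S, Vh) and linalg_slogdet -> (sign, logabsdet).
U, S, Vh = torch.linalg.svd(t)
sign, logabsdet = torch.linalg.slogdet(t)
print(sign, logabsdet)
```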
/aosp_15_r20/external/executorch/docs/source/
ir-ops-set-definition.md
5 The list of operators that have been identified as a Core ATen operator can be found on the [IRs pa…
9 …operators is the [ATen library](https://pytorch.org/cppdocs/#aten); outside of ATen operators, dev…
11 An “ATen operator set” or “ATen opset” is the set of ATen operators that can be used to represent a…
15 …operators (i.e. operators that do not mutate or alias inputs). Therefore, `torch.export` produces …
19 …ions. This process will replace specified ATen operators with equivalent sequences of other ATen o…
21 …operators will be decomposed. ATen operators that are a part of the core ATen opset (i.e. core ATe…
23 …operators that need to be handled by PyTorch backends and compilers once a model is exported. Not …
27 …possible; the vast majority of use-cases will not want to decompose the operators contained within…
31 …operators created by surveying models in public GitHub repositories in addition to well-known open…
35 …itions of the operators; the decomposition should be a relatively straightforward re-expression of…
[all …]
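A minimal sketch of the export-then-decompose flow described above, assuming a recent PyTorch with `torch.export`; the example module is made up, and `run_decompositions()` is called with its default decomposition table:

```python
import torch
from torch.export import export

class Example(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x) + 1.0

# torch.export produces a graph of functional ATen operators.
ep = export(Example(), (torch.randn(4),))

# Running decompositions rewrites non-core operators into sequences of
# core ATen operators; ops already in the core ATen opset are kept.
core_ep = ep.run_decompositions()
print(core_ep.graph_module.graph)
```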
concepts.md
23 … consists of functional ATen operators, higher order operators (like control flow operators) and r…
36 …des only differentiable ATen operators, along with higher order operators (control flow ops) and r…
44 …contain operators or submodules that are only meaningful to the target backend. This dialect allow…
52 …operators that are not part of ATen dialect or Edge dialect. Backend specific operators are only i…
68 …ator information from models and/or other sources and only includes the operators required by them…
74 …dialect contains the core ATen operators along with higher order operators (control flow) and regi…
76 ## [Core ATen operators / Canonical ATen operator set](./ir-ops-set-definition.md)
78 A select subset of the PyTorch ATen operator library. Core ATen operators will not be decomposed wh…
82 …of other operators. During the AOT process, a default list of decompositions is employed, breaking…
86 …hese are operators that aren't part of the ATen library, but which appear in [eager mode](./concep…
[all …]
build-run-xtensa.md
10 …operators and compiler passes to enhance the model and make it more suitable to running on Xtensa …
76 │ │ ├── operators
81 │ ├── operators
87 └── operators
92 …h/executorch/blob/main/backends/cadence/aot/quantizer.py), will replace operators with custom ones…
94 ***Operators***:
96 …operators folder contains two kinds of operators: existing operators from the [ExecuTorch portable…
115 ***Quantized Operators***:
117 The other, more complex model are custom operators, including:
118 …[here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/quantized_linear…
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/autograph/operators/
__init__.py
15 """This module implements operators that AutoGraph overloads.
32 # All operators may accept a final argument named "opts", of a type that
36 from tensorflow.python.autograph.operators.conditional_expressions import if_exp
37 from tensorflow.python.autograph.operators.control_flow import for_stmt
38 from tensorflow.python.autograph.operators.control_flow import if_stmt
39 from tensorflow.python.autograph.operators.control_flow import while_stmt
40 from tensorflow.python.autograph.operators.data_structures import list_append
41 from tensorflow.python.autograph.operators.data_structures import list_pop
42 from tensorflow.python.autograph.operators.data_structures import list_stack
43 from tensorflow.python.autograph.operators.data_structures import ListPopOpts
[all …]
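A small sketch of how these overloads surface in practice: `tf.autograph.to_code` shows the generated code, which routes Python control flow through this module's `while_stmt`/`if_stmt`/`for_stmt` overloads (the counting function is illustrative):

```python
import tensorflow as tf

def count_to(n):
    i = tf.constant(0)
    while i < n:        # plain Python while; AutoGraph rewrites it to while_stmt
        i += 1
    return i

# Inspect the generated code, which calls the overloaded operators
# from this module via the `ag__` alias.
print(tf.autograph.to_code(count_to))

# Inside tf.function the conversion is applied automatically.
print(tf.function(count_to)(tf.constant(3)))
```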
/aosp_15_r20/external/pytorch/tools/code_analyzer/
gen_operators_yaml.py
24 # Generate YAML file containing the operators used for a specific PyTorch model.
39 # operators:
63 # 1. Inference Root Operators (--root-ops): Root operators (called directly
66 # 2. Training Root Operators (--training-root-ops): Root operators used
67 # by training use-cases. Currently, this list is the list of all operators
68 # used by training, and not just the root operators. All Training ops are
72 # operator dependency graph used to determine which operators depend on
73 # which other operators for correct functioning. This is used for
74 # generating the transitive closure of all the operators used by the
75 # model based on the root operators when static selective build is used.
[all …]
/aosp_15_r20/external/pytorch/torchgen/selective_build/
selector.py
27 # operators that should be included in the build.
32 # operators.
39 operators: dict[str, SelectiveBuildOperator]
80 "operators",
103 operators = {}
104 operators_dict = data.get("operators", {})
108 operators[k] = SelectiveBuildOperator.from_yaml_dict(k, v)
134 operators,
157 operators = {}
159 operators[op] = {
[all …]
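A rough sketch of the YAML shape the matched lines suggest selector.py consumes: a top-level `operators` mapping keyed by operator name. The per-operator attribute names below are assumptions for illustration, not taken from the indexed file; the real code wraps each entry via `SelectiveBuildOperator.from_yaml_dict`:

```python
import yaml

# Hypothetical selective-build YAML; only the top-level "operators" key
# is confirmed by the snippet above.
model_ops_yaml = """
operators:
  aten::add.Tensor:
    is_root_operator: true
    is_used_for_training: false
"""

data = yaml.safe_load(model_ops_yaml)
operators = {}
for name, attrs in data.get("operators", {}).items():
    # selector.py builds SelectiveBuildOperator objects here instead.
    operators[name] = attrs
print(operators)
```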
/aosp_15_r20/external/armnn/docs/
05_01_parsers.dox
29 ## ONNX operators that the Arm NN SDK supports
31 This reference guide provides a list of ONNX operators the Arm NN SDK currently supports.
33 The Arm NN SDK ONNX parser currently only supports fp32 operators.
38 …- See the ONNX [Add documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Add)…
41 …veragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#AveragePool) …
44 …- See the ONNX [Concat documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#C…
47 …- See the ONNX [Constant documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md…
50 …- See the ONNX [Clip documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Cli…
53 …- See the ONNX [Flatten documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#…
56 …- See the ONNX [Gather documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#G…
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/lite/toco/tflite/
export_test.cc
42 void ResetOperators() { input_model_.operators.clear(); } in ResetOperators()
61 input_model_.operators.emplace_back(op); in AddOperatorsByName()
72 input_model_.operators.emplace_back(op); in AddOperatorsByName()
86 input_model_.operators.emplace_back(op); in AddOperatorsByName()
98 input_model_.operators.emplace_back(op); in AddOperatorsByName()
102 input_model_.operators.emplace_back(op); in AddOperatorsByName()
145 input_model_.operators.emplace_back(op); in BuildQuantizableTestModel()
157 input_model_.operators.emplace_back(op); in BuildQuantizableTestModel()
201 auto operators = (*model->subgraphs())[0]->operators(); in ExportAndGetOperatorIndices() local
202 for (const auto* op : *operators) { in ExportAndGetOperatorIndices()
[all …]
import_test.cc
109 Offset<Vector<Offset<::tflite::Operator>>> operators, in BuildSubGraphs() argument
117 builder_.CreateVector(outputs), operators, in BuildSubGraphs()
125 // conversions multiple times, and the conversion of operators is tested by
131 auto operators = BuildOperators(); in BuildTestModel() local
132 auto subgraphs = BuildSubGraphs(tensors, operators); in BuildTestModel()
160 details::OperatorsTable operators; in TEST_F() local
161 details::LoadOperatorsTable(*input_model_, &operators); in TEST_F()
162 EXPECT_THAT(operators, ElementsAre("MAX_POOL_2D", "CONV_2D")); in TEST_F()
193 auto operators = BuildOperators(); in TEST_F() local
194 auto subgraphs = BuildSubGraphs(tensors, operators); in TEST_F()
[all …]
/aosp_15_r20/external/ComputeLibrary/
Android.bp
97 "src/c/operators/AclActivation.cpp",
463 "src/cpu/operators/CpuActivation.cpp",
464 "src/cpu/operators/CpuAdd.cpp",
465 "src/cpu/operators/CpuAddMulAdd.cpp",
466 "src/cpu/operators/CpuCast.cpp",
467 "src/cpu/operators/CpuConcatenate.cpp",
468 "src/cpu/operators/CpuConv2d.cpp",
469 "src/cpu/operators/CpuConvertFullyConnectedWeights.cpp",
470 "src/cpu/operators/CpuCopy.cpp",
471 "src/cpu/operators/CpuDepthwiseConv2d.cpp",
[all …]
filelist.json
98 "operators": array
100 "src/c/operators/AclActivation.cpp"
149 "operators": { object
154 "src/gpu/cl/operators/ClActivation.cpp",
172 "src/gpu/cl/operators/ClAdd.cpp"
235 "src/gpu/cl/operators/ClCast.cpp",
265 "src/gpu/cl/operators/ClConcatenate.cpp",
294 "src/gpu/cl/operators/ClConv2d.cpp",
295 "src/gpu/cl/operators/ClDirectConv2d.cpp",
296 "src/gpu/cl/operators/ClGemmConv2d.cpp",
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/lite/tools/optimize/
modify_model_interface_test.cc
75 model->subgraphs[0]->operators.push_back(std::move(quant_op)); in CreateQuantizedModelSingleInputOutput()
76 model->subgraphs[0]->operators.push_back(std::move(fc_op)); in CreateQuantizedModelSingleInputOutput()
77 model->subgraphs[0]->operators.push_back(std::move(dequant_op)); in CreateQuantizedModelSingleInputOutput()
178 model->subgraphs[0]->operators.push_back(std::move(quant_op_1)); in CreateQuantizedModelMultipleInputOutput()
179 model->subgraphs[0]->operators.push_back(std::move(quant_op_2)); in CreateQuantizedModelMultipleInputOutput()
180 model->subgraphs[0]->operators.push_back(std::move(fc_op)); in CreateQuantizedModelMultipleInputOutput()
181 model->subgraphs[0]->operators.push_back(std::move(dequant_op_1)); in CreateQuantizedModelMultipleInputOutput()
182 model->subgraphs[0]->operators.push_back(std::move(dequant_op_2)); in CreateQuantizedModelMultipleInputOutput()
281 model->subgraphs[0]->operators.push_back(std::move(fc_op)); in CreateFloatModel()
329 EXPECT_EQ(model->subgraphs[0]->operators.size(), 1); in TEST_P()
[all …]
quantize_model_test.cc
127 // TODO(jianlijianli): Compare operators as well. in ExpectSameModels()
235 ASSERT_EQ(quantized_graph->operators.size(), in TEST_P()
236 float_graph->operators()->size()); in TEST_P()
237 for (size_t i = 0; i < quantized_graph->operators.size(); i++) { in TEST_P()
238 const auto quant_op = quantized_graph->operators[i].get(); in TEST_P()
239 const auto float_op = float_graph->operators()->Get(i); in TEST_P()
279 EXPECT_EQ(subgraph->operators.size(), in TEST_P()
280 readonly_subgraph->operators()->size() + 2); in TEST_P()
282 const auto& quant_op = subgraph->operators[0]; in TEST_P()
284 subgraph->operators[subgraph->operators.size() - 1]; in TEST_P()
[all …]
/aosp_15_r20/external/jsilver/src/com/google/clearsilver/jsilver/functions/bundles/
CoreOperators.java
20 import com.google.clearsilver.jsilver.functions.operators.AddFunction;
21 import com.google.clearsilver.jsilver.functions.operators.AndFunction;
22 import com.google.clearsilver.jsilver.functions.operators.DivideFunction;
23 import com.google.clearsilver.jsilver.functions.operators.EqualFunction;
24 import com.google.clearsilver.jsilver.functions.operators.ExistsFunction;
25 import com.google.clearsilver.jsilver.functions.operators.GreaterFunction;
26 import com.google.clearsilver.jsilver.functions.operators.GreaterOrEqualFunction;
27 import com.google.clearsilver.jsilver.functions.operators.LessFunction;
28 import com.google.clearsilver.jsilver.functions.operators.LessOrEqualFunction;
29 import com.google.clearsilver.jsilver.functions.operators.ModuloFunction;
[all …]
/aosp_15_r20/external/pytorch/benchmarks/operator_benchmark/
README.md
3 This benchmark suite provides a systemic way to measure the performance of operators for a wide ran…
33 Note: we set the number of OpenMP and MKL threads both to 1. If you want to benchmark operators wit…
52 …rch operators to the benchmark suite. Existing benchmarks for operators are in the `pt` directory …
104 … `Tag` allows you to only run some of the inputs. Most of the inputs to operators being supported …
127 List all the supported operators:
139 python -m benchmark_all_test --operators add --omp-num-threads 1 --mkl-num-threads 1
148 ## Adding New Operators to the Benchmark Suite
149 …operators in the benchmark suite. In the following sections, we'll step through the complete flow …
195 #### Part 1. Specify Inputs to Operators
218 1\. `op_bench.config_list` is a helper function which specifies a list of inputs to operators. It t…
[all …]
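A hedged sketch of the benchmark-definition pattern the README describes, using `op_bench.config_list` and `TorchBenchmarkBase`; it assumes the script is run from the `benchmarks/operator_benchmark` tree, and exact helper signatures may differ across PyTorch versions:

```python
import operator_benchmark as op_bench
import torch

# Each entry in `attrs` is one input shape; `tags` let you select a subset
# of inputs on the command line (e.g. --tag-filter short).
add_configs = op_bench.config_list(
    attr_names=["M", "N", "K"],
    attrs=[[8, 16, 32], [64, 64, 64]],
    tags=["short"],
)

class AddBenchmark(op_bench.TorchBenchmarkBase):
    def init(self, M, N, K):
        self.inputs = {
            "input_one": torch.rand(M, N, K),
            "input_two": torch.rand(M, N, K),
        }
        self.set_module_name("add")

    def forward(self, input_one, input_two):
        return torch.add(input_one, input_two)

op_bench.generate_pt_test(add_configs, AddBenchmark)

if __name__ == "__main__":
    op_bench.benchmark_runner.main()
```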
/aosp_15_r20/external/pytorch/torch/_dynamo/
profiler.py
14 operators: int = 0 variable in ProfileMetrics
20 self.operators += other.operators
28 self.operators + other.operators,
37 self.operators / max(1, other.operators),
42 return f"{self.operators:4.0%} ops {self.microseconds:4.0%} time"
45 return [self.operators, self.microseconds]
66 f"{self.captured.operators:4}/{self.total.operators:4} = "
74 self.captured.operators,
75 self.total.operators,
137 operators=captured_ops,
[all …]
/aosp_15_r20/external/sdv/vsomeip/third_party/boost/utility/
operators.htm
9 <title>Header &lt;boost/operators.hpp&gt; Documentation</title>
15 "../../boost/operators.hpp">boost/operators.hpp</a>&gt;</cite></h1>
18 "../../boost/operators.hpp">boost/operators.hpp</a>&gt;</cite> supplies
20 templates define operators at namespace scope in terms of a minimal
21 number of fundamental operators provided by the class.</p>
65 <a href="#arithmetic">Arithmetic operators</a>
69 <a href="#smpl_oprs">Simple Arithmetic Operators</a>
78 <li><a href="#grpd_oprs">Grouped Arithmetic Operators</a></li>
82 <li><a href="#a_demo">Arithmetic Operators Demonstration and Test
88 <a href="#deref">Dereference Operators and Iterator Helpers</a>
[all …]
