/aosp_15_r20/external/executorch/docs/source/

  compiler-memory-planning.md
    1: # Memory Planning
    9: Concretely, there are three passes related to memory planning:
    13: …RangeBasedSymShapeEval is the recommended way of doing UpperBoundMemory planning. It will actually…
    15: * `MemoryPlanningPass` does the actual memory planning given all tensors get a TensorSpec with conc…
    19: ExecuTorch provides two options for memory planning algorithms out of the box, but users can define…
    24: …fetime doesn’t overlap with the current tensor that we try to do memory planning for, we allocate …
    46: …cations, or even change the planning algorithm itself. The following example shows how you could r…
    85: Users attempting to write a custom memory planning algorithm should start by looking at [the greedy…
    89: Please refer to [Memory Planning Inspection](./memory-planning-inspection.md) for a tool to inspect…

  memory-planning-inspection.md
    1: # Memory Planning Inspection in ExecuTorch
    3: After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass…
    30: * [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)

  concepts.md
    120: …data dependent output shapes. Such operators are difficult to do memory planning on, as each invoc…
    209: ## [Memory planning](./compiler-memory-planning.md)
    211: The process of allocating and managing memory for a model. In ExecuTorch, a memory planning pass is…
    229: …m tensor lifetime analysis. In ExecuTorch, an out variant pass is performed before memory planning.

  intro-how-it-works.md
    8: …lerators to improve latency. It also provides an entry point for memory planning, i.e. to efficien…
    23: …ng with high-performance operator implementations or customizing memory planning based on storage …
    24: …e to run at low latency because of ahead-of-time compilation and memory planning stages, with the …

  getting-started-architecture.md
    64: …cant performance and power overhead. It can be avoided using AOT memory planning, and a static exe…
    67: …planning algorithms. For example, there can be specific layers of memory hierarchy for an embedded…
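The compiler-memory-planning.md snippets above describe allocators that reuse storage between tensors whose lifetimes do not overlap. As a minimal sketch of that idea (not ExecuTorch's actual implementation; `Spec`, `overlaps`, and `plan` are illustrative names), a first-fit planner can place each tensor at the lowest offset that avoids every live, overlapping tensor:

```python
from typing import NamedTuple

class Spec(NamedTuple):
    name: str
    size: int
    lifetime: tuple[int, int]  # indices of the first and last node using the tensor

def overlaps(a: tuple[int, int], b: tuple[int, int]) -> bool:
    # Two closed intervals overlap iff each starts before the other ends.
    return a[0] <= b[1] and b[0] <= a[1]

def plan(specs: list[Spec]) -> dict[str, int]:
    """Assign each tensor a buffer offset, letting tensors whose
    lifetimes never overlap share the same memory (first-fit)."""
    placed: list[tuple[int, Spec]] = []  # (offset, spec) already planned
    offsets: dict[str, int] = {}
    for spec in sorted(specs, key=lambda s: -s.size):
        # Regions occupied by tensors live at the same time as this one.
        busy = sorted(
            (off, off + p.size) for off, p in placed
            if overlaps(p.lifetime, spec.lifetime)
        )
        offset = 0
        for lo, hi in busy:
            if offset + spec.size <= lo:
                break  # found a gap large enough before this region
            offset = max(offset, hi)
        offsets[spec.name] = offset
        placed.append((offset, spec))
    return offsets
```

Here tensors "a" and "b" are never live together, so they share offset 0, while "c" overlaps both and is placed after them; real planners differ in how they order tensors and search for gaps.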
/aosp_15_r20/external/pytorch/torch/distributed/checkpoint/

  storage.py
    76: Perform storage-specific local planning.
    85: A transformed ``SavePlan`` after storage local planning
    91: Perform centralized planning of storage.
    102: A list of transformed ``SavePlan`` after storage global planning
    226: Perform storage-specific local planning.
    235: A transformed ``LoadPlan`` after storage local planning
    241: Perform centralized planning of storage loading.
    252: A list of transformed ``LoadPlan`` after storage global planning

  planner.py
    124: Process the state_dict and produces a `SavePlan` that will be sent for global planning.
    130: This gives each rank a chance to adjust to global planning decisions.
    169: …Using the global planning step to make central decisions that can't be made individually by each r…
    286: Process the state_dict and produces a `LoadPlan` that will be sent for global planning.
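The storage.py and planner.py snippets above describe a two-phase protocol: each rank produces a local `SavePlan` from its own state_dict, then a centralized global step transforms all plans so decisions no single rank can make (such as deduplicating replicated tensors) are made once. A simplified model of that flow, with illustrative names rather than the real `torch.distributed.checkpoint` classes, might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WriteItem:
    fqn: str     # fully qualified name of the object to write
    nbytes: int

@dataclass
class SavePlan:
    items: list[WriteItem]

def create_local_plan(state_dict: dict[str, bytes]) -> SavePlan:
    """Phase 1: each rank proposes what it would write, seeing only its data."""
    return SavePlan([WriteItem(k, len(v)) for k, v in state_dict.items()])

def create_global_plan(local_plans: list[SavePlan]) -> list[SavePlan]:
    """Phase 2: a coordinator makes central decisions. Here, replicated
    items are kept only on the first rank that proposed them; later
    ranks receive a transformed plan with their duplicate dropped."""
    seen: set[str] = set()
    global_plans: list[SavePlan] = []
    for plan in local_plans:
        kept = [it for it in plan.items if it.fqn not in seen]
        seen.update(it.fqn for it in kept)
        global_plans.append(SavePlan(kept))
    return global_plans
```

Each transformed plan is then sent back to its rank, which "adjusts to global planning decisions" by writing only the items it was assigned.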
/aosp_15_r20/external/executorch/docs/source/tutorials_source/

  export-to-executorch-tutorial.py
    19: # transformations, default or user-defined memory planning, and more.
    507: # Running User-Defined Passes and Memory Planning
    512: # backend operator, and a memory planning pass, to tell the runtime how to
    515: # A default memory planning pass is provided, but we can also choose a
    516: # backend-specific memory planning pass if it exists. More information on
    517: # writing a custom memory planning pass can be found
    518: # `here <../compiler-memory-planning.html>`__
    526: memory_planning_pass=MemoryPlanningPass(),  # Default memory planning pass
    540: # This is because between running the backend passes and memory planning passes,
    541: # to prepare the graph for memory planning, an out-variant pass is run on [all …]
/aosp_15_r20/external/googleapis/google/ads/googleads/v14/services/

  reach_plan_service.proto
    113: // The list of locations available for planning.
    145: // Required. The ID of the selected location for planning. To list the
    153: // The list of products available for planning and related targeting metadata.
    305: // `parent_country_id`. Planning for more than `parent_county` is not
    358: // Required. Selected product for planning.
    366: // The value is specified in the selected planning currency_code.
    467: // Selected product for planning. The product codes returned are within the
/aosp_15_r20/external/googleapis/google/datastore/v1/

  query_profile.proto
    38: // metrics from the planning stages.
    41: // query results along with both planning and execution stage metrics.
    47: // Planning phase information for the query.
    56: // Planning phase information for the query.
/aosp_15_r20/external/googleapis/google/firestore/v1/

  query_profile.proto
    39: // metrics from the planning stages.
    42: // query results along with both planning and execution stage metrics.
    48: // Planning phase information for the query.
    57: // Planning phase information for the query.
/aosp_15_r20/external/pytorch/torch/csrc/jit/runtime/static/

  README.md
    70: ## Memory Planning
    115: 1) Out variants: ops that return tensors which we may be able to manage. See "Memory Planning" for …
    208: See the "Memory Planning" section. `MemoryPlanner` is an abstract base class. Each sub-class implem…
    209: memory planning algorithm.
    211: In addition to the memory planning we do for tensors, `MemoryPlanner` encapsulates a few other opti…
/aosp_15_r20/external/googleapis/google/ads/googleads/v15/services/

  reach_plan_service.proto
    115: // The list of locations available for planning.
    147: // Required. The ID of the selected location for planning. To list the
    155: // The list of products available for planning and related targeting metadata.
    310: // `parent_country_id`. Planning for more than `parent_county` is not
    363: // Required. Selected product for planning.
    371: // The value is specified in the selected planning currency_code.
    480: // Selected product for planning. The product codes returned are within the
/aosp_15_r20/external/googleapis/google/ads/googleads/v16/services/

  reach_plan_service.proto
    115: // The list of locations available for planning.
    147: // Required. The ID of the selected location for planning. To list the
    155: // The list of products available for planning and related targeting metadata.
    310: // `parent_country_id`. Planning for more than `parent_county` is not
    363: // Required. Selected product for planning.
    371: // The value is specified in the selected planning currency_code.
    480: // Selected product for planning. The product codes returned are within the
/aosp_15_r20/external/executorch/exir/passes/

  weights_to_outputs_pass.py
    18: …em to the outputs in order to make the weights easier to handle in memory planning and the emitter.
    65: …# Flag these placeholder nodes as having a gradient attached so that memory planning will operate …
    83: …# planning and the emitter. There is no outputkind.Parameter so I am using TOKEN which is currentl…

  memory_planning_pass.py
    38: to control if the memory planning algorithm need allocate memory for
    95: A pass for memory planning. The actual algorithm used will be picked by
/aosp_15_r20/external/executorch/util/

  activation_memory_profiler.py
    96: Validate whether the memory planning has been done on the given program.
    99: …# If there is at least one memory allocation node, then we know the memory planning has been done.
    128: raise ValueError("Executorch program does not have memory planning.")
/aosp_15_r20/external/executorch/backends/vulkan/

  vulkan_preprocess.py
    65: # This is a workaround to allow the memory planning pass to work without
    156: # shapes and memory planning. Until this point, the graph must be ATen compliant
    190: # Finally, apply dynamic shape passes and memory planning pass. These passes
/aosp_15_r20/external/apache-commons-io/src/main/java/org/apache/commons/io/

  CopyUtils.java
    236: // XXX Unless anyone is planning on rewriting OutputStreamWriter, we in copy()
    260: // XXX Unless anyone is planning on rewriting OutputStreamWriter, we in copy()
    307: // XXX Unless anyone is planning on rewriting OutputStreamWriter, we in copy()
    333: // XXX Unless anyone is planning on rewriting OutputStreamWriter, we in copy()
/aosp_15_r20/external/executorch/exir/

  memory_planning.py
    36: Verify if the outcome of a memory planning algorithm makes sense.
    301: …# graph signature is None for memory planning passes not called from EdgeProgramManager, these pat…
    400: # Memory planning should ignore them.
    409: # we skip planning memory for graph input.
    757: # memory planning for submodule need to be aware of the amount of
/aosp_15_r20/external/googleapis/google/cloud/baremetalsolution/v2/

  provisioning.proto
    272: // types](https://cloud.google.com/bare-metal/docs/bms-planning#server_configurations)
    280: // images](https://cloud.google.com/bare-metal/docs/bms-planning#server_configurations)
    429: // https://cloud.google.com/bare-metal/docs/bms-planning.
    529: // https://cloud.google.com/bare-metal/docs/bms-planning.
/aosp_15_r20/packages/modules/AdServices/adservices/apk/assets/classifier/

  topic_id_to_name.csv
    189: 10188 /Finance/Accounting & Auditing/Tax Preparation & Planning
    197: 10196 /Finance/Financial Planning & Management
    198: 10197 /Finance/Financial Planning & Management/Retirement & Pension
    306: 10305 /Jobs & Education/Jobs/Career Resources & Planning
/aosp_15_r20/external/executorch/backends/cadence/aot/

  memory_planning.py
    186: # the memory hierarchy used in memory planning.
    248: # Print two tables with relevant memory planning information
    354: # Create the memory planning pass. We allocate memory for input
/aosp_15_r20/external/tensorflow/tensorflow/lite/

  memory_planner.h
    24: // A MemoryPlanner is responsible for planning and executing a number of
    65: // Dumps the memory planning information against the specified op node

  arena_planner.h
    42: // If dynamic tensors are used the planning steps can be repeated during model
    45: // planning.