Searched full:workloads (Results 1 – 25 of 222) sorted by relevance

/linux-6.14.4/drivers/crypto/intel/qat/
Kconfig
24 for accelerating crypto and compression workloads.
35 for accelerating crypto and compression workloads.
46 for accelerating crypto and compression workloads.
57 for accelerating crypto and compression workloads.
68 for accelerating crypto and compression workloads.
81 Virtual Function for accelerating crypto and compression workloads.
93 Virtual Function for accelerating crypto and compression workloads.
105 Virtual Function for accelerating crypto and compression workloads.
/linux-6.14.4/tools/perf/tests/shell/lib/
perf_metric_validation.py
13 self.workloads = [wl] # multiple workloads possible
24 … \tRelationship rule description: \'{5}\'".format(self.metric, self.collectedValue, self.workloads,
28 \tworkload(s): {1}".format(self.metric, self.workloads)
33 .format(self.metric, self.collectedValue, self.workloads,
47 self.workloads = [x for x in workload.split(",") if x]
48 self.wlidx = 0 # idx of current workloads
202 … [TestError([m], self.workloads[self.wlidx], negmetric[m], 0) for m in negmetric.keys()])
277 …self.errlist.append(TestError([m['Name'] for m in rule['Metrics']], self.workloads[self.wlidx], [],
280 …self.errlist.append(TestError([m['Name'] for m in rule['Metrics']], self.workloads[self.wlidx], [v…
332 self.errlist.extend([TestError([name], self.workloads[self.wlidx], val,
[all …]
/linux-6.14.4/Documentation/driver-api/
dma-buf.rst
292 randomly hangs workloads until the timeout kicks in. Workloads, which from
305 workloads. This also means no implicit fencing for shared buffers in these
327 faults on GPUs are limited to pure compute workloads.
343 - Compute workloads can always be preempted, even when a page fault is pending
346 - DMA fence workloads and workloads which need page fault handling have
349 reservations for DMA fence workloads.
352 hardware resources for DMA fence workloads when they are in-flight. This must
357 all workloads must be flushed from the GPU when switching between jobs
361 made visible anywhere in the system, all compute workloads must be preempted
372 Note that workloads that run on independent hardware like copy engines or other
[all …]
/linux-6.14.4/tools/perf/Documentation/
perf-test.txt
58 Run a built-in workload, to list them use '--list-workloads', current ones include:
68 The datasym and landlock workloads don't accept any.
70 --list-workloads::
71 List the available workloads to use with -w/--workload.
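A minimal usage sketch based on the options quoted above (the workload name is one of the built-ins that '--list-workloads' prints, e.g. the datasym or landlock workloads mentioned in the hit from line 68):

    perf test --list-workloads
    perf test -w datasym
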
/linux-6.14.4/Documentation/gpu/
drm-compute.rst
2 Long running workloads and compute
5 Long running workloads (compute) are workloads that will not complete in 10
7 This means that other techniques need to be used to manage those workloads,
drm-vm-bind-async.rst
103 exec functions. For long-running workloads, such pipelining of a bind
109 operations for long-running workloads will not allow for pipelining
110 anyway since long-running workloads don't allow for dma-fences as
121 deeply pipelined behind other VM_BIND operations and workloads
/linux-6.14.4/Documentation/timers/
no_hz.rst
26 workloads, you will normally -not- want this option.
39 right approach, for example, in heavy workloads with lots of tasks
42 hundreds of microseconds). For these types of workloads, scheduling
56 are running light workloads, you should therefore read the following
118 computationally intensive short-iteration workloads: If any CPU is
228 aggressive real-time workloads, which have the option of disabling
230 some workloads will no doubt want to use adaptive ticks to
232 options for these workloads:
252 workloads, which have few such transitions. Careful benchmarking
253 will be required to determine whether or not other workloads
/linux-6.14.4/Documentation/accel/qaic/
aic100.rst
13 inference workloads. They are AI accelerators.
16 (x8). An individual SoC on a card can have up to 16 NSPs for running workloads.
20 performance. AIC100 cards are multi-user capable and able to execute workloads
82 the processors that run the workloads on AIC100. Each NSP is a Qualcomm Hexagon
85 one workload, AIC100 is limited to 16 concurrent workloads. Workload
93 in and out of workloads. AIC100 has one of these. The DMA Bridge has 16
103 This DDR is used to store workloads, data for the workloads, and is used by the
114 for generic compute workloads.
160 ready to process workloads.
210 | | | | managing workloads. |
[all …]
qaic.rst
158 and detach slice calls allows userspace to use a BO with multiple workloads.
164 client, and multiple clients can each consume one or more DBCs. Workloads
170 workloads. Attempts to access resources assigned to other clients will be
/linux-6.14.4/Documentation/mm/damon/
index.rst
16 of the size of target workloads).
21 their workloads can write personalized applications for better understanding
22 and optimizations of their workloads and systems.
/linux-6.14.4/Documentation/admin-guide/
nvme-multipath.rst
45 2. High Affinity Workloads: Binds I/O processing to the CPU to reduce
57 1. Balanced Workloads: Effective for balanced and predictable workloads with
/linux-6.14.4/drivers/accel/qaic/
Kconfig
15 designed to accelerate Deep Learning inference workloads.
18 for users to submit workloads to the devices.
/linux-6.14.4/drivers/accel/habanalabs/
Kconfig
18 designed to accelerate Deep Learning inference and training workloads.
21 the user to submit workloads to the devices.
/linux-6.14.4/security/
Kconfig.hardening
178 sees a 1% slowdown, other systems and workloads may vary and you
218 your workloads.
239 workloads have measured as high as 7%.
257 synthetic workloads have measured as high as 8%.
277 workloads. Image size growth depends on architecture, and should
/linux-6.14.4/drivers/cpuidle/
Kconfig
33 Some workloads benefit from using it and it generally should be safe
45 Some virtualized workloads benefit from using it.
/linux-6.14.4/Documentation/dev-tools/
autofdo.rst
16 for workloads affected by front-end stalls.
23 characteristics similar to the workloads that are intended to be
34 (1) Sample real workloads using a production environment.
/linux-6.14.4/Documentation/networking/device_drivers/ethernet/intel/
idpf.rst
81 Driver defaults are meant to fit a wide variety of workloads, but if further
89 is tuned for general workloads. The user can customize the interrupt rate
90 control for specific workloads, via ethtool, adjusting the number of
/linux-6.14.4/Documentation/admin-guide/pm/
intel_uncore_frequency_scaling.rst
23 Users may have some latency sensitive workloads where they do not want any
24 change to uncore frequency. Also, users may have workloads which require
123 latency sensitive workloads further tuning can be done by SW to
/linux-6.14.4/Documentation/scheduler/
sched-design-CFS.rst
104 "server" (i.e., good batching) workloads. It defaults to a setting suitable
105 for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
108 base_slice_ns will have little to no impact on the workloads.
116 than the previous vanilla scheduler: both types of workloads are isolated much
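The last hit above notes that SCHED_BATCH is also handled by CFS. A minimal, illustrative userspace sketch (not taken from the kernel documentation) of opting a CPU-bound job into that policy:

    /* Illustrative program: move the calling process into SCHED_BATCH
     * before running CPU-bound batch work. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            /* SCHED_BATCH only accepts static priority 0. */
            struct sched_param sp = { .sched_priority = 0 };

            if (sched_setscheduler(0 /* this process */, SCHED_BATCH, &sp) == -1) {
                    perror("sched_setscheduler");
                    return 1;
            }

            /* ... CPU-bound batch work runs here under the batch policy ... */
            return 0;
    }
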
/linux-6.14.4/Documentation/accounting/
psi.rst
10 When CPU, memory or IO devices are contended, workloads experience
19 such resource crunches and the time impact it has on complex workloads
23 scarcity aids users in sizing workloads to hardware--or provisioning
/linux-6.14.4/tools/perf/tests/
builtin-test.c
141 static struct test_workload *workloads[] = { variable
152 for (unsigned i = 0; i < ARRAY_SIZE(workloads) && ({ workload = workloads[i]; 1; }); i++)
707 …OPT_STRING('w', "workload", &workload, "work", "workload to run for testing, use '--list-workloads in cmd_test()
708 …OPT_BOOLEAN(0, "list-workloads", &list_workloads, "List the available builtin workloads to use wit… in cmd_test()
tests.h
204 * Define test workloads to be used in test suites.
222 /* The list of test workloads */
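The builtin-test.c and tests.h hits above point at a simple registration pattern: test workloads sit in a static array and are selected by name via -w/--workload. A standalone sketch of that pattern, with illustrative names (demo_workload, run_workload, "noploop") rather than perf's real definitions:

    #include <stdio.h>
    #include <string.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct demo_workload {
            const char *name;
            int (*func)(int argc, const char **argv);
    };

    static int noploop(int argc, const char **argv)
    {
            (void)argc;
            (void)argv;
            return 0;               /* a real workload would spin or sleep here */
    }

    static struct demo_workload noploop_workload = { "noploop", noploop };

    /* The registration table: adding a workload means adding a pointer here. */
    static struct demo_workload *demo_workloads[] = {
            &noploop_workload,
    };

    /* Select a workload by name, as -w/--workload does. */
    static int run_workload(const char *name, int argc, const char **argv)
    {
            for (unsigned int i = 0; i < ARRAY_SIZE(demo_workloads); i++)
                    if (!strcmp(demo_workloads[i]->name, name))
                            return demo_workloads[i]->func(argc, argv);

            fprintf(stderr, "workload '%s' not found\n", name);
            return -1;
    }

    int main(void)
    {
            return run_workload("noploop", 0, NULL);
    }
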
/linux-6.14.4/drivers/cpufreq/
Kconfig.x86
191 the CPUs' workloads are. CPU-bound workloads will be more sensitive
193 workloads will be less sensitive -- they will not necessarily perform
/linux-6.14.4/kernel/rcu/
Kconfig
245 real-time workloads. It can also be used to offload RCU
249 workloads will incur significant increases in context-switch
312 eliminates such IPIs for many workloads, proper setting
/linux-6.14.4/drivers/gpu/drm/i915/gvt/
scheduler.c
1055 /* free the unsubmitted workloads in the queues. */ in intel_vgpu_clean_workloads()
1124 * there are pending workloads which are already submitted in complete_current_workload()
1128 * workloads won't be submitted to HW GPU and will be in complete_current_workload()
1330 kmem_cache_destroy(s->workloads); in intel_vgpu_clean_submission()
1422 s->workloads = kmem_cache_create_usercopy("gvt-g_vgpu_workload", in intel_vgpu_setup_submission()
1429 if (!s->workloads) { in intel_vgpu_setup_submission()
1538 kmem_cache_free(s->workloads, workload); in intel_vgpu_destroy_workload()
1547 workload = kmem_cache_zalloc(s->workloads, GFP_KERNEL); in alloc_workload()
1721 kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
1735 kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
[all …]
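The scheduler.c hits above trace a slab-cache lifecycle for vGPU workload objects: the cache is created during submission setup (kmem_cache_create_usercopy()), each workload is allocated with kmem_cache_zalloc(), released with kmem_cache_free(), and the cache is destroyed on cleanup. A minimal kernel-module sketch of that pattern, with illustrative names and plain kmem_cache_create() standing in for the usercopy variant:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/slab.h>

    struct demo_workload {
            int id;
            /* ... per-workload state ... */
    };

    static struct kmem_cache *demo_workload_cache;

    static int __init demo_setup(void)
    {
            struct demo_workload *w;

            /* One cache backs all workload allocations. */
            demo_workload_cache = kmem_cache_create("demo_workload",
                                                    sizeof(struct demo_workload),
                                                    0, SLAB_HWCACHE_ALIGN, NULL);
            if (!demo_workload_cache)
                    return -ENOMEM;

            /* One zeroed object per submitted workload ... */
            w = kmem_cache_zalloc(demo_workload_cache, GFP_KERNEL);
            if (!w) {
                    kmem_cache_destroy(demo_workload_cache);
                    return -ENOMEM;
            }
            w->id = 1;

            /* ... returned to the cache when the workload is destroyed. */
            kmem_cache_free(demo_workload_cache, w);
            return 0;
    }

    static void __exit demo_cleanup(void)
    {
            /* Every object must be freed before the cache itself goes away. */
            kmem_cache_destroy(demo_workload_cache);
    }

    module_init(demo_setup);
    module_exit(demo_cleanup);
    MODULE_LICENSE("GPL");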
