
Searched defs:grad_q (Results 1 – 3 of 3) sorted by relevance

/aosp_15_r20/external/pytorch/aten/src/ATen/native/cpu/FlashAttentionKernel.cpp
    421  const at::Tensor& grad_q, in cpu_flash_attention_backward()
    777  const at::Tensor& grad_q, in flash_attention_backward_kernel_impl()
/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/cuda/attention_backward.cu
    353  at::Tensor grad_q, grad_k, grad_v, grad_bias; in _efficient_attention_backward() local
/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/attention.cpp
    837  auto grad_q = at::zeros(q_t.sizes(), query.options()); in _scaled_dot_product_flash_attention_cpu_backward() local
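All three hits define `grad_q`, the gradient of the attention output with respect to the query tensor, inside scaled-dot-product-attention backward kernels. As a minimal sketch of the math those kernels implement (plain NumPy, not the ATen code above; the function name `attention_grad_q` is hypothetical):

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_grad_q(q, k, v, d_out):
    """Gradient of O = softmax(Q K^T / sqrt(d)) V with respect to Q.

    q: (L, d) queries, k/v: (S, d) keys/values, d_out: (L, d) upstream grad.
    """
    d = q.shape[-1]
    scale = 1.0 / np.sqrt(d)
    s = q @ k.T * scale          # attention scores S
    a = softmax(s)               # attention weights A
    d_a = d_out @ v.T            # dL/dA, since O = A V
    # softmax backward: dS = A * (dA - rowsum(dA * A))
    d_s = a * (d_a - (d_a * a).sum(axis=-1, keepdims=True))
    return d_s @ k * scale       # dL/dQ, since S = Q K^T / sqrt(d)
```

Note that the CPU backward at attention.cpp:837 above first zero-initializes `grad_q` with `at::zeros(q_t.sizes(), query.options())` before the kernel accumulates into it; the fused flash-attention kernels take `grad_q` as a preallocated output argument instead of returning it.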