Searched defs:k_padded (Results 1 – 2 of 2) sorted by relevance
/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/cuda/flash_attn/flash_api.cpp
     421  at::Tensor q_padded, k_padded, v_padded;          in mha_fwd() local
     660  at::Tensor q_padded, k_padded, v_padded;          in mha_varlen_fwd() local
    1409  at::Tensor k, v, k_padded, v_padded;              in mha_fwd_kvcache() local
/aosp_15_r20/external/pytorch/aten/src/ATen/native/transformers/cuda/attention.cu
     878  Tensor output, q_padded, k_padded, v_padded, logsumexp, output_shape,   in _flash_attention_forward() local
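
For context on why these "*_padded" locals exist: the FlashAttention CUDA kernels generally expect the head dimension of q/k/v to be a multiple of 8, so the wrapper code pads the last dimension before launching the kernel and slices the output back afterwards. The snippet below is a minimal standalone sketch of that padding step, assuming ATen is available; the helper name, shapes, and use of at::constant_pad_nd are illustrative assumptions, not the actual code at the lines listed above.

// Hypothetical sketch, not the flash_api.cpp / attention.cu code referenced above.
#include <ATen/ATen.h>

static at::Tensor pad_head_dim_to_multiple_of_8(const at::Tensor& t) {
  const int64_t head_size = t.size(-1);
  const int64_t remainder = head_size % 8;
  if (remainder == 0) {
    return t;  // already aligned, nothing to do
  }
  // Pad only the right side of the last dimension with zeros.
  return at::constant_pad_nd(t, {0, 8 - remainder}, 0);
}

int main() {
  // (batch, seqlen, num_heads, head_size); head_size 61 is not a multiple of 8.
  at::Tensor q = at::randn({2, 128, 8, 61});
  at::Tensor q_padded = pad_head_dim_to_multiple_of_8(q);
  // q_padded now has head_size 64; a real caller would run the attention
  // kernel on the padded tensors and slice the output back to 61 columns.
  return q_padded.size(-1) == 64 ? 0 : 1;
}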