"""
Python implementation of ``__torch_function__``

While most of the torch API and handling for ``__torch_function__`` happens
at the C++ level, some of the torch API is written in Python, so we need
Python-level handling for ``__torch_function__`` overrides as well. The main
developer-facing functions in this file are ``handle_torch_function`` and
``has_torch_function``. See torch/functional.py and test/test_overrides.py
for usage examples.

Note
----
Heavily inspired by NumPy's ``__array_function__`` (see:
https://github.com/pytorch/pytorch/issues/24015 and
https://www.numpy.org/neps/nep-0018-array-function-protocol.html
)

If changing this file in a way that can affect ``__torch_function__`` overhead,
please report the benchmarks in ``benchmarks/overrides_benchmark``. See the
instructions in the ``README.md`` in that directory.
"""

import __future__  # noqa: F404

import collections
import contextlib
import functools
import types
import warnings
from functools import wraps
from typing import Any, Callable, Dict, Iterable, List, Set, Tuple, Type

import torch
from torch._C import (
    _add_docstr,
    _get_function_stack_at,
    _has_torch_function,
    _has_torch_function_unary,
    _has_torch_function_variadic,
    _is_torch_function_mode_enabled,
    _len_torch_function_stack,
    _pop_torch_function_stack,
    _push_on_torch_function_stack,
)


__all__ = [
    "get_ignored_functions",
    "get_overridable_functions",
    "get_testing_overrides",
    "handle_torch_function",
    "has_torch_function",
    "resolve_name",
    "is_tensor_like",
    "is_tensor_method_or_property",
    "wrap_torch_function",
    "enable_reentrant_dispatch",
]


def _disable_user_warnings(
    func: Callable,
    regex: str = ".*is deprecated, please use.*",
    module: str = "torch",
) -> Callable:
    """
    Decorator that temporarily disables ``UserWarning``s for the given ``module``
    if the warning message matches the given ``regex`` pattern.

    Arguments
    ---------
    func : function
        Function to disable the warnings for.
    regex : str
        A regex pattern compilable by ``re.compile``. This is used to match the
        ``UserWarning`` message.
    module : str
        The Python module to which the filtering should be restricted.

    Returns
    -------
    function
        The wrapped function.
    """

    @wraps(func)
    def wrapper(*args, **kwargs):
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore", category=UserWarning, message=regex, module=module
            )
            return func(*args, **kwargs)

    return wrapper


@functools.lru_cache(None)
@_disable_user_warnings
def get_ignored_functions() -> Set[Callable]:
    """
    Return public functions that cannot be overridden by ``__torch_function__``.

    Returns
    -------
    Set[Callable]
        A set of functions that are publicly available in the torch API but cannot
        be overridden with ``__torch_function__``. Mostly this is because none of the
        arguments of these functions are tensors or tensor-likes.

    Examples
    --------
    >>> torch.Tensor.as_subclass in torch.overrides.get_ignored_functions()
    True
    >>> torch.add in torch.overrides.get_ignored_functions()
    False
    """
    Tensor = torch.Tensor
    return {
        torch.typename,
        torch.is_tensor,
        torch.is_storage,
        torch.set_default_tensor_type,
        torch.set_default_device,
        torch.get_default_device,
        torch.set_rng_state,
        torch.get_rng_state,
        torch.manual_seed,
        torch.initial_seed,
        torch.seed,
        torch.save,
        torch.load,
        torch.set_printoptions,
        torch.fork,
        torch.get_default_dtype,
        torch.get_num_interop_threads,
        torch.get_num_threads,
        torch.init_num_threads,
        torch.import_ir_module,
        torch.import_ir_module_from_buffer,
        torch.is_anomaly_enabled,
        torch.is_anomaly_check_nan_enabled,
        torch.is_grad_enabled,
        torch.merge_type_from_type_comment,
        torch.parse_ir,
        torch.parse_schema,
        torch.parse_type_comment,
        torch.set_anomaly_enabled,
        torch.set_flush_denormal,
        torch.set_num_interop_threads,
        torch.set_num_threads,
        torch.wait,
        torch.as_tensor,
        torch.from_numpy,
        torch.get_device,
        torch.tensor,
        torch.default_generator,
        torch.has_cuda,
        torch.has_cudnn,
        torch.has_lapack,
        torch.device,
        torch.dtype,
        torch.finfo,
        torch.has_mkl,
        torch.has_mps,
        torch.has_mkldnn,
        torch.has_openmp,
        torch.iinfo,
        torch.memory_format,
        torch.qscheme,
        torch.set_grad_enabled,
        torch.no_grad,
        torch.enable_grad,
        torch.inference_mode,
        torch.is_inference_mode_enabled,
        torch.layout,
        torch.align_tensors,
        torch.arange,
        torch.as_strided,
        torch.bartlett_window,
        torch.blackman_window,
        torch.broadcast_shapes,
        torch.can_cast,
        torch.compile,
        torch.cudnn_affine_grid_generator,
        torch.cudnn_batch_norm,
        torch.cudnn_convolution,
        torch.cudnn_convolution_transpose,
        torch.cudnn_convolution_relu,
        torch.cudnn_convolution_add_relu,
        torch.cudnn_grid_sampler,
        torch.cudnn_is_acceptable,
        torch.empty,
        torch.empty_permuted,
        torch.empty_strided,
        torch.empty_quantized,
        torch.export.export,
        torch.export.load,
        torch.export.register_dataclass,
        torch.export.save,
        torch.eye,
        torch.fft.fftfreq,
        torch.fft.rfftfreq,
        torch.from_file,
        torch.full,
        torch.fill,
        torch.hamming_window,
        torch.hann_window,
        torch.kaiser_window,
        torch.linspace,
        torch.logspace,
        torch.mkldnn_adaptive_avg_pool2d,
        torch.mkldnn_convolution,
        torch.mkldnn_max_pool2d,
        torch.mkldnn_max_pool3d,
        torch.mkldnn_linear_backward_weights,
        torch.mkldnn_rnn_layer,
        torch.normal,
        torch.ones,
        torch.promote_types,
        torch.rand,
        torch.randn,
        torch.randint,
        torch.randperm,
        torch.range,
        torch.result_type,
        torch.scalar_tensor,
        torch.sparse_coo_tensor,
        torch.sparse_compressed_tensor,
        torch.sparse_csr_tensor,
        torch.sparse_csc_tensor,
        torch.sparse_bsr_tensor,
        torch.sparse_bsc_tensor,
        torch.sym_constrain_range,
        torch.sym_constrain_range_for_size,
        torch.tril_indices,
        torch.triu_indices,
        torch.vander,
        torch.zeros,
        torch._jit_internal.boolean_dispatch,
        torch.nn.functional.assert_int_or_pair,
        torch.nn.functional.upsample,
        torch.nn.functional.upsample_bilinear,
        torch.nn.functional.upsample_nearest,
        torch.nn.functional.has_torch_function,
        torch.nn.functional.has_torch_function_unary,
        torch.nn.functional.has_torch_function_variadic,
        torch.nn.functional.handle_torch_function,
        torch.nn.functional.sigmoid,
        torch.nn.functional.hardsigmoid,
        torch.nn.functional.tanh,
        torch.nn.functional._canonical_mask,
        torch.nn.functional._none_or_dtype,
        # Doesn't actually take or return tensor arguments
        torch.nn.init.calculate_gain,
        # These are deprecated; don't test them
        torch.nn.init.uniform,
        torch.nn.init.normal,
        torch.nn.init.constant,
        torch.nn.init.eye,
        torch.nn.init.dirac,
        torch.nn.init.xavier_uniform,
        torch.nn.init.xavier_normal,
        torch.nn.init.kaiming_uniform,
        torch.nn.init.kaiming_normal,
        torch.nn.init.orthogonal,
        torch.nn.init.sparse,
        torch.nested.to_padded_tensor,
        has_torch_function,
        handle_torch_function,
        torch.set_autocast_enabled,
        torch.is_autocast_enabled,
        torch.set_autocast_dtype,
        torch.get_autocast_dtype,
        torch.clear_autocast_cache,
        torch.set_autocast_cpu_enabled,
        torch.is_autocast_cpu_enabled,
        torch.set_autocast_xla_enabled,
        torch.is_autocast_xla_enabled,
        torch.set_autocast_ipu_enabled,
        torch.is_autocast_ipu_enabled,
        torch.set_autocast_cpu_dtype,
        torch.get_autocast_cpu_dtype,
        torch.set_autocast_ipu_dtype,
        torch.get_autocast_ipu_dtype,
        torch.get_autocast_gpu_dtype,
        torch.set_autocast_gpu_dtype,
        torch.get_autocast_xla_dtype,
        torch.set_autocast_xla_dtype,
        torch.autocast_increment_nesting,
        torch.autocast_decrement_nesting,
        torch.is_autocast_cache_enabled,
        torch.set_autocast_cache_enabled,
        torch.nn.functional.hardswish,
        torch.is_vulkan_available,
        torch.are_deterministic_algorithms_enabled,
        torch.use_deterministic_algorithms,
        torch.is_deterministic_algorithms_warn_only_enabled,
        torch.set_deterministic_debug_mode,
        torch.get_device_module,
        torch.get_deterministic_debug_mode,
        torch.set_float32_matmul_precision,
        torch.get_float32_matmul_precision,
        torch.unify_type_list,
        torch.is_warn_always_enabled,
        torch.set_warn_always,
        torch.vitals_enabled,
        torch.set_vital,
        torch.read_vitals,
        torch.vmap,
        torch.cond,
        torch.frombuffer,
        torch.asarray,
        torch._functional_sym_constrain_range,
        torch._make_dep_token,
        Tensor.__delitem__,
        Tensor.__dir__,
        Tensor.__getattribute__,
        Tensor.__init__,
        Tensor.__iter__,
        Tensor.__init_subclass__,
        Tensor.__delattr__,
        Tensor.__setattr__,
        Tensor.__torch_function__,
        Tensor.__torch_dispatch__,
        Tensor.__new__,
        Tensor.__class__,
        Tensor.__subclasshook__,
        Tensor.__hash__,
        Tensor.as_subclass,
        Tensor.eig,
        Tensor.lstsq,
        Tensor.reinforce,
        Tensor.new,
        Tensor.new_tensor,
        Tensor.new_empty,
        Tensor.new_empty_strided,
        Tensor.new_zeros,
        Tensor.new_ones,
        Tensor.new_full,
        Tensor._make_subclass,
        Tensor.solve,
        Tensor.symeig,
        Tensor.stride,
        Tensor.unflatten,
        Tensor.to_sparse_coo,
        Tensor.to_sparse_csr,
        Tensor.to_sparse_csc,
        Tensor.to_sparse_bsr,
        Tensor.to_sparse_bsc,
        Tensor._to_sparse,
        Tensor._to_sparse_csr,
        Tensor._to_sparse_csc,
        Tensor._to_sparse_bsr,
        Tensor._to_sparse_bsc,
        Tensor._typed_storage,
        Tensor._reduce_ex_internal,
        Tensor._fix_weakref,
        Tensor._view_func,
        Tensor._view_func_unsafe,
        Tensor._rev_view_func_unsafe,
        Tensor._make_wrapper_subclass,
        Tensor._python_dispatch.__get__,
        Tensor._has_symbolic_sizes_strides.__get__,
        Tensor._conj,
        Tensor._conj_physical,
        Tensor._lazy_clone,
        Tensor._neg_view,
        Tensor._is_zerotensor,
        Tensor._is_all_true,
        Tensor._is_any_true,
        Tensor._addmm_activation,
        Tensor.to_padded_tensor,
        Tensor._use_count,
    }


@functools.lru_cache(None)
def get_default_nowrap_functions() -> Set[Callable]:
    """
    Return public functions that do not wrap in a subclass when invoked by
    the default ``Tensor.__torch_function__`` that preserves subclasses. Typically,
    these functions represent field accesses (i.e., retrieving a Tensor that
    is stored somewhere on the Tensor) as opposed to computation. Users of
    these functions expect object identity to be preserved over multiple accesses
    (e.g., ``a.grad is a.grad``) which cannot be upheld if we're wrapping on
    the fly every time (furthermore, the tensor stored here might already be
    the subclass, in which case wrapping really ought not to happen).

    Not ALL property accessors have this property; for example ``Tensor.T`` actually
    just creates a new transposed tensor on the fly, and so we SHOULD interpose on
    these calls (you need to check the implementation of the function to see if
    this is the case or not). Additionally, if a property accessor doesn't return a Tensor,
    it doesn't have to be on this list (though it is harmless if it is).
    """
    Tensor = torch.Tensor
    return {
        Tensor._base.__get__,
        Tensor.grad.__get__,
        Tensor._grad.__get__,
    }


@functools.lru_cache(None)
@_disable_user_warnings
def get_testing_overrides() -> Dict[Callable, Callable]:
    """Return a dict containing dummy overrides for all overridable functions.

    Returns
    -------
    Dict[Callable, Callable]
        A dictionary that maps overridable functions in the PyTorch API to
        lambda functions that have the same signature as the real function
        and unconditionally return -1. These lambda functions are useful
        for testing API coverage for a type that defines ``__torch_function__``.

    Examples
    --------
    >>> import inspect
    >>> my_add = torch.overrides.get_testing_overrides()[torch.add]
    >>> inspect.signature(my_add)
    <Signature (input, other, out=None)>
    """
    # Every function in the PyTorch API that can be overridden needs an entry
    # in this dict.
    #
    # Optimally we would use inspect to get the function signature and define
    # the lambda function procedurally but that is blocked by generating
    # function signatures for native kernels that can be consumed by inspect.
    # See Issue #28233.
    Tensor = torch.Tensor
    ret: Dict[Callable, Callable] = {
        torch.abs: lambda input, out=None: -1,
        torch.absolute: lambda input, out=None: -1,
        torch.adaptive_avg_pool1d: lambda input, output_size: -1,
        torch.adaptive_max_pool1d: lambda input, output_size: -1,
        torch.acos: lambda input, out=None: -1,
        torch.adjoint: lambda input: -1,
        torch.arccos: lambda input, out=None: -1,
        torch.acosh: lambda input, out=None: -1,
        torch.arccosh: lambda input, out=None: -1,
        torch.add: lambda input, other, out=None: -1,
        torch.addbmm: lambda input, batch1, batch2, alpha=1, beta=1, out=None: -1,
        torch.addcdiv: lambda input, tensor1, tensor2, value=1, out=None: -1,
        torch.addcmul: lambda input, tensor1, tensor2, value=1, out=None: -1,
        torch.addmm: lambda input, mat1, mat2, beta=1, alpha=1, out=None: -1,
        torch.addmv: lambda input, mat, vec, beta=1, alpha=1, out=None: -1,
        torch.addr: lambda input, vec1, vec2, beta=1, alpha=1, out=None: -1,
        torch.affine_grid_generator: lambda theta, size, align_corners: -1,
        torch.all: lambda input, dim=None: -1,
        torch.allclose: lambda input, other, rtol=1e-05, atol=1e-08, equal_nan=False: -1,
        torch.alpha_dropout: lambda input, p, train, inplace=False: -1,
        torch.amax: lambda input, dim=None: -1,
        torch.amin: lambda input, dim=None: -1,
        torch.aminmax: lambda input, dim=None, keepdim=False, out=None: -1,
        torch.angle: lambda input, out=None: -1,
        torch.any: lambda input, dim=None, keepdim=False, out=None: -1,
        torch.argmax: lambda input: -1,
        torch.argmin: lambda input: -1,
        torch.argsort: lambda input, dim=None: -1,
        torch.asin: lambda input, out=None: -1,
        torch._assert_async: lambda input, msg: -1,
        torch.arcsin: lambda input, out=None: -1,
        torch.asinh: lambda input, out=None: -1,
        torch.arcsinh: lambda input, out=None: -1,
        torch.atan: lambda input, out=None: -1,
        torch.arctan: lambda input, out=None: -1,
        torch.atan2: lambda input, other, out=None: -1,
        torch.arctan2: lambda input, other, out=None: -1,
        torch.atanh: lambda input, out=None: -1,
        torch.arctanh: lambda input, out=None: -1,
        torch.atleast_1d: lambda *tensors: -1,
        torch.atleast_2d: lambda *tensors: -1,
        torch.atleast_3d: lambda *tensors: -1,
        torch.avg_pool1d: lambda input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True: -1,
        torch.baddbmm: lambda input, batch1, batch2, alpha=1, beta=1, out=None: -1,
        torch.batch_norm: lambda input, weight, bias, running_mean, running_var, training, momentum, eps, cudnn_enabled: -1,
        torch.batch_norm_backward_elemt: lambda grad_out, input, mean, invstd, weight, sum_dy, sum_dy_xmu, count_tensor: -1,
        torch.batch_norm_backward_reduce: lambda grad_out, input, mean, invstd, weight, input_g, weight_g, bias_g: -1,
        torch.batch_norm_elemt: lambda input, weight, bias, mean, invstd, eps: -1,
        torch.batch_norm_gather_stats: lambda input, mean, invstd, running_mean, running_var, momentum, eps, count: -1,
        torch.batch_norm_gather_stats_with_counts: lambda input, mean, invstd, running_mean, running_var, momentum, eps, count: -1,
        torch.batch_norm_stats: lambda input, eps: -1,
        torch.batch_norm_update_stats: lambda input, running_mean, running_var, momentum: -1,
        torch.bernoulli: lambda input, generator=None, out=None: -1,
        torch.bilinear: lambda input1, input2, weight, bias: -1,
        torch.binary_cross_entropy_with_logits: (
            lambda input, target, weight=None, size_average=None, reduce=None, reduction="mean", pos_weight=None: -1
        ),
        torch.bincount: lambda input, weights=None, minlength=0: -1,
        torch.binomial: lambda count, prob, generator=None: -1,
        torch.bitwise_and: lambda input, other, out=None: -1,
        torch.bitwise_not: lambda input, out=None: -1,
        torch.bitwise_or: lambda input, other, out=None: -1,
        torch.bitwise_xor: lambda input, other, out=None: -1,
        torch.bitwise_left_shift: lambda input, other, out=None: -1,
        torch.bitwise_right_shift: lambda input, other, out=None: -1,
        torch.block_diag: lambda *tensors: -1,
        torch.bmm: lambda input, mat2, out=None: -1,
        torch.broadcast_tensors: lambda *tensors: -1,
        torch.broadcast_to: lambda self, size: -1,
        torch.bucketize: lambda input, boundaries, out_int32=False, right=False, out=None: -1,
        torch.cartesian_prod: lambda *tensors: -1,
        torch.cat: lambda tensors, dim=0, out=None: -1,
        torch.concat: lambda tensors, dim=0, out=None: -1,  # alias for torch.cat
        torch.concatenate: lambda tensors, dim=0, out=None: -1,  # alias for torch.cat
        torch.cdist: lambda x1, x2, p=2.0, compute_mode="use_mm_for_euclid_dist_if_necessary": -1,
        torch.ceil: lambda input, out=None: -1,
        torch.celu: lambda input, alpha=1.0, inplace=False: -1,
        torch.chain_matmul: lambda *matrices, out=None: -1,
        torch.channel_shuffle: lambda input, groups: -1,
        torch.cholesky: lambda input, upper=False, out=None: -1,
        torch.linalg.cholesky: lambda input, out=None: -1,
        torch.linalg.cholesky_ex: lambda input, check_errors=False, out=None: -1,
        torch.cholesky_inverse: lambda input, upper=False, out=None: -1,
        torch.cholesky_solve: lambda input1, input2, upper=False, out=None: -1,
        torch.choose_qparams_optimized: lambda input, numel, n_bins, ratio, bit_width: -1,
        torch.chunk: lambda input, chunks, dim=0: -1,
        torch.clamp: lambda input, min=None, max=None, out=None: -1,
        torch.clip: lambda input, min=None, max=None, out=None: -1,
        torch.clamp_min: lambda input, min, out=None: -1,
        torch.clamp_max: lambda input, max, out=None: -1,
        torch.column_stack: lambda tensors, out=None: -1,
        torch.cov: lambda input, correction=1, fweights=None, aweights=None: -1,
        torch.clone: lambda input: -1,
        torch.combinations: lambda input, r=2, with_replacement=False: -1,
        torch.complex: lambda real, imag: -1,
        torch.copysign: lambda input, other, out=None: -1,
        torch.polar: lambda abs, ang: -1,
        torch.linalg.cond: lambda input, ord=None: -1,
        torch.conj: lambda input, out=None: -1,
        torch.conj_physical: lambda input, out=None: -1,
        torch.resolve_conj: lambda input, out=None: -1,
        torch.resolve_neg: lambda input, out=None: -1,
        torch.constant_pad_nd: lambda input, pad, value=0: -1,
        torch.conv1d: lambda input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1: -1,
        torch.conv2d: lambda input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1: -1,
        torch.conv3d: lambda input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1: -1,
        torch.convolution: lambda input, weight, bias, stride, padding, dilation, transposed, output_padding, groups: -1,
        torch.conv_tbc: lambda input, weight, bias, pad=0: -1,
        torch.conv_transpose1d: lambda input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1: -1,
        torch.conv_transpose2d: lambda input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1: -1,
        torch.conv_transpose3d: lambda input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1: -1,
        torch.corrcoef: lambda input: -1,
        torch.cos: lambda input, out=None: -1,
        torch.cosine_embedding_loss: lambda input1, input2, target, margin=0, size_average=None, reduce=None, reduction="mean": -1,
        torch.cosh: lambda input, out=None: -1,
        torch.cosine_similarity: lambda x1, x2, dim=1, eps=1e-8: -1,
        torch.count_nonzero: lambda input: -1,
        torch.cross: lambda input, other, dim=None, out=None: -1,
        torch.linalg.cross: lambda input, other, dim=-1, out=None: -1,
        torch.ctc_loss: (
            lambda log_probs, targets, input_lengths, target_lengths, blank=0, reduction="mean", zero_infinity=False: -1
        ),
        torch.cummax: lambda input, dim, out=None: -1,
        torch.cummin: lambda input, dim, out=None: -1,
        torch.cumprod: lambda input, dim, out=None, dtype=None: -1,
        torch.cumsum: lambda input, dim, out=None, dtype=None: -1,
        torch.cumulative_trapezoid: lambda y, x=None, dim=-1: -1,
        torch.logcumsumexp: lambda input, dim, out=None: -1,
        torch.deg2rad: lambda input, out=None: -1,
        torch.dequantize: lambda input: -1,
        torch.det: lambda input: -1,
        torch.linalg.det: lambda input: -1,  # alias for torch.det  # type: ignore[attr-defined]
        torch.detach: lambda input: -1,
        torch.diag: lambda input, diagonal=0, out=None: -1,
        torch.diag_embed: lambda input, diagonal=0, out=None: -1,
        torch.diagflat: lambda input, offset=0: -1,
        torch.diff: lambda input, n=1, dim=-1, prepend=None, append=None, out=None: -1,
        torch.diagonal: lambda input, offset=0, dim1=0, dim2=1: -1,
        torch.linalg.diagonal: lambda input, offset=0, dim1=-2, dim2=-1: -1,
        torch.diagonal_scatter: lambda input, src, offset=0, dim1=0, dim2=1: -1,
        torch.as_strided_scatter: lambda self, src, size, stride, storage_offset=None: -1,
        torch.digamma: lambda input, out=None: -1,
        torch.dist: lambda input, other, p=2: -1,
        torch.div: lambda input, other, rounding_mode=None, out=None: -1,
        torch.divide: lambda input, other, rounding_mode=None, out=None: -1,
        torch.dot: lambda input, other, out=None: -1,
        torch.dropout: lambda input, p, train, inplace=False: -1,
        torch.dsmm: lambda input, mat2: -1,
        torch.hsmm: lambda mat1, mat2: -1,
        torch.dsplit: lambda input, indices_or_sections: -1,
        torch.dstack: lambda tensors, out=None: -1,
        torch.linalg.eig: lambda input, out=None: -1,
        torch.linalg.eigvals: lambda input, out=None: -1,
        torch.linalg.eigh: lambda input, UPLO="L", out=None: -1,
        torch.linalg.eigvalsh: lambda input, UPLO="L", out=None: -1,
        torch.einsum: lambda equation, *operands: -1,
        torch.embedding: (
            lambda input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False: -1  # noqa: B950
        ),
        torch.embedding_bag: (
            lambda input, weight, offsets, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode="mean", sparse=False, per_sample_weights=None, padding_idx=None: -1  # noqa: B950
        ),
        torch.empty_like: lambda input, dtype=None, layout=None, device=None, requires_grad=False: -1,
        torch.eq: lambda input, other, out=None: -1,
        torch.equal: lambda input, other: -1,
        torch.erf: lambda input, out=None: -1,
        torch.erfc: lambda input, out=None: -1,
        torch.erfinv: lambda input, out=None: -1,
        torch.exp: lambda input, out=None: -1,
        torch.exp2: lambda input, out=None: -1,
        torch.expm1: lambda input, out=None: -1,
        torch.fake_quantize_per_channel_affine: lambda input, scale, zero_point, axis, quant_min, quant_max: -1,
        torch.fake_quantize_per_tensor_affine: lambda input, scale, zero_point, quant_min, quant_max: -1,
        torch.fused_moving_avg_obs_fake_quant: (
            lambda x, observer_on, fake_quant_on, averaging_const, running_min, running_max, scale, zero_point, quant_min, quant_max, ch_axis, per_row_fake_quant=False, symmetric_quant=False: -1  # noqa: B950
        ),
        torch.fbgemm_linear_fp16_weight: lambda input, packed_weight, bias: -1,
        torch.fbgemm_linear_fp16_weight_fp32_activation: lambda input, packed_weight, bias: -1,
        torch.fbgemm_linear_int8_weight: lambda input, weight, packed, col_offsets, weight_scale, weight_zero_point, bias: -1,  # noqa: B950
        torch.fbgemm_linear_int8_weight_fp32_activation: (
            lambda input, weight, packed, col_offsets, weight_scale, weight_zero_point, bias: -1
        ),
        torch.fbgemm_linear_quantize_weight: lambda input: -1,
        torch.fbgemm_pack_gemm_matrix_fp16: lambda input: -1,
        torch.fbgemm_pack_quantized_matrix: lambda input, a, b: -1,
        torch.feature_alpha_dropout: lambda input, p, train: -1,
        torch.feature_dropout: lambda input, p, train: -1,
        torch.fft.ifft: lambda input, n=None, dim=-1, norm=None: -1,
        torch.fft.rfft: lambda input, n=None, dim=-1, norm=None: -1,
        torch.fft.irfft: lambda input, n=None, dim=-1, norm=None: -1,
        torch.fft.hfft: lambda input, n=None, dim=-1, norm=None: -1,
        torch.fft.ihfft: lambda input, n=None, dim=-1, norm=None: -1,
        torch.fft.hfft2: lambda input, s=None, dim=(-2, -1), norm=None: -1,
        torch.fft.ihfft2: lambda input, s=None, dim=(-2, -1), norm=None: -1,
        torch.fft.hfftn: lambda input, s=None, dim=-1, norm=None: -1,
        torch.fft.ihfftn: lambda input, s=None, dim=-1, norm=None: -1,
        torch.fft.fftn: lambda input, s=None, dim=None, norm=None: -1,
        torch.fft.ifftn: lambda input, s=None, dim=None, norm=None: -1,
        torch.fft.rfftn: lambda input, s=None, dim=None, norm=None: -1,
        torch.fft.irfftn: lambda input, s=None, dim=None, norm=None: -1,
        torch.fft.fft2: lambda input, s=None, dim=(-2, -1), norm=None: -1,
        torch.fft.ifft2: lambda input, s=None, dim=(-2, -1), norm=None: -1,
        torch.fft.rfft2: lambda input, s=None, dim=(-2, -1), norm=None: -1,
        torch.fft.irfft2: lambda input, s=None, dim=(-2, -1), norm=None: -1,
        torch.fft.fftshift: lambda input, dim=None: -1,
        torch.fft.ifftshift: lambda input, dim=None: -1,
        torch.fft.fft: lambda input, n=None, dim=-1, norm=None: -1,
        torch.fix: lambda input, out=None: -1,
        torch.flatten: lambda input, start_dim=0, end_dim=-1: -1,
        torch.flip: lambda input, dims: -1,
        torch.fliplr: lambda input: -1,
        torch.flipud: lambda input: -1,
        torch.frobenius_norm: lambda input, dim=None, keepdim=False, out=None: -1,
        torch.floor: lambda input, out=None: -1,
        torch.floor_divide: lambda input, other: -1,
        torch.float_power: lambda input, exponent, out=None: -1,
        torch.fmod: lambda input, other, out=None: -1,
        torch.frac: lambda input, out=None: -1,
        torch.frexp: lambda input, out=None: -1,
        torch.full_like: lambda input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False: -1,  # noqa: B950
        torch._functional_assert_async: lambda input, msg, dep_token: -1,
        torch.lu_unpack: lambda LU_data, LU_pivots, unpack_data=True, unpack_pivots=True: -1,
        torch.gather: lambda input, dim, index, out=None, sparse_grad=False: -1,
        torch.gcd: lambda input, other, out=None: -1,
        torch.ge: lambda input, other, out=None: -1,
        torch.greater_equal: lambda input, other, out=None: -1,
        torch.geqrf: lambda input, out=None: -1,
        torch.i0: lambda input, out=None: -1,
        torch.inner: lambda input, other, out=None: -1,
        torch.outer: lambda input, vec2, out=None: -1,
        torch.ger: lambda input, vec2, out=None: -1,  # alias for torch.outer
        torch.gradient: lambda input, spacing=None, dim=None, edge_order=1: -1,
        torch.grid_sampler: lambda input, grid, interpolation_mode, padding_mode, align_corners: -1,
        torch.grid_sampler_2d: lambda input, grid, interpolation_mode, padding_mode, align_corners: -1,
        torch.grid_sampler_3d: lambda input, grid, interpolation_mode, padding_mode, align_corners: -1,
        torch.group_norm: lambda input, num_groups, weight=None, bias=None, eps=1e-05, cudnn_enabled=True: -1,
        torch.gru: lambda input, hx, params, has_biases, num_layers, dropout, train, bidirectional, batch_first: -1,
        torch.gru_cell: lambda input, hx, w_ih, w_hh, b_ih=None, b_hh=None: -1,
        torch.gt: lambda input, other, out=None: -1,
        torch.greater: lambda input, other, out=None: -1,
        torch.hardshrink: lambda input, lambd=0.5: -1,
        torch.heaviside: lambda input, values, out=None: -1,
        torch.hinge_embedding_loss: lambda input, target, margin=1.0, size_average=None, reduce=None, reduction="mean": -1,  # noqa: B950
        torch.histc: lambda input, bins=100, min=0, max=0, out=None: -1,
        torch.histogram: lambda input, bins=100, min=None, max=None, weight=None, density=False, out=None: -1,
        torch.histogramdd: lambda input, bins, range=None, weight=None, density=False: -1,
        torch.linalg.householder_product: lambda input, tau: -1,
        torch.hspmm: lambda mat1, mat2, out=None: -1,
        torch.hsplit: lambda input, indices_or_sections: -1,
        torch.hstack: lambda tensors, out=None: -1,
        torch.hypot: lambda input, other, out=None: -1,
        torch.igamma: lambda input, other, out=None: -1,
        torch.igammac: lambda input, other, out=None: -1,
        torch.imag: lambda input, out=None: -1,
        torch.index_add: lambda input, dim, index, source: -1,
        torch.index_copy: lambda input, dim, index, source: -1,
        torch.index_put: lambda input, indices, values, accumulate=False: -1,
        torch.index_select: lambda input, dim, index, out=None: -1,
        torch.index_fill: lambda input, dim, index, value: -1,
        torch.index_reduce: lambda input, dim, index, source, reduce, include_input=True: -1,
        torch.isfinite: lambda tensor: -1,
        torch.isin: lambda e, te, assume_unique=False, invert=False: -1,
        torch.isinf: lambda tensor: -1,
        torch.isreal: lambda tensor: -1,
        torch.isposinf: lambda input, out=None: -1,
        torch.isneginf: lambda input, out=None: -1,
        torch.instance_norm: (
            lambda input, running_mean, running_var, weight, bias, use_input_stats, momentum, eps, cudnn_enabled: -1
        ),
        torch.int_repr: lambda input: -1,
        torch.inverse: lambda input, out=None: -1,
        torch.linalg.inv: lambda input, out=None: -1,
        torch.linalg.inv_ex: lambda input, check_errors=False, out=None: -1,
        torch.is_complex: lambda input: -1,
        torch.is_conj: lambda input: -1,
        torch.is_neg: lambda input: -1,
        torch.is_distributed: lambda input: -1,
        torch.is_inference: lambda input: -1,
        torch.is_floating_point: lambda input: -1,
        torch.is_nonzero: lambda input: -1,
        torch.is_same_size: lambda input, other: -1,
        torch.is_signed: lambda input: -1,
        torch.isclose: lambda input, other, rtol=1e-05, atol=1e-08, equal_nan=False: -1,
        torch.isnan: lambda input: -1,
        torch.istft: (
            lambda input, n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False: -1  # noqa: B950
        ),
        torch.kl_div: lambda input, target, size_average=None, reduce=None, reduction="mean", log_target=False: -1,
        torch.kron: lambda input, other: -1,
        torch.kthvalue: lambda input, k, dim=None, keepdim=False, out=None: -1,
        torch.linalg.ldl_factor_ex: lambda input, hermitian=False, check_errors=False, out=None: -1,
        torch.linalg.ldl_factor: lambda input, hermitian=False, out=None: -1,
        torch.linalg.ldl_solve: lambda LD, pivots, B, hermitian=False, out=None: -1,
        torch.layer_norm: lambda input, normalized_shape, weight=None, bias=None, eps=1e-05, cudnn_enabled=True: -1,
        torch.lcm: lambda input, other, out=None: -1,
        torch.ldexp: lambda input, other, out=None: -1,
        torch.le: lambda input, other, out=None: -1,
        torch.less_equal: lambda input, other, out=None: -1,
        torch.lerp: lambda input, end, weight, out=None: -1,
        torch.lgamma: lambda input, out=None: -1,
        torch.lobpcg: lambda input, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None, tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None: -1,  # noqa: B950
        torch.log: lambda input, out=None: -1,
        torch.log_softmax: lambda input, dim, dtype=None: -1,
        torch.log10: lambda input, out=None: -1,
        torch.log1p: lambda input, out=None: -1,
        torch.log2: lambda input, out=None: -1,
        torch.logaddexp: lambda input, other, out=None: -1,
        torch.logaddexp2: lambda input, other, out=None: -1,
        torch.logdet: lambda input: -1,
        torch.xlogy: lambda x, y, out=None: -1,
        torch.logical_and: lambda input, other, out=None: -1,
        torch.logical_not: lambda input, out=None: -1,
        torch.logical_or: lambda input, other, out=None: -1,
        torch.logical_xor: lambda input, other, out=None: -1,
        torch.logit: lambda input, eps=None: -1,
        torch.logsumexp: lambda input, names, keepdim=False, out=None: -1,
        torch.lstm: lambda data, batch_sizes, hx, params, has_biases, num_layers, dropout, train, bidirectional: -1,
        torch.lstm_cell: lambda input, hx, w_ih, w_hh, b_ih=None, b_hh=None: -1,
        torch.lt: lambda input, other, out=None: -1,
        torch.less: lambda input, other, out=None: -1,
        torch.lu: lambda A, pivot=True, get_infos=False, out=None: -1,
        torch.lu_solve: lambda b, LU_data, LU_pivots, out=None: -1,
        torch.margin_ranking_loss: lambda input1, input2, target, margin=0, size_average=None, reduce=None, reduction="mean": -1,  # type: ignore[attr-defined]  # noqa: B950
        torch.masked_fill: lambda input, mask, value: -1,
        torch.masked_scatter: lambda input, mask, source: -1,
        torch.masked_select: lambda input, mask, out=None: -1,
        torch.matmul: lambda input, other, out=None: -1,
        torch.linalg.lu: lambda input, pivot=True, out=None: -1,
        torch.linalg.lu_factor: lambda input, pivot=True, out=None: -1,
        torch.linalg.lu_factor_ex: lambda input, pivot=True, check_errors=False, out=None: -1,
        torch.linalg.lu_solve: lambda LU, pivots, B, left=True, adjoint=False, out=None: -1,
        torch.linalg.matmul: lambda input, other, out=None: -1,  # alias for torch.matmul
        torch.matrix_power: lambda input, n: -1,
        torch.linalg.matrix_power: lambda input, n, out=None: -1,
        torch.linalg.matrix_rank: lambda input, tol=None, hermitian=False: -1,
        torch.linalg.multi_dot: lambda tensors, out=None: -1,
        torch.matrix_exp: lambda input: -1,
        torch.linalg.matrix_exp: lambda input: -1,
        torch.max: lambda input, out=None: -1,
        torch.maximum: lambda input, other, out=None: -1,
        torch.fmax: lambda input, other, out=None: -1,
        torch.max_pool1d: lambda input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False: -1,
        torch.max_pool2d: lambda input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False: -1,
        torch.max_pool3d: lambda input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False: -1,
        torch.max_pool1d_with_indices: (
            lambda input, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False: -1
        ),
        torch.mean: lambda input, dim=None: -1,
        torch.nanmean: lambda input, dim=None, keepdim=False, dtype=None, out=None: -1,
        torch.median: lambda input, dim=None: -1,
        torch.nanmedian: lambda input, dim=None: -1,
        torch.meshgrid: lambda *tensors, **kwargs: -1,
        torch.min: lambda input, out=None: -1,
        torch.minimum: lambda input, other, out=None: -1,
        torch.fmin: lambda input, other, out=None: -1,
        torch.miopen_batch_norm: (
            lambda input, weight, bias, running_mean, running_var, training, exponential_average_factor, epsilon: -1
        ),
        torch.miopen_convolution: lambda input, weight, bias, padding, stride, dilation, groups, benchmark, deterministic: -1,  # noqa: B950
        torch.miopen_convolution_add_relu: lambda input, weight, z, alpha, bias, stride, padding, dilation, groups: -1,
        torch.miopen_convolution_relu: lambda input, weight, bias, stride, padding, dilation, groups: -1,
torch.miopen_convolution_transpose: ( 793*da0073e9SAndroid Build Coastguard Worker lambda input, weight, bias, padding, output_padding, stride, dilation, groups, benchmark, deterministic: -1 794*da0073e9SAndroid Build Coastguard Worker ), 795*da0073e9SAndroid Build Coastguard Worker torch.miopen_depthwise_convolution: ( 796*da0073e9SAndroid Build Coastguard Worker lambda input, weight, bias, padding, stride, dilation, groups, benchmark, deterministic: -1 797*da0073e9SAndroid Build Coastguard Worker ), 798*da0073e9SAndroid Build Coastguard Worker torch.miopen_rnn: ( 799*da0073e9SAndroid Build Coastguard Worker lambda input, weight, weight_stride0, hx, cx, mode, hidden_size, num_layers, batch_first, dropout, train, bidirectional, batch_sizes, dropout_state: -1 # noqa: B950 800*da0073e9SAndroid Build Coastguard Worker ), 801*da0073e9SAndroid Build Coastguard Worker torch.mm: lambda input, mat2, out=None: -1, 802*da0073e9SAndroid Build Coastguard Worker torch.mode: lambda input, dim=-1, keepdim=False, out=None: -1, 803*da0073e9SAndroid Build Coastguard Worker torch.movedim: lambda input, source, destination: -1, 804*da0073e9SAndroid Build Coastguard Worker torch.moveaxis: lambda input, source, destination: -1, 805*da0073e9SAndroid Build Coastguard Worker torch.msort: lambda input, descending=False, out=None: -1, 806*da0073e9SAndroid Build Coastguard Worker torch.mul: lambda input, other, out=None: -1, 807*da0073e9SAndroid Build Coastguard Worker torch.multiply: lambda input, other, out=None: -1, 808*da0073e9SAndroid Build Coastguard Worker torch.multinomial: lambda input, num_samples, replacement=False, out=None: -1, 809*da0073e9SAndroid Build Coastguard Worker torch.mv: lambda input, vec, out=None: -1, 810*da0073e9SAndroid Build Coastguard Worker torch.mvlgamma: lambda input, p: -1, 811*da0073e9SAndroid Build Coastguard Worker torch.narrow: lambda input, dim, start, length: -1, 812*da0073e9SAndroid Build Coastguard Worker torch.nan_to_num: lambda input, nan=0.0, 
posinf=None, neginf=None, out=None: -1, 813*da0073e9SAndroid Build Coastguard Worker torch.native_batch_norm: lambda input, weight, bias, running_mean, running_var, training, momentum, eps: -1, 814*da0073e9SAndroid Build Coastguard Worker torch._native_batch_norm_legit: lambda input, weight, bias, training, momentum, eps: -1, 815*da0073e9SAndroid Build Coastguard Worker torch.native_dropout: lambda input, p, train: -1, 816*da0073e9SAndroid Build Coastguard Worker torch.native_layer_norm: lambda input, normalized_shape, weight=None, bias=None, eps=1e-05: -1, 817*da0073e9SAndroid Build Coastguard Worker torch.native_group_norm: lambda input, weight, bias, N, C, HxW, group, eps: -1, 818*da0073e9SAndroid Build Coastguard Worker torch.native_norm: lambda input, p=2, dim=None, keepdim=False, dtype=None: -1, 819*da0073e9SAndroid Build Coastguard Worker torch.native_channel_shuffle: lambda input, groups: -1, 820*da0073e9SAndroid Build Coastguard Worker torch.ne: lambda input, other, out=None: -1, 821*da0073e9SAndroid Build Coastguard Worker torch.not_equal: lambda input, other, out=None: -1, 822*da0073e9SAndroid Build Coastguard Worker torch.neg: lambda input, out=None: -1, 823*da0073e9SAndroid Build Coastguard Worker torch.negative: lambda input, out=None: -1, 824*da0073e9SAndroid Build Coastguard Worker torch.nextafter: lambda input, other, out=None: -1, 825*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_avg_pool2d: lambda input, output_size: -1, 826*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_avg_pool3d: lambda input, output_size: -1, 827*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_max_pool1d: lambda input, output_size, return_indices=False: -1, 828*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_max_pool1d_with_indices: lambda input, output_size, return_indices=False: -1, 829*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_max_pool2d: lambda input, 
output_size, return_indices=False: -1, 830*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_max_pool2d_with_indices: lambda input, output_size, return_indices=False: -1, 831*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_max_pool3d: lambda input, output_size, return_indices=False: -1, 832*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.adaptive_max_pool3d_with_indices: lambda input, output_size, return_indices=False: -1, 833*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.affine_grid: lambda theta, size, align_corners=None: -1, 834*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.alpha_dropout: lambda input, p=0.5, training=False, inplace=False: -1, 835*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.avg_pool2d: ( 836*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None: -1 # noqa: B950 837*da0073e9SAndroid Build Coastguard Worker ), 838*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.avg_pool3d: ( 839*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None: -1 # noqa: B950 840*da0073e9SAndroid Build Coastguard Worker ), 841*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.batch_norm: ( 842*da0073e9SAndroid Build Coastguard Worker lambda input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05: -1 843*da0073e9SAndroid Build Coastguard Worker ), 844*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.bilinear: lambda input1, input2, weight, bias=None: -1, 845*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.binary_cross_entropy: ( 846*da0073e9SAndroid Build Coastguard Worker lambda input, target, weight=None, size_average=None, reduce=None, reduction="mean": -1 847*da0073e9SAndroid Build 
Coastguard Worker ), 848*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.binary_cross_entropy_with_logits: ( 849*da0073e9SAndroid Build Coastguard Worker lambda input, target, weight=None, size_average=None, reduce=None, reduction="mean", pos_weight=None: -1 850*da0073e9SAndroid Build Coastguard Worker ), 851*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.celu: lambda input, alpha=1.0, inplace=False: -1, 852*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.cosine_embedding_loss: ( 853*da0073e9SAndroid Build Coastguard Worker lambda input1, input2, target, margin=0, size_average=None, reduce=None, reduction="mean": -1 854*da0073e9SAndroid Build Coastguard Worker ), 855*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.cross_entropy: ( 856*da0073e9SAndroid Build Coastguard Worker lambda input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction="mean", label_smoothing=0.0: -1 # noqa: B950 857*da0073e9SAndroid Build Coastguard Worker ), 858*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.ctc_loss: ( 859*da0073e9SAndroid Build Coastguard Worker lambda log_probs, targets, input_lengths, target_lengths, blank=0, reduction="mean", zero_infinity=False: -1 860*da0073e9SAndroid Build Coastguard Worker ), 861*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.dropout: lambda input, p=0.5, training=True, inplace=False: -1, 862*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.dropout1d: lambda input, p=0.5, training=True, inplace=False: -1, 863*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.dropout2d: lambda input, p=0.5, training=True, inplace=False: -1, 864*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.dropout3d: lambda input, p=0.5, training=True, inplace=False: -1, 865*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.elu: lambda input, alpha=1.0, inplace=False: -1, 866*da0073e9SAndroid Build Coastguard Worker 
torch.nn.functional.embedding: ( 867*da0073e9SAndroid Build Coastguard Worker lambda input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False: -1 # noqa: B950 868*da0073e9SAndroid Build Coastguard Worker ), 869*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.embedding_bag: ( 870*da0073e9SAndroid Build Coastguard Worker lambda input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode="mean", sparse=False, per_sample_weights=None, include_last_offset=False, padding_idx=None: -1 # noqa: B950 871*da0073e9SAndroid Build Coastguard Worker ), 872*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.feature_alpha_dropout: lambda input, p=0.5, training=False, inplace=False: -1, 873*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.fold: lambda input, output_size, kernel_size, dilation=1, padding=0, stride=1: -1, 874*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.fractional_max_pool2d: ( 875*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None: -1 # noqa: B950 876*da0073e9SAndroid Build Coastguard Worker ), 877*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.fractional_max_pool2d_with_indices: ( 878*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None: -1 # noqa: B950 879*da0073e9SAndroid Build Coastguard Worker ), 880*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.fractional_max_pool3d: ( 881*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None: -1 # noqa: B950 882*da0073e9SAndroid Build Coastguard Worker ), 883*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.fractional_max_pool3d_with_indices: ( 884*da0073e9SAndroid Build Coastguard 
Worker lambda input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None: -1 # noqa: B950 885*da0073e9SAndroid Build Coastguard Worker ), 886*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.gaussian_nll_loss: lambda input, target, var, full=False, eps=1e-06, reduction="mean": -1, 887*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.gelu: lambda input, approximate="none": -1, 888*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.glu: lambda input, dim=-1: -1, 889*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.grid_sample: lambda input, grid, mode="bilinear", padding_mode="zeros", align_corners=None: -1, # noqa: B950 890*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.group_norm: lambda input, num_groups, weight=None, bias=None, eps=1e-05: -1, 891*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.gumbel_softmax: lambda logits, tau=1, hard=False, eps=1e-10, dim=-1: -1, 892*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.hardshrink: lambda input, lambd=0.5: -1, 893*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.hardtanh: lambda input, min_val=-1.0, max_val=1.0, inplace=False: -1, 894*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.hinge_embedding_loss: ( 895*da0073e9SAndroid Build Coastguard Worker lambda input, target, margin=1.0, size_average=None, reduce=None, reduction="mean": -1 896*da0073e9SAndroid Build Coastguard Worker ), 897*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.instance_norm: ( 898*da0073e9SAndroid Build Coastguard Worker lambda input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05: -1 # noqa: B950 899*da0073e9SAndroid Build Coastguard Worker ), 900*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.interpolate: ( 901*da0073e9SAndroid Build Coastguard Worker lambda input, size=None, scale_factor=None, 
mode="nearest", align_corners=None, recompute_scale_factor=None, antialias=False: -1 # noqa: B950 902*da0073e9SAndroid Build Coastguard Worker ), 903*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.kl_div: lambda input, target, size_average=None, reduce=None, reduction="mean", log_target=False: -1, # noqa: B950 904*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.l1_loss: lambda input, target, size_average=None, reduce=None, reduction="mean": -1, 905*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.layer_norm: lambda input, normalized_shape, weight=None, bias=None, eps=1e-05: -1, 906*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.leaky_relu: lambda input, negative_slope=0.01, inplace=False: -1, 907*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.linear: lambda input, weight, bias=None: -1, 908*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.local_response_norm: lambda input, size, alpha=0.0001, beta=0.75, k=1.0: -1, 909*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.log_softmax: lambda input, dim=None, _stacklevel=3, dtype=None: -1, 910*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.logsigmoid: lambda input: -1, 911*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.lp_pool1d: lambda input, norm_type, kernel_size, stride=None, ceil_mode=False: -1, 912*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.lp_pool2d: lambda input, norm_type, kernel_size, stride=None, ceil_mode=False: -1, 913*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.lp_pool3d: lambda input, norm_type, kernel_size, stride=None, ceil_mode=False: -1, 914*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.margin_ranking_loss: ( 915*da0073e9SAndroid Build Coastguard Worker lambda input1, input2, target, margin=0, size_average=None, reduce=None, reduction="mean": -1 916*da0073e9SAndroid Build Coastguard Worker ), 917*da0073e9SAndroid Build Coastguard Worker 
torch.nn.functional.max_pool1d: ( 918*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False: -1 919*da0073e9SAndroid Build Coastguard Worker ), 920*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_pool1d_with_indices: ( 921*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False: -1 922*da0073e9SAndroid Build Coastguard Worker ), 923*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_pool2d: ( 924*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False: -1 925*da0073e9SAndroid Build Coastguard Worker ), 926*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_pool2d_with_indices: ( 927*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False: -1 928*da0073e9SAndroid Build Coastguard Worker ), 929*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_pool3d: ( 930*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False: -1 931*da0073e9SAndroid Build Coastguard Worker ), 932*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_pool3d_with_indices: ( 933*da0073e9SAndroid Build Coastguard Worker lambda input, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False: -1 934*da0073e9SAndroid Build Coastguard Worker ), 935*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_unpool1d: lambda input, indices, kernel_size, stride=None, padding=0, output_size=None: -1, # noqa: B950 936*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_unpool2d: lambda input, indices, kernel_size, stride=None, padding=0, output_size=None: -1, # noqa: 
B950 937*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.max_unpool3d: lambda input, indices, kernel_size, stride=None, padding=0, output_size=None: -1, # noqa: B950 938*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.mse_loss: lambda input, target, size_average=None, reduce=None, reduction="mean": -1, 939*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.multi_head_attention_forward: ( 940*da0073e9SAndroid Build Coastguard Worker lambda query, key, value, embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias, bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight, out_proj_bias, training=True, key_padding_mask=None, need_weights=True, attn_mask=None, use_separate_proj_weight=False, q_proj_weight=None, k_proj_weight=None, v_proj_weight=None, static_k=None, static_v=None, average_attn_weights=None, is_causal=False: -1 # noqa: B950 941*da0073e9SAndroid Build Coastguard Worker ), 942*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.multi_margin_loss: ( 943*da0073e9SAndroid Build Coastguard Worker lambda input, target, p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction="mean": -1 944*da0073e9SAndroid Build Coastguard Worker ), 945*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.multilabel_margin_loss: ( 946*da0073e9SAndroid Build Coastguard Worker lambda input, target, size_average=None, reduce=None, reduction="mean": -1 947*da0073e9SAndroid Build Coastguard Worker ), 948*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.multilabel_soft_margin_loss: ( 949*da0073e9SAndroid Build Coastguard Worker lambda input, target, weight=None, size_average=None, reduce=None, reduction="mean": -1 950*da0073e9SAndroid Build Coastguard Worker ), 951*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.nll_loss: ( 952*da0073e9SAndroid Build Coastguard Worker lambda input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction="mean": -1 
953*da0073e9SAndroid Build Coastguard Worker ), 954*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.normalize: lambda input, p=2, dim=1, eps=1e-12, out=None: -1, 955*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.one_hot: lambda tensor, num_classes=-1: -1, 956*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.pad: lambda input, pad, mode="constant", value=0: -1, 957*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.pairwise_distance: lambda x1, x2, p=2.0, eps=1e-06, keepdim=False: -1, 958*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.poisson_nll_loss: ( 959*da0073e9SAndroid Build Coastguard Worker lambda input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction="mean": -1 # noqa: B950 960*da0073e9SAndroid Build Coastguard Worker ), 961*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.prelu: lambda input, weight: -1, 962*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.relu: lambda input, inplace=False: -1, 963*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.relu6: lambda input, inplace=False: -1, 964*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.rms_norm: lambda input, normalized_shape, weight=None, eps=1e-6: -1, 965*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.rrelu: lambda input, lower=0.125, upper=0.3333333333333333, training=False, inplace=False: -1, # noqa: B950 966*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.selu: lambda input, inplace=False: -1, 967*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.silu: lambda input, inplace=False: -1, 968*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.mish: lambda input, inplace=False: -1, 969*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.scaled_dot_product_attention: lambda query, key, value, attn_mask=None, dropout_p=0.0: -1, 970*da0073e9SAndroid Build Coastguard Worker 
torch.nn.functional.smooth_l1_loss: lambda input, target, size_average=None, reduce=None, reduction="mean", beta=1.0: -1, # noqa: B950 971*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.huber_loss: lambda input, target, reduction="mean", delta=1.0: -1, 972*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.soft_margin_loss: lambda input, target, size_average=None, reduce=None, reduction="mean": -1, # noqa: B950 973*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.softmax: lambda input, dim=None, _stacklevel=3, dtype=None: -1, 974*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.softmin: lambda input, dim=None, _stacklevel=3, dtype=None: -1, 975*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.softplus: lambda input, beta=1, threshold=20: -1, 976*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.softshrink: lambda input, lambd=0.5: -1, 977*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.softsign: lambda input: -1, 978*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.tanhshrink: lambda input: -1, 979*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.threshold: lambda input, threshold, value, inplace=False: -1, 980*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.triplet_margin_loss: ( 981*da0073e9SAndroid Build Coastguard Worker lambda anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction="mean": -1 # noqa: B950 982*da0073e9SAndroid Build Coastguard Worker ), 983*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.triplet_margin_with_distance_loss: ( 984*da0073e9SAndroid Build Coastguard Worker lambda anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction="mean": -1 985*da0073e9SAndroid Build Coastguard Worker ), 986*da0073e9SAndroid Build Coastguard Worker torch.nn.functional.unfold: lambda input, kernel_size, dilation=1, padding=0, stride=1: -1, 
987*da0073e9SAndroid Build Coastguard Worker torch.nn.init.uniform_: lambda tensor, a=0.0, b=1.0, generator=None: -1, 988*da0073e9SAndroid Build Coastguard Worker torch.nn.init.normal_: lambda tensor, mean=0.0, std=1.0, generator=None: -1, 989*da0073e9SAndroid Build Coastguard Worker torch.nn.init.constant_: lambda tensor, val: -1, 990*da0073e9SAndroid Build Coastguard Worker torch.nn.init.kaiming_uniform_: lambda tensor, a=0, mode="fan_in", nonlinearity="leaky_relu", generator=None: -1, # noqa: B950 991*da0073e9SAndroid Build Coastguard Worker torch.nonzero: lambda input, as_tuple=False: -1, 992*da0073e9SAndroid Build Coastguard Worker torch.nonzero_static: lambda input, *, size, fill_value=-1: -1, 993*da0073e9SAndroid Build Coastguard Worker torch.argwhere: lambda input: -1, 994*da0073e9SAndroid Build Coastguard Worker torch.norm: lambda input, p="fro", dim=None, keepdim=False, out=None, dtype=None: -1, 995*da0073e9SAndroid Build Coastguard Worker torch.linalg.norm: lambda input, ord=None, dim=None, keepdim=False, out=None, dtype=None: -1, 996*da0073e9SAndroid Build Coastguard Worker torch.linalg.vector_norm: lambda input, ord=2, dim=None, keepdim=False, out=None, dtype=None: -1, 997*da0073e9SAndroid Build Coastguard Worker torch.linalg.matrix_norm: lambda input, ord="fro", dim=( 998*da0073e9SAndroid Build Coastguard Worker -2, 999*da0073e9SAndroid Build Coastguard Worker -1, 1000*da0073e9SAndroid Build Coastguard Worker ), keepdim=False, out=None, dtype=None: -1, 1001*da0073e9SAndroid Build Coastguard Worker torch.norm_except_dim: lambda v, pow=2, dim=0: -1, 1002*da0073e9SAndroid Build Coastguard Worker torch.nuclear_norm: lambda input, p="fro", dim=None, keepdim=False, out=None, dtype=None: -1, 1003*da0073e9SAndroid Build Coastguard Worker torch.numel: lambda input: -1, 1004*da0073e9SAndroid Build Coastguard Worker torch.orgqr: lambda input, tau: -1, 1005*da0073e9SAndroid Build Coastguard Worker torch.ormqr: lambda input, input2, input3, left=True, 
transpose=False: -1, 1006*da0073e9SAndroid Build Coastguard Worker torch.pairwise_distance: lambda x1, x2, p=2.0, eps=1e-06, keepdim=False: -1, 1007*da0073e9SAndroid Build Coastguard Worker torch.permute: lambda self, dim: -1, 1008*da0073e9SAndroid Build Coastguard Worker torch.pca_lowrank: lambda input, q=None, center=True, niter=2: -1, 1009*da0073e9SAndroid Build Coastguard Worker torch.pdist: lambda input, p=2: -1, 1010*da0073e9SAndroid Build Coastguard Worker torch.pinverse: lambda input, rcond=1e-15: -1, 1011*da0073e9SAndroid Build Coastguard Worker torch.linalg.pinv: lambda input, rcond=1e-15, hermitian=False: -1, 1012*da0073e9SAndroid Build Coastguard Worker torch.pixel_shuffle: lambda input, upscale_factor: -1, 1013*da0073e9SAndroid Build Coastguard Worker torch.pixel_unshuffle: lambda input, downscale_factor: -1, 1014*da0073e9SAndroid Build Coastguard Worker torch.poisson: lambda input, generator=None: -1, 1015*da0073e9SAndroid Build Coastguard Worker torch.poisson_nll_loss: lambda input, target, log_input, full, eps, reduction: -1, 1016*da0073e9SAndroid Build Coastguard Worker torch.polygamma: lambda input, n, out=None: -1, 1017*da0073e9SAndroid Build Coastguard Worker torch.positive: lambda input, out=None: -1, 1018*da0073e9SAndroid Build Coastguard Worker torch.prelu: lambda input, weight: -1, 1019*da0073e9SAndroid Build Coastguard Worker torch.ones_like: lambda input, dtype=None, layout=None, device=None, requires_grad=False: -1, 1020*da0073e9SAndroid Build Coastguard Worker torch.pow: lambda input, exponent, out=None: -1, 1021*da0073e9SAndroid Build Coastguard Worker torch.prod: lambda input, dtype=None: -1, 1022*da0073e9SAndroid Build Coastguard Worker torch.put: lambda input, index, source, accumulate=False: -1, 1023*da0073e9SAndroid Build Coastguard Worker torch.q_per_channel_axis: lambda input: -1, 1024*da0073e9SAndroid Build Coastguard Worker torch.q_per_channel_scales: lambda input: -1, 1025*da0073e9SAndroid Build Coastguard Worker 
torch.q_per_channel_zero_points: lambda input: -1, 1026*da0073e9SAndroid Build Coastguard Worker torch.q_scale: lambda input: -1, 1027*da0073e9SAndroid Build Coastguard Worker torch.q_zero_point: lambda input: -1, 1028*da0073e9SAndroid Build Coastguard Worker torch.qr: lambda input, some=True, out=None: -1, 1029*da0073e9SAndroid Build Coastguard Worker torch.linalg.qr: lambda input, mode="reduced", out=None: -1, 1030*da0073e9SAndroid Build Coastguard Worker torch.quantile: lambda input, q, dim=None, keepdim=False, interpolation="linear", out=None: -1, 1031*da0073e9SAndroid Build Coastguard Worker torch.nanquantile: lambda input, q, dim=None, keepdim=False, interpolation="linear", out=None: -1, 1032*da0073e9SAndroid Build Coastguard Worker torch.quantize_per_channel: lambda input, scales, zero_points, axis, dtype: -1, 1033*da0073e9SAndroid Build Coastguard Worker torch.quantize_per_tensor: lambda input, scale, zero_point, dtype: -1, 1034*da0073e9SAndroid Build Coastguard Worker torch.quantize_per_tensor_dynamic: lambda input, dtype, reduce_range: -1, 1035*da0073e9SAndroid Build Coastguard Worker torch.quantized_batch_norm: lambda input, weight, bias, mean, var, eps, output_scale, output_zero_point: -1, 1036*da0073e9SAndroid Build Coastguard Worker torch.quantized_gru_cell: ( 1037*da0073e9SAndroid Build Coastguard Worker lambda input, hx, w_ih, w_hh, b_ih, b_hh, packed_ih, packed_hh, col_offsets_ih, col_offsets_hh, scale_ih, scale_hh, zero_point_ih, zero_point_hh: -1 # noqa: B950 1038*da0073e9SAndroid Build Coastguard Worker ), 1039*da0073e9SAndroid Build Coastguard Worker torch.quantized_lstm_cell: ( 1040*da0073e9SAndroid Build Coastguard Worker lambda input, hx, w_ih, w_hh, b_ih, b_hh, packed_ih, packed_hh, col_offsets_ih, col_offsets_hh, scale_ih, scale_hh, zero_point_ih, zero_point_hh: -1 # noqa: B950 1041*da0073e9SAndroid Build Coastguard Worker ), 1042*da0073e9SAndroid Build Coastguard Worker torch.quantized_max_pool1d: ( 1043*da0073e9SAndroid Build Coastguard 
            lambda input, kernel_size, stride=(), padding=(0,), dilation=(1,), ceil_mode=False: -1
        ),
        torch.quantized_max_pool2d: (
            lambda input, kernel_size, stride=(), padding=(0, 0), dilation=(1, 1), ceil_mode=False: -1
        ),
        torch.quantized_max_pool3d: (
            lambda input, kernel_size, stride=(), padding=(0, 0, 0), dilation=(1, 1, 1), ceil_mode=False: -1
        ),
        torch.quantized_rnn_relu_cell: (
            lambda input, hx, w_ih, w_hh, b_ih, b_hh, packed_ih, packed_hh, col_offsets_ih, col_offsets_hh, scale_ih, scale_hh, zero_point_ih, zero_point_hh: -1  # noqa: B950
        ),
        torch.quantized_rnn_tanh_cell: (
            lambda input, hx, w_ih, w_hh, b_ih, b_hh, packed_ih, packed_hh, col_offsets_ih, col_offsets_hh, scale_ih, scale_hh, zero_point_ih, zero_point_hh: -1  # noqa: B950
        ),
        torch.rad2deg: lambda input, out=None: -1,
        torch.rand_like: lambda input, dtype=None, layout=None, device=None, requires_grad=False: -1,
        torch.randint_like: lambda input, high, dtype=None, layout=torch.strided, device=None, requires_grad=False: -1,
        torch.randn_like: lambda input, dtype=None, layout=None, device=None, requires_grad=False: -1,
        torch.ravel: lambda input: -1,
        torch.real: lambda input, out=None: -1,
        torch.vdot: lambda input, other, out=None: -1,
        torch.linalg.vecdot: lambda input, other, dim=-1, out=None: -1,
        torch.view_as_real: lambda input: -1,
        torch.view_as_complex: lambda input: -1,
        torch.reciprocal: lambda input, out=None: -1,
        torch.relu: lambda input, inplace=False: -1,
        torch.remainder: lambda input, other, out=None: -1,
        torch.renorm: lambda input, p, dim, maxnorm, out=None: -1,
        torch.repeat_interleave: lambda input, dim=None: -1,
        torch.reshape: lambda input, shape: -1,
        torch.rms_norm: lambda input, normalized_shape, weight=None, eps=1e-6: -1,
        torch.rnn_relu: lambda input, hx, params, has_biases, num_layers, dropout, train, bidirectional, batch_first: -1,  # noqa: B950
        torch.rnn_relu_cell: lambda input, hx, w_ih, w_hh, b_ih=None, b_hh=None: -1,
        torch.rnn_tanh: lambda input, hx, params, has_biases, num_layers, dropout, train, bidirectional, batch_first: -1,  # noqa: B950
        torch.rnn_tanh_cell: lambda input, hx, w_ih, w_hh, b_ih=None, b_hh=None: -1,
        torch.roll: lambda input, shifts, dims=None: -1,
        torch.rot90: lambda input, k=1, dims=(0, 1): -1,
        torch.round: lambda input, out=None: -1,
        torch.row_stack: lambda tensors, out=None: -1,  # alias for torch.vstack
        torch._rowwise_prune: (lambda weight, mask, compressed_indices_dtype: -1),
        torch.rrelu: lambda input, lower=1.0 / 8, upper=1.0 / 3, training=False, inplace=False: -1,
        torch.rsqrt: lambda input, out=None: -1,
        torch.rsub: lambda input, other, alpha=1: -1,
        torch.saddmm: lambda input, mat1, mat2, beta=1, alpha=1, out=None: -1,
        torch.scatter: lambda input, dim, index, src: -1,
        torch.scatter_add: lambda input, dim, index, src: -1,
        torch.scatter_reduce: lambda input, dim, index, src, reduce, include_self=True: -1,
        torch.searchsorted: lambda sorted_sequence, input, out_int32=False, right=False, out=None: -1,
        torch._segment_reduce: lambda data, reduce="max", lengths=None, indices=None, offsets=None, axis=0, unsafe=False: -1,  # noqa: B950
        torch.select: lambda input, dim, index: -1,
        torch.select_scatter: lambda input, src, dim, index: -1,
        torch.slice_inverse: lambda input, src, dim=0, start=None, end=None, step=1: -1,
        torch.slice_scatter: lambda input, src, dim=0, start=None, end=None, step=1: -1,
        torch.selu: lambda input, inplace=False: -1,
        torch.sigmoid: lambda input, out=None: -1,
        torch.sign: lambda input, out=None: -1,
        torch.signbit: lambda input, out=None: -1,
        torch.sgn: lambda input, out=None: -1,
        torch.sin: lambda input, out=None: -1,
        torch.sinc: lambda input, out=None: -1,
        torch.sinh: lambda input, out=None: -1,
        torch.slogdet: lambda input: -1,
        torch.linalg.slogdet: lambda input: -1,
        torch.smm: lambda input, mat2: -1,
        torch.spmm: lambda input, mat2: -1,
        torch.softmax: lambda input, dim, dtype=None: -1,
        torch.linalg.solve: lambda A, B, left=True, out=None: -1,
        torch.linalg.solve_ex: lambda A, B, left=True, check_errors=False, out=None: -1,
        torch.sort: lambda input, dim=-1, descending=False, *, stable=False, out=None: -1,
        torch.split: lambda tensor, split_size_or_sections, dim=0: -1,
        torch.split_with_sizes: lambda tensor, split_size_or_sections, dim=0: -1,
        torch.sqrt: lambda input, out=None: -1,
        torch.square: lambda input, out=None: -1,
        torch.squeeze: lambda input, dim=None, out=None: -1,
        torch.sspaddmm: lambda input, mat1, mat2, beta=1, alpha=1, out=None: -1,
        torch.stack: lambda tensors, dim=0, out=None: -1,
        torch.std: lambda input, dim=None: -1,
        torch.std_mean: lambda input, dim=None: -1,
        torch.stft: (
            lambda input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode="reflect", normalized=False, onesided=True, return_complex=None: -1  # noqa: B950
        ),
        torch.sub: lambda input, other, out=None: -1,
        torch.subtract: lambda input, other, out=None: -1,
        torch.sum: lambda input, dim=None: -1,
        torch.sym_float: lambda input: -1,
        torch.sym_int: lambda input: -1,
        torch.sym_max: lambda a, b: -1,
        torch.sym_min: lambda a, b: -1,
        torch.sym_not: lambda input: -1,
        torch.sym_ite: lambda a, b, c: -1,
        torch._sym_sqrt: lambda input: -1,
        torch._sym_cos: lambda input: -1,
        torch._sym_cosh: lambda input: -1,
        torch._sym_sin: lambda input: -1,
        torch._sym_sinh: lambda input: -1,
        torch._sym_tan: lambda input: -1,
        torch._sym_tanh: lambda input: -1,
        torch._sym_asin: lambda input: -1,
        torch._sym_acos: lambda input: -1,
        torch._sym_atan: lambda input: -1,
        torch.nansum: lambda input, dim=None: -1,
        torch.svd: lambda input, some=True, compute_uv=True, out=None: -1,
        torch.svd_lowrank: lambda input, q=6, niter=2, M=None: -1,
        torch.linalg.svd: lambda input, full_matrices=True, out=None: -1,
        torch.linalg.svdvals: lambda input, out=None: -1,
        torch.swapaxes: lambda input, dim0, dim1: -1,
        torch.swapdims: lambda input, axis0, axis1: -1,
        torch.special.airy_ai: lambda input: -1,
        torch.special.bessel_j0: lambda input: -1,
        torch.special.bessel_j1: lambda input: -1,
        torch.special.bessel_y0: lambda input: -1,
        torch.special.bessel_y1: lambda input: -1,
        torch.special.chebyshev_polynomial_t: lambda input, n, out=None: -1,
        torch.special.chebyshev_polynomial_u: lambda input, n, out=None: -1,
        torch.special.chebyshev_polynomial_v: lambda input, n, out=None: -1,
        torch.special.chebyshev_polynomial_w: lambda input, n, out=None: -1,
        torch.special.digamma: lambda input: -1,
        torch.special.entr: lambda input: -1,
        torch.special.erf: lambda input: -1,
        torch.special.erfc: lambda input: -1,
        torch.special.erfcx: lambda input: -1,
        torch.special.erfinv: lambda input: -1,
        torch.special.exp2: lambda input: -1,
        torch.special.expit: lambda input: -1,
        torch.special.expm1: lambda input: -1,
        torch.special.gammainc: lambda input, other, out=None: -1,
        torch.special.gammaincc: lambda input, other, out=None: -1,
        torch.special.gammaln: lambda input: -1,
        torch.special.hermite_polynomial_h: lambda input, n, out=None: -1,
        torch.special.hermite_polynomial_he: lambda input, n, out=None: -1,
        torch.special.i0: lambda input: -1,
        torch.special.i0e: lambda input: -1,
        torch.special.i1: lambda input: -1,
        torch.special.i1e: lambda input: -1,
        torch.special.laguerre_polynomial_l: lambda input, n, out=None: -1,
        torch.special.legendre_polynomial_p: lambda input, n, out=None: -1,
        torch.special.log1p: lambda input: -1,
        torch.special.log_ndtr: lambda input: -1,
        torch.special.log_softmax: lambda input, dim, dtype=None: -1,
        torch.special.logit: lambda input: -1,
        torch.special.logsumexp: lambda input, dim, keepdim=False, out=None: -1,
        torch.special.modified_bessel_i0: lambda input: -1,
        torch.special.modified_bessel_i1: lambda input: -1,
        torch.special.modified_bessel_k0: lambda input: -1,
        torch.special.modified_bessel_k1: lambda input: -1,
        torch.special.multigammaln: lambda input, p: -1,
        torch.special.ndtr: lambda input: -1,
        torch.special.ndtri: lambda input: -1,
        torch.special.polygamma: lambda input, n, out=None: -1,
        torch.special.psi: lambda input: -1,
        torch.special.round: lambda input: -1,
        torch.special.scaled_modified_bessel_k0: lambda input: -1,
        torch.special.scaled_modified_bessel_k1: lambda input: -1,
        torch.special.shifted_chebyshev_polynomial_t: lambda input, n, out=None: -1,
        torch.special.shifted_chebyshev_polynomial_u: lambda input, n, out=None: -1,
        torch.special.shifted_chebyshev_polynomial_v: lambda input, n, out=None: -1,
        torch.special.shifted_chebyshev_polynomial_w: lambda input, n, out=None: -1,
        torch.special.sinc: lambda input: -1,
        torch.special.softmax: lambda input, dim, dtype=None: -1,
        torch.special.spherical_bessel_j0: lambda input: -1,
        torch.special.xlog1py: lambda input, other, out=None: -1,
        torch.special.xlogy: lambda input, other, out=None: -1,
        torch.special.zeta: lambda self, other, out=None: -1,
        torch.t: lambda input: -1,
        torch.take: lambda input, index: -1,
        torch.take_along_dim: lambda input, indices, dim=None, out=None: -1,
        torch.tan: lambda input, out=None: -1,
        torch.tanh: lambda input, out=None: -1,
        torch.linalg.tensorinv: lambda a, ind=2: -1,
        torch.linalg.tensorsolve: lambda a, b, dims=None: -1,
        torch.tensordot: lambda a, b, dims=2, out=None: -1,
        torch.tensor_split: lambda input, indices_or_sections, dim=0: -1,
        torch.threshold: lambda input, threshold, value, inplace=False: -1,
        torch.tile: lambda input, dims: -1,
        torch.topk: lambda input, k, dim=-1, descending=False, out=None: -1,
        torch.trace: lambda input: -1,
        torch.transpose: lambda input, dim0, dim1: -1,
        torch.trapz: lambda y, x=None, dim=-1: -1,
        torch.trapezoid: lambda y, x=None, dim=-1: -1,
        torch.triangular_solve: lambda input, A, upper=True, transpose=False, unitriangular=False: -1,
        torch.linalg.solve_triangular: lambda input, B, upper, left=True, unitriangular=False: -1,
        torch.tril: lambda input, diagonal=0, out=None: -1,
        torch.triplet_margin_loss: (
            lambda anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction="mean": -1  # noqa: B950
        ),
        torch.triu: lambda input, diagonal=0, out=None: -1,
        torch.true_divide: lambda input, other: -1,
        torch.trunc: lambda input, out=None: -1,
        torch.unbind: lambda input, dim=0: -1,
        torch.unflatten: lambda input, dim, sizes, names: -1,
        torch.unique: lambda input, sorted=True, return_inverse=False, return_counts=False, dim=None: -1,
        torch.unique_consecutive: lambda input, return_inverse=False, return_counts=False, dim=None: -1,
        torch.unravel_index: lambda indices, shape: -1,
        torch.unsafe_chunk: lambda input, chunks, dim=0: -1,
        torch.unsafe_split: lambda tensor, split_size_or_sections, dim=0: -1,
        torch.unsafe_split_with_sizes: lambda tensor, split_size_or_sections, dim=0: -1,
        torch.unsqueeze: lambda input, dim, out=None: -1,
        torch.linalg.vander: lambda x, N=None: -1,
        torch.var: lambda input, dim=None: -1,
        torch.var_mean: lambda input, dim=None: -1,
        torch.vsplit: lambda input, indices_or_sections: -1,
        torch.vstack: lambda tensors, out=None: -1,
        torch.where: lambda condition, x=None, y=None: -1,
        torch._wrapped_linear_prepack: lambda weight, weight_scale, weight_zero_point, bias: -1,
        torch._wrapped_quantized_linear_prepacked: (
            lambda input, input_scale, input_zero_point, prepacked, out_scale, out_zero_point, out_channel: -1  # noqa: B950
        ),
        torch.zeros_like: lambda input, dtype=None, layout=None, device=None, requires_grad=False: -1,
        torch._fw_primal_copy: lambda self, level: -1,
        torch._make_dual_copy: lambda primal, tangent, level: -1,
        torch.view_as_real_copy: lambda self: -1,
        torch.view_as_complex_copy: lambda self: -1,
        torch._conj_copy: lambda self: -1,
        torch._neg_view_copy: lambda self: -1,
        torch.as_strided_copy: lambda self, size, stride, storage_offset=None: -1,
        torch._sparse_broadcast_to_copy: lambda self, size: -1,
        torch.diagonal_copy: lambda self, offset=0, dim1=0, dim2=1: -1,
        torch.expand_copy: lambda self, size, *, implicit=False: -1,
        torch.narrow_copy: lambda self, dim, start, length: -1,
        torch.permute_copy: lambda self, dims: -1,
        torch._reshape_alias_copy: lambda self, size, stride: -1,
        torch.select_copy: lambda self, dim, index: -1,
        torch.detach_copy: lambda self: -1,
        torch.slice_copy: lambda self, dim=0, start=None, end=None, step=1: -1,
        torch.split_copy: lambda self, split_size, dim=0: -1,
        torch.split_with_sizes_copy: lambda self, split_sizes, dim=0: -1,
        torch.squeeze_copy: lambda self, dim: -1,
        torch.t_copy: lambda self: -1,
        torch.transpose_copy: lambda self, dim0, dim1: -1,
        torch.unsqueeze_copy: lambda self, dim: -1,
        torch._indices_copy: lambda self: -1,
        torch._values_copy: lambda self: -1,
        torch.indices_copy: lambda self: -1,
        torch.values_copy: lambda self: -1,
        torch.crow_indices_copy: lambda self: -1,
        torch.col_indices_copy: lambda self: -1,
        torch.ccol_indices_copy: lambda self: -1,
        torch.row_indices_copy: lambda self: -1,
        torch.unbind_copy: lambda self, dim=0: -1,
        torch.view_copy: lambda self, dtype: -1,
        torch.unfold_copy: lambda self, dimension, size, step: -1,
        torch.alias_copy: lambda self: -1,
        Tensor.__floordiv__: lambda self, other: -1,
        Tensor.__rfloordiv__: lambda self, other: -1,
        Tensor.__ifloordiv__: lambda self, other: -1,
        Tensor.__truediv__: lambda self, other: -1,
        Tensor.__rtruediv__: lambda self, other: -1,
        Tensor.__itruediv__: lambda self, other: -1,
        Tensor.__lshift__: lambda self, other: -1,
        Tensor.__rlshift__: lambda self, other: -1,
        Tensor.__ilshift__: lambda self, other: -1,
        Tensor.__rshift__: lambda self, other: -1,
        Tensor.__rrshift__: lambda self, other: -1,
        Tensor.__irshift__: lambda self, other: -1,
        Tensor.__and__: lambda self, other: -1,
        Tensor.__or__: lambda self, other: -1,
        Tensor.__xor__: lambda self, other: -1,
        Tensor.__float__: lambda self: -1,
        Tensor.__complex__: lambda self: -1,
        Tensor.__array__: lambda self, dtype: -1,
        Tensor.__bool__: lambda self: -1,
        Tensor.__contains__: lambda self, other: -1,
        Tensor.__neg__: lambda self: -1,
        Tensor.__invert__: lambda self: -1,
        Tensor.__mod__: lambda self, other: -1,
        Tensor.__rmod__: lambda self, other: -1,
        Tensor.__imod__: lambda self, other: -1,
        Tensor.__array_wrap__: lambda self, array: -1,
        Tensor.__getitem__: lambda self, idx: -1,
        Tensor.__deepcopy__: lambda self, memo: -1,
        Tensor.__int__: lambda self: -1,
        Tensor.__long__: lambda self: -1,
        Tensor.__index__: lambda self: -1,
        Tensor.__len__: lambda self: -1,
        Tensor.__format__: lambda self, format_spec: -1,
        Tensor.__reduce_ex__: lambda self, proto: -1,
        Tensor.__reversed__: lambda self: -1,
        Tensor.__repr__: lambda self, *, tensor_contents=None: -1,
        Tensor.__setitem__: lambda self, k, v: -1,
        Tensor.__setstate__: lambda self, d: -1,
        Tensor.T.__get__: lambda self: -1,
        Tensor.H.__get__: lambda self: -1,
        Tensor.mT.__get__: lambda self: -1,
        Tensor.mH.__get__: lambda self: -1,
        Tensor._backward_hooks.__get__: lambda self: -1,
        Tensor._post_accumulate_grad_hooks.__get__: lambda self: -1,
        Tensor._base.__get__: lambda self: -1,
        Tensor._cdata.__get__: lambda self: -1,
        Tensor.grad.__get__: lambda self: -1,
        Tensor._grad.__get__: lambda self: -1,
        Tensor._grad_fn.__get__: lambda self: -1,
        Tensor.grad_fn.__get__: lambda self: -1,
        Tensor._version.__get__: lambda self: -1,
        Tensor._autocast_to_reduced_precision: lambda self, cuda_enabled, cpu_enabled, cuda_dtype, cpu_dtype: -1,
        Tensor._autocast_to_full_precision: lambda self, cuda_enabled, cpu_enabled: -1,
        Tensor.data.__get__: lambda self: -1,
        Tensor.device.__get__: lambda self: -1,
        Tensor.dtype.__get__: lambda self: -1,
        Tensor.is_cuda.__get__: lambda self: -1,
        Tensor.is_cpu.__get__: lambda self: -1,
        Tensor.is_xla.__get__: lambda self: -1,
        Tensor.is_xpu.__get__: lambda self: -1,
        Tensor.is_ipu.__get__: lambda self: -1,
        Tensor.is_leaf.__get__: lambda self: -1,
        Tensor.retains_grad.__get__: lambda self: -1,
        Tensor.is_meta.__get__: lambda self: -1,
        Tensor.is_mps.__get__: lambda self: -1,
        Tensor.is_mtia.__get__: lambda self: -1,
        Tensor.is_nested.__get__: lambda self: -1,
        Tensor.is_maia.__get__: lambda self: -1,
        Tensor.is_mkldnn.__get__: lambda self: -1,
        Tensor.is_quantized.__get__: lambda self: -1,
        Tensor.is_sparse.__get__: lambda self: -1,
        Tensor.is_sparse_csr.__get__: lambda self: -1,
        Tensor.is_vulkan.__get__: lambda self: -1,
        Tensor.itemsize.__get__: lambda self: -1,
        Tensor.layout.__get__: lambda self: -1,
        Tensor.name.__get__: lambda self: -1,
        Tensor.names.__get__: lambda self: -1,
        Tensor.nbytes.__get__: lambda self: -1,
        Tensor.ndim.__get__: lambda self: -1,
        Tensor.output_nr.__get__: lambda self: -1,
        Tensor.requires_grad.__get__: lambda self: -1,
        Tensor.shape.__get__: lambda self: -1,
        Tensor.volatile.__get__: lambda self: -1,
        Tensor.real.__get__: lambda self: -1,
        Tensor.imag.__get__: lambda self: -1,
        Tensor.__cuda_array_interface__.__get__: lambda self: -1,
        Tensor.type: lambda self, dtype=None, non_blocking=False, **kwargs: -1,
        Tensor._dimI: lambda self: -1,
        Tensor._dimV: lambda self: -1,
        Tensor._indices: lambda self: -1,
        Tensor._is_view: lambda self: -1,
        Tensor._nnz: lambda self: -1,
        Tensor.crow_indices: lambda self: -1,
        Tensor.col_indices: lambda self: -1,
        Tensor.ccol_indices: lambda self: -1,
        Tensor.row_indices: lambda self: -1,
        Tensor._update_names: lambda self, names, inplace: -1,
        Tensor._values: lambda self: -1,
        Tensor.adjoint: lambda self: -1,
        Tensor.align_as: lambda self, other: -1,
        Tensor.align_to: lambda self, order, ellipsis_idx: -1,
        Tensor.apply_: lambda self, callable: -1,
        Tensor.as_strided: lambda self, size, stride: -1,
        Tensor.as_strided_: lambda self, size, stride: -1,
        Tensor.backward: lambda self, gradient=None, retain_graph=None, create_graph=False, inputs=None: -1,
        Tensor.bfloat16: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.bool: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.byte: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.char: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.cauchy_: lambda self, median=0, sigma=1, *, generator=None: -1,
        Tensor.coalesce: lambda self: -1,
        Tensor._coalesced_: lambda self, coalesced: -1,
        Tensor.contiguous: lambda self, memory_format=torch.contiguous_format: -1,
        Tensor.copy_: lambda self, src, non_blocking=False: -1,
        Tensor.cpu: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.cuda: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.mtia: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.xpu: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.ipu: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.data_ptr: lambda self: -1,
        Tensor.dense_dim: lambda self: -1,
        Tensor.diagonal_scatter: lambda self, src, offset=0, dim1=0, dim2=1: -1,
        Tensor.dim: lambda self: -1,
        Tensor.dim_order: lambda self: -1,
        Tensor.double: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.cdouble: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.element_size: lambda self: -1,
        Tensor.expand: lambda self, size: -1,
        Tensor.expand_as: lambda self, other: -1,
        Tensor.exponential_: lambda self, lambd=1, *, generator=None: -1,
        Tensor.fill_: lambda self, value: -1,
        Tensor.fill_diagonal_: lambda self, value: -1,
        Tensor.float: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.cfloat: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.geometric_: lambda self, p, *, generator=None: -1,
        Tensor.get_device: lambda self: -1,
        Tensor.half: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.chalf: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.has_names: lambda self: -1,
        Tensor.indices: lambda self: -1,
        Tensor.int: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.is_coalesced: lambda self: -1,
        Tensor.is_contiguous: lambda self: -1,
        Tensor.is_inference: lambda self: -1,
        Tensor.is_pinned: lambda self: -1,
        Tensor.is_set_to: lambda self, tensor: -1,
        Tensor.is_shared: lambda self: -1,
        Tensor.item: lambda self: -1,
        Tensor.log_normal_: lambda self, mean=1, std=2, *, generator=None: -1,
        Tensor.log_softmax: lambda self, dim: -1,
        Tensor.long: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.map_: lambda self, tensor, callable: -1,
        Tensor.map2_: lambda self, x, y, callable: -1,
        Tensor.mm: lambda self, mat2: -1,
        Tensor.module_load: lambda self, other, assign=False: -1,
        Tensor.narrow_copy: lambda self, dimension, start, length: -1,
        Tensor.ndimension: lambda self: -1,
        Tensor.nelement: lambda self: -1,
        Tensor._nested_tensor_size: lambda self: -1,
        Tensor._nested_tensor_storage_offsets: lambda self: -1,
        Tensor._nested_tensor_strides: lambda self: -1,
        Tensor.normal_: lambda self: -1,
        Tensor.numpy: lambda self: -1,
        Tensor.permute: lambda self, dim: -1,
        Tensor.pin_memory: lambda self: -1,
        Tensor.put_: lambda self, indices, tensor, accumulate=False: -1,
        Tensor.qscheme: lambda self: -1,
        Tensor.random_: lambda self, from_=0, to=None, *, generator=None: -1,
        Tensor.record_stream: lambda self, stream: -1,
        Tensor.refine_names: lambda self, names: -1,
        Tensor.register_hook: lambda self, hook: -1,
        Tensor.register_post_accumulate_grad_hook: lambda self, hook: -1,
        Tensor.rename: lambda self, name: -1,
        Tensor.repeat: lambda self, *size: -1,
        Tensor.requires_grad_: lambda self, requires_grad=True: -1,
        Tensor.reshape_as: lambda self, other: -1,
        Tensor.resize: lambda self, *size: -1,
        Tensor.resize_: lambda self, size: -1,
        Tensor.resize_as: lambda self, other: -1,
        Tensor.resize_as_sparse_: lambda self, other: -1,
        Tensor.retain_grad: lambda self: -1,
        Tensor.set_: lambda self, source=None, storage_offset=0, size=None, stride=None: -1,
        Tensor.select_scatter: lambda self, src, dim, index: -1,
        Tensor.share_memory_: lambda self: -1,
        Tensor.short: lambda self, memory_format=torch.preserve_format: -1,
        Tensor.size: lambda self: -1,
        Tensor.slice_scatter: lambda self, src, dim=0, start=None, end=None, step=1: -1,
        Tensor.sparse_dim: lambda self: -1,
        Tensor.sparse_mask: lambda self, mask: -1,
Worker Tensor._sparse_mask_projection: lambda self, mask, accumulate_matches=False: -1, 1484*da0073e9SAndroid Build Coastguard Worker Tensor.sparse_resize_: lambda self, size1, size2, dense_dim: -1, 1485*da0073e9SAndroid Build Coastguard Worker Tensor.sparse_resize_and_clear_: lambda self, size1, size2, dense_dim: -1, 1486*da0073e9SAndroid Build Coastguard Worker Tensor.sspaddmm: lambda self, mat1, mat2, beta=1, alpha=1, out=None: -1, 1487*da0073e9SAndroid Build Coastguard Worker Tensor.storage: lambda self: -1, 1488*da0073e9SAndroid Build Coastguard Worker Tensor.untyped_storage: lambda self: -1, 1489*da0073e9SAndroid Build Coastguard Worker Tensor.storage_offset: lambda self: -1, 1490*da0073e9SAndroid Build Coastguard Worker Tensor.storage_type: lambda self: -1, 1491*da0073e9SAndroid Build Coastguard Worker Tensor.sum_to_size: lambda self, size: -1, 1492*da0073e9SAndroid Build Coastguard Worker Tensor.tile: lambda self, *reps: -1, 1493*da0073e9SAndroid Build Coastguard Worker Tensor.to: lambda self, dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format: -1, 1494*da0073e9SAndroid Build Coastguard Worker Tensor.to_dense: lambda self, dtype=None, *, masked_grad=None: -1, 1495*da0073e9SAndroid Build Coastguard Worker Tensor._to_dense: lambda self, dtype=None, masked_grad=None: -1, 1496*da0073e9SAndroid Build Coastguard Worker Tensor.to_sparse: lambda self: -1, 1497*da0073e9SAndroid Build Coastguard Worker Tensor.tolist: lambda self: -1, 1498*da0073e9SAndroid Build Coastguard Worker Tensor.to_mkldnn: lambda self: -1, 1499*da0073e9SAndroid Build Coastguard Worker Tensor.type_as: lambda self, other: -1, 1500*da0073e9SAndroid Build Coastguard Worker Tensor.unfold: lambda self, dimension, size, step: -1, 1501*da0073e9SAndroid Build Coastguard Worker Tensor.uniform_: lambda self, from_=0, to=1: -1, 1502*da0073e9SAndroid Build Coastguard Worker Tensor.values: lambda self: -1, 1503*da0073e9SAndroid Build Coastguard Worker Tensor.view: lambda self, shape: 
-1, 1504*da0073e9SAndroid Build Coastguard Worker Tensor.view_as: lambda self, other: -1, 1505*da0073e9SAndroid Build Coastguard Worker Tensor.zero_: lambda self: -1, 1506*da0073e9SAndroid Build Coastguard Worker Tensor.__dlpack__: lambda self, stream=None: -1, 1507*da0073e9SAndroid Build Coastguard Worker Tensor.__dlpack_device__: lambda self: -1, 1508*da0073e9SAndroid Build Coastguard Worker torch.linalg.lstsq: lambda self, b, cond=None, driver=None: -1, 1509*da0073e9SAndroid Build Coastguard Worker } # fmt: skip 1510*da0073e9SAndroid Build Coastguard Worker 1511*da0073e9SAndroid Build Coastguard Worker privateuse1_backend_name = ( 1512*da0073e9SAndroid Build Coastguard Worker torch.utils.backend_registration._privateuse1_backend_name 1513*da0073e9SAndroid Build Coastguard Worker ) 1514*da0073e9SAndroid Build Coastguard Worker if hasattr(Tensor, privateuse1_backend_name): 1515*da0073e9SAndroid Build Coastguard Worker ret[getattr(Tensor, privateuse1_backend_name)] = ( 1516*da0073e9SAndroid Build Coastguard Worker lambda self, device=None, non_blocking=False, **kwargs: -1 1517*da0073e9SAndroid Build Coastguard Worker ) 1518*da0073e9SAndroid Build Coastguard Worker ret[getattr(Tensor, f"is_{privateuse1_backend_name}").__get__] = lambda self: -1 1519*da0073e9SAndroid Build Coastguard Worker 1520*da0073e9SAndroid Build Coastguard Worker ret2 = {} 1521*da0073e9SAndroid Build Coastguard Worker ignored = get_ignored_functions() 1522*da0073e9SAndroid Build Coastguard Worker 1523*da0073e9SAndroid Build Coastguard Worker for k, v in ret.items(): 1524*da0073e9SAndroid Build Coastguard Worker # Generate methods like __add__ and add_ by default from add 1525*da0073e9SAndroid Build Coastguard Worker names = [ 1526*da0073e9SAndroid Build Coastguard Worker k.__name__, # Default method 1527*da0073e9SAndroid Build Coastguard Worker k.__name__ + "_", # Inplace variant 1528*da0073e9SAndroid Build Coastguard Worker "__" + k.__name__ + "__", # Dunder method 1529*da0073e9SAndroid Build 
            "__i" + k.__name__ + "__",  # Inplace dunder method
            "__r" + k.__name__ + "__",  # Reverse dunder method
        ]

        if k.__name__.startswith("bitwise_"):
            # bitwise_<op> ops have dunder methods of the form __<op>__,
            # __i<op>__ and __r<op>__
            subname = k.__name__[len("bitwise_") :]
            names.extend(
                ["__" + subname + "__", "__i" + subname + "__", "__r" + subname + "__"]
            )

        for name in names:
            func = getattr(Tensor, name, None)
            if callable(func) and func not in ret and func not in ignored:
                ret2[func] = v

    ret.update(ret2)
    return ret


def wrap_torch_function(dispatcher: Callable):
    """Wraps a given function with ``__torch_function__``-related functionality.

    Parameters
    ----------
    dispatcher: Callable
        A callable that returns an iterable of Tensor-likes passed into the function.

    Note
    ----
    This decorator may reduce the performance of your code. Generally, it's enough to express
    your code as a series of functions that, themselves, support __torch_function__. If you
    find yourself in the rare situation where this is not the case, e.g. if you're wrapping a
    low-level library and you also need it to work for Tensor-likes, then this function is available.

    Examples
    --------
    >>> def dispatcher(a):  # Must have the same signature as func
    ...     return (a,)
    >>> @torch.overrides.wrap_torch_function(dispatcher)
    ... def func(a):  # This will make func dispatchable by __torch_function__
    ...     return a + 0
    """

    def inner(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            relevant_args = dispatcher(*args, **kwargs)
            if has_torch_function(relevant_args):
                return handle_torch_function(wrapped, relevant_args, *args, **kwargs)

            return func(*args, **kwargs)

        return wrapped

    return inner
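The decorator above boils down to a small, reusable pattern: ask the dispatcher for the candidate arguments, and hand control to the first one that exposes an override hook. A torch-free sketch of that pattern, using a hypothetical ``__my_function__`` hook in place of ``__torch_function__``:

```python
# Torch-free sketch of the pattern wrap_torch_function implements: the wrapper
# calls the dispatcher to collect candidate arguments, and if any of them
# defines an override hook (a hypothetical `__my_function__`, standing in for
# __torch_function__), dispatch is routed to that hook instead of the body.
import functools


def wrap_my_function(dispatcher):
    def inner(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            relevant_args = dispatcher(*args, **kwargs)
            for arg in relevant_args:
                hook = getattr(type(arg), "__my_function__", None)
                if hook is not None:
                    types = tuple(type(a) for a in relevant_args)
                    return hook(wrapped, types, args, kwargs)
            # No argument asked for dispatch: run the original implementation.
            return func(*args, **kwargs)

        return wrapped

    return inner


@wrap_my_function(lambda a: (a,))
def add_zero(a):
    return a + 0


class Loud:
    @classmethod
    def __my_function__(cls, func, types, args, kwargs):
        return "intercepted " + func.__name__


print(add_zero(41))      # plain values fall through to the implementation
print(add_zero(Loud()))  # hook-bearing values get routed to their override
```

The real decorator differs only in that the hook check and the dispatch are done by ``has_torch_function`` and ``handle_torch_function``.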


def _get_overloaded_args(
    relevant_args: Iterable[Any],
    get_type_fn: Callable[[Any], Type] = None,
) -> List[Any]:
    """Returns a list of arguments on which to call __torch_function__.

    Checks arguments in relevant_args for __torch_function__ implementations,
    storing references to the arguments and their types in overloaded_args and
    overloaded_types in order of calling precedence. Only distinct types are
    considered. If a type is a subclass of another type it will have higher
    precedence, otherwise the precedence order is the same as the order of
    arguments in relevant_args, that is, from left-to-right in the argument list.

    The precedence-determining algorithm implemented in this function is
    described in `NEP-0018`_.

    See torch::append_overloaded_arg for the equivalent function in the C++
    implementation.

    Parameters
    ----------
    relevant_args : iterable of array-like
        Iterable of array-like arguments to check for __torch_function__
        methods.

    get_type_fn : callable, optional
        Function to call on each argument in relevant_args to get its type.

    Returns
    -------
    overloaded_args : list
        Arguments from relevant_args on which to call __torch_function__
        methods, in the order in which they should be called.

    .. _NEP-0018:
       https://numpy.org/neps/nep-0018-array-function-protocol.html
    """
    if get_type_fn is None:
        get_type_fn = type

    # If torch function is not enabled, there are no overloaded types
    if not torch._C._is_torch_function_enabled():
        return []
    # Runtime is O(num_arguments * num_unique_types)
    overloaded_types: Set[Type] = set()
    overloaded_args: List[Any] = []
    for arg in relevant_args:
        arg_type = get_type_fn(arg)
        # We only collect arguments if they have a unique type, which ensures
        # reasonable performance even with a long list of possibly overloaded
        # arguments.
        #
        # NB: Important to exclude _disabled_torch_function_impl, otherwise
        # https://github.com/pytorch/pytorch/issues/64687
        if (
            arg_type not in overloaded_types
            and hasattr(arg_type, "__torch_function__")
            and arg_type.__torch_function__ != torch._C._disabled_torch_function_impl
        ):
            # Create lists explicitly for the first type (usually the only one
            # done) to avoid setting up the iterator for overloaded_args.
            if overloaded_types:
                overloaded_types.add(arg_type)
                # By default, insert argument at the end, but if it is
                # subclass of another argument, insert it before that argument.
                # This ensures "subclasses before superclasses".
                index = len(overloaded_args)
                for i, old_arg in enumerate(overloaded_args):
                    if issubclass(arg_type, get_type_fn(old_arg)):
                        index = i
                        break
                overloaded_args.insert(index, arg)
            else:
                overloaded_types = {arg_type}
                overloaded_args = [arg]
    return overloaded_args
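The "subclasses before superclasses" insertion rule is the subtle part of this function. It can be demonstrated without torch; ``collect_overloaded`` below is a hypothetical stand-in that applies the same NEP-0018 ordering to any objects exposing a ``__my_function__`` hook:

```python
# Torch-free sketch of the NEP-0018 precedence rule used by
# _get_overloaded_args: arguments keep left-to-right order, except that an
# argument whose type subclasses a previously seen type is inserted *before*
# that earlier argument ("subclasses before superclasses").
def collect_overloaded(relevant_args):
    overloaded_types = set()
    overloaded_args = []
    for arg in relevant_args:
        arg_type = type(arg)
        # Only the first occurrence of each distinct type is considered.
        if arg_type not in overloaded_types and hasattr(arg_type, "__my_function__"):
            overloaded_types.add(arg_type)
            index = len(overloaded_args)
            for i, old_arg in enumerate(overloaded_args):
                if issubclass(arg_type, type(old_arg)):
                    index = i  # insert before the superclass instance
                    break
            overloaded_args.insert(index, arg)
    return overloaded_args


class Base:
    def __my_function__(self):
        ...


class Sub(Base):
    pass


base, sub = Base(), Sub()
# `base` comes first in the argument list, but Sub subclasses Base, so the
# Sub instance is moved ahead of it in the dispatch order.
print([type(a).__name__ for a in collect_overloaded([base, sub])])
```

This ordering is what lets a subclass override behavior even when a superclass instance appears earlier in the call.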


def handle_torch_function(
    public_api: Callable,
    relevant_args: Iterable[Any],
    *args,
    **kwargs,
) -> Any:
    """Implement a function with checks for ``__torch_function__`` overrides.

    See torch::autograd::handle_torch_function for the equivalent of this
    function in the C++ implementation.

    Arguments
    ---------
    public_api : function
        Function exposed by the public torch API originally called like
        ``public_api(*args, **kwargs)`` on which arguments are now being
        checked.
    relevant_args : iterable
        Iterable of arguments to check for __torch_function__ methods.
    args : tuple
        Arbitrary positional arguments originally passed into ``public_api``.
    kwargs : tuple
        Arbitrary keyword arguments originally passed into ``public_api``.

    Returns
    -------
    object
        Result from calling ``implementation`` or an ``__torch_function__``
        method, as appropriate.

    Raises
    ------
    TypeError : if no implementation is found.

    Example
    -------
    >>> def func(a):
    ...     if has_torch_function_unary(a):
    ...         return handle_torch_function(func, (a,), a)
    ...     return a + 0
    """
    # Check for __torch_function__ methods.
    overloaded_args = _get_overloaded_args(relevant_args)
    # overloaded_args already have unique types.
    types = tuple(map(type, overloaded_args))

    # Check for __torch_function__ mode.
    if _is_torch_function_mode_enabled():
        # if we're here, the mode must be set to a TorchFunctionStackMode
        # this unsets it and calls directly into TorchFunctionStackMode's torch function
        with _pop_mode_temporarily() as mode:
            result = mode.__torch_function__(public_api, types, args, kwargs)
        if result is not NotImplemented:
            return result

    # Call overrides
    for overloaded_arg in overloaded_args:
        # This call needs to become a classmethod call in the future.
        # See https://github.com/pytorch/pytorch/issues/63767
        torch_func_method = overloaded_arg.__torch_function__
        if (
            hasattr(torch_func_method, "__self__")
            and torch_func_method.__self__ is overloaded_arg
            and torch_func_method is not torch._C._disabled_torch_function_impl
        ):
            warnings.warn(
                "Defining your `__torch_function__` as a plain method is deprecated "
                "and will be an error in the future, please define it as a classmethod.",
                DeprecationWarning,
            )

        # Use `public_api` instead of `implementation` so __torch_function__
        # implementations can do equality/identity comparisons.
        result = torch_func_method(public_api, types, args, kwargs)

        if result is not NotImplemented:
            return result

    func_name = f"{public_api.__module__}.{public_api.__name__}"
    msg = (
        f"no implementation found for '{func_name}' on types that implement "
        f"__torch_function__: {[type(arg) for arg in overloaded_args]}"
    )
    if _is_torch_function_mode_enabled():
        msg += f" nor in mode {_get_current_function_mode()}"
    raise TypeError(msg)
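Stripped of modes and deprecation handling, the override loop above reduces to: try each candidate's hook in precedence order, treat ``NotImplemented`` as "pass", and raise ``TypeError`` when every candidate declines. A torch-free sketch, again with a hypothetical ``__my_function__`` hook in place of ``__torch_function__``:

```python
# Torch-free sketch of handle_torch_function's dispatch loop: each overloaded
# argument's hook is tried in order; returning NotImplemented defers to the
# next candidate, and TypeError is raised when no candidate handles the call.
def dispatch(public_api, overloaded_args, args, kwargs):
    types = tuple(map(type, overloaded_args))
    for arg in overloaded_args:
        result = type(arg).__my_function__(public_api, types, args, kwargs)
        if result is not NotImplemented:
            return result
    raise TypeError(f"no implementation found for {public_api.__name__!r}")


class Declines:
    @classmethod
    def __my_function__(cls, func, types, args, kwargs):
        return NotImplemented  # defer to the next overloaded argument


class Handles:
    @classmethod
    def __my_function__(cls, func, types, args, kwargs):
        return f"{cls.__name__} handled {func.__name__}"


def mul(a, b):
    return a * b


# The first candidate declines, so dispatch falls through to the second.
print(dispatch(mul, [Declines(), Handles()], (2, 3), {}))
```

Passing ``public_api`` (rather than an internal implementation) into each hook is what lets overrides compare the incoming function against known torch API entry points by identity.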


has_torch_function = _add_docstr(
    _has_torch_function,
    r"""Check for __torch_function__ implementations in the elements of an iterable
    or if a __torch_function__ mode is enabled. Considers exact ``Tensor`` s
    and ``Parameter`` s non-dispatchable. Use this to guard a call to
    :func:`handle_torch_function`; don't use it to test if something
    is Tensor-like, use :func:`is_tensor_like` instead.

    Arguments
    ---------
    relevant_args : iterable
        Iterable of arguments to check for __torch_function__ methods.

    Returns
    -------
    bool
        True if any of the elements of relevant_args have __torch_function__
        implementations, False otherwise.

    See Also
    --------
    torch.is_tensor_like
        Checks if something is a Tensor-like, including an exact ``Tensor``.
    """,
)

has_torch_function_unary = _add_docstr(
    _has_torch_function_unary,
    r"""Special case of `has_torch_function` for single inputs.
    Instead of:
      `has_torch_function((t,))`
    call:
      `has_torch_function_unary(t)`
    which skips unnecessary packing and unpacking work.
    """,
)

has_torch_function_variadic = _add_docstr(
    _has_torch_function_variadic,
    r"""Special case of `has_torch_function` that skips tuple creation.

    This uses the METH_FASTCALL protocol introduced in Python 3.7.

    Instead of:
      `has_torch_function((a, b))`
    call:
      `has_torch_function_variadic(a, b)`
    which skips unnecessary packing and unpacking work.
    """,
)


@functools.lru_cache(None)
def _get_overridable_functions() -> (
    Tuple[Dict[Any, List[Callable]], Dict[Callable, str]]
):
    overridable_funcs = collections.defaultdict(list)
    index = {}
    tested_namespaces = [
        ("torch", torch, torch.__all__),
        ("torch.functional", torch.functional, torch.functional.__all__),
        ("torch.nn.functional", torch.nn.functional, dir(torch.nn.functional)),
        ("torch.nn.init", torch.nn.init, dir(torch.nn.init)),
        ("torch.Tensor", torch.Tensor, dir(torch.Tensor)),
        ("torch.linalg", torch.linalg, dir(torch.linalg)),
        ("torch.fft", torch.fft, dir(torch.fft)),
        ("torch.special", torch.special, dir(torch.special)),
    ]
    for namespace_str, namespace, ns_funcs in tested_namespaces:
        for func_name in ns_funcs:
            ignore = False
            # ignore private functions or functions that are deleted in torch.__init__
            if namespace is not torch.Tensor:
                if func_name.startswith("__"):
                    continue
                elif func_name.startswith("_"):
                    ignore = True
                elif func_name.endswith("_"):
                    ignore = True
                elif not func_name[0].islower():
                    ignore = True
                elif func_name == "unique_dim":
                    continue
            else:
                func = getattr(namespace, func_name)
                if getattr(object, func_name, None) == func:
                    continue
                if func_name == "__weakref__":
                    continue
            func = getattr(namespace, func_name)
            if namespace is torch.Tensor and getattr(object, func_name, None) == func:
                continue
            # ignore re-exported modules
            if isinstance(func, types.ModuleType):
                continue
            # ignore __future__ imports
            if isinstance(func, __future__._Feature):
                continue

            if not callable(func) and hasattr(func, "__get__"):
                index[func.__get__] = f"{namespace_str}.{func_name}.__get__"
                index[func.__set__] = f"{namespace_str}.{func_name}.__set__"
                if ignore:
                    continue
                if func.__get__ in get_ignored_functions():
                    msg = (
                        "{}.{} is in the tuple returned by torch._overrides.get_ignored_functions "
                        "but still has an explicit override"
                    )
                    assert func.__get__ not in get_testing_overrides(), msg.format(
                        namespace, func.__name__
                    )
                    continue
                else:
                    overridable_funcs[func].append(func.__get__)
                    continue

            if not callable(func):
                continue

            index[func] = f"{namespace_str}.{func_name}"

            if ignore:
                continue

            # cannot be overridden by __torch_function__
            if func in get_ignored_functions():
                msg = (
                    "{}.{} is in the tuple returned by torch._overrides.get_ignored_functions "
                    "but still has an explicit override"
                )
                assert func not in get_testing_overrides(), msg.format(
                    namespace, func.__name__
                )
                continue
            overridable_funcs[namespace].append(func)
    return overridable_funcs, index


@_disable_user_warnings
def get_overridable_functions() -> Dict[Any, List[Callable]]:
    """List functions that are overridable via __torch_function__

    Returns
    -------
    Dict[Any, List[Callable]]
        A dictionary that maps namespaces that contain overridable functions
        to functions in that namespace that can be overridden.
    """
    return _get_overridable_functions()[0]


@_disable_user_warnings
def resolve_name(f):
    """Get a human readable string name for a function passed to
    __torch_function__

    Arguments
    ---------
    f : Callable
        Function to resolve the name of.

    Returns
    -------
    str
        Name of the function; if eval'ed it should give back the input
        function.
    """
    if isinstance(f, (torch._ops.OpOverload, torch._ops.OpOverloadPacket)):
        return str(f)
    return _get_overridable_functions()[1].get(f)


@functools.lru_cache(None)
def _get_tensor_methods() -> Set[Callable]:
    """Returns a set of the overridable methods on ``torch.Tensor``"""
    overridable_funcs = get_overridable_functions()
    methods = set(overridable_funcs[torch.Tensor])
    return methods


@_disable_user_warnings
def is_tensor_method_or_property(func: Callable) -> bool:
    """
    Returns True if the function passed in is a handler for a
    method or property belonging to ``torch.Tensor``, as passed
    into ``__torch_function__``.

    .. note::
       For properties, their ``__get__`` method must be passed in.

    This may be needed, in particular, for the following reasons:

    1. Methods/properties sometimes don't contain a `__module__` slot.
    2. They require that the first passed-in argument is an instance
       of ``torch.Tensor``.

    Examples
    --------
    >>> is_tensor_method_or_property(torch.Tensor.add)
    True
    >>> is_tensor_method_or_property(torch.add)
    False
    """
    return func in _get_tensor_methods() or func.__name__ == "__get__"


def is_tensor_like(inp):
    """
    Returns ``True`` if the passed-in input is a Tensor-like.

    Currently, this occurs whenever there's a ``__torch_function__``
    attribute on the type of the input.

    Examples
    --------
    A subclass of tensor is generally a Tensor-like.

    >>> class SubTensor(torch.Tensor): ...
    >>> is_tensor_like(SubTensor([0]))
    True

    Built-in or user types aren't usually Tensor-like.

    >>> is_tensor_like(6)
    False
    >>> is_tensor_like(None)
    False
    >>> class NotATensor: ...
    >>> is_tensor_like(NotATensor())
    False

    But, they can be made Tensor-like by implementing __torch_function__.

    >>> class TensorLike:
    ...     @classmethod
    ...     def __torch_function__(cls, func, types, args, kwargs):
    ...         return -1
    >>> is_tensor_like(TensorLike())
    True
    """
    return type(inp) is torch.Tensor or hasattr(inp, "__torch_function__")


class TorchFunctionMode:
    """
    A ``TorchFunctionMode`` allows you to override the meaning of all
    ``__torch_function__`` overridable functions within a dynamic scope,
    without having to actually create a tensor subclass or manually
    monkey-patch functions in the PyTorch API. Some common situations
    where you should use a mode:

    * You want to override the meaning of factory functions, or other
      functions that do not otherwise take a tensor as an argument
      (these cannot be overridden with tensor subclasses).

    * You want to override the behavior of all functions without needing
      to wrap your inputs in tensor subclasses; e.g., if you are just
      interested in logging intermediate computations.

    * You want to control the order of execution of various tensor
      subclasses explicitly, rather than implicitly via the return of
      ``NotImplemented``.

    Independent subclasses of :class:`TorchFunctionMode` are compositional:
    modes can be pushed onto a stack using ``with MyMode():``.
    When you call functions in the PyTorch API inside your
    ``__torch_function__`` implementation, by default, they will forward on to
    the next mode on the mode stack. If you want to recursively call back into
    your current ``__torch_function__`` implementation, either explicitly
    invoke ``self.__torch_function__(...)``, or use the context manager
    ``enable_torch_function_mode(self, replace=self.inner)`` to make PyTorch
    API self-referential (beware of infinite loops, in this case!)
    """

    inner: "TorchFunctionMode"

    # Force metaclass to generate constructor at the base of the hierarchy
    def __init__(self) -> None:
        pass

    def __torch_function__(self, func, types, args=(), kwargs=None):
        raise NotImplementedError

    def __enter__(self):
        _push_mode(self)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        _pop_mode()

    @classmethod
    def push(cls, *args, **kwargs):
        warnings.warn(
            "`Mode.push()` is no longer necessary and can be replaced with just `with Mode()`"
        )
        instance = cls(*args, **kwargs)
        return instance


def _get_current_function_mode():
    stack_len = _len_torch_function_stack()
    return _get_function_stack_at(stack_len - 1) if stack_len > 0 else None


def _get_current_function_mode_stack():
    stack_len = _len_torch_function_stack()
    return [_get_function_stack_at(i) for i in range(stack_len)]


def _push_mode(mode):
    _push_on_torch_function_stack(mode)


def _pop_mode():
    old = _pop_torch_function_stack()
    return old


@contextlib.contextmanager
def _pop_mode_temporarily():
    old = _pop_mode()
    try:
        yield old
    finally:
        _push_mode(old)


class BaseTorchFunctionMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        return func(*args, **kwargs)
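# As a usage sketch (not part of this module): the docstring above describes
# overriding factory functions and logging intermediate computations with a
# mode. A minimal logging mode might look like the following; ``LoggingMode``
# is a hypothetical name chosen for illustration.

```python
import torch
from torch.overrides import TorchFunctionMode, resolve_name


class LoggingMode(TorchFunctionMode):
    """Print every overridable torch API call made while the mode is active."""

    def __torch_function__(self, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        # resolve_name maps the function object back to a dotted path
        # (it may return None for functions it does not know about).
        print(f"calling {resolve_name(func)}")
        # Forward to the default behavior (and to any outer modes).
        return func(*args, **kwargs)


with LoggingMode():
    x = torch.ones(2)  # factory functions are intercepted too
    y = x + x
```

# Note that, unlike a tensor subclass, the mode sees ``torch.ones`` even
# though no tensor argument is involved in that call.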


@contextlib.contextmanager
def enable_reentrant_dispatch():
    # NB: this can't simply be
    # `enable_reentrant_dispatch = torch._C._RestorePythonTLSSnapshot`
    # because:
    # 1. torch._C._RestorePythonTLSSnapshot is unavailable when this file
    #    initially gets imported. Probably an import order thing.
    # 2. enable_reentrant_dispatch is technically public API; assigning
    #    it the object would change the __module__ to look private.
    with torch._C._RestorePythonTLSSnapshot():
        try:
            yield
        finally:
            pass
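# To illustrate how the introspection helpers defined above fit together,
# a small sketch (assuming a standard PyTorch install):

```python
import torch
from torch.overrides import (
    is_tensor_like,
    is_tensor_method_or_property,
    resolve_name,
)

# resolve_name maps a function object back to its public dotted path.
name = resolve_name(torch.add)

# Tensor method handlers are distinguished from namespace functions.
method = is_tensor_method_or_property(torch.Tensor.add)  # True
free_func = is_tensor_method_or_property(torch.add)      # False

# Anything whose type exposes __torch_function__ counts as Tensor-like.
tensor_like = is_tensor_like(torch.ones(1))      # True
not_tensor_like = is_tensor_like("not a tensor")  # False
```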