# csrc

The csrc directory contains all of the code concerned with integration
with Python.  This is in contrast to lib, which contains the Torch
libraries that are Python agnostic.  csrc depends on lib, but not vice
versa.

There are a number of utilities for easing integration with Python
which are worth knowing about; we briefly describe them here.  But
first, the most important gotchas:

* DO NOT forget to take out the GIL with `pybind11::gil_scoped_acquire`
  before calling the Python API or bringing a `THPObjectPtr` into scope.

* Make sure you include `Python.h` first in your header files, before
  any system headers; otherwise, you will get an
  `error: "_XOPEN_SOURCE" redefined` error.  If you pay attention to
  warnings, you will see where you need to do this.

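As a concrete illustration of the second gotcha, a header might order its includes like this (the file name and the later headers are hypothetical; the only point is that `Python.h` comes first):

```cpp
// my_extension.h -- hypothetical header, for illustration only.
// Python.h defines feature-test macros such as _XOPEN_SOURCE; if a
// system header is included first, that header sets those macros itself
// and the compiler then reports them as redefined.
#include <Python.h>   // always first

#include <cstdio>     // system/standard headers afterwards
#include <vector>
```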
## Notes

### Note [Storage is not nullptr]

Historically, Torch supported nullptr storage, as a minor optimization to
avoid having to allocate a storage object when it would be empty.
However, this turned out to be a confusing special case to deal with, so
by and large, PyTorch assumes that storage is, in fact, never nullptr.

One case where this assumption matters is when tracking the CUDA device
a tensor is stored on: this information lives solely in the storage, so
if a storage were nullptr, we would lose it.

Although storage is never nullptr, the data field of c10::StorageImpl may
be nullptr.  This mostly occurs when we want to pre-allocate an output
tensor struct, but then have it be resized and filled with data by some
operator: there's no point in allocating data for it in this case!

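The distinction above can be sketched with a small, std-only stand-in (this is not the real `c10::StorageImpl`; the names and layout are hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the storage *object* always exists, but it may
// hold no data yet.  Metadata such as the device index lives on the
// storage object, so it is never lost while the data field is empty.
struct StorageSketch {
  std::vector<char> buffer;  // empty => "data is nullptr" in the sketch
  int device_index = -1;     // device info survives even with no data

  bool has_data() const { return !buffer.empty(); }
  void resize_bytes(std::size_t n) { buffer.resize(n); }
};
```

Because the object itself always exists, an operator can later resize it and fill in data without any nullptr-storage special case.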
## Files

### `Exceptions.h`

Frequently when working with the Python API, you may call a function
which returns an error.  In this case, we want to return directly to the
Python interpreter, so that this exception can be propagated
accordingly; however, because the Python API is C-based, what actually
happens is that it returns control to whatever C++ code called it.
Similarly, if we raise a C++ exception, prior to returning to the Python
interpreter, we must set the Python error flags, so that it turns into a
Python exception.

Moreover, when using the following macros, the generated warnings
will be converted into Python warnings that can be caught by the user.

`Exceptions.h` defines helpers for two main cases:

* For code where you write the Python binding by hand, `HANDLE_TH_ERRORS`,
`END_HANDLE_TH_ERRORS` and an exception class `python_error`.  You call them like this:

```
// Entry point from Python interpreter
PyObject* run(PyObject* arg) {
  HANDLE_TH_ERRORS
  ...
  if (!x) throw python_error();
  // From c10/Exception.h
  TORCH_CHECK(cond, "cond was false here");
  TORCH_WARN("Warning message");
  ...
  END_HANDLE_TH_ERRORS
}
```

The `HANDLE_TH_ERRORS` macro will catch all exceptions and convert them
into an appropriate Python signal.  `python_error` is a special
exception which doesn't contain any info; instead it says, "An error
occurred in the Python API; if you return to the interpreter, Python
will raise that exception, and nothing else needs to be done."

* For code that you bind using pybind, `HANDLE_TH_ERRORS` and `END_HANDLE_TH_ERRORS_PYBIND`
can be used.  They work jointly with pybind error handling to raise
PyTorch errors and warnings natively and let pybind handle other errors.
They can be used as:

```
// Function given to the pybind binding
at::Tensor foo(at::Tensor x) {
  HANDLE_TH_ERRORS
  ...
  if (!x) throw python_error();
  // pybind native error
  if (!x) throw py::value_error();
  // From c10/Exception.h
  TORCH_CHECK(cond, "cond was false here");
  TORCH_WARN("Warning message");
  ...
  END_HANDLE_TH_ERRORS_PYBIND
}
```

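To make the mechanics of such macros concrete, here is a hedged, std-only sketch of the general pattern they follow: wrap the function body in try/catch, record the error somewhere the caller can inspect, and return a sentinel. The names and the global error slot are invented for illustration; the real macros set the CPython error indicator instead:

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// Hypothetical stand-ins for the real macros, using a plain string slot
// instead of the CPython error indicator.
static std::string g_last_error;

#define SKETCH_HANDLE_ERRORS try {
#define SKETCH_END_HANDLE_ERRORS                                   \
  } catch (const std::exception& e) {                              \
    g_last_error = e.what(); /* record the error for the caller */ \
    return nullptr;          /* sentinel, like NULL to CPython */  \
  }

// An "entry point": returns non-null on success, nullptr on error.
const char* run(bool fail) {
  SKETCH_HANDLE_ERRORS
  if (fail) throw std::runtime_error("cond was false here");
  return "ok";
  SKETCH_END_HANDLE_ERRORS
}
```

A caller that sees the sentinel return value can then consult the recorded error, which mirrors how the interpreter consults the error indicator after a binding returns `NULL`.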
### GIL

Whenever you make any calls to the Python API, you must have taken out
the Python GIL, as none of these calls are thread safe.
`pybind11::gil_scoped_acquire` is an RAII struct which handles taking and
releasing the GIL.  Use it like this:

```
void iWantToUsePython() {
  pybind11::gil_scoped_acquire gil;
  ...
}
```

In general, the compiler will NOT warn you if you use Python
functionality without taking out the GIL, so DO NOT FORGET this call.

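The RAII pattern that `gil_scoped_acquire` follows can be sketched with a `std::mutex` standing in for the GIL (a hypothetical stand-in, not the pybind11 implementation): acquisition happens in the constructor and release in the destructor, so the lock is dropped on every exit path from the scope, including exceptions.

```cpp
#include <cassert>
#include <mutex>

// Hypothetical sketch of the RAII pattern, with a std::mutex playing
// the role of the GIL.
std::mutex g_fake_gil;
int g_depth = 0;  // how many guards currently hold the "GIL"

struct fake_gil_acquire {
  fake_gil_acquire() { g_fake_gil.lock(); ++g_depth; }    // take on entry
  ~fake_gil_acquire() { --g_depth; g_fake_gil.unlock(); } // release on scope exit
};

int useLockedApi() {
  fake_gil_acquire gil;  // held for the rest of this scope
  return g_depth;        // the "GIL" is held here
}
```

After `useLockedApi()` returns, the guard has gone out of scope, so the lock is released without any explicit unlock call, which is exactly why forgetting the guard, rather than forgetting an unlock, is the failure mode to watch for.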
### `utils/object_ptr.h`

`THPPointer` is a smart pointer class analogous to `std::shared_ptr`,
but which is overloaded to handle the reference counting schemes of
various objects which are not based on `shared_ptr`.  The most important
overloads are:

* `PyObject` (so important we've aliased it as `THPObjectPtr`), which
  hooks into Python reference counting.  (By the way, that means you
  MUST take out the GIL before bringing one of these into scope!)

* The various TH tensor and storage types (e.g., `THTensor`), which
  hook into TH's reference counting.  (TH's reference counting
  IS thread safe; no locks necessary.)

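A minimal, hypothetical sketch of the intrusive pattern `THPPointer` wraps: an object carrying its own reference count, plus a pointer class whose copy constructor bumps the count and whose destructor decrements it, freeing the object at zero. This is illustrative only; the real template lives in `torch/csrc/utils/object_ptr.h` and delegates to each type's own incref/decref functions.

```cpp
#include <cassert>

// Hypothetical intrusively refcounted object: the count lives in the
// object itself, as with PyObject's ob_refcnt.
struct Counted {
  int refcount = 1;  // starts owned by its creator
};

class CountedPtr {
  Counted* ptr_;
 public:
  explicit CountedPtr(Counted* p = nullptr) : ptr_(p) {}  // steals the reference
  ~CountedPtr() {
    if (ptr_ && --ptr_->refcount == 0) delete ptr_;       // free at zero
  }
  CountedPtr(const CountedPtr& o) : ptr_(o.ptr_) {
    if (ptr_) ++ptr_->refcount;                           // copy bumps the count
  }
  CountedPtr& operator=(const CountedPtr&) = delete;      // kept minimal
  Counted* get() const { return ptr_; }
};

int refcount_after_copy() {
  CountedPtr a(new Counted());  // refcount == 1
  CountedPtr b(a);              // copy bumps it to 2
  return a.get()->refcount;     // both destructors run at scope exit
}
```

For the `PyObject` overload, the decrement in the destructor is what requires the GIL, which is why a `THPObjectPtr` must never outlive the scope of the guard that acquired it.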