![PyTorch Logo](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/pytorch-logo-dark.png)

--------------------------------------------------------------------------------

PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.

Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/main).

<!-- toc -->

- [More About PyTorch](#more-about-pytorch)
  - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
  - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
  - [Python First](#python-first)
  - [Imperative Experiences](#imperative-experiences)
  - [Fast and Lean](#fast-and-lean)
  - [Extensions Without Pain](#extensions-without-pain)
- [Installation](#installation)
  - [Binaries](#binaries)
    - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
  - [From Source](#from-source)
    - [Prerequisites](#prerequisites)
      - [NVIDIA CUDA Support](#nvidia-cuda-support)
      - [AMD ROCm Support](#amd-rocm-support)
      - [Intel GPU Support](#intel-gpu-support)
    - [Get the PyTorch Source](#get-the-pytorch-source)
    - [Install Dependencies](#install-dependencies)
    - [Install PyTorch](#install-pytorch)
      - [Adjust Build Options (Optional)](#adjust-build-options-optional)
  - [Docker Image](#docker-image)
    - [Using pre-built images](#using-pre-built-images)
    - [Building the image yourself](#building-the-image-yourself)
  - [Building the Documentation](#building-the-documentation)
  - [Previous Versions](#previous-versions)
- [Getting Started](#getting-started)
- [Resources](#resources)
- [Communication](#communication)
- [Releases and Contributing](#releases-and-contributing)
- [The Team](#the-team)
- [License](#license)

<!-- tocstop -->

## More About PyTorch

[Learn the basics of PyTorch](https://pytorch.org/tutorials/beginner/basics/intro.html)

At a granular level, PyTorch is a library that consists of the following components:

| Component | Description |
| ---- | --- |
| [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support |
| [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
| [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
| [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility |
| [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
| [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience |

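For example, a minimal sketch of `torch.utils.data.DataLoader` over a toy in-memory dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# toy data, purely illustrative: 100 samples with 3 features and a binary label
dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for features, labels in loader:
    pass  # each iteration yields one shuffled batch
```
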
Usually, PyTorch is used either as:

- A replacement for NumPy to use the power of GPUs.
- A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

### A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

![Tensor illustration](./docs/source/_static/img/tensor_illustration.png)

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs
such as slicing, indexing, mathematical operations, linear algebra, and reductions.
And they are fast!

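A minimal sketch of these routines in action (the last step assumes a CUDA-capable GPU is available):

```python
import torch

a = torch.randn(1000, 1000)   # a CPU tensor, NumPy-like
b = a @ a.T + 1.0             # mathematical ops and linear algebra
print(b[:5, :5].sum())        # slicing, indexing, reductions

if torch.cuda.is_available():
    a_gpu = a.to("cuda")      # same API, now GPU-accelerated
    b_gpu = a_gpu @ a_gpu.T + 1.0
```
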
### Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.
One has to build a neural network and reuse the same structure again and again.
Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to
change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes
from several research papers on this topic, as well as current and past work such as
[torch-autograd](https://github.com/twitter/torch-autograd),
[autograd](https://github.com/HIPS/autograd),
[Chainer](https://chainer.org), etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
You get the best of speed and flexibility for your crazy research.

![Dynamic graph](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/dynamic_graph.gif)

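A short sketch of what "dynamic" means in practice: ordinary Python control flow decides, at run time, how many operations land on the tape:

```python
import torch

x = torch.randn(3, requires_grad=True)

# the number of recorded operations depends on runtime data
y = x
while y.norm() < 10:
    y = y * 2

y.sum().backward()   # reverse-mode autodiff replays the tape
print(x.grad)        # gradients reflect however many doublings ran
```
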
### Python First

PyTorch is not a Python binding into a monolithic C++ framework.
It is built to be deeply integrated into Python.
You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
You can write your new neural network layers in Python itself, using your favorite libraries
and packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/).
Our goal is to not reinvent the wheel where appropriate.

### Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use.
When you execute a line of code, it gets executed. There isn't an asynchronous view of the world.
When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.
The stack trace points to exactly where your code was defined.
We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.

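A small sketch of this eager behavior:

```python
import torch

x = torch.ones(2, 3)
y = x + 1          # executes immediately; nothing is deferred
print(y)           # values are available right away

try:
    x @ x          # a shape mismatch fails on this exact line...
except RuntimeError as e:
    print(e)       # ...with an error that names the actual shapes
```
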
### Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries
such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed.
At the core, its CPU and GPU Tensor and neural network backends
are mature and have been tested for years.

Hence, PyTorch is quite fast — whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.
We've written custom memory allocators for the GPU to make sure that
your deep learning models are maximally memory efficient.
This enables you to train bigger deep learning models than before.

### Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, is designed to be straightforward,
with minimal abstractions.

You can write new neural network layers in Python using the torch API
[or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and has minimal boilerplate.
No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).

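As a sketch of the Python route, a custom autograd operation only needs a `forward` and a `backward`:

```python
import torch

class Exp(torch.autograd.Function):
    """A custom op: forward computes exp(x); backward reuses the saved result."""

    @staticmethod
    def forward(ctx, x):
        result = x.exp()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result   # d/dx exp(x) = exp(x)

x = torch.randn(4, requires_grad=True)
Exp.apply(x).sum().backward()
print(x.grad)
```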

## Installation

### Binaries
Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)


#### NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch).

They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.


### From Source

#### Prerequisites
If you are installing from source, you will need:
- Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
- A compiler that fully supports C++17, such as clang or gcc (on Linux, gcc 9.4.0 or newer is required)
- Visual Studio or Visual Studio Build Tools on Windows

\* PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,
Professional, or Community Editions. You can also install the build tools from
https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*
come with Visual Studio Code by default.

\* We highly recommend installing an [Anaconda](https://www.anaconda.com/download) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.

An example of environment setup is shown below:

* Linux:

```bash
source <CONDA_INSTALL_DIR>/bin/activate
conda create -y -n <CONDA_NAME>
conda activate <CONDA_NAME>
```

* Windows:

```bash
source <CONDA_INSTALL_DIR>\Scripts\activate.bat
conda create -y -n <CONDA_NAME>
conda activate <CONDA_NAME>
call "C:\Program Files\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
```

##### NVIDIA CUDA Support
If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:
- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
- [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA

Note: You can refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/reference/support-matrix.html) for the cuDNN versions compatible with the various supported CUDA versions, CUDA drivers, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
Other potentially useful environment variables may be found in `setup.py`.

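For instance, a sketch of a CPU-only source build (the build step itself is covered in [Install PyTorch](#install-pytorch) below):

```bash
export USE_CUDA=0        # skip CUDA even if a toolkit is installed
python setup.py develop
```
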
If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).

##### AMD ROCm Support
If you want to compile with ROCm support, install
- [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 or above

ROCm is currently supported only for Linux systems.

If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
Other potentially useful environment variables may be found in `setup.py`.

##### Intel GPU Support
If you want to compile with Intel GPU support, follow the
[PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html) instructions.
Intel GPU is supported for Linux and Windows.

If you want to disable Intel GPU support, export the environment variable `USE_XPU=0`.
Other potentially useful environment variables may be found in `setup.py`.

#### Get the PyTorch Source
```bash
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

#### Install Dependencies

**Common**

```bash
conda install cmake ninja
# Run this command on native Windows
conda install rust
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section above
pip install -r requirements.txt
```

**On Linux**

```bash
pip install mkl-static mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda121  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
# For Intel GPU support, please explicitly `export USE_XPU=1` before running this command.
make triton
```

**On macOS**

```bash
# Add this package on Intel x86 processor machines only
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
```

**On Windows**

```bash
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39
```

#### Install PyTorch
**On Linux**

If you would like to compile PyTorch with [new C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) enabled, then first run this command:
```bash
export _GLIBCXX_USE_CXX11_ABI=1
```

Please **note** that starting from PyTorch 2.5, the PyTorch build with XPU supports both new and old C++ ABIs. Previously, XPU only supported the new C++ ABI. If you want to compile with Intel GPU support, please follow [Intel GPU Support](#intel-gpu-support).

If you're compiling for AMD ROCm, then first run this command:
```bash
# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py
```

Install PyTorch:
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop
```

> _Aside:_ If you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
>
> ```plaintext
> build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
> collect2: error: ld returned 1 exit status
> error: command 'g++' failed with exit status 1
> ```
>
> This is caused by `ld` from the Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.

**On macOS**

```bash
python3 setup.py develop
```

**On Windows**

If you want to build legacy Python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda).

**CPU-only builds**

In this mode PyTorch computations will run on your CPU, not your GPU.

```cmd
python setup.py develop
```

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instructions [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) are an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.

**CUDA based build**

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.

[NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA, run the CUDA installation once again and check the corresponding checkbox.
Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017/2019 and Ninja are supported as CMake generators. If `ninja.exe` is detected in `PATH`, then Ninja will be used as the default generator; otherwise, VS 2017/2019 will be used.
<br/> If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.

Additional libraries such as
[Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/main/.ci/pytorch/win-test-helpers/installation-helpers) to install them.

You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for other environment variable configurations.


```cmd
cmd

:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: otherwise CMake will throw an error: `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop

```

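Whichever platform you built on, a quick sanity check of the install (a minimal sketch; run it from outside the source tree so Python imports the installed package):

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # True only for a working CUDA build
```
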
##### Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) by doing
the following. For example, adjusting the pre-detected directories for cuDNN or BLAS can be done
with such a step.

On Linux
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build
```

On macOS
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build
```

### Docker Image

#### Using pre-built images

You can also pull a pre-built docker image from Docker Hub and run it with Docker v19.03+:

```bash
docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
```

Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
for multithreaded data loaders) the default shared memory segment size that the container runs with is not enough, and you
should increase the shared memory size with either the `--ipc=host` or `--shm-size` command line option to `docker run`.

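For example (the `8g` value is illustrative; size it to your workload):

```bash
# give the container 8 GB of shared memory for DataLoader workers
docker run --gpus all --rm -ti --shm-size=8g pytorch/pytorch:latest
```
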
#### Building the image yourself

**NOTE:** Must be built with a Docker version > 18.06

The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.
You can pass the `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it
unset to use the default.

```bash
make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch
```

You can also pass the `CMAKE_VARS="..."` environment variable to specify additional CMake variables to be passed to CMake during the build.
See [setup.py](./setup.py) for the list of available variables.

```bash
CMAKE_VARS="BUILD_CAFFE2=0 BUILD_TEST=0" make -f docker.Makefile
```

### Building the Documentation

To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the
readthedocs theme.

```bash
cd docs/
pip install -r requirements.txt
```
You can then build the documentation by running `make <format>` from the
`docs/` folder. Run `make` to get a list of all available output formats.

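For example, to build the HTML docs:

```bash
make html
```
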
If you get a katex error, run `npm install katex`. If it persists, try
`npm install -g katex`.

> Note: if you installed `nodejs` with a different package manager (e.g.,
> `conda`) then `npm` will probably install a version of `katex` that is not
> compatible with your version of `nodejs` and doc builds will fail.
> A combination of versions that is known to work is `node@6.13.0` and
> `katex@0.13.18`. To install the latter with `npm` you can run
> ```npm install -g katex@0.13.18```

### Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found
on [our website](https://pytorch.org/previous-versions).


## Getting Started

A few pointers to get you started:
- [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
- [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
- [The API Reference](https://pytorch.org/docs/)
- [Glossary](https://github.com/pytorch/pytorch/blob/main/GLOSSARY.md)

## Resources

* [PyTorch.org](https://pytorch.org/)
* [PyTorch Tutorials](https://pytorch.org/tutorials/)
* [PyTorch Examples](https://github.com/pytorch/examples)
* [PyTorch Models](https://pytorch.org/hub/)
* [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
* [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
* [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
* [PyTorch Twitter](https://twitter.com/PyTorch)
* [PyTorch Blog](https://pytorch.org/blog/)
* [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw)

## Communication
* Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
* GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.
* Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is [PyTorch Forums](https://discuss.pytorch.org). If you need a Slack invite, please fill in this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
* Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up here: https://eepurl.com/cbG0rv
* Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
* For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/)

## Releases and Contributing

Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.
Sending a PR without discussion might end up in a rejected PR because we might be taking the core in a different direction than you are aware of.

To learn more about making a contribution to PyTorch, please see our [Contribution page](CONTRIBUTING.md). For more information about PyTorch releases, see the [Release page](RELEASE.md).

## The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by [Soumith Chintala](http://soumith.ch), [Gregory Chanan](https://github.com/gchanan), [Dmytro Dzhulgakov](https://github.com/dzhulgakov), [Edward Yang](https://github.com/ezyang), and [Nikita Shulga](https://github.com/malfet), with major contributions coming from hundreds of talented individuals in various forms and means.
A non-exhaustive but growing list needs to mention: [Trevor Killeen](https://github.com/killeent), [Sasank Chilamkurthy](https://github.com/chsasank), [Sergey Zagoruyko](https://github.com/szagoruyko), [Adam Lerer](https://github.com/adamlerer), [Francisco Massa](https://github.com/fmassa), [Alykhan Tejani](https://github.com/alykhantejani), [Luca Antiga](https://github.com/lantiga), [Alban Desmaison](https://github.com/albanD), [Andreas Koepf](https://github.com/andreaskoepf), [James Bradbury](https://github.com/jamesb93), [Zeming Lin](https://github.com/ebetica), [Yuandong Tian](https://github.com/yuandong-tian), [Guillaume Lample](https://github.com/glample), [Marat Dukhan](https://github.com/Maratyszcza), [Natalia Gimelshein](https://github.com/ngimel), [Christian Sarofeen](https://github.com/csarofeen), [Martin Raison](https://github.com/martinraison), [Edward Yang](https://github.com/ezyang), [Zachary Devito](https://github.com/zdevito).

Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch) with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.

## License

PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.