.. currentmodule:: torch

.. _tensor-view-doc:

Tensor Views
============

PyTorch allows a tensor to be a ``View`` of an existing tensor. A view tensor
shares the same underlying data with its base tensor. Supporting ``View`` avoids
explicit data copies, which allows fast and memory-efficient reshaping, slicing,
and element-wise operations.

For example, to get a view of an existing tensor ``t``, you can call ``t.view(...)``.

::

    >>> t = torch.rand(4, 4)
    >>> b = t.view(2, 8)
    >>> t.storage().data_ptr() == b.storage().data_ptr()  # `t` and `b` share the same underlying data.
    True
    # Modifying the view tensor changes the base tensor as well.
    >>> b[0][0] = 3.14
    >>> t[0][0]
    tensor(3.14)

Since views share underlying data with their base tensor, if you edit the data
in the view, it will be reflected in the base tensor as well.

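The relationship is symmetric: writes through the base tensor are visible
through any of its views as well. A minimal sketch, in the same style as the
example above:

::

    >>> t = torch.rand(4, 4)
    >>> b = t.view(2, 8)
    # Modifying the base tensor changes the view as well.
    >>> t[0][0] = 1.0
    >>> b[0][0]
    tensor(1.)
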
Typically a PyTorch op returns a new tensor as output, e.g. :meth:`~torch.Tensor.add`.
In the case of view ops, however, the outputs are views of the input tensors, which avoids
unnecessary data copies. No data movement occurs when creating a view; the view tensor
just changes the way it interprets the same data. Taking a view of a contiguous tensor can
produce a non-contiguous tensor. Users should pay additional attention, since contiguity
can have an implicit performance impact. :meth:`~torch.Tensor.transpose` is a common example.

35::
36
37    >>> base = torch.tensor([[0, 1],[2, 3]])
38    >>> base.is_contiguous()
39    True
40    >>> t = base.transpose(0, 1)  # `t` is a view of `base`. No data movement happened here.
41    # View tensors might be non-contiguous.
42    >>> t.is_contiguous()
43    False
44    # To get a contiguous tensor, call `.contiguous()` to enforce
45    # copying data when `t` is not contiguous.
46    >>> c = t.contiguous()
47
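Contiguity also determines which further view ops are possible: for example, a
non-contiguous tensor generally cannot be flattened with
:meth:`~torch.Tensor.view`, and a copy (via :meth:`~torch.Tensor.contiguous` or
:meth:`~torch.Tensor.reshape`) is needed first. A minimal sketch, continuing
the example above:

::

    >>> t.view(4)  # `t`'s strides are incompatible with this shape; raises an error
    RuntimeError: view size is not compatible with input tensor's size and stride ...
    >>> t.contiguous().view(4)  # copy the data first, then take the view
    tensor([0, 2, 1, 3])
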
For reference, here's a full list of view ops in PyTorch (a short sketch
illustrating one of them follows the list):

- Basic slicing and indexing ops, e.g. ``tensor[0, 2:, 1:7:2]``, return a view of the base ``tensor``; see the note below.
- :meth:`~torch.Tensor.adjoint`
- :meth:`~torch.Tensor.as_strided`
- :meth:`~torch.Tensor.detach`
- :meth:`~torch.Tensor.diagonal`
- :meth:`~torch.Tensor.expand`
- :meth:`~torch.Tensor.expand_as`
- :meth:`~torch.Tensor.movedim`
- :meth:`~torch.Tensor.narrow`
- :meth:`~torch.Tensor.permute`
- :meth:`~torch.Tensor.select`
- :meth:`~torch.Tensor.squeeze`
- :meth:`~torch.Tensor.transpose`
- :meth:`~torch.Tensor.t`
- :attr:`~torch.Tensor.T`
- :attr:`~torch.Tensor.H`
- :attr:`~torch.Tensor.mT`
- :attr:`~torch.Tensor.mH`
- :attr:`~torch.Tensor.real`
- :attr:`~torch.Tensor.imag`
- :meth:`~torch.Tensor.view_as_real`
- :meth:`~torch.Tensor.unflatten`
- :meth:`~torch.Tensor.unfold`
- :meth:`~torch.Tensor.unsqueeze`
- :meth:`~torch.Tensor.view`
- :meth:`~torch.Tensor.view_as`
- :meth:`~torch.Tensor.unbind`
- :meth:`~torch.Tensor.split`
- :meth:`~torch.Tensor.hsplit`
- :meth:`~torch.Tensor.vsplit`
- :meth:`~torch.Tensor.tensor_split`
- :meth:`~torch.Tensor.split_with_sizes`
- :meth:`~torch.Tensor.swapaxes`
- :meth:`~torch.Tensor.swapdims`
- :meth:`~torch.Tensor.chunk`
- :meth:`~torch.Tensor.indices` (sparse tensor only)
- :meth:`~torch.Tensor.values` (sparse tensor only)

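As one illustration from the list above, :meth:`~torch.Tensor.expand` returns
a view by giving the broadcast dimension a stride of 0, so no data is copied.
A minimal sketch:

::

    >>> col = torch.tensor([[1.], [2.]])
    >>> e = col.expand(2, 3)  # view: the broadcast dimension gets stride 0
    >>> e.storage().data_ptr() == col.storage().data_ptr()
    True
    >>> e.stride()
    (1, 0)
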
.. note::
   When accessing the contents of a tensor via indexing, PyTorch follows NumPy behavior:
   basic indexing returns views, while advanced indexing returns a copy.
   Assignment via either basic or advanced indexing is in-place. See more examples in
   the `NumPy indexing documentation <https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>`_.

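A minimal sketch of the distinction described in the note:

::

    >>> t = torch.zeros(3, 3)
    >>> v = t[0, :]    # basic indexing: `v` is a view of `t`
    >>> v[0] = 1.0
    >>> t[0][0]        # the write is visible through the base
    tensor(1.)
    >>> c = t[[0, 1]]  # advanced indexing: `c` is a copy
    >>> c[0][0] = 2.0
    >>> t[0][0]        # the base is unchanged
    tensor(1.)
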
It's also worth mentioning a few ops with special behaviors:

- :meth:`~torch.Tensor.reshape`, :meth:`~torch.Tensor.reshape_as` and :meth:`~torch.Tensor.flatten` can return either a view or a new tensor; user code shouldn't rely on whether or not a view is returned (see the sketch below).
- :meth:`~torch.Tensor.contiguous` returns **itself** if the input tensor is already contiguous, otherwise it returns a new contiguous tensor by copying the data.

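A minimal sketch of :meth:`~torch.Tensor.reshape` returning a view when it can
and a copy when it must:

::

    >>> base = torch.tensor([[0, 1], [2, 3]])
    >>> r = base.reshape(4)  # `base` is contiguous, so this can be a view
    >>> r.storage().data_ptr() == base.storage().data_ptr()
    True
    >>> t = base.transpose(0, 1)
    >>> r = t.reshape(4)     # `t` is non-contiguous, so the data must be copied
    >>> r.storage().data_ptr() == t.storage().data_ptr()
    False
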
For a more detailed walk-through of PyTorch's internal implementation,
please refer to `ezyang's blogpost about PyTorch Internals <http://blog.ezyang.com/2019/05/pytorch-internals/>`_.