Usage instructions for view() in PyTorch

  • 2021-09-16 07:18:45
  • OfStack

It is similar in purpose to resize() in numpy, but it is not used in quite the same way.

My understanding is:

view() conceptually arranges the data of the original tensor into one dimension in row-major order (this is possible because view() requires the underlying storage to be contiguous), and then regroups the elements into a tensor of the shape given by its arguments.
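As a small sketch of this flattening (variable names here are my own, not from the original post):

```python
import torch

# A 2x3 tensor; in row-major order its storage is 1, 2, 3, 4, 5, 6.
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

# view(-1) flattens into one dimension, row by row:
print(t.view(-1))
# tensor([1., 2., 3., 4., 5., 6.])
```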

For example, whether your original tensor is [[[1, 2, 3], [4, 5, 6]]] or [1, 2, 3, 4, 5, 6], both flatten to the same six-element 1-dimensional sequence, so as long as the arguments passed to view are the same, the results will be the same.

For example,


import torch

a=torch.Tensor([[[1,2,3],[4,5,6]]])
b=torch.Tensor([1,2,3,4,5,6])
print(a.view(1,6))
print(b.view(1,6))

Both print the same result:


tensor([[1., 2., 3., 4., 5., 6.]]) 

Look at another example:


a=torch.Tensor([[[1,2,3],[4,5,6]]])
print(a.view(3,2))

This yields:


tensor([[1., 2.],
    [3., 4.],
    [5., 6.]])

This is equivalent to filling the required shape with the elements 1, 2, 3, 4, 5, 6 in order. But if you want the following result instead:


tensor([[1., 4.],
    [2., 5.],
    [3., 6.]])

you need a different function: permute(). For details, see my other post, Usage of permute in PyTorch.
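To sketch the difference: view refills the flattened data in row-major order, while permute swaps the axes of an existing shape (using the tensor a defined above):

```python
import torch

a = torch.Tensor([[[1., 2., 3.], [4., 5., 6.]]])

# view(3, 2) just refills the flattened data row by row:
print(a.view(3, 2))
# tensor([[1., 2.],
#         [3., 4.],
#         [5., 6.]])

# view(2, 3) followed by permute(1, 0) transposes instead,
# turning the original rows into columns:
print(a.view(2, 3).permute(1, 0))
# tensor([[1., 4.],
#         [2., 5.],
#         [3., 6.]])
```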

In addition, the argument list cannot be empty. A -1 among the arguments means that dimension is inferred from the sizes of the other dimensions. As long as there is no ambiguity, view can infer it; roughly speaking, if a person could infer the shape, so can the view function.

For example, suppose a tensor has 6 elements. With view(1, -1), the -1 can be inferred to stand for 6 from the element count.

With view(-1, -1, 2), neither a person nor the machine can infer the shape.

There is also a case that a person can infer but the machine cannot: view(-1, -1, 6). A person can see that each -1 must stand for 1, but PyTorch does not allow two -1s at the same time.

If there is no -1, then the product of all the arguments must equal the total number of elements in the tensor; otherwise an error occurs.
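The inference rules above can be sketched as follows; in current PyTorch versions, the invalid cases raise a RuntimeError:

```python
import torch

t = torch.arange(6.)             # 6 elements: 0., 1., ..., 5.

print(t.view(1, -1).size())      # torch.Size([1, 6]); -1 is inferred to be 6
print(t.view(2, -1).size())      # torch.Size([2, 3]); -1 is inferred to be 3

# Two -1s are never allowed, even when a person could infer the shape:
try:
    t.view(-1, -1, 6)
except RuntimeError as e:
    print("error:", e)

# Without a -1, the product of the sizes must equal the element count:
try:
    t.view(4, 2)                 # 4 * 2 = 8 != 6
except RuntimeError as e:
    print("error:", e)
```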

Supplement: usage of x.view() and permute() in PyTorch

Usage of x.view() in PyTorch

You often see x.view() in PyTorch code; it reshapes a Tensor to the dimensions specified by the view arguments, somewhat like a resize function.


b=torch.Tensor([[[[1,2,3],[4,5,6],[7,8,9]],[[1,2,3],[4,5,6],[7,8,9]]]])
print(b.size())
# torch.Size([1, 2, 3, 3])
print(b.view(b.size(0),-1))
# tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9., 1., 2., 3., 4., 5., 6., 7., 8., 9.]])
print(b.view(b.size(0),-1).size())
# torch.Size([1, 18])

b.size(0) is dimension 0 of b, which equals 1, and the -1 lets view allocate the number of columns automatically from the remaining data.


a=torch.Tensor([[[1,2,3],[4,5,6]]])
print(a.size())
# torch.Size([1, 2, 3])
print(a.view(6,-1))
# tensor([[1.],
#         [2.],
#         [3.],
#         [4.],
#         [5.],
#         [6.]])
print(a.view(6,-1).size())
# torch.Size([6, 1])

This converts a to 6 rows and 1 column.


print(a.view(-1,6).size())
# torch.Size([1, 6])

Or it converts a to 1 row and 6 columns.

In code you also often see the view function followed by permute(), which transposes (reorders) dimensions.


print(a.view(-1,6).permute(1,0))
# tensor([[1.],
#         [2.],
#         [3.],
#         [4.],
#         [5.],
#         [6.]])
print(a.view(-1,6).permute(1,0).size())
# torch.Size([6, 1])

With the addition of permute, a changes from (1, 6) to (6, 1).
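One related caveat, sketched below: view() requires the tensor's memory to be contiguous, and permute() generally produces a non-contiguous result, so calling view() on it directly can fail; .contiguous() makes a contiguous copy first:

```python
import torch

t = torch.arange(6.).view(2, 3)   # shape (2, 3), contiguous
p = t.permute(1, 0)               # shape (3, 2), same memory, reordered strides

print(p.is_contiguous())          # False: permute only reorders strides

try:
    p.view(-1)                    # fails: view needs contiguous memory
except RuntimeError as e:
    print("error:", e)

print(p.contiguous().view(-1))    # works after making a contiguous copy
# tensor([0., 3., 1., 4., 2., 5.])
```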
