Interpretation of x = x.view(x.size(0), -1) in PyTorch

  • 2021-09-16 07:19:01
  • OfStack

The following line appears frequently in PyTorch CNN code:


x.view(x.size(0), -1)

First of all, the view() function in PyTorch changes the shape of a tensor, for example turning a tensor with 2 rows and 3 columns into one with 1 row and 6 columns. A dimension given as -1 is inferred automatically from the remaining dimensions.


a = torch.zeros(2, 3)  # torch.Tensor(2, 3) returns uninitialized memory; zeros() keeps the output predictable
print(a)
# tensor([[0., 0., 0.],
#         [0., 0., 0.]])

print(a.view(1, -1))
# tensor([[0., 0., 0., 0., 0., 0.]])

After the convolution or pooling stages of a CNN, the feature map has to be fed into a fully connected layer, so the multi-dimensional tensor must first be flattened into one dimension per sample. This is what x.view(x.size(0), -1) does:


def forward(self, x):
    x = self.pre(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)

    x = F.avg_pool2d(x, 7)
    x = x.view(x.size(0), -1)  # flatten everything except the batch dimension
    return self.fc(x)

After convolution or pooling, the tensor has shape (batchsize, channels, x, y), so x.size(0) is the batch size. x.view(x.size(0), -1) then reshapes it to (batchsize, channels*x*y): the (channels, x, y) part is flattened into a single dimension, and the result can be fed into the fc layer.
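The step above can be sketched in isolation (the shapes here are hypothetical, chosen to resemble a pooled feature map):

```python
import torch

# Hypothetical pooled feature map with layout (batchsize, channels, x, y)
feat = torch.randn(8, 512, 1, 1)

# Flatten all dimensions except the batch dimension
flat = feat.view(feat.size(0), -1)

print(feat.size())  # torch.Size([8, 512, 1, 1])
print(flat.size())  # torch.Size([8, 512])
```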

Supplement: usage of view in PyTorch (reshaping tensors)

view in PyTorch is used to change the shape of a tensor; it is simple and convenient to use.

view is usually called directly on the tensor with .view, passing the desired shape:


tensor_name.view(shape)

Example:

1. Direct usage:


 >>> x = torch.randn(4, 4)
 >>> x.size()
 torch.Size([4, 4])
 >>> y = x.view(16)
 >>> y.size()
 torch.Size([16])
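A view is exactly that, a view: it shares storage with the original tensor, so a write through one is visible through the other. A small check of this behavior:

```python
import torch

x = torch.zeros(4, 4)
y = x.view(16)

y[0] = 7.0      # write through the view
print(x[0, 0])  # tensor(7.) -- the original tensor sees the change
```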

2. Infer the size of one dimension with -1:


>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])

3. Flattening a tensor:

(Passing -1 alone flattens the tensor; this is equivalent to tensor_name.flatten().)


 >>> y = x.view(-1)
 >>> y.size()
 torch.Size([16])

4. Dimension transformations that do not change the memory arrangement


>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False

Note the final False: b and c are not equal as tensors. As its name suggests, view only changes what the tensor "looks" like and does not rearrange the underlying elements in memory, so reading c row by row visits the elements in their original storage order, while b (the transpose) visits them in a different order.
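A practical consequence worth knowing (an aside, not from the original text): since transpose changes only the stride pattern, the resulting tensor is non-contiguous, and view refuses to reinterpret it. Calling .contiguous() first copies the data into standard layout:

```python
import torch

a = torch.randn(1, 2, 3, 4)
b = a.transpose(1, 2)  # non-contiguous: strides changed, data untouched

try:
    b.view(-1)  # view cannot reinterpret non-contiguous memory
except RuntimeError:
    print("view failed on non-contiguous tensor")

flat = b.contiguous().view(-1)  # copy into standard layout first, then view
print(flat.size())              # torch.Size([24])
```

Tensor.reshape performs this contiguous-copy fallback automatically when needed, so b.reshape(-1) would also work here.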
