Reading parameters from a PyTorch .pth model file into NumPy arrays

  • 2021-09-16 07:31:34
  • OfStack

Purpose:

Extract the parameters of a trained .pth model so that they can be deployed to edge devices by other means.

PyTorch provides a convenient interface for reading parameters:


nn.Module.parameters()

Let's go straight to a demo:


from torchvision.models.alexnet import alexnet

# Load a pretrained AlexNet and put it in inference mode on the GPU
model = alexnet(pretrained=True).eval().cuda()

# parameters() yields every parameter tensor of the model one by one
parameters = model.parameters()
for p in parameters:
    # Detach from the graph, move to CPU, then convert to a NumPy array
    numpy_para = p.detach().cpu().numpy()
    print(type(numpy_para))
    print(numpy_para.shape)

The numpy_para obtained above is the parameter as a NumPy array ~
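
Since the stated purpose is to move these parameters to an edge device, here is a minimal sketch of dumping them all into one file. The file name alexnet_params.npz and the choice of np.savez are assumptions for illustration, not something prescribed by PyTorch:

import numpy as np

# Collect every parameter as a NumPy array, keyed by its name
# (e.g. "features.0.weight"); the names come from named_parameters().
arrays = {name: p.detach().cpu().numpy() for name, p in model.named_parameters()}

# Save everything into a single .npz archive (file name is arbitrary)
np.savez("alexnet_params.npz", **arrays)

# On the receiving side the arrays can be read back with np.load
data = np.load("alexnet_params.npz")
print(data.files)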

Note:

model.parameters() returns a generator, so the parameters are read one by one in a for loop. Note that each layer usually contributes more than one parameter tensor (for example a weight and a bias), so the number of iterations equals the number of parameter tensors rather than the number of layers.

Each parameter is a torch.nn.parameter.Parameter, which is a subclass of Tensor, so it can be converted to a NumPy array the same way as any tensor (i.e. p.detach().cpu().numpy()).
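
To see this concretely, model.named_parameters() returns each tensor together with its name; in the AlexNet example above, every convolution or linear layer appears twice, once for its weight and once for its bias. This is a small illustrative check, not part of the original demo:

# Print the name and shape of every parameter tensor
for name, p in model.named_parameters():
    print(name, tuple(p.shape))

# Typical output starts with:
#   features.0.weight (64, 3, 11, 11)
#   features.0.bias (64,)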

Convenient and easy to use, thumbs up ~

Supplement: converting a PyTorch-trained .pth model to .pt

Convert a .pth file trained in Python to a .pt TorchScript file:


import torch
from unet import UNet  # Self-defined network model

model = UNet(3, 2)
model.load_state_dict(torch.load("best_weights.pth"))  # Load the saved training weights
model.eval()  # Switch to evaluation mode
example = torch.rand(1, 3, 320, 480)  # One random input with the expected dimensions
traced_script_module = torch.jit.trace(model, example)  # Trace the model into TorchScript
traced_script_module.save("model.pt")
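
To check that the conversion worked, the traced .pt file can be loaded back with torch.jit.load and run on a dummy input. This verification step is a sketch added here, not part of the original snippet:

import torch

# Load the traced TorchScript module produced above
loaded = torch.jit.load("model.pt")
loaded.eval()

# Run a random input with the same shape used for tracing
example = torch.rand(1, 3, 320, 480)
with torch.no_grad():
    output = loaded(example)
print(output.shape)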
