TensorFlow: how to restore specified layers from a checkpoint and set different learning rates for different layers

  • 2020-11-18 06:19:36
  • OfStack

As shown below:


# tensorflow: restore specified layers from a ckpt file, or leave specified layers unrestored
# tensorflow: specify different learning rates for different layers

with tf.Graph().as_default():
    # layer parameters that need to be restored from the checkpoint
    variables_to_restore = []
    # layer parameters that need to be trained; here these are the ones that are not
    # restored and must be trained from scratch (restored parameters can also be trained)
    variables_to_train = []
    for var in slim.get_model_variables():
        excluded = False
        for exclusion in fine_tune_layers:
            # e.g. fine_tune_layers contains 'logits' and 'bottleneck'
            if var.op.name.startswith(exclusion):
                excluded = True
                break
        if not excluded:
            variables_to_restore.append(var)
            # print('var to restore:', var)
        else:
            variables_to_train.append(var)
            # print('var to train:', var)
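    # Shortcut (a sketch, not in the original): tf.contrib.slim also provides a helper
    # that builds the restore list directly from scope-name exclusions, which matches
    # the loop above when the entries in fine_tune_layers are variable-scope prefixes:
    # variables_to_restore = slim.get_variables_to_restore(exclude=fine_tune_layers)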
 
 
    # Some steps are omitted here; skip ahead to training:
    # pass variables_to_train (the parameters that need training) to the optimizer's
    # compute_gradients function
    grads = opt.compute_gradients(total_loss, variables_to_train)
    # this computes gradients only for variables_to_train
    # then apply the gradients:
    apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
    # you can also call opt.minimize(total_loss, var_list=variables_to_train) directly;
    # minimize is just a wrapper around compute_gradients and apply_gradients,
    # so the same two functions are called under the hood
    # if different parameters need different learning rates when applying
    # the gradients, you can do the following:

    capped_grads_and_vars = []  # [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
    # update_gradient_vars holds the parameters that should be updated with the global
    # learning rate; for parameters not in update_gradient_vars, the gradient is
    # multiplied by 0.0001, which effectively keeps them static
    for grad, var in grads:
        if var in update_gradient_vars:
            capped_grads_and_vars.append((grad, var))
        else:
            capped_grads_and_vars.append((0.0001 * grad, var))

    apply_gradient_op = opt.apply_gradients(capped_grads_and_vars, global_step=global_step)
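    # Alternative (a sketch, not in the original): instead of scaling gradients,
    # use one optimizer per group of variables and group their update ops.
    # The names reuse total_loss / variables_to_restore / variables_to_train from
    # above; the learning rates are illustrative.
    # opt_restored = tf.train.GradientDescentOptimizer(0.0001)
    # opt_new = tf.train.GradientDescentOptimizer(0.01)
    # apply_gradient_op = tf.group(
    #     opt_restored.minimize(total_loss, var_list=variables_to_restore),
    #     opt_new.minimize(total_loss, var_list=variables_to_train,
    #                      global_step=global_step))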
 
    # when it is time to restore the model:

    with sess.as_default():

        if pretrained_model:
            print('Restoring pretrained model: %s' % pretrained_model)
            init_fn = slim.assign_from_checkpoint_fn(
                pretrained_model,
                variables_to_restore)
            init_fn(sess)
        # this restores everything except the specified (fine-tune) layer parameters
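
For reference, here is a minimal self-contained sketch that puts the two ideas together: restore everything except the fine-tune layers, and damp the gradients of the restored layers. It assumes TensorFlow 1.x with tf.contrib.slim; the toy model, scope names, checkpoint path, learning rates, and the 0.0001 factor are illustrative and not part of the original code.


import tensorflow as tf
import tensorflow.contrib.slim as slim

fine_tune_layers = ['logits']              # layers to retrain instead of restore
pretrained_model = '/path/to/model.ckpt'   # illustrative checkpoint path

with tf.Graph().as_default():
    images = tf.placeholder(tf.float32, [None, 224, 224, 3])
    labels = tf.placeholder(tf.int64, [None])

    # tiny example network; only the scope names matter here
    net = slim.conv2d(images, 32, [3, 3], scope='bottleneck')
    net = slim.flatten(net)
    logits = slim.fully_connected(net, 10, activation_fn=None, scope='logits')
    total_loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)

    # split the variables exactly as in the snippet above
    variables_to_restore, variables_to_train = [], []
    for var in slim.get_model_variables():
        if any(var.op.name.startswith(e) for e in fine_tune_layers):
            variables_to_train.append(var)
        else:
            variables_to_restore.append(var)

    global_step = tf.train.get_or_create_global_step()
    opt = tf.train.GradientDescentOptimizer(0.01)

    # compute gradients for all trainable variables, then damp the restored ones
    train_names = set(v.op.name for v in variables_to_train)
    grads = opt.compute_gradients(total_loss, tf.trainable_variables())
    capped = [(g, v) if v.op.name in train_names else (0.0001 * g, v)
              for g, v in grads if g is not None]
    train_op = opt.apply_gradients(capped, global_step=global_step)

    # restores everything except the fine-tune layers
    init_fn = slim.assign_from_checkpoint_fn(pretrained_model, variables_to_restore)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        init_fn(sess)
        # sess.run(train_op, feed_dict={images: ..., labels: ...})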
