Deep learning models often require precise control over which parameters are updated during training. In Keras, the non_trainable_weights property helps manage such parameters efficiently. Whether you’re fine-tuning a pre-trained model or implementing custom layers, knowing how to use non_trainable_weights correctly can improve performance and flexibility.
Keras models and layers have two types of weight attributes:
- Trainable Weights: These are updated during backpropagation.
- Non-Trainable Weights: These remain constant during training, useful for storing fixed parameters like statistics in batch normalization.
The non_trainable_weights property allows access to these parameters, ensuring they are used without being modified by gradient updates.
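For a concrete look at this split, here is a minimal sketch using a BatchNormalization layer, whose moving statistics are stored as non-trainable weights:

from tensorflow import keras

# Build a BatchNormalization layer so that its weights are created
bn = keras.layers.BatchNormalization()
bn.build((None, 4))

# gamma and beta are trainable; moving_mean and moving_variance are not
print("Trainable:", [w.name for w in bn.trainable_weights])
print("Non-trainable:", [w.name for w in bn.non_trainable_weights])

Common scenarios where this property is useful include: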
- Pre-trained Model Fine-Tuning: Freezing layers to retain learned features.
- Custom Layers: Defining stateful layers with fixed parameters.
- Efficiency: Reducing the number of trainable parameters speeds up training, since fewer gradients need to be computed and applied.
To access non-trainable weights in a model, use:
from tensorflow import keras

# Load a pre-trained model
base_model = keras.applications.MobileNetV2(weights='imagenet', include_top=False)

# Freeze all layers
for layer in base_model.layers:
    layer.trainable = False

print("Non-trainable weights:", base_model.non_trainable_weights)
Here, all layers are frozen, making their weights part of non_trainable_weights.
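From here, a typical fine-tuning setup attaches a fresh head to the frozen base. Below is a minimal sketch assuming a hypothetical 10-class task; the head and its settings are illustrative, not part of the original example:

inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)  # training=False keeps the frozen batch-norm statistics fixed
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)  # hypothetical 10-class head

model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# Only the new head is trainable; every weight of the base sits in non_trainable_weights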
You can define a custom layer with fixed parameters:
import tensorflow as tf
from tensorflow import keras

class CustomLayer(keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # A fixed scaling factor stored as a non-trainable weight
        self.fixed_weight = self.add_weight(shape=(1,), initializer="ones", trainable=False)

    def call(self, inputs):
        return inputs * self.fixed_weight
layer = CustomLayer()
print("Non-trainable weights:", layer.non_trainable_weights)
Here, fixed_weight remains unchanged during training.
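As an illustrative check, one gradient step confirms this: the layer exposes no trainable variables, so an optimizer has nothing to update and fixed_weight keeps its value:

import tensorflow as tf

x = tf.ones((2, 1))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(layer(x))

# Gradients are only requested for trainable weights, and there are none
grads = tape.gradient(loss, layer.trainable_weights)
print("Trainable weights:", layer.trainable_weights)  # []
print("fixed_weight:", layer.fixed_weight.numpy())    # still [1.]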
Even though they are not updated automatically, non-trainable weights can be manually modified:
layer.fixed_weight.assign([2.0])
print("Updated non-trainable weight:", layer.fixed_weight.numpy())
- Use for Frozen Layers: When fine-tuning pre-trained models, set trainable = False for layers.
- Manually Update When Needed: If updates are required, assign values explicitly.
- Monitor Parameter Count: Use model.summary() to check trainable vs. non-trainable parameters, as shown in the sketch after this list.
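As a quick illustration of that last tip, applied to the frozen base_model from the earlier example:

# After freezing, the summary footer reports zero trainable parameters
base_model.summary()

# The weight lists tell the same story at the tensor level
print("Trainable weight tensors:", len(base_model.trainable_weights))        # 0
print("Non-trainable weight tensors:", len(base_model.non_trainable_weights))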
Understanding and leveraging non_trainable_weights in Keras is crucial for optimizing deep learning workflows. Whether you’re customizing layers or fine-tuning models, managing trainable and non-trainable weights can significantly enhance model efficiency.
Interested in mastering AI and deep learning? Check out my Udemy courses: Karthik K on Udemy
Share your thoughts in the comments! Have you used non_trainable_weights before? How did it impact your model training?