I have been trying to get the trainable variables from my layers and can't figure out how to make it work. Here is what I have tried:
I have tried accessing the kernel and bias attributes of the Dense or Conv2D objects directly, but to no avail. The error I get is "Dense object has no attribute 'kernel'":
trainable_variables.append(conv_layer.kernel)
trainable_variables.append(conv_layer.bias)
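To illustrate, here is a minimal reproduction of that error with a standalone layer (the layer size is arbitrary, just for the example):

```python
from tensorflow.keras.layers import Dense

layer = Dense(8)  # the layer is constructed but not yet built
try:
    kernel = layer.kernel  # fails: the kernel variable does not exist yet
except AttributeError as err:
    print(err)  # prints an AttributeError message like the one above
```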
Similarly, I have tried using the "trainable_variables" attribute in the following way:
trainable_variables.extend(conv_layer.trainable_variables)
From what I know, this is supposed to return a list of two variables, the weight and the bias. However, what I get is an empty list.
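Here is a small standalone snippet showing what I observe with a Conv2D layer (filter count and input shape are arbitrary):

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D

conv = Conv2D(filters=4, kernel_size=3, padding="same")
print(conv.trainable_variables)       # [] -- this is what I get

# Only after calling the layer on an input does it get built:
_ = conv(tf.zeros((1, 8, 8, 1)))
print(len(conv.trainable_variables))  # 2: kernel and bias
```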
Any idea how to get the variables from layers in TensorFlow 2.0? I want to be able to later feed those variables to an optimizer, in a way similar to the following:
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
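For context, this is the kind of training step I am aiming for (toy data, a single Dense layer, and SGD, just to sketch the idea):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense

layer = Dense(1)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
x = tf.random.normal((4, 3))
y = tf.zeros((4, 1))

_ = layer(x)  # call the layer once so its variables are created
trainable_variables = layer.trainable_variables

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(layer(x) - y))
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
```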
Edit: Here is part of my current code to serve as an example and help answer the question (I hope it is readable):
import tensorflow as tf
from tensorflow.keras.layers import Dense, Conv2D, Conv2DTranspose, Reshape, Flatten
...

class Network:
    def __init__(self, params):
        weights_initializer = tf.initializers.GlorotUniform(seed=params["seed"])
        bias_initializer = tf.initializers.Constant(0.0)

        self.trainable_variables = []

        self.conv_layers = []
        self.conv_activations = []
        self.create_conv_layers(params, weights_initializer, bias_initializer)

        self.flatten_layer = Flatten()

        self.dense_layers = []
        self.dense_activations = []
        self.create_dense_layers(params, weights_initializer, bias_initializer)

        self.output_layer = Dense(1, kernel_initializer=weights_initializer,
                                  bias_initializer=bias_initializer)
        self.trainable_variables.append(self.output_layer.kernel)
        self.trainable_variables.append(self.output_layer.bias)

    def create_conv_layers(self, params, weight_init, bias_init):
        nconv = len(params['stride'])
        for i in range(nconv):
            conv_layer = Conv2D(filters=params["nfilter"][i],
                                kernel_size=params["shape"][i],
                                kernel_initializer=weight_init,
                                kernel_regularizer=spectral_norm,
                                use_bias=True,
                                bias_initializer=bias_init,
                                strides=params["stride"][i],
                                padding="same")
            self.conv_layers.append(conv_layer)
            self.trainable_variables.append(conv_layer.kernel)
            self.trainable_variables.append(conv_layer.bias)
            self.conv_activations.append(params["activation"])
As you can see, I am trying to gather all my trainable variables into a list attribute called trainable_variables. However, as mentioned, this code does not work because I get an error when trying to access the kernel and bias attributes of those layer objects.