I'm trying to build a neural network in TensorFlow with radial basis functions (RBFs) as the activation function. I've read a lot about it and I've managed to implement it both with tf.exp() and by defining the function myself, as in this question: Problems with my RBF Network in Tensorflow?.
When I define the activation function in TF, I'm telling my code to use exactly the same function for every neuron in the current layer. However, when you want to use more than one RBF, each one needs its own mean (and possibly its own variance) for the network to work properly.
My question is about that: how can I define an activation function that depends on some variables (at least one per neuron in the layer)?
I'm using the code below to create the layers of my network, but, as you can see, the activation only depends on the input array.
x = tf.placeholder(tf.float32, shape=[None, 2 * ndirT])

# Hidden layer: every neuron uses the same Gaussian activation
dense_layer1 = tf.layers.Dense(units=neuronas,
                               activation=lambda v: tf.exp(-2 * tf.pow(v, 2)))
y_pred_aux = dense_layer1(x)

# Linear output layer
dense_layer2 = tf.layers.Dense(units=(n + 1) * (m + 1) * 2)
y_pred = dense_layer2(y_pred_aux)

y_real = tf.placeholder(tf.float32, shape=[None, (n + 1) * (m + 1) * 2])
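To make clearer what I mean by a different mean (and possibly variance) per neuron, here is a small NumPy sketch of the behaviour I'm after. The names (rbf_layer, centers, sigmas) are mine, not from my TF code; in the real network these would have to be trainable variables rather than fixed arrays:

```python
import numpy as np

def rbf_layer(x, centers, sigmas):
    """Each neuron j has its own center c_j and width sigma_j.

    x:       (batch, d) input batch
    centers: (units, d) one mean vector per neuron
    sigmas:  (units,)   one width per neuron
    Returns: (batch, units) activations exp(-||x - c_j||^2 / (2 * sigma_j^2)).
    """
    # Squared distance between every input and every center (via broadcasting)
    diff = x[:, None, :] - centers[None, :, :]      # (batch, units, d)
    sq_dist = np.sum(diff ** 2, axis=-1)            # (batch, units)
    # Each column j uses its own width sigma_j
    return np.exp(-sq_dist / (2.0 * sigmas ** 2))

# Toy check: an input sitting exactly on a neuron's center activates it fully
x = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
sigmas = np.array([0.5, 1.0, 1.5])
out = rbf_layer(x, centers, sigmas)   # shape (2, 3); out[0, 0] == out[1, 1] == 1.0
```

This is just the forward pass; what I don't see is how to express the per-neuron centers and sigmas inside a tf.layers.Dense activation, since the activation argument only receives the pre-activation tensor.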
Thank you all in advance, and sorry if I made any mistakes writing in English (it's not my mother tongue).