Thursday, June 22, 2017

Methods for efficient neural network training

Overview

Training a neural network usually takes a lot of time and often doesn't go well. There are several methods that make training more efficient.
Here, I list and summarize them.
By using these methods, training goes more smoothly and a better model can be built.



Methods

Scaling

Scaling means adjusting the values into the range 0~1, or sometimes -1~1.
This is sometimes effective, but not always. You can use it when every variable has the same scale.
For example, an image's pixels all share the same range, 0~255. In that case, scaling works well.

Regularization

Regularization prevents the model from overfitting. This is very important and in many cases necessary.
There are several types, as follows.
  • L1 regularization
  • L2 regularization
  • L1-L2 regularization
These regularizers prevent the parameters (weights) from becoming too large.
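Concretely, each type adds a penalty term to the training loss. As a rough sketch, with w the weights and λ a strength you choose:
  • L1: loss + λ Σ|w|
  • L2: loss + λ Σw²
  • L1-L2: loss + λ1 Σ|w| + λ2 Σw²
A large weight makes the penalty large, so training pushes the weights toward smaller values.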

Dropout

Dropout is another method to avoid overfitting. It randomly deactivates some nodes while you train the model.
In each training step a different set of nodes is unused, so the final network behaves like an ensemble of many smaller networks.

How to use these in Keras

For scaling, you can use numpy. Simply dividing the values by the maximum value is enough.
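For example, a minimal sketch for image data (x_train here is a hypothetical array of pixel values in the range 0~255):

import numpy as np

# hypothetical data: 100 grayscale images, pixel values 0~255
x_train = np.random.randint(0, 256, size=(100, 28, 28)).astype('float32')

# scale into the range 0~1 by dividing by the maximum value
x_train = x_train / 255.0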
Regularization looks like this (in Keras 2 the argument is called kernel_regularizer):

from keras.regularizers import l1_l2

model.add(Dense(dense_num, activation='relu', kernel_regularizer=l1_l2(l1=0.01, l2=0.01)))
You add the regularizer as an optional keyword argument on the layer.
Dropout is also easy to use.
Just after the layer you want to apply dropout to, add a Dropout layer like this.
from keras.layers import Conv2D, Dropout

model.add(Conv2D(conv_num * 2, (3,3), activation='relu'))
model.add(Dropout(0.4))  # randomly drop 40% of this layer's outputs during training
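The argument 0.4 means 40% of the units are dropped at each training step. Dropout is active only during training; at prediction time Keras automatically uses all the nodes, so you don't need to change anything.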