Given a predefined Keras model, I'm trying to first load pre-trained weights, then remove one to three of the model's internal (i.e. not the last few) layers and replace them with another layer.
I can't seem to find any documentation on keras.io about doing such a thing, or about removing layers from a predefined model.
The model I'm using is the good ol' VGG-16 network, instantiated by the function below:
def model(self, output_shape):
    # Prepare image for input to model
    img_input = Input(shape=self._input_shape)
    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    # Classification block
    x = Flatten(name='flatten')(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dropout(0.5)(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dropout(0.5)(x)
    x = Dense(output_shape, activation='softmax', name='predictions')(x)
    inputs = img_input
    # Create model.
    model = Model(inputs, x, name=self._name)
    return model
As an example, I would like to take the two Conv layers in Block 1 and replace them with a single Conv layer, after the original weights have been loaded into all the other layers.
Any ideas?
Suppose you have a model vgg16_model, initialized either by the function above or by keras.applications.VGG16(weights='imagenet'). Now you need to insert a new layer in the middle in such a way that the weights of the other layers are preserved.
The idea is to disassemble the whole network into separate layers, then assemble it back. Here is the code specifically for your task:
from keras import applications
from keras.layers import Conv2D
from keras.models import Model

vgg_model = applications.VGG16(include_top=True, weights='imagenet')

# Disassemble layers
layers = [l for l in vgg_model.layers]

# Define the new convolutional layer.
# Important: the number of filters should stay the same!
# Note: the receptive field of two stacked 3x3 convolutions is 5x5.
new_conv = Conv2D(filters=64,
                  kernel_size=(5, 5),
                  name='new_conv',
                  padding='same')(layers[0].output)

# Now stack everything back, skipping the two Block 1 conv layers
# (layers[1] and layers[2]) that new_conv replaces.
# Note: if you are going to fine-tune the model, do not forget to
# mark the other layers as non-trainable.
x = new_conv
for i in range(3, len(layers)):
    layers[i].trainable = False
    x = layers[i](x)

# Final touch
result_model = Model(inputs=layers[0].input, outputs=x)
result_model.summary()
And the output of the code above is:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_50 (InputLayer)        (None, 224, 224, 3)       0
_________________________________________________________________
new_conv (Conv2D)            (None, 224, 224, 64)      4864
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000
=================================================================
Total params: 138,323,688
Trainable params: 4,864
Non-trainable params: 138,318,824
_________________________________________________________________
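Because the rebuilt model reuses the original layer objects, the pretrained weights are shared rather than copied, and only new_conv has trainable parameters. Here is a minimal sketch of how you might verify that and then fine-tune just the new layer; the optimizer, loss, and the x_train/y_train variables are placeholders, not part of the answer above:

import numpy as np

# The reused layers are the same Python objects, so their weights match exactly.
w_old = vgg_model.get_layer('block2_conv1').get_weights()[0]
w_new = result_model.get_layer('block2_conv1').get_weights()[0]
assert np.array_equal(w_old, w_new)

# Fine-tune: only new_conv has trainable parameters (4,864 of them).
result_model.compile(optimizer='adam', loss='categorical_crossentropy')
# result_model.fit(x_train, y_train, epochs=5)  # x_train/y_train are hypothetical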
Another way to do this is to build a Sequential model. See the following example, which swaps each ReLU activation for a PReLU layer: you simply skip adding the layers you don't want and add the new ones instead.
def convert_model_relu(model):
    from keras.layers.advanced_activations import PReLU
    from keras.activations import linear as linear_activation
    from keras.models import Sequential

    new_model = Sequential()
    # Go through all layers; if one has a ReLU activation, replace it with PReLU
    for layer in tuple(model.layers):
        layer_type = type(layer).__name__
        if hasattr(layer, 'activation') and layer.activation.__name__ == 'relu':
            # Set the activation to linear, then add a PReLU layer after it
            prelu_name = layer.name + "_prelu"
            prelu = PReLU(shared_axes=(1, 2), name=prelu_name) \
                if layer_type == "Conv2D" else PReLU(name=prelu_name)
            layer.activation = linear_activation
            new_model.add(layer)
            new_model.add(prelu)
        else:
            new_model.add(layer)
    return new_model
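For instance, a usage sketch (assuming an older Keras 2.x, where re-adding a linear stack of layers such as VGG16's to a Sequential model works; the variable names are illustrative):

from keras import applications

vgg_model = applications.VGG16(include_top=True, weights='imagenet')
prelu_model = convert_model_relu(vgg_model)
# Each layer that had a relu activation is now linear and followed by a *_prelu layer.
prelu_model.summary()

Note that this trick relies on the network being a plain chain of layers; it would not carry over branching architectures.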