How to create a CNN model correctly in Keras?

I want to make a convolutional neural network model using Keras. The input is a set of images of shape (360, 640, 3) and the output should have shape (720, 1280, 3).

So I made the following model:

from keras.layers import Input, UpSampling2D, Dense
from keras.models import Model

w, h, c = x_train[0].shape

entrada = Input(shape=(w, h, c), name='LR')
x = UpSampling2D(size=(2, 2), name='UP')(entrada)
print(x.shape)
h = Dense(720, activation='relu', name='hide')(x)
h2 = Dense(1280, activation='relu', name='hide2')(h)
output = Dense(3, activation='relu', name='output')(h2)

model = Model(inputs=entrada, outputs=output)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=50, verbose=0)

Each image in x_train has shape (360, 640, 3) and each image in y_train has shape (720, 1280, 3).

However, when I run it I get the following message:

ResourceExhaustedError: OOM when allocating tensor with shape[4608000,720] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
     [[{{node hide/MatMul}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

My goal is to take a smaller image containing a certain feature and have the network learn to produce the larger image without that feature.

Does anyone know what I'm doing wrong?

  • You are building a dense neural network, not a convolutional one. The problem with Dense in this case is that it produces an absurd number of parameters. Try using Conv2D instead of Dense there. There are a lot of examples on the internet, for example here: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py

1 answer


About OOM

An OOM (Out Of Memory) error happens when the available memory is not enough. When working with images, the most common architectures use convolutions and pooling, since this reduces the number of network parameters, as pointed out by @Daniel Falbel. However, with your architecture the total number of parameters is 929,603, which is not an absurd number. To see the number of parameters, just call model.summary().
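That parameter count can be checked by hand: a Dense layer in Keras acts only on the last axis of its input, so its parameter count is last_dim × units + units, regardless of the spatial dimensions. A minimal sketch (plain Python, no Keras needed):

```python
# Hand-check of the parameter count for the question's model.
# A Dense layer applied to a (batch, H, W, C) tensor acts on the
# channel axis only: params = last_dim * units + units.

def dense_params(last_dim, units):
    return last_dim * units + units  # weights + biases

# After UpSampling2D the tensor is (batch, 720, 1280, 3);
# the Dense layers then map the last axis 3 -> 720 -> 1280 -> 3.
total = (dense_params(3, 720)        # 'hide'
         + dense_params(720, 1280)   # 'hide2'
         + dense_params(1280, 3))    # 'output'
print(total)  # 929603
```

Note also that the tensor in the error, shape [4608000, 720], appears to be an activation rather than a weight: 4,608,000 = 5 × 720 × 1280, i.e. all spatial positions of a batch of images flattened into rows for the matrix multiply. So the memory blow-up comes from the huge activations, not from the parameter count.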

So the problem is probably model.fit(x_train, y_train, epochs=50, verbose=0). Loading every image of the dataset and feeding them all to the model at once can be too heavy for memory, so it is common to use fit_generator(). With a generator, only a few images need to be held in memory at a time, and the network is trained gradually without overloading memory.
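A minimal sketch of such a generator. Here numpy arrays stand in for images already loaded from disk; in a real pipeline each batch would be read from files inside the loop (the variable names and batch size are illustrative, not from the question):

```python
import numpy as np

def batch_generator(x_images, y_images, batch_size=4):
    """Yield small (x, y) batches indefinitely, so only `batch_size`
    images need to be materialized in memory at a time."""
    while True:
        for i in range(0, len(x_images), batch_size):
            # In a real pipeline, load the files here instead, e.g.
            # with keras.preprocessing.image.load_img + img_to_array.
            x_batch = np.stack(x_images[i:i + batch_size])
            y_batch = np.stack(y_images[i:i + batch_size])
            yield x_batch, y_batch

# Usage with fit_generator (older Keras API):
# model.fit_generator(batch_generator(x_files, y_files, batch_size=4),
#                     steps_per_epoch=len(x_files) // 4, epochs=50)
```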

About Convolutional Networks

To classify an image, a common architecture is:

Input -> Convolution -> Pooling -> Convolution -> Pooling -> Convolution -> Flatten -> Dense -> Dense -> Output

The equivalent Keras code is:

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model

def My_Conv_Model(channels, pixels_x, pixels_y, num_categories):
    img_input = Input(shape=(pixels_x, pixels_y, channels),
                      name='img_input')

    first_Conv2D = Conv2D(filters=40, kernel_size=(3, 3), data_format='channels_last',
                          activation='relu', padding='valid')(img_input)
    first_Conv2D = MaxPooling2D(pool_size=(3, 3), padding='same',
                                data_format='channels_last')(first_Conv2D)

    second_Conv2D = Conv2D(filters=20, kernel_size=(3, 3), data_format='channels_last',
                           activation='relu', padding='valid')(first_Conv2D)
    second_Conv2D = MaxPooling2D(pool_size=(3, 3), padding='same',
                                 data_format='channels_last')(second_Conv2D)

    third_Conv2D = Conv2D(filters=10, kernel_size=(3, 3), data_format='channels_last',
                          padding='valid')(second_Conv2D)

    flat_layer = Flatten()(third_Conv2D)

    first_Dense = Dense(128)(flat_layer)
    second_Dense = Dense(32)(first_Dense)

    target = Dense(num_categories, name='class_output')(second_Dense)

    seq = Model(inputs=img_input, outputs=target, name='Model')

    return seq

Total number of parameters for an input of shape (360, 640, 3): 3,370,632
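That total can be verified layer by layer. The arithmetic below works out to 3,370,632 when num_categories = 2 (an assumption on my part, since the category count is not stated above); the sketch tracks both the parameter count and the spatial shape through the network:

```python
import math

def conv_params(k, in_ch, filters):
    # kernel_h * kernel_w * in_channels * filters, plus one bias per filter
    return k * k * in_ch * filters + filters

def out_valid(size, k):       # Conv2D with padding='valid', stride 1
    return size - k + 1

def out_pool_same(size, pool):  # MaxPooling2D with padding='same'
    return math.ceil(size / pool)

x, y, ch = 360, 640, 3
total = 0

# Conv(40, 3x3, valid) -> MaxPool(3x3, same)
total += conv_params(3, ch, 40); ch = 40
x, y = out_valid(x, 3), out_valid(y, 3)
x, y = out_pool_same(x, 3), out_pool_same(y, 3)

# Conv(20, 3x3, valid) -> MaxPool(3x3, same)
total += conv_params(3, ch, 20); ch = 20
x, y = out_valid(x, 3), out_valid(y, 3)
x, y = out_pool_same(x, 3), out_pool_same(y, 3)

# Conv(10, 3x3, valid) -> Flatten -> Dense(128) -> Dense(32) -> Dense(2)
total += conv_params(3, ch, 10); ch = 10
x, y = out_valid(x, 3), out_valid(y, 3)
flat = x * y * ch                # 38 * 69 * 10 = 26220
total += flat * 128 + 128
total += 128 * 32 + 32
total += 32 * 2 + 2              # assumes num_categories = 2

print(total)  # 3370632
```

Note that almost all of the parameters (3,356,288 of them) sit in the first Dense layer after Flatten, which is why the convolution/pooling stages matter: they shrink the tensor before it ever reaches a Dense layer.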

To generate one image from another, a common architecture is:

Input -> Convolution -> Pooling -> Convolution -> Pooling -> Transposed Convolution -> Transposed Convolution -> Output

The equivalent Keras code is:

from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose
from keras.models import Model

def My_Conv_Model(channels, pixels_x, pixels_y):
    img_input = Input(shape=(pixels_x, pixels_y, channels),
                      name='img_input')

    first_Conv2D = Conv2D(filters=40, kernel_size=(3, 3), data_format='channels_last',
                          activation='relu', padding='same')(img_input)
    first_Conv2D = MaxPooling2D(pool_size=(2, 2), padding='same',
                                data_format='channels_last')(first_Conv2D)

    second_Conv2D = Conv2D(filters=20, kernel_size=(3, 3), data_format='channels_last',
                           activation='relu', padding='same')(first_Conv2D)
    second_Conv2D = MaxPooling2D(pool_size=(2, 2), padding='same',
                                 data_format='channels_last')(second_Conv2D)

    third_Conv2D = Conv2D(filters=10, kernel_size=(3, 3), data_format='channels_last',
                          padding='same')(second_Conv2D)

    first_Conv2DTranspose = Conv2DTranspose(64, (5, 5), strides=2, padding='same')(third_Conv2D)

    second_Conv2DTranspose = Conv2DTranspose(32, (5, 5), strides=2, padding='same')(first_Conv2DTranspose)

    target = Conv2DTranspose(3, (5, 5), strides=2, padding='same')(second_Conv2DTranspose)

    seq = Model(inputs=img_input, outputs=target, name='Model')

    return seq

Total number of parameters for an input of shape (360, 640, 3): 79,849
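A quick sanity check that this architecture actually produces the shape asked for in the question: the two 2×2 poolings divide the spatial dimensions by 4, and the three stride-2 transposed convolutions multiply them by 8, taking (360, 640) to (720, 1280):

```python
def output_shape(h, w):
    # Two MaxPooling2D(pool_size=(2, 2), padding='same') layers: /4 total
    for _ in range(2):
        h, w = -(-h // 2), -(-w // 2)   # ceil division for 'same' padding
    # Three Conv2DTranspose(..., strides=2, padding='same') layers: *8 total
    for _ in range(3):
        h, w = h * 2, w * 2
    return h, w, 3  # the final Conv2DTranspose has 3 filters (RGB)

print(output_shape(360, 640))  # (720, 1280, 3)
```

So the network both upscales 2× overall, matching the (360, 640, 3) to (720, 1280, 3) mapping in the question, and does it with roughly 40× fewer parameters than the Dense-based classification model above.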
