I want to improve the loss and accuracy of my image classification AI model.


We are developing an AI model to distinguish between rabbits and turtles.
However, for some reason training is not going well, and the values of loss and acc (accuracy) do not improve.
Normally loss and acc are inversely related, so when one goes up the other should go down, but here loss increases sharply while acc stays flat around 0.5.
Is this a bad neural network design?
The environment, source code, and a screenshot of the results are below.

Specifications
·Windows 10
·Docker
·Python 3.7.3

import sys
import os
import numpy as np
import pandas as pd
import gc
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.utils import np_utils

class TrainModel:
  def __init__(self):
    input_dir='./Gazo'
    self.nb_classes=len([name for name in os.listdir(input_dir) if name!=".DS_Store"])
    x_train, x_test, y_train, y_test=np.load("./Gakushu.npy")
    # normalize data
    self.x_train=x_train.astype("float")/256
    self.x_test=x_test.astype("float")/256
    self.y_train=np_utils.to_categorical(y_train,self.nb_classes)
    self.y_test=np_utils.to_categorical(y_test,self.nb_classes)
  def train(self, input=None, kaisu=100):  # kaisu: number of training epochs
    model=Sequential()
    # K=32, M=3, H=3
    if input is None:
      model.add(Convolution2D(32,3,3, border_mode='same', input_shape=self.x_train.shape[1:]))
    else:
      model.add(Convolution2D(32,3,3, border_mode='same', input_shape=input))
    # K=64, M=3, H=3 (adjustment)
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))
    model.add(Convolution2D(64,3,3, border_mode='same'))
    model.add(Activation('relu'))
    # K=64, M=3, H=3 (adjustment)
    model.add(Convolution2D(64,3,3))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))
    model.add(Flatten()) 
    model.add(Dense(512))
    # bias nb_classes
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(self.nb_classes))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    if input is None:
      # Learn and Save Models
      model.fit(self.x_train, self.y_train, batch_size=32, nb_epoch=int(kaisu), validation_data=(self.x_test, self.y_test))
      hdf5_file="./Aimodel.hdf5"
      model.save_weights(hdf5_file)
      score=model.evaluate(self.x_test, self.y_test, verbose=0)
      print('---------------------------------------------------')
      print("")
      print("Training is over!")
      print("")
      print('loss=', score[0])
      print('accuracy =', score[1])
      print("")
    return model
if __name__ == "__main__":
  args=sys.argv
  # the number of training epochs can be given as the first command-line argument
  kaisu=int(args[1]) if len(args)>1 else 100
  train=TrainModel()
  train.train(kaisu=kaisu)
  gc.collect()

(Screenshot of the training results)

Reference Page: https://qiita.com/tsunaki/items/608ff3cd941d82cd656b

python docker machine-learning deep-learning

2022-09-30 17:50

1 Answer

Judging from the results, the model has not learned anything from the very first epoch.
I suspect the model structure is too complex for only 168 images of training data.
I think it would be better either to increase the amount of training data with data augmentation, or to reduce the size and number of filters in the convolutional and pooling layers; both are sketched below.
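For example, a minimal data-augmentation sketch using Keras's ImageDataGenerator; the parameter values and the augment helper are only illustrative, and x_train / y_train are assumed to be the arrays loaded from Gakushu.npy before normalization:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings for a small photo dataset
datagen = ImageDataGenerator(
    rotation_range=15,        # rotate by up to 15 degrees
    width_shift_range=0.1,    # shift horizontally by up to 10%
    height_shift_range=0.1,   # shift vertically by up to 10%
    horizontal_flip=True)     # mirror left/right

def augment(x_train, y_train, copies=4):
    # Return the original data plus `copies` randomly transformed versions,
    # turning 168 images into 168 * (copies + 1)
    xs, ys = [x_train], [y_train]
    for _ in range(copies):
        xs.append(np.array([datagen.random_transform(img) for img in x_train]))
        ys.append(y_train)
    return np.concatenate(xs), np.concatenate(ys)

And a sketch of a smaller network with fewer filters and a smaller dense layer, written in the same old Keras style as the question and assuming x_train and nb_classes are already defined:

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense

# One small convolution block (16 filters) and a 64-unit dense layer
model = Sequential()
model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])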
Also, depending on how the random seed is set, the result changes every time you train, so I think it is better to fix the seed in advance, as in the sketch below.
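For example, a minimal sketch of fixing the seeds at the top of the script, assuming TensorFlow is the Keras backend (the seeding call differs by version: tf.set_random_seed in TF 1.x, tf.random.set_seed in TF 2.x):

import os
import random
import numpy as np
import tensorflow as tf

# Fix all relevant random seeds so repeated training runs are comparable
os.environ["PYTHONHASHSEED"] = "0"
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)  # on TF 2.x use tf.random.set_seed(42) instead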


2022-09-30 17:50


