Is there a difference between Keras and scikit-learn in how accuracy is calculated?

Asked 2 years ago, Updated 2 years ago, 102 views

I am currently using a Keras CNN for multi-label image classification.
In addition to the accuracy reported by Keras, I also recomputed the results with several scikit-learn metrics (Recall, Precision, F1 score, Accuracy).
As a result, the Accuracy calculated by Keras comes out at approximately 90%, while scikit-learn shows only about 60%.
I don't know why this happens, so I would appreciate an explanation.
Is Keras's calculation strange?

Is there a difference in how accuracy is calculated between Keras and scikit-learn?

If anything is missing, I will add it.

I have copied the code here.

The model is fine-tuned from MobileNetV2.

# Imports needed for this snippet (assuming tf.keras)
from tensorflow.keras.layers import Input, Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import optimizers

# Define the input tensor
input_tensor = Input(shape=(img_width, img_height, 3))

base_model = MobileNetV2(input_tensor=input_tensor, include_top=False, weights='imagenet')

#model.summary()

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
predictions=Dense(6, activation='sigmoid')(x)

# Network definition
model = Model(inputs=base_model.input, outputs=predictions)
print("{} layers".format(len(model.layers)))

sgd = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)

model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=["acc"])

history=model.fit(X_train,y_train,epochs=50,validation_data=(X_val,y_val),batch_size=64,verbose=2)

# Model evaluation
def model_evaluate():
    score = model.evaluate(X_test, y_test, verbose=1)
    print("evaluate loss: {:.4f}".format(score[0]))
    print("evaluate acc: {:.1%}".format(score[1]))

model_evaluate()
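For context on what that "acc" figure measures: with a sigmoid output layer and binary_crossentropy loss, Keras resolves metrics=["acc"] to binary accuracy, i.e. each of the 6 label outputs is rounded at 0.5 and counted right or wrong individually, then averaged over every label slot. A minimal NumPy sketch of that calculation (the arrays below are hypothetical):

import numpy as np

# Hypothetical ground truth and sigmoid outputs: 2 samples x 6 labels
y_true = np.array([[1, 0, 0, 1, 0, 0],
                   [0, 1, 0, 0, 0, 1]])
y_prob = np.array([[0.9, 0.2, 0.1, 0.4, 0.3, 0.1],
                   [0.2, 0.8, 0.1, 0.1, 0.2, 0.7]])

# Keras-style binary accuracy: round at 0.5, average over all 12 label slots
binary_acc = np.mean((y_prob >= 0.5).astype(int) == y_true)
print("binary accuracy: {:.4f}".format(binary_acc))  # 11/12 = 0.9167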

Processing with scikit-learn

from sklearn.metrics import precision_score,recall_score,f1_score,accuracy_score
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

predictions = model.predict(X_test)

for val in thresholds:
    print("For threshold:", val)
    pred = predictions.copy()

    pred[pred >= val] = 1
    pred[pred < val] = 0
  
    precision=precision_score(y_test, pred, average='micro')
    recall=recall_score(y_test, pred, average='micro')
    f1 = f1_score(y_test, pred, average='micro')
   
    print("Micro-average quality numbers")
    print("Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}".format(precision,recall,f1))

scikit-learn results

For threshold: 0.1
Micro-average quality numbers
Precision: 0.3776, Recall: 0.8727, F1-measure: 0.5271
For threshold: 0.2
Micro-average quality numbers
Precision: 0.4550, Recall: 0.8033, F1-measure: 0.5810
For threshold: 0.3
Micro-average quality numbers
Precision: 0.5227, Recall: 0.7403, F1-measure: 0.6128
For threshold: 0.4
Micro-average quality numbers
Precision: 0.5842, Recall: 0.6702, F1-measure: 0.6243
For threshold: 0.5
Micro-average quality numbers
Precision: 0.6359, Recall: 0.5858, F1-measure: 0.6098
For threshold: 0.6
Micro-average quality numbers
Precision: 0.6993, Recall: 0.4707, F1-measure: 0.5626
For threshold: 0.7
Micro-average quality numbers
Precision: 0.7520, Recall: 0.3383, F1-measure: 0.4667
For threshold: 0.8
Micro-average quality numbers
Precision: 0.7863, Recall: 0.2132, F1-measure: 0.3354
For threshold: 0.9
Micro-average quality numbers
Precision: 0.8987, Recall: 0.1016, F1-measure: 0.1825

machine-learning keras scikit-learn

2022-09-30 11:29

1 Answer

Thanks for your efforts.
I imagine that Keras and scikit-learn calculate accuracy in the same way.
Even so, I think it is quite possible that "the Accuracy calculated by Keras shows about 90 percent while the scikit-learn figures are all around 60 percent."
If you show exactly what the multi-label image classification task is, the Keras code, the specific scikit-learn processing, and so on, you are more likely to get an answer.
As for Keras and scikit-learn, why not check articles on the Internet about how each of them calculates Accuracy?
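As one concrete way to check this: the ~60% figures quoted in the question are micro-averaged precision/recall/F1, which only look at the positive labels, while Keras's binary accuracy also credits every correctly predicted 0. Since multi-label targets are usually mostly zeros, the two can diverge sharply on the same predictions. A small hypothetical example (all numbers made up):

import numpy as np
from sklearn.metrics import f1_score, accuracy_score

# Hypothetical sparse multi-label data: 4 samples x 6 labels
y_true = np.array([[1, 0, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0, 0],
                   [0, 0, 1, 0, 0, 0],
                   [0, 0, 0, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 0, 0],   # all 6 labels correct
                   [0, 0, 0, 0, 0, 0],   # one missed positive
                   [0, 1, 1, 0, 0, 0],   # one false positive
                   [0, 0, 0, 1, 0, 0]])  # one missed positive

# Keras-style binary accuracy: 21 of 24 label slots match
print("binary accuracy:", np.mean(y_pred == y_true))           # 0.875

# scikit-learn micro-averaged F1 on the same predictions
print("micro F1:", f1_score(y_true, y_pred, average='micro'))  # ~0.667

# scikit-learn subset accuracy: only the first sample matches exactly
print("subset accuracy:", accuracy_score(y_true, y_pred))      # 0.25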


2022-09-30 11:29


