
[Python] Why is my neural network's val_accuracy always the same?

Discussion in 'Python' started by Stack, September 27, 2024 at 18:32.

  1. Stack

    Stack (Participating Member)

    I have started studying ML using Keras and TensorFlow, and I wanted to train a neural network on a TSV file. However, my val_accuracy is always the same. The TSV file has 1000 input features per sample and one output label that can only be 0 or 1.

    I've tried different optimizers and losses, and varied the number of Dense layers, the units, and the activation functions. Here is my model:

    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Dense(1000, activation=keras.activations.relu),
        keras.layers.Dense(200, activation=keras.activations.relu),
        keras.layers.Dense(50, activation=keras.activations.relu),
        keras.layers.Dense(2, activation=keras.activations.softmax)
    ])

    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    # model.fit returns a History object, not a loss value
    history = model.fit(train_data, train_labels, epochs=80, validation_split=0.2)
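
    For context, here is a minimal sketch of how train_data and train_labels might be prepared from the TSV file. The file layout (tab-separated feature columns followed by a trailing 0/1 label) is an assumption, and the inline string below is a tiny stand-in for the real file:

    ```python
    import io
    import numpy as np

    # Stand-in for the real TSV file: two rows, 4 feature columns plus a
    # trailing 0/1 label (the real file would have 1000 feature columns)
    tsv = io.StringIO("0.1\t2.0\t30.0\t4.0\t0\n0.2\t2.5\t10.0\t4.5\t1\n")
    data = np.loadtxt(tsv, delimiter="\t")
    features = data[:, :-1].astype("float32")
    labels = data[:, -1].astype("int64")

    # Standardize each feature column; feeding unscaled inputs to Dense/relu
    # layers is a common reason a network collapses to predicting one class
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8
    features = (features - mean) / std
    ```

    With the real file you would pass a path instead of the StringIO object; the standardization step is the part worth keeping either way.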


    After model.fit I get this result:

    Epoch 76/80
    4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 15ms/step - accuracy: 0.6595 - loss: 0.6508 - val_accuracy: 0.8214 - val_loss: 0.5894
    Epoch 77/80
    4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.6449 - loss: 0.6561 - val_accuracy: 0.8214 - val_loss: 0.5887
    Epoch 78/80
    4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 18ms/step - accuracy: 0.6876 - loss: 0.6397 - val_accuracy: 0.8214 - val_loss: 0.5879
    Epoch 79/80
    4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.6126 - loss: 0.6682 - val_accuracy: 0.8214 - val_loss: 0.5873
    Epoch 80/80
    4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 17ms/step - accuracy: 0.6584 - loss: 0.6504 - val_accuracy: 0.8214 - val_loss: 0.5865


    where val_accuracy is always 0.8214. After model.predict:

    [0.8214285969734192, 0.8214285969734192, 0.8214285969734192, 0.8214285969734192...


    So here I get 0.8214 too. Maybe I don't understand how to use Dense layers. How can I fix this?
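
    One thing worth checking before touching the layers: a val_accuracy that never moves and also appears verbatim in the predictions usually means the model outputs the same class for every sample, so the accuracy is just the majority class's share of the validation split. A quick sanity check (NumPy only; the 23-of-28 label split below is hypothetical, chosen because it reproduces the 0.8214 figure):

    ```python
    import numpy as np

    # Hypothetical stand-in for the validation labels: 23 ones out of 28
    labels = np.array([1] * 23 + [0] * 5)

    # A classifier that always predicts the majority class scores exactly
    # that class's fraction of the data
    majority_fraction = np.bincount(labels).max() / len(labels)
    print(round(float(majority_fraction), 4))  # 23/28 ≈ 0.8214
    ```

    If your validation labels show a similar imbalance, the model is likely stuck predicting the majority class; scaling the inputs or passing class_weight to model.fit are common things to try.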

