Peer Herholz (he/him)
Research affiliate - NeuroDataScience lab at MNI/McGill, UNIQUE
Member - BIDS, ReproNim, Brainhack, Neuromod, OHBM SEA-SIG
@peerherholz
Let's imagine the following scenario:
Your PI tells you to run some machine learning analyses on your data (because buzz words, and we all need top-tier publications and that sweet grant money). Specifically, you should use resting state connectivity data to predict the age of participants (sounds familiar, eh?). So you go ahead, gather some data, apply a random forest (an ensemble of decision trees) and ...
Whoa, let's go back a few steps and actually briefly define what we're talking about here, starting with machine learning and also its components & steps. Of course, also reproducibility.
Now, let's see what this actually looks like. First, we get the data:
import urllib.request
url = 'https://www.dropbox.com/s/v48f8pjfw4u2bxi/MAIN_BASC064_subsamp_features.npz?dl=1'
urllib.request.urlretrieve(url, 'MAIN2019_BASC064_subsamp_features.npz')
('MAIN2019_BASC064_subsamp_features.npz', <http.client.HTTPMessage at 0x109f3a8f0>)
and then inspect it:
import numpy as np
data = np.load('MAIN2019_BASC064_subsamp_features.npz')['a']
data.shape
(155, 2016)
The data comprises 155 participants × 2016 features, the features being the flattened connections of a 64-region whole-brain connectome (64 × 63 / 2 = 2016). We will also visualize it to better grasp what's going on:
import plotly.express as px
from IPython.display import display, HTML
from plotly.offline import init_notebook_mode, plot
fig = px.imshow(data, labels=dict(x="features (whole brain connectome connections)", y="participants"),
                height=800, aspect='auto')
fig.update(layout_coloraxis_showscale=False)
init_notebook_mode(connected=True)
fig.show()
#plot(fig, filename = 'input_data.html')
#display(HTML('input_data.html'))
Besides the input data, we also need our labels:
url = 'https://www.dropbox.com/s/ofsqdcukyde4lke/participants.csv?dl=1'
urllib.request.urlretrieve(url, 'participants.csv')
('participants.csv', <http.client.HTTPMessage at 0x112da47f0>)
Which we then load and check as well:
import pandas as pd
labels = pd.read_csv('participants.csv')['AgeGroup']
labels.describe()
count      155
unique       6
top        5yo
freq        34
Name: AgeGroup, dtype: object
For a better intuition, we're also going to visualize the labels and their distribution:
fig = px.histogram(labels, marginal='box', template='plotly_white')
fig.update_layout(showlegend=False, width=800, height=600)
init_notebook_mode(connected=True)
fig.show()
#plot(fig, filename = 'labels.html')
#display(HTML('labels.html'))
And we're ready to create our machine learning analysis pipeline using scikit-learn, within which we will scale our input data, train a random forest and test its predictive performance. We import the required functions and classes:
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate, cross_val_score
and set up a scikit-learn pipeline:
pipe = make_pipeline(
StandardScaler(),
RandomForestClassifier()
)
That's all we need to run the analysis, computing accuracy and mean absolute error:
# return_estimator=True also keeps each fold's fitted estimator for later inspection
acc_val = cross_validate(pipe, data, pd.Categorical(labels).codes, cv=10, return_estimator=True)
acc = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=10)
mae = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=10,
scoring='neg_mean_absolute_error')
which we can then inspect for each CV fold:
for i in range(10):
print(
'Fold {} -- Acc = {}, MAE = {}'.format(i, np.round(acc[i], 3), np.round(-mae[i], 3))
)
Fold 0 -- Acc = 0.312, MAE = 1.0
Fold 1 -- Acc = 0.438, MAE = 0.938
Fold 2 -- Acc = 0.625, MAE = 1.812
Fold 3 -- Acc = 0.5, MAE = 1.562
Fold 4 -- Acc = 0.75, MAE = 0.875
Fold 5 -- Acc = 0.6, MAE = 1.133
Fold 6 -- Acc = 0.667, MAE = 0.933
Fold 7 -- Acc = 0.667, MAE = 0.867
Fold 8 -- Acc = 0.533, MAE = 0.6
Fold 9 -- Acc = 0.333, MAE = 1.333
and overall:
print('Accuracy = {}, MAE = {}, Chance = {}'.format(np.round(np.mean(acc), 3),
np.round(np.mean(-mae), 3),
np.round(1/len(labels.unique()), 3)))
Accuracy = 0.542, MAE = 1.105, Chance = 0.167
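As a quick sanity check (my addition, not part of the original notebook), the chance level can also be estimated empirically with scikit-learn's DummyClassifier, which ignores the features entirely:
from sklearn.dummy import DummyClassifier

# a baseline classifier that always predicts the most frequent class
dummy = DummyClassifier(strategy='most_frequent')
chance = cross_val_score(dummy, data, pd.Categorical(labels).codes, cv=10)
print('Empirical chance accuracy = {}'.format(np.round(np.mean(chance), 3)))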
That's a pretty good performance, eh? The amazing power of machine learning! But there's more: you also try out an ANN (artificial neural network) to see if it provides even better predictions.
That's as easy as the "basic machine learning pipeline": just import the respective functions and classes:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Define a simple ANN with 4 layers:
model = keras.Sequential()
# input layer expects one value per connectome feature
model.add(layers.Dense(100, activation="relu", kernel_initializer='he_normal', bias_initializer='zeros', input_shape=(data.shape[1],)))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(50, activation="relu"))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(25, activation="relu"))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(len(labels.unique()), activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
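To double-check what we just defined, we can inspect the architecture (my addition):
# print a per-layer overview of the network and its parameter counts
model.summary()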
Split the data into train and test sets again:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, pd.Categorical(labels).codes, test_size=0.2, shuffle=True)
and train your model:
%time fit = model.fit(X_train, y_train, epochs=300, batch_size=20, validation_split=0.2)
Epoch 1/300
5/5 [==============================] - 1s 43ms/step - loss: 3.0540 - accuracy: 0.1515 - val_loss: 1.7936 - val_accuracy: 0.1600
...
Epoch 300/300
5/5 [==============================] - 0s 7ms/step - loss: 0.1129 - accuracy: 1.0000 - val_loss: 1.0529 - val_accuracy: 0.5600
CPU times: user 15.3 s, sys: 2.15 s, total: 17.4 s
Wall time: 12 s
That's it! After some quick exploration of the learning success:
import plotly.graph_objects as go
epoch = np.arange(300) + 1
fig = go.Figure()
# Add traces
fig.add_trace(go.Scatter(x=epoch, y=fit.history['accuracy'],
mode='lines+markers',
name='training set'))
fig.add_trace(go.Scatter(x=epoch, y=fit.history['val_accuracy'],
mode='lines+markers',
name='validation set'))
fig.update_layout(title="Accuracy in training and validation set",
template='plotly_white')
fig.update_xaxes(title_text='Epoch')
fig.update_yaxes(title_text='Accuracy')
fig.show()
#plot(fig, filename = 'acc_eyes.html')
#display(HTML('acc_eyes.html'))
you evaluate it on the test set:
score, acc = model.evaluate(X_test, y_test,
batch_size=2)
print('Test score:', score)
print('Test accuracy:', acc)
16/16 [==============================] - 0s 933us/step - loss: 1.1927 - accuracy: 0.6129
Test score: 1.192651391029358
Test accuracy: 0.6129032373428345
Amazing, you did it, everything is good... or is it?
via https://tse1.mm.bing.net/th?id=OIP.0BdpxPlMG468RwbZRaFAKQHaEK&pid=Api
Your PI is so happy about these fantastic results that now everyone in the lab should run machine learning analyses. Thus, you are told to give everyone in the lab your analysis script, and a few colleagues immediately try it out using the exact same version of the script and data. (I actually asked 6 real people to run it.)
However, all of them get different results... for the random forest:
Accuracy: 0.52, 0.53, 0.55, 0.58, 0.55
MAE: 1.11, 1.10, 1.12, 1.05, 1.04
and for the deep learning analysis, i.e. the ANN:
Test score: 1.55, 1.44, 1.65, 1.54, 0.83
Test accuracy: 0.55, 0.45, 0.55, 0.55, 0.74
Shocked, you rerun your old analyses and it gets worse: your results also differ from the previous run.
print('previous Accuracy = {}, previous MAE = {}, previous Chance = {}'.format(np.round(np.mean(acc), 3),
np.round(np.mean(-mae), 3),
np.round(1/len(labels.unique()), 3)))
acc = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=10)
mae = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=10,
scoring='neg_mean_absolute_error')
print('Accuracy = {}, MAE = {}, Chance = {}'.format(np.round(np.mean(acc), 3),
np.round(np.mean(-mae), 3),
np.round(1/len(labels.unique()), 3)))
previous Accuracy = 0.613, previous MAE = 1.105, previous Chance = 0.167
Accuracy = 0.537, MAE = 1.142, Chance = 0.167
score, acc = model.evaluate(X_test, y_test,
batch_size=2)
print('previous Test score:', score)
print('previous Test accuracy:', acc)
fit = model.fit(X_train, y_train, epochs=300, batch_size=20, validation_split=0.2, verbose=0)
score, acc = model.evaluate(X_test, y_test,
batch_size=2)
print('Test score:', score)
print('Test accuracy:', acc)
16/16 [==============================] - 0s 1ms/step - loss: 1.1927 - accuracy: 0.6129
previous Test score: 1.192651391029358
previous Test accuracy: 0.6129032373428345
16/16 [==============================] - 0s 1ms/step - loss: 1.7225 - accuracy: 0.5806
Test score: 1.7224695682525635
Test accuracy: 0.5806451439857483
What is going on...
The inconvenient truth is: as with every other analysis you might run within the field of neuroimaging (or any other research field), there is a substantial number of factors that contribute to the reproducibility of your machine learning analyses. Even more so, many of the problems are actually amplified by the complex nature of these analyses.
But why care about reproducibility in machine learning at all, and how can some of the underlying problems be addressed? Let's have a look...
Why reproducibility in machine learning?
We all know the reproducibility crisis in neuroimaging... but we also know that neuroimaging is by far no exception to the rule, as many other, basically all, research fields have comparable problems. This also includes machine learning...
adapted from Martina Vilas
Besides obviously being a major shortcoming, as indicated by this quote from Popper (The Logic of Scientific Discovery),
"Non-reproducible single occurrences are of no significance to science."
adapted from Suneeta Mall
what are crucial aspects when talking about reproducibility in machine learning (with a focus on neuroimaging)?
adapted from Suneeta Mall
reproducibility helps with understanding, explaining, and debugging & is crucial to reverse engineering
Why is this important?
- machine learning is very difficult to understand, explain and debug
adapted from Suneeta Mall
Our example:
- our model provided us with the obtained accuracy
- our model "learned" something from our data
- some features are more important than others
Our problem(s):
- we obtain a different accuracy every time we run our model
- we don't know what exactly the model learned from the data
- we don't know which features are more important than others
adapted from Suneeta Mall
Why is this important?
- we use machine learning to make predictions about humans (e.g. group clustering, treatment/therapy plans, disease propagation, etc.) and unreproducible model predictions can lead to devastating errors
Our example:
- ideally, our model "learned" a way to reproducibly predict the age of participants from their resting state connectome
Our problem(s):
- we don't know whether our model actually "learned" in a reproducible manner, as the outcome varies from run to run of the model
adapted from Suneeta Mall
reproducible (machine learning) results are more credible
Why is this important?
- science requires FAIR, verifiable, reliable, unbiased & ethical analyses and results
- this of course also holds for results from machine learning analyses
adapted from Suneeta Mall
Our example:
- we would like to show that age can be predicted from resting state connectomes in a verifiable and unbiased way that is ethical and FAIR
Our problem(s):
- given that our results vary from run to run, their credibility is limited
reproducibility helps with extensibility of successive steps in a machine learning analysis
Why is this important?
- many parts of machine learning analyses (e.g. feature engineering, layers of an ANN, etc.) need to be reproducible so that the respective analysis (or pipeline) can be extended
- otherwise, reproducible subsequent steps like different post-processing options and augmentation are not feasible/possible
adapted from Suneeta Mall
Our example:
- we would like to know which features are important for the prediction of age from the model
- we would like to augment our model with new features from a different modality (e.g. DTI, behavior, etc.)
Our problem(s):
- we don't know whether the set of important features is actually reliable
- we can't augment our model because its current state produces unreproducible results
reproducibility helps with model training through data generation
Why is this important?
- machine learning analyses need tremendous amounts of data to be trained, evaluated and tested on
- despite advances in data sharing practices, standardization and quality control, there's actually not enough data for most planned (or even conducted) analyses
- some data is particularly scarce (e.g. disorders/diseases with a low prevalence and/or broad spectra)
- the generation of synthetic data via machine learning (e.g. GANs) offers amazing possibilities to address this problem
- for synthetic data to be reproducible, the state of the model needs to be reproducible
adapted from Suneeta Mall
Our example:
- if we wanted to generate more data based on the data we have (e.g. all data, important features, etc.), the generation of this data should be reproducible
- if we used our data for synthesizing, the derivation of this data should be reproducible
Our problem(s):
- our model performance, and thus also the derivation of data, is not reproducible
At this point you might think:
via https://c.tenor.com/LOuJtZ-WL3kAAAAC/holy-forking-shirt-shocked.gif
and you would be right: there are so many things to consider and so much that can go wrong.
However, we shouldn't give up quite yet; instead, let's have a look at the underlying problems and how we can address them!
Challenges for reproducible machine learning
Here's another inconvenient truth: every single aspect involved in a machine learning analysis creates variability and thus entails a major hurdle towards achieving reproducibility.
This not only concerns the computational infrastructure one is working with, but also the data, as well as common practices during the application of these analyses.
adapted from Suneeta Mall
- machine learning algorithms require intensive computation and thus can take a very long time to run
- to address this, parallelism & multiple processing units are used: central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs), etc.
adapted from Suneeta Mall
- some reproducibility problems arise from the underlying floating-point computations and appear independent of the hardware
- parallelism on CPUs, both intra-op & inter-op (within one operation/across multiple operations), can produce different outcomes when run repeatedly
- streaming multiprocessor (SM) units in GPUs, i.e. asynchronous computation, can lead to variable outcomes during repeated runs
- different architectures can also change model outcomes
adapted from Suneeta Mall
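To make the floating-point point concrete: merely changing the order in which values are summed, which is exactly what parallel execution does, can change the result. A minimal sketch (my addition, not part of the original notebook):
import numpy as np

# floating-point addition is not associative: combining partial results
# in a different order (as parallel threads do) changes the outcome
rng = np.random.default_rng(0)
values = rng.standard_normal(1_000_000).astype(np.float32)

sum_in_order = np.sum(values)                    # one reduction order
sum_shuffled = np.sum(rng.permutation(values))   # another reduction order

print(sum_in_order == sum_shuffled)   # typically False
print(sum_in_order - sum_shuffled)    # small, but non-zero, difference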
- common frameworks for machine learning analyses actually don't guarantee 100% reproducibility, including cudnn, pytorch and tensorflow
- they may also contain bugs, which can be hard to find given their high-level APIs
adapted from Suneeta Mall
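That said, recent versions do expose switches that trade speed for run-to-run determinism. A minimal sketch (my addition; assumes TensorFlow >= 2.8, where these utilities exist):
import tensorflow as tf

# seed the python, numpy and tensorflow random number generators at once
tf.keras.utils.set_random_seed(42)
# request deterministic implementations of ops (e.g. on GPU) where available
tf.config.experimental.enable_op_determinism()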
- computation requires a complex stack of software that quite often interacts and has cross-dependencies: operating system, drivers, python, python packages, etc.
- numerical errors & instabilities are inherent to this software, e.g. we're talking about local minima/maxima & floating-point precision again
- such errors may lead unstable functions towards distinct local minima
print(sum([0.001 for _ in range(1000)]))
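On typical IEEE-754 double-precision hardware, this prints a value like 1.0000000000000007 rather than the mathematically expected 1.0: 0.001 has no exact binary representation, so the rounding errors of the 1000 additions accumulate.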
machine learning algorithms introduce further problems: their inherent randomness can lead to unreproducible outcomes and their computation can lead to non-deterministic results (adapted from Suneeta Mall)
Beside the software and algorithms, the practices and processes in machine learning also heavily influence the reproducibility of results, most prominently randomness, because it's basically everywhere:
random initializations
random augmentations
random noise introduction
data shuffles
adapted from Suneeta Mall
Here's just a brief example concerning data shuffling in training sets:
from sklearn.model_selection import train_test_split

# without a fixed random_state, each split (and thus each histogram) differs
for i in range(100):
    X_train, X_test, y_train, y_test = train_test_split(data, pd.Categorical(labels).codes, test_size=0.2, shuffle=True)
    pd.Series(y_train).plot(kind='hist', stacked=True)
machine learning analyses need tremendous amounts of data, the buzz word being "big data"; data standardization & management are still not frequently applied, even in small datasets, and "big data" poses additional major problems. When working with data, especially "big data", various aspects need to be addressed:
data management
data provenance
data poisoning
under-/over-represented data
adapted from Suneeta Mall
in machine learning analyses (e.g. feature engineering, augmentation, etc.), models are used more than once, which is grounded in two main factors (among others): training models can take a very long time as previously outlined and additionally consumes large amounts of resources, so re-using trained models or their weights is necessary to reduce these costs and to further validate the models in terms of reproducibility, reliability, robustness and generalization. Furthermore, everything in machine learning is more or less affected by concept drift: the concept of the world that surrounds us and the things within is constantly changing (e.g. disease/disorder subtypes and progressions, etc.), and we as biological agents need to make artificial agents aware of this; in machine learning this process is referred to as continual learning (adapted from Suneeta Mall).
At this point you might think:
via GIPHY
But again: there's something we can do to make it at least slightly better. Thus, let's get to it!
Increasing reproducibility in machine learning¶
Efforts and tools targeting reproducibility in neuroimaging can also be utilized to increase reproducibility in machine learning; this thus also entails tools and resources provided by ReproNim and its members, as well as adjacent initiatives.
In more detail, we will have a look at the following challenges and the respectively helpful aspects:
software: virtualization of computing environments using neurodocker
algorithm/practices/processes: dealing with randomness
data: standardization (BIDS), tracking everything (git/github & datalad), sharing
ReproNim doesn't offer hardware resources yet, so we have to keep that aside for now.
virtualization of computing environments using neurodocker¶
A computing environment is composed of many different aspects: the operating system, drivers, python, python packages and the versions thereof. Obviously, we can't send around our machines via post.
So what can we do to address this?
We can make use of virtualization technologies that aim to capture and isolate computing environments so that they can be shared and faithfully re-created elsewhere.
Overall, there are several types of virtualization that are situated across different levels:
python virtualization like conda environments
software containers like docker & singularity
virtual machines like Virtualbox & VMware
As discussed before, machine learning analyses need a lot of computational resources to run in a feasible amount of time. Thus, we usually want to make use of HPCs, which excludes virtual machines from the suitable options, as their handling, management and resource allocation don't really scale.
In contrast, both python virtualization and software containers work reasonably well on HPCs (regarding utilization, management and resource allocation).
The creation, combination and management of both is made incredibly easy and even reproducible via neurodocker!
neurodocker is itself containerized software aimed at the reproducible generation of other containerized software; it supports python virtualization using conda, docker and singularity, as well as neuroimaging-specific software.
You might wonder: "why do we need both?" python virtualization alone does not account for the often required libraries and binaries on other levels (beyond python); software containers, in contrast, share the host system's kernel and (can) include everything from libraries and binaries upwards. In order to create a software container, here via docker, that entails a python environment with all packages in the versions that were used to run the initial analyses, we only need to do the following:
%%bash
docker run kaczmarj/neurodocker:0.6.0 generate docker --base=ubuntu:18.04 \
--pkg-manager=apt \
--install git nano unzip \
--miniconda \
version=latest \
create_env='repronim_ml' \
activate=true \
conda_install="python=3.10 numpy=1.22 pandas=1.4 scikit-learn=1.0.2 seaborn=0.11" \
pip_install="tensorflow==2.8 datalad[full]==0.15.6" \
--add-to-entrypoint "source activate repronim_ml" \
--entrypoint "/neurodocker/startup.sh python" > Dockerfile
docker build -t repronim_ml .
With that, we already have our software container ready to go and can run our machine learning analyses in it:
%%bash
docker images
REPOSITORY                TAG      IMAGE ID       CREATED        SIZE
repronim_ml               latest   86afa00fde81   23 hours ago   2.65GB
peerherholz/repronim_ml   latest   86afa00fde81   23 hours ago   2.65GB
kaczmarj/neurodocker      0.6.0    7029883696dd   2 years ago    79.3MB
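The listing also shows the image under the peerherholz/repronim_ml name, i.e. pushed to Docker Hub so that others can pull it. As a general sketch of how that is done (replace <username> with your own Docker Hub account), it only takes two commands:
%%bash
# tag the local image for a Docker Hub namespace
docker tag repronim_ml <username>/repronim_ml:latest
# push it so collaborators (and datalad containers-add below) can pull it
docker push <username>/repronim_ml:latest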
Here, the software container is set up in a way that it automatically executes/runs python, specifically the version of the conda environment we created. Thus, we can simply provide a python script as input. (Unfortunately, calling it via this interface won't work. Therefore, I'm showing the command/code here but will run it in my terminal.)
%%bash
docker run -it --rm -v /Users/peerherholz/google_drive/GitHub/repronim_ML/:/data repronim_ml /data/ml_reproducibility_data.py
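Here, -v bind-mounts the local project directory into the container at /data, making the script visible to the containerized python, -it attaches an interactive terminal and --rm removes the container again after the run.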
Fantastic, now we can run and further test our machine learning analyses in a dedicated computing environment that is isolated and shareable!
Now that a large portion of the software dependencies are addressed, we can continue with the next part: algorithm/practices/processes!
algorithm/practices/processes: dealing with randomness¶
randomness is basically everywhere in the world of machine learning and can lead to unreproducible results. So what can we do? Simply put, we need to seed the randomness in our machine learning analyses!
Most software has the option to set the seed for randomness, one way or another; how the seed can be set unfortunately varies between software packages, and sometimes the seed needs to be set at several instances. For example, in our machine learning analyses using a random forest, we didn't seed the randomness so far, resulting in different outcomes at every run, because e.g. the random forest's internal randomness (the bootstrapping of samples and sub-sampling of features) and any shuffled training and test set splits would vary:
from sklearn.model_selection import StratifiedKFold

cv = StratifiedKFold()

pipe = make_pipeline(
    StandardScaler(),
    RandomForestClassifier()
)

for i in range(10):
    acc = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=cv)
    mae = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=cv,
                          scoring='neg_mean_absolute_error')
    print('Accuracy run {} = {}, MAE run {} = {}, Chance run {} = {}'.format(i, np.round(np.mean(acc), 3),
                                                                             i, np.round(np.mean(-mae), 3),
                                                                             i, np.round(1/len(labels.unique()), 3)))
Accuracy run 0 = 0.484, MAE run 0 = 1.065, Chance run 0 = 0.167
Accuracy run 1 = 0.555, MAE run 1 = 1.09, Chance run 1 = 0.167
Accuracy run 2 = 0.542, MAE run 2 = 1.052, Chance run 2 = 0.167
Accuracy run 3 = 0.561, MAE run 3 = 1.006, Chance run 3 = 0.167
Accuracy run 4 = 0.529, MAE run 4 = 1.168, Chance run 4 = 0.167
Accuracy run 5 = 0.497, MAE run 5 = 1.052, Chance run 5 = 0.167
Accuracy run 6 = 0.535, MAE run 6 = 1.045, Chance run 6 = 0.167
Accuracy run 7 = 0.523, MAE run 7 = 1.161, Chance run 7 = 0.167
Accuracy run 8 = 0.555, MAE run 8 = 1.077, Chance run 8 = 0.167
Accuracy run 9 = 0.568, MAE run 9 = 0.961, Chance run 9 = 0.167
Using the random_state argument in all functions/classes that deal with randomness to set the seed results in reproducible outcomes at every run:
cv = StratifiedKFold(random_state=42, shuffle=True)

pipe = make_pipeline(
    StandardScaler(),
    RandomForestClassifier(random_state=42)
)

for i in range(10):
    acc = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=cv)
    mae = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=cv,
                          scoring='neg_mean_absolute_error')
    print('Accuracy run {} = {}, MAE run {} = {}, Chance run {} = {}'.format(i, np.round(np.mean(acc), 3),
                                                                             i, np.round(np.mean(-mae), 3),
                                                                             i, np.round(1/len(labels.unique()), 3)))
Accuracy run 0 = 0.548, MAE run 0 = 0.955, Chance run 0 = 0.167
Accuracy run 1 = 0.548, MAE run 1 = 0.955, Chance run 1 = 0.167
Accuracy run 2 = 0.548, MAE run 2 = 0.955, Chance run 2 = 0.167
Accuracy run 3 = 0.548, MAE run 3 = 0.955, Chance run 3 = 0.167
Accuracy run 4 = 0.548, MAE run 4 = 0.955, Chance run 4 = 0.167
Accuracy run 5 = 0.548, MAE run 5 = 0.955, Chance run 5 = 0.167
Accuracy run 6 = 0.548, MAE run 6 = 0.955, Chance run 6 = 0.167
Accuracy run 7 = 0.548, MAE run 7 = 0.955, Chance run 7 = 0.167
Accuracy run 8 = 0.548, MAE run 8 = 0.955, Chance run 8 = 0.167
Accuracy run 9 = 0.548, MAE run 9 = 0.955, Chance run 9 = 0.167
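As a side note (a pattern sketch, not part of the original analyses): defining a single constant and passing it to every function/class that accepts random_state keeps all seeds in one place and makes it harder to forget one:
SEED = 42  # single source of truth for all randomness in the pipeline

cv = StratifiedKFold(shuffle=True, random_state=SEED)
pipe = make_pipeline(
    StandardScaler(),
    RandomForestClassifier(random_state=SEED)
)
acc = cross_val_score(pipe, data, pd.Categorical(labels).codes, cv=cv)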
Regarding our ANN, we would need to set the random_state argument of the train_test_split function to create training and test sets in a reproducible manner:
# without a fixed random_state the splits (and their label distributions) differ every iteration
for i in range(100):
    X_train, X_test, y_train, y_test = train_test_split(data, pd.Categorical(labels).codes, test_size=0.2, shuffle=True)
    pd.Series(y_train).plot(kind='hist', stacked=True)
# with random_state=42 every iteration produces the exact same split
for i in range(100):
    X_train, X_test, y_train, y_test = train_test_split(data, pd.Categorical(labels).codes,
                                                        test_size=0.2, shuffle=True,
                                                        random_state=42)
    pd.Series(y_train).plot(kind='hist', stacked=True)
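As another side note (not used in the original analyses, but worth considering for imbalanced labels): train_test_split also accepts a stratify argument that keeps the label proportions comparable between training and test sets:
X_train, X_test, y_train, y_test = train_test_split(data, pd.Categorical(labels).codes,
                                                    test_size=0.2, shuffle=True,
                                                    random_state=42,
                                                    stratify=pd.Categorical(labels).codes)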
Additionally, we would need to set the seed across various parts of our computational environment. Without that, we get different results at every run:
for i in range(10):
    model = keras.Sequential()
    model.add(layers.Dense(100, activation="relu", kernel_initializer='he_normal', bias_initializer='zeros', input_shape=data[1].shape))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(50, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(25, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(len(labels.unique()), activation='softmax'))

    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    fit = model.fit(X_train, y_train, epochs=30, batch_size=20, validation_split=0.2, verbose=0)

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=2, verbose=0)

    print('Test score run {}: {} , Test accuracy run {}: {}'.format(i, score, i, acc))
Test score run 0: 1.310919165611267 , Test accuracy run 0: 0.5806451439857483
Test score run 1: 1.3965305089950562 , Test accuracy run 1: 0.5483871102333069
Test score run 2: 1.4585529565811157 , Test accuracy run 2: 0.5161290168762207
Test score run 3: 1.3764199018478394 , Test accuracy run 3: 0.4516128897666931
Test score run 4: 1.132515788078308 , Test accuracy run 4: 0.6129032373428345
Test score run 5: 1.1708096265792847 , Test accuracy run 5: 0.6451612710952759
Test score run 6: 1.331620454788208 , Test accuracy run 6: 0.5161290168762207
Test score run 7: 1.3042981624603271 , Test accuracy run 7: 0.4838709533214569
Test score run 8: 1.1147429943084717 , Test accuracy run 8: 0.7096773982048035
Test score run 9: 1.3325397968292236 , Test accuracy run 9: 0.5806451439857483
After setting the seed, for example the general random seeds and the tensorflow seed, we get the same output at every run:
import os, random
import tensorflow as tf  # assumed to be imported earlier in the notebook; repeated here for completeness

# set the python hash, python random and numpy seeds
os.environ['PYTHONHASHSEED'] = str(42)
random.seed(42)
np.random.seed(42)

for i in range(10):
    model = keras.Sequential()
    model.add(layers.Dense(100, activation="relu", kernel_initializer='he_normal', bias_initializer='zeros', input_shape=data[1].shape))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(50, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(25, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(len(labels.unique()), activation='softmax'))

    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    # set the tensorflow seed before every training run
    tf.random.set_seed(42)

    fit = model.fit(X_train, y_train, epochs=30, batch_size=20, validation_split=0.2, verbose=0)

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=2, verbose=0)

    print('Test score run {}: {} , Test accuracy run {}: {}'.format(i, score, i, acc))
Test score run 0: 1.233439326286316 , Test accuracy run 0: 0.5483871102333069
Test score run 1: 1.233439326286316 , Test accuracy run 1: 0.5483871102333069
Test score run 2: 1.233439326286316 , Test accuracy run 2: 0.5483871102333069
Test score run 3: 1.233439326286316 , Test accuracy run 3: 0.5483871102333069
Test score run 4: 1.233439326286316 , Test accuracy run 4: 0.5483871102333069
Test score run 5: 1.233439326286316 , Test accuracy run 5: 0.5483871102333069
Test score run 6: 1.233439326286316 , Test accuracy run 6: 0.5483871102333069
Test score run 7: 1.233439326286316 , Test accuracy run 7: 0.5483871102333069
Test score run 8: 1.233439326286316 , Test accuracy run 8: 0.5483871102333069
Test score run 9: 1.233439326286316 , Test accuracy run 9: 0.5483871102333069
Please note that this appears to be sufficient for our minimal example, but more realistic ANNs would potentially require way more steps to achieve reproducibility, including:
restricting inter/intra op parallelism:
tf.config.threading.set_inter_op_parallelism_threads(1)
tf.config.threading.set_intra_op_parallelism_threads(1)
enforcing determinism via os environment variables and/or utilizing packages like framework-determinism:
os.environ['TF_DETERMINISTIC_OPS'] = '1'
from tfdeterminism import patch
patch()
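For the tensorflow version pinned in our container (2.8), there are also built-in helpers that bundle several of these steps; a minimal sketch (assuming tensorflow >= 2.8):
# seeds the python, numpy and tensorflow RNGs in one call (tensorflow >= 2.7)
tf.keras.utils.set_random_seed(42)
# requests deterministic implementations of tensorflow ops where available (tensorflow >= 2.8)
tf.config.experimental.enable_op_determinism()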
The respective solution will depend on your computational environment, model architecture, data, etc., and might sometimes not even exist (at least within a reasonable implementation).
This ongoing concern itself became a new field of study, with fascinating and very helpful outputs, e.g.
Bouthillier et al. (2021): Accounting for variability in machine learning benchmarks
adapted from Martina Vilas
data: standardization (BIDS), tracking everything (git/github & datalad), sharing¶
data is crucial & thus everything adjacent to it is crucial as well:
standardization of data & models to facilitate management, QC and FAIR-ness
version control
sharing data & models to maximize reproducibility & FAIR-ness, as well as to reduce computational effort
standardizing data & models¶
standardizing data & models, as well as their description, is tremendously helpful & important concerning reproducibility and other aspects, independent of the data at hand. Big data, however, profits even more from this process, as the combination of human/machine readability and meta-data allows to efficiently handle data management, evaluation, QC, augmentation, etc. One of the best options to tackle this is of course the Brain Imaging Data Structure, or BIDS:
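To give a rough idea (a generic sketch, not our concrete dataset), a BIDS dataset combines a standardized folder/file naming scheme with machine-readable meta-data:
my_bids_dataset/
├── dataset_description.json
├── participants.tsv
├── sub-01/
│   ├── anat/
│   │   └── sub-01_T1w.nii.gz
│   └── func/
│       ├── sub-01_task-rest_bold.nii.gz
│       └── sub-01_task-rest_bold.json
└── sub-02/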
version control¶
machine learning analyses tend to be very complex and thus a lot can happen along the way: analysis code, as well as data inputs and outputs, might change frequently, and without keeping track of these changes, reproducibility is basically not possible. One commonly known set of tools to address this is version control, which itself can take different forms, depending on the precise data at hand.
Placing everything one does under version control via these tools & resources can help a great deal to achieve reproducibility, as the respective aspects, e.g. code & data, are constantly monitored and changes tracked.
For example, starting with our machine learning analyses python script: what if we apply certain changes and the outcomes vary as a result of it? Using version control, it's "easy" to evaluate what happened and restore previous versions!
We actually adapted our python script so that the analyses are more reproducible. With git, we can see that:
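For instance (a sketch, assuming the script lives at code/ml_reproducibility.py as in the DataLad example below), git can show when the script changed and what exactly changed:
%%bash
# list the commit history that touched the analysis script
git log --oneline -- code/ml_reproducibility.py
# show what the most recent commit changed in it
git diff HEAD~1 -- code/ml_reproducibility.py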
While code version control using git and GitHub is fortunately more and more common, version control of other forms of data, e.g. datasets, is not.
For example, what happens when we run multiple machine learning analyses on the same dataset that yield different outcomes? How do we keep track of that? One up-and-coming tool that allows to "easily" track changes in all kinds of data is DataLad.
With it, one can create version controlled datasets of basically any form and monitor every change that appears along the way, including inputs and outputs, files in general, utilized software, etc. Applied to our machine learning analyses, this could look as follows.
We create a new dataset:
%%bash
datalad create repronim_ml_showcase
[INFO] Creating a new annex repo at /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase
[INFO] scanning for unlocked files (this may take some time)
create(ok): /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase (dataset)
And download the data, installing it as a dataset:
%%bash
cd repronim_ml_showcase
datalad download-url \
--archive \
--message "Download dataset" \
'https://www.dropbox.com/s/v48f8pjfw4u2bxi/MAIN_BASC064_subsamp_features.npz?dl=1'
[INFO] Downloading 'https://www.dropbox.com/s/v48f8pjfw4u2bxi/MAIN_BASC064_subsamp_features.npz?dl=1' into '/Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/'
[INFO] Adding content of the archive /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/MAIN_BASC064_subsamp_features.npz into annex AnnexRepo(/Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase)
[INFO] Initiating special remote datalad-archives
[INFO] Finished adding /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/MAIN_BASC064_subsamp_features.npz: Files processed: 1, +annex: 1
download_url(ok): /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/MAIN_BASC064_subsamp_features.npz (file)
add(ok): MAIN_BASC064_subsamp_features.npz (file)
save(ok): . (dataset)
action summary:
  add (ok: 1)
  download_url (ok: 1)
  save (ok: 1)
%%bash
cd repronim_ml_showcase
datalad download-url \
--archive \
--message "Download labels" \
'https://www.dropbox.com/s/ofsqdcukyde4lke/participants.csv?dl=1'
We then can use DataLad's YODA principles to create a directory structure for our project, including folders for data and code:
%%bash
cd repronim_ml_showcase
datalad create -c text2git -c yoda ml_showcase
[INFO] Creating a new annex repo at /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase
[INFO] scanning for unlocked files (this may take some time)
[INFO] Running procedure cfg_text2git
[INFO] == Command start (output follows) =====
[INFO] == Command exit (modification check follows) =====
[INFO] Running procedure cfg_yoda
[INFO] == Command start (output follows) =====
[INFO] == Command exit (modification check follows) =====
create(ok): /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase (dataset)
and subsequently clone the installed datasets into the data folder, indicating that this is our raw data:
%%bash
cd repronim_ml_showcase/ml_showcase
mkdir -p data
datalad clone -d . ../ data/raw
[INFO] Cloning dataset to Dataset(/Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase/data/raw)
[INFO] Attempting to clone from ../ to /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase/data/raw
[INFO] Completed clone attempts for Dataset(/Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase/data/raw)
[INFO] scanning for unlocked files (this may take some time)
install(ok): data/raw (dataset)
add(ok): data/raw (file)
add(ok): .gitmodules (file)
save(ok): . (dataset)
add(ok): .gitmodules (file)
save(ok): . (dataset)
action summary:
  add (ok: 3)
  install (ok: 1)
  save (ok: 2)
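Note that a cloned DataLad dataset initially contains only the file metadata; the actual file content can be fetched on demand (a standard DataLad workflow, shown here as a sketch):
%%bash
cd repronim_ml_showcase/ml_showcase
# fetch the annexed content of the raw data on demand
datalad get data/raw/MAIN_BASC064_subsamp_features.npz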
To make things as reproducible as possible, we also add our software container from above to the dataset, so that the computational environment and its application are also version controlled:
%%bash
cd repronim_ml_showcase/ml_showcase
datalad containers-add repronim-ml-container --url dhub://peerherholz/repronim_ml:latest
latest: Pulling from peerherholz/repronim_ml
Digest: sha256:e93225af23a9e67f496cca2a8b832213ef05d23e9f720c6b66d96aed43466389
Status: Image is up to date for peerherholz/repronim_ml:latest
docker.io/peerherholz/repronim_ml:latest
[INFO] Saved peerherholz/repronim_ml:latest to /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase/.datalad/environments/repronim-ml-container/image
add(ok): .datalad/environments/repronim-ml-container/image/042d5ab439dcb5d6c93db5035db2f307435a6a4bbaecbb7f26b0e802b97e9b8d/VERSION (file)
add(ok): .datalad/environments/repronim-ml-container/image/042d5ab439dcb5d6c93db5035db2f307435a6a4bbaecbb7f26b0e802b97e9b8d/json (file)
add(ok): .datalad/environments/repronim-ml-container/image/042d5ab439dcb5d6c93db5035db2f307435a6a4bbaecbb7f26b0e802b97e9b8d/layer.tar (file)
add(ok): .datalad/environments/repronim-ml-container/image/2c21a879aff42e65d780c48622921b5c09a02473ba5d046907cf2b0ae3b05f3d/VERSION (file)
add(ok): .datalad/environments/repronim-ml-container/image/2c21a879aff42e65d780c48622921b5c09a02473ba5d046907cf2b0ae3b05f3d/json (file)
add(ok): .datalad/environments/repronim-ml-container/image/2c21a879aff42e65d780c48622921b5c09a02473ba5d046907cf2b0ae3b05f3d/layer.tar (file)
add(ok): .datalad/environments/repronim-ml-container/image/7b247d4bd38e999588b6ab22a0bca104785bbcbc9b8c8dff43cd5f28f32ef3f2/VERSION (file)
add(ok): .datalad/environments/repronim-ml-container/image/7b247d4bd38e999588b6ab22a0bca104785bbcbc9b8c8dff43cd5f28f32ef3f2/json (file)
add(ok): .datalad/environments/repronim-ml-container/image/7b247d4bd38e999588b6ab22a0bca104785bbcbc9b8c8dff43cd5f28f32ef3f2/layer.tar (file)
add(ok): .datalad/environments/repronim-ml-container/image/86afa00fde813b787433554fb432c12e549833c8c89867c102fa0605b437ff36.json (file)
add(ok): .datalad/environments/repronim-ml-container/image/e1c7afeab1c4a740b65109d0bca271e183b904bfa8ce06040df3b1d094958c17/VERSION (file)
add(ok): .datalad/environments/repronim-ml-container/image/e1c7afeab1c4a740b65109d0bca271e183b904bfa8ce06040df3b1d094958c17/json (file)
add(ok): .datalad/environments/repronim-ml-container/image/e1c7afeab1c4a740b65109d0bca271e183b904bfa8ce06040df3b1d094958c17/layer.tar (file)
add(ok): .datalad/environments/repronim-ml-container/image/e242ab1e4864116d055da4b8a7e472de40851021284add8becef0d0c5414e190/VERSION (file)
add(ok): .datalad/environments/repronim-ml-container/image/e242ab1e4864116d055da4b8a7e472de40851021284add8becef0d0c5414e190/json (file)
add(ok): .datalad/environments/repronim-ml-container/image/e242ab1e4864116d055da4b8a7e472de40851021284add8becef0d0c5414e190/layer.tar (file)
add(ok): .datalad/environments/repronim-ml-container/image/eb10e787616d729236010a31a36dff031c85b874acdc02c8ac3b681eba1bce2e/VERSION (file)
add(ok): .datalad/environments/repronim-ml-container/image/eb10e787616d729236010a31a36dff031c85b874acdc02c8ac3b681eba1bce2e/json (file)
add(ok): .datalad/environments/repronim-ml-container/image/eb10e787616d729236010a31a36dff031c85b874acdc02c8ac3b681eba1bce2e/layer.tar (file)
add(ok): .datalad/environments/repronim-ml-container/image/manifest.json (file)
add(ok): .datalad/environments/repronim-ml-container/image/repositories (file)
add(ok): .datalad/config (file)
save(ok): . (dataset)
containers_add(ok): /Users/peerherholz/google_drive/GitHub/repronim_ML/repronim_ml_showcase/ml_showcase/.datalad/environments/repronim-ml-container/image (file)
action summary:
  add (ok: 22)
  containers_add (ok: 1)
  save (ok: 1)
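To double check that the container was registered with the dataset, the datalad-container extension also provides a listing command (a quick sanity check, assuming the extension is installed):
%%bash
cd repronim_ml_showcase/ml_showcase
# list all containers known to this dataset
datalad containers-list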
The only thing that's missing is the python script that performs the machine learning analyses, which can simply be added to the dataset as well.
%%bash
cd repronim_ml_showcase/ml_showcase/
datalad save -m "add random forest & ANN script" code/ml_reproducibility.py
add(ok): code/ml_reproducibility.py (file)
save(ok): . (dataset)
action summary:
  add (ok: 1)
  save (ok: 1)
With that, we already have everything in place to run a (fully) reproducible machine learning analysis that tracks the datasets, the computational environment and the code, as well as their interaction, via monitoring the analyses' inputs and outputs:
%%bash
cd repronim_ml_showcase/ml_showcase/
datalad containers-run -n repronim-ml-container \
-m "First run of ML analyses" \
--input 'data/raw/' \
--output 'metrics.json' \
--output 'random_forest.joblib' \
--output 'ANN.h5' \
"code/ml_reproducibility.py"
[INFO] Making sure inputs are available (this may take some time)
[INFO] == Command start (output follows) =====
whoami: cannot find name for user ID 501
2022-03-04 15:39:52.749903: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-03-04 15:39:52.750027: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-03-04 15:39:59.430618: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-03-04 15:39:59.430731: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-03-04 15:39:59.430785: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (e989264455b1): /proc/driver/nvidia/version does not exist
2022-03-04 15:39:59.431798: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Epoch 1/300
5/5 [==============================] - 1s 54ms/step - loss: 2.7575 - accuracy: 0.2020 - val_loss: 1.8426 - val_accuracy: 0.1600
Epoch 2/300
5/5 [==============================] - 0s 15ms/step - loss: 2.6757 - accuracy: 0.1919 - val_loss: 1.8177 - val_accuracy: 0.1600
... (per-epoch training progress output for the remaining epochs truncated for brevity)
[==============================] - 0s 10ms/step - loss: 0.1457 - accuracy: 0.9596 - val_loss: 1.7739 - val_accuracy: 0.4800 Epoch 277/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1069 - accuracy: 0.9697 - val_loss: 1.7751 - val_accuracy: 0.4400 Epoch 278/300 5/5 [==============================] - 0s 17ms/step - loss: 0.1338 - accuracy: 0.9697 - val_loss: 1.8263 - val_accuracy: 0.4000 Epoch 279/300 5/5 [==============================] - 0s 12ms/step - loss: 0.1342 - accuracy: 0.9596 - val_loss: 1.9028 - val_accuracy: 0.4400 Epoch 280/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1200 - accuracy: 0.9596 - val_loss: 1.9621 - val_accuracy: 0.4800 Epoch 281/300 5/5 [==============================] - 0s 17ms/step - loss: 0.1196 - accuracy: 0.9596 - val_loss: 2.0264 - val_accuracy: 0.4800 Epoch 282/300 5/5 [==============================] - 0s 13ms/step - loss: 0.2612 - accuracy: 0.9495 - val_loss: 1.9998 - val_accuracy: 0.5200 Epoch 283/300 5/5 [==============================] - 0s 15ms/step - loss: 0.1265 - accuracy: 0.9596 - val_loss: 1.9100 - val_accuracy: 0.5200 Epoch 284/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1509 - accuracy: 0.9394 - val_loss: 1.8477 - val_accuracy: 0.5200 Epoch 285/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1095 - accuracy: 0.9697 - val_loss: 1.8266 - val_accuracy: 0.5200 Epoch 286/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9697 - val_loss: 1.8473 - val_accuracy: 0.5200 Epoch 287/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1403 - accuracy: 0.9596 - val_loss: 1.8193 - val_accuracy: 0.4800 Epoch 288/300 5/5 [==============================] - 0s 11ms/step - loss: 0.0889 - accuracy: 0.9899 - val_loss: 1.7905 - val_accuracy: 0.4800 Epoch 289/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1118 - accuracy: 0.9596 - val_loss: 1.8198 - val_accuracy: 0.4800 Epoch 290/300 5/5 [==============================] - 0s 11ms/step - loss: 0.0504 - accuracy: 1.0000 - val_loss: 1.8649 - val_accuracy: 0.4800 Epoch 291/300 5/5 [==============================] - 0s 11ms/step - loss: 0.2340 - accuracy: 0.9091 - val_loss: 1.9275 - val_accuracy: 0.5200 Epoch 292/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1022 - accuracy: 0.9899 - val_loss: 1.9417 - val_accuracy: 0.5200 Epoch 293/300 5/5 [==============================] - 0s 10ms/step - loss: 0.1014 - accuracy: 0.9697 - val_loss: 1.9481 - val_accuracy: 0.5200 Epoch 294/300 5/5 [==============================] - 0s 10ms/step - loss: 0.0885 - accuracy: 0.9899 - val_loss: 1.9786 - val_accuracy: 0.5200 Epoch 295/300 5/5 [==============================] - 0s 10ms/step - loss: 0.0808 - accuracy: 0.9798 - val_loss: 1.9947 - val_accuracy: 0.5200 Epoch 296/300 5/5 [==============================] - 0s 11ms/step - loss: 0.2017 - accuracy: 0.9394 - val_loss: 1.9345 - val_accuracy: 0.5200 Epoch 297/300 5/5 [==============================] - 0s 11ms/step - loss: 0.1001 - accuracy: 0.9798 - val_loss: 1.8087 - val_accuracy: 0.5200 Epoch 298/300 5/5 [==============================] - 0s 10ms/step - loss: 0.1414 - accuracy: 0.9697 - val_loss: 1.7490 - val_accuracy: 0.5200 Epoch 299/300 5/5 [==============================] - 0s 10ms/step - loss: 0.0871 - accuracy: 0.9899 - val_loss: 1.7383 - val_accuracy: 0.5600 Epoch 300/300 5/5 [==============================] - 0s 15ms/step - loss: 0.0785 - accuracy: 0.9899 - val_loss: 1.7438 - val_accuracy: 0.5600 16/16 
[==============================] - 0s 2ms/step - loss: 1.1486 - accuracy: 0.6452 Results - random forest Accuracy = 0.548, MAE = 0.955, Chance = 0.167 Results - ANN Test score: 1.1485896110534668 Test accuracy: 0.6451612710952759
[INFO] == Command exit (modification check follows) =====
get(ok): data/raw/a.npy (file) [from origin...]
get(ok): data/raw/participants.csv (file) [from origin...]
add(ok): ANN.h5 (file)
add(ok): __pycache__/__autograph_generated_file58kg88r_.cpython-310.pyc (file)
add(ok): __pycache__/__autograph_generated_fileufe_xc1w.cpython-310.pyc (file)
add(ok): metrics.json (file)
add(ok): random_forest.joblib (file)
save(ok): . (dataset)
action summary:
  add (ok: 5)
  get (notneeded: 3, ok: 2)
  save (notneeded: 1, ok: 1)
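A provenance record like the one above is typically created via DataLad's containers-run command. Here is a minimal sketch of the call that presumably produced it (the flags mirror the "cmd", "inputs" and "outputs" fields captured in the git log shown further below):
%%bash
cd repronim_ml_showcase/ml_showcase/
# run the analysis script inside the registered container and let
# DataLad record the exact command, inputs and outputs in git
datalad containers-run \
    --container-name repronim-ml-container \
    --input data/raw/ \
    --output metrics.json --output random_forest.joblib --output ANN.h5 \
    -m "First run of ML analyses" \
    python code/ml_reproducibility.py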
Not only can we obtain reproducible results via the combination of software containers and seeding:
%%bash
cd repronim_ml_showcase/ml_showcase/
cat metrics.json
{"accuracy": 0.548, "MAE": 0.955, "Chance": 0.167, "Test score": 1.1485896110534668, "Test accuracy": 0.6451612710952759}
But we also get a full log of everything that happened in and to our dataset.
%%bash
cd repronim_ml_showcase/ml_showcase/
git log
commit 9b9bfa7929526fb45ed46a50874469f96230244f
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:40:20 2022 +0100

    [DATALAD RUNCMD] First run of ML analyses

    === Do not change lines below ===
    {
     "chain": [],
     "cmd": "python3.1 -m datalad_container.adapters.docker run .datalad/environments/repronim-ml-container/image code/ml_reproducibility.py",
     "dsid": "b9428f05-ca1c-4373-9c08-f7e8a2d9aac0",
     "exit": 0,
     "extra_inputs": [
      ".datalad/environments/repronim-ml-container/image"
     ],
     "inputs": [
      "data/raw/"
     ],
     "outputs": [
      "metrics.json",
      "random_forest.joblib",
      "ANN.h5"
     ],
     "pwd": "."
    }
    ^^^ Do not change lines above ^^^

commit 416fb5570426739cd98af187f0a7cd2220949872
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:39:29 2022 +0100

    add random forest & ANN script

commit c6343a962c6e75cb5b719b4a6330c1e484dec810
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:32:45 2022 +0100

    [DATALAD] Configure containerized environment 'repronim-ml-container'

commit d34b46535dd20ac6ae6da63338d94deb18e179fb
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:29:40 2022 +0100

    [DATALAD] Added subdataset

commit ee5d550e61a73fd1df2196a30afd0fdac1f84021
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:28:33 2022 +0100

    Apply YODA dataset setup

commit 4ae813eaee72f0207eec6a6b3f36d6203f22ba2a
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:28:32 2022 +0100

    Instruct annex to add text files to Git

commit 16f2eda3f8ab10c4679e6bbb416bdce5ebe3acf9
Author: Peer Herholz <herholz.peer@gmail.com>
Date:   Fri Mar 4 16:28:30 2022 +0100

    [DATALAD] new dataset
This would allow us to rerun a specific analysis using the same computational environment and data, as well as track quite a bit of the variability introduced by changes!
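For example, re-executing the recorded analysis boils down to a single command (a sketch, assuming we are inside the dataset):
%%bash
cd repronim_ml_showcase/ml_showcase/
# re-execute the command stored in the RUNCMD commit above;
# DataLad re-fetches the inputs and re-uses the registered container
datalad rerun 9b9bfa7929526fb45ed46a50874469f96230244f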
There are also other tools out there, more tailored towards machine learning. For example, mlflow:
https://mlflow.org/docs/latest/tracking.html
adapted from Martina Vilas
or pydra-ml (not directly focused on reproducibility):
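To give a flavour of mlflow's tracking API, here is a minimal sketch that logs the parameters and metrics of our analysis above (the parameter and metric names are our own choices):
import mlflow

with mlflow.start_run():
    # what was run ...
    mlflow.log_param("model", "random_forest")
    mlflow.log_param("cv_folds", 10)
    # ... and how it performed
    mlflow.log_metric("accuracy", 0.548)
    mlflow.log_metric("MAE", 0.955)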
Now that we have addressed a good amount of the introduced challenges to reproducible machine learning, including software, algorithms/practices/processes and data, we have already achieved a fair level of reproducibility. However, we can do even more!
data & models¶
For reproducibility and FAIR-ness, data and models, basically everything involved in the analyses, need to be shared:
- sharing lets others reproduce the analyses or adapt them, and reduces computational costs
- there are options to share data, and different types thereof, both openly and with restricted access
- which option fits best depends on the data at hand
For example, we can store all things on GitHub, including the software/information re computing environments:
with the computing environment being stored/made available on dockerhub:
the code, i.e. the machine learning analysis scripts:
and the pre-trained models:
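Shared models can then be picked up directly by others. A minimal sketch, assuming the two artifacts produced above sit in the working directory:
from joblib import load
from tensorflow.keras.models import load_model

# the artifacts recorded as outputs in the DataLad run above
rf = load('random_forest.joblib')   # the scikit-learn random forest
ann = load_model('ANN.h5')          # the keras ANN
# both can now predict on new connectome data of matching shape,
# e.g. rf.predict(new_data) or ann.predict(new_data)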
Speaking of which: there are amazing projects out there that bring this to the next level, e.g. nobrainer:
Adjacent to that, it's of course required to share as much about the analyses as possible when including them in a publication, etc. There are cool & helpful checklists out there:
https://www.cs.mcgill.ca/~jpineau/ReproducibilityChecklist.pdf
adapted from Martina Vilas
See? There are many things we can do to make our machine learning analyses more reproducible. Unfortunately, all of this won't guarantee full reproducibility. Thus, always make sure to check things constantly!
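One simple way of checking: compare the metrics of a fresh run against the recorded ones. A sketch, where metrics_rerun.json is a hypothetical output of a second run:
import json

# load the recorded and the re-run metrics and compare them
with open('metrics.json') as recorded, open('metrics_rerun.json') as rerun:
    assert json.load(recorded) == json.load(rerun), "runs diverged - investigate!"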
However, for now let's briefly summarize what we talked about!
Reproducible machine learning - a summary¶
There are also many other very useful resources out there!
Isdahl & Gunderson (2019)
https://www.cs.mcgill.ca/~ksinha4/practices_for_reproducibility/
adapted from Martina Vilas
Thank you all for your attention! If you have any questions, please don't hesitate to ask or contact me via herholz dot peer at gmail dot com or via "social media":
@peerherholz
Make sure to also check the ReproNim website: https://www.repronim.org/