The world is developing rapidly, and people everywhere are continuously seeking the best knowledge and experience. This motivates people all around the world to stand out in their jobs and pursue higher degrees that can help them improve their skills and knowledge. As a result, the number of students applying for Master’s programs has increased substantially.
The current admission dataset was created for the prediction of admissions into the University of California, Los Angeles (UCLA). It was built to help students in shortlisting universities based on their profiles. The predicted output gives them a fair idea about their chances of getting accepted.
We need to build a classification model using neural networks to predict a student’s chances of admission into UCLA.
The dataset contains several features that are considered important when applying for a Master’s program. The features included are:

- GRE Score (out of 340)
- TOEFL Score (out of 120)
- University Rating (out of 5)
- SOP: Statement of Purpose strength (out of 5)
- LOR: Letter of Recommendation strength (out of 5)
- CGPA: undergraduate GPA (out of 10)
- Research: research experience (0 or 1)
- Chance of Admit (a value between 0 and 1)
We will be using Google Colab to run this notebook.
First, let’s import the data so that Colab can access the dataset. One way to load the data in Colab is by uploading the dataset directly in the notebook. The following code does just that. Once you run the cell, it will ask you to choose the file from your local system.
In [ ]:
from google.colab import files
uploaded = files.upload()
Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable.
Saving Admission_Predict.csv to Admission_Predict.csv
In [ ]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
Now, let’s load the data using the read_csv() function. One small change from Jupyter here is that the path of the file might be a bit different. To get the path of the data file, open the Files pane (the folder icon in Colab’s left sidebar), locate the uploaded file, click the three-dot menu next to it, and select Copy path.
In [ ]:
# Importing the dataset
data = pd.read_csv('/content/Admission_Predict.csv')

# Check the top five records of the data
data.head()
Out[ ]:
| | Serial No. | GRE Score | TOEFL Score | University Rating | SOP | LOR | CGPA | Research | Chance of Admit |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 337 | 118 | 4 | 4.5 | 4.5 | 9.65 | 1 | 0.92 |
| 1 | 2 | 324 | 107 | 4 | 4.0 | 4.5 | 8.87 | 1 | 0.76 |
| 2 | 3 | 316 | 104 | 3 | 3.0 | 3.5 | 8.00 | 1 | 0.72 |
| 3 | 4 | 322 | 110 | 3 | 3.5 | 2.5 | 8.67 | 1 | 0.80 |
| 4 | 5 | 314 | 103 | 2 | 2.0 | 3.0 | 8.21 | 0 | 0.65 |
Observations:

- The target variable, Chance of Admit, is a continuous value between 0 and 1, so it cannot be used directly as a class label.
- Since we want to build a classification model, we will convert it into a binary variable: 1 if the chance of admission is greater than 80%, and 0 otherwise.
In [ ]:
# Converting the target variable into a categorical variable
data['Admit'] = data['Chance of Admit '].apply(lambda x: 1 if x > 0.8 else 0)
Now that we have created a new target variable, we can remove the Chance of Admit column from the dataset. We can also remove the Serial No. column, as it does not add any value to our analysis.
In [ ]:
# Dropping columns
data = data.drop(['Serial No.', 'Chance of Admit '], axis=1)
data.head()
Out[ ]:
| | GRE Score | TOEFL Score | University Rating | SOP | LOR | CGPA | Research | Admit |
|---|---|---|---|---|---|---|---|---|
| 0 | 337 | 118 | 4 | 4.5 | 4.5 | 9.65 | 1 | 1 |
| 1 | 324 | 107 | 4 | 4.0 | 4.5 | 8.87 | 1 | 0 |
| 2 | 316 | 104 | 3 | 3.0 | 3.5 | 8.00 | 1 | 0 |
| 3 | 322 | 110 | 3 | 3.5 | 2.5 | 8.67 | 1 | 0 |
| 4 | 314 | 103 | 2 | 2.0 | 3.0 | 8.21 | 0 | 0 |
Let’s check the info of the data
In [ ]:
# Let's check the info of the data
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
 #   Column             Non-Null Count  Dtype
---  ------             --------------  -----
 0   GRE Score          500 non-null    int64
 1   TOEFL Score        500 non-null    int64
 2   University Rating  500 non-null    int64
 3   SOP                500 non-null    float64
 4   LOR                500 non-null    float64
 5   CGPA               500 non-null    float64
 6   Research           500 non-null    int64
 7   Admit              500 non-null    int64
dtypes: float64(3), int64(5)
memory usage: 31.4 KB
Observations:

- The dataset has 500 observations and 8 columns.
- All columns are numeric (int64 or float64), and there are no missing values.
In [ ]:
# Let's check the summary statistics of the data
data.describe().T
Out[ ]:
| | count | mean | std | min | 25% | 50% | 75% | max |
|---|---|---|---|---|---|---|---|---|
| GRE Score | 500.0 | 316.47200 | 11.295148 | 290.0 | 308.0000 | 317.00 | 325.00 | 340.00 |
| TOEFL Score | 500.0 | 107.19200 | 6.081868 | 92.0 | 103.0000 | 107.00 | 112.00 | 120.00 |
| University Rating | 500.0 | 3.11400 | 1.143512 | 1.0 | 2.0000 | 3.00 | 4.00 | 5.00 |
| SOP | 500.0 | 3.37400 | 0.991004 | 1.0 | 2.5000 | 3.50 | 4.00 | 5.00 |
| LOR | 500.0 | 3.48400 | 0.925450 | 1.0 | 3.0000 | 3.50 | 4.00 | 5.00 |
| CGPA | 500.0 | 8.57644 | 0.604813 | 6.8 | 8.1275 | 8.56 | 9.04 | 9.92 |
| Research | 500.0 | 0.56000 | 0.496884 | 0.0 | 0.0000 | 1.00 | 1.00 | 1.00 |
| Admit | 500.0 | 0.28400 | 0.451388 | 0.0 | 0.0000 | 0.00 | 1.00 | 1.00 |
Observations:

- GRE scores range from 290 to 340, and TOEFL scores from 92 to 120.
- About 56% of the applicants have research experience.
- Only about 28% of the records have Admit = 1, so the two classes are somewhat imbalanced.
In [ ]:
plt.figure(figsize=(15, 8))
sns.scatterplot(data=data, x='GRE Score', y='TOEFL Score', hue='Admit', size='SOP');
Observations:

- Admitted students (Admit = 1) cluster in the upper-right of the plot, i.e., they tend to have both higher GRE and higher TOEFL scores.
In [ ]:
plt.figure(figsize=(10, 7))
sns.boxplot(data=data, x='University Rating', y='CGPA', hue='Admit')
plt.title('Relationship between different University Rating and CGPA')
plt.show()
Observations:

- Across every university rating, admitted students tend to have a higher CGPA than students who were not admitted.
This dataset contains both numerical and categorical variables. We need to treat them before we pass them to the neural network. We will perform the following pre-processing steps:

- Split the data into train and test sets.
- Scale the numerical variables using min-max scaling.
An important point to remember: before we scale the numerical variables, we first split the dataset into train and test sets and perform the scaling separately on each. Otherwise, we would leak information from the test data into the train data, and the resulting model might give a false sense of good performance. This is known as data leakage, which we want to avoid.
Now, let’s split the dataset into train and test sets. To do that, we extract all the independent variables and save them in a variable features, and save the target variable Admit in a variable target. These two variables are then used to split the parent dataset into train and test sets.
In [ ]:
features = data.drop(['Admit'], axis=1)
target = data['Admit']
The dataset is small, and Keras provides a validation_split argument for setting aside a percentage of the training data as validation data to check the accuracy of the model. Therefore, we will split the data in an 80:20 ratio.
In [ ]:
# Splitting the dataset into train and test data
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=42)
Now, we will scale the numerical variables separately for the train and test sets: we fit and transform on the train data, and only transform on the test data.
In [ ]:
scaler = MinMaxScaler()

# Here, we pass all the features (numerical and categorical); that's okay,
# as the min-max scaler will not change the values of the 0/1 categorical variables
X_train_normalized = scaler.fit_transform(X_train)
In [ ]:
X_test_normalized = scaler.transform(X_test)
In neural networks, there are many hyper-parameters that you can play around with to tune the network for the best results. Some of them are:

- the number of hidden layers
- the number of neurons in each hidden layer
- the activation function used in each hidden layer
- the optimizer and its learning rate
- the dropout rate
- the batch size and the number of epochs
and so on…
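To make these knobs concrete, here is a minimal sketch (the build_model helper and its parameter names are our own, not part of the original notebook) that gathers a few of these hyper-parameters into function arguments so they are easy to vary:

In [ ]:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Hypothetical helper: each argument is one hyper-parameter we might tune
def build_model(n_hidden=2, units=64, activation='relu', dropout=0.2):
    model = Sequential()
    model.add(Dense(units, activation=activation, input_shape=(7,)))  # 7 features
    model.add(Dropout(dropout))
    for _ in range(n_hidden - 1):
        model.add(Dense(units, activation=activation))
        model.add(Dropout(dropout))
    model.add(Dense(1, activation='sigmoid'))  # binary output
    return model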
First, let’s set the seed for the random number generators in NumPy, Python, and TensorFlow to be able to reproduce the same results every time we run the code.
In [ ]:
# Fixing the seed for random number generators
import random

np.random.seed(42)
random.seed(42)
tf.random.set_seed(42)
Let’s build a feedforward neural network with 2 hidden layers and an output layer.
In [ ]:
# We will be adding the layers sequentially
model_1 = Sequential()

# First hidden layer with 128 neurons and relu activation function;
# the input shape tuple denotes the number of independent variables
model_1.add(Dense(128, activation='relu', input_shape=(7,)))

# Randomly switch off 20% of the neurons at each iteration to avoid overfitting
model_1.add(Dropout(0.2))

# Second hidden layer with 64 neurons and relu activation function
model_1.add(Dense(64, activation='relu'))

# Randomly switch off 10% of the neurons at each iteration to avoid overfitting
model_1.add(Dropout(0.1))

# Output layer with a single neuron; the sigmoid activation gives the
# probability of a student getting admitted into UCLA
model_1.add(Dense(1, activation='sigmoid'))
Once we are done with the model architecture, we need to compile the model, providing the loss function we want to optimize, the optimization algorithm, and the evaluation metric we are interested in.
Since this is a binary classification task, we will be minimizing binary_crossentropy, and we can choose one optimizer out of:

- SGD
- RMSprop
- Adam
- Adamax
This is a hyper-parameter. You can play around with these optimizers to check which one performs better on a particular dataset.
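One rough way to run that comparison (this loop is our own sketch, not part of the original notebook; results will vary from run to run) is to train a small copy of the network once per optimizer and record the best validation accuracy each achieves:

In [ ]:

# Train one small model per optimizer and compare best validation accuracy
for opt in ['sgd', 'rmsprop', 'adam', 'adamax']:
    m = Sequential([
        Dense(128, activation='relu', input_shape=(7,)),
        Dense(64, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    m.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    h = m.fit(X_train_normalized, y_train, validation_split=0.1,
              epochs=50, verbose=0)
    print(opt, 'best val_accuracy:', max(h.history['val_accuracy']))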
For now, let’s try adamax optimizer with accuracy as the metric and see the model’s summary.
In [ ]:
model_1.compile(loss='binary_crossentropy', optimizer='adamax', metrics=['accuracy'])
model_1.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 128) 1024 dropout (Dropout) (None, 128) 0 dense_1 (Dense) (None, 64) 8256 dropout_1 (Dropout) (None, 64) 0 dense_2 (Dense) (None, 1) 65 ================================================================= Total params: 9,345 Trainable params: 9,345 Non-trainable params: 0 _________________________________________________________________
From the above summary, we can see that this architecture will train a total of 9,345 parameters, i.e., the weights and biases in the network.
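That total can be checked by hand: a Dense layer has (inputs + 1) × units parameters, where the +1 accounts for each neuron’s bias (the dropout layers add none):

In [ ]:

# Dense layer parameters = (inputs + 1 bias) * units
first_hidden = (7 + 1) * 128     # 1,024
second_hidden = (128 + 1) * 64   # 8,256
output_layer = (64 + 1) * 1      # 65
print(first_hidden + second_hidden + output_layer)  # 9345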
Let’s now train the model using the below piece of code. We will keep 10% of the training data aside for validation.
In [ ]:
history_1 = model_1.fit(X_train_normalized, y_train, validation_split=0.1, epochs=150, verbose=2)
Epoch 1/150
12/12 - 4s - loss: 0.6673 - accuracy: 0.7250 - val_loss: 0.7286 - val_accuracy: 0.5750 - 4s/epoch - 343ms/step
Epoch 2/150
12/12 - 0s - loss: 0.6471 - accuracy: 0.7278 - val_loss: 0.7256 - val_accuracy: 0.5750 - 86ms/epoch - 7ms/step
Epoch 3/150
12/12 - 0s - loss: 0.6348 - accuracy: 0.7278 - val_loss: 0.7179 - val_accuracy: 0.5750 - 74ms/epoch - 6ms/step
...
Epoch 149/150
12/12 - 0s - loss: 0.1935 - accuracy: 0.9167 - val_loss: 0.2423 - val_accuracy: 0.9000 - 67ms/epoch - 6ms/step
Epoch 150/150
12/12 - 0s - loss: 0.1680 - accuracy: 0.9361 - val_loss: 0.2365 - val_accuracy: 0.9000 - 49ms/epoch - 4ms/step
In [ ]:
plt.plot(history_1.history['accuracy'])
plt.plot(history_1.history['val_accuracy'])
plt.title('Accuracy vs Epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='lower right')
plt.show()
Observations:

- Training accuracy climbs to roughly 93%, while validation accuracy plateaus around 90%.
- Validation accuracy becomes essentially constant after the early epochs, so the remaining epochs mostly add computation time.
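Since we already scaled the test set with the same scaler, we can also get a quick read on generalization by evaluating the trained model on it. This evaluation step is our own addition, not part of the original notebook, and the exact numbers will vary from run to run:

In [ ]:

# Evaluate model_1 on the held-out test data
test_loss, test_acc = model_1.evaluate(X_test_normalized, y_test, verbose=0)
print(f'Test accuracy: {test_acc:.4f}')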
Let’s try to increase the model complexity by tuning some of the hyper-parameters mentioned earlier and check if we can improve the model performance. Out of all the options we have, let’s try changing the number of hidden layers, the number of neurons in each hidden layer, the activation function in the hidden layers, and the optimizer from adamax to adam. Also, since we observed that validation accuracy became constant after some epochs, let’s try fewer epochs, which will also reduce the computation time.
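Instead of hand-picking the epoch count, another option (not used in this notebook) is Keras’s EarlyStopping callback, which halts training once the monitored validation metric stops improving. A minimal sketch:

In [ ]:

from tensorflow.keras.callbacks import EarlyStopping

# Stop training when validation loss hasn't improved for 10 epochs,
# and roll back to the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)
# Usage: pass callbacks=[early_stop] to model.fit(...)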
First, we need to clear the previous model’s history from the session. In Keras, we need a special command to clear the model’s history; otherwise, it remains in the backend. Also, let’s fix the seed again after clearing the backend.
In [ ]:
# Clearing backend
from tensorflow.keras import backend

backend.clear_session()
In [ ]:
# Fixing the seed for random number generators
import random

np.random.seed(42)
random.seed(42)
tf.random.set_seed(42)
In [ ]:
model_2 = Sequential()
model_2.add(Dense(128, activation='tanh', input_shape=(7,)))
model_2.add(Dropout(0.1))
model_2.add(Dense(64, activation='tanh'))
model_2.add(Dropout(0.1))
model_2.add(Dense(32, activation='tanh'))
model_2.add(Dense(1, activation='sigmoid'))
In [ ]:
model_2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model_2.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 128) 1024 dropout (Dropout) (None, 128) 0 dense_1 (Dense) (None, 64) 8256 dropout_1 (Dropout) (None, 64) 0 dense_2 (Dense) (None, 32) 2080 dense_3 (Dense) (None, 1) 33 ================================================================= Total params: 11,393 Trainable params: 11,393 Non-trainable params: 0 _________________________________________________________________
In [ ]:
history_2 = model_2.fit(X_train_normalized, y_train, validation_split=0.1, epochs=100, verbose=2)
Epoch 1/100
12/12 - 1s - loss: 0.6485 - accuracy: 0.7139 - val_loss: 0.7087 - val_accuracy: 0.5500 - 692ms/epoch - 58ms/step
Epoch 2/100
12/12 - 0s - loss: 0.5760 - accuracy: 0.7167 - val_loss: 0.6759 - val_accuracy: 0.5250 - 47ms/epoch - 4ms/step
Epoch 3/100
12/12 - 0s - loss: 0.5257 - accuracy: 0.6972 - val_loss: 0.6083 - val_accuracy: 0.5500 - 56ms/epoch - 5ms/step
...
Epoch 99/100
12/12 - 0s - loss: 0.1793 - accuracy: 0.9056 - val_loss: 0.2817 - val_accuracy: 0.9250 - 64ms/epoch - 5ms/step
Epoch 100/100
12/12 - 0s - loss: 0.1538 - accuracy: 0.9306 - val_loss: 0.2406 - val_accuracy: 0.9500 - 49ms/epoch - 4ms/step
In [ ]:
plt.plot(history_2.history['accuracy'])
plt.plot(history_2.history['val_accuracy'])
plt.title('Accuracy vs Epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='lower right')
plt.show()
Observations:

- The second model converges noticeably faster: validation accuracy reaches about 90% within the first ten epochs.
- The best validation accuracy also improves slightly, touching 95% in several epochs, including the final one.
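To put the two runs side by side, we can compare the best validation accuracy recorded in each history object (a small sketch using the objects already in memory; history_1 survives clear_session() because it is an ordinary Python object):

In [ ]:

# Best validation accuracy seen by each model during training
for name, h in [('model_1', history_1), ('model_2', history_2)]:
    print(name, 'best val_accuracy:', max(h.history['val_accuracy']))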
Let’s try to further tune some of the hyper-parameters and check if we can improve the model performance.
We will use learning_rate = 0.001 for the optimizer in the training process and increase the model complexity by further increasing the number of layers, the number of nodes in each layer, and the number of epochs.
In [ ]:
# Clearing the backend
from tensorflow.keras import backend

backend.clear_session()
In [ ]:
# Fixing the seed for random number generators
import random

np.random.seed(42)
random.seed(42)
tf.random.set_seed(42)
In [ ]:
model_3 = Sequential()
model_3.add(Dense(256, activation='tanh', input_shape=(7,)))
model_3.add(Dropout(0.1))
model_3.add(Dense(128, activation='tanh'))
model_3.add(Dropout(0.1))
model_3.add(Dense(64, activation='tanh'))
model_3.add(Dropout(0.1))
model_3.add(Dense(32, activation='tanh'))
model_3.add(Dense(1, activation='sigmoid'))
In [ ]:
model_3.compile(loss='binary_crossentropy',
                optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                metrics=['accuracy'])
model_3.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 256) 2048 dropout (Dropout) (None, 256) 0 dense_1 (Dense) (None, 128) 32896 dropout_1 (Dropout) (None, 128) 0 dense_2 (Dense) (None, 64) 8256 dropout_2 (Dropout) (None, 64) 0 dense_3 (Dense) (None, 32) 2080 dense_4 (Dense) (None, 1) 33 ================================================================= Total params: 45,313 Trainable params: 45,313 Non-trainable params: 0 _________________________________________________________________
In [ ]:
history_3 = model_3.fit(X_train_normalized, y_train, validation_split=0.1, epochs=200, verbose=2)
Epoch 1/200 12/12 - 1s - loss: 0.6521 - accuracy: 0.6861 - val_loss: 0.6402 - val_accuracy: 0.5250 - 693ms/epoch - 58ms/step Epoch 2/200 12/12 - 0s - loss: 0.5411 - accuracy: 0.7250 - val_loss: 0.6076 - val_accuracy: 0.5750 - 68ms/epoch - 6ms/step Epoch 3/200 12/12 - 0s - loss: 0.4780 - accuracy: 0.7417 - val_loss: 0.5336 - val_accuracy: 0.8000 - 52ms/epoch - 4ms/step Epoch 4/200 12/12 - 0s - loss: 0.4397 - accuracy: 0.7889 - val_loss: 0.4739 - val_accuracy: 0.8250 - 50ms/epoch - 4ms/step Epoch 5/200 12/12 - 0s - loss: 0.4037 - accuracy: 0.8194 - val_loss: 0.4045 - val_accuracy: 0.8500 - 57ms/epoch - 5ms/step Epoch 6/200 12/12 - 0s - loss: 0.3744 - accuracy: 0.8222 - val_loss: 0.3624 - val_accuracy: 0.8750 - 59ms/epoch - 5ms/step Epoch 7/200 12/12 - 0s - loss: 0.3248 - accuracy: 0.8528 - val_loss: 0.3273 - val_accuracy: 0.9250 - 49ms/epoch - 4ms/step Epoch 8/200 12/12 - 0s - loss: 0.2847 - accuracy: 0.8750 - val_loss: 0.3111 - val_accuracy: 0.9250 - 65ms/epoch - 5ms/step Epoch 9/200 12/12 - 0s - loss: 0.2598 - accuracy: 0.8944 - val_loss: 0.3293 - val_accuracy: 0.9250 - 49ms/epoch - 4ms/step Epoch 10/200 12/12 - 0s - loss: 0.2535 - accuracy: 0.8778 - val_loss: 0.2424 - val_accuracy: 0.9000 - 52ms/epoch - 4ms/step Epoch 11/200 12/12 - 0s - loss: 0.2515 - accuracy: 0.8833 - val_loss: 0.2622 - val_accuracy: 0.9250 - 50ms/epoch - 4ms/step Epoch 12/200 12/12 - 0s - loss: 0.2072 - accuracy: 0.9083 - val_loss: 0.2458 - val_accuracy: 0.9250 - 49ms/epoch - 4ms/step Epoch 13/200 12/12 - 0s - loss: 0.2188 - accuracy: 0.8917 - val_loss: 0.2574 - val_accuracy: 0.9000 - 52ms/epoch - 4ms/step Epoch 14/200 12/12 - 0s - loss: 0.1990 - accuracy: 0.9194 - val_loss: 0.3772 - val_accuracy: 0.9000 - 50ms/epoch - 4ms/step Epoch 15/200 12/12 - 0s - loss: 0.2214 - accuracy: 0.9000 - val_loss: 0.3065 - val_accuracy: 0.9250 - 66ms/epoch - 6ms/step Epoch 16/200 12/12 - 0s - loss: 0.1995 - accuracy: 0.9250 - val_loss: 0.2398 - val_accuracy: 0.9500 - 48ms/epoch - 4ms/step Epoch 17/200 12/12 - 0s - loss: 0.1989 - accuracy: 0.9222 - val_loss: 0.2332 - val_accuracy: 0.9250 - 55ms/epoch - 5ms/step Epoch 18/200 12/12 - 0s - loss: 0.2333 - accuracy: 0.9306 - val_loss: 0.2546 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 19/200 12/12 - 0s - loss: 0.2013 - accuracy: 0.9000 - val_loss: 0.2577 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 20/200 12/12 - 0s - loss: 0.1985 - accuracy: 0.9167 - val_loss: 0.2401 - val_accuracy: 0.9500 - 50ms/epoch - 4ms/step Epoch 21/200 12/12 - 0s - loss: 0.2001 - accuracy: 0.9028 - val_loss: 0.2438 - val_accuracy: 0.9250 - 54ms/epoch - 5ms/step Epoch 22/200 12/12 - 0s - loss: 0.1763 - accuracy: 0.9139 - val_loss: 0.2366 - val_accuracy: 0.9500 - 53ms/epoch - 4ms/step Epoch 23/200 12/12 - 0s - loss: 0.1690 - accuracy: 0.9278 - val_loss: 0.2364 - val_accuracy: 0.9500 - 76ms/epoch - 6ms/step Epoch 24/200 12/12 - 0s - loss: 0.2076 - accuracy: 0.9000 - val_loss: 0.2916 - val_accuracy: 0.9250 - 50ms/epoch - 4ms/step Epoch 25/200 12/12 - 0s - loss: 0.1996 - accuracy: 0.9167 - val_loss: 0.2784 - val_accuracy: 0.9250 - 66ms/epoch - 5ms/step Epoch 26/200 12/12 - 0s - loss: 0.1938 - accuracy: 0.9278 - val_loss: 0.2458 - val_accuracy: 0.9250 - 49ms/epoch - 4ms/step Epoch 27/200 12/12 - 0s - loss: 0.1655 - accuracy: 0.9250 - val_loss: 0.2587 - val_accuracy: 0.9250 - 49ms/epoch - 4ms/step Epoch 28/200 12/12 - 0s - loss: 0.1939 - accuracy: 0.9111 - val_loss: 0.2408 - val_accuracy: 0.9250 - 50ms/epoch - 4ms/step Epoch 29/200 12/12 - 0s - loss: 0.2596 - accuracy: 0.8944 - val_loss: 0.2774 - 
val_accuracy: 0.9250 - 52ms/epoch - 4ms/step Epoch 30/200 12/12 - 0s - loss: 0.2000 - accuracy: 0.9111 - val_loss: 0.2485 - val_accuracy: 0.9250 - 55ms/epoch - 5ms/step Epoch 31/200 12/12 - 0s - loss: 0.1717 - accuracy: 0.9361 - val_loss: 0.2797 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 32/200 12/12 - 0s - loss: 0.2006 - accuracy: 0.9111 - val_loss: 0.2421 - val_accuracy: 0.9500 - 54ms/epoch - 5ms/step Epoch 33/200 12/12 - 0s - loss: 0.1709 - accuracy: 0.9250 - val_loss: 0.2546 - val_accuracy: 0.9000 - 51ms/epoch - 4ms/step Epoch 34/200 12/12 - 0s - loss: 0.1648 - accuracy: 0.9222 - val_loss: 0.2553 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 35/200 12/12 - 0s - loss: 0.1832 - accuracy: 0.9222 - val_loss: 0.2401 - val_accuracy: 0.9500 - 73ms/epoch - 6ms/step Epoch 36/200 12/12 - 0s - loss: 0.1895 - accuracy: 0.9222 - val_loss: 0.2538 - val_accuracy: 0.9500 - 56ms/epoch - 5ms/step Epoch 37/200 12/12 - 0s - loss: 0.1764 - accuracy: 0.9278 - val_loss: 0.2475 - val_accuracy: 0.9500 - 57ms/epoch - 5ms/step Epoch 38/200 12/12 - 0s - loss: 0.1839 - accuracy: 0.9139 - val_loss: 0.2452 - val_accuracy: 0.9250 - 58ms/epoch - 5ms/step Epoch 39/200 12/12 - 0s - loss: 0.1643 - accuracy: 0.9250 - val_loss: 0.2363 - val_accuracy: 0.9250 - 66ms/epoch - 5ms/step Epoch 40/200 12/12 - 0s - loss: 0.1803 - accuracy: 0.9194 - val_loss: 0.3246 - val_accuracy: 0.9250 - 64ms/epoch - 5ms/step Epoch 41/200 12/12 - 0s - loss: 0.1932 - accuracy: 0.9222 - val_loss: 0.2375 - val_accuracy: 0.9500 - 53ms/epoch - 4ms/step Epoch 42/200 12/12 - 0s - loss: 0.1663 - accuracy: 0.9278 - val_loss: 0.2537 - val_accuracy: 0.9250 - 53ms/epoch - 4ms/step Epoch 43/200 12/12 - 0s - loss: 0.1699 - accuracy: 0.9194 - val_loss: 0.2704 - val_accuracy: 0.9250 - 69ms/epoch - 6ms/step Epoch 44/200 12/12 - 0s - loss: 0.1606 - accuracy: 0.9333 - val_loss: 0.2350 - val_accuracy: 0.9500 - 53ms/epoch - 4ms/step Epoch 45/200 12/12 - 0s - loss: 0.1719 - accuracy: 0.9222 - val_loss: 0.2442 - val_accuracy: 0.9250 - 68ms/epoch - 6ms/step Epoch 46/200 12/12 - 0s - loss: 0.1761 - accuracy: 0.9111 - val_loss: 0.2455 - val_accuracy: 0.9500 - 55ms/epoch - 5ms/step Epoch 47/200 12/12 - 0s - loss: 0.1706 - accuracy: 0.9167 - val_loss: 0.2467 - val_accuracy: 0.9500 - 51ms/epoch - 4ms/step Epoch 48/200 12/12 - 0s - loss: 0.1665 - accuracy: 0.9361 - val_loss: 0.2546 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 49/200 12/12 - 0s - loss: 0.1708 - accuracy: 0.9250 - val_loss: 0.2415 - val_accuracy: 0.9500 - 51ms/epoch - 4ms/step Epoch 50/200 12/12 - 0s - loss: 0.1661 - accuracy: 0.9278 - val_loss: 0.2736 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 51/200 12/12 - 0s - loss: 0.1660 - accuracy: 0.9250 - val_loss: 0.2487 - val_accuracy: 0.9250 - 55ms/epoch - 5ms/step Epoch 52/200 12/12 - 0s - loss: 0.1573 - accuracy: 0.9222 - val_loss: 0.2470 - val_accuracy: 0.9250 - 56ms/epoch - 5ms/step Epoch 53/200 12/12 - 0s - loss: 0.1502 - accuracy: 0.9306 - val_loss: 0.2671 - val_accuracy: 0.9250 - 52ms/epoch - 4ms/step Epoch 54/200 12/12 - 0s - loss: 0.1904 - accuracy: 0.9167 - val_loss: 0.2427 - val_accuracy: 0.9500 - 54ms/epoch - 5ms/step Epoch 55/200 12/12 - 0s - loss: 0.1874 - accuracy: 0.9167 - val_loss: 0.2396 - val_accuracy: 0.9250 - 57ms/epoch - 5ms/step Epoch 56/200 12/12 - 0s - loss: 0.1652 - accuracy: 0.9083 - val_loss: 0.2559 - val_accuracy: 0.9000 - 56ms/epoch - 5ms/step Epoch 57/200 12/12 - 0s - loss: 0.1586 - accuracy: 0.9194 - val_loss: 0.2688 - val_accuracy: 0.9250 - 68ms/epoch - 6ms/step Epoch 58/200 12/12 - 0s - loss: 
0.1625 - accuracy: 0.9333 - val_loss: 0.2423 - val_accuracy: 0.9250 - 68ms/epoch - 6ms/step Epoch 59/200 12/12 - 0s - loss: 0.2196 - accuracy: 0.8944 - val_loss: 0.3140 - val_accuracy: 0.9250 - 55ms/epoch - 5ms/step Epoch 60/200 12/12 - 0s - loss: 0.1789 - accuracy: 0.9361 - val_loss: 0.2585 - val_accuracy: 0.9000 - 57ms/epoch - 5ms/step Epoch 61/200 12/12 - 0s - loss: 0.1588 - accuracy: 0.9333 - val_loss: 0.2478 - val_accuracy: 0.9250 - 52ms/epoch - 4ms/step Epoch 62/200 12/12 - 0s - loss: 0.1588 - accuracy: 0.9250 - val_loss: 0.2492 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 63/200 12/12 - 0s - loss: 0.1668 - accuracy: 0.9306 - val_loss: 0.2340 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 64/200 12/12 - 0s - loss: 0.1658 - accuracy: 0.9333 - val_loss: 0.2499 - val_accuracy: 0.9500 - 54ms/epoch - 5ms/step Epoch 65/200 12/12 - 0s - loss: 0.1594 - accuracy: 0.9167 - val_loss: 0.2639 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 66/200 12/12 - 0s - loss: 0.1829 - accuracy: 0.9167 - val_loss: 0.2700 - val_accuracy: 0.9250 - 50ms/epoch - 4ms/step Epoch 67/200 12/12 - 0s - loss: 0.2345 - accuracy: 0.8833 - val_loss: 0.3043 - val_accuracy: 0.9250 - 54ms/epoch - 5ms/step Epoch 68/200 12/12 - 0s - loss: 0.2027 - accuracy: 0.9083 - val_loss: 0.2691 - val_accuracy: 0.9250 - 54ms/epoch - 4ms/step Epoch 69/200 12/12 - 0s - loss: 0.1621 - accuracy: 0.9333 - val_loss: 0.2359 - val_accuracy: 0.9250 - 54ms/epoch - 4ms/step Epoch 70/200 12/12 - 0s - loss: 0.1474 - accuracy: 0.9361 - val_loss: 0.2497 - val_accuracy: 0.9250 - 53ms/epoch - 4ms/step Epoch 71/200 12/12 - 0s - loss: 0.1568 - accuracy: 0.9361 - val_loss: 0.2444 - val_accuracy: 0.9500 - 77ms/epoch - 6ms/step Epoch 72/200 12/12 - 0s - loss: 0.1739 - accuracy: 0.9139 - val_loss: 0.3005 - val_accuracy: 0.9250 - 63ms/epoch - 5ms/step Epoch 73/200 12/12 - 0s - loss: 0.1561 - accuracy: 0.9250 - val_loss: 0.2430 - val_accuracy: 0.9250 - 55ms/epoch - 5ms/step Epoch 74/200 12/12 - 0s - loss: 0.1559 - accuracy: 0.9361 - val_loss: 0.2705 - val_accuracy: 0.9250 - 54ms/epoch - 4ms/step Epoch 75/200 12/12 - 0s - loss: 0.1718 - accuracy: 0.9333 - val_loss: 0.2440 - val_accuracy: 0.9500 - 51ms/epoch - 4ms/step Epoch 76/200 12/12 - 0s - loss: 0.1554 - accuracy: 0.9306 - val_loss: 0.2406 - val_accuracy: 0.9500 - 51ms/epoch - 4ms/step Epoch 77/200 12/12 - 0s - loss: 0.1441 - accuracy: 0.9361 - val_loss: 0.2510 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 78/200 12/12 - 0s - loss: 0.1551 - accuracy: 0.9278 - val_loss: 0.2506 - val_accuracy: 0.9500 - 52ms/epoch - 4ms/step Epoch 79/200 12/12 - 0s - loss: 0.1566 - accuracy: 0.9389 - val_loss: 0.2497 - val_accuracy: 0.9500 - 54ms/epoch - 4ms/step Epoch 80/200 12/12 - 0s - loss: 0.1836 - accuracy: 0.9222 - val_loss: 0.2649 - val_accuracy: 0.9250 - 49ms/epoch - 4ms/step Epoch 81/200 12/12 - 0s - loss: 0.1582 - accuracy: 0.9278 - val_loss: 0.2330 - val_accuracy: 0.9500 - 53ms/epoch - 4ms/step Epoch 82/200 12/12 - 0s - loss: 0.1600 - accuracy: 0.9333 - val_loss: 0.2318 - val_accuracy: 0.9000 - 55ms/epoch - 5ms/step Epoch 83/200 12/12 - 0s - loss: 0.1612 - accuracy: 0.9306 - val_loss: 0.2411 - val_accuracy: 0.9250 - 51ms/epoch - 4ms/step Epoch 84/200 12/12 - 0s - loss: 0.1561 - accuracy: 0.9306 - val_loss: 0.2637 - val_accuracy: 0.9250 - 52ms/epoch - 4ms/step Epoch 85/200 12/12 - 0s - loss: 0.1649 - accuracy: 0.9167 - val_loss: 0.2632 - val_accuracy: 0.9250 - 70ms/epoch - 6ms/step Epoch 86/200 12/12 - 0s - loss: 0.1530 - accuracy: 0.9361 - val_loss: 0.2495 - val_accuracy: 0.9250 - 51ms/epoch - 
4ms/step
Epoch 87/200
12/12 - 0s - loss: 0.1645 - accuracy: 0.9361 - val_loss: 0.2758 - val_accuracy: 0.9250 - 69ms/epoch - 6ms/step
Epoch 88/200
12/12 - 0s - loss: 0.2023 - accuracy: 0.9056 - val_loss: 0.2339 - val_accuracy: 0.9500 - 55ms/epoch - 5ms/step
...
Epoch 199/200
12/12 - 0s - loss: 0.1423 - accuracy: 0.9389 - val_loss: 0.2352 - val_accuracy: 0.9250 - 52ms/epoch - 4ms/step
Epoch 200/200
12/12 - 0s - loss: 0.1306 - accuracy: 0.9417 - val_loss: 0.2690 - val_accuracy: 0.9250 - 53ms/epoch - 4ms/step
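Looking at the full log, the validation loss and accuracy stop improving long before epoch 200 and only oscillate afterwards, so most of the later epochs add little. A minimal sketch of how early stopping could cut the run short, assuming the training arrays are named X_train_normalized and y_train and that the original fit() call used a 10% validation split (the patience value here is illustrative, not tuned):

from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 20 consecutive epochs,
# and roll the weights back to the best epoch seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)

history_3 = model_3.fit(X_train_normalized, y_train,
                        validation_split=0.1,   # assumed to match the original call
                        epochs=200, verbose=2,
                        callbacks=[early_stop])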
In [ ]:
plt.plot(history_3.history['accuracy'])
plt.plot(history_3.history['val_accuracy'])
plt.title('Accuracy vs Epochs')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='lower right')
plt.show()
Observations:
- Both curves flatten out well before the end of the run: training accuracy settles around 93-94%, while validation accuracy oscillates between 92.5% and 95%.
- The training and validation curves stay close to each other throughout, so there is no strong sign of overfitting.
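Since history_3 also stores the loss values logged above, plotting them the same way is a useful complement: a widening gap between training and validation loss usually signals overfitting earlier than the accuracy curves do. A minimal sketch reusing the same History object:

# Plot training and validation loss from the same History object
plt.plot(history_3.history['loss'])
plt.plot(history_3.history['val_loss'])
plt.title('Loss vs Epochs')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()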
In [ ]:
# Evaluate the model on the test set, then generate class predictions
model_3.evaluate(X_test_normalized, y_test, verbose=1)
test_pred = np.round(model_3.predict(X_test_normalized))
4/4 [==============================] - 0s 3ms/step - loss: 0.1059 - accuracy: 0.9600
The test accuracy comes out to about 96%, which means the model reproduces its training and validation performance on the unseen test data, i.e., it generalizes well.
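One detail worth noting: np.round on the sigmoid outputs is effectively a 0.5 probability cutoff. Making that threshold explicit lets it be tuned, for example lowered if missing a likely admit is considered costlier than a false positive. A minimal sketch reusing model_3 and X_test_normalized (the 0.5 value is just the implicit default, not a tuned choice):

probs = model_3.predict(X_test_normalized)    # sigmoid probabilities in [0, 1]
threshold = 0.5                               # illustrative; lower it to trade precision for recall
test_pred = (probs >= threshold).astype(int)  # 1 = predicted Admitted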
In [ ]:
# Classification report and confusion matrix for the test predictions
from sklearn.metrics import classification_report, confusion_matrix

print(classification_report(y_test, test_pred))

# Visualize the confusion matrix as a labeled heatmap
cm = confusion_matrix(y_test, test_pred)
plt.figure(figsize=(8, 5))
sns.heatmap(cm, annot=True, fmt='.0f',
            xticklabels=['Not Admitted', 'Admitted'],
            yticklabels=['Not Admitted', 'Admitted'])
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
              precision    recall  f1-score   support

           0       0.96      0.99      0.97        73
           1       0.96      0.89      0.92        27

    accuracy                           0.96       100
   macro avg       0.96      0.94      0.95       100
weighted avg       0.96      0.96      0.96       100
Observations:
- Precision is 0.96 for both classes, so the model makes few wrong predictions in either direction.
- Recall for the Admitted class (0.89) is lower than for the Not Admitted class (0.99); the model misses a few students who actually have a strong chance of admission.
- The test set is imbalanced (73 Not Admitted vs. 27 Admitted), which makes the solid minority-class scores all the more encouraging.
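As a sanity check, the per-class numbers in the report can be recomputed directly from the confusion matrix cm built above; the sketch below does this for the Admitted class (for a 2x2 matrix with labels 0 and 1, cm.ravel() returns tn, fp, fn, tp):

# Recompute the class-1 metrics from the raw confusion-matrix counts
tn, fp, fn, tp = cm.ravel()
precision_1 = tp / (tp + fp)   # of predicted admits, the share that were actual admits
recall_1 = tp / (tp + fn)      # of actual admits, the share the model caught
f1_1 = 2 * precision_1 * recall_1 / (precision_1 + recall_1)
print(f"precision={precision_1:.2f}, recall={recall_1:.2f}, f1={f1_1:.2f}")

These values should reproduce the 0.96 / 0.89 / 0.92 row of the report for class 1.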
In this case study, we built and trained a neural network classifier to predict a student's chances of admission into UCLA. The tuned model (model_3) reached about 96% accuracy on the unseen test set, with precision and recall of 0.89 or higher for both classes, and the closeness of the training, validation, and test scores suggests the model generalizes well.