Building Blocks of a Neural Network from Scratch

Sharifdeen Ashshak
4 min read · Jun 26, 2022


In this post, we will take a look at building a simple neural network model from scratch using Keras and TensorFlow.

Contents included

  • Load the dataset from Keras
  • Train images
  • Test images
  • Workflow
  • Preparation of image data
  • Prepare labels
  • Test our model

Load the dataset from Keras

In this case you don’t need to download the dataset yourself: Keras ships with it preloaded, in the form of a set of NumPy arrays. train_images and train_labels are the data the model will learn from; the model will then be tested on test_images and test_labels.

Here the images are loaded as NumPy arrays and the labels are digits ranging from 0 to 9. Images and labels have a one-to-one correspondence. Each image is 28 × 28 pixels, with 60,000 training images and 10,000 test images.
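A minimal sketch of loading this dataset, assuming it is the MNIST handwritten-digit dataset bundled with Keras:

```python
# Load the MNIST digits dataset that ships with Keras (assumed dataset)
from tensorflow.keras.datasets import mnist

# Each split is a pair of NumPy arrays: images and their digit labels (0-9)
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
```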

Train images
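Inspecting the training arrays would look roughly like this:

```python
# The training split: 60,000 images of 28 x 28 pixels,
# with one uint8 label (a digit from 0 to 9) per image
print(train_images.shape)   # (60000, 28, 28)
print(len(train_labels))    # 60000
```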

Test images
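And the test arrays:

```python
# The test split: 10,000 images of 28 x 28 pixels,
# again with one uint8 label per image
print(test_images.shape)    # (10000, 28, 28)
print(len(test_labels))     # 10000
```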

Workflow

First, before predicting results from the test images, we have to feed the training images and labels to the neural network so it can learn. Then we can use the test images to make predictions, and compare those predictions against the test labels to check the accuracy.

The core building block of neural networks is the layer, a data-processing module that you can think of as a filter for data: some data goes in, and it comes out in a more useful form. Specifically, layers extract representations out of the data fed into them, hopefully representations that are more meaningful for the problem at hand. Here we have two Dense layers, which are densely connected neural layers. The second layer is a 10-way softmax layer, meaning it outputs an array of 10 probability scores; each score is the probability that the current image belongs to one of our 10 digit classes. The class with the maximum probability is taken as the prediction for the image.

To make the network ready for training, we need to pick three more things: a loss function, an optimizer, and metrics to monitor during training and testing. The loss function measures the network’s performance on the training data and thus how it steers itself in the right direction. The optimizer updates the network based on the data it sees and its loss. For metrics, here we only care about accuracy, the fraction of images that are correctly classified.
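A minimal sketch of the two-layer network described above; the hidden layer size of 512 and the relu activation are assumptions, since only the final 10-way softmax layer is specified here:

```python
from tensorflow.keras import models, layers

# Two densely connected (Dense) layers; the second is a 10-way softmax
# that outputs a probability score for each of the 10 digit classes.
network = models.Sequential([
    layers.Dense(512, activation='relu', input_shape=(28 * 28,)),
    layers.Dense(10, activation='softmax'),
])
```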

Compilation steps
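A sketch of the compilation step; rmsprop and categorical crossentropy are assumed choices for the optimizer and loss function:

```python
# Pick the optimizer, the loss function, and the metrics to monitor
network.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])
```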

Preparation of image data

Before we train our network, we preprocess the data by reshaping it into the format the network expects and scaling it so the values fall in the [0, 1] interval. Previously, our training images were stored in an array of shape (60000, 28, 28) of type uint8, with values in the [0, 255] interval. Here we transform them into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.
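A sketch of this preprocessing step:

```python
# Reshape the (60000, 28, 28) uint8 images into (60000, 784) float32 arrays
# and scale the pixel values from the [0, 255] range into [0, 1].
train_images = train_images.reshape((60000, 28 * 28)).astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28)).astype('float32') / 255
```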

Prepare labels
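A sketch of the label preparation, assuming the digit labels are one-hot encoded to match the 10-way softmax output:

```python
from tensorflow.keras.utils import to_categorical

# Turn each digit label into a 10-element one-hot vector
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```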

Now it’s time to train our neural network. In Keras, we call the fit method to train the network. During training, two quantities are displayed: the loss of the network over the training data and the accuracy of the network over the training data.
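A sketch of the training call; the epochs and batch_size values here are assumptions:

```python
# Train the network; Keras prints the loss and accuracy on the
# training data after each epoch.
network.fit(train_images, train_labels, epochs=5, batch_size=128)
```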

Test model

Let’s test our model on the test data.
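A sketch of the evaluation step:

```python
# Evaluate the trained network on the held-out test data
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test accuracy:', test_acc)
```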

Here our model reaches an accuracy of 97.8% on the test data, while on the training data we got 98.92%. The test accuracy is a bit lower than the training accuracy; this gap is an example of overfitting, where a machine learning model performs worse on new data than on the data it was trained on.

In my next post, I will take a deeper look at every moving piece of this, such as tensors, tensor operations, data-storing objects, and more.

Originally published at https://www.blackkeyhole.com.
