2. Determine which predictions were incorrect and propagate back the difference between the prediction and the true value (backpropagation).

3. Rinse and repeat until the predictions become sufficiently accurate.

It's quite likely that the initial iteration will have close to 0% accuracy. Repeating the process several times, however, can yield a highly accurate model (> 90%).

The batch size defines how many images the CNN sees at a time. It's important that each batch contain a good variety of images from different classes in order to prevent large fluctuations in the accuracy metric between iterations; a sufficiently large batch size is necessary for that. However, it's important not to set the batch size too large, for a couple of reasons. First, if the batch size is too large, you could end up crashing the program due to lack of memory. Second, the training process would be slower. Batch sizes are usually set to powers of 2; 64 is a good number to start with for most problems, and you can experiment by increasing or decreasing it from there.

24 | Chapter 2: Cats vs Dogs - Transfer Learning in 30 lines with Keras

Data Augmentation

Usually, when you hear deep learning, you associate it with millions of images. The 500 images we have might be a low number for real-world training. Deep neural networks are powerful, a little too powerful for small quantities of data. The danger of a limited set of training images is that the neural network might memorize your training data, showing great prediction performance on the training set but bad accuracy on the validation set. In other words, the model has overtrained and does not generalize to previously unseen images. And we don't want that, right?

There are often cases where not enough data is available. Perhaps you're working on a niche problem and data is hard to come by. There are a few ways you can artificially augment your dataset:

1. Rotation: In our example, we might rotate each of the 500 images randomly by up to 20 degrees in either direction, yielding up to 20,000 possible unique images.

2. Random shift: Shift the images slightly to the left or to the right.

3. Zoom: Zoom slightly in and out of the image.

By combining rotation, shifting, and zooming, the program can generate an almost infinite number of unique images. This important step is called data augmentation. Keras provides the ImageDataGenerator class, which augments the data while it is being loaded from the directory. Example augmentations generated by imgaug (aleju/imgaug) for a sample image are shown in Figure 2-3.
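The batch-size discussion above can be sketched in plain NumPy. This is a minimal, hypothetical illustration (the dataset shapes, the helper name `iterate_minibatches`, and the feature size of 32 are all made up for the example); in practice Keras handles batching for you when you pass `batch_size` to its training calls. The point is only to show how 500 images split into shuffled batches of 64:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the dataset: 500 flattened "images" with binary
# labels (cat = 0, dog = 1); the feature size of 32 is arbitrary.
X = rng.random((500, 32))
y = rng.integers(0, 2, size=500)

batch_size = 64  # a power of 2, as suggested above

def iterate_minibatches(X, y, batch_size, rng):
    """Yield shuffled mini-batches so each batch mixes both classes."""
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        yield X[idx], y[idx]

batches = list(iterate_minibatches(X, y, batch_size, rng))
print(len(batches))         # 500 images / 64 per batch -> 8 batches
print(len(batches[-1][0]))  # the last batch holds the 52 leftover images
```

Shuffling before slicing is what gives each batch a mix of classes, which keeps the accuracy metric from swinging wildly between iterations.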
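To make the augmentation idea concrete, here is a minimal NumPy sketch of just the random-shift transform. The helper name `random_shift` and the tiny 4 x 4 image are invented for illustration; Keras's ImageDataGenerator performs shifts (plus rotation and zoom) on the fly while loading, and fills the exposed border instead of wrapping pixels around as `np.roll` does here:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_shift(image, max_shift, rng):
    """Randomly shift an H x W image a few pixels along each axis.

    A stand-in for the random-shift augmentation described above;
    np.roll wraps pixels around the edge, whereas real augmenters
    usually fill the border with a constant or a reflection.
    """
    dy = int(rng.integers(-max_shift, max_shift + 1))
    dx = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

image = np.arange(16).reshape(4, 4)  # a tiny 4 x 4 "image"
augmented = [random_shift(image, 1, rng) for _ in range(5)]
print(augmented[0].shape)            # (4, 4): the size is unchanged
```

Each call produces a slightly different variant of the same image, which is exactly how a small dataset is stretched into many more effective training examples.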
Figure 2-3. Possible image augmentations generated from a single image by the imgaug library

Colored images usually have three channels: red, green, and blue. Each channel has an intensity value ranging from 0 to 255. To normalize the image (i.e., scale each value down to between 0 and 1), we will divide each pixel by 255.
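The divide-by-255 normalization looks like this in NumPy (the 2 x 2 toy image is made up for illustration). When loading data with Keras's ImageDataGenerator, the same effect is typically achieved by passing `rescale=1./255` to its constructor:

```python
import numpy as np

# A toy 2 x 2 RGB image with 8-bit intensities in [0, 255].
pixels = np.array([[[0, 128, 255], [64, 64, 64]],
                   [[255, 0, 0], [10, 200, 30]]], dtype=np.uint8)

normalized = pixels / 255.0  # every value now lies in [0.0, 1.0]
print(normalized.min(), normalized.max())  # 0.0 1.0
```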
