Once we had proved that apples could be successfully detected using a pre-trained model and had acquired enough custom data, a second object detector was built. Instead of making predictions based on the pre-trained model alone, a custom model was developed using the apple bin images taken in the Nelson region.


Of the 3000 apple bin images, ten were initially selected to form a training sub-set. The images were hand-picked to cover a range of conditions, including as many variations of illumination/weather, camera angle and scale as possible.


Approximately 1000 individual apple annotation labels were then made across the initial ten images.
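
To give a sense of what that labelling step produces, here is a small sketch that loads a COCO-style annotation file and counts the apple labels on each image. The file name, format and field names are assumptions for illustration; the actual annotation tool and format used are not described in this post.

```python
import json
from collections import Counter

# Hypothetical COCO-style annotation file produced by the hand-labelling step;
# the real file name, tool and format are not described in the post.
with open("apple_bins_train.json") as f:
    coco = json.load(f)

# Each annotation entry marks one apple: a bounding box plus a polygon mask.
labels_per_image = Counter(ann["image_id"] for ann in coco["annotations"])

print(f"{len(coco['images'])} images, {len(coco['annotations'])} apple labels in total")
for image_id, count in sorted(labels_per_image.items()):
    print(f"image {image_id}: {count} apples labelled")
```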

Apple bin with manual labels drawn

With the custom training sub-set correctly labelled, the pre-trained model was retrained on it. This technique is called transfer learning. Its main benefit is that retraining a pre-trained model requires only a small custom training sub-set (in our case, just ten images).
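
The exact framework and architecture aren't covered in this post, but because the outputs include both masks and bounding boxes, a Mask R-CNN-style detector makes a reasonable stand-in for a sketch. The snippet below shows, in PyTorch/torchvision, roughly what retraining a pre-trained model for a single "apple" class can look like; `train_loader` stands for a hypothetical data loader over the ten labelled images, and the hyperparameters are illustrative only.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Start from a Mask R-CNN pre-trained on COCO and swap its prediction heads
# so the model outputs just two classes: background and apple.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2  # background + apple

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# Transfer learning: the backbone already knows general visual features,
# so a brief fine-tune on the ten labelled images is enough.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for epoch in range(10):                       # illustrative epoch count
    for images, targets in train_loader:      # hypothetical loader over the 10-image subset
        loss_dict = model(images, targets)    # dict of box/class/mask losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```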

Apple detection with a custom model based on a pre-trained model (output example 1)

Once the model had been developed, new images were chosen for further detection testing. The images below show the automated detections of the retrained model, with masks and bounding boxes fitted over the individual apples. The various colours are used only to help visualise the different apples detected; they do not represent size or quality in any way.

Apple detection with a custom model based on a pre-trained model (output example 2)
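
To make the visualisation step concrete, the snippet below continues the sketch above: it runs the retrained model on a fresh bin image and overlays the predicted masks and boxes, giving each detected apple a random colour purely so the detections can be told apart. The file name and score threshold are illustrative, not values used in production.

```python
import torch
from torchvision.io import read_image
from torchvision.transforms.functional import to_pil_image
from torchvision.utils import draw_bounding_boxes, draw_segmentation_masks

# Run the retrained model (from the previous sketch) on a new bin image.
# "new_bin.jpg" and the 0.7 score threshold are illustrative choices.
image = read_image("new_bin.jpg")           # uint8 tensor, shape (3, H, W)
model.eval()
with torch.no_grad():
    pred = model([image.float() / 255.0])[0]

keep = pred["scores"] > 0.7
boxes = pred["boxes"][keep]
masks = pred["masks"][keep, 0] > 0.5        # binarise the soft masks

# One random colour per apple, purely to tell the detections apart.
colours = [tuple(torch.randint(0, 256, (3,)).tolist()) for _ in range(len(boxes))]
overlay = draw_segmentation_masks(image, masks, alpha=0.5, colors=colours)
overlay = draw_bounding_boxes(overlay, boxes, colors=colours, width=3)
to_pil_image(overlay).show()
```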

In the next post you'll learn about the step that followed: overcoming a pixel challenge.

Let’s talk

If you would like to learn how Hectre's award-winning fruit technologies can support the success of your business, please connect with us.