
Airport Trays Verification

    Institute of Production Engineering and Photonic Technologies


As numerous studies show, the security check is one of the most stressful points of air travel. Consequently, after passing the control, passengers often forget their belongings in the security trays. By the time they realize something is missing, it is usually too late and little can be done beyond contacting the Lost & Found office. Engineers from the Institute of Production Engineering and Photonic Technologies at Vienna University of Technology showed how deep learning tools can make our lives easier.

Introduction

Deep Learning Editor

The main problem in automating security checks and tray handling is the analysis of the tray content. Such an analysis has to be carried out because many passengers forget their belongings in the trays, which would then be passed on to the next, unsuspecting passenger. In addition, it turns out that many people deliberately leave trash in the trays, such as empty bottles and receipts. Last but not least, the trays get contaminated over time; it is therefore necessary to detect the contamination and take those trays out of circulation for cleaning instead of handing them to new passengers.

In this project, we tried to detect the tray states mentioned above using deep learning tools, with each tray assigned to one of three classes: Empty, Object or Liquid.
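For illustration only, the three tray states map naturally onto a folder-per-class image dataset. The directory name and file pattern below are assumptions, not part of the original setup; in practice the labels are managed inside the Deep Learning Editor.

```python
from pathlib import Path

# Assumed folder-per-class layout (hypothetical paths), e.g.
#   tray_dataset/Empty/img_001.png, tray_dataset/Object/..., tray_dataset/Liquid/...
DATA_ROOT = Path("tray_dataset")
TRAY_STATES = ["Empty", "Object", "Liquid"]

def count_samples(root: Path) -> dict:
    """Count training images per tray state."""
    return {state: len(list((root / state).glob("*.png"))) for state in TRAY_STATES}

if __name__ == "__main__":
    print(count_samples(DATA_ROOT))  # the whole training set was only about 150 images
```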

Process description

For the realization of the machine vision task described above, we used the ClassifyObject tool, one of the tools available in Adaptive Vision's Deep Learning Add-on. The model was trained with about 150 images divided into the three mentioned classes. During the training phase, we used images with a wide variety of contaminations and objects in the trays. Furthermore, we varied the positioning of the objects, the camera and the tray itself, as well as exposures, light sources and shadows, to obtain a deep learning model that could be used at any airport. We later found out that such variety was not needed, as the Adaptive Vision deep learning tools support data augmentations that add artificially modified samples to the training data set. A rough analogue of this training step is sketched below.
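The following sketch is not the Adaptive Vision API; the model choice, hyperparameters and folder layout are assumptions. It shows a small three-class classifier trained with on-the-fly augmentation in PyTorch, where the augmentations play the same role as the Add-on's built-in data augmentation: they simulate varied lighting, tray positions and camera angles instead of photographing them.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# On-the-fly augmentations standing in for extra photographs with different
# tray positions, camera angles, exposures and shadows.
train_tf = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

# Hypothetical folder-per-class dataset: tray_dataset/{Empty,Object,Liquid}/...
train_set = datasets.ImageFolder("tray_dataset", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=8, shuffle=True)

# A small pretrained backbone is sufficient for roughly 150 training images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```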

Training Image (1) - Empty Tray
Training Image (2) - Object
Training Image (3) - Object
Training Image (4) - Liquid

The deep learning model was successfully deployed with a 100% classification rate on the training set. The algorithm was then tested with new and challenging images that had not been used for training, and the results turned out to be excellent: no misclassifications have been observed on the test images so far. Besides the classification result, the ClassifyObject filter returns a RelevanceHeatmap indicating how strongly specific parts of the image influenced the classification. This facilitates the analysis of results, because you can assess whether the deep learning model focuses on relevant features, in this case on any object in the tray.

Test image - Object (with RelevanceHeatmap)
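A simple way to approximate what such a relevance map conveys is an occlusion-sensitivity analogue: slide a gray patch over the image and record how much the predicted class score drops. Note that the actual RelevanceHeatmap is produced directly by the ClassifyObject filter; the patch size and stride below are assumptions.

```python
import torch
import torch.nn.functional as F

def occlusion_heatmap(model, image, patch=32, stride=16):
    """Occlusion-sensitivity map for one preprocessed image tensor (3, H, W).

    Regions whose masking lowers the predicted class score the most are the
    regions the classifier relied on, similar in spirit to a relevance heatmap.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image.unsqueeze(0)), dim=1)
        cls = probs.argmax(dim=1).item()
        base_score = probs[0, cls].item()

        _, h, w = image.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0.5  # gray patch
                score = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, cls].item()
                heat[i, j] = base_score - score  # larger drop = more relevant region
    return cls, heat
```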


Authors: Laurenz Pickel, Thomas Trautner, Fabian Singer, Gernot Pöchgraber