Automated guided vehicles (AGVs) and manufacturing execution systems (MES) are becoming an integral part of smart factories, as they address today's requirements of manufacturers and their suppliers of logistics equipment. Engineers from the Institute of Production Engineering and Photonic Technologies (IFT) showcased how Aurora Vision Studio helps make the fourth industrial revolution a reality.
A machine vision showcase project was carried out by the Institute of Production Engineering and Photonic Technologies (IFT) as part of the pilot factory of the Vienna University of Technology. As the IFT has been working on machine vision problems for some time, a first independent machine vision project was implemented within the pilot factory.
Some of the well-established research activities of the IFT in the pilot factory are described below.
For the optical identification of workpieces, workpiece carriers, and fixtures, a flexible solution was integrated that enables the automated integration of varying image processing tasks through microservices. To stabilize manufacturing operations by ensuring that the correct manufacturing utilities are used, a camera was installed at the center of the shop floor, where it can be flexibly reached by automated guided vehicles (AGVs). The AGVs serve as a transport mechanism that supplies the manufacturing cells with the resources needed for production, and thus handle the logistics between the operations of the workplan provided by the manufacturing execution system (MES). An image processing task may also be hosted on a server and executed via a web service. The image processing tasks required for the specific manufacturing operations are linked to the MES workplan during production engineering.
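Conceptually, this production-engineering link can be thought of as a lookup from an MES operation to its inspection service. The sketch below is purely illustrative; the operation IDs and endpoints are invented placeholders, not the IFT's actual configuration:

```python
# Hypothetical mapping of MES workplan operations to image processing
# microservices; all IDs and URLs are illustrative placeholders.
IMAGE_PROCESSING_TASKS = {
    "OP-010-LOAD-CHECK": "http://vision-server.local:5000/inspect?task=loaded",
    "OP-040-MILL-CHECK": "http://vision-server.local:5000/inspect?task=milled",
}

def service_for_operation(operation_id: str) -> str:
    """Return the inspection endpoint linked to an MES operation,
    as established during production engineering."""
    return IMAGE_PROCESSING_TASKS[operation_id]
```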
Figure 1: System structure
At runtime, an active MES workplan publishes its state changes to SAP MII, which triggers a service on a Node-RED instance with the relevant information about the released operation, the workplace, and the necessary resources. Node-RED instantiates the matching process model from a Git server in the centurio.work process engine for execution. The executed BPMN process, which is abstracted through multiple process layers, then controls the various devices (e.g., machines and robots), which themselves provide web services for these tasks. For example, the AGV is sent to the camera, where an image of the current load is taken and processed with the matching image processing service. Depending on the result, the AGV is either sent to the workplace of the next operation or back to the setup station, with a non-conformance message for the MES.
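The routing decision at the end of this flow can be sketched in a few lines of Python. This is a minimal illustration only: the inspection endpoint, parameter names, and XML result format are assumptions, and in the real system this logic lives inside the BPMN process executed by centurio.work:

```python
import requests

INSPECTION_URL = "http://vision-server.local:5000/inspect"  # assumed endpoint

def route_agv(image_path: str, task: str) -> str:
    """Send the image of the current AGV load to the inspection service
    and decide where the AGV goes next based on the result."""
    with open(image_path, "rb") as image:
        response = requests.post(
            INSPECTION_URL,
            params={"task": task},   # selects the image processing program
            files={"image": image},
        )
    response.raise_for_status()
    # Assumed XML result format, e.g. <result>true</result>.
    if "<result>true</result>" in response.text:
        return "next-operation-workplace"
    # Failure: back to the setup station, non-conformance message to the MES.
    return "setup-station"
```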
As mentioned above, the visual inspection is realized via web services: both the image acquisition and the image processing are independent services that can be addressed by the MES. The services are implemented using Python and Flask.
The Python web service executes different image processing programs depending on the service request. When the web service is called, the image to be processed is transferred and the type of image processing is determined from the HTTP request parameters. The Python script then runs the corresponding runtime executable on the server. The result of the image processing is returned to the process engine in the form of an XML file. Currently two programs are implemented, but the number can be expanded as required. Both image processing programs were created using Aurora Vision Studio and are described below; a minimal sketch of the dispatching web service follows.
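A minimal Flask sketch of such a dispatcher might look like this. The endpoint name, executable paths, file names, and XML conventions are assumptions for illustration, not the IFT's actual implementation:

```python
import subprocess
from flask import Flask, Response, request

app = Flask(__name__)

# Hypothetical mapping of request parameters to the runtime executables
# generated from the Aurora Vision Studio programs.
PROGRAMS = {
    "loaded": "./program1_load_check.exe",
    "milled": "./program2_mill_check.exe",
}

@app.route("/inspect", methods=["POST"])
def inspect() -> Response:
    task = request.args.get("task")
    if task not in PROGRAMS:
        return Response("<error>unknown task</error>", status=400,
                        mimetype="application/xml")
    # Store the transferred image so the executable can read it
    # (assumed convention: the executable takes the image path as argument).
    request.files["image"].save("input.png")
    subprocess.run([PROGRAMS[task], "input.png"], check=True)
    # Assumed convention: the executable writes its result, including a
    # timestamp, to an XML file that is returned to the caller.
    with open("result.xml") as f:
        return Response(f.read(), mimetype="application/xml")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```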
Program 1
This program checks whether the jaw chuck is loaded with a blank part or not. For this purpose, the program is designed to recognize the part. Due to the inhomogeneous ceiling lighting and the resulting reflections of the jaw chuck mount as well as of the part itself, a simple threshold filter could not be applied at the start of the program. Therefore, the GradientMagnitudeImage filter was used to isolate all the edges; this significantly reduced the intense inhomogeneities caused by the reflections. In the next step, the ThresholdToRegion filter was used to identify the distinctive areas, including the surface of the part. The resulting region was then split up into non-connected blobs. These blobs were then filtered using circularity and area-size criteria, which finally yields a TRUE/FALSE statement regarding the loading status of the jaw chuck. The results, along with a timestamp, are saved in an XML file.
Figure 2: Jaw chuck loaded with a coin blank results in a true output statement.
Figure 3: Empty jaw chuck results in a false output statement.
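The original program was built graphically in Aurora Vision Studio, so no source code exists for it. As a rough approximation only, the same pipeline (gradient magnitude, threshold to region, blob splitting, filtering by area and circularity) could be sketched in Python with OpenCV; all thresholds below are assumed values:

```python
import cv2
import numpy as np

def jaw_chuck_loaded(image_path: str,
                     grad_thresh: float = 50.0,     # assumed edge threshold
                     min_area: float = 5000.0,      # assumed blob area (px)
                     min_circularity: float = 0.7) -> bool:
    """Approximate the Aurora Vision pipeline: gradient magnitude,
    threshold to region, split into blobs, filter by area/circularity."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Gradient magnitude suppresses the inhomogeneous ceiling lighting.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)
    # Threshold to region: keep only strong edges.
    _, edges = cv2.threshold(magnitude, grad_thresh, 255, cv2.THRESH_BINARY)
    edges = edges.astype(np.uint8)
    # Split into non-connected blobs, filter by area and circularity.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        if perimeter == 0:
            continue
        circularity = 4 * np.pi * area / perimeter ** 2
        if area >= min_area and circularity >= min_circularity:
            return True  # a sufficiently large, round blob: part present
    return False
```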
Program 2
For this program, the template matching filter LocateObjectNCC was used to locate the center of the jaw chuck. Once the center is located, a narrowed circular region focused on the part size is created. Within this region, the milling contour can be isolated using a GradientMagnitudeImage filter in combination with a ThresholdToRegion filter. Evaluating the size of the thresholded region yields the binary classification: MILLED or NOT MILLED. These results are saved in an XML file as well.
Figure 4: The detection of the milled coin pattern results in a true output statement.
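As with Program 1, a rough Python/OpenCV analogue of these steps (NCC template matching for the chuck center, a circular region of interest, gradient-magnitude edge thresholding, and a region-size decision) might look as follows; the template image and all numeric parameters are assumptions:

```python
import cv2
import numpy as np

def coin_is_milled(image_path: str, template_path: str,
                   roi_radius: int = 120,       # assumed part radius (px)
                   grad_thresh: float = 60.0,   # assumed edge threshold
                   min_edge_pixels: int = 2000) -> bool:
    """Approximate Program 2: locate the jaw chuck center via NCC template
    matching, then check for milling edges inside a circular region."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    # LocateObjectNCC analogue: normalized cross-correlation matching.
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCORR_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(scores)
    center = (top_left[0] + template.shape[1] // 2,
              top_left[1] + template.shape[0] // 2)
    # Narrowed circular region focused on the part size.
    mask = np.zeros_like(gray)
    cv2.circle(mask, center, roi_radius, 255, -1)
    # Gradient magnitude plus threshold isolates the milling contour.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)
    edges = (magnitude > grad_thresh) & (mask > 0)
    # The size of the thresholded region decides MILLED vs. NOT MILLED.
    return int(np.count_nonzero(edges)) >= min_edge_pixels
```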
Authors: Laurenz Pickel, Thomas Trautner, Fabian Singer, Gernot Pöchgraber