Collecting Images
For all machine vision applications, whether traditional or deep learning, high-quality, high-contrast images are the key component. In Deep Learning applications, images are the primary input, and the images that are used to train the tool will determine its success. In addition, the images used to train the tool should be the same as the images that you expect to encounter during the tool’s deployment. So, the more consistent and accurately representative the images are during training, the better the tool will perform during deployment. It is also important to remember that the power of deep learning cannot overcome poor image quality. The principle of “garbage in, garbage out” applies: the quality of the input will directly influence the quality of the result that Deep Learning is able to achieve.
Training is the process by which your tool, which is a neural network, learns about features (pixels) based on the labels you made. For example, a tool will learn the defect/normal pixels in each image based on the defect/normal labels you drew. The goal of training is for the tool to learn enough to give the correct inspection result of whether an unseen image is defective or not. The key to training is to ensure that you include all possible variations within your training set, and that your images are accurately labeled. Training times vary by the application, tool setup, and the GPU in the PC being used to train the network.
The most important factor in setting up Deep Learning is creating an image set based on what you expect the software to encounter during its deployment phase. Your images should contain all the information that Deep Learning will need to reach the correct decision. Look for scenarios where your manual inspectors pick up parts and then tilt and rotate them to examine for defects; this indicates that you will probably need angled imaging or lighting to capture those defects.
Another possible scenario is where a human inspector sees dust or oil on a part, picks it up, and manually wipes off the dust or oil. If this dust or oil could be confused with a defect, you will need to teach Deep Learning about the dust or oil. This image set will need to include the full range of possible variations that can be captured by the camera. The goal is to properly generalize the data set. Generalization refers to the deep learning concept of how effective the tools will be when used on newly acquired images that were not used during training. A well-generalized tool will perform well on new data. In this scenario, the model formed by the neural network should fit the initial training set and account for new data it encounters in unseen images.
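One common way to estimate how well a trained tool will generalize is to hold out a portion of the labeled images as a validation set that is never used during training; performance on those held-out images approximates performance on unseen production images. The sketch below illustrates the split in Python; the file names and split fraction are placeholders for illustration, not part of any specific tool's API.

```python
import random

def split_train_validation(image_paths, val_fraction=0.2, seed=0):
    """Hold out a fraction of labeled images to estimate generalization.

    The validation images are never shown to the tool during training;
    accuracy on them approximates performance on unseen production images.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed keeps the split reproducible
    n_val = max(1, int(len(paths) * val_fraction))
    return paths[n_val:], paths[:n_val]  # (training set, validation set)

# Hypothetical file names for illustration only.
images = [f"part_{i:03d}.png" for i in range(50)]
train_set, val_set = split_train_validation(images)
print(len(train_set), len(val_set))  # 40 10
```

Shuffling before the split matters: images acquired in sequence often share lighting or fixturing, and a sequential split would hide that variation from one side of the split.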
The Deep Learning tools are capable of handling image and lighting variability, but the tools must be taught what that variability might entail. If the lighting may be brighter or darker from image to image, capture that variability in the images, and teach the tools by adding those images to your training image set. A training image set is a collection of images of your specific application: images of a specific part or process, acquired in a consistent way using the lighting, optical, and mechanical characteristics of your runtime system, and representing the range of image appearances that you expect to see in normal operation.
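A simple sanity check, separate from any specific tool, is to measure the mean brightness of each training image and confirm that the set spans the lighting range you expect in production. The sketch below assumes images have already been loaded as flat lists of 0-255 grayscale values; the toy arrays stand in for real dark, nominal, and bright exposures.

```python
def mean_brightness(pixels):
    """Mean gray level of one image, given a flat list of 0-255 pixel values."""
    return sum(pixels) / len(pixels)

def brightness_range(image_set):
    """Min and max mean brightness across a set of images."""
    means = [mean_brightness(img) for img in image_set]
    return min(means), max(means)

# Toy stand-ins for real images: a dark, a nominal, and a bright exposure.
dark    = [40] * 100
nominal = [120] * 100
bright  = [200] * 100

lo, hi = brightness_range([dark, nominal, bright])
print(lo, hi)  # 40.0 200.0
```

If the measured range is narrower than what production lighting can produce, acquire and label additional images at the missing exposures before training.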
When configuring your lighting and imaging options, you can use typical machine vision lighting and optics techniques. However, with Deep Learning, you want to ensure that the lighting and optics are consistent between training and production. If, for example, you train the tool on images acquired with a certain lighting and optics setup, and you then alter that configuration during production, the tool will still base its performance on the initial setup and may fail during production.
If possible, use controlled lighting to avoid effects caused by ambient light or visual changes caused by differences in the lighting setup. When setting up the cameras, make sure that the camera setup in the lab is the same as the one that will be used during production. Also attempt to minimize perspective distortion and changes in lens focus, depth of field, and field of view.