Matrox Imaging Library (MIL) Tools
MIL X includes three tools for performing pattern recognition: Pattern Matching, Geometric Model Finder (GMF), and Advanced Geometric Matcher (AGM). These tools are primarily used to locate complex objects for guiding a gantry, stage, or robot, or for directing subsequent measurement operations.
The Pattern Matching tool is based on normalized grayscale correlation (NGC), a classical technique that finds a pattern by looking for a similar spatial distribution of intensity. A hierarchical search strategy lets this tool quickly and reliably locate a pattern, including multiple occurrences, that is translated and slightly rotated, with sub-pixel accuracy. The tool performs well when scene lighting changes uniformly, making it tolerant of gradually dimming illumination. A pattern can be trained manually or determined automatically for alignment. Search parameters can be manually adjusted and patterns can be manually edited to tailor performance.
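MIL's own API is not reproduced here. Purely as a sketch of the NGC technique the tool is based on, the following NumPy code scores a candidate location by subtracting means and dividing by norms, which is what makes the score invariant to a uniform lighting change. The exhaustive scan in `find_pattern` stands in for MIL's much faster hierarchical search; both function names are illustrative, not MIL calls.

```python
import numpy as np

def ncc(patch, template):
    # Normalized grayscale correlation: subtracting the means and dividing
    # by the norms makes the score invariant to uniform gain/offset changes
    # in scene brightness.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    if denom == 0:
        return 0.0
    return float((p * t).sum() / denom)

def find_pattern(image, template):
    # Brute-force scan over every offset (MIL uses a hierarchical, pyramid-
    # based search to avoid this cost). Returns the top-left offset of the
    # best match and its correlation score in [-1, 1].
    th, tw = template.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(image[y:y+th, x:x+tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Because the score is normalized, multiplying the whole scene by a gain and adding an offset (a uniform lighting change) leaves the best-match location unchanged.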
The GMF tool uses geometric features (e.g., contours) to find an object. The tool quickly and reliably finds multiple models—including multiple occurrences—that are translated, rotated, and/or scaled with sub-pixel accuracy. GMF locates an object that is partially missing and continues to perform when a scene is subject to uneven changes in illumination, thus relaxing lighting requirements. A model can be trained manually from an image, obtained from a CAD file, or determined automatically for alignment.
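GMF's actual algorithm is proprietary; simply to illustrate why matching on geometric features tolerates uneven illumination and partially missing objects, here is a minimal chamfer-style sketch. Gradients respond to local intensity changes rather than absolute brightness, and each model edge point is scored independently, so occluded points only degrade the score gracefully. The functions `edge_points` and `chamfer_score`, and the gradient threshold, are illustrative assumptions, not MIL functionality.

```python
import numpy as np

def edge_points(image, thresh=0.4):
    # Simple gradient-magnitude edge extraction. Gradients measure local
    # intensity *change*, so these features survive uneven illumination
    # that would defeat a raw-intensity comparison.
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh)
    return np.column_stack([ys, xs])

def chamfer_score(model_pts, scene_pts, offset):
    # Mean distance from each shifted model edge point to its nearest
    # scene edge point; lower is better. Each point contributes
    # independently, so a partially missing object still scores well.
    shifted = model_pts + offset
    d = np.linalg.norm(shifted[:, None, :] - scene_pts[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

A search then amounts to minimizing `chamfer_score` over candidate offsets (and, in a full matcher, rotations and scales).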
A model can also be obtained from the Edge Finder tool, where the geometric features are defined by color boundaries and crests or ridges in addition to contours. Physical setup requirements are eased when GMF is used in conjunction with the Calibration tool, as models become independent of camera position. GMF parameters can be manually adjusted and models can be manually edited to tailor performance.
The new-generation AGM tool uses edges and related metrics to dependably locate one or more occurrences of a model that are subject to translation, slight rotation, minor scale differences, and partial occlusion. A model is defined from a single image or trained with user assistance from multiple images. The latter delivers greater robustness when a model varies naturally and is searched for in a cluttered scene.
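AGM's training procedure is not documented here; as one hypothetical illustration of why training from multiple images helps with naturally varying models, the sketch below keeps only edge pixels that recur across most training images, discarding features that appear in just one (noise, texture, clutter). The function name and the `min_frac` threshold are assumptions, not MIL behavior.

```python
import numpy as np

def stable_edges(edge_maps, min_frac=0.8):
    # edge_maps: list of boolean arrays (one edge map per training image).
    # Keep only pixels flagged as edges in at least min_frac of the images,
    # so features that vary from sample to sample are excluded from the
    # model and cannot cause spurious matches in a cluttered scene.
    stack = np.stack(edge_maps).astype(float)
    return stack.mean(axis=0) >= min_frac
```

A model built this way contains only the edges common to all samples, which is what makes it more dependable than a model taken from a single, possibly atypical, image.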