VMM: registration of images.

When the user measures targets of the same type, it would be convenient if the machine remembered the first target, compared the rest to it, and automatically found the edges of interest.

Image registration is employed to find the correspondence between the first target (the template) and the rest.

For VMM, the transform is clearly rigid, not even affine, so registration should be an easy task. Still, two types of targets must be considered:
(1) targets with rich texture, such as printed PCB boards and colored particles;
(2) targets with only edge information, such as plugs and small mechanical parts.

I'm now testing two strategies. The first, which I used in my MSc thesis to estimate camera motion, identifies landmark points, matches them, and then computes the affine transformation; the second is based on Chamfer Matching (Borgefors, 1988).
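To make the Chamfer Matching idea concrete, here is a minimal pure-Python sketch (my own toy code, not the 1988 algorithm verbatim): build a distance transform of the image's edge map, then score candidate poses of the template's edge points by the average distance-transform value underneath them. For brevity it searches only translations; the rigid case would add a rotation loop. All function names here are mine.

```python
from collections import deque

def distance_transform(edges, h, w):
    # BFS approximation of the chamfer distance transform: each cell
    # gets its 4-connected distance to the nearest edge pixel.
    INF = float("inf")
    dt = [[INF] * w for _ in range(h)]
    q = deque()
    for (y, x) in edges:
        dt[y][x] = 0
        q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dt[ny][nx] > dt[y][x] + 1:
                dt[ny][nx] = dt[y][x] + 1
                q.append((ny, nx))
    return dt

def chamfer_score(dt, template_edges, ty, tx):
    # Average distance-transform value under the shifted template edges;
    # lower means a better match.
    h, w = len(dt), len(dt[0])
    total, n = 0.0, 0
    for (y, x) in template_edges:
        y2, x2 = y + ty, x + tx
        if 0 <= y2 < h and 0 <= x2 < w:
            total += dt[y2][x2]
            n += 1
    return total / n if n else float("inf")

def best_translation(image_edges, template_edges, h, w, search):
    # Exhaustive search over a small translation window.
    dt = distance_transform(image_edges, h, w)
    best = min((chamfer_score(dt, template_edges, ty, tx), (ty, tx))
               for ty in range(-search, search + 1)
               for tx in range(-search, search + 1))
    return best[1]
```

Because the score only needs an edge map, this is exactly the kind of method that should survive on the second type of target, where texture-based landmarks fail.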

The first strategy proves very accurate on PCB board images, but performs poorly on the second type of target. The algorithm consists of:
(1) detecting landmarks (something like cvGoodFeaturesToTrack);
(2) matching the landmarks by neighborhood correlation;
(3) computing the affine transformation with iterated outlier removal.
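Step (3) can be sketched with NumPy as a plain least-squares fit wrapped in a trimming loop: fit the affine model, rank matches by residual, keep the best fraction, and refit. This is my own minimal version of the iterated-outlier-removal idea, not the exact code from the thesis; the function names and the keep-80% schedule are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine model dst ~= src @ A.T + t from matched landmarks.
    # src, dst: (N, 2) arrays of corresponding points. Returns the (3, 2)
    # stacked parameter matrix P = [A.T; t].
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])          # (N, 3) design matrix
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P

def fit_affine_robust(src, dst, rounds=3, keep=0.8):
    # Iterated outlier removal: fit, drop the worst matches, refit.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    idx = np.arange(len(src))
    for _ in range(rounds):
        P = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src[idx], np.ones((len(idx), 1))]) @ P
        resid = np.linalg.norm(pred - dst[idx], axis=1)
        order = np.argsort(resid)
        idx = idx[order[: max(3, int(keep * len(idx)))]]
    return fit_affine(src[idx], dst[idx])
```

One bad correlation match with a huge residual gets trimmed in the first round, so it cannot drag the final model; a RANSAC-style scheme would be the usual alternative when the outlier fraction is large.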
To speed up the computation, I run this on an image pyramid at different resolutions: first compute the affine model on the coarsest image, then localize step (2) according to that model on a finer image, and so on.
The average registration error (defined in many papers; it's not convenient to typeset formulas in a blog) decreases at each level of the pyramid. In the example below, the average error is 60, 53, and 26 for the layers of the hierarchical structure, respectively.
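Two small details of the coarse-to-fine scheme are worth writing down. When the model moves from one pyramid level to the next finer one (half the cell size), the linear part of the affine transform is unchanged and only the translation doubles. And one common definition of the average registration error, which I assume is the one meant here, is the mean Euclidean distance between the transformed source landmarks and their matched destination landmarks:

```python
import numpy as np

def upscale_affine(A, t, factor=2.0):
    # Map an affine model x' = A @ x + t estimated at a coarse pyramid
    # level to the next finer level: the linear part is scale-invariant,
    # the translation scales with the resolution.
    return np.asarray(A, float), np.asarray(t, float) * factor

def average_registration_error(A, t, src, dst):
    # Mean Euclidean distance between transformed source landmarks and
    # their matched destination landmarks.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    pred = src @ np.asarray(A, float).T + np.asarray(t, float)
    return float(np.linalg.norm(pred - dst, axis=1).mean())
```

With this definition the 60 / 53 / 26 figures below are in pixels of the respective pyramid level, so they are directly comparable only within a level.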

fig. layer 3 source image

fig. layer 3 dst image

fig. layer 2 dst image

fig. layer 2 source image

BTW: some papers prefer methods in an optical-flow fashion, but those ideas are unsuitable for VMM: the fundamental assumption that the image sequence is 'continuous' doesn't hold here. When a user places targets on the platform, even a small offset shifts the image a lot, so the source and target images are not spatially close at all. Optical-flow computation would then need a very large search window, which is undesirable in both time and accuracy.
