Development in High-accuracy Vision Pose Estimation Technology

A high-accuracy method for pose estimation based on rotation parameters has been proposed by IOE.

Vision-based estimation of a target's pose (position and attitude) is a key research frontier in optoelectronic precision measurement technology, and it plays an important role in space operations, industrial manufacturing, and robot navigation. In the space field in particular, the accuracy of target pose estimation is directly related to the success of a mission. At present, pose estimation technology for cooperative targets is very mature and has been widely used in industry, medicine, and space. However, most targets are non-cooperative, and the lack of prior information about them makes pose estimation a huge challenge.

Recently, with support from the National Natural Science Foundation of China and the Youth Innovation Promotion Association CAS, Dr. Rujin Zhao's research team proposed a high-accuracy method for pose estimation based on rotation parameters. The researchers first parameterized the rotation matrix with the Cayley-Gibbs-Rodrigues (CGR) representation, transforming the pose estimation problem into an optimization problem over the rotation parameters. Then, using the Gröbner basis method, the authors obtained the pose by optimizing the rotation parameters, realizing pose estimation for different numbers of points under different configurations. Compared with traditional methods, the new method is more versatile and achieves a higher level of accuracy. These research results are expected to be applied to the pose estimation of arbitrary targets in future space missions, and to extend to industrial manufacturing, medical assistance, and robot navigation.
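To make the approach concrete, below is a minimal Python sketch of the CGR rotation parameterization and of pose estimation cast as an optimization over the three rotation parameters plus the translation. It is illustrative only: it refines the parameters with a generic numerical least-squares solver rather than the authors' Gröbner-basis method, and the function names, synthetic data, and use of NumPy/SciPy are assumptions rather than the paper's implementation.

import numpy as np
from scipy.optimize import least_squares

def cgr_to_rotation(s):
    # Closed-form Cayley-Gibbs-Rodrigues (CGR) parameterization:
    # R = ((1 - s.s) I + 2 s s^T + 2 [s]_x) / (1 + s.s)
    s = np.asarray(s, dtype=float)
    sx = np.array([[0.0, -s[2], s[1]],
                   [s[2], 0.0, -s[0]],
                   [-s[1], s[0], 0.0]])  # skew-symmetric matrix [s]_x
    n = s @ s
    return ((1.0 - n) * np.eye(3) + 2.0 * np.outer(s, s) + 2.0 * sx) / (1.0 + n)

def reprojection_residuals(params, pts3d, pts2d):
    # Residuals between observed and predicted normalized image coordinates.
    s, t = params[:3], params[3:]
    cam = pts3d @ cgr_to_rotation(s).T + t   # world frame -> camera frame
    proj = cam[:, :2] / cam[:, 2:3]          # pinhole projection
    return (proj - pts2d).ravel()

# Hypothetical usage with synthetic 2D-3D correspondences:
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
s_true, t_true = np.array([0.1, -0.05, 0.2]), np.array([0.2, -0.1, 0.3])
cam = pts3d @ cgr_to_rotation(s_true).T + t_true
pts2d = cam[:, :2] / cam[:, 2:3]

fit = least_squares(reprojection_residuals, np.zeros(6), args=(pts3d, pts2d))
print(fit.x[:3], fit.x[3:])  # recovered CGR parameters and translation

Because the CGR parameters form an unconstrained 3-vector, the rotation constraint is satisfied by construction, which is what makes it possible to write the pose problem as a polynomial system amenable to Gröbner-basis techniques.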

Simulation and practical experiments verified the effectiveness and robustness of the method. Fig. 1 shows the errors of the compared methods in the simulation experiments, and Fig. 2 shows the pose estimation results on real images.
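For reference, the rotation and translation errors reported in such comparisons are commonly computed as in the sketch below (geodesic angle of the relative rotation, and relative translation error); the exact definitions used in the paper may differ.

import numpy as np

def rotation_error_deg(R_est, R_gt):
    # Geodesic angle (in degrees) of the relative rotation R_est @ R_gt^T.
    cos_theta = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def translation_error_pct(t_est, t_gt):
    # Translation error relative to the ground-truth norm, in percent.
    t_est, t_gt = np.asarray(t_est), np.asarray(t_gt)
    return 100.0 * np.linalg.norm(t_est - t_gt) / np.linalg.norm(t_gt)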

The research was published in Measurement (https://www.sciencedirect.com/science/article/pii/S026322411830099X).


Fig. 1. The mean rotation and translation errors of the compared methods in different configurations: (a) ordinary 3D case; (b) quasi-singular case; (c) planar case.


Fig. 2. Experiment results using real images. (The green “+” marks in the input images denote outliers, the red “+” marks denote inlier feature points, and the re-projections of the verified feature points using the estimated camera pose are marked with blue circles “o”.)

Contact

CAO Qiang

Institute of Optics and Electronics

Email: caoqiang@ioe.ac.cn 
