Cite as: Ying Li, Xi-Long Liu, De Xu and Da-Peng Zhang. Orientation Measurement for Objects with Planar Surface Based on Monocular Microscopic Vision. International Journal of Automation and Computing, vol. 17, no. 2, pp. 247-256, 2020. doi: 10.1007/s11633-019-1202-y

Orientation Measurement for Objects with Planar Surface Based on Monocular Microscopic Vision

Author Biography:
• Ying Li received the B. Sc. degree in automation from North China Electric Power University (Baoding), China in 2016. He is a Ph. D. candidate in control science and engineering at the Institute of Automation, Chinese Academy of Sciences (IACAS), China. His research interests include visual measurement, visual control, micro-assembly and machine learning. E-mail: liying2016@ia.ac.cn ORCID iD: 0000-0002-0213-9247

    Xi-Long Liu received the B. Sc. degree in electrical engineering and automation from Beijing Jiaotong University, China in 2009, and the Ph. D. degree in control science and engineering from Institute of Automation, Chinese Academy of Sciences (IACAS), China in 2014. He is an associate professor at the Institute of Automation, Chinese Academy of Sciences (IACAS), China. His research interests include image processing, visual measurement and service robot. E-mail: xilong.liu@ia.ac.cn

    De Xu received the B. Sc. and M. Sc. degrees in control science and engineering from Shandong University of Technology, China in 1985 and 1990, respectively, and the Ph. D. degree in control science and engineering from Zhejiang University, China in 2001. He is a professor at the Institute of Automation, Chinese Academy of Sciences (IACAS), China. His research interests include visual measurement, visual control, intelligent control, visual positioning, microscopic vision, and micro-assembly. E-mail: de.xu@ia.ac.cn (Corresponding author) ORCID iD: 0000-0002-7221-1654

    Da-Peng Zhang received the B. Sc. and M. Sc. degrees in mechatronic engineering from Hebei University of Technology, China in 2003 and 2006, respectively, and the Ph. D. degree in mechatronic engineering from the Beijing University of Aeronautics and Astronautics, China in 2011. He is an associate professor at the Institute of Automation, Chinese Academy of Sciences (IACAS), China. His research interests include visual control, micro-assembly and medical robot. E-mail: dapeng.zhang@ia.ac.cn

  • Received: 2019-04-16
  • Accepted: 2019-09-17
  • Published Online: 2019-10-31


Abstract: Orientation measurement of objects is vital in micro assembly. In this paper, we present a novel method based on monocular microscopic vision for 3-D orientation measurement of objects with planar surfaces. The proposed method measures the orientation of the object without calibrating the intrinsic parameters of the microscopic camera. First, the orientation of the object is measured with an analytical computation based on feature points. The results of the analytical computation are coarse because the information about the feature points is not fully used. To improve the precision, the orientation measurement is converted into an optimization process based on the relationship between deviations in image space and in Cartesian space under microscopic vision. The results of the analytical computation are used as the initial values of the optimization process. The optimized variables are the three rotational angles of the object and the pixel equivalent coefficient. The objective of the optimization process is to minimize the coordinate differences of the feature points on the object. The precision of the orientation measurement is boosted effectively. Experimental and comparative results validate the effectiveness of the proposed methods.

• In recent years, micro-manipulation has attracted more and more attention and has been widely used in micro-electro-mechanical systems (MEMS), medical science, and biotechnology[1-5]. The size of the manipulated objects ranges from tens of microns to several millimeters. Pose measurement is an indispensable procedure in many robotic systems and is quite important to the success of missions[6-9]. The measurement accuracy determines the success rate of manipulation.

Many methods have been proposed for pose measurement, such as laser-based methods[8] and inertial measurement unit (IMU) based methods[10]. However, these methods are either contact-based or of low precision and cannot meet the requirement of high precision in micro-manipulation tasks. Vision is a powerful tool for pose measurement: it is convenient, precise, intuitive and low cost. Vision-based pose measurement methods can be categorized into two cases: stereo vision methods and monocular vision methods. Stereo vision systems consist of more than one camera, and the spatial information can be recovered with geometric constraints between different views. Elementary geometric constraints of multiple views have been studied for many years. Recently, some novel methods have been proposed which focus on feature improvement and more intelligent fusion algorithms. In [11], the information from multiple cameras is fused with a Kalman-based method for pose measurement; the method achieves higher accuracy and is robust to camera motion and image occlusion. Because there are plenty of uncertainties, pose measurement results from different cameras differ, and a novel pose estimation mechanism is presented in [12] to handle the contaminated poses. Stereo vision has been widely used for its adaptability and flexibility. However, it has the shortcoming that the object should always be visible to multiple cameras[12]. In microscopic vision, because the depth of view (DOV) of the camera is small (about several hundred microns[4]), the common view field of multiple cameras is quite small. This means that the object cannot be kept in the common view field of multiple cameras since its size is much larger. Therefore, the stereo vision method of conventional vision systems cannot be used for pose measurement with microscopic cameras.

Besides stereo vision, monocular vision is another choice for pose measurement. Generally, some prior knowledge is necessary, such as the depth information of the object; the pose of the object can then be determined with geometric constraints. A tracking method based on an extended Kalman filter is proposed in [13] to fuse vision information and depth information; it needs neither manual pose initialization nor offline training. In [14], pose is estimated from a set of colored markers arranged in a known geometry, together with the measurements of an inertial unit, and an extended Kalman filter is employed to fuse the information from the vision and inertial units. In medical science, an inertial measurement unit (IMU) and monocular vision work together to measure pose in total hip replacement surgery[10], with an extended Kalman filter used for data fusion. A pose estimation method based on the perspective view of a single circle and line is proposed in [15]. However, the aforementioned methods always need extra sensors to provide the necessary information, which increases the cost of the measurement system or even decreases the measurement precision. Moreover, it can be hard to fuse auxiliary sensors, such as an IMU, into a microscopic system. Monocular vision is a viable way of achieving microscopic measurement, but the traditional methods cannot meet its requirements. Therefore, novel monocular methods are highly needed for microscopic measurement.

In micro assembly, many visual measurement methods have been developed, and the geometrical features and prior knowledge of objects have been widely used. For example, feature lines and points are useful marks for orientation and position measurement. In [4], the central points of two components are determined using the Hough transform and used for position measurement, while the edge lines of the components are detected with three microscopic cameras for orientation measurement. A novel method achieving efficient pose estimation of long cylindrical components is presented in [7], where multiple feature lines are extracted for relative pose measurement. A vision-based multi-probe micro-assembly system and its corresponding algorithms are developed in [5]; template matching approaches are employed to position the components, with the contour lines of the probe chosen as the template. In [16], a high-speed pick-and-place method is presented for cell manipulation using a two-fingered micro hand. A template matching method based on contour lines is used for locating the end effector, and the circular object is positioned with a contour detection method. In [17], a novel pose measurement method is proposed for a micro sphere based on microscopic vision; however, it can only measure two rotation angles of the object rather than three.

Therefore, geometrical and physical prior knowledge, including feature lines and points, is quite useful and effective for orientation measurement in micro assembly and micro manipulation. In industrial situations, the components are usually regular; if the prior knowledge of the components can be fully utilized, the precision of orientation measurement can be improved considerably. However, the calibration of the intrinsic parameters of the microscopic camera is difficult because of the small depth of view and narrow scope of view, and the common orientation measurement methods of conventional vision that rely on intrinsic parameters are not appropriate for measurement in microscopic vision. A novel method that is independent of the intrinsic parameters of the microscopic camera is highly needed. The orientation measurement in microscopic vision thus still faces challenges.

In this paper, we present a novel method based on monocular microscopic vision for 3-D orientation measurement of objects with planar surfaces. The method is independent of the intrinsic parameters of the microscopic camera, which are difficult to calibrate because of the small depth of view and narrow scope of view. The orientation of the object is first measured with an analytical computation based on feature points, using the relationships between the deviations in image space and in Cartesian space of the feature points under microscopic vision. Only the pixel equivalent coefficient, rather than the intrinsic parameters, is calibrated, which simplifies the calibration process dramatically. The results of the analytical computation are coarse because the information about the feature points is not fully utilized. To improve the precision of the orientation measurement, an optimization-based method is proposed. The objective of the optimization process is to minimize the coordinate differences of the feature points on the object, and the optimized variables are the three rotational angles and the pixel equivalent coefficient. The gradient descent method is adopted to optimize the objective function. To guarantee the effectiveness of the optimization, appropriate initial values are needed, and the results of the analytical computation are employed as the initial values. The precision of the orientation measurement is improved considerably.

The proposed orientation measurement method reduces the dependency on the intrinsic parameters of the microscopic camera, which simplifies the calibration process, and it converts the orientation measurement, a common measurement problem, into an optimization process. The rest of this paper is organized as follows. Section 2 introduces the object with a planar surface and establishes the coordinates. The vision-based orientation measurement method is detailed in Section 3. The optimization of the measurement results is given in Section 4. Section 5 presents the experiments and results. Finally, the paper is concluded in Section 6.

• The object with a planar surface is shown in Fig. 1. It is a thin object with a special structure including holes and arcs. Its external diameter and thickness are about 11 mm and 0.5 mm, respectively. There are 15 micro holes evenly distributed on the object, each with a diameter of about 400 μm. The task is to measure the 3-D orientation of the object.

      Figure 1.  Object with planar surface

    • The structure of the designed measurement system is shown in Fig. 2. It consists of a 4 degree-of-freedom (DOF) adjustment platform, a microscopic camera, a corresponding lighting system and a host computer.

      Figure 2.  System configuration and its coordinates

Coordinate {P} is established on the adjustment platform on which the object with the planar surface is mounted. The object can be moved along the Zp axis and rotated around the Xp, Yp and Zp axes. The camera coordinate {C} is established on the image plane of the microscopic camera, with its origin at the intersection of the optical axis and the image plane. The X and Y axes of {C} are aligned with the u and v axes of the image, and the Z axis of {C} points from the origin toward the object. The object coordinate {S} is established on the object, and {W} is the world coordinate.

• In this section, an orientation measurement approach based on analytical computation is proposed. In microscopic vision, precise intrinsic parameters of the microscopic camera are difficult to obtain because of the small depth of view and narrow scope of view. In this method, only the pixel equivalent coefficient needs to be calibrated, which is much easier than calibrating the intrinsic parameters. The results obtained with the method of this section may be imprecise; they are employed as initial values of the optimization method detailed in Section 4. To determine the feature points of the object, a feature point extraction method is first introduced.

• The centers of the small holes on the object are chosen as the feature points. The accuracy of the detection is important, as it is the basis of the orientation measurement in the following sections. The flow chart of the feature point extraction method is given in Fig. 3.

      Figure 3.  Flow chart of feature points extraction method

The edge points of the object are extracted with the Canny operator. Then the coarse positions of the small holes are obtained with the Hough transformation. However, not all the small holes can be extracted because of noise, and some extracted circles are not small holes at all. The results are therefore filtered with prior knowledge of the size of the small holes. In this way, m small holes are obtained, whose centers are denoted as Oi, i = 1, 2, ···, m, where i denotes the i-th hole. An ellipse is fitted to the centers of the m holes, and the ellipse′s center is denoted as Pb. The coarse centers of the other n−m small holes can be determined from Pb and the centers of the m holes, since all holes are evenly distributed on the ellipse. Then the edge points of each hole are detected along the radial direction. The micro holes are finally fitted using the edge points, and their centers are obtained.
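As a concrete illustration, the pipeline above maps directly onto standard image-processing primitives. The following Python sketch (OpenCV and NumPy assumed; all thresholds and radius bounds are illustrative placeholders, not the values of the actual setup) mirrors the flow of Fig. 3:

```python
import cv2
import numpy as np

def extract_hole_centers(gray, r_min=35, r_max=55):
    """Coarse-to-fine micro-hole extraction following Fig. 3.
    r_min/r_max encode the size prior of the holes in pixels
    (illustrative values only)."""
    # Coarse hole candidates via the Hough transform (Canny edge
    # detection runs internally; param1 is its upper threshold).
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * r_min,
        param1=150, param2=30, minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None
    centers = circles[0, :, :2]            # m detected centers O_i

    # Fit an ellipse through the m centers; its center P_b plus the
    # even angular distribution of the holes predicts the coarse
    # centers of the n - m holes that Hough missed.
    ellipse = cv2.fitEllipse(centers.astype(np.float32))
    p_b = ellipse[0]

    # For each hole, edge points would then be re-detected along the
    # radial direction from its coarse center and a circle fitted to
    # them to refine the center (omitted here for brevity).
    return centers, p_b
```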

• At the benchmark state, the normal vector of the object with the planar surface is parallel to the Zc axis. The coordinates in {S} of several points on the upper surface of the object are denoted as Pi = [xi, yi, 0]T, i = 1, 2, ···, n, and the corresponding image points as pi = [ui, vi]T. It is assumed that the object is rotated around the Zw, Yw and Xw axes by γ, $ \varphi$ and θ serially from the benchmark state to the measurement state. The rotation matrix R can be represented as (1).

      $\begin{split} R\; & = {R_x}{R_y}{R_z} = \left[ {\begin{array}{*{20}{c}} 1&0&0 \\ 0&{\cos \theta }&{ - \sin \theta } \\ 0&{\sin \theta }&{\cos \theta } \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\cos \varphi }&0&{\sin \varphi } \\ 0&1&0 \\ { - \sin \varphi }&0&{\cos \varphi } \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\cos \gamma }&{ - \sin \gamma }&0 \\ {\sin \gamma }&{\cos \gamma }&0 \\ 0&0&1 \end{array}} \right]= \\ & \quad\left[ {\begin{array}{*{20}{c}} {\cos \gamma \cos \varphi }&{ - \cos \varphi \sin \gamma }&{\sin \varphi } \\ {\cos \theta \sin \gamma + \cos \gamma \sin \varphi \sin \theta }&{\cos \gamma \cos \theta - \sin \gamma \sin \varphi \sin \theta }&{ - \cos \varphi \sin \theta } \\ {\sin \gamma \sin \theta - \cos \gamma \cos \theta \sin \varphi }&{\cos \gamma \sin \theta + \cos \theta \sin \gamma \sin \varphi }&{\cos \varphi \cos \theta } \end{array}} \right] \end{split} $

      (1)

      where Rx, Ry and Rz are the rotation matrices around Xw, Yw and Zw axes, respectively.
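For a quick numerical sanity check of the closed-form entries in (1), R can be rebuilt from its factors; a minimal NumPy sketch:

```python
import numpy as np

def rotation_matrix(theta, phi, gamma):
    """R = Rx(theta) @ Ry(phi) @ Rz(gamma) as in (1); angles in radians."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz
```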

Pri = [xri, yri, zri]T is the point corresponding to Pi at the measurement state in {C}, and pri = [uri, vri]T is the image point of Pri, determined with the method given in Section 3.1. The relationship between Pi and Pri can be represented as

      $\left[ \begin{array}{l} {x_{ri}} \\ {y_{ri}} \\ {z_{ri}} \\ 1 \\ \end{array} \right] = \left[ {\begin{array}{*{20}{c}} R&t \\ 0&1 \end{array}} \right]\left[ \begin{array}{l} {x_i} \\ {y_i} \\ 0 \\ 1 \\ \end{array} \right]$

      (2)

      where t is the translation vector. The coordinate deviation of two points on the surface of object can be obtained as

      $\Delta {P_{ri}} = R\Delta {P_i}$

      (3)

where ∆Pri = Pr(i+1) − Pri = [∆xri, ∆yri, ∆zri]T, ∆Pi = P(i+1) − Pi = [∆xi, ∆yi, 0]T.

      uri and vri can be computed with (4)

      $\left[ {\begin{array}{*{20}{c}} {{u_{ri}}} \\ {{v_{ri}}} \\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{k_m}}&0&{{u_0}} \\ 0&{{k_m}}&{{v_0}} \\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} \dfrac{x_{ri}}{z_{ri}}\\ \dfrac{y_{ri}}{z_{ri}}\\ 1 \end{array}} \right]$

      (4)

      where km, u0 and v0 are constants.

In microscopic vision, zri varies within a small range as long as the object stays within the range where the image is clear. The deviation between the image points pri and pr(i+1) can then be obtained as (5).

      $\left\{ \begin{aligned} & \Delta {x_{ri}} = k\Delta {u_{ri}} \\ & \Delta {y_{ri}} = k\Delta {v_{ri}} \end{aligned} \right.$

      (5)

where ∆uri = ur(i+1) − uri, ∆vri = vr(i+1) − vri and k = $\dfrac{z_{ri}}{k_m} $.

      According to (1), (3) and (5), we have

      $(\cos \gamma \cos \varphi )\Delta {x_i} - (\cos \varphi \sin \gamma )\Delta {y_i} = k\Delta {u_r}_i$

      (6)

      $\begin{split} & (\cos \theta \sin \gamma + \cos \gamma \sin \varphi \sin \theta )\Delta {x_i} + \\ & \quad\quad (\cos \gamma \cos \theta - \sin \gamma \sin \varphi \sin \theta )\Delta {y_i} = k\Delta {v_{ri}}.\end{split} $

      (7)

The parameter k is the pixel equivalent coefficient and is calibrated offline. ∆xi and ∆yi are computed with (5) by replacing ∆uri and ∆vri with ∆ui and ∆vi, respectively; they are then employed as known variables. Theoretically, k is not constant for different zri, and it is therefore also employed as an optimized variable in the optimization method detailed in Section 4.

Another equation of the same form as (6) can be obtained with the new data ∆xi+1, ∆yi+1 and ∆ur(i+1). Dividing the two equations gives

      $\frac{{(\cos \gamma \cos \varphi )\Delta {x_i} - (\cos \varphi \sin \gamma )\Delta {y_i}}}{{(\cos \gamma \cos \varphi )\Delta {x_{i + 1}} - (\cos \varphi \sin \gamma )\Delta {y_{i + 1}}}} = \frac{{\Delta {u_{ri}}}}{{\Delta {u_{r\left( {i + 1} \right)}}}}.$

      (8)

      Formula (8) can be simplified as

      $ \begin{split} & \left( {\Delta {x_i}\Delta {u_{r\left( {i+ 1} \right)}} - \Delta {x_{i{\rm{ + 1}}}}\Delta {u_{ri}}} \right)\cos \gamma= \\ & \quad\quad \left( {\Delta {y_i}\Delta {u_{r\left( {i{\rm{ + 1}}} \right)}} - \Delta {y_{i{\rm{ + 1}}}}\Delta {u_{ri}}} \right)\sin \gamma .\end{split}$

      (9)

      γ can be computed as

      $\gamma = \arctan \left( {\frac{{\Delta {x_i}\Delta {u_{r\left( {i{\rm{ + 1}}} \right)}} - \Delta {x_{i{\rm{ + 1}}}}\Delta {u_{ri}}}}{{\Delta {y_i}\Delta {u_{r\left( {i{\rm{ + 1}}} \right)}} - \Delta {y_{i{\rm{ + 1}}}}\Delta {u_{ri}}}}} \right).$

      (10)

There are two solutions, γ1 and γ2, whose difference is π. The γ that yields the minimum error in (14) is selected. The angle $\varphi $ can then be obtained from (11) after substituting the known γ into (6). The sign of $\varphi $ is determined from the variation of the distance between two feature points.

      $\varphi = \arccos \left( {\frac{{k\Delta {u_{ri}}}}{{\cos \gamma \Delta {x_i} - \sin \gamma \Delta {y_i}}}} \right).$

      (11)

To obtain more accurate solutions of γ and $\varphi $, the values from multiple measurements are averaged.

      With the known γ and $\varphi $, (7) can be rewritten as

      $\begin{split} & \left( {\Delta {x_i}\cos \gamma \sin \varphi - \Delta {y_i}\sin \gamma \sin \varphi } \right)\sin \theta + \\ &\quad\quad \left( {\Delta {x_i}\sin \gamma + \Delta {y_i}\cos \gamma } \right)\cos \theta = k\Delta {v_{ri}}. \end{split} $

      (12)

One equation is obtained from each pair of feature points according to (12). With at least three feature points, sinθ and cosθ can be computed using the least squares method, and the angle θ is then obtained.
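The analytical stage can be summarized in a few lines of NumPy. The sketch below is a minimal illustration, not the authors' implementation: it computes γ from the first pair of deviations via (10), φ from (11), and θ by least squares over all deviations via (12); the π-ambiguity of γ, the sign of φ and the averaging over multiple pairs are handled as described above.

```python
import numpy as np

def analytic_orientation(dx, dy, du, dv, k):
    """Coarse (theta, phi, gamma) from feature-point deviations.
    dx, dy: Cartesian deviations (um); du, dv: image deviations (pixel);
    k: pixel equivalent coefficient (um/pixel); all arrays over i."""
    # (10): gamma from two neighbouring u-equations (first pair only;
    # averaging over all pairs gives a more accurate value). The two
    # solutions differing by pi are disambiguated later with (14).
    num = dx[0] * du[1] - dx[1] * du[0]
    den = dy[0] * du[1] - dy[1] * du[0]
    gamma = np.arctan(num / den)

    # (11): phi from the u-equation (6); the clip guards against noise
    # pushing the ratio slightly outside [-1, 1]. The sign of phi is
    # fixed separately from the change in feature-point distance.
    c = k * du[0] / (np.cos(gamma) * dx[0] - np.sin(gamma) * dy[0])
    phi = np.arccos(np.clip(c, -1.0, 1.0))

    # (12): one equation per deviation; solve for [sin(theta),
    # cos(theta)] by least squares, then recover theta.
    A = np.column_stack([
        dx * np.cos(gamma) * np.sin(phi) - dy * np.sin(gamma) * np.sin(phi),
        dx * np.sin(gamma) + dy * np.cos(gamma)])
    s, c_ = np.linalg.lstsq(A, k * dv, rcond=None)[0]
    theta = np.arctan2(s, c_)
    return theta, phi, gamma
```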

• The orientation measurement results from the method in Section 3 may not be optimal since the information of the feature points is not fully used. Therefore, an optimization-based method is needed, with the three angles γ, $ \varphi$ and θ as optimized variables. Besides, zri differs with different orientations, which is neglected in the method of Section 3. This means that the depths of the points Pri and Pr(i+1) in {C} are different, and it is inappropriate to keep the parameter k constant in (5). Therefore, k should also be optimized in the orientation measurement process.

• Define Eui and Evi as

      $\left\{ \begin{aligned} & {E_{ui}} = (\cos \gamma \cos \varphi )\Delta {x_i} - (\cos \varphi \sin \gamma )\Delta {y_i} - k\Delta {u_{ri}} \\ & {E_{vi}} = (\cos \theta \sin \gamma + \cos \gamma \sin \varphi \sin \theta )\Delta {x_i}+ \\ & \;\;\;\quad \quad (\cos \gamma \cos \theta - \sin \gamma \sin \varphi \sin \theta )\Delta {y_i} - k\Delta {v_{ri}}. \end{aligned} \right.$

      (13)

      The objective function of optimization is defined as (14).

      $E\left( {\theta ,\varphi ,\gamma ,k} \right) = \frac{1}{2}\sum\limits_{i = 1}^n {(E_{ui}^2 + E_{vi}^2)}.$

      (14)

The partial derivatives with respect to the four variables θ, $\varphi $, γ and k are given in (15)–(18).

      $\frac{{\partial E}}{{\partial k}} = \sum\limits_{i = 1}^n {\left( { - \Delta {u_{ri}}{E_{ui}} - \Delta {v_{ri}}{E_{vi}}} \right)} $

      (15)

where $\dfrac{{\partial E}}{{\partial k}} $ is the partial derivative with respect to k.

      $\begin{split} \frac{{\partial E}}{{\partial \theta }} =\; & \sum\limits_{i = 1}^n {{E_{vi}}\Delta {x_i}\left( { - \sin \gamma \sin \theta + \cos \gamma \sin \varphi \cos \theta } \right)}+ \\ & \sum\limits_{i = 1}^n {E_{vi}\Delta {y_i}\left( { - \cos \gamma \sin \theta - \sin \gamma \sin \varphi \cos \theta } \right)} \end{split} $

      (16)

where $ \dfrac{{\partial E}}{{\partial \theta }} $ is the partial derivative with respect to θ.

      $\begin{split} \frac{{\partial E}}{{\partial \varphi }} =\; & \sum\limits_{i = 1}^n {{E_{ui}}\left( { - \Delta {x_i}\cos \gamma \sin \varphi + \Delta {y_i}\sin \gamma \sin \varphi } \right)}+ \\ & \sum\limits_{i = 1}^n {{E_{vi}}\left( {\Delta {x_i}\cos \gamma \sin \theta \cos \varphi - \Delta {y_i}\sin \gamma \sin \theta \cos \varphi } \right)} \end{split} $

      (17)

where $ \dfrac{{\partial E}}{{\partial \varphi }} $ is the partial derivative with respect to $\varphi $.

      $\begin{split} \frac{{\partial E}}{{\partial \gamma }} =\;& \sum\limits_{i = {\rm{1}}}^n {{E_{ui}}\left( { - \Delta {x_i}\cos \varphi \sin \gamma - \Delta {y_i}\cos \varphi \cos \gamma } \right)}+ \\ & \sum\limits_{i = {\rm{1}}}^n {{E_{vi}}\Delta {x_i}\left( {\cos \theta \cos \gamma - \sin \varphi \sin \theta \sin \gamma } \right)}+ \\ & \sum\limits_{i = {\rm{1}}}^n {{E_{vi}}\Delta {y_i}\left( { - \cos \theta \sin \gamma - \sin \varphi \sin \theta \cos \gamma } \right)} \end{split} $

      (18)

where $ \dfrac{{\partial E}}{{\partial \gamma }} $ is the partial derivative with respect to γ.

The goal of the optimization is to minimize the coordinate differences of the feature points on the object, as given in (19).

      $\left[ {{\theta ^*},\;{\varphi ^*},\;{\gamma ^*},\;{k^*}} \right] = \mathop {\arg \min }\limits_{\theta ,\;\varphi ,\;\gamma ,\;k} E\left( {\theta ,\;\varphi ,\;\gamma ,\; k} \right)$

      (19)

      where ${{\theta ^*},\;{\varphi ^*},\;{\gamma ^*}} $ and ${k^*} $ represent the optimal orientation estimation of the object.
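The residuals (13), objective (14) and gradients (15)–(18) translate directly into vectorized NumPy. The sketch below assumes dx, dy, du, dv are arrays of the feature-point deviations defined in Section 3; it is a minimal transcription of the formulas, not the authors' code.

```python
import numpy as np

def residuals(theta, phi, gamma, k, dx, dy, du, dv):
    """E_ui, E_vi from (13) for all feature-point deviations."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Eu = cg * cp * dx - cp * sg * dy - k * du
    Ev = (ct * sg + cg * sp * st) * dx + (cg * ct - sg * sp * st) * dy - k * dv
    return Eu, Ev

def objective(theta, phi, gamma, k, dx, dy, du, dv):
    """E(theta, phi, gamma, k) from (14)."""
    Eu, Ev = residuals(theta, phi, gamma, k, dx, dy, du, dv)
    return 0.5 * np.sum(Eu ** 2 + Ev ** 2)

def gradients(theta, phi, gamma, k, dx, dy, du, dv):
    """Partial derivatives (15)-(18), ordered as theta, phi, gamma, k."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Eu, Ev = residuals(theta, phi, gamma, k, dx, dy, du, dv)
    dE_dk = np.sum(-du * Eu - dv * Ev)                                  # (15)
    dE_dtheta = np.sum(Ev * dx * (-sg * st + cg * sp * ct)
                       + Ev * dy * (-cg * st - sg * sp * ct))           # (16)
    dE_dphi = np.sum(Eu * (-dx * cg * sp + dy * sg * sp)
                     + Ev * (dx * cg * st * cp - dy * sg * st * cp))    # (17)
    dE_dgamma = np.sum(Eu * (-dx * cp * sg - dy * cp * cg)
                       + Ev * dx * (ct * cg - sp * st * sg)
                       + Ev * dy * (-ct * sg - sp * st * cg))           # (18)
    return np.array([dE_dtheta, dE_dphi, dE_dgamma, dE_dk])
```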

• Theoretically, the high dimension of the parameter space of θ, $\varphi $, γ and k means the search space is huge. Therefore, E(θ, $\varphi $, γ, k) may have multiple local minima, which is harmful to the optimization process. However, when the rotational angles and k are adjusted along the right direction with an appropriate step length, E(θ, $\varphi $, γ, k) decreases smoothly rather than fluctuating violently. Besides, suitable initial values are important, and the results of the method in Section 3 can be employed as the initial values of the optimization. Therefore, plenty of numerical optimization methods, such as the gradient descent method and the Gauss-Newton method, can be employed to optimize (19).

In this work, the gradient descent method is chosen as the optimization tool. The update rule for the optimized variables is given in (20).

      $\left[ {\begin{array}{*{20}{c}} {{\theta _{m + 1}}} \\ {{\varphi _{m + 1}}} \\ {{\gamma _{m + 1}}} \\ {{k_{m + 1}}} \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} {{\theta _m}} \\ {{\varphi _m}} \\ {{\gamma _m}} \\ {{k_m}} \end{array}} \right] - \eta \left[ {\begin{aligned} {\frac{{\partial E}}{{\partial \theta }}} \\ {\frac{{\partial E}}{{\partial \varphi }}} \\ {\frac{{\partial E}}{{\partial \gamma }}} \\ {\frac{{\partial E}}{{\partial k}}} \end{aligned}} \right]$

      (20)

where η represents the update step and m represents the iteration index.

The orientation of the object can then be computed with this optimization approach, which is summarized in Algorithm 1.

      Algorithm 1. Optimization with gradient descent method

      Input: θ0, $\varphi_0 $, γ0, k0, M and η

      m ←0

      While m < M do:

        Compute partial derivatives with (15)–(18) at (θm, $\varphi_m $, γm, km)

        mm+1

        Update [θm, $\varphi_m $, γm, km]T with (20)

        Compute E(θm, $\varphi_m $, γm, km) with (14)

      End while

      Output: θm, $\varphi_m $, γm and E(θm, $\varphi_m $, γm, km)
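Algorithm 1 then reduces to a plain descent loop. The following minimal transcription reuses the `objective` and `gradients` helpers sketched above; η and M default to the values reported in Section 5 (1 × 10⁻⁷ and 40 000), and the initial values are the analytical results.

```python
import numpy as np

def optimize_orientation(theta0, phi0, gamma0, k0, dx, dy, du, dv,
                         eta=1e-7, M=40000):
    """Algorithm 1: gradient descent from the analytical initial values."""
    x = np.array([theta0, phi0, gamma0, k0], dtype=float)
    for _ in range(M):
        g = gradients(*x, dx, dy, du, dv)    # partial derivatives (15)-(18)
        x = x - eta * g                      # update rule (20)
    return x, objective(*x, dx, dy, du, dv)  # optimum and final error (14)
```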

• The experimental system is shown in Fig. 4. The microscopic camera is a Baumer TXG50 equipped with a Navitar zoom lens with a magnification of 0.7× ~ 4.5×. The camera captures images at 15 frames per second with an image size of 2 448 × 2 050 pixels, and the depth of view is about 400 μm. The object with the planar surface is mounted on an adjustment platform, a KOHZU PK544PMB, which provides rotation around the Xp, Yp and Zp axes and translation along the Zp axis. The rotation resolution around the Xp, Yp and Zp axes is 0.002°, and the translation resolution is 1 μm. The microscopic camera is mounted on a moving platform, a KOHZU PK566-B, and can be moved along the Xc, Yc and Zc axes.

      Figure 4.  Experiment system

• There are 15 micro holes evenly distributed on the surface of the object. The centers of the holes are employed as feature points. The detection results obtained with the method detailed in Section 3.1 are given in Fig. 5.

      Figure 5.  Detection results of the micro holes when θ = 0°, $\varphi $ = 0°, γ = 0°

It can be seen that all the holes are detected with high precision by our method. The coefficient k is calibrated by adjusting the position of the object. The microscopic camera is moved along the Xc or Yc axis several times with a step length of 200 μm. At each step, the center of the big circle is determined with the method detailed in Section 3.1, and k is computed from the image deviations of the big circle′s center. The calibration result is given in (21).

      $k = 4.614\,0\;{\rm{\mu m}}/{\rm{pixel}}.$

      (21)
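The calibration of k thus reduces to a ratio of known Cartesian steps to measured pixel displacements. A minimal sketch with hypothetical center measurements (the values below are illustrative, chosen only to reproduce the order of magnitude of (21)):

```python
import numpy as np

# Hypothetical u-coordinates (pixels) of the big circle's center after
# successive 200 um camera steps along Xc; illustrative values only.
centers_u = np.array([1021.3, 1064.6, 1108.0, 1151.3, 1194.7])
step_um = 200.0

du = np.diff(centers_u)        # pixel displacement per step
k = np.mean(step_um / du)      # pixel equivalent coefficient, um/pixel
print(k)                       # ~4.61, cf. (21)
```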
• To measure the orientation of the object, the object should be adjusted to be parallel with the plane XcOcYc, which is employed as the benchmark state. The adjustment method is introduced first.

• The benchmark state cannot be obtained with a single adjustment, so an iterative method is presented. First, the object is adjusted to be roughly parallel with XcOcYc. ∆xi and ∆yi are computed with (5) by replacing ∆uri and ∆vri with ∆ui and ∆vi, respectively. Then the object is rotated by the angles θ = 2°, $\varphi $ = 2° and γ = 5°, and the orientation angles θ, $\varphi $ and γ of the object are obtained with (19). Next, the object is rotated by –θ, –$\varphi $ and –γ, and the resulting state is employed as the new benchmark state. The final accurate benchmark state is obtained by repeating this process several times. The rotational angle errors at each step are given in Fig. 6 and Table 1. It can be seen that the angle errors decrease gradually; after 5 adjustments, the angle errors are within 0.1°.

No. | Orientations of the object θ, $\varphi $, γ | True values θ, $\varphi $, γ
1 | 2.22, 2.22, 5.02 | 2, 2, 5
2 | 2.43, 2.09, 5.02 | 2, 2, 5
3 | 1.73, 2.01, 5.01 | 2, 2, 5
4 | 2.25, 2.16, 5.00 | 2, 2, 5
5 | 2.07, 1.91, 5.00 | 2, 2, 5

      Table 1.  Results of benchmark state determination process (°)

      Figure 6.  Angle errors in the benchmark state determination process

• After the benchmark state is determined, a set of experiments with different orientations is conducted to validate the efficiency of our method. The parameters of the optimization-based measurement method are as follows: η = 1 × 10–7, M = 40 000. The object at different orientations is shown in Fig. 7.

Figure 7.  Images of the object with different orientations, where the dotted line and solid line denote the rotational angle at the benchmark state and the measurement state, respectively. (a) θ = 0°, $\varphi $ = 0°, γ = 0°; (b) θ = 2°, $\varphi $ = 2°, γ = 5°; (c) θ = –2°, $\varphi $ = –2°, γ = –5°; (d) θ = 4°, $\varphi $ = 4°, γ = 10°.

∆xi and ∆yi are computed with (5) by replacing ∆uri and ∆vri with ∆ui and ∆vi, respectively. ∆ui and ∆vi are obtained from the detection results when θ = 0°, $\varphi $ = 0°, γ = 0°. The results of ∆xi and ∆yi are given in Table 2.

      No.xiyi
      1192.61 195.2
      2–279.41 174.9
      3–709.0981.1
      4–1 029.3634.5
      5–1 195.6193.9
      6–1 177.4–280.0
      7–980.9–709.8
      8–634.9–1 031.2
      9–191.9–1 194.9
      10279.8–1 177.2
      11708.9–980.4
      121 030.7–634.8
      131 193.7–191.2
      141 176.6279.7

Table 2.  Results of ∆xi and ∆yi, i = 1, 2, ···, 14 (μm)

The object with the planar surface is rotated around the Xw, Yw and Zw axes simultaneously. The rotational ranges are –6° to 6°, –6° to 6° and –15° to 15°, respectively, with rotational steps of 2°, 2° and 5°. The orientations of the object are first measured analytically with (10)–(12). These coarse results are then employed as the initial values of the optimization-based method proposed in Section 4, and the fine results are obtained with the optimization-based measurement method via (19).

The error E(θ, $\varphi $, γ, k) during the optimization process of an experiment with θ = 2°, $\varphi $ = 2°, γ = 5° is given in Fig. 8. The error decreases smoothly as the iteration number increases. The coarse results and the fine results are given in Table 3 and Fig. 9.

No. | Coarse results θ, $\varphi $, γ | Fine results θ, $\varphi $, γ | True values θ, $\varphi $, γ
1 | –4.56, –7.05, –14.99 | –5.53, –5.72, –14.97 | –6, –6, –15
2 | –3.07, –3.89, –10.11 | –3.48, –3.46, –10.10 | –4, –4, –10
3 | –2.76, –1.99, –4.99 | –2.25, –2.10, –4.98 | –2, –2, –5
4 | 1.21, 3.20, 5.09 | 1.95, 2.33, 5.08 | 2, 2, 5
5 | 2.23, 6.40, 10.12 | 3.69, 4.08, 10.10 | 4, 4, 10
6 | 7.21, 4.62, 15.12 | 5.82, 5.95, 15.10 | 6, 6, 15

      Table 3.  Results when the object is rotated around Xw, Yw and Zw axes (°)

      Figure 8.  Error E(θ, $\varphi $, γ, k) in the optimization process of an experiment with θ = 2°, $\varphi $ = 2°, γ = 5°

      Figure 9.  Angle errors when rotated around Xw, Yw and Zw axes

The coarse orientation results are obtained with the analytical computation, but the angle errors are still large, with a maximal error of about 2.5°. More precise orientation results are obtained with the optimization-based method, and the angle errors decrease considerably. In the optimization process, the errors decrease gradually as the step number increases. The average angle errors after optimization are 0.30°, 0.23° and 0.07° around the Xw, Yw and Zw axes, respectively, and the maximal angle errors are 0.52°, 0.54° and 0.10°, respectively. The angle errors around Xw and Yw are within limited ranges, and the errors around Zw are extremely small because the microscopic camera is very sensitive to rotation around Zw.

• To validate the effectiveness of our methods, a set of comparative experiments is conducted. The PnP (perspective-n-point) method is selected as the comparative method. The intrinsic parameters of the microscopic camera are first calibrated because they are needed by the PnP method.

The object is adjusted to the benchmark state with the method detailed in Section 5.3. Then the object is rotated around the Xw axis by θ. The intrinsic and extrinsic parameters are given as follows.

      $\left\{ \begin{aligned} & {M_{in}} = \left[ {\begin{array}{*{20}{c}} {{k_x}}&0&{{u_0}}&0 \\ 0&{{k_y}}&{{v_0}}&0 \\ 0&0&1&0 \end{array}} \right] \\ & {M_e} = \left[ {\begin{array}{*{20}{c}} 1&0&0&{{p_x}} \\ 0&{\cos \theta }&{ - \sin \theta }&{{p_y}} \\ 0&{\sin \theta }&{\cos \theta }&{{p_z}} \\ 0&0&0&1 \end{array}} \right] \end{aligned} \right.$

      (22)

      where Min and Me are the intrinsic parameters matrix and the extrinsic parameters matrix, respectively.

      The relationship between image point pri = [uri, vri]T and the point Pi = [xi, yi, 0]T in {S} on the object is given in (23).

      ${z_{ci}}\left[ \begin{array}{l} {u_{ri}} \\ {v_{ri}} \\ 1 \\ \end{array} \right] = {M_{in}}{M_e}\left[ \begin{array}{l} {x_i} \\ {y_i} \\ 0 \\ 1 \\ \end{array} \right]$

      (23)

      where zci is the coordinate along the Zc axis of Pi.

      Equation (23) can be rewritten as (24):

      $\left\{ \begin{aligned} & \frac{{\sin \theta }}{{{p_z}}}{u_{ri}}{y_i} + {u_{ri}} = \frac{{{k_x}}}{{{p_z}}}{x_i} + \frac{{{u_0}\sin \theta }}{{{p_z}}}{y_i} + \frac{{{k_x}}}{{{p_z}}}{p_x} + {u_0} \\ & \frac{{\sin \theta }}{{{p_z}}}{v_{ri}}{y_i} + {v_{ri}} = \left( {\frac{{{k_y}}}{{{p_z}}}\cos \theta + \frac{{{v_0}\sin \theta }}{{{p_z}}}} \right){y_i} + \frac{{{k_y}}}{{{p_z}}}{p_y} + {v_0}. \end{aligned} \right.$

      (24)

Another two equations can be obtained with the point Pi+1 on the object′s surface. Subtracting (24) from them, we have

      $ \left\{\!\begin{aligned} & \frac{{{k_x}}}{{{p_z}}}\Delta {x_i} \!+\! \frac{{{u_0}\sin \theta }}{{{p_z}}}\Delta {y_i}\! -\! \frac{{\sin \theta }}{{{p_z}}}\left( {{u_{i + 1}}{y_{w\left( {i + 1} \right)}} \!-\! {u_i}{y_{wi}}} \right)\! =\! \Delta {u_{ri}} \\ & \!\!\!\left( \!{\frac{{{k_y}}}{{{p_z}}}\cos \theta\! +\! \frac{{{v_0}\sin \theta }}{{{p_z}}}} \!\right)\! \Delta {y_i} \! \! - \! \! \frac{{\sin \theta }}{{{p_z}}}\!\left(\!{{v_{i + 1}}{y_{w\left( {i + 1}\!\right)}}\! \!-\!{v_i}{y_{wi}}} \right) \!\!=\! \Delta {v_{ri}} . \end{aligned} \right. $

      (25)

      Defining the parameters a, b, c and d as follows:

      $\left\{\!\begin{aligned} & a = \frac{{\sin \theta }}{{{p_z}}},\quad b = \frac{{{k_x}}}{{{p_z}}} \\ & c = \frac{{{u_0}\sin \theta }}{{{p_z}}},\quad d = \frac{{{k_y}}}{{{p_z}}}\cos \theta + \frac{{{v_0}\sin \theta }}{{{p_z}}}. \end{aligned} \right.$

      (26)

      Equation (25) can be rewritten as

      $\left\{ \begin{aligned} & b\Delta {x_i} + c\Delta {y_i} - a\left( {{u_{i + 1}}{y_{w\left( {i + 1} \right)}} - {u_i}{y_{wi}}} \right) = \Delta {u_{ri}} \\ & d\Delta {y_i} - a\left( {{v_{i + 1}}{y_{w\left( {i + 1} \right)}} - {v_i}{y_{wi}}} \right) = \Delta {v_{ri}}. \end{aligned} \right.$

      (27)

Two equations are obtained from each pair of feature points, so 2n–2 equations are obtained with n feature points. The four parameters a, b, c and d can be computed with the least squares method using at least three feature points. The rotational angle θ is given. The parameters pz, kx and u0 can then be obtained from a, b and c.
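For reference, the stacked system (27) is linear in (a, b, c, d) and solvable with one least-squares call. A minimal sketch (the inputs wu, wv denote the bracketed terms u_{i+1}y_{w(i+1)} − u_i y_{wi} and v_{i+1}y_{w(i+1)} − v_i y_{wi}, assumed precomputed):

```python
import numpy as np

def solve_abcd(dx, dy, du_r, dv_r, wu, wv):
    """Least-squares solution of (27) for the parameters defined in (26)."""
    n = len(dx)
    A = np.zeros((2 * n, 4))          # columns ordered as a, b, c, d
    A[:n, 0] = -wu                    # u-equations of (27)
    A[:n, 1] = dx
    A[:n, 2] = dy
    A[n:, 0] = -wv                    # v-equations of (27)
    A[n:, 3] = dy
    rhs = np.concatenate([du_r, dv_r])
    a, b, c, d = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return a, b, c, d
```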

When θ is 0°, (25) reduces to

      $\left\{ \begin{aligned} & \frac{{{k_x}}}{{{p_z}}}\Delta {x_i} = \Delta {u_{ri}} \\ & \frac{{{k_y}}}{{{p_z}}}\Delta {y_i} = \Delta {v_{ri}}. \end{aligned} \right.$

      (28)

The relationship between kx and ky can be obtained by dividing the two equations in (28). Then ky can be obtained from the known kx, and v0 is computed from d, ky and pz. The calibration results of the intrinsic parameters are as follows: kx = 37 363.4, ky = 37 355.9, u0 = 1 685.1 and v0 = 1 329.5.

In the comparative experiments, the object is rotated around the Xw, Yw and Zw axes simultaneously. The rotational ranges are –6° to 6°, –6° to 6° and –15° to 15°, respectively, with rotational steps of 2°, 2° and 5°. The object′s pose is computed via the PnP method from the points′ positions in Cartesian space and their corresponding image coordinates at the measurement state. The results of the PnP method are given in Fig. 10 and Table 4.

No. | Results of the PnP method θ, $\varphi $, γ | True values θ, $\varphi $, γ
1 | –2.04, –1.56, –14.54 | –6, –6, –15
2 | 4.51, 2.38, –9.63 | –4, –4, –10
3 | 0.83, 1.36, –4.80 | –2, –2, –5
4 | –1.59, –0.78, 4.97 | 2, 2, 5
5 | –2.39, –1.41, 10.04 | 4, 4, 10
6 | –3.10, –0.75, 15.35 | 6, 6, 15

      Table 4.  Results of comparative experiments when the object is rotated around Xw, Yw and Zw axes (°)

      Figure 10.  Orientation measurement results of PnP method

With the PnP method, the average angle errors around the Xw, Yw and Zw axes are 5.73°, 4.85° and 0.24°, respectively, and the maximal angle errors are 9.10°, 6.75° and 0.46°, respectively.

Obviously, our method outperforms the PnP method: the angle errors around the Xw, Yw and Zw axes with the PnP method are larger than those with our method. PnP methods, including their enhanced or improved versions, depend on the intrinsic parameters, which need elaborate calibration. Unlike in traditional vision, the precise calibration of the intrinsic parameters in microscopic vision is much more difficult because of the narrow scope of view and the small depth of view. This means that accurate intrinsic parameters are difficult to obtain, which degrades the performance of the PnP method. Our method, in contrast, does not need the intrinsic parameters and is therefore more effective and convenient.

• This paper reports our work on the development of orientation measurement methods for objects with planar surfaces, which do not depend on the intrinsic parameters of microscopic cameras. The feature points of the object are first extracted. The orientation of the object is measured with an analytical computation based on the relationships between the deviations in image space and in Cartesian space of the feature points. Only the pixel equivalent coefficient, rather than the intrinsic parameters, is needed, which is much easier to calibrate. To improve the precision of the orientation measurement, an optimization-based method is presented. The objective of the optimization process is to minimize the coordinate differences of the feature points on the surface of the object, and the optimized variables are the three rotational angles of the object and the pixel equivalent coefficient. The gradient descent method is employed to solve the optimization problem, with the solutions of the analytical approach employed as the initial values of the optimization. Experimental and comparative results validate the effectiveness of the proposed methods.

• This work was supported by the National Natural Science Foundation of China (Nos. 61733004 and 61873266).
