Volume 17 Number 2
March 2020
Article Contents
Zhao-Bing Kang, Wei Zou, Zheng Zhu and Hong-Xuan Ma. Smooth-optimal Adaptive Trajectory Tracking Using an Uncalibrated Fish-eye Camera. International Journal of Automation and Computing, vol. 17, no. 2, pp. 267-278, 2020. doi: 10.1007/s11633-019-1209-4

Smooth-optimal Adaptive Trajectory Tracking Using an Uncalibrated Fish-eye Camera

Author Biography:
  • Zhao-Bing Kang received the B. Eng. degree in mechanical engineering and automation from Dezhou University, China in 2008, and the M. Eng. degree in mechanical and electronic engineering from Harbin Institute of Technology, China in 2016. Currently, he is a Ph. D. candidate at the Institute of Automation, Chinese Academy of Sciences (CASIA), China. He is also with the University of Chinese Academy of Sciences, China. His research interests include visual servoing and robot localization and navigation. E-mail: kangzhaobing2017@ia.ac.cn. ORCID iD: 0000-0001-8239-5429

    Wei Zou received the B. Eng. degree in control science and engineering from Baotou University of Iron and Steel Technology, China in 1997, the M. Eng. degree in control science and engineering from Shandong University of Technology, China in 2000, and the Ph. D. degree in control science and engineering from the Institute of Automation, Chinese Academy of Sciences (CASIA), China in 2003. Currently, he is a professor at the Research Center of Precision Sensing and Control, CASIA. His research interests include intelligent robotics, visual servoing, and robot localization and navigation. E-mail: wei.zou@ia.ac.cn (Corresponding author). ORCID iD: 0000-0003-4215-5361

    Zheng Zhu received the B. Sc. degree from Zhengzhou University, China in 2014. He is currently a Ph. D. candidate at CASIA, China. He is also with the University of Chinese Academy of Sciences, China. His research interests include computer vision, deep learning and robotics. E-mail: zhuzheng2014@ia.ac.cn

    Hong-Xuan Ma received the B. Sc. degree from Central South University, China in 2016. He is currently a Ph. D. candidate at CASIA, China. He is also with the University of Chinese Academy of Sciences, China. His research interests include computer vision and robotics. E-mail: mahongxuan2016@ia.ac.cn

  • Received: 2019-06-12
  • Accepted: 2019-11-11
  • Published Online: 2019-12-27
  • [1] K. C. Koh, H. S. Cho. A smooth path tracking algorithm for wheeled mobile robots with dynamic constraints. Journal of Intelligent and Robotic Systems, 1999, 24(4): 367-385. doi: 10.1023/A:1008045202113
    [2] T. M. Howard, R. A. Knepper, A. Kelly. Constrained optimization path following of wheeled robots in natural terrain. Experimental Robotics, O. Khatib, V. Kumar, D. Rus, Eds., Berlin, Germany: Springer, pp. 343–352, 2008.
    [3] T. K. Lee, S. H. Baek, Y. H. Choi, S. Y. Oh. Smooth coverage path planning and control of mobile robots based on high-resolution grid map representation. Robotics and Autonomous Systems, 2011, 59(10): 801-812. doi: 10.1016/j.robot.2011.06.002
    [4] J. Chen, W. E. Dixon, M. Dawson, M. McIntyre. Homography-based visual servo tracking control of a wheeled mobile robot. IEEE Transactions on Robotics, 2006, 22(2): 406-415. doi: 10.1109/TRO.2006.862476
    [5] B. Q. Li, Y. C. Fang, X. B. Zhang. Visual servo regulation of wheeled mobile robots with an uncalibrated onboard camera. IEEE/ASME Transactions on Mechatronics, 2016, 21(5): 2330-2342. doi: 10.1109/TMECH.2015.2504098
    [6] T. C. Lee, C. Y. Tsai, K. T. Song. Fast parking control of mobile robots: A motion planning approach with experimental validation. IEEE Transactions on Control Systems Technology, 2004, 12(5): 661-676. doi: 10.1109/TCST.2004.826964
    [7] J. B. Coulaud, G. Campion, G. Bastin, M. de Wan. Stability analysis of a vision-based control design for an autonomous mobile robot. IEEE Transactions on Robotics, 2006, 22(5): 1062-1069. doi: 10.1109/TRO.2006.878934
    [8] J. Chen, B. X. Jia, K. X. Zhang. Trifocal tensor-based adaptive visual trajectory tracking control of mobile robots. IEEE Transactions on Cybernetics, 2017, 47(11): 3784-3798. doi: 10.1109/TCYB.2016.2582210
    [9] R. K. Mandava, P. R. Vundavilli. Near optimal PID controllers for the biped robot while walking on uneven terrains. International Journal of Automation and Computing, 2018, 15(6): 689-706. doi: 10.1007/s11633-018-1121-3
    [10] T. Fukao, H. Nakagawa, N. Adachi. Adaptive tracking control of a nonholonomic mobile robot. IEEE Transactions on Robotics and Automation, 2000, 16(5): 609-615. doi: 10.1109/70.880812
    [11] W. E. Dixon, D. M. Dawson, E. Zergeroglu, A. Behal. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2001, 31(3): 341-352. doi: 10.1109/3477.931519
    [12] X. W. Liang, H. S. Wang, W. D. Chen, D. J. Guo, T. Liu. Adaptive image-based trajectory tracking control of wheeled mobile robots with an uncalibrated fixed camera. IEEE Transactions on Control Systems Technology, 2015, 23(6): 2266-2282. doi: 10.1109/TCST.2015.2411627
    [13] X. B. Zhang, Y. C. Fang, B. Q. Li, J. Wang. Visual servoing of nonholonomic mobile robots with uncalibrated camera-to-robot parameters. IEEE Transactions on Industrial Electronics, 2017, 64(1): 390-400. doi: 10.1109/TIE.2016.2598526
    [14] H. M. Becerra, C. Sagues. A sliding mode control law for epipolar visual servoing of differential-drive robots. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Nice, France, pp. 3058–3063, 2008.
    [15] H. M. Becerra, G. Lopez-Nicolas, C. Sagues. A sliding-mode-control law for mobile robots based on epipolar visual servoing from three views. IEEE Transactions on Robotics, 2011, 27(1): 175-183. doi: 10.1109/TRO.2010.2091750
    [16] K. Wang, Y. H. Liu, L. Y. Li. Visual servoing trajectory tracking of nonholonomic mobile robots without direct position measurement. IEEE Transactions on Robotics, 2014, 30(4): 1026-1035. doi: 10.1109/TRO.2014.2317891
    [17] L. Y. Li, Y. H. Liu, T. J. Jiao, K. Wang, M. Fang. Adaptive trajectory tracking of nonholonomic mobile robots using vision-based position and velocity estimation. IEEE Transactions on Cybernetics, 2018, 48(2): 571-582. doi: 10.1109/TCYB.2016.2646719
    [18] H. S. Wang, Y. H. Liu, W. D. Chen. Uncalibrated visual tracking control without visual velocity. IEEE Transactions on Control Systems Technology, 2010, 18(6): 1359-1370. doi: 10.1109/TCST.2010.2041457
    [19] A. Saradagi, V. Muralidharan, V. Krishnan, S. Menta, A. D. Mahindrakar. Formation control and trajectory tracking of nonholonomic mobile robots. IEEE Transactions on Control Systems Technology, 2018, 26(6): 2250-2258. doi: 10.1109/TCST.2017.2749563
    [20] B. S. Park, S. J. Yoo, J. B. Park, Y. H. Choi. A simple adaptive control approach for trajectory tracking of electrically driven nonholonomic mobile robots. IEEE Transactions on Control Systems Technology, 2010, 18(5): 1199-1206. doi: 10.1109/TCST.2009.2034639
    [21] J. M. Yang, J. H. Kim. Sliding mode control for trajectory tracking of nonholonomic wheeled mobile robots. IEEE Transactions on Robotics and Automation, 1999, 15(3): 578-587. doi: 10.1109/70.768190
    [22] B. D. Ning, Q. L. Han. Prescribed finite-time consensus tracking for multiagent systems with nonholonomic chained-form dynamics. IEEE Transactions on Automatic Control, 2019, 64(4): 1686-1693. doi: 10.1109/TAC.2018.2852605
    [23] W. M. Bessa. Some remarks on the boundedness and convergence properties of smooth sliding mode controllers. International Journal of Automation and Computing, 2009, 6(2): 154-158. doi: 10.1007/s11633-009-0154-z
    [24] D. Verscheure, B. Demeulenaere, J. Swevers, J. de Schutter, M. Diehl. Time-optimal path tracking for robots: A convex optimization approach. IEEE Transactions on Automatic Control, 2009, 54(10): 2318-2327. doi: 10.1109/TAC.2009.2028959
    [25] F. Boe, B. Hannaford. Online improvement of speed and tracking performance on repetitive paths. IEEE Transactions on Control Systems Technology, 1998, 6(3): 350-358. doi: 10.1109/87.668036
    [26] W. C. Sun, S. Y. Tang, H. J. Gao, J. Zhao. Two time-scale tracking control of nonholonomic wheeled mobile robots. IEEE Transactions on Control Systems Technology, 2016, 24(6): 2059-2069. doi: 10.1109/TCST.2016.2519282
    [27] B. Tovar, R. Murrieta-Cid, S. M. LaValle. Distance-optimal navigation in an unknown environment without sensing distances. IEEE Transactions on Robotics, 2007, 23(3): 506-518. doi: 10.1109/TRO.2007.898962
    [28] M. Park, S. Lee, W. Han. Development of steering control system for autonomous vehicle using geometry-based path tracking algorithm. ETRI Journal, 2015, 37(3): 617-625. doi: 10.4218/etrij.15.0114.0123
    [29] H. Yang, V. Cocquempot, B. Jiang. Optimal fault-tolerant path-tracking control for 4WS4WD electric vehicles. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(1): 237-243. doi: 10.1109/TITS.2009.2033374
    [30] T. Ardeshiri, M. Norrlof, J. Lofberg, A. Hansson. Convex optimization approach for time-optimal path tracking of robots with speed dependent constraints. IFAC Proceedings Volumes, 2011, 44(1): 14648-14653. doi: 10.3182/20110828-6-IT-1002.01136
    [31] J. Kieffer, A. J. Cahill, M. R. James. Robust and accurate time-optimal path-tracking control for robot manipulators. IEEE Transactions on Robotics and Automation, 1997, 13(6): 880-890. doi: 10.1109/70.650167
    [32] Y. Z. Huang, X. B. Zhang, Y. C. Fang. Vision-based minimum-time planning of mobile robots with kinematic and visibility constraints. IFAC Proceedings Volumes, 2014, 47(3): 11878-11883. doi: 10.3182/20140824-6-ZA-1003.01791
    [33] G. Chesi, Y. S. Hung. Global path-planning for constrained and optimal visual servoing. IEEE Transactions on Robotics, 2007, 23(5): 1050-1060. doi: 10.1109/TRO.2007.903817
    [34] A. H. A. Hafez, A. K. Nelakanti, C. V. Jawahar. Path planning for visual servoing and navigation using convex optimization. International Journal of Robotics and Automation, 2015. doi: 10.2316/Journal.206.2015.3.206-4271
    [35] R. Saravanan, S. Ramabalan, C. Balamurugan, A. Subash. Evolutionary trajectory planning for an industrial robot. International Journal of Automation and Computing, 2010, 7(2): 190-198. doi: 10.1007/s11633-010-0190-8
    [36] F. A. Cheein. Intelligent sampling technique for path tracking controllers. IEEE Transactions on Control Systems Technology, 2016, 24(2): 747-755. doi: 10.1109/TCST.2015.2450180
    [37] K. Yang, S. Sukkarieh. An analytical continuous-curvature path-smoothing algorithm. IEEE Transactions on Robotics, 2010, 26(3): 561-568. doi: 10.1109/TRO.2010.2042990
    [38] D. F. Dementhon, L. S. Davis. Model-based object pose in 25 lines of code. International Journal of Computer Vision, 1995, 15(1–2): 123-141. doi: 10.1007/BF01450852
    [39] Z. B. Kang, W. Zou, H. X. Ma, Z. Zhu. Adaptive trajectory tracking of wheeled mobile robots based on a fish-eye camera. International Journal of Control, Automation and Systems, 2019, 17(9): 2297-2309. doi: 10.1007/s12555-019-0006-8
    [40] O. Amidi. Integrated Mobile Robot Control, Technical Report CMU-RI-TR-90-17, Carnegie Mellon University, Pittsburgh, USA, 1990.
    [41] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L. E. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, P. Mahoney. Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics, 2006, 23(9): 661-692. doi: 10.1002/rob.20147

Smooth-optimal Adaptive Trajectory Tracking Using an Uncalibrated Fish-eye Camera

Abstract: This paper presents a two-stage smooth-optimal trajectory tracking strategy. Different from existing methods, the optimal trajectory tracked point can be directly determined in an uncalibrated fish-eye image. In the first stage, an adaptive trajectory tracking controller is employed to drive the tracking error and the estimated error to an arbitrarily small neighborhood of zero. Afterwards, an online smooth-optimal trajectory tracking planner is proposed, which determines the tracked point that can be used to realize smooth motion control of the mobile robot. The tracked point in the uncalibrated image can be determined by minimizing a utility function that consists of both the velocity change and the sum of cross-track errors. The performance of our planner is compared with other tracked point determining methods in experiments by tracking a circular trajectory and an irregular trajectory. Experimental results show that our method has a good performance in both tracking accuracy and motion smoothness.

    • Uncalibrated visual trajectory tracking is a subproblem of visual servoing and has been widely studied. However, most works are concerned only with the global stability of the tracking system. Motion smoothness, which helps reduce wheel slippage and mechanical damage, has attracted little attention for controllers designed in image space. Although some researchers focus on smooth motion control of the mobile robot[1-3], their results can only be used in Cartesian space or in image space where the camera is calibrated. Smooth and precise control of a mobile robot using an uncalibrated camera is still challenging. To solve this problem, this paper proposes a smooth-optimal adaptive trajectory tracking controller for an uncalibrated fish-eye camera.

      Controllers designed for systems whose parameters can be specified have been proposed in [4-8]. In [4, 5, 8], a prerecorded image sequence is required to obtain feature points; point matching is then conducted to calculate the matrix containing the system parameters essential for controller design. Feedback control methods are employed in [6, 7, 9] to design trajectory tracking controllers based on the errors between the real and desired poses. These methods are effective for systems whose parameters can be acquired accurately. However, in most cases some system parameters cannot be obtained or are time-varying. For this reason, control algorithms that take uncertain system parameters into consideration have been proposed in [10-22]. In [14, 15, 21, 23], sliding mode control methods are employed to design controllers. When a singularity appears in the system, the original controller is replaced by a simple proportional linear velocity controller[15]. Different from the strategy in [15], the epipolar geometry is exploited in [14] to solve the singularity problem. Adaptive control methods are employed in [10-13, 16-20] to solve system parameter uncertainty problems. The adaptive trajectory tracking controllers proposed in [10, 20] fit non-vision systems that contain both kinematic and dynamic unknown parameters. Since objective image information is helpful in reducing the accumulated position error and improving the tracking accuracy, visual tracking controllers designed for monocular vision systems have been proposed in [11-13, 16-19]. These works take into account both mechanical and visual unknown parameters of the tracking system, such as the kinematic and dynamic parameters of the mobile robot and the camera intrinsic and extrinsic parameters. However, they do not consider the motion smoothness of the mobile robot, which may cause system vibration.

      To make the robot motion satisfy the requirements, many optimization methods have been proposed, which can be divided into two categories based on their strategies. One category takes maximizing efficiency as the optimization principle[24-34]. Time-optimal methods that do not consider motion constraints can be found in [24-28]. In [24], the time-optimal path tracking problem is transformed into a convex optimal control problem by a nonlinear change of variables. An algorithm that can locally adjust the speed based on an error tolerance has been proposed in [25]; in this algorithm, higher speeds become possible if the accuracy requirement is relaxed. Different from [25], a practical control law is designed in [26], which can both improve the transient performance and suppress the tracking errors. The theoretically optimal results of the above methods are usually not reachable in practice because of the mechanical and motion constraints of mobile robots. Therefore, kinematic and/or dynamic constraints[29-31, 35] and visibility constraints[32-34] are taken into consideration simultaneously in the optimization principle. The second category takes motion smoothness as the optimization principle[1-3, 36, 37], which aims to reduce wheel slippage or mechanical damage and guarantee smooth motion. Smoothness strategies based on minimizing the sum of the scaled cross-track error and the time required to attain the desired trajectory are proposed in [2, 36]. These two categories are only suitable for non-vision systems or for vision systems where the camera is calibrated. Although the optimal methods in [32-34] are eligible for vision-based path planning, their optimization strategies are conducted in Cartesian space. For a tracking controller designed in the image plane of an uncalibrated camera, motion smoothness strategies in Cartesian space become inapplicable. How to directly obtain the optimal tracked point along the trajectory in an uncalibrated image is still unsolved.

      To solve this problem, a precise and smooth visual trajectory tracking control method is proposed. Its novel features can be summarized as follows. First, a smooth-optimal tracking planner is proposed, which takes into account both the velocity change and the cross-track error. The novelty lies in that it can directly determine the optimal tracked point in an uncalibrated image. Second, the fish-eye camera is the only sensor used in the system, and the position and orientation of the mobile robot are estimated by the POSIT (pose from orthography and scaling with iteration)[38] algorithm. Finally, experiments are conducted to verify the effectiveness of our method.

      In summary, the main contributions of this paper are threefold: 1) A novel smooth-optimal trajectory tracking strategy is proposed, whose novelty lies in that it can be used on an uncalibrated fish-eye image. 2) A tracking planner in the image plane is proposed, which can determine the smooth-optimal tracked points online by a discrete search method. 3) Comparison results with other commonly used control methods demonstrate that our strategy can precisely and smoothly control the mobile robot.

      This paper is organized as follows. Section 2 introduces the problem to be solved. The image-based kinematic model is established in Section 3. Section 4 presents our controller and smooth-optimal tracking planner. Representative experimental results are given and analyzed in Section 5 to verify the effectiveness of our method. Finally, conclusions are summarized in Section 6.

    • The trajectory tracking system is shown in Fig. 1(a), where the world coordinate system $O_wx_wy_wz_w$, the body coordinate system $P_0x_by_bz_b$ and the camera coordinate system $O_cx_cy_cz_c$ are defined. The mobile robot is driven by two rear wheels in differential mode and balanced by a front caster wheel that has no influence on its kinematic properties. A fish-eye camera is mounted on the ceiling, parallel to the ground. Four black circles are fixed on the mobile robot, whose centers are noncoplanar and whose relative poses are known. During tracking, the robot is represented by the circle center $ P_0 $, which is located on the symmetry axis of the mobile robot at a distance $ d $ from the rear axis in the $ x_b $ direction. The coordinates of $ P_0 $ and of the rear-axis midpoint in the world coordinate system are denoted as $x_{0}^{w}(t)$, $ y_{0}^{w}(t) $, $ z_{0}^{w} $ and $ x(t) $, $ y(t) $, $ z $, respectively. The orientation of the mobile robot is denoted as $\theta_{bw}(t)$, which is the angle between the $x_b$-axis and the $x_w$-axis. With regard to the system configuration, we make the following assumptions.

      Figure 1.  (a) The coordinate frames in the tracking system. (b) The projection principle of the fish-eye camera.

      Assumption 1. The ground on which the robot moves is flat and can be regarded as a plane.

      Assumption 2. The four black circles on the robot can always be detected by the camera during tracking.

      Assumption 3. The camera focal length is known.

    • Based on the above description of the tracking system, the issue addressed in this paper can be expressed as follows.

      Problem. For a given image trajectory $ L\left({{y}}_{pd}(t)\right) $, the goal is to find a series of tracked points $ {{y}}_{pd}^*(t) $ on $ L\left({{y}}_{pd}(t)\right) $ and to design a controller $ [v, \omega]^{\rm T} $ such that, under the guidance of $ {{y}}_{pd}^*(t) $ and controlled by $ [v, \omega]^{\rm T}$, the robot approaches $ L\left({{y}}_{pd}(t)\right) $ and keeps moving along this trajectory regardless of its initial position. Namely,

      $ \lim\limits_{t \to \infty}\left({{y}}_{p}(t) - {{y}}_{pd}^*(t)\right) = 0$

      for any $ {{y}}_{p}(t) $, while neither $ v\to 0 $ nor $ \omega\to 0 $ occurs as $ t\to \infty $, where $ {{y}}_{p}(t) $ is the image coordinate of the robot representative point $ P_0 $. At the same time, the determination of the tracked point and the design of the controller must satisfy: 1) the controller can precisely control the mobile robot using an uncalibrated fish-eye camera; 2) the tracked point minimizes the sum of the velocity change and the cross-track error; 3) the controller outputs are kinematically reachable for the mobile robot.

    • The structure of our trajectory tracking system is illustrated by the block diagram in Fig. 2, and is mainly composed of a smooth-optimal planner, an adaptive controller and a pose estimator. After the tracked continuous trajectory is determined by image processing algorithms, or planned according to scene understanding in the image plane, it is discretized into a series of reference points. Within each tracking control cycle, the smooth-optimal planner determines an optimal tracked point by simultaneously minimizing the velocity change of the robot and the cross-track error in the image plane; this point is taken as the desired input of the adaptive controller so that the robot moves as smoothly as possible. The adaptive controller is designed as an uncalibrated image-based visual servoing controller that drives the robot to the desired point, while the intrinsic parameters of the fish-eye camera are estimated online by a reasonably designed adaptive law. The image Jacobian matrix of the controller requires the robot pose, which is provided by the pose estimator based on an extended version of the POSIT algorithm applicable to fish-eye images. This paper mainly focuses on the design of the smooth-optimal tracking planner.

      Figure 2.  Block diagram of the control system

    • According to the nonholonomic constraint, it can be obtained that

      $ \dot x(t)\sin\theta_{bw}(t)-\dot y(t)\cos\theta_{bw}(t) = 0 $

      (1)

      where $\dot x(t)$, $\dot y(t)$ and $\dot\theta_{bw}(t)$ are given by

      $\begin{split} & \dot x(t) = v\cos {\theta _{bw}}(t)\\ & \dot y(t) = v\sin {\theta _{bw}}(t)\\ & {{\dot \theta }_{bw}}(t) = \omega \end{split}$

      (2)

      where $v$ and $\omega$ denote the linear velocity and the angular velocity of the mobile robot respectively. Since the frame $ O_wx_wy_wz_w $ coincides with the frame $ O_cx_cy_cz_c $, the rotation matrix $^c{{R}_o}$ and the translation vector $^c{{T}_o}$ of the frame $ P_0x_by_bz_b $ relative to the frame $ O_cx_cy_cz_c $ can be expressed as

      $^c{{R}_o} = \left[ {\begin{array}{*{20}{c}} {\cos {\theta _{bw}}(t)}&{ - \sin {\theta _{bw}}(t)}&0\\ {\sin {\theta _{bw}}(t)}&{\cos {\theta _{bw}}(t)}&0\\ 0&0&1 \end{array}} \right],\qquad ^c{{T}_o} = \left[ {\begin{array}{*{20}{c}} {x_0^c(t)}\\ {y_0^c(t)}\\ {z_0^c} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {x_0^w(t)}\\ {y_0^w(t)}\\ {z_0^w} \end{array}} \right]$

      where $ x_{0}^{c}(t) $, $ y_{0}^{c}(t) $ and $ z_{0}^{c} $ denote the coordinates of $ P_0 $ in the camera coordinate system.

      Given the camera focal length $ f $ and four noncoplanar points $ P_0,\cdots ,P_3 $ whose relative poses are known, $^c{{R}_o}$ and $^c{{T}_o}$ can be acquired by the POSIT algorithm using the following equations

      $\frac{f}{{z_0^c}}\overrightarrow {{P_0}{P_i}} \cdot \vec i = \frac{{1 + {\varepsilon _i}}}{{\cos \theta _{P_i}}}{u_{yi}} - \frac{1}{{\cos \theta _{P_0}}}{u_{y0}}$

      (3)

      $\frac{f}{{z_0^c}}\overrightarrow {{P_0}{P_i}} \cdot \vec j = \frac{{1 + {\varepsilon _i}}}{{\cos \theta _{P_i}}}{u_{xi}} - \frac{1}{{\cos \theta _{P_0}}}{u_{x0}}$

      (4)

      ${\xi _i} = \overrightarrow {{P_0}{P_i}} \cdot {I}$

      (5)

      ${\eta _i} = \overrightarrow {{P_0}{P_i}} \cdot {J}$

      (6)

      where $ i $ = 1,2,3, ${I} = \dfrac{f}{z_{0}^{c}}\cdot\vec{i}$, ${J} = \dfrac{f}{z_{0}^{c}}\cdot\vec{j}$, $ \vec{k} = \vec{i} \times \vec{j} $, $ \xi_{i} $ and $ \eta_{i} $ represent the right-hand sides of (3) and (4) respectively, and meanings of parameters in (3) and (4) are listed in Table 1.

      Parameters | Meanings
      $\overrightarrow {{P_0}{P_i}}$ | Vector in the frame $P_0x_by_bz_b$
      $\vec{i}$ | The first row of $^c{{R}_o}$
      $\vec{j}$ | The second row of $^c{{R}_o}$
      $\vec{k}$ | The third row of $^c{{R}_o}$
      $\theta_{P_0}$ | The incident angle between the line of sight for the point $P_0$ and the principal axis $z_c$ in Fig. 1(b)
      $\theta_{P_i}$ | Similar to the meaning of $\theta_{P_0}$
      $u_{x0}$ | Image row coordinate corresponding to $P_0$
      $u_{y0}$ | Image column coordinate corresponding to $P_0$
      $u_{xi}$ | Image row coordinate corresponding to $P_i$
      $u_{yi}$ | Image column coordinate corresponding to $P_i$
      $\varepsilon_{i}$ | ${\varepsilon _i} = \dfrac{1}{{z_0^c}}\overrightarrow {{P_0}{P_i}} \cdot \vec k$

      Table 1.  Meanings of parameters in (3) and (4)
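
      As an illustration of how (3)–(6) can be solved numerically, a minimal POSIT-style iteration is sketched below. This is not the authors' exact implementation: it assumes the incidence angles $\theta_{P_i}$ have already been computed from the pixel coordinates via the fish-eye model, and all function and variable names are illustrative.

        import numpy as np

        # A minimal sketch of the POSIT-style pose estimation in (3)-(6),
        # assuming the incidence angles theta_{P_i} are precomputed (Fig. 1(b)).
        # Names are illustrative; this is not the authors' exact implementation.
        def posit_pose(D, ux, uy, theta, f, n_iter=10):
            """D: (3, 3) array, object-frame vectors P0->Pi, one per row (i = 1..3).
            ux, uy: image row/column coordinates of P0..P3 (length-4 arrays).
            theta: incidence angles theta_{P0}..theta_{P3} (length-4 array).
            f: camera focal length (known by Assumption 3).
            Returns the rotation cRo and the depth z0 of P0."""
            eps = np.zeros(3)  # epsilon_i from Table 1, refined each iteration
            for _ in range(n_iter):
                # Right-hand sides of (3) and (4): corrected image measurements.
                xi = (1 + eps) / np.cos(theta[1:]) * uy[1:] - uy[0] / np.cos(theta[0])
                eta = (1 + eps) / np.cos(theta[1:]) * ux[1:] - ux[0] / np.cos(theta[0])
                # Solve D @ I = xi and D @ J = eta, where I = (f/z0) i, J = (f/z0) j.
                I = np.linalg.solve(D, xi)
                J = np.linalg.solve(D, eta)
                i_vec = I / np.linalg.norm(I)   # first row of cRo
                j_vec = J / np.linalg.norm(J)   # second row of cRo
                k_vec = np.cross(i_vec, j_vec)  # third row of cRo
                z0 = 2 * f / (np.linalg.norm(I) + np.linalg.norm(J))
                eps = D @ k_vec / z0            # update epsilon_i
            return np.vstack([i_vec, j_vec, k_vec]), z0

      Since the four circle centers are noncoplanar, the vectors $\overrightarrow{P_0P_i}$ span three dimensions and the linear solves above are well posed.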

      The coordinates of $P_0$ in the world coordinate system can be expressed as

      $\left[ {\begin{array}{*{20}{c}} {x_0^w(t)}\\ {y_0^w(t)}\\ {z_0^w} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {x(t) + d\cos {\theta _{bw}}(t)}\\ {y(t) + d\sin {\theta _{bw}}(t)}\\ {z - h} \end{array}} \right]$

      (7)

      where $ h $ is the height from the rear axis to $ P_0 $. The corresponding velocity of $P_0$ can be derived as

      $\left[ {\begin{array}{*{20}{c}} {\dot x_0^w}\\ {\dot y_0^w} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\cos {\theta _{bw}}(t)}&{ - d\sin {\theta _{bw}}(t)}\\ {\sin {\theta _{bw}}(t)}&{d\cos {\theta _{bw}}(t)} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} v\\ \omega \end{array}} \right].$

      (8)
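
      As a concrete illustration, (2) and (8) can be combined into a short simulation step. The following is a minimal sketch with illustrative names, using a simple Euler integration.

        import numpy as np

        # A sketch of the kinematic model: the velocity of the representative
        # point P0 from (8), followed by Euler integration of (2).
        def kinematics_step(x, y, theta_bw, v, w, d, dt):
            c, s = np.cos(theta_bw), np.sin(theta_bw)
            # Velocity of P0 from (8); the offset d from the rear axis enters
            # through the second column of the matrix.
            p0_dot = np.array([[c, -d * s],
                               [s,  d * c]]) @ np.array([v, w])
            # Unicycle model (2), integrated over one control period dt.
            x, y, theta_bw = x + v * c * dt, y + v * s * dt, theta_bw + w * dt
            return x, y, theta_bw, p0_dot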

      The kinematic model based on the fish-eye image is necessary to precisely control the mobile robot. Fig. 1(b) shows the projection behavior of the fish-eye camera, where $P_0$ is projected to $p_0$ in the image plane; $r$ is the distance between $p_0$ and the principal point $C$, and $\varphi$ is the angle between $\overrightarrow{Cp_0}$ and the $x_G$ axis. The position of the point $p_0$ in the image plane can be obtained as

      $\left[ {\begin{array}{*{20}{c}} {x_{P_0}}\\ {y_{P_0}} \end{array}} \right] = fr\left[ {\begin{array}{*{20}{c}} {\cos \varphi (t)}\\ {\sin \varphi (t)} \end{array}} \right]$

      (9)

      where $r = \sin\theta_{P_0}(t)$ is obtained from the fish-eye lens imaging projection model. The corresponding pixel coordinates $ u_{x_0} $ and $ u_{y_0} $ in the image can be derived as

      $\left[ {\begin{array}{*{20}{c}} {{u_{x_0}}}\\ {{u_{y_0}}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{k_x}}&0\\ 0&{{k_y}} \end{array}} \right]\sin {\theta _{P_0}}(t)\left[ {\begin{array}{*{20}{c}} {\cos \varphi (t)}\\ {\sin \varphi (t)} \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} {{u_{xc}}}\\ {{u_{yc}}} \end{array}} \right]$

      (10)

      where $k_x$, $k_y$, $u_{xc}$ and $u_{yc}$ denote uncalibrated intrinsic parameters of the fish-eye camera. $\theta_{P_0}(t)$ and $\varphi(t)$ can be calculated as follows:

      $\begin{split} & \sin {\theta _{P_0}}(t) = \frac{{\sqrt {x_0^c{{(t)}^2} + y_0^c{{(t)}^2}} }}{{\sqrt {x_0^c{{(t)}^2} + y_0^c{{(t)}^2} + z{{_0^c}^2}} }}\\ & \cos \varphi (t) = \frac{{x_0^c(t)}}{{\sqrt {x_0^c{{(t)}^2} + y_0^c{{(t)}^2}} }}.\end{split}$

      (11)
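
      For reference, (10) and (11) together map a camera-frame point directly to pixel coordinates. The following short sketch (with illustrative names) makes this computation explicit.

        import numpy as np

        # A sketch of the fish-eye projection (10)-(11): a camera-frame point
        # (x, y, z) is mapped to pixel coordinates.
        def fisheye_project(x, y, z, kx, ky, uxc, uyc):
            rho = np.hypot(x, y)                           # distance from the optical axis
            sin_theta = rho / np.sqrt(x**2 + y**2 + z**2)  # sin(theta_P0) from (11)
            cos_phi, sin_phi = x / rho, y / rho            # from (11); assumes rho > 0
            ux = kx * sin_theta * cos_phi + uxc            # (10)
            uy = ky * sin_theta * sin_phi + uyc
            return ux, uy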

      Differentiating (10) with respect to time and substituting (8) and (11) into the result, we obtain

      $\left[ {\begin{array}{*{20}{c}} {{{\dot u}_x}}\\ {{{\dot u}_y}} \end{array}} \right] = \frac{{{{M}_0}{{M}_1}}}{{{{(x_0^c{{(t)}^2} + y_0^c{{(t)}^2} + z{{_0^c}^2})}^{3/2}}}}\left[ {\begin{array}{*{20}{c}} v\\ \omega \end{array}} \right]$

      (12)

      where

      $\begin{split} & {{M}_0} = \left[ {\begin{array}{*{20}{c}} {{k_x}(y_0^c{{(t)}^2} + z{{_0^c}^2})}&{ - {k_x}x_0^c(t)y_0^c(t)}\\ { - {k_y}x_0^c(t)y_0^c(t)}&{{k_y}(x_0^c{{(t)}^2} + z{{_0^c}^2})} \end{array}} \right]\\ & {{M}_1} = \left[ {\begin{array}{*{20}{c}} {\cos {\theta _{bw}}(t)}&{ - d\sin {\theta _{bw}}(t)}\\ {\sin {\theta _{bw}}(t)}&{d\cos {\theta _{bw}}(t)} \end{array}} \right]. \end{split}$

      Equation (12) can be rewritten as

      $\left[ {\begin{array}{*{20}{c}} {{{\dot u}_x}}\\ {{{\dot u}_y}} \end{array}} \right] = {{J}_P}(t)\left[ {\begin{array}{*{20}{c}} v\\ \omega \end{array}} \right] = {N}\left( {^c{{T}_o},v,\omega ,{\theta _{bw}}(t)} \right){\rho }$

      (13)

      where ${J}_P(t)$ is the image Jacobian matrix, ${\rho } = {\left[ {\begin{array}{*{20}{c}}{{k_x},{k_y},{u_{xc}},{u_{yc}}}\end{array}} \right]^{\rm T}}$, and ${N}\left( {^c{{T}_o},v,\omega ,{\theta _{bw}}(t)} \right)$ is the regressor matrix whose parameters are all known.
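
      For illustration, the Jacobian in (12) can be transcribed directly. The following minimal sketch (illustrative names) computes $J_P(t)$; its inverse $J_P^{-1}(t)$ can then be obtained either from the closed form (18) below or numerically.

        import numpy as np

        # A sketch of the image Jacobian J_P(t) in (12), mapping (v, w) to the
        # pixel velocity of P0.
        def image_jacobian(x, y, z, theta_bw, d, kx, ky):
            M0 = np.array([[kx * (y**2 + z**2), -kx * x * y],
                           [-ky * x * y,         ky * (x**2 + z**2)]])
            c, s = np.cos(theta_bw), np.sin(theta_bw)
            M1 = np.array([[c, -d * s],
                           [s,  d * c]])
            return M0 @ M1 / (x**2 + y**2 + z**2) ** 1.5

        # J_P^{-1}(t), here simply via a numerical inverse rather than (18):
        # J_inv = np.linalg.inv(image_jacobian(x, y, z, theta_bw, d, kx, ky))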

    • In this part, an adaptive trajectory tracking controller is first given. Afterwards, a smooth-optimal planner considering both the velocity change and the sum of cross-track errors is proposed to guarantee motion smoothness of the robot during tracking.

    • At time $t$, the desired and real trajectory positions in the image are denoted as ${y}_{pd}(t)$ and ${y}_p(t)$ respectively. The image position tracking error can then be expressed as

      $ \Delta{{y}}_p(t) = {{y}}_p(t)-{{y}}_{pd}(t) .$

      (14)

      The image velocity error can be derived as

      $ \Delta \dot {{{y}}}_p(t) = \dot {{{y}}}_p(t)-\dot {{{y}}}_{pd}(t) .$

      (15)

      An error function can be determined as

      ${e}\left( {{t_j},t} \right) = \left[ {\begin{array}{*{20}{c}} {{u_{x0}}}\\ {{u_{y0}}} \end{array}} \right] - \left[ {\begin{array}{*{20}{c}} {{{\hat u}_{x0}}}\\ {{{\hat u}_{y0}}} \end{array}} \right]$

      (16)

      where ${e}\left(t_j,t\right)$ is the error vector between the real image position and the estimated image position of $ P_0 $ at time $t_j$, and $\begin{bmatrix}\hat{u}_{x0},\hat{u}_{y0}\end{bmatrix}^{\rm T}$ denotes the estimated pixel coordinate that can be expressed as

      $\left[ {\begin{array}{*{20}{c}} {{{\hat u}_{x_0}}}\\ {{{\hat u}_{y_0}}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{{\hat k}_x}}&0\\ 0&{{{\hat k}_y}} \end{array}} \right]\sin {\theta _{P_0}}\left[ {\begin{array}{*{20}{c}} {\cos \varphi }\\ {\sin \varphi } \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} {{{\hat u}_{xc}}}\\ {{{\hat u}_{yc}}} \end{array}} \right]$

      where $\hat{ \rho } = {\left[ {\begin{array}{*{20}{c}}{{{\hat k}_x},{{\hat k}_y},{{\hat u}_{xc}},{{\hat u}_{yc}}}\end{array}} \right]^{\rm T}}$ is the vector that contains all the estimated intrinsic parameters of the camera and is updated by a reasonably designed adaptive law. Substituting (10) and (11) into (16), we obtain

      ${e}\left( {{t_j},t} \right) = {{W}_e}\left( {^c{{T}_o}} \right)\Delta {\rho }$

      (17)

      where

      ${{W}_e}\left( {^c{{T}_o}} \right) = \left[\!\!\!{\begin{array}{*{20}{c}}{\dfrac{{x_0^c(t)}}{{\sqrt {x_0^c{{(t)}^2} + y_0^c{{(t)}^2} + z{{_0^c}^2}} }}}& 0& 1& 0\\0 & {\dfrac{{y_0^c(t)}}{{\sqrt {x_0^c{{(t)}^2} + y_0^c{{(t)}^2} + z{{_0^c}^2}} }}}& 0& 1\end{array}}\!\!\!\right]$ denotes the regressor matrix and

      $\Delta {\rho } = {\rho } - \hat{ \rho } = \left[ {\begin{array}{*{20}{c}}{{k_x} - {{\hat k}_x}}\\{{k_y} - {{\hat k}_y}}\\{{u_{xc}} - {{\hat u}_{xc}}}\\{{u_{yc}} - {{\hat u}_{yc}}}\end{array}} \right]$.

      The inverse of ${ J}_P(t)$ is

      ${J}_P^{ - 1}(t) = {\left( {x_0^c{{(t)}^2} + y_0^c{{(t)}^2} + z{{_0^c}^2}} \right)^{\textstyle\frac{3}{2}}}{M}_1^{ - 1}{M}_0^{ - 1}$

      (18)

      where

      $\begin{split} {M}_0^{ - 1} = \, & \dfrac{1}{{z{{_0^c}^2}\left( {x_0^c{{(t)}^2} + y_0^c{{(t)}^2} + z{{_0^c}^2}} \right)}}\times\\ &\left[ {\begin{array}{*{20}{c}} {\dfrac{{x_0^c{{(t)}^2} + z{{_0^c}^2}}}{{{k_x}}}}&{\dfrac{{x_0^c(t)\,y_0^c(t)}}{{{k_y}}}}\\ {\dfrac{{x_0^c(t)\,y_0^c(t)}}{{{k_x}}}}&{\dfrac{{y_0^c{{(t)}^2} + z{{_0^c}^2}}}{{{k_y}}}} \end{array}} \right]\\ {M}_1^{ - 1} = \, &\left[ {\begin{array}{*{20}{c}} {\cos {\theta _{bw}}(t)}&{\sin {\theta _{bw}}(t)}\\ { - \dfrac{{\sin {\theta _{bw}}(t)}}{d}}&{\dfrac{{\cos {\theta _{bw}}(t)}}{d}} \end{array}} \right]. \end{split}$

      The existence of $ {{J}}^{-1}_P(t) $ is proved in [39].

      According to [39], the tracking controller and the adaptive law for the online estimation of the camera intrinsic parameters are given below.

      $\left[ {\begin{array}{*{20}{c}} v\\ \omega \end{array}} \right] = {\hat{ D}_P}{{\dot y}_{pd}}(t) - {\hat{ D}_P}{{K}_P}\Delta {{y}_p}$

      (19)

      where $\hat {{D}}_P$ denotes the estimation of ${ J}_P^{-1}$, and ${K}_P = k_p{I}_{2\times 2}$ is a symmetric positive definite matrix.

      $\begin{split} \dot{\hat{ \rho }} =\, &{\Gamma ^{ - 1}}\{ {{N}^{\rm T}}\left( {^c{{T}_o},v,\omega ,{\theta _{bw}}(t)} \right){{K}_P}\Delta {{y}_p}- \\ & {W}_e^{\rm T}\left( {^c{{T}_o}} \right){{K}_e}{e}\left( {{t_j},t} \right)\} \end{split}$

      (20)

      where $\Gamma^{-1}\in {\bf R}^{4\times 4}$ and ${K}_e = k_e{I}_{2\times 2}$ denote symmetric positive definite matrices with corresponding dimensions. The stability of the control system can be seen in [39].

      Remark 1. The first term in the tracking controller (19) is a feedforward term that is necessary to efficiently solve the trajectory tracking problem, and the second term is the feedback term. The first term in the adaptive law (20) compensates for the error in the system caused by the estimated parameters, and the second term minimizes the estimated projection error in (16). Larger values of $ k_p $ and $ k_e $ make the mobile robot converge to the desired trajectory faster, but the controller outputs may exceed the dynamic limitations of the mobile robot. Based on our experimental results, $ 0.1 \leq k_p \leq 0.5 $ and $ 0.1 \leq k_e \leq 0.3 $ are reasonable intervals.
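
      For concreteness, one control cycle of (19) and (20) can be sketched as follows. The sketch assumes $K_P = k_pI_{2\times 2}$ and $K_e = k_eI_{2\times 2}$ as in the text, and that the regressor matrices and the estimate $\hat D_P$ are computed elsewhere from the current pose estimate; names are illustrative.

        import numpy as np

        # A sketch of one cycle of the adaptive controller (19) and the
        # adaptive law (20). D_hat (2x2) estimates J_P^{-1}; N (2x4) and
        # W_e (2x4) are the regressor matrices; e (2,) is the error in (16).
        def control_step(y_p, y_pd, y_pd_dot, D_hat, N, W_e, e, Gamma_inv,
                         kp=0.2, ke=0.2):
            dy = y_p - y_pd                             # tracking error (14)
            v_w = D_hat @ y_pd_dot - D_hat @ (kp * dy)  # controller (19)
            rho_hat_dot = Gamma_inv @ (N.T @ (kp * dy)  # adaptive law (20)
                                       - W_e.T @ (ke * e))
            return v_w, rho_hat_dot                     # [v, w] and d(rho_hat)/dt

      The returned $\dot{\hat{\rho}}$ is then integrated over the control period to update the intrinsic parameter estimates.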

    • According to (19), we can conclude that the tracked point is the decisive factor for motion smoothness. Our planner is inspired by the pure pursuit method[40] shown in Fig. 3(a), in which the tracked point is determined by a lookahead distance. The mobile robot tends to cut corners when the lookahead distance is too large, while a short lookahead distance often results in instability. The effect of the lookahead distance on the cross-track error and the motion instability is shown in Fig. 3(b): the cross-track error is positively correlated with the lookahead distance, while the motion instability is negatively correlated with it. There is a distance value that minimizes the utility function. Our planner therefore tries to find a tracked point on the image trajectory that minimizes a utility function consisting of the velocity change and the sum of cross-track errors. The planner can be described as follows:

      Figure 3.  The pure pursuit method and the impact of the lookahead distance on its performance

      $\begin{split} \arg \min F({{y}_{pd}}(t)) =\,& \left[ {\Delta v(t),\Delta \omega (t)} \right]{\left[ {\Delta v(t),\Delta \omega (t)} \right]^{\rm T}}+\\ & \beta \sum\limits_{{t_j} = 0}^t \Delta {y}_p^{\rm T}({t_j})\Delta {{y}_p}({t_j}) \end{split}$

      (21)

      ${\rm s.t.}\;\;v(0) = 0$

      (22)

      $\dot{v}(0) = 0$

      (23)

      $v_{\rm min}\le v(t) \le v_{\rm max}$

      (24)

      $\left| {\dot v(t)} \right| \le {{\dot v}_{\rm max}},\;\;t \ge 0$

      (25)

      where $\Delta v(t)$ and $\Delta \omega(t)$ denote the changes of the linear velocity and the angular velocity from time $t-\Delta t$ to $t$, $\beta$ is a weighting parameter that adjusts the relative importance of the cross-track error and motion smoothness, and $v_{\rm min}$ and $v_{\rm max}$ denote the minimum and maximum linear velocities that the mobile robot can reach. The first term on the right-hand side of (21) represents the motion smoothness, and the second term represents the accumulated cross-track errors. According to (19), we can get

      $\begin{split} \left[ {\begin{aligned} {\Delta v(t)}\\ {\Delta \omega (t)} \end{aligned}} \right] =\, & \left[ {\begin{aligned} {v(t)}\\ {\omega (t)} \end{aligned}} \right] - \left[ {\begin{aligned} {v\left( {t - \Delta t} \right)}\\ {\omega \left( {t - \Delta t} \right)} \end{aligned}} \right]=\\ & {{\hat{ D}}_P}(t){{{\dot y}}_{pd}}(t) - {{\hat{ D}}_P}(t){{K}_P}\Delta {{y}_p}(t)+\\ & {{\hat{ D}}_P}(t - \Delta t){{K}_P}\Delta {{y}_p}(t - \Delta t) -\\ & {{\hat{ D}}_P}(t - \Delta t){{{\dot y}}_{pd}}(t - \Delta t).\end{split}$

      (26)

      So (21) can be rewritten as

      $\begin{split} F({{y}_{pd}}(t)) =\, & \Delta v{(t)^2} + \Delta \omega {(t)^2} + \beta \sum\limits_{{t_j} = 0}^{t - \Delta t} \Delta {y}_p^{\rm T}({t_j})\Delta {{y}_p}({t_j}) +\\ & \beta \Delta {y}_p^{\rm T}(t)\Delta {{y}_p}(t).\end{split}$

      (27)

      According to (13), (14) and (26), all variables in (27) except ${y}_{pd}(t)$ can be detected or calculated; therefore, the optimal point can be determined from $\dfrac{\partial F}{\partial {y}_{pd}(t)} = 0$. Let

      $\hat{{{D}}}_{P}(t)^{\rm T}\hat{{{D}}}_{P}(t) = \left[ {\begin{array}{*{20}{c}} {{m_{11}}}&{{m_{12}}}\\ {{m_{21}}}&{{m_{22}}} \end{array}} \right]$

      $ \hat{{{D}}}_{P}(t-\Delta t)^{\rm T}\hat{{{D}}}_{P}(t) = \left[ {\begin{array}{*{20}{c}} {{n_{11}}}&{{n_{12}}}\\ {{n_{21}}}&{{n_{22}}} \end{array}} \right].$

      Substituting (14) into (27), we obtain

      $\begin{split} \frac{{\partial F}}{{\partial {u_{pd}}(t)}} =& - 2{m_{11}}\left( {{k_p}{{\dot u}_{pd}}(t) + {{\ddot u}_{pd}}(t)} \right)\left( {{u_p}(t) - {u_{pd}}(t)} \right)-\\ & 2{m_{12}}\left( {{k_p}{{\dot u}_{pd}}(t) + {{\ddot u}_{pd}}(t)} \right)\left( {{v_p}(t) - {v_{pd}}(t)} \right)+\\ & 2{n_{11}}{{\ddot u}_{pd}}(t)({u_p}(t - \Delta t) - {u_{pd}}(t - \Delta t)-\\ & \frac{{{{\dot u}_{pd}}(t - \Delta t)}}{{{k_p}}}) + 2{n_{21}}{{\ddot u}_{pd}}(t)({v_p}(t - \Delta t)-\\ & {v_{pd}}(t - \Delta t) - \frac{{{{\dot v}_{pd}}(t - \Delta t)}}{{{k_p}}})+\\ & 2{n_{11}}{{\dot u}_{pd}}(t)({k_p}\left( {{u_p}(t - \Delta t) - {u_{pd}}(t - \Delta t)} \right)-\\ & {{\dot u}_{pd}}(t \!-\! \Delta t)) \!+\! \frac{{2{{\ddot u}_{pd}}(t)}}{{{k_p}}}\left( {{m_{11}}{{\dot u}_{pd}}(t) \!+\! {m_{12}}{{\dot v}_{pd}}(t)} \right)\!+\\ & 2{n_{21}}{{\dot u}_{pd}}(t)({k_p}\left( {{v_p}(t - \Delta t) - {v_{pd}}(t - \Delta t)} \right)-\\ & {{\dot v}_{pd}}(t - \Delta t)) + 2{{\dot u}_{pd}}(t)\left( {{m_{11}}{{\ddot u}_{pd}}(t) + {m_{21}}{{\ddot v}_{pd}}(t)} \right) \end{split}$

      (28)

      $\begin{split} \frac{{\partial F}}{{\partial {v_{pd}}(t)}} = & - 2{m_{21}}\left( {{k_p}{{\dot v}_{pd}}(t) + {{\ddot v}_{pd}}(t)} \right)\left( {{u_p}(t) - {u_{pd}}(t)} \right)-\\ & 2{m_{22}}\left( {{k_p}{{\dot v}_{pd}}(t) + {{\ddot v}_{pd}}(t)} \right)\left( {{v_p}(t) - {v_{pd}}(t)} \right)+\\ & 2{n_{12}}{{\ddot v}_{pd}}(t)({u_p}(t - \Delta t) - {u_{pd}}(t - \Delta t)-\\ & \frac{{{{\dot u}_{pd}}(t - \Delta t)}}{{{k_p}}}) + 2{n_{22}}{{\ddot v}_{pd}}(t)({v_p}(t - \Delta t)-\\ & {v_{pd}}(t - \Delta t) - \frac{{{{\dot v}_{pd}}(t - \Delta t)}}{{{k_p}}})+\\ & 2{n_{12}}{{\dot v}_{pd}}(t)({k_p}\left( {{u_p}(t - \Delta t) - {u_{pd}}(t - \Delta t)} \right)-\\ & {{\dot u}_{pd}}(t \! - \!\Delta t))\! + \!\frac{{2{{\ddot v}_{pd}}(t)}}{{{k_p}}}\left( {{m_{21}}{{\dot u}_{pd}}(t)\! +\! {m_{22}}{{\dot v}_{pd}}(t)} \right)\!+\\ & 2{n_{22}}{{\dot v}_{pd}}(t)({k_p}\left( {{v_p}(t - \Delta t) - {v_{pd}}(t - \Delta t)} \right)-\\ & {{\dot v}_{pd}}(t - \Delta t)) + 2{{\dot v}_{pd}}(t)\left( {{m_{12}}{{\ddot u}_{pd}}(t) + {m_{22}}{{\ddot v}_{pd}}(t)} \right). \end{split}$

      (29)

      From (28) and (29), it can be seen that the accumulated cross-track error before time $t-\Delta t$ is eliminated. Therefore, the result is determined by the velocity change and the current cross-track error, and is not affected by the errors before $ t-\Delta t $. There are two difficulties in solving (28) and (29).

      1) $\dfrac{\partial F}{\partial u_{pd}(t)} = 0$ and $\dfrac{\partial F}{\partial v_{pd}(t)} = 0$ are both second-order nonlinear nonhomogeneous differential equations whose general solutions are complex. Therefore, it is difficult to judge whether a particular solution exists on the desired trajectory.

      2) Since the camera intrinsic parameters are unknown, the expression of the image trajectory cannot be acquired from the trajectory expression in the world frame. Therefore, numerical solutions in the image plane that satisfy both $\dfrac{\partial F}{\partial u_{pd}(t)} = 0$ and $\dfrac{\partial F}{\partial v_{pd}(t)} = 0$ are difficult to obtain.

      To solve this problem and find the optimal trajectory tracked point, we propose a discrete search method. Its main steps are shown in Algorithm 1.

      Algorithm 1. Main steps of choosing the optimal trajectory tracked point and the control process:

      Input: Desired image trajectory $ L\left({{y}}_{pd}(t)\right) $; relative positions of the points $ P_0,\cdots, P_3 $; threshold for choosing candidate trajectory points: $ threshold $

      Output: Linear velocity $ v $ and angular velocity $ \omega $

      Discretize the image trajectory into samples at intervals of two pixels along the tangential direction of the trajectory; initialize $ k_p = 0.2 $, $ k_e = 0.2 $ and the previous velocity command $ {\left[ {v\left( {t - \Delta t} \right),\omega \left( {t - \Delta t} \right)} \right]^{\rm T}} $

      repeat

        for each point $ {{y}}_{pd}^*(t) $ in the discretized image trajectory do

          Calculate $ \sqrt{\left(\dfrac{\partial F({{y}}_{pd}^*(t))}{\partial u_{pd}(t)}\right)^2+\left(\dfrac{\partial F({{y}}_{pd}^*(t))}{\partial v_{pd}(t)}\right)^2} $

          if the result $ \leq threshold $ then

            Add $ {{y}}_{pd}^*(t) $ to the candidate point vector

          end if

        end for

        for each point $ {{y}}^*(t) $ in the candidate point vector do

          Calculate $ F({{y}}^*(t)) $ according to (21)

          if $ F({{y}}^*(t)) < smallest\_value $ then

            $ smallest\_value = F({{y}}^*(t)) $, and take $ {{y}}^*(t) $ as the tracked point

          end if

        end for

        Detect the image coordinates of the points $ P_0,\cdots, P_3 $

        Calculate $ ^c{{R}_o} $ and $ ^c{{T}_o} $ according to (3)–(6)

        Calculate $ [v, \omega]^{\rm T} $, $ \left[ {\begin{array}{*{20}{c}} {{m_{11}}}&{{m_{12}}}\\ {{m_{21}}}&{{m_{22}}} \end{array}} \right] $ and $ \dot{\hat{\rho}} $ according to (19) and (20)

        Update $ \hat{\rho} $

      until the end of the trajectory
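
      The spirit of Algorithm 1 can be condensed into the following sketch. It skips the gradient-threshold prefilter and evaluates the utility directly over the discretized candidates; this is valid for picking the minimizer because the accumulated error before $t-\Delta t$ in (27) is identical for every candidate and can be dropped from the comparison. All names are illustrative.

        import numpy as np

        # A condensed sketch of the discrete search in Algorithm 1: evaluate the
        # utility (27) on every discretized candidate point and keep the
        # minimizer. The accumulated cross-track error before t - dt is the
        # same for all candidates, so it is omitted from the comparison.
        def choose_tracked_point(candidates, y_p, velocity_change, beta=400.0):
            """candidates: (N, 2) array of discretized trajectory points y_pd(t).
            y_p: current image position of P0.
            velocity_change: callable returning (dv, dw) for a candidate via (26)."""
            best_point, best_cost = None, np.inf
            for y_pd in candidates:
                dv, dw = velocity_change(y_pd)       # velocity change (26)
                err = y_p - y_pd                     # current cross-track error (14)
                cost = dv**2 + dw**2 + beta * (err @ err)  # utility (27)
                if cost < best_cost:
                    best_cost, best_point = cost, y_pd
            return best_point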

    • Experiments are conducted to show the performance of our smooth-optimal planner in tracking a circular trajectory and an irregular trajectory. We also compare the experimental results with those of the pure pursuit method and the Stanley method[41]. The pure pursuit method is introduced in [40], and the Stanley method chooses the trajectory point nearest to the robot as the tracked point.

    • The experimental system setup is shown in Fig. 4. In this system, the fish-eye camera is mounted on the ceiling with its optical axis perpendicular to the ground, and four black circles with different radii are fixed on the robot, whose centers $P_0$, $P_1$, $P_2$ and $P_3$ are noncoplanar. The coordinate systems are established according to Fig. 1(a), and the coordinates of these four points in the object coordinate system are (0, 0, 0), (0, 159 mm, 0), (–135 mm, 30 mm, 0) and (–135 mm, 172 mm, –60 mm). The distance $d$ from the rear axis to $P_0$ is 300 mm in the $x_b$ direction, and the height $h$ from the rear axis to $P_0$ is 450 mm. Without loss of generality, the initial values of the camera intrinsic parameters are set to $k_x = k_y = 350$ and $ u_{xc} = u_{yc} = 500 $. The controller gains are given by ${K}_P = 0.2{I}_{2\times 2}$, ${K}_e = 0.1{I}_{2\times 2}$ and $\Gamma^{-1} = 10^{-7}{I}_{4\times 4}$, and the weighting parameter $\beta$ is set to 400.

      Figure 4.  Experimental system setup

    • The radius of the circular trajectory is 700 mm, and to make the results more persuasive, the camera is located far away from the circle center. The Stanley method and the pure pursuit method with lookahead distances of 50, 100 and 150 pixels are employed for comparison with our method. The initial position and orientation of the mobile robot are set randomly but are the same for all methods at the start; during tracking, their values are estimated online by (3)–(6). The comparison results are shown in Fig. 5.

      Figure 5.  Circle trajectory experimental results (color versions of the figures in this paper are available online)

      Figs. 5(a)-5(d) show the desired and real trajectories, the tracking errors in the row and column directions, and the sum of cross-track errors, respectively. From Figs. 5(b)-5(c), it can be clearly seen that the tracking errors of our method are smaller than those of the pure pursuit method at all lookahead distances, and similar to those of the Stanley method. From Fig. 5(d), it can also be seen that the sum of cross-track errors of the Stanley method is slightly smaller than that of our method. The Stanley method aims to approach the trajectory as fast as possible, so it chooses the nearest trajectory point as the tracked point, which results in a smaller sum of cross-track errors but frequent velocity regulation and poor motion smoothness. Our method takes both the cross-track error and the velocity change into consideration and makes a compromise between the time required to attain the trajectory and motion smoothness, which results in a slightly higher sum of cross-track errors but lower velocity changes. This analysis is illustrated by Figs. 5(e) and 5(g)-5(h). Fig. 5(e) shows the linear velocity curves of the mobile robot during tracking. From the start position and the zoomed-in subfigure, we can see that the linear velocity change of the Stanley method is larger than that of our method. The curves of $ \sqrt{\Delta v^2+\Delta \omega ^2} $ and of the function $F$ are shown in Figs. 5(h) and 5(g), respectively, from which it can be seen that the Stanley method has higher values for these two evaluation functions during tracking, illustrating its worse smoothness compared with our method.

    • An irregular trajectory is also used to verify the performance of our method. The Stanley method and the pure pursuit method with lookahead distances of 60, 80 and 100 pixels are employed for comparison with our method. The comparison results are shown in Fig. 6.

      Figure 6.  Irregular curve trajectory experimental results

      Figs. 6(a)-6(d) show the desired and real trajectories, the tracking errors in the row and column directions, and the sum of cross-track errors, respectively. From Figs. 6(b)-6(c), it can be seen that the tracking errors of the pure pursuit method at all lookahead distances are larger than those of our method. From Fig. 6(c), it can be seen that the tracking errors of the Stanley method are nearly the same as those of our method except in the time interval [10 s, 20 s], when the mobile robot reaches the first curve of the trajectory. Since the Stanley method considers only the cross-track error and ignores the velocity change, its velocity change is larger than that of our method, as can be seen from the zoomed-in part of Fig. 6(g). Hence, the tracking errors of the Stanley method are larger than those of our method in the interval [10 s, 20 s] owing to the sudden change of its motion state. This also explains why the sum of cross-track errors and the function $F$ values of the Stanley method are larger than those of our method after about 13 s.

      From the experimental results, we can conclude that the smooth-optimal tracking planner proposed in this paper performs well in the precise and smooth control of the mobile robot. Especially for trajectories with a large radius of curvature, our method achieves more stable tracking than the Stanley method.

    • In this paper, a novel smooth-optimal visual trajectory tracking planner is proposed. An adaptive trajectory tracking controller for wheeled mobile robots is first given, which can drive the robot to the tracked point using only an uncalibrated fish-eye camera. The adaptive controller considers both the tracking errors and the parameter estimation errors. Afterwards, the smooth-optimal tracking planner is proposed, which is realized by minimizing a function that contains both the velocity change and the sum of cross-track errors. Based on this planner, the tracked point of the image trajectory can be directly determined in an uncalibrated fish-eye image. Our method is compared with the pure pursuit method and the Stanley method in experiments, and the experimental results show that it has good application prospects in indoor navigation owing to its versatility.

    • This work was supported by National Key Research and Development Program (No. 2018YFB1306303) and National Natural Science Foundation of China (No. 61773374).
