Citation: N. Hacene, B. Mendil. Behavior-based autonomous navigation and formation control of mobile robots in unknown cluttered dynamic environments with dynamic target tracking. International Journal of Automation and Computing. doi: 10.1007/s11633-020-1264-x

Behavior-based Autonomous Navigation and Formation Control of Mobile Robots in Unknown Cluttered Dynamic Environments with Dynamic Target Tracking

Author Biography:
  • Nacer Hacene received the B. Eng. degree in automatic control from University of Mohamed Khider, Algeria in 2010, the M. Eng. and Ph. D. degrees in automatic control and signal processing from University of Abderrahmane Mira, Algeria in 2014 and 2019, respectively. He is with Laboratory of Industrial Technology and Information (LITI), University of Bejaia, Algeria. He is currently an associate professor with Department of Automatics and Electromechanics, Ghardaia University, Algeria. His research interests include mobile robot control, artificial intelligence, metaheuristics and swarm robotics. E-mail: hacenenacer77@gmail.com (Corresponding author). ORCID iD: 0000-0002-8586-4590

    Boubekeur Mendil received the B. Eng., M. Eng. and Ph. D. degrees in industrial control from Setif University, Algeria in 1991, 1994 and 2002, respectively. Currently, he is a professor of robotics and automatic control with the Electrical Engineering Department, Abderrahmane Mira University, Algeria. He is the head of the Soft-computing Research Group, LTII Laboratory, at the same university. His research interests include mobile robots, soft-computing, and motion control. E-mail: bmendil@yahoo.fr

  • Received: 2020-01-17
  • Accepted: 2020-10-19
  • Published Online: 2021-03-01
  • Abstract: While different species in nature have safely solved the problem of navigation in a dynamic environment, this remains a challenging task for researchers around the world. The paper addresses the problem of autonomous navigation in an unknown dynamic environment for a single and a group of three wheeled omnidirectional mobile robots (TWOMRs). The robot has to track a dynamic target while avoiding dynamic obstacles and dynamic walls in an unknown and very dense environment. It adopts a behavior-based controller that consists of four behaviors: “target tracking”, “obstacle avoidance”, “dynamic wall following” and “avoid robots”. The paper considers the problem of kinematic saturation. In addition, it introduces a strategy for predicting the velocity of dynamic obstacles based on two successive measurements of the ultrasonic sensors to calculate the velocity of the obstacle expressed in the sensor frame. Furthermore, the paper proposes a strategy to deal with dynamic walls even when they have U-like or V-like shapes. The approach can also deal with the formation control of a group of robots based on the leader-follower structure and the behavior-based control, where the robots have to get together and maintain a given formation while navigating toward the target, avoiding obstacles and walls in a dynamic environment. The effectiveness of the proposed approaches is demonstrated via simulation.
  • [1] T. Dash. Automatic navigation of wall following mobile robot using adaptive resonance theory of type-1[J]. Biologically Inspired Cognitive Architectures, 2015, 12. doi: 10.1016/j.bica.2015.04.008
    [2] M. Panda, B. Das, B. Subudhi, B. B. Pati.  A comprehensive review of path planning algorithms for autonomous underwater vehicles[J]. International Journal of Automation and Computing, 2020, 17(3): 321-352. doi: 10.1007/s11633-019-1204-9
    [3] Y. Zhu, T. Zhang, J. Y. Song, X. Q. Li.  A hybrid navigation strategy for multiple mobile robots[J]. Robotics and Computer-Integrated Manufacturing, 2013, 29(4): 129-141. doi: 10.1016/j.rcim.2012.11.007
    [4] M. Algabri, H. Mathkour, H. Ramdane, M. Alsulaiman. Comparative study of soft computing techniques for mobile robot navigation in an unknown environment[J]. Computers in Human Behavior, 2015, 50: 42-56. doi: 10.1016/j.chb.2015.03.062
    [5] O. Motlagh, S. H. Tang, N. Ismail, A. R. Ramli. An expert fuzzy cognitive map for reactive navigation of mobile robots[J]. Fuzzy Sets and Systems, 2012, 201: 105-121. doi: 10.1016/j.fss.2011.12.013
    [6] A. Petrovskaya, O. Khatib.  Global localization of objects via touch[J]. IEEE Transactions on Robotics, 2011, 27(3): 569-585. doi: 10.1109/TRO.2011.2138450
    [7] O. Khatib.  Real-time obstacle avoidance for manipulators and mobile robots[J]. The International Journal of Robotics Research, 1986, 5(1): 90-98. doi: 10.1177/027836498600500106
    [8] A. N. Ouda, A. Mohamed, M. EI-Gindy, H. X. Lang, J. Ren.  Development and modeling of remotely operated scaled multi-wheeled combat vehicle using system identification[J]. International Journal of Automation and Computing, 2019, 16(3): 261-273. doi: 10.1007/s11633-018-1161-8
    [9] K. Al-Mutib, E. Mattar, M. Alsulaiman. Implementation of fuzzy decision based mobile robot navigation using stereo vision[J]. Procedia Computer Science, 2015, 62: 143-150. doi: 10.1016/j.procs.2015.08.427
    [10] B. Hu, J. C. Wang.  Deep learning based hand gesture recognition and UAV flight controls[J]. International Journal of Automation and Computing, 2020, 17(1): 17-29. doi: 10.1007/s11633-019-1194-7
    [11] M. Beyeler, N. Oros, N. Dutt, J. L. Krichmar. A GPU-accelerated cortical neural network model for visually guided robot navigation[J]. Neural Networks, 2015, 72: 75-87. doi: 10.1016/j.neunet.2015.09.005
    [12] X. Q. Wang, Z. G. Hou, F. Lv, M. Tan, Y. J. Wang. Mobile robots′ modular navigation controller using spiking neural networks[J]. Neurocomputing, 2014, 134: 230-238. doi: 10.1016/j.neucom.2013.07.055
    [13] S. Kundu, D. R. Parhi.  Reactive navigation of underwater mobile robot using ANFIS approach in a manifold manner[J]. International Journal of Automation and Computing, 2017, 14(3): 307-320. doi: 10.1007/s11633-016-0983-5
    [14] A. M. Rao, K. Ramji, B. S. K. Sundara Siva Rao, V. Vasu, C. Puneeth. Navigation of non-holonomic mobile robot using neuro-fuzzy logic with integrated safe boundary algorithm[J]. International Journal of Automation and Computing, 2017, 14(3): 285-294. doi: 10.1007/s11633-016-1042-y
    [15] Y. L. Chen, J. Cheng, C. Lin, X. Y. Wu, Y. S. Ou, Y. S. Xu. Classification-based learning by particle swarm optimization for wall-following robot navigation[J]. Neurocomputing, 2013, 113: 27-35. doi: 10.1016/j.neucom.2012.12.037
    [16] M. A. P. Garcia, O. Montiel, O. Castillo, R. Sepulveda, P. Melin.  Path planning for autonomous mobile robot navigation with ant colony optimization and fuzzy cost function evaluation[J]. Applied Soft Computing, 2009, 9(3): 1102-1110. doi: 10.1016/j.asoc.2009.02.014
    [17] A. Babinec, F. Duchoň, M. Dekan, P. Pásztó, M. Kelemen. VFH*TDT (VFH* with time dependent tree): A new laser based obstacle avoidance method designed for environment with non-static obstacles[J]. Robotics and Autonomous Systems, 2014, 62: 1098-1115. doi: 10.1016/j.robot.2014.05.003
    [18] I. Ulrich, J. Borenstein. VFH+: Reliable obstacle avoidance for fast mobile robots. In Proceedings of IEEE International Conference on Robotics and Automation, Leuven, Belgium, pp. 1572−1577, 1998.
    [19] I. Ulrich, J. Borenstein. VFH*: Local obstacle avoidance with look-ahead verification. In Proceedings of IEEE International Conference on Robotics and Automation, San Francisco, USA, pp. 2505−2511, 2000.
    [20] B. Li, C. Xu, K. L. Teo, J. Chu. Time optimal Zermelo′s navigation problem with moving and fixed obstacles[J]. Applied Mathematics and Computation, 2013, 224: 866-875. doi: 10.1016/j.amc.2013.08.092
    [21] M. A. K. Jaradat, M. Al-Rousan, L. Quadan.  Reinforcement based mobile robot navigation in dynamic environment[J]. Robotics and Computer-Integrated Manufacturing, 2011, 27(1): 135-149. doi: 10.1016/j.rcim.2010.06.019
    [22] A. S. Matveev, M. C. Hoy, A. V. Savkin. A globally converging algorithm for reactive robot navigation among moving and deforming obstacles[J]. Automatica, 2015, 54: 292-304. doi: 10.1016/j.automatica.2015.02.012
    [23] Y. N. Wang, Y. M. Yang, X. F. Yuan, Y. Zuo, Y. L. Zhou, F. Yin, L. Tan.  Autonomous mobile robot navigation system designed in dynamic environment based on transferable belief model[J]. Measurement, 2011, 44(8): 1389-1405. doi: 10.1016/j.measurement.2011.05.010
    [24] X. Y. Zhong, X. G. Zhong, X. F. Peng. Velocity-Change-Space-based dynamic motion planning for mobile robots navigation[J]. Neurocomputing, 2014, 143: 153-163. doi: 10.1016/j.neucom.2014.06.010
    [25] A. V. Savkin, C. Wang.  Seeking a path through the crowd: Robot navigation in unknown dynamic environments with moving obstacles based on an integrated environment representation[J]. Robotics and Autonomous Systems, 2014, 62(10): 1568-1580. doi: 10.1016/j.robot.2014.05.006
    [26] O. Montiel, U. Orozco-Rosas, R. Sepúlveda.  Path planning for mobile robots using bacterial potential field for avoiding static and dynamic obstacles[J]. Expert Systems with Applications, 2015, 42(12): 5177-5191. doi: 10.1016/j.eswa.2015.02.033
    [27] Z. H. Li, Z. W. Wu, Y. L. Fu. Dynamic obstacle avoidance of mobile robot tele-operation based on non-contact impedance control. In Proceedings of the 11th World Congress on Intelligent Control and Automation, IEEE, Shenyang, China, pp. 1421−1426, 2014.
    [28] K. Kanjanawanishkul.  Formation control of mobile robots: Survey[J]. Academic Journal Engineering, 2011, 4(1): 50-64.
    [29] I. Navarro, F. Matía. A survey of collective movement of mobile robots[J]. International Journal of Advanced Robotic Systems, 2013, 10(1). doi: 10.5772/54600
    [30] M. J. Mataric. Designing emergent behaviors: From local interactions to collective intelligence. In Proceedings of the 2nd International Conference on from Animals to Animats 2: Simulation of Adaptive Behavior: Simulation of Adaptive Behavior, Honolulu, USA, pp. 432−441, 1992.
    [31] H. Rezaee, F. Abdollahi.  A decentralized cooperative control scheme with obstacle avoidance for a team of mobile robots[J]. IEEE Transactions on Industrial Electronics, 2014, 61(1): 347-354. doi: 10.1109/TIE.2013.2245612
    [32] D. D. Xu, X. N. Zhang, Z. Q. Zhu, C. L. Chen, P. Yang. Behavior-based formation control of swarm robots. Mathematical Problems in Engineering, vol. 2014, Article number 205759, 2014.
    [33] Y. C. Liu, R. Bucknall.  A survey of formation control and motion planning of multiple unmanned vehicles[J]. Robotica, 2018, 36(7): 1019-1047. doi: 10.1017/S0263574718000218
    [34] J. Vilca, L. Adouane, Y. Mezouar. Adaptive leader-follower formation in cluttered environment using dynamic target reconfiguration. In Proceedings of the 12th International Symposium, Springer, Tokyo, Japan, pp. 237−254, 2016.
    [35] K. Raghuwaiya, B. Sharma, J. Vanualailai. Leader-follower based locally rigid formation control. Journal of Advanced Transportation, vol. 2018, Article number 5278565, 2018.
    [36] G. Lee, D. Chwa.  Decentralized behavior-based formation control of multiple robots considering obstacle avoidance[J]. Intelligent Service Robotics, 2018, 11(1): 127-138. doi: 10.1007/s11370-017-0240-y
    [37] H. S. Kim, J. K. Park, T. Y. Kuc, N. Y. Ko, Y. S. Moon.  A formation and traction control design for multiple mobile robots[J]. International Journal of Control, Automation and Systems, 2017, 15(3): 1287-1301. doi: 10.1007/s12555-016-0025-7
    [38] M. Khaledyan, M. de Queiroz.  A formation maneuvering controller for multiple non-holonomic robotic vehicles[J]. Robotica, 2018, 37(1): 189-211. doi: 10.1017/S0263574718000942
    [39] M. A. Lewis, K. H. Tan.  High precision formation control of mobile robots using virtual structures[J]. Autonomous Robots, 1997, 4(4): 387-403. doi: 10.1023/A:1008814708459
    [40] L. E. Parker. Adaptive action selection for cooperative agent teams. In Proceedings of the 2nd International Conference on Simulation of Adaptive Behavior, J. A. Meyer, H. Roitblat, S. Wilson, Eds., Cambridge, UK: MIT Press, pp. 442−445, 1992.
    [41] L. E. Parker. Designing control laws for cooperative agent teams. In Proceedings of IEEE International Conference on Robotics and Automation, Atlanta, USA, pp. 582−587, 1993.
    [42] L. E. Parker. An experiment in mobile robotic cooperation. In Proceedings of ASCE Specialty Conference on Robotics for Challenging Environments, Albuquerque, USA, pp. 131−139, 1994.
    [43] N. Hacene, B. Mendil.  Motion analysis and control of three-wheeled omnidirectional mobile robot[J]. Journal of Control, Automation and Electrical Systems, 2019, 30(2): 194-213. doi: 10.1007/s40313-019-00439-0
    [44] N. Hacene, B. Mendil.  Fuzzy behavior-based control of three wheeled omnidirectional mobile robot[J]. International Journal of Automation and Computing, 2019, 16(2): 163-185. doi: 10.1007/s11633-018-1135-x
    [45] J. J. Craig. Introduction to Robotics: Mechanics and Control, 3rd ed., New York, USA: Pearson Education, Inc., 2005.
    [46] R. A. Brooks. A Robust Layered Control System for A Mobile Robot, Technical Report AIM 864, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA, pp. 1−25, 1985.
    [47] C. C. Tsai, H. L. Wu, F. C. Tai, Y. S. Chen.  Distributed consensus formation control with collision and obstacle avoidance for uncertain networked omnidirectional multi-robot systems using fuzzy wavelet neural networks[J]. International Journal of Fuzzy Systems, 2017, 19(5): 1375-1391. doi: 10.1007/s40815-016-0239-0

Figures (27)


    • The autonomous navigation of mobile robots remains a challenging task in dynamic environments. It refers to the ability of the robot to plan a real-time route from an initial position to a desired target or along a desired path with obstacle avoidance capabilities as efficiently and as reliably as possible[1-5]. Furthermore, robots need to perceive objects around them based on sensory information[6].

      Various approaches have been proposed in order to solve the problem of autonomous navigation of mobile robots: classical methods such as the artificial potential field method (APF) introduced in 1985 by Khatib[7]; and modern methods employing artificial intelligence and optimization techniques such as fuzzy logic[8,9], neural networks[10-12], neuro-fuzzy[13, 14], particle swarm optimization[15] and ant colony optimization[16].

      Although the classical methods perform well in static environments, they fail to react to sudden changes in a dynamic environment. Recently, many approaches have been developed to solve the problem of navigation in dynamic environments. Babinec et al.[17] proposed a method based on modifications of the vector field histogram+ (VFH+)[18] and vector field histogram* (VFH*)[19] methods, improving the histogram algorithms and using newer sensors such as a laser rangefinder. Li et al.[20] studied a time-optimal Zermelo′s navigation problem (ZNP) with moving and fixed obstacles; the problem is formulated as an optimal control problem with continuous inequality constraints and terminal state constraints and solved using the control parameterization technique together with the time scaling transform. Jaradat et al.[21] developed an approach for solving the problem of mobile robot path planning in an unknown dynamic environment based on Q-learning. Matveev et al.[22] proposed a reactive algorithm to navigate a planar mobile robot in dynamic environments using omnidirectional vision of the scene. Wang et al.[23] investigated the use of a transferable belief model (TBM) for path planning of non-holonomic mobile robots equipped with ultrasonic sensors in an unknown dynamic environment. Zhong et al.[24] studied dynamic motion planning in an unknown dynamic environment, where motion planning is translated into a problem of acceleration selection in a new velocity change space (VCS). Savkin and Wang[25] presented an algorithm for collision-free navigation of a non-holonomic robot in unknown complex dynamic environments with moving obstacles, based on an integrated representation of the information about the environment. Montiel et al.[26] proposed a method called bacterial potential field (BPF) that ensures a feasible, optimal and safe path; it combines an artificial potential field (APF) method with a bacterial evolutionary algorithm (BEA) to obtain a more flexible path planner. Li et al.[27] proposed a method using a non-contact impedance algorithm in unknown terrain, establishing a non-contact impedance model to form a virtual spring and damper between the robot and the local terrain.

      Inspired by the collective behavior of animals, such as the schooling (shoaling) of fish, the swarming of insects, the herding of land animals, the flocking of birds, and the crowding of humans, multi-robot systems are classified into two categories according to the positions the robots must occupy: formation control and flocking (swarming). Formation control coordinates a group of robots so that they reach and maintain a formation with a certain shape while moving in the environment[28, 29]. Flocking is the problem of moving a group of robots when the shape and relative positions between the robots are not important and there are no fixed leaders[30].

      In the literature, many researchers classify formations into three main structures, namely leader-follower, virtual, and behavior-based (behavioral) structures[31-36]. In the leader-follower structure, one agent is considered the leader and the other agents are followers that track the leader by maintaining a distance and angle with respect to it. The main advantage of this approach is that the formation problem is reduced to a tracking problem[37, 38]. The virtual structure was first proposed in [39], where it is defined as a collection of elements, e.g., robots, that maintain a (semi-)rigid geometric relationship to each other and to a frame of reference; all the robots thus keep a rigid geometric relationship based on a virtual point or virtual leader. In the behavior-based structure, each robot is controlled by predefined behaviors such as target seeking, obstacle avoidance, and formation keeping. Early work was done in [30, 40-42].

      However, dead ends, trapping in local minima, behavior oscillation, and so on are drawbacks that most existing approaches suffer from in dynamic environments. Furthermore, they do not address dynamic walls. This paper aims to solve the problem of autonomous navigation in dynamic environments for both single and multiple three wheeled omnidirectional mobile robots (TWOMRs) and to overcome the existing drawbacks, based on the following features:

      1) The proposed strategy adopts a behavior-based controller for TWOMR navigation. It consists of four behaviors: “target tracking”, “obstacle avoidance”, “dynamic wall following” and “avoid robots”.

      2) The robot employs our previously proposed solution[43] to the problem of kinematic saturation, i.e., when the robot plans translational and angular velocities, the corresponding wheel velocities must remain within the interval between the minimum and maximum wheel velocities to avoid actuator saturation.

      3) The proposed strategy predicts the speed of dynamic obstacles based on the use of two successive measurements to calculate the speed of the obstacle in the frame of the sensor.

      4) The proposed strategy can deal with dynamic walls even when they have U-like or V-like shapes.

      5) The formation control of multiple robots is based on the leader-follower and behavior-based structures, where the robots have to reach and maintain a formation with a certain shape while tracking a dynamic target in a dynamic environment.
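As a minimal sketch of the kinematic-saturation idea in item 2 (the actual solution is detailed in [43]), the planned body twist can be mapped through the robot′s inverse kinematics and scaled uniformly whenever a wheel limit would be exceeded. The limit `V_WHEEL_MAX` and the radius `L` below are assumed values for illustration only:

```python
import numpy as np

V_WHEEL_MAX = 1.5  # assumed wheel speed limit (m/s); illustrative only
L = 0.2            # assumed robot radius (m); illustrative only

# Inverse kinematics matrix of the TWOMR: wheel speeds from the local twist.
M_INV = np.array([
    [0.0,             1.0,  L],
    [-np.sqrt(3) / 2, -0.5, L],
    [np.sqrt(3) / 2,  -0.5, L],
])

def saturate_twist(v_local):
    """Uniformly scale [x_dot, y_dot, phi_dot] so that every wheel speed
    stays within [-V_WHEEL_MAX, V_WHEEL_MAX], preserving the motion direction."""
    v_local = np.asarray(v_local, dtype=float)
    wheels = M_INV @ v_local
    worst = np.max(np.abs(wheels))
    if worst <= V_WHEEL_MAX:
        return v_local
    return v_local * (V_WHEEL_MAX / worst)
```

Scaling the whole twist by one factor keeps the translational and rotational components in proportion, so the planned path shape is preserved while respecting the actuators.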

      This paper is organized as follows. Section 2 presents the problem formulation while Section 3 presents the behavior-based controller design. The simulation results are given in Section 4 before giving the conclusion and the future work in Section 5.

    • The problem addressed in the paper is divided into two sub-problems: the navigation of a single robot, and the formation control of a group of robots, both in a dynamic environment.

      The robot is described in Fig. 1.

      Figure 1.  Robot description

      The kinematics of the TWOMR is given as follows[44]:

      $ \left[ {\begin{array}{*{20}{c}} \dot{x}_l \\ \dot{y}_l \\ \dot{\varphi}_l \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 0 & -\dfrac{\sqrt{3}}{3} & \dfrac{\sqrt{3}}{3} \\ \dfrac{2}{3} & -\dfrac{1}{3} & -\dfrac{1}{3} \\ \dfrac{1}{3L} & \dfrac{1}{3L} & \dfrac{1}{3L} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} V_{w1} \\ V_{w2} \\ V_{w3} \end{array}} \right] $

      (1)

      where $V_{w1}$, $V_{w2}$ and $V_{w3}$ are the velocities of the wheels, $\dot{x}_l$, $\dot{y}_l$ and $\dot{\varphi}_l$ are the translational and angular velocities of the robot expressed in the local (robot) frame, and $L$ is the radius of the robot.

      The inverse kinematics is given by

      $ \left[ {\begin{array}{*{20}{c}}{V}_{w1}\\ {V}_{w2}\\ {V}_{w3}\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}0 & 1& L\\ -\dfrac{\sqrt{3}}{2}& -\dfrac{1}{2}& L\\ \dfrac{\sqrt{3}}{2}& -\dfrac{1}{2}& L\end{array}} \right] \left[ {\begin{array}{*{20}{c}}\dot{{x}_{l}}\\ \dot{{y}_{l}}\\ \dot{{\varphi }_{l}}\end{array}} \right] . $

      (2)
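A minimal numerical sketch of (1) and (2), assuming an illustrative radius `L`, verifies that the two matrices are inverses of each other (helper names are ours, not the paper′s):

```python
import numpy as np

L = 0.2  # assumed robot radius (m); illustrative only

# Inverse kinematics matrix from Eq. (2): wheel velocities from the local twist.
M_inv = np.array([
    [0.0,             1.0,  L],
    [-np.sqrt(3) / 2, -0.5, L],
    [np.sqrt(3) / 2,  -0.5, L],
])

# Forward kinematics matrix from Eq. (1): local twist from wheel velocities.
M_fwd = np.array([
    [0.0,           -np.sqrt(3) / 3, np.sqrt(3) / 3],
    [2.0 / 3.0,     -1.0 / 3.0,     -1.0 / 3.0],
    [1.0 / (3 * L),  1.0 / (3 * L),  1.0 / (3 * L)],
])

def wheel_speeds(v_local):
    """Eq. (2): map [x_dot, y_dot, phi_dot] (local frame) to [Vw1, Vw2, Vw3]."""
    return M_inv @ np.asarray(v_local, dtype=float)

def body_velocity(v_wheels):
    """Eq. (1): map wheel velocities back to the local-frame twist."""
    return M_fwd @ np.asarray(v_wheels, dtype=float)
```

Mapping a twist through `wheel_speeds` and back through `body_velocity` recovers the original twist, which confirms the consistency of (1) and (2).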
    • The robot has to track the target T in an unknown cluttered dynamic environment, avoiding static and dynamic obstacles along its path (Fig. 2). The obstacles can be either a static or dynamic wall. The robot is equipped with a ring of twelve ultrasonic sensors which enables it to perceive the environment.

      Figure 2.  Navigation problem of a single robot

      β is the bearing angle. The angle $\varphi$ of the $x_l$ axis with respect to the $x_G$ axis can be defined as

      $ \varphi= \int \dot{\varphi } {\rm d}t $

      (3)

      where $\dot{{\varphi }}$ is the robot angular velocity. The transformation of positions between the global frame and the local frame is given as

      $ \left[ {\begin{array}{*{20}{c}} x_G \\ y_G \\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} \cos\varphi & -\sin\varphi & x_R \\ \sin\varphi & \cos\varphi & y_R \\ 0 & 0 & 1 \end{array}} \right] \left[ {\begin{array}{*{20}{c}} x_l \\ y_l \\ 1 \end{array}} \right] $

      (4)

      where $x_R$ and $y_R$ are the actual coordinates of the robot in the global frame.
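The homogeneous transformation (4) can be sketched as follows; the function name `local_to_global` and the sample values are illustrative:

```python
import numpy as np

def local_to_global(p_local, x_R, y_R, phi):
    """Eq. (4): transform a point from the robot (local) frame to the global
    frame. The homogeneous matrix rotates by the heading phi and translates
    by the robot position (x_R, y_R)."""
    T = np.array([
        [np.cos(phi), -np.sin(phi), x_R],
        [np.sin(phi),  np.cos(phi), y_R],
        [0.0,          0.0,         1.0],
    ])
    x_l, y_l = p_local
    return (T @ np.array([x_l, y_l, 1.0]))[:2]
```

For example, a point one unit ahead on the $x_l$ axis of a robot at (2, 3) heading $\varphi = \pi/2$ lands at (2, 4) in the global frame.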

    • The strategy adopted in our approach for formation control is based on the leader-follower structure and the behavior-based control. A number of robots called “followers” follow a robot called the “leader”. The approach proposed to maintain a given formation is to consider virtual targets where the followers have to track these virtual targets as shown in Fig. 3.

      Figure 3.  Problem of formation control

      The positions of the virtual targets are located with respect to the leader, i.e., in its local frame. So, the leader is in charge of tracking the target, while the followers are responsible for tracking the virtual targets that move when the leader moves.

    • The proposed controller employs the behavioral approach to control the robot. The proposed architecture consists of four behaviors as illustrated in Fig. 4:

      Figure 4.  Behavior-based controller architecture

      1) “Target tracking” behavior

      2) “Obstacle avoidance” behavior

      3) “Dynamic wall following” behavior

      4) “Avoid robots” behavior.

      If the robot wants to track the target, it needs the “target tracking” behavior which will be presented in the following subsection.

      The block diagram of the “target tracking” behavior is shown in Fig. 4. The inputs of the controller are the distance $D_{RT}$ between the robot and the target and the robot bearing $\beta$ with respect to the line connecting the actual position of the robot and the target. The outputs are the three translational velocities of the robot wheels $V_{w1}$, $V_{w2}$ and $V_{w3}$.

      As shown in Fig. 5, if there is no obstacle on the front side of the robot, it moves toward the target at maximum speed; as it approaches the target, the speed is decreased, and once the robot reaches the target it matches the target′s velocity (zero if the target is static). Assume the current coordinates of the robot are ($x_R$, $y_R$) and the coordinates of the target are ($x_T$, $y_T$).

      Figure 5.  Target tracking behavior

      The proposed tracking strategy uses combined translational and rotational movements simultaneously.

      The distance between the robot and the target is given by

      $ {D}_{RT}=\sqrt{{({x}_{T}-{x}_{R})}^{2}+{({y}_{T}-{y}_{R})}^{2}}. $

      (5)

      The bearing angle β is given by

      $ \beta = {{\rm{atan}}}2 ({y}_{T}-{y}_{R},{x}_{T}-{x}_{R}) . $

      (6)

      The vector of the “target tracking” behavior ${{{V}}}_{ {{T}} {{T}}}$[${\dot{{x}}}_{{{T}}{{T}}}$, ${\dot{{y}}}_{{{T}}{{T}}}$, ${\dot{{\varphi }}}_{{{T}}{{T}}}$]T combines the translational velocity and the rotational velocity.

      To establish the angular velocity ${\dot{{\varphi }}}_{{{T}}{{T}}}$, we use human expertise. Looking at Fig. 5, the objective is to keep the bearing angle equal to zero, i.e., the robot remains oriented toward the target: if the bearing angle is positive, the robot turns left in the positive direction, and it turns right if the bearing angle is negative. This reasoning is captured by Fig. 6(a). The angular velocity ${\dot{{\varphi }}}_{{{T}}{{T}}}$ is proportional to the bearing angle ${{\beta }}$: when the bearing angle is maximum positive (+${{\pi }}$), the angular velocity takes its maximum positive value (+${\dot{{\varphi }}}_{{\rm{max}}}$), and it takes its maximum negative value ($-{\dot{{\varphi }}}_{{\rm{max}}}$) when the bearing angle is maximum negative (−${{\pi }}$). Fig. 6(a) can be translated into a simple linear equation:

      Figure 6.  Target tracking

      $ {\dot{\varphi }}_{TT}=\frac{{\dot{\varphi }}_{{\rm{max}}}}{\pi }\beta. $

      (7)

      The translational velocity ${{{V}}}_{ {{T}}{{r}}{{a}}{{n}}{{s}}}$ depicted in Fig. 6(b) is a function of the distance ${{D}}_{{{R}}{{T}}}$. It can be expressed as

      $ {V_{Trans}} = \left\{ {\begin{array}{*{20}{c}} {{V_{{\rm{max}}}},\;\;\;\;\;\;\;\;\;{\rm if}\;{D_{RT}} \ge b}\\ {\dfrac{{{V_{{\rm{max}}}}}}{b}{D_{RT}},\;{\rm if}\;{D_{RT}} < b} \end{array}} \right. $

      (8)

      where b is the constant distance below which the robot starts decreasing its velocity.

      All the components of the “target tracking” vector in the local frame can be given as

      $ {V}_{l\_TT}\left[ {\begin{array}{*{20}{c}}{\dot{x}}_{l\_TT}\\ {\dot{y}}_{l\_TT}\\ {\dot{\varphi }}_{l\_TT}\end{array}} \right]=\left[ {\begin{array}{*{20}{c}}{V}_{T rans}{\rm{cos}}\left(\beta \right)\\ {V}_{T rans}{\rm{sin}}\left(\beta \right)\\ \dfrac{{\dot{\varphi }}_{{\rm{max}}}}{\pi }\beta \end{array}} \right]. $

      (9)
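      Equations (5)-(9) can be collected into a short routine. The following is a minimal Python sketch; the gains $V_{\rm max}$, ${\dot{\varphi }}_{\rm max}$ and the distance b are assumed tuning parameters.

```python
import math

def target_tracking(x_R, y_R, x_T, y_T, V_max=1.0, phi_dot_max=1.0, b=0.5):
    """Sketch of the "target tracking" behavior output, Eqs. (5)-(9)."""
    # Eq. (5): distance between the robot and the target
    D_RT = math.hypot(x_T - x_R, y_T - y_R)
    # Eq. (6): bearing angle toward the target
    beta = math.atan2(y_T - y_R, x_T - x_R)
    # Eq. (8): full speed far away, speed proportional to distance inside b
    V_trans = V_max if D_RT >= b else (V_max / b) * D_RT
    # Eqs. (7) and (9): velocity vector in the local frame
    return (V_trans * math.cos(beta),
            V_trans * math.sin(beta),
            (phi_dot_max / math.pi) * beta)
```

      For a target straight ahead, the bearing is zero, so the behavior commands pure translation at $V_{\rm max}$ with no rotation.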

      When the robot tracks the target, it can face obstacles all along its path. “Obstacle avoidance” behavior is therefore necessary.

    • This behavior depends only on the information acquired from the twelve ultrasonic sensors (Fig. 7). It employs two strategies:

      Figure 7.  Configuration of the robot sensors

      1) The first is the prediction of the velocity of obstacles.

      2) The second is the search for the nearest gap or free space between obstacles to bypass them.

      First, we define the safety distance ${{D}}_{{{s}}}$ as a threshold for detecting obstacles. This threshold can be chosen by trial and error, since the robot adopts a strategy of predicting obstacle velocities.

      We assign to each sensor k the distance ${{{S}}}_{{{k}}}$ defined as

      $ {S}_{k}= \left\{ {\begin{array}{*{20}{c}} {{S_k}},&{{\rm{if}}\;{S_k} < {D_s}}\\ {{D_s}},&{{\rm{if}}\;{S_k} \ge {D_s}} \end{array}} \right. $

      (10)

      where $ {{{S}}}_{{{k}}} $, k = 1, ···, 12 is the distance between the robot and an obstacle measured by the k-th sensor. Equation (10) shows that the distance ${{S}}_{{k}}$ is limited to the interval [0, $ {{D}}_{{{s}}} $].

      Now we give more details about the first strategy which is the prediction of the velocity of obstacles.

    • It is easy for the robot to plan its motion when the position and the velocity of the obstacles are known, but unfortunately this is not the case in a real environment. In the global frame $\left({{{o}}}_{{G}},{{x}}_{{G}},{{y}}_{{G}}\right)$, the robot and all objects can be at rest or in motion (Fig. 8), but in the local frame $\left({{{o}}}_{{l}},{{x}}_{{l}},{{y}}_{{l}}\right)$ relative to a moving robot, the robot sees itself at rest and all the environment including obstacles are in relative motion even when they are at rest in the global frame.

      Figure 8.  From static to dynamic environment

      The sole information that can be acquired from an ultrasonic sensor is the distance of the object from the sensor; a single direct measurement gives no information about the position or orientation of objects. However, two successive measurements can be used to calculate the abscissa component of the velocity of the obstacle in the sensor frame (Fig. 9).

      Figure 9.  Predicting the obstacle velocity

      We attach to each sensor a sensor frame ${{{\cal{F}}}}_{{{k}}}\left({{{o}}}_{{{k}}},{{x}}_{{{k}}},{{y}}_{{{k}}}\right)$ where its origin $ {{{o}}}_{{{k}}} $ is coincident with the robot center $ {{{o}}}_{{l}} $ and it makes an angle $ {{\alpha }}_{{{k}}} $ with the $ {{x}}_{{l}} $ axis (Fig. 9). The velocity measured using two successive measurements of the ultrasonic sensor is the abscissa component of the obstacle velocity in the sensor frame $ {{{\cal{F}}}}_{{{k}}}\left({{{o}}}_{{{k}}},{{x}}_{{{k}}},{{y}}_{{{k}}}\right) $.

      Let $ {{{v}}}_{{{G}}{{o}}{{b}}{{s}}} $ be the speed of the obstacle in the global frame, $ {{{v}}}_{{{l}}{{o}}{{b}}{{s}}} $ its speed in the local frame, and $ {{{V}}}_{{{k}}}({{{V}}}_{{{k}}{{x}}},{{{V}}}_{{{k}}{{y}}}) $ its speed in the sensor frame $ {{{\cal{F}}}}_{{{k}}}\left({{{o}}}_{{{k}}},{{x}}_{{{k}}},{{y}}_{{{k}}}\right) $. The ordinate component $ {{{V}}}_{{{k}}{{y}}} $ has no effective influence on the robot since it directs the obstacle away from the robot (the robot is at rest in the local frame). So, only the abscissa component $ {{{V}}}_{{{k}}{{x}}} $, which directs the obstacle toward the robot along the sensor direction (the $ {{x}}_{{{k}}} $ axis), can have an effective impact on the robot. Since the robot has no prior knowledge about the position and the velocity of the obstacle, it uses the ultrasonic sensors to calculate the abscissa component of the obstacle velocity with respect to the frame attached to the sensor that detected the obstacle. Note that even if the obstacle is at rest (static) in the global (world) frame, it is dynamic in the local (robot) frame, where it gains the velocity of the robot in the global frame with inverted sign.

      Consider $ {{{S}}}_{{{k}}} $ the current distance of the obstacle from the robot and $ {{{S}}}_{{{k}}}^{-} $ the previously measured distance. The current abscissa component of the obstacle velocity in the sensor frame $ {{{\cal{F}}}}_{{{k}}}\left({{{o}}}_{{{k}}},{{x}}_{{{k}}},{{y}}_{{{k}}}\right) $ is given by

      $ {V}_{kx} =\frac{{S}_{k} - {S}_{k}^{-} }{\Delta t} $

      (11)

      where $\Delta{{t}}$ is the sample time.

      Note that this velocity includes contributions from both the obstacle and the robot. We need to eliminate the robot contribution from the total velocity to get the pure contribution of the obstacle. So, we calculate the abscissa component of the robot velocity in each sensor frame $ {{{\cal{F}}}}_{{{k}}} $.

      The relation between velocities in the local frame $ {{{\cal{F}}}}_{{l}}\left({{{o}}}_{{l}},{{x}}_{{l}},{{y}}_{{l}}\right) $ of the robot and the frame of the k-th sensor $ {{{\cal{F}}}}_{{{k}}}\left({{{o}}}_{{{k}}},{{x}}_{{{k}}},{{y}}_{{{k}}}\right) $ is given in compact form by

      $ {\dot{X}}_{l} = {R}_{k}^{l} ( {\alpha }_{k} ) {\dot{X}}_{k} $

      (12)

      where $ {{R}}_{{{k}}}^{{l}}\left({{\alpha }}_{{{k}}}\right) $ is the rotation matrix which transforms velocities in the k-th sensor frame $ {{{\cal{F}}}}_{{{k}}}\left({{{o}}}_{{{k}}},{{x}}_{{{k}}},{{y}}_{{{k}}}\right) $ into the local frame $ {{{\cal{F}}}}_{{l}}\left({{{o}}}_{{l}},{{x}}_{{l}},{{y}}_{{l}}\right) $ attached to the robot. It is given by[45]

      $ R_k^l\left( {{\alpha _k}} \right) = \left[ {\begin{array}{*{20}{c}} {{\rm{cos}}\;{\alpha _k}}&{ - {\rm{sin}}\;{\alpha _k}}\\ {{\rm{sin}}\;{\alpha _k}}&{{\rm{cos}}\;{\alpha _k}} \end{array}} \right]. $

      (13)

      Assume $ {\dot{{{X}}}}_{{{r}}{{G}}}\left[{\dot{{x}}}_{{{r}}{{G}}}\; {\dot{{y}}}_{{{r}}{{G}}}\right]^{\rm {T}} $ is the velocity of the robot in the global frame. An obstacle detected by the k-th sensor gains this velocity in the local frame with inverted sign ($ {\dot{{{X}}}}_{{{o}}{{l}}{{r}}} $), where

      $ {\dot{X}}_{olr} = - {\dot{X}}_{rG}. $

      (14)

      Then, its velocity $ {\dot{{{X}}}}_{{{o}}{{k}}{{r}}}\left[{\dot{{x}}}_{{{o}}{{k}}{{r}}}\;{\dot{{y}}}_{{{o}}{{k}}{{r}}}\right]^{\rm {T}} $ in the sensor frame $ {{{\cal{F}}}}_{{{k}}} $ is given by

      $ {\dot{X}}_{okr} = {\left[{R}_{k}^{l}\left({\alpha }_{k}\right)\right]}^{-1} {\dot{X}}_{olr}. $

      (15)

      The abscissa component $ {\dot{{x}}}_{{{o}}{{k}}{{r}}} $ is the robot′s contribution to the obstacle velocity in the sensor frame $ {{{\cal{F}}}}_{{{k}}} $. The pure abscissa contribution of the obstacle can be calculated easily. The abscissa velocity $ {{{V}}}_{{{k}}{{x}}} $ of the obstacle with regard to the robot (which is the velocity measured by the k-th sensor) including both contributions of the robot $ {\dot{{x}}}_{{{o}}{{k}}{{r}}} $ and the obstacle $ {\dot{{x}}}_{{{o}}{{k}}} $ is given by

      $ {V}_{kx} = {\dot{x}}_{ok} - {\dot{x}}_{okr} . $

      (16)

      Then, the pure velocity contribution of the obstacle in the velocity measured by the k-th sensor is

      $ {\dot{x}}_{ok} = {V}_{kx} + {\dot{x}}_{okr} . $

      (17)

      After the obstacle contribution is calculated, we can distinguish between three cases:

      1) ${\dot{{x}}}_{{{o}}{{k}}}= {{0}}$ means the obstacle is static in the global frame.

      2) ${\dot{{x}}}_{{{o}}{{k}}} < {{0}}$ means the obstacle is dynamic in the global frame and moves toward the robot, so it may collide with it.

      3) ${\dot{{x}}}_{{{o}}{{k}}} > {{0}}$ means the obstacle is dynamic in the global frame and moves away from the robot; it is therefore not taken into account, since it poses no threat to the robot.

      This can be formulated by the following equation:

      $ {\dot x_{ok}} = \left\{ {\begin{array}{*{20}{c}} {{{\dot x}_{ok}}},&{{\rm{if}}\;{{\dot x}_{ok}} < 0}\\ 0,&\;{{\rm{if}}\;{{\dot x}_{ok}} \ge 0}. \end{array}} \right. $

      (18)

      We return the resulting velocity from (18) to the local (robot) frame. The pure predicted velocity of the obstacle in the sensor frame has the coordinates ${\left[{\dot{{x}}}_{{{o}}{{k}}},{{0}}\right]}^{{\rm{T}}}$. To obtain its coordinates in the local frame, we multiply by the rotation matrix as follows:

      $ {\dot{X}}_{olk} = \left[ {\begin{array}{*{20}{c}}{\dot{x}}_{ol}\\ {\dot{y}}_{ol}\end{array}} \right] = {R}_{k}^{l} ( {\alpha }_{k} ) \left[ {\begin{array}{*{20}{c}}{\dot{x}}_{ok}\\ 0\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}{{\rm{cos}}}\;{\alpha }_{k}{\dot{x}}_{ok}\\ {{\rm{sin}}}\;{\alpha }_{k}{\dot{x}}_{ok}\end{array}} \right]. $

      (19)
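      Steps (11)-(19) can be collected into a small per-sensor routine. The following Python sketch is illustrative only; the sample time and the source of the robot velocity are assumptions of the illustration.

```python
import math

def predicted_obstacle_velocity(S_k, S_k_prev, alpha_k, robot_vel, dt):
    """Sketch of the per-sensor obstacle-velocity prediction, Eqs. (11)-(19).

    S_k, S_k_prev: current and previous clamped sonar readings;
    alpha_k: angle of the k-th sensor frame; robot_vel: robot velocity
    (illustrative input); dt: sample time."""
    # Eq. (11): measured abscissa velocity in the sensor frame
    V_kx = (S_k - S_k_prev) / dt
    # Eq. (14): robot contribution seen in the local frame (inverted sign)
    x_olr, y_olr = -robot_vel[0], -robot_vel[1]
    # Eq. (15): rotate into the sensor frame (inverse rotation = transpose)
    c, s = math.cos(alpha_k), math.sin(alpha_k)
    x_okr = c * x_olr + s * y_olr
    # Eq. (17): pure obstacle contribution
    x_ok = V_kx + x_okr
    # Eq. (18): keep only approaching obstacles
    x_ok = x_ok if x_ok < 0 else 0.0
    # Eq. (19): return the predicted velocity back in the local frame
    return (c * x_ok, s * x_ok)
```

      For a static robot and a reading that shrinks between two samples, the routine reports an obstacle closing along the sensor axis.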

      After completion of the first strategy (the prediction of obstacle velocities), the robot needs free spaces, or gaps, to bypass the obstacles. This is the “gap search strategy”, given next.

    • This strategy looks for possible gaps or free spaces between obstacles to bypass them. A gap can be formulated by calculating the minimum of each two adjacent sensors (Fig. 10) as

      $ \left\{\begin{array}{l}{G}_{k} = {\rm{Min}} ({S}_{k},{S}_{k+1}), \;\;k=1,\cdots,11\\ {G}_{12}= {\rm{Min}} ({S}_{1},{S}_{12}). \end{array}\right. $

      (20)

      Since the distance $ {{{S}}}_{{{k}}} $ belongs to the interval [0, $ {{D}}_{{{s}}} $], $ {{G}}_{{{k}}} $ belongs to the interval [0, $ {{D}}_{{{s}}} $] too. Because the robot usually moves forward toward the target, the gap search strategy uses the five frontal gaps ${{G}}_{{{4}}}$, ${{G}}_{{{5}}}$, ${{G}}_{{{6}}}$, ${{G}}_{{{7}}}$ and ${{G}}_{{{8}}}$ as illustrated in Fig. 10.

      Figure 10.  Five frontal gaps

      The robot looks for the nearest gap on its front based on the search algorithm given by the flowchart illustrated in Fig. 11.

      Figure 11.  Flowchart of the gap search algorithm

      The translational velocity ${{{V}}}_{ {{g}}{{a}}{{p}}}$ corresponding to the gap at angle α is given by

      $ {V}_{ gap} = [{V}_{{\rm{max}}} {\rm{cos}}\left(\alpha \right), {V}_{{\rm{max}}}{{\rm{sin}}}\left(\alpha \right)]. $

      (21)

      The angle $ \alpha $ is obtained from the search algorithm (Fig. 11). The angular velocity is set to zero, $\dot{{\varphi }}={{0}}$, to devote the maximum translational velocity to escaping the obstacles.
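      A minimal sketch of the gap computation of (10), (20) and (21) follows. The sensor layout (the angle assigned to each gap) and the selection rule for the “nearest” gap are assumptions of this illustration, since the paper specifies them through the flowchart of Fig. 11.

```python
import math

def gap_search(S, gap_angles, D_s=0.3, V_max=1.0):
    """Sketch of the gap search strategy, Eqs. (10), (20), (21).

    S: the 12 sonar readings; gap_angles: angle of each gap in the local
    frame (layout-dependent, assumed supplied by the platform)."""
    # Eq. (10): clamp the readings to [0, D_s]
    S = [min(s, D_s) for s in S]
    # Eq. (20): gap clearance = minimum of each pair of adjacent sensors
    G = [min(S[k], S[(k + 1) % 12]) for k in range(12)]
    # Frontal gaps G4..G8 (0-indexed 3..7); assumed selection rule:
    # widest clearance, ties broken by the most central gap angle
    frontal = [3, 4, 5, 6, 7]
    best = max(frontal, key=lambda k: (G[k], -abs(gap_angles[k])))
    a = gap_angles[best]
    # Eq. (21): translational velocity directed toward the selected gap
    return (V_max * math.cos(a), V_max * math.sin(a))
```

      With one frontal sensor reporting a nearby obstacle, the returned velocity points toward one of the remaining free frontal gaps.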

    • To give priority either to avoiding obstacles or to moving through the nearest gap, we define two weights $ {{{p}}}_{{{k}}} $ and $ {{{q}}}_{{{k}}} $ as follows:

      $ {p}_{k} =\frac{{D}_{s}-{S}_{k}}{{D}_{s}} $

      (22)

      $ {q}_{k} = \frac{{S}_{k}}{{D}_{s}} $

      (23)

      where

      $ {p}_{k}{+ q}_{k}=1. $

      (24)

      If an obstacle within the safety distance is close to the robot, then $ {{{p}}}_{{{k}}} $ will be close to 1 and $ {{{q}}}_{{{k}}} $ close to 0, which means that obstacle avoidance gains more priority than the nearest gap, and vice versa.

      The aggregate velocity of all obstacles with their priorities can be calculated as

      $ {V}_{obs} = \sum _{k=1}^{12}{p}_{k} {\dot{X}}_{olk}. $

      (25)

      By weighting ${{{V}}}_{{{g}}{{a}}{{p}}}$ of (21) with this priority, we get the gap velocity given by

      $ {V}_{g}=\left[{{\rm{min}}}\left({q}_{k}\right)\right]\times {V}_{gap},\;\; k=1,\cdots ,12 .$

      (26)

      By adding the vectors $ {{{V}}}_{{{o}}{{b}}{{s}}} $ and $ {{{V}}}_{{{g}}} $, we obtain the vector along which the robot must be directed to avoid obstacles:

      $ {V}_{A}=\left[ {\begin{array}{*{20}{c}}{\dot{x}}_{A}\\ {\dot{y}}_{A}\end{array}} \right]={V}_{obs}+{V}_{g} .$

      (27)

      This vector incorporates the gap velocity and the aggregate velocity of the obstacles with their priorities. Therefore, the output of the “obstacle avoidance” behavior is given by

      $ {V}_{l\_OA}\left[ {\begin{array}{*{20}{c}}{\dot{x}}_{l\_OA}\\ {\dot{y}}_{l\_OA}\\ {\dot{\varphi }}_{l\_OA}\end{array}} \right]=\left[ {\begin{array}{*{20}{c}}{V}_{{\rm{max}}}{\rm{cos}}\left(\gamma \right)\\ {V}_{{\rm{max}}}{{\rm{sin}}}\left(\gamma \right)\\ 0\end{array}} \right] $

      (28)

      where

      $ \gamma ={{\rm{atan}}}2({\dot{y}}_{A},{\dot{x}}_{A}) .$

      (29)
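      The weighting scheme of (22)-(29) can be sketched as follows (illustrative only; S holds the clamped readings of (10), X_olk the per-sensor obstacle velocities of (19), and V_gap the gap velocity of (21)).

```python
import math

def obstacle_avoidance(S, X_olk, V_gap, D_s=0.3, V_max=1.0):
    """Sketch of the "obstacle avoidance" behavior output, Eqs. (22)-(29)."""
    p = [(D_s - s) / D_s for s in S]        # Eq. (22): obstacle priority
    q = [s / D_s for s in S]                # Eq. (23): gap priority
    # Eq. (25): priority-weighted aggregate obstacle velocity
    V_obs = (sum(p[k] * X_olk[k][0] for k in range(12)),
             sum(p[k] * X_olk[k][1] for k in range(12)))
    # Eq. (26): gap velocity weighted by the smallest gap priority
    V_g = (min(q) * V_gap[0], min(q) * V_gap[1])
    # Eqs. (27)-(29): steer at V_max along the combined vector
    x_A, y_A = V_obs[0] + V_g[0], V_obs[1] + V_g[1]
    gamma = math.atan2(y_A, x_A)
    return (V_max * math.cos(gamma), V_max * math.sin(gamma), 0.0)
```

      The angular velocity is zero by construction, matching (28): all of the available speed is spent on translation.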

      After formulating the “obstacle avoidance” behavior, the robot can now track the target and avoid obstacles along its path. But what about large obstacles or walls, especially dynamic ones? To handle them, the robot needs another behavior, the “dynamic wall following” behavior, which allows it to bypass dynamic walls. This is the subject of the next subsection.

    • The “dynamic wall following” behavior is used for avoiding large obstacles or walls. It combines translational and rotational motion simultaneously. The translational motion is derived from the velocity prediction of dynamic obstacles, so that dynamic walls can be avoided even when they are U-shaped, while the rotational motion (the angular velocity) is derived from the rear gap search algorithm. If there is no gap in front of the robot, indicating the presence of a large obstacle or a wall, the robot turns right or left toward the nearest rear gap (Fig. 12) and moves away from the wall. Note that a static wall is a special case of a dynamic wall with zero velocity.

      Figure 12.  Rear gaps

      First, we define:

      $ {G}_{m} = {\rm{Max}} ( {G}_{4} , {G}_{5} , {G}_{6} , {G}_{7} , {G}_{8} ). $

      (30)

      Note that since $ {{G}}_{{{k}}} $ belongs to the interval [0, $ {{D}}_{{{s}}} $], $ {{G}}_{{{m}}} $ belongs to the interval [0, $ {{D}}_{{{s}}} $] too. We can distinguish between two cases:

      1) $ {{G}}_{{{m}}}<{{D}}_{{{s}}} $ means there is no gap on the frontal side of the robot, which indicates the presence of a wall.

      2) $ {{G}}_{{{m}}}={{D}}_{{{s}}} $ means there is a gap on the frontal side of the robot.

      When the robot meets a wall on its frontal side ($ {{G}}_{{{m}}}<{{D}}_{{{s}}} $), it turns right or left toward the nearest possible rear gap based on the rear gap search algorithm (Fig. 13).

      Figure 13.  Flowchart of rear gap search algorithm

      The angle δ corresponding to this gap is employed in the expression of the angular velocity (31), which is illustrated in Fig. 14 and expressed as

      Figure 14.  Angular velocity of the “dynamic wall following” behavior

      $ {\dot{\varphi }}_{l\_{DWF}}=\delta \frac{{\dot{\varphi }}_{{\rm{max}}}}{3\pi }. $

      (31)

      The “dynamic wall following” behavior combines the angular velocity $ {\dot{{\varphi }}}_{{{l}}\_{{D}}{{W}}{{F}}} $ with a translational velocity in the direction of the aggregate velocity of all obstacles $ {{{V}}}_{{{o}}{{b}}{{s}}} $ in (25). If we denote by ($ {\dot{{x}}}_{{{o}}{{b}}{{s}}} $, $ {\dot{{y}}}_{{{o}}{{b}}{{s}}} $) the components of $ {{{V}}}_{{{o}}{{b}}{{s}}} $, then the “dynamic wall following” behavior output can be expressed in the local frame by

      $ {V}_{l\_{DWF}}\left[ {\begin{array}{*{20}{c}}{\dot{x}}_{l\_{DWF}}\\ {\dot{y}}_{l\_{DWF}}\\ {\dot{\varphi }}_{l\_{DWF}}\end{array}} \right]=\left[ {\begin{array}{*{20}{c}}{V}_{{\rm{max}}}{\rm{cos}}\left(\theta \right)\\ {V}_{{\rm{max}}}{{\rm{sin}}}\left(\theta \right)\\ \delta \dfrac{{\dot{\varphi }}_{{\rm{max}}}}{3\pi }\end{array}} \right] $

      (32)

      where

      $ \theta ={{\rm{atan}}}2({\dot{y}}_{obs},{\dot{x}}_{obs}). $

      (33)
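      A sketch of the behavior output of (30)-(33) follows; the rear-gap angle δ is assumed to come from the rear gap search algorithm of Fig. 13, and V_obs from (25).

```python
import math

def dynamic_wall_following(G, delta, V_obs,
                           phi_dot_max=1.0, V_max=1.0, D_s=0.3):
    """Sketch of the "dynamic wall following" output, Eqs. (30)-(33).

    G: the 12 gap clearances of Eq. (20); delta: angle of the nearest
    rear gap; V_obs: aggregate obstacle velocity of Eq. (25)."""
    # Eq. (30): widest frontal gap among G4..G8 (0-indexed 3..7)
    G_m = max(G[3:8])
    if G_m >= D_s:
        return None        # a frontal gap exists: this behavior stays idle
    # Eq. (33): translate along the aggregate obstacle velocity direction
    theta = math.atan2(V_obs[1], V_obs[0])
    # Eqs. (31)-(32): turn toward the rear gap at a rate proportional to delta
    return (V_max * math.cos(theta),
            V_max * math.sin(theta),
            delta * phi_dot_max / (3 * math.pi))
```

      Returning `None` when a frontal gap exists mirrors the two cases distinguished above: the behavior is only meaningful when $G_m < D_s$.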

      Now, having achieved the three previous behaviors, the robot is able to navigate safely to the target while avoiding dynamic obstacles and walls. All this concerns the navigation of a single robot. To perform a task that is difficult or impossible for a single robot, cooperation between several robots is necessary. This cooperation requires coordination between robots, hence the need for another behavior: the “avoid robots” behavior, described in the next subsection.

    • This behavior aims to coordinate a group of robots that must enter and maintain a given formation; it prevents the robots from colliding. It is called “avoid robots” and is activated when the robot detects other robots on its front side. By front side, we mean the side toward which the robot moves: the robot is holonomic and omnidirectional, so it can move in any direction in translation and rotation simultaneously, in contrast to a differential-drive robot, which has only one direction of motion, forward or backward, due to its non-holonomic constraint. Each robot therefore deals only with its frontal side, which reduces the task for each robot. Thus, if a robot detects another robot on its front side moving in the other direction, only the detecting robot triggers this behavior; the other robot ignores the robot located at its rear and deals only with its own front side.

      Consider N robots forming a formation. Let us take two robots $ {R}_{i} $ and $ {R}_{j} $ and suppose that the robot $ {R}_{i} $ detects the robot $ {R}_{j} $ on its front face (Fig. 15).

      Figure 15.  “Avoid robots” behavior

      So, the vector that connects the two origins of the robots is $\overrightarrow{{{{O}}}_{{{i}}}{{{O}}}_{{{j}}}}$ given as follows:

      $ \overrightarrow{{O}_{i}{O}_{j}}=\left[ {\begin{array}{*{20}{c}}{x}_{j}\\ {y}_{j}\end{array}} \right] $

      (34)

      where $ {{x}}_{{{j}}} $ and $ {{y}}_{{{j}}} $ are the coordinates of the robot $ {{R}}_{{{j}}} $ in the local frame of the robot $ {{R}}_{{{i}}} $. The strategy employed in the “avoid robots” behavior is to avoid the detected robot perpendicularly. So, we can form two vectors perpendicular to $ \overrightarrow{{{{O}}}_{{{i}}}{{{O}}}_{{{j}}}} $:

      $ {V}_{i+}=\left[ {\begin{array}{*{20}{c}}-{y}_{j}\\ {x}_{j}\end{array}} \right]\;\;\;{\rm{in}}\;{\rm{the}}\;{\rm{positive}}\;{\rm{sense}}\;{\rm{with}}\;{\rm{9}}{{\rm{0}}^ \circ } $

      (35)

      $ {V}_{i-}=\left[ {\begin{array}{*{20}{c}}{y}_{j}\\ -{x}_{j}\end{array}} \right] \;\;\;\;{\rm{in}}\;{\rm{the}}\;{\rm{negative}}\;{\rm{sense}}\;{\rm{with}}\;{\rm{ - 9}}{\rm{0}}^ {\circ} . $

      (36)

      The direction in which the robot j is located with respect to the robot i is given by

      $ \rho ={\rm{atan2}}({y}_{j},{x}_{j}). $

      (37)

      The robot employs this behavior when it detects a robot on its front side, i.e., if the following condition is satisfied:

      $ -90{^ \circ }<\rho -\widehat{\beta }<90{^ \circ } \;{\rm{and}} \;{Dis}_{ij}-2L\le {D}_{SR} $

      (38)

      where

      $ {Dis}_{ij}=||\overrightarrow{{O}_{i}{O}_{j}}||=\sqrt{{{x}_{j}}^{2}+{{y}_{j}}^{2}} .$

      (39)

      $ \widehat{\beta } $ is the direction angle of motion of the robot i, ${{D}}_{{SR}}$ is the safety distance between the robots as illustrated in Fig. 15 and L is the radius of the robot.

      If condition (38) is satisfied, the robot has two choices: avoid the other robot in the direction of the vector ${{V}}_{{{i}}+}$ or in the direction of the vector $ {V}_{i-} $. One direction must be prioritized, as in traffic at a roundabout. We choose the positive sense, in the direction of the vector $ {V}_{i+} $, and dividing by the magnitude of the vector $ \overrightarrow{{O}_{i}{O}_{j}} $ yields the unit vector:

      $ {V}_{ij}=\frac{1}{\sqrt{{{x}_{j}}^{2}+{{y}_{j}}^{2}}}\left[ {\begin{array}{*{20}{c}}-{y}_{j}\\ {x}_{j}\end{array}} \right]. $

      (40)

      Taking all the robots into consideration, the final vector is

      $ {V}_{i}=\sum _{j=1}^{N-1}{V}_{ij}. $

      (41)

      Equation (41) gives us the direction by which the robot avoids collisions with other robots.

      To make the robot more efficient, we can add the obstacle avoidance property to this behavior: the robot treats other robots as dynamic obstacles while avoiding them perpendicularly using (41). Adding the obstacle contribution previously provided by (27) to the vector of (41), we obtain

      $ {V}_{R}={V}_{i}+{V}_{A}.\;\;\;\;\;\; $

      (42)

      The vector of (42) gives the direction (angle) ${{\alpha }}_{{{A}}{{R}}}$ in which the robot is directed, i.e., the direction of the translational velocity:

      $ {\alpha }_{AR}={\rm{atan}}2({V}_{Ry},{V}_{Rx}). $

      (43)

      To keep the robot pointing toward its target while dealing with other robots, we apply an angular velocity as in the “target tracking” behavior (Section 3.1). This reduces the robot′s task in many situations: after avoiding the robots, it heads directly to its target rather than searching for it, which could otherwise force it to come back and avoid the robots again.

      $ {\dot{\varphi }}_{l\_AR}=\frac{{\dot{\varphi }}_{{\rm{max}}}}{\pi }\beta. $

      (44)

      All of the components of the “avoid robots” behavior in the local frame can be given as

      $ {V}_{l\_AR} \left[ {\begin{array}{*{20}{c}}{\dot{x}}_{l\_AR}\\ {\dot{y}}_{l\_AR}\\ {\dot{\varphi }}_{l\_AR}\end{array}} \right]=\left[ {\begin{array}{*{20}{c}}{V}_{{\rm{max}}}{\rm{cos}}\left({\alpha }_{AR}\right)\\ {V}_{{\rm{max}}}{\rm{sin}}\left({\alpha }_{AR}\right)\\ \dfrac{{\dot{\varphi }}_{{\rm{max}}}}{\pi }\beta\;\;\;\;\;\;\;\;\;\;\;\; \end{array}} \right]. $

      (45)
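      Equations (34)-(45) can be sketched per robot as follows (illustrative only; the detection test follows (38), with β̂ the current direction of motion and β the target bearing of (6)).

```python
import math

def avoid_robots(others, V_A, beta, V_max=1.0, phi_dot_max=1.0,
                 L=0.1, D_SR=0.5, beta_hat=0.0):
    """Sketch of the "avoid robots" behavior output, Eqs. (34)-(45).

    others: (x_j, y_j) positions of the other robots in this robot's
    local frame; V_A: obstacle-avoidance vector of Eq. (27)."""
    V_i = [0.0, 0.0]
    for x_j, y_j in others:
        d = math.hypot(x_j, y_j)                 # Eq. (39)
        rho = math.atan2(y_j, x_j)               # Eq. (37)
        # Eq. (38): react only to robots ahead and within D_SR
        if abs(rho - beta_hat) < math.pi / 2 and d - 2 * L <= D_SR:
            V_i[0] += -y_j / d                   # Eqs. (40)-(41): unit
            V_i[1] += x_j / d                    # vector at +90 degrees
    # Eqs. (42)-(43): combine with obstacle avoidance and steer
    V_Rx, V_Ry = V_i[0] + V_A[0], V_i[1] + V_A[1]
    alpha_AR = math.atan2(V_Ry, V_Rx)
    # Eqs. (44)-(45): keep turning toward the target while avoiding
    return (V_max * math.cos(alpha_AR),
            V_max * math.sin(alpha_AR),
            (phi_dot_max / math.pi) * beta)
```

      A robot detected straight ahead produces a purely sideways avoidance velocity, i.e., the perpendicular escape in the chosen positive sense.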

      Once all four behaviors are defined, they must be coordinated to avoid conflicts. Therefore, a behavior coordinator is needed; it is given in the following subsection.

    • The subsumption architecture proposed by Brooks[46] is adopted for the proposed controller by integrating the four behaviors using the behavior coordinator. The rules that govern the behavior coordinator are provided in the flow chart depicted in Fig. 16.

      Figure 16.  Flowchart of the behavior coordination

      From the flowchart, avoiding obstacles and robots has priority over tracking the target: if there are neither obstacles nor robots, the robot triggers the “target tracking” behavior. Avoiding obstacles has priority over avoiding robots, i.e., if there are both obstacles and robots, the robot considers the robots as dynamic obstacles and triggers the “obstacle avoidance” behavior or the “dynamic wall following” behavior to bypass them; otherwise, if there are robots but no obstacles, the “avoid robots” behavior is triggered.
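      The priority logic described above can be sketched as a simple dispatcher. This is an illustration of the coordinator of Fig. 16; the boolean inputs stand in for the detection tests of the individual behaviors (e.g., the wall test for the $G_m < D_s$ condition of Section 3.3).

```python
def coordinate(obstacle_detected, wall_detected, robot_detected):
    """Sketch of the subsumption-style behavior coordinator of Fig. 16.

    Priorities follow the text: obstacles/walls first (robots seen
    together with obstacles are treated as obstacles), then other
    robots, then the target."""
    if obstacle_detected:
        return ("dynamic_wall_following" if wall_detected
                else "obstacle_avoidance")
    if robot_detected:
        return "avoid_robots"
    return "target_tracking"
```

      Because each behavior is a cheap, stateless mapping from sensors to velocities, this dispatch can run at every control step, which is what gives the architecture its real-time, stimulus-response character.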

    • A simulation platform was developed in Matlab to demonstrate the effectiveness of the proposed approach. The proposed strategy can deal with both static and dynamic obstacles and walls, and with both static and dynamic targets. The simulation section is divided into three subsections: the first concerns single robot navigation, the second deals with dynamic walls, and the third is dedicated to the formation control of a group of robots. The safety distance used is ${{D}}_{{{s}}}$ = 0.3 m. The figures (Figs. 17-22) in these simulations are constructed from sequences of snapshots, numbered increasingly from left to right and from top to bottom. Note that the robot has no prior knowledge about the locations or the velocities of the obstacles; it reacts to environment changes using only the information acquired from the ultrasonic sensors.

      Figure 17.  Tracking the dynamic target moving in an eight-like trajectory in dynamic environment

      Figure 22.  Escaping the local minimum: Avoid the U-like dynamic wall when converging to the target

    • Single robot navigation in a dynamic environment was simulated with a complex scenario (Fig. 17). The robot, initially located at point (1, 0), has to track a target that moves in an eight-shaped trajectory (the target is the red point; its path is in black). The environment is cluttered with fifteen dynamic obstacles that move along different paths. These complex motions ensure multiple encounters with obstacles in narrow spaces.

      Note that the robot has no prior knowledge about the motion of either the target or the obstacles; the only information available comes from the ultrasonic sensors. The robot starts with the “target tracking” behavior (Snapshot 1). It meets obstacles (Snapshot 2), where the “target tracking” behavior is deactivated and the “obstacle avoidance” behavior is activated to avoid the surrounding moving obstacles (Snapshots 3−5). After successfully bypassing the obstacles (Snapshot 5), the robot returns to the “target tracking” behavior (Snapshot 6). It then encounters another set of dynamic obstacles (Snapshot 7), so it leaves the target, deactivating the “target tracking” behavior and triggering the “obstacle avoidance” behavior to escape this complicated situation in a narrow space (Snapshots 7 and 8). The robot returns to the “target tracking” behavior after bypassing the obstacles (Snapshots 9 and 10), then to the “obstacle avoidance” behavior (Snapshots 11−16), and back to the “target tracking” behavior in Snapshots 17−19. In Snapshots 20−22, it uses the “obstacle avoidance” behavior to avoid another obstacle, then the “target tracking” behavior in Snapshots 23 and 24.

    • One advantage of the proposed approach is that it can deal with dynamic walls and dead ends having V or U shapes. This is demonstrated by the scenario of Fig. 18, formed from 24 snapshots numbered 1 to 24. The robot, initially located at point (1, 1), has to track the dynamic target moving in a flower-like path. The environment contains two dynamic walls moving in an oscillating manner, one horizontally and the other vertically, as well as six dynamic obstacles with different motions.

      Figure 18.  Tracking a dynamic target moving in a flower-like trajectory in an environment with two dynamic walls and six dynamic obstacles

      The robot starts with the “target tracking” behavior (Snapshot 1), then falls between the two dynamic walls forming a V-shaped dead end (Snapshot 2). To get out of this trap, the robot switches between the “obstacle avoidance” behavior and the “dynamic wall following” behavior, as illustrated in Fig. 18 (Snapshots 3 and 4). After bypassing the horizontal wall, the robot triggers the “target tracking” behavior, then encounters the vertical wall and an obstacle (Snapshot 5). The robot returns to the “target tracking” behavior (Snapshot 6) to meet the target for the first time, then falls into another V-shaped dead end. In the remaining snapshots, the robot switches between the three behaviors, “target tracking”, “obstacle avoidance” and “dynamic wall following”, to safely navigate the complex dynamic environment. It encounters and tracks the dynamic target many times after bypassing the dynamic walls and obstacles.

      Fig. 23 shows, over time, the speed of the walls and how the robot used the different behaviors to navigate in the environment of Fig. 18. The figure is divided into 10 parts representing different wall situations and the behaviors used; the velocities of the vertical and horizontal walls coincide.

      Figure 23.  Distribution of the use of all behaviors over time during the robot′s navigation in the environment represented in the scenario of Fig. 18. Colored figures are available in the online version.

      In Part 1, the robot begins navigation with the “target tracking” behavior for about 0.56 s, until it encounters an obstacle and activates the “obstacle avoidance” behavior, then instantly switches to the “dynamic wall following” behavior once it identifies this obstacle as a wall. The robot can instantly decide which behavior to activate for the current situation, thanks to the behavioral structure, where simple parallel behaviors allow a real-time response at low computational cost: the robot responds instantly to each situation (stimulus-response). In Part 1, the robot thus switched between the “dynamic wall following” and the “obstacle avoidance” behaviors to avoid the V-shaped wall, consisting of two walls perpendicular to each other and moving in an oscillating motion. The time required to overcome this situation is 1.79 s. Even when the wall increases its velocity, the robot has enough time to avoid it, because the velocity of the wall is less than that of the robot (about half) and the robot can decide instantly.

      In Part 2, after bypassing the V-shaped wall, the robot switches between the “obstacle avoidance” behavior and the “target tracking” behavior. It uses “obstacle avoidance” rather than “dynamic wall following” to follow the vertical wall, since it detects gaps on its front side. After that, it activates the “target tracking” behavior, only to find itself against the horizontal wall, so it returns to the “obstacle avoidance” behavior again, since gaps are available.

      In Parts 3, 5, 7 and 9, the robot switches between the “dynamic wall following” and the “obstacle avoidance” behaviors, while in Parts 4, 6, 8 and 10, the robot switches between the “obstacle avoidance” behavior and the “target tracking” behavior.

      The same holds for all the scenarios of the simulation section. As a result, the robot can make decisions instantly (stimulus-response) and handle complex situations. It can bypass long and fast walls, provided their velocities are less than its own.

    • This subsection is dedicated to the simulation of the formation control. Different scenarios are performed with five robots: one leader and four followers. Three formation types are tested: horizontal line, V-like shape and square. The performance evaluation is carried out at the end of this section.

    • In this formation, the robots have to form the shape of a horizontal line which is parallel to the $ {{x}}_{{l}} $ axis of the leader robot (see Fig. 19). Initially, the leader robot (the green one) is located at point (0, 0), and has to reach the target located at point (4, 4). Follower robots are initially located as follows: the red robot is located at point (1, −1), the yellow robot at point (−1, −1), the magenta robot at point (0.5, 0.5), and finally the cyan robot at point (−1, 1).

      Figure 19.  Horizontal-line formation
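Follower goal positions for such a formation can be obtained by rotating the formation offsets, expressed in the leader frame $ \left({{x}}_{{l}},{{y}}_{{l}}\right) $, into the global frame and adding the leader position. The sketch below assumes a planar leader pose (x, y, θ) and a 1 m spacing; both are illustrative choices, not parameters from the paper:

```python
import math

def follower_goals(leader_pose, offsets):
    """Rotate leader-frame offsets into the global frame and translate
    them by the leader position, yielding one goal per follower."""
    x, y, theta = leader_pose
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * dx - s * dy, y + s * dx + c * dy) for dx, dy in offsets]

# Horizontal-line formation: followers spread along the leader's x_l axis.
line_offsets = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
goals = follower_goals((4.0, 4.0, 0.0), line_offsets)
```

The same helper covers the V-like and square formations of the next scenarios; only the offset list changes.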

    • Five robots are commanded to gather and maintain a V-like formation while tracking a dynamic target moving in a circular trajectory (Fig. 20) and avoiding three dynamic obstacles and one static obstacle. The initial locations of the robots are the same as in the scenario of Fig. 19.

      Figure 20.  V-like shape formation tracks a target moving in a circular trajectory

    • The robots have to form a square: the leader is situated at the center while the four followers have to reach the vertices. The target moves in an eight-like trajectory. The formation has to avoid collision with a dynamic wall located at the middle of the trajectory; the wall is circular with two gates and rotates about its own center. The results presented in Fig. 21 show that the robots are able to reach the formation and maintain it during navigation while avoiding this complex wall.

      Figure 21.  Square shape formation tracking a dynamic target dealing with dynamic wall

    • The performance of the proposed approach has been compared with the approach of [47]; to make the comparison, the same scenarios as those used in [47] are reproduced. One advantage of the proposed approach is that the leader can be either a real robot or a virtual robot; in the latter case, the virtual leader is only responsible for tracking the target.

      The approach proposed by Tsai et al.[47] presents a distributed consensus formation control with collision and obstacle avoidance using fuzzy wavelet neural networks (FWNNs) for a group of networked Mecanum-wheeled omnidirectional mobile robots. In the first scenario (Fig. 24), four follower robots moving in parallel have to follow the virtual leader, which moves along a straight-line trajectory. The initial positions of the followers are (−4 m, −5 m), (−2 m, −5 m), (2 m, −5 m) and (4 m, −5 m). The virtual leader starts from the position (−2 m, −4 m) and goes towards the target located at (4 m, 2 m). Four static obstacles are located at (−4 m, −2 m), (−2 m, −2 m), (2 m, −2 m) and (4 m, −2 m).

      Figure 24.  Line formation

      Compared with the approach of [47], the robots with the proposed approach (Fig. 24(a)) converge to the formation quickly by following the shortest paths, whereas with the approach of [47] the robots reach the formation only after a deviation from the desired path that takes longer (Fig. 24(b)).

      The second scenario is shown in Fig. 25. The robots have to converge to a line-pattern formation while moving in a corridor. The robots with the proposed approach easily and smoothly converge to the formation by following the shortest trajectories. With the approach of [47], the robots also reach the formation, but with deviations from the shortest paths to the targets, as can be seen in the trajectories of the blue and black robots.

      Figure 25.  Line formation in a corridor

      In the third scenario (Fig. 26), the robots have to change the formation from a line pattern to a square pattern. Again, the robots with the proposed approach reach the line formation and change it to the square formation by following the shortest paths, contrary to the approach of [47].

      Figure 26.  Changing the formation from line pattern to square pattern

      A unique advantage of the proposed approach is that it can deal with dynamic walls shaped like the letter “U” (Fig. 22), a case not addressed in the literature, where existing techniques deal only with static U-shaped walls. The robot starts with the “target tracking” behavior (Snapshot 1) and continues toward the target (Snapshots 2 and 3) until it meets the U-like dynamic wall, where it deactivates the current behavior and triggers the “dynamic wall following” behavior (Snapshot 4).

      The robot continues with this behavior (Snapshots 5−7) until it detects a gap on the left side; it then deactivates the “dynamic wall following” behavior and activates the “obstacle avoidance” behavior to use the newly appeared gap to escape from the dead end (Snapshots 8−13). In Snapshots 14 and 15, the robot uses the “target tracking” behavior to reach the target.

      The performance of the proposed approach can be summarized as follows:

      1) The proposed approach is unique in that it can deal with dynamic U-shaped obstacles and walls, while existing approaches deal only with static U-shaped walls.

      2) The proposed approach does not suffer from falling into local minima as shown in Fig. 22.

      3) The robots using the proposed approach easily and smoothly converge to the desired formation; there is no oscillation when reaching the target, unlike in the approach of [47].

      4) The robots with the proposed approach follow the shortest paths; this is due to the behavioral structure, where simple parallel behaviors allow a real-time stimulus-response.

      5) In addition, the proposed approach, though a simple behavioral one, outperforms the approach in [47], which uses artificial intelligence techniques. Moreover, it allows the robot to navigate safely in complex scenarios, such as the dynamic walls mentioned above.

    • An autonomous navigation approach for a three-wheeled omnidirectional mobile robot has been proposed to solve two problems: tracking a dynamic target in an unknown cluttered dynamic environment including dynamic walls, and the formation control of a group of robots based on the leader-follower structure and behavior-based control. To this end, a behavior-based controller has been built from four behaviors: “target tracking”, “obstacle avoidance”, “dynamic wall following” and “avoid robots”. A strategy to predict the velocity of obstacles from two successive measurements of the distance to the obstacle is proposed. The measured rate represents the abscissa component of the obstacle′s relative velocity in the sensor frame, which includes the contributions of both the robot and the obstacle; the pure contribution of the obstacle can then be isolated. The actuator saturation constraint was handled with a simple but effective approach, whereby the robot plans control velocities that avoid actuator saturation. An advantage of the proposed approach is that it can handle any obstacle motion without prior knowledge of its shape, path or velocity. Furthermore, the robot avoids well-known drawbacks such as local minima, oscillation between behaviors and dead ends. Moreover, the robot can deal with dynamic U-shaped and V-shaped walls.
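The velocity-prediction step described above can be sketched as follows. Along the sensor axis pointing at the obstacle, the measured distance rate mixes both motions, $\dot{d} = v_{obs} - v_{robot}$, so the obstacle′s share is recovered by adding back the robot′s own velocity component along that axis. The sign convention and variable names are illustrative assumptions:

```python
def obstacle_velocity_x(d_prev, d_curr, dt, robot_vx):
    """Estimate the obstacle's velocity component along the sensor axis
    from two successive distance measurements taken dt seconds apart.
    robot_vx is the robot's own velocity component along that axis
    (positive toward the obstacle)."""
    d_dot = (d_curr - d_prev) / dt  # relative radial velocity (both motions)
    return d_dot + robot_vx         # isolate the obstacle's contribution
```

For example, a robot closing on a static obstacle at 1 m/s sees the distance shrink at 1 m/s, and the estimate correctly returns zero for the obstacle itself.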

      The behavioral structure of the proposed controller allows a real-time response, where the robot can instantly make a decision (stimulus-response) and deal with complex situations. For a long wall, bypassing it is just a matter of time, since the velocity of the wall is less than the velocity of the robot. For a high-speed wall, whose velocity is close to or greater than that of the robot, it is difficult even for a human being to cope with such a situation. However, this case can be handled by replacing the constant “safety distance” with an adaptive value that increases with the speed of the walls, giving the robot more time to avoid high-speed walls.
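A minimal sketch of such an adaptive safety distance, assuming a static threshold `d_min` and a look-ahead horizon in seconds (both hypothetical parameters, not values from the paper):

```python
def adaptive_safety_distance(d_min, wall_speed, horizon=1.0):
    """Grow the safety distance with the wall's estimated speed so the
    robot starts avoiding fast walls earlier. d_min is the static
    threshold (m); horizon is an assumed reaction time (s)."""
    return d_min + horizon * max(wall_speed, 0.0)
```

With a zero wall speed this reduces to the constant safety distance used in the rest of the paper.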

      A simulation platform has been developed with different simulated scenarios in order to test the robot′s ability to deal with different types of dynamic environments, different types of target motion, and varying numbers and motion types of obstacles, including dynamic walls. Different types of formations have been tested for the formation control of a group of robots. Simulation results have shown the effectiveness of the proposed approach.

      From a practical point of view, if the coordinates of the target are given as geodetic coordinates (latitude, longitude), transformations from Cartesian to geodetic coordinates, and vice versa, are needed.

      For our robot, the local frame $ \left({{{O}}}_{{l}},{{x}}_{{l}},{{y}}_{{l}},{{z}}_{{l}}\right) $ is attached to the robot, and the global frame $ \left({{{O}}}_{{G}},{{x}}_{{G}},{{y}}_{{G}},{{z}}_{{G}}\right) $ is the local East-North-Up (ENU) coordinate system (local ENU), a ground coordinate system fixed to the earth′s surface (Fig. 27). Since the navigation environment spans only a few kilometers, this Cartesian coordinate system has sufficient accuracy, with very little distortion due to the curvature of the earth′s surface. The transformation from WGS84 geodetic coordinates to the local frame of the robot involves the following three steps:

      Figure 27.  Transformations between the local frame of the robot and the geodetic coordinates

      1) Convert geodetic coordinates to Earth-centered Earth-fixed (ECEF) coordinates;

      2) Convert ECEF coordinates to local ENU coordinates $ \left({{{O}}}_{{G}},{{x}}_{{G}},{{y}}_{{G}},{{z}}_{{G}}\right) $;

      3) Convert ENU coordinates $ \left({{{O}}}_{{G}},{{x}}_{{G}},{{y}}_{{G}},{{z}}_{{G}}\right) $ to the robot′s local coordinates.

      The transformation from the local frame of the robot to the WGS84 geodetic coordinates follows the inverse sequence.
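The three steps above can be sketched as follows. The WGS84 ellipsoid constants are standard; the planar robot pose (x, y, θ) in the ENU plane is an assumption of this sketch, not a detail from the paper:

```python
import math

# Standard WGS84 ellipsoid constants
A = 6378137.0                # semi-major axis (m)
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Step 1: geodetic (radians, meters) -> ECEF coordinates."""
    s, c = math.sin(lat), math.cos(lat)
    n = A / math.sqrt(1.0 - E2 * s * s)   # prime vertical radius
    return ((n + h) * c * math.cos(lon),
            (n + h) * c * math.sin(lon),
            (n * (1.0 - E2) + h) * s)

def ecef_to_enu(p, ref_lat, ref_lon, ref_h):
    """Step 2: ECEF -> local ENU about a reference geodetic point."""
    x0, y0, z0 = geodetic_to_ecef(ref_lat, ref_lon, ref_h)
    dx, dy, dz = p[0] - x0, p[1] - y0, p[2] - z0
    sl, cl = math.sin(ref_lat), math.cos(ref_lat)
    so, co = math.sin(ref_lon), math.cos(ref_lon)
    e = -so * dx + co * dy
    n = -sl * co * dx - sl * so * dy + cl * dz
    u = cl * co * dx + cl * so * dy + sl * dz
    return (e, n, u)

def enu_to_robot(enu, robot_x, robot_y, robot_theta):
    """Step 3: ENU -> robot local frame (planar; the up component
    is ignored for a ground robot)."""
    dx, dy = enu[0] - robot_x, enu[1] - robot_y
    c, s = math.cos(robot_theta), math.sin(robot_theta)
    return (c * dx + s * dy, -s * dx + c * dy)
```

The inverse transformation chains the same steps in reverse order (robot frame to ENU, ENU to ECEF, ECEF to geodetic).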

      Future work includes extending the approach to other types of robots, such as non-holonomic mobile robots, and adding vision to the current system, which would improve the proposed approaches in dynamic environments. Combining vision and ultrasonic sensors should give the best results in navigation and environment mapping, and future work will focus on this combination.
