Our system architecture is organized as shown in the diagram below, with each box performing an independent role.
First, the optimized robot state is estimated from the robot's proprioceptive sensors (IMU, joint encoders) and exteroceptive sensor (LiDAR). (Green box)
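The text does not specify the estimator used, so the following is only a minimal sketch of how proprioceptive and exteroceptive information might be fused, assuming a linear Kalman filter over the base pose with an identity measurement model; the state layout and noise values are placeholders.

```python
import numpy as np

class StateEstimator:
    """Illustrative pose fusion: IMU/encoder odometry predicts, LiDAR corrects."""

    def __init__(self, dim_x=6):
        self.x = np.zeros(dim_x)        # base pose estimate [x, y, z, roll, pitch, yaw]
        self.P = np.eye(dim_x)          # estimate covariance
        self.Q = np.eye(dim_x) * 1e-3   # process noise (assumed value)
        self.R = np.eye(dim_x) * 1e-2   # LiDAR measurement noise (assumed value)

    def predict(self, odom_delta):
        # Propagate with odometry integrated from IMU + joint encoders.
        self.x = self.x + odom_delta
        self.P = self.P + self.Q

    def correct(self, lidar_pose):
        # Correct odometry drift with a LiDAR-derived pose fix (H = I here).
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (lidar_pose - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P
```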
Next, we generate an A*-based global path to the goal point on the prebuilt map, and compute the optimized reference states and commands for the robot using an MPC-based local planner. (Yellow box)
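A* is named explicitly in the text, so a compact grid-based version is sketched below; the actual map representation and edge costs are not given, and the 4-connected occupancy grid with a Manhattan heuristic is an assumption. The MPC local planner that tracks the resulting path is too involved for a short sketch and is omitted.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = occupied; start/goal: (row, col) cells."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # lazy deletion of stale entries
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no path found
```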
Locomotion is performed by selecting among gaits such as walking, low-posture walking, and jumping, according to the input command and the given situation. (Blue box)
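The gait names come from the text, but the switching logic is not described; the sketch below assumes a simple rule-based selector driven by hypothetical perception inputs (`ceiling_height_m`, `gap_ahead_m`) and placeholder thresholds.

```python
from enum import Enum, auto

class Gait(Enum):
    WALK = auto()
    LOW_POSTURE_WALK = auto()
    JUMP = auto()

def select_gait(ceiling_height_m, gap_ahead_m,
                low_clearance_thresh=0.45, jump_gap_thresh=0.3):
    # Jump over gaps, duck under low obstacles, otherwise walk normally.
    # All thresholds and trigger conditions here are assumptions.
    if gap_ahead_m is not None and gap_ahead_m > jump_gap_thresh:
        return Gait.JUMP
    if ceiling_height_m is not None and ceiling_height_m < low_clearance_thresh:
        return Gait.LOW_POSTURE_WALK
    return Gait.WALK
```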
Additionally, we use predefined key regions and the robot's sensor information to perform various missions such as jump detection, dynamic obstacle avoidance, and traffic signal detection. (Red box)
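As one concrete example of a mission using a key region, the sketch below shows how traffic signal detection might be done with a simple HSV color threshold over a region of interest; the real detector, ROI selection, and all threshold values are assumptions, not the method described in the text.

```python
import cv2

def detect_signal(bgr_image, roi):
    """roi: (x, y, w, h) key region where the signal is expected."""
    x, y, w, h = roi
    hsv = cv2.cvtColor(bgr_image[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in OpenCV's 0-179 hue range, so combine two bands.
    red = (cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
           | cv2.inRange(hsv, (170, 120, 80), (180, 255, 255)))
    green = cv2.inRange(hsv, (40, 120, 80), (90, 255, 255))
    if cv2.countNonZero(red) > cv2.countNonZero(green):
        return "red"
    return "green" if cv2.countNonZero(green) > 0 else "unknown"
```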
