Please use this identifier to cite or link to this item:
http://cimat.repositorioinstitucional.mx/jspui/handle/1008/1122
VISUALLY-GUIDED HUMANOID WALKING PATTERN GENERATION
NOE GUADALUPE ALDANA MURILLO
Open Access
Attribution-NonCommercial
COMPUTER SCIENCE
This thesis addresses the problem of navigation of a humanoid robot within a visual memory, using vision as the main source of perception of the environment. Navigation based on a visual memory typically involves three distinct stages. First, in the learning stage, the robot builds a representation of the initially unknown environment as a set of key images that forms the so-called visual memory. In the localization stage, an image corresponding to the destination location (the objective image) and the image currently observed by the camera mounted on the robot are given as inputs, and the robot must be localized within the visual memory. Then, in the autonomous navigation stage, the robot has to reach the location associated with the destination image by following a visual path. Specifically, we focus on the localization stage and on the autonomous navigation stage.

For the appearance-based localization of a humanoid robot, the main contribution is a vocabulary specifically designed to deal with the issues generated by humanoid locomotion. A hierarchical visual bag of words (VBoW) approach is used to achieve this goal. This algorithm represents an image as a numerical vector, in the form of a histogram of visual words, which allows fast image comparisons.

The main contribution in the autonomous navigation stage is to show how visual constraints, particularly homographies and epipolar geometry, can be tightly integrated into the locomotion controller of a humanoid robot to drive it from one configuration to another by means of images only. The visual errors generated by these constraints are stacked as terms of the objective function of a Quadratic Program, so that the final target of the robot is specified by a reference image. By using visual constraints instead of specific feature points, we avoid the feature occlusion problem.

Three applications of the visual navigation strategy are presented: simultaneous locomotion and visual obstacle avoidance, visual path following, and navigation in a straight corridor. First, we propose a framework to handle obstacle avoidance within our scheme for autonomous navigation. In the second problem, we extend the image-based strategy to the problem of a humanoid robot following a visual path, which allows the robot to execute much longer paths than when using just one reference image. Finally, we present an approach in which we add a rotational degree of
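This page does not include implementation details, but the bag-of-words representation described in the abstract can be illustrated with a minimal, non-hierarchical sketch. Everything below is assumed for illustration: the vocabulary size, the descriptor dimensions, and the random data standing in for local image descriptors (e.g. ORB or SIFT). The actual approach in the thesis uses a hierarchical VBoW with a vocabulary built specifically for humanoid locomotion, which this sketch does not reproduce.

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word (Euclidean distance)."""
    # descriptors: (n, d), vocabulary: (k, d) -> word index per descriptor, shape (n,)
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def bow_histogram(descriptors, vocabulary):
    """Represent an image as a normalized histogram of visual words."""
    words = quantize(descriptors, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def similarity(hist_a, hist_b):
    """Histogram intersection: higher means the two images share more visual words."""
    return np.minimum(hist_a, hist_b).sum()

# Toy usage: random data stands in for real descriptors of the current camera image
# and of one key image stored in the visual memory.
rng = np.random.default_rng(0)
vocab = rng.normal(size=(50, 32))          # 50 visual words, 32-D descriptors (assumed)
img_current = rng.normal(size=(200, 32))   # descriptors of the current image
img_key = rng.normal(size=(180, 32))       # descriptors of a key image in the memory
score = similarity(bow_histogram(img_current, vocab),
                   bow_histogram(img_key, vocab))
print(f"similarity to key image: {score:.3f}")
```

Comparing the current histogram against all key-image histograms and keeping the best matches is what makes appearance-based localization fast, since each comparison is a cheap vector operation rather than a descriptor-by-descriptor match.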
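For the navigation controller, the abstract only states that visual errors are stacked as terms of the objective of a Quadratic Program inside the locomotion controller. The sketch below is a hedged illustration of that stacking idea with made-up task dimensions and random data; it solves only the unconstrained least-squares core with NumPy instead of a full QP with the walking constraints used in the thesis.

```python
import numpy as np

def stack_visual_tasks(tasks):
    """Stack weighted linear task models J u = e into one least-squares system."""
    J = np.vstack([w * Ji for w, Ji, ei in tasks])
    e = np.concatenate([w * ei for w, Ji, ei in tasks])
    return J, e

# Toy example: two visual error terms (e.g. a homography-based and an
# epipolar-based error) driving a 4-component walking command.
rng = np.random.default_rng(1)
tasks = [
    (1.0, rng.normal(size=(3, 4)), rng.normal(size=3)),  # homography-based error term
    (0.5, rng.normal(size=(2, 4)), rng.normal(size=2)),  # epipolar-based error term
]
J, e = stack_visual_tasks(tasks)

# Unconstrained minimizer of ||J u - e||^2; a real walking-pattern generator would
# add inequality constraints (feasible steps, joint limits) and use a QP solver.
u, *_ = np.linalg.lstsq(J, e, rcond=None)
print("command minimizing the stacked visual errors:", u)
```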
25-03-2021
Doctoral thesis
OTHER
Accepted version (acceptedVersion)
Appears in collections: Tesis del CIMAT
| File | Description | Size | Format |
|---|---|---|---|
| TE 828.pdf | | 38.11 MB | Adobe PDF |