
dc.contributor.advisor: Lang, Haoxiang
dc.contributor.author: Al-Shanoon, Abdulrahman
dc.date.accessioned: 2021-06-01T20:05:54Z
dc.date.accessioned: 2022-03-29T18:10:01Z
dc.date.available: 2021-06-01T20:05:54Z
dc.date.available: 2022-03-29T18:10:01Z
dc.date.issued: 2021-04-01
dc.identifier.uri: https://hdl.handle.net/10155/1311
dc.description.abstract: The exceptional human ability to interact with unknown objects from minimal prior experience is a lasting inspiration to the field of robotic manipulation. The recent revolution in industrial and service robots demands highly autonomous, intelligent mobile manipulators. The goal of this thesis is to develop an autonomous mobile robotic manipulation system that can handle unknown and unstructured objects with minimal training and human involvement. First, an end-to-end vision-based mobile manipulation architecture requiring minimal training on synthetic datasets is proposed. The system includes: 1) an effective training strategy for a perception network that estimates object pose, and 2) integration of the resulting pose estimates as sensing feedback in a visual servoing system to achieve autonomous mobile manipulation. Experiments in simulation and real-world settings demonstrated the efficiency of computer-generated datasets, which generalize to the physical mobile-manipulation task. The model of the presented robot is experimentally verified and discussed. Second, the thesis addresses the challenging scenario of manipulating unknown, adjacent objects using a scalable self-supervised system that learns grasping control strategies for unknown objects from limited knowledge and simple sample objects. The developed learning scheme benefits both generalization and transferability without requiring additional training or prior object awareness. Finally, an end-to-end self-learning framework is proposed that learns manipulation policies for challenging scenarios from minimal training time and raw experience. The proposed model learns from scratch, mapping visual observations to sequential decision-making and manipulation actions, and generalizes to unknown scenarios. The agent composes a sequence of manipulations that purposely leads to successful grasps. Experimental results demonstrated the effectiveness of learning across manipulation actions, dramatically increasing the grasping success rate. The proposed system was validated in both simulation and real-world settings.
dc.description.sponsorship: University of Ontario Institute of Technology
dc.language.iso: en
dc.subject: Autonomous system
dc.subject: Mobile manipulation
dc.subject: Robotic-object interaction
dc.subject: Deep learning
dc.subject: Visual servoing
dc.title: Developing a mobile manipulation system to handle unknown and unstructured objects
dc.type: Dissertation
dc.degree.level: Doctor of Philosophy (PhD)
dc.degree.discipline: Mechanical Engineering

