
dc.contributor.advisor: van Veen, Lennaert
dc.contributor.advisor: Tamblyn, Isaac
dc.contributor.author: Beeler, Chris
dc.date.accessioned: 2019-10-28T19:33:30Z
dc.date.accessioned: 2022-03-29T17:27:07Z
dc.date.available: 2019-10-28T19:33:30Z
dc.date.available: 2022-03-29T17:27:07Z
dc.date.issued: 2019-08-01
dc.identifier.uri: https://hdl.handle.net/10155/1108
dc.description.abstract: We discuss the core ideas of reinforcement learning and the importance of its various components. We show how reinforcement learning methods based on genetic algorithms can reproduce thermodynamic cycles without prior knowledge of physics. To demonstrate this, we introduce an environment that models a simple heat engine, in which we optimize a neural-network-based policy to maximize thermal efficiency for several cases. Using a series of restricted action sets in this environment, the policy reproduces three known thermodynamic cycles. We also introduce an irreversible action, yielding a previously unknown thermodynamic cycle that the agent helps discover, showing how reinforcement learning can find solutions to new problems. Finally, we discuss the shortcomings of the method used, the importance of understanding the class of problem being handled, and why some methods apply only to certain classes of problems.
dc.description.sponsorship: University of Ontario Institute of Technology
dc.language.iso: en
dc.subject: Reinforcement learning
dc.subject: Machine learning
dc.subject: Mathematics
dc.subject: Physics
dc.subject: Chemistry
dc.title: Perpetually playing physics
dc.type: Thesis
dc.degree.level: Master of Science (MSc)
dc.degree.discipline: Modelling and Computational Science
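
The abstract above describes optimizing a neural-network policy with a genetic algorithm rather than gradient-based reinforcement learning. The sketch below illustrates that general technique only: a small policy network whose flat parameter vector is evolved by elite selection and Gaussian mutation. The toy tracking task, network sizes, and hyperparameters are illustrative assumptions, not the thesis's heat-engine environment or its actual settings.

    # Minimal sketch of genetic-algorithm policy search (illustrative only;
    # the environment below is a toy stand-in, NOT the thesis's heat engine).
    import numpy as np

    rng = np.random.default_rng(0)

    OBS_DIM, HIDDEN, N_ACTIONS = 2, 8, 3      # tiny one-hidden-layer policy
    N_PARAMS = OBS_DIM * HIDDEN + HIDDEN + HIDDEN * N_ACTIONS + N_ACTIONS

    def unpack(theta):
        """Split a flat parameter vector into the policy's weights and biases."""
        i = 0
        W1 = theta[i:i + OBS_DIM * HIDDEN].reshape(OBS_DIM, HIDDEN); i += OBS_DIM * HIDDEN
        b1 = theta[i:i + HIDDEN]; i += HIDDEN
        W2 = theta[i:i + HIDDEN * N_ACTIONS].reshape(HIDDEN, N_ACTIONS); i += HIDDEN * N_ACTIONS
        b2 = theta[i:]
        return W1, b1, W2, b2

    def act(theta, obs):
        """Greedy action from the small tanh network."""
        W1, b1, W2, b2 = unpack(theta)
        h = np.tanh(obs @ W1 + b1)
        return int(np.argmax(h @ W2 + b2))

    def episode_return(theta, steps=50):
        """Toy episodic task: keep a 1-D state near a target by choosing
        'decrease', 'hold', or 'increase'. Reward is purely illustrative."""
        x, target, total = 0.0, 1.0, 0.0
        for _ in range(steps):
            a = act(theta, np.array([x, target - x]))
            x += (a - 1) * 0.1                 # actions map to -0.1, 0.0, +0.1
            total -= abs(target - x)           # higher return = closer tracking
        return total

    POP, ELITE, GENERATIONS, SIGMA = 64, 8, 40, 0.1
    population = rng.normal(0.0, 1.0, size=(POP, N_PARAMS))

    for gen in range(GENERATIONS):
        fitness = np.array([episode_return(p) for p in population])
        elite = population[np.argsort(fitness)[-ELITE:]]              # keep best policies
        children = elite[rng.integers(0, ELITE, POP - ELITE)]          # resample parents
        children = children + rng.normal(0.0, SIGMA, children.shape)   # Gaussian mutation
        population = np.vstack([elite, children])
        if gen % 10 == 0:
            print(f"gen {gen:3d}  best return {fitness.max():.3f}")

The same loop structure would apply to a heat-engine environment by swapping in its observation, action set, and an efficiency-based return; only episode_return and the network dimensions would change.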

