Date of Award
12-2023
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Mechanical Engineering
Committee Chair/Advisor
Gang Li
Committee Member
Georges Fadel
Committee Member
Rahul Rai
Committee Member
Zhen Li
Committee Member
Oliver Myers
Abstract
Advances in machine learning algorithms and increased computational efficiency have given engineers new capabilities and tools for engineering design. The presented work investigates using deep reinforcement learning (DRL), a subset of machine learning that teaches an agent to complete a task by accumulating experience in an interactive environment, to design 2D structural topologies. Three unique structural topology design problems are investigated to validate DRL as a practical design automation tool that produces high-performing designs in structural topology domains.
The first design problem attempts to find a gradient-free alternative to solving the compliance minimization topology optimization problem. In the proposed DRL environment, a DRL agent sequentially removes elements from a starting solid material domain to form a topology that minimizes compliance. After each action, the agent receives feedback on its performance through an evaluation of how well the current topology satisfies the design objectives. The agent learned a generalized design strategy that, across various boundary conditions, produced topology designs with compliance minimization performance similar to or better than that of traditional gradient-based topology optimization methods.
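The sequential element-removal environment described above can be sketched in a gym-style interface. This is a minimal illustration, not the dissertation's implementation: the class name, grid size, and reward are hypothetical, and the real environment scores each topology with a finite-element compliance evaluation rather than the volume-fraction proxy used here.

```python
import numpy as np

class TopologyRemovalEnv:
    """Hypothetical sketch of an element-removal design environment.

    State: a binary material grid (1 = solid, 0 = void).
    Action: the flat index of an element to remove.
    Reward: a placeholder proxy (distance to a target volume fraction);
    the actual work evaluates structural compliance via finite elements.
    """

    def __init__(self, nx=4, ny=4, target_fraction=0.5):
        self.nx, self.ny = nx, ny
        self.target_fraction = target_fraction
        self.reset()

    def reset(self):
        # Start from a fully solid design domain.
        self.grid = np.ones((self.ny, self.nx), dtype=int)
        return self.grid.copy()

    def step(self, action):
        y, x = divmod(action, self.nx)
        if self.grid[y, x] == 0:
            # Penalize attempting to remove an already-void element.
            return self.grid.copy(), -1.0, False
        self.grid[y, x] = 0
        fraction = self.grid.mean()
        # Placeholder reward: closeness to the target volume fraction.
        reward = -abs(fraction - self.target_fraction)
        done = fraction <= self.target_fraction
        return self.grid.copy(), reward, done
```

An agent trained against such an interface observes the grid, picks an element to remove, and receives feedback after each action, mirroring the sequential decision loop the abstract describes.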
The second design problem reformulates mechanical metamaterial unit cell design as a DRL task. The local unit cells of mechanical metamaterials are built by sequentially adding material elements according to a cubic Bezier curve methodology. The unit cells are built such that, when tessellated, they exhibit a targeted nonlinear deformation response under uniaxial compressive or tensile loading. Using a variational autoencoder for domain dimension reduction and a surrogate model for rapid deformation response prediction, the DRL environment was built to allow the agent to rapidly build mechanical metamaterials that exhibit a diverse array of deformation responses with variable degrees of nonlinearity.
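The cubic Bezier construction mentioned above can be illustrated with a short sketch: evaluate a cubic Bezier curve from four control points, then mark the unit-cell grid elements the curve passes through. The function names, grid resolution, and rasterization rule are assumptions for illustration only; the dissertation's actual unit-cell methodology differs in detail.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=100):
    """Evaluate a cubic Bezier curve at n parameter values in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

def rasterize(points, grid_size=16):
    """Mark unit-cell elements (hypothetical rule: any element a
    sampled curve point falls into becomes material)."""
    grid = np.zeros((grid_size, grid_size), dtype=int)
    idx = np.clip((points * grid_size).astype(int), 0, grid_size - 1)
    grid[idx[:, 1], idx[:, 0]] = 1  # row = y, column = x
    return grid

# Example: a curve from one corner of the unit cell to the other,
# shaped by two interior control points (values chosen arbitrarily).
pts = cubic_bezier(np.array([0.0, 0.0]), np.array([0.2, 0.8]),
                   np.array([0.8, 0.2]), np.array([1.0, 1.0]), n=200)
cell = rasterize(pts, grid_size=16)
```

Varying the control points changes which elements receive material, giving the agent a compact, continuous action space for building diverse unit-cell geometries.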
Finally, the third design problem expands on the second to train a DRL agent to design mechanical metamaterials with tailorable deformation and energy manipulation characteristics. The agent's design performance was validated by creating metamaterials with a thermoplastic polyurethane (TPU) constitutive material that increased or decreased hysteresis while exhibiting the compressive deformation response of expanded thermoplastic polyurethane (E-TPU). These optimized designs were additively manufactured and underwent experimental cyclic compressive testing. The results showed that the responses of the E-TPU and the metamaterial designed to match E-TPU target properties were well aligned, underscoring the feasibility of designing mechanical metamaterials with customizable deformation and energy manipulation responses. The agent's generalized design capabilities were then tested by designing multiple metamaterials with diverse desired loading deformation responses and specific hysteresis objectives. The combined success of these three design problems demonstrates that a DRL agent can serve as a co-designer, working with a human designer to achieve high-performing solutions in the domain of 2D structural topologies, and merits incorporation into a wide array of engineering design domains.
Recommended Citation
Brown, Nathan, "Deep Reinforcement Learning for the Design of Structural Topologies" (2023). All Dissertations. 3485.
https://open.clemson.edu/all_dissertations/3485