Date of Award
12-2025
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Electrical and Computer Engineering
Committee Chair/Advisor
Lan Zhang
Committee Member
Tao Wei
Committee Member
Luyang Zhao
Abstract
Dynamic 3D video is now widely used in VR, AR, and telepresence, where users often want to change the style or appearance of objects such as clothing across an entire sequence. Editing such dynamic 3D content remains difficult, however, because appearance, geometry, and motion are tightly coupled: even a simple color change must remain consistent across all views and all time steps. Existing editing methods based on 4D Gaussian Splatting (4DGS) typically operate frame by frame, which is slow and makes temporal and multi-view consistency hard to maintain.
This thesis proposes Editable-4DGS, a representation-centric framework that decouples appearance from motion on top of 4DGS. We first reconstruct a 4DGS scene and learn a canonical UV texture that stores the appearance of the 4D geometry. We then use optical-flow-conditioned Gaussian flow to track each Gaussian over time and to compute a UV trajectory for each frame, while the canonical texture itself remains fixed. On top of this, per-Gaussian semantic features allow meaningful regions, such as a shirt, to be selected and retextured in a semantic-aware manner directly in UV space. Editing a dynamic scene thus reduces to changing a single canonical texture, and the system automatically propagates the change to all frames. Experiments on the DyNeRF and Tensor4D datasets show that Editable-4DGS produces more temporally and multi-view-consistent retexturing than the baseline, while also providing better semantic control.
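The central idea, that a single edit to the canonical texture propagates to every frame because each Gaussian re-samples the texture through its per-frame UV coordinates, can be illustrated with a minimal sketch. This is not the thesis's implementation: the array shapes, the nearest-neighbor lookup, and all names are illustrative assumptions.

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbor lookup of per-Gaussian colors from a canonical UV texture.

    texture: (H, W, 3) canonical texture shared by all frames.
    uv:      (N, 2) per-Gaussian UV coordinates in [0, 1) for one frame.
    """
    H, W, _ = texture.shape
    ij = np.clip((uv * [H, W]).astype(int), 0, [H - 1, W - 1])
    return texture[ij[:, 0], ij[:, 1]]

# Hypothetical setup: an 8x8 canonical texture and a 2-frame UV trajectory
# for 2 Gaussians (one UV coordinate per Gaussian per frame).
texture = np.zeros((8, 8, 3))
uv_per_frame = np.array([
    [[0.1, 0.1], [0.9, 0.9]],  # frame 0
    [[0.2, 0.2], [0.8, 0.8]],  # frame 1
])

# A single edit to the canonical texture (paint one region red) ...
texture[0:4, 0:4] = [1.0, 0.0, 0.0]

# ... reaches every frame automatically: each frame just re-samples
# the shared texture along its own UV coordinates.
colors = [sample_texture(texture, uv) for uv in uv_per_frame]
```

In this toy example the first Gaussian lands in the edited region in both frames and picks up the new color, while the second does not; no per-frame editing is involved.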
Recommended Citation
Lyu, Shuai, "Editable 4D Gaussian Splatting: Scalable and Consistent Video Editing via UV-Texture Decomposition" (2025). All Theses. 4663.
https://open.clemson.edu/all_theses/4663