FragmentDiff: A Diffusion Model for Fractured Object Assembly
Description
Fractured object reassembly is a challenging problem in computer vision and graphics, with applications in industrial manufacturing and archaeology. Traditional methods based on shape descriptors and geometric registration often struggle with ambiguous features, resulting in reduced accuracy. To address this, we propose a novel approach inspired by diffusion models and 3D transformers. Our method combines diffusion denoising with a 3D transformer to predict the pose parameters of each fragment. We evaluate our approach on a fractured object dataset and demonstrate superior performance compared to state-of-the-art methods. Our method offers a promising solution for accurate and robust fractured object reassembly, advancing complex shape analysis and assembly tasks in computer vision.
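The listing itself includes no code, but as a rough illustration of the idea the abstract describes, here is a minimal PyTorch sketch: a transformer denoiser that predicts the noise on per-fragment pose parameters (3D translation plus a quaternion), driven by a standard DDPM reverse step. All names (`PoseDenoiser`, `reverse_step`), dimensions, and the noise schedule are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PoseDenoiser(nn.Module):
    """Transformer that predicts the noise added to per-fragment pose
    parameters (3-D translation + quaternion = 7 values per fragment).
    Hypothetical architecture, not the paper's actual network."""

    def __init__(self, feat_dim=256, pose_dim=7, num_layers=4, num_heads=8):
        super().__init__()
        self.pose_embed = nn.Linear(pose_dim, feat_dim)
        self.time_embed = nn.Sequential(
            nn.Linear(1, feat_dim), nn.SiLU(), nn.Linear(feat_dim, feat_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.noise_head = nn.Linear(feat_dim, pose_dim)

    def forward(self, noisy_poses, frag_feats, t):
        # noisy_poses: (B, N, 7) noised poses for N fragments.
        # frag_feats:  (B, N, feat_dim) per-fragment shape features
        #              (e.g. from a point-cloud encoder, assumed given).
        # t:           (B,) integer diffusion timesteps.
        h = self.pose_embed(noisy_poses) + frag_feats
        h = h + self.time_embed(t.float().unsqueeze(-1)).unsqueeze(1)
        return self.noise_head(self.encoder(h))  # predicted noise, (B, N, 7)


@torch.no_grad()
def reverse_step(model, poses_t, frag_feats, t, betas):
    """One standard DDPM reverse step applied to the pose parameters."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    t_batch = torch.full((poses_t.size(0),), t, dtype=torch.long)
    eps = model(poses_t, frag_feats, t_batch)
    # Posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
    mean = (poses_t - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) \
           / torch.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + torch.sqrt(betas[t]) * torch.randn_like(poses_t)


# Example: denoise random initial poses for 8 fragments over T steps.
if __name__ == "__main__":
    T, B, N = 100, 1, 8
    betas = torch.linspace(1e-4, 0.02, T)
    model = PoseDenoiser()
    frag_feats = torch.randn(B, N, 256)   # stand-in for learned shape features
    poses = torch.randn(B, N, 7)          # start from Gaussian noise
    for t in reversed(range(T)):
        poses = reverse_step(model, poses, frag_feats, t, betas)
```

In a real system the quaternion component would be re-normalized after each step (or rotations handled on SO(3) directly); like everything above, this sketch simplifies those details.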
Event Type
Technical Papers
Time
Tuesday, 3 December 2024, 9:00am - 12:00pm JST
Location
Hall C, C Block, Level 4