DiGS - Chamin Hewa Koneputugodage

In this episode of the Talking Papers Podcast, I hosted Chamin Hewa Koneputugodage to chat about OUR paper "DiGS: Divergence guided shape implicit neural representation for unoriented point clouds", published in CVPR 2022.
In this paper, we took on the task of surface reconstruction using a novel divergence-guided approach. Unlike previous methods, we do not use normal vectors for supervision. To compensate for that, we add a divergence minimization loss as a regularizer to obtain a coarse shape, and then anneal it as training progresses to recover finer detail (see the sketch below). Additionally, we propose two new geometric initializations for SIREN-based networks that enable learning shape spaces.
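The full objective and annealing schedule are detailed in the paper and its code release; purely as an illustration, the sketch below shows one way a divergence penalty (the Laplacian of the implicit function) could sit alongside a standard point-fitting and eikonal loss in PyTorch. The network `sdf_net`, the loss weights, and the helper names are hypothetical placeholders, not the authors' implementation.

```python
# A minimal, hypothetical sketch (not the authors' code) of combining a
# point-fitting loss, an eikonal term, and an annealed divergence penalty
# for an implicit network. `sdf_net` maps 3D points to a scalar.
import torch


def gradient(y, x):
    """First-order gradient dy/dx, keeping the graph for higher-order terms."""
    return torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True
    )[0]


def divergence_guided_loss(sdf_net, surface_pts, domain_pts, div_weight):
    """Toy loss: surface fit + eikonal term + annealed divergence penalty."""
    domain_pts = domain_pts.requires_grad_(True)

    # The implicit function should vanish on the input point cloud.
    manifold_loss = sdf_net(surface_pts).abs().mean()

    # Eikonal term: gradients of a signed distance function have unit norm.
    grad_f = gradient(sdf_net(domain_pts), domain_pts)
    eikonal_loss = ((grad_f.norm(dim=-1) - 1.0) ** 2).mean()

    # Divergence term: penalise div(grad f), i.e. the Laplacian of f,
    # which favours smooth, coarse solutions early in training.
    laplacian = sum(
        gradient(grad_f[:, i], domain_pts)[:, i]
        for i in range(domain_pts.shape[-1])
    )
    divergence_loss = laplacian.abs().mean()

    # `div_weight` is decayed (annealed) by the training loop over time so
    # that finer surface detail can emerge in later iterations.
    return manifold_loss + 0.1 * eikonal_loss + div_weight * divergence_loss
```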
PAPER TITLE
"DiGS: Divergence guided shape implicit neural representation for unoriented point clouds"
AUTHORS
Yizhak Ben-Shabat, Chamin Hewa Koneputugodage, Stephen Gould
ABSTRACT
Shape implicit neural representations (INRs) have recently been shown to be effective in shape analysis and reconstruction tasks. Existing INRs require point coordinates to learn the implicit level sets of the shape. When a normal vector is available for each point, a higher-fidelity representation can be learned; however, normal vectors are often not provided as raw data. Furthermore, the method's initialization has been shown to play a crucial role in surface reconstruction. In this paper, we propose a divergence-guided shape representation learning approach that does not require normal vectors as input. We show that incorporating a soft constraint on the divergence of the distance function favours smooth solutions that reliably orient gradients to match the unknown normal at each point, in some cases even better than approaches that use ground truth normal vectors directly. Additionally, we introduce a novel geometric initialization method for sinusoidal INRs that further improves convergence to the desired solution. We evaluate the effectiveness of our approach on the tasks of surface reconstruction and shape space learning and show SOTA performance compared to other unoriented methods.
RELATED PAPERS
📚 DeepSDF
📚 SIREN
LINKS AND RESOURCES
💻 Project Page
💻 Code
🎥 5 min video
To stay up to date with Chamin's latest research, follow him on:
🐦 Twitter
👨🏻‍🎓LinkedIn
Recorded on April 1st, 2022.
CONTACT
If you would like to be a guest, sponsor or just share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com
SUBSCRIBE AND FOLLOW
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: https://bit.ly/3eQOgwP
#talkingpapers #CVPR2022 #DiGS #NeuralImplicitRepresentation #SurfaceReconstruction #ShapeSpace #3DVision #ComputerVision #AI #DeepLearning #MachineLearning #deeplearning #A
