Wow, this is awesome! And it's very nicely presented on the website. I'm wondering how you mapped from the UV to the 3D model — I would like to add that feature to the addon.
TL;DR: using a KD-tree, I find the face containing the UV coordinate. Then I convert the UV coordinate to barycentric coordinates within that containing face, and finally push the resulting point through the local -> world -> view -> perspective transform matrices.
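The barycentric step can be sketched in plain Python (names are illustrative — a real Blender addon would use `mathutils` and the mesh's UV layer; the KD-tree lookup that picks the candidate face is omitted here):

```python
def barycentric(p, a, b, c):
    # Barycentric coordinates (u, v, w) of 2D point p inside triangle
    # (a, b, c), via the standard signed-area formulation.
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return u, v, 1.0 - u - v

def uv_to_local(uv, face_uvs, face_verts):
    # Interpolate the 3D (local-space) position corresponding to a UV
    # coordinate inside one triangular face; the result would then go
    # through the local -> world -> view -> perspective matrices.
    u, v, w = barycentric(uv, *face_uvs)
    return tuple(u * p0 + v * p1 + w * p2
                 for p0, p1, p2 in zip(*face_verts))
```

For example, a triangle with UVs (0,0), (1,0), (0,1) mapped to 3D vertices (0,0,0), (2,0,0), (0,2,0) sends UV (0.25, 0.25) to the local-space point (0.5, 0.5, 0.0).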
A common approach in rendering engines for mapping screen-space coordinates back to objects is to render a second image, with lighting and shadows disabled, in which each object's color uniquely encodes its id. That lets you identify up to 24 bits' worth of objects without needing to maintain a KD-tree.
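The id-encoding half of that trick boils down to a pair of pack/unpack helpers (a minimal sketch with illustrative names — the unlit render pass and pixel readback are engine-specific and not shown):

```python
def id_to_rgb(obj_id):
    # Pack a 24-bit object id into an (r, g, b) byte triple, used as the
    # flat color when rendering the object in the picking pass.
    assert 0 <= obj_id < (1 << 24)
    return (obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF

def rgb_to_id(rgb):
    # Recover the object id from the color sampled at the clicked pixel.
    r, g, b = rgb
    return (r << 16) | (g << 8) | b
```

The round trip is lossless as long as the picking pass is rendered without anti-aliasing, lighting, or color management, so the sampled pixel is exactly the color that was written.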
So many likes, was not expecting that! I will be presenting this work tomorrow at MICCAI and then I will post my presentation link in the README of the repository!
Thank you for your work! Looking forward to the presentation.
I had to look up MICCAI. For others: the 23rd International Conference on Medical Image Computing & Computer Assisted Intervention (4-8 October 2020).
https://www.miccai2020.org/en
Love it!
You should maybe add a link in addition to the video "image". It was not intuitive whether it is an image from a video or a link to a video (I am not used to video "previews" on GitHub).