Hacker News | bertdb's comments

Did you do any inpainting experiments? I can imagine a pixel-space diffusion model to be better at it than one with a latent auto-encoder.


Not yet; we focused on the architecture for this paper. I totally agree with you though: pixel space is generally less limiting than a latent space for diffusion, so we would expect good inpainting performance and good behavior on other editing tasks.
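For intuition, here's a minimal toy sketch of why pixel space makes inpainting so natural: at every denoising step you can directly overwrite the known pixels with a suitably noised copy of the original, which has no direct analogue in a learned latent space. The `denoise_step` callable here is a stand-in stub, not a real model, and the simple linear noise schedule is an assumption for illustration.

```python
import numpy as np

def inpaint(denoise_step, x_known, mask, n_steps=50, rng=None):
    """Toy pixel-space inpainting loop.

    denoise_step(x, t) -> slightly less noisy x (stand-in for a real model).
    mask is 1 where pixels are known, 0 where they must be filled in.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(x_known.shape)  # start from pure noise
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)
        noise_level = t / n_steps
        # Re-impose the known pixels at the current noise level. Doing this
        # per pixel is only possible because we operate in pixel space.
        x_known_noisy = x_known + noise_level * rng.standard_normal(x_known.shape)
        x = mask * x_known_noisy + (1 - mask) * x
    return x
```

At the final step the noise level is zero, so the known region ends up exactly equal to the original pixels while the masked-out region is whatever the denoiser hallucinated.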


Could the NeRF be replaced with a voxel grid, backpropagating to the voxel color values directly? Or is there a reason that wouldn't work?


It should work, and there are tons of new differentiable mesh and volumetric representations to try!
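As a toy illustration of the voxel-grid idea: if the renderer is differentiable in the voxel colors, you can run plain gradient descent on the grid values directly. The orthographic "renderer" below (averaging colors along the z-axis) is a made-up stand-in for real volume rendering; because it is linear in the voxel values, the gradient can be written out by hand.

```python
import numpy as np

def render(voxels):
    # Orthographic "renderer": average voxel colors along the z-axis.
    # A stand-in for real volume rendering; the key property is that it
    # is differentiable in the voxel values, so gradients flow back.
    return voxels.mean(axis=2)

def fit_voxels(target, shape=(8, 8, 8), lr=1.0, steps=200):
    voxels = np.zeros(shape)
    n_z = shape[2]
    for _ in range(steps):
        residual = render(voxels) - target
        # Gradient of 0.5 * sum(residual**2) w.r.t. each voxel is
        # residual / n_z, broadcast over the z-axis.
        voxels -= lr * residual[:, :, None] / n_z
    return voxels
```

Each step shrinks the rendering error by a constant factor, so the rendered image converges to the target.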


Any open source projects with a nice Django/DRF (+ Vue/React frontend) codebase, that roughly follow this style guide?


No, but all the code for the book is on GitHub: https://github.com/Alex3917/django_for_startups


Thanks! It would be great to learn more about your use case; we'll be in touch.


That book is great if you want to go in-depth! If you're a practitioner who wants to get to a trained model as quickly as possible, you're probably better off just following a tutorial. The official Keras tutorial on segmentation looks pretty good [1]. We also have a blog post with code samples on how to set up an image segmentation workflow with Segments.ai and Facebook's detectron2 framework [2].

[1] https://keras.io/examples/vision/oxford_pets_image_segmentat...

[2] https://segments.ai/blog/speed-up-image-segmentation-with-mo...


Thanks, your tutorial seems great!


Thanks for your feedback!

1. If the segment you start dragging from is already selected, all the segments you drag through will get deselected, and vice versa.

2. Did you try changing the granularity of the segments by scrolling your mouse wheel? We've had good experiences with microscopic imagery before, happy to connect and dig a bit deeper.
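For clarity, the toggle behavior in point 1 can be sketched like this (a hypothetical helper, not our actual frontend code): the action applied to every segment in the drag is decided once, by the state of the segment you start from.

```python
def apply_drag(selected, start_segment, dragged_through):
    """If the starting segment is already selected, the drag deselects
    everything it passes through; otherwise it selects everything."""
    selected = set(selected)
    if start_segment in selected:
        selected.difference_update(dragged_through)
    else:
        selected.update(dragged_through)
    return selected
```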


Thanks for the quick reply!

1. Oh, I see. I wouldn't have guessed that's the intended behaviour. I wonder if it's not too clever.

2. Yes, but then the segments get too "excited" about the background noise. I might be able to make it work, but only with loads of manual tweaking, which is, as I understand it, exactly the pain Segments wants to alleviate.


The segments you see on the screen are generated by our ML model. If your data is very noisy, our out-of-the-box model might not be the best fit. We can always improve performance by training a custom model for you on a small set of manually labeled data though.


Happy to listen to what you need, feel free to shoot me an email.


Thanks Brian!


Looking forward to hearing your feedback when you give it a try!


Thanks! The existing tools on the market for image segmentation are not very sophisticated, so it's a niche where we can immediately make a difference.

In a sense, image segmentation labels are strictly more informative than bounding box labels: you can trivially extract the containing bounding box from a segmentation mask. One big reason segmentation labels are not used more often is simply that they are too expensive to produce. Labeling a bounding box requires only two clicks, while labeling a segmentation mask takes much more time with manual tools. We're trying to solve that problem.
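To make the "strictly more informative" point concrete, here is the trivial mask-to-box extraction (a small NumPy sketch; the function name and corner convention are ours):

```python
import numpy as np

def mask_to_bbox(mask):
    """Containing bounding box (x_min, y_min, x_max, y_max), inclusive,
    of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

Going the other way, from a box to a pixel-accurate mask, is of course not possible without re-labeling.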

In the future we want to dig even deeper into this problem, and expand our scope to video and 3D segmentation labeling. We believe there will be a huge need for such tools now that everyone is getting smartphones with Lidar and AR/VR capabilities in their pockets.

