theschwa's comments

Yeah, looking at that example feels like jumping straight into the deep end of the pool. I think it helps going through a tutorial that breaks down the why of each piece. I really liked this tutorial on it: https://www.typeonce.dev/course/effect-beginners-complete-ge...

Some core things from Effect though that you can see in that Express example:

* Break things down into Services. Effect handles typed dependency injection for services, so you can easily test them and run different versions for testing, production, etc.

* Fibers for threaded execution

* Managing resources with scopes to make sure they're properly closed

I think a lot of these things often aren't truly appreciated until you've had something go wrong, or you've had to build a system to manage them yourself.
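For anyone who hasn't seen the Services piece, here's a minimal sketch of the shape it takes. The `Users` service and its methods are made up for illustration (this is not code from that Express example), and it uses the Effect 3.x `Context.Tag`/`Layer` style as I understand it:

    import { Context, Effect, Layer } from "effect"

    // A service is a Tag plus an interface describing what it can do.
    class Users extends Context.Tag("Users")<
      Users,
      { readonly getUser: (id: string) => Effect.Effect<string> }
    >() {}

    // Production implementation...
    const UsersLive = Layer.succeed(Users, {
      getUser: (id) => Effect.succeed(`real user ${id}`),
    })

    // ...and a test implementation with the same typed interface.
    const UsersTest = Layer.succeed(Users, {
      getUser: (id) => Effect.succeed(`fake user ${id}`),
    })

    // The program only depends on the interface; the type system tracks
    // the requirement until a Layer is provided.
    const program = Effect.gen(function* () {
      const users = yield* Users
      return yield* users.getUser("42")
    })

    // Swap UsersLive for UsersTest in tests; the types stay the same.
    Effect.runPromise(program.pipe(Effect.provide(UsersLive))).then(console.log)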


But I feel like I've worked with massive systems with a lot going on where nothing has gone wrong that this sort of thing would specifically have solved. I think it would just increase the learning curve and lead people to make other kinds of mistakes (business logic or otherwise) because it's so much less readable and understandable. I've seen similar libraries cause much worse bugs in the past because people misunderstand exactly how they work.


Yeah, this looks like the tutorial I needed. Thanks.


I feel like this is particularly interesting in light of their Vision Pro. Being able to run models in a power efficient manner may not mean much to everyone on a laptop, but it's a huge benefit for an already power hungry headset.


I believe they are using the original Gaussian Splatting code from Inria under the hood: https://github.com/graphdeco-inria/gaussian-splatting


Aha, thank you. I was also stuck at the stage of getting point clouds, but there's a video tutorial linked on that page that covers everything from input images to the end product.


"How does “reflection” of the vase on the metal part of the table work, you might ask? The gaussians have “learned” the ages-old trick of duplicating and mirroring the geometry for reflection!"

Thank you! I'd seen people remove some of the SH coefficients while still getting reflections, and I couldn't understand how that worked. The old tricks sometimes really are the best tricks.


Does make you wonder if you'd see a shadowy vase if you look at the table from below.

Although now that I think about it, there's no real reason to not just make the Gaussians invisible from behind.


I think I've seen this in videos, like someone dipping a camera underwater where the reflection is real splats.

But since the splats aren't a surface, they don't exactly have a "behind". They might render from every angle on purpose.


> But since the splats aren't a surface, they don't exactly have a "behind".

Spherical harmonics apply to the whole sphere, so splats can learn an SH color for their "behind". But by definition there is no data for those directions (no camera to tell what the color is). Nothing prevents current pipelines from defining that the opposite direction (splat looking at the camera) has a special color (black, zero alpha, a blend of blurred splats between camera and splat, etc.), or from regularizing all splats so that some "undefined" transparent component is applied where no camera defines the SH.
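To make that concrete, here's a rough sketch of the idea in plain TypeScript, degree-1 SH only. The per-splat `front` direction is an invented stand-in for "the directions cameras actually observed", not something standard 3DGS stores:

    type Vec3 = [number, number, number]

    const SH_C0 = 0.28209479177387814 // constant band
    const SH_C1 = 0.4886025119029199  // degree-1 band

    // Degree-1 real SH evaluation; ordering/signs follow my reading of the
    // reference Inria implementation. sh[0] is the DC term, sh[1..3] are the
    // degree-1 coefficients, each stored per color channel.
    function evalSh1(sh: Vec3[], dir: Vec3): Vec3 {
      const [x, y, z] = dir
      return [0, 1, 2].map((c) =>
        SH_C0 * sh[0][c] - SH_C1 * y * sh[1][c] + SH_C1 * z * sh[2][c] - SH_C1 * x * sh[3][c]
      ) as Vec3
    }

    // The hypothetical "undefined behind" handling described above: fade the
    // splat out as the view direction leaves the observed hemisphere.
    function splatColor(sh: Vec3[], viewDir: Vec3, front: Vec3) {
      const facing = viewDir[0] * front[0] + viewDir[1] * front[1] + viewDir[2] * front[2]
      const alphaScale = Math.max(0, facing) // 0 when seen from the unobserved side
      return { rgb: evalSh1(sh, viewDir), alphaScale }
    }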


> Nothing prevents current pipelines from defining that the opposite direction (splat looking at the camera) has a special color (black, zero alpha, a blend of blurred splats between camera and splat, etc.)

Wouldn't you need to know that there is no other camera view from the opposite direction? (sorry if dumb question, haven't actually looked at how GS are generated from input data)


I can't be considered an expert either, but from what I understand they're currently using spherical harmonics (or some other set of basis functions) to model how a surface emits light in various directions.

However in most cases this will simply cut off as soon as you view a surface from the other side, so it kind of makes sense to add some special handling for that scenario. Especially since it can be hard to properly fit a discontinuity like that.

As it currently stands I imagine the reflection trick would be unable to work if you had a camera view from the other side, which is not ideal.


Oh right, because if we're looking at (say) a ball from two sides, we're not looking at the same splat. So except for very thin 1D-approximating shapes like the spokes of a bike wheel, or pointy convex shapes, most splats will effectively be domes, right?


Can someone help me understand why a CPU-based method like EDT would be used instead of a more GPU-friendly method like jump flood?

It seems like you could use the sub-pixel distances like he did and then feed them into jump flood, but maybe I'm missing something?


Jump flood is multi-pass and approximate, not exact. I wouldn't assume it's faster for small input images, which is what the article is aiming for.

Also the distance transform algorithm in the article could be implemented on a GPU using a thread per row & column rather than per pixel. (At least in CUDA - I’m not immediately certain how to do it in GLSL but I guess someone could do it.) This is not optimal, of course, but parallelizing rows is perhaps a lot better than a single-threaded loop over the image.
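Roughly like this, with plain TypeScript standing in for the CUDA/GLSL version and a brute-force column pass instead of the proper lower-envelope scan: pass 1 only ever reads its own row and pass 2 only ever reads its own column, so each of those loops can be one thread.

    // Sketch of a separable (squared) Euclidean distance transform.
    // seed[y][x] === true marks pixels at distance 0.
    const INF = 1e20

    function distanceTransform(seed: boolean[][]): number[][] {
      const h = seed.length, w = seed[0].length

      // Pass 1: per row, horizontal distance to the nearest seed in that row.
      const rowDist = seed.map((row) => {
        const d = new Array<number>(w).fill(INF)
        let last = -INF
        for (let x = 0; x < w; x++) {        // left-to-right sweep
          if (row[x]) last = x
          d[x] = Math.min(d[x], x - last)
        }
        last = INF
        for (let x = w - 1; x >= 0; x--) {   // right-to-left sweep
          if (row[x]) last = x
          d[x] = Math.min(d[x], last - x)
        }
        return d
      })

      // Pass 2: per column, combine the row results:
      // dist^2(x, y) = min over y' of rowDist(x, y')^2 + (y - y')^2
      const out = Array.from({ length: h }, () => new Array<number>(w).fill(0))
      for (let x = 0; x < w; x++) {
        for (let y = 0; y < h; y++) {
          let best = Infinity
          for (let yp = 0; yp < h; yp++) {
            const dx = rowDist[yp][x]
            const d2 = dx * dx + (y - yp) * (y - yp)
            if (d2 < best) best = d2
          }
          out[y][x] = Math.sqrt(best)
        }
      }
      return out
    }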


It is trivially done with compute shaders in two passes, one vertical, one horizontal.


Most likely because compatibility is more stable across clients.

GPUs aren't universally supported by browsers or available on client devices, and implementing a user-agent/telemetry-based response adds overhead.


But I believe this is being used for his Use.GPU project, which requires not just GPU support but the latest WebGPU support.


We used to have some "disabled" checkboxes, but they caused a ton of confusion and negative user feedback. We switched to toggles with On and Off written on either side, with all verbiage phrased in the positive, and we no longer get questions about it. e.g.: Use "blah blah" data source: Off (--o) On
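Something along these lines (a React/TSX sketch, not our actual component): the setting is phrased positively and both states are written out next to the switch.

    // Sketch of the pattern: positive wording, explicit Off/On labels.
    function LabeledToggle(props: {
      label: string
      value: boolean
      onChange: (v: boolean) => void
    }) {
      return (
        <label>
          {props.label}: <span aria-hidden>Off</span>
          <input
            type="checkbox"
            role="switch"
            checked={props.value}
            onChange={(e) => props.onChange(e.target.checked)}
          />
          <span aria-hidden>On</span>
        </label>
      )
    }

    // e.g. <LabeledToggle label='Use "blah blah" data source' ... />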


Meta and Apple are both working on creating an environment in which they can succeed in AR. The hardware for AR is a good ways off, but I think the current market is more about building the momentum of software, hardware, and ecosystem that Meta and Apple need to succeed with AR. They are both willing to take losses to get there. In that regard, I don't think this is a make or break moment, but I do think that Apple is hoping to put out a product that gives people a better glimpse of their vision for mixed reality and AR.


I believe they've said this was a choice to reduce the size and weight, and it lets them bring the device closer to the user's face. It also means it can stay on comfortably with just the one strap.


As an artist and shader geek, I just want to say that this looks awesome. I'd love to see that blog post on Swift + Electron.


Thank you so much for this! I've always wanted this capability, and I plan on making extensive use of it.


Awesome, glad you like it!

