_Really_ impressive demo, but it'd be oh-so-much-more-impressive if it were smooth. Right now, e.g., deleting a word or adding a space causes 4 inferences in quick succession, so it feels janky (EDIT: maybe intentional? steps displayed one by one?)
Btw this is from fal.ai, I first heard of them when they posted a Stable Cascade demo the morning it was released.
They're *really* good; I *highly* recommend them for any inference you're doing outside OpenAI. I've been in AI for going on 3 years, and on it 24/7 since last year.
Fal is the first service that sweats the details to get things running _this_ fast in practice, not just in papers: e.g. a WebSocket connection for requests, and short-lived JWTs so the client doesn't have to round-trip through an edge function that signs each request with an API key, etc.
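A minimal sketch of that token pattern, assuming hypothetical endpoints (this is the shape of the idea, not fal's actual API): your backend, which holds the real API key, mints a short-lived JWT once, and the client then streams prompts straight over the socket with it, with no per-request signing hop.

```python
import asyncio
import json

import httpx        # pip install httpx
import websockets   # pip install websockets


async def main() -> None:
    # One round trip to *your* server, which holds the real API key and
    # returns a JWT that expires in a few minutes (hypothetical endpoint).
    token = httpx.post("https://your-backend.example/realtime-token").json()["token"]

    # Every subsequent prompt goes straight over the WebSocket, authenticated
    # by the short-lived token (hypothetical inference endpoint).
    async with websockets.connect(f"wss://inference.example/realtime?token={token}") as ws:
        for prompt in ["a lighthouse", "a lighthouse at dusk"]:
            await ws.send(json.dumps({"prompt": prompt}))
            result = json.loads(await ws.recv())  # e.g. an image URL or base64 payload
            print(result)


asyncio.run(main())
```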
Good point. If it's this fast, maybe it should generate intermediate images along a smooth path through the latent space, rather than just jumping straight to the target.
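The usual way people walk that path is spherical interpolation (slerp) between the start and target latents, since Gaussian latents concentrate near a hypersphere shell. A minimal numpy sketch with illustrative shapes (nothing here is specific to the demo):

```python
import numpy as np


def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherically interpolate between two latent tensors."""
    a, b = z0.ravel(), z1.ravel()
    # Angle between the two latents.
    cos = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    theta = np.arccos(cos)
    if theta < 1e-6:  # nearly parallel: plain lerp is fine
        return (1 - t) * z0 + t * z1
    out = (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
    return out.reshape(z0.shape)


rng = np.random.default_rng(0)
z_start, z_target = rng.standard_normal((2, 4, 64, 64))  # toy SD-shaped latents
frames = [slerp(z_start, z_target, t) for t in np.linspace(0, 1, 8)]
# Decode each frame with the model's VAE to get the intermediate images.
```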
It's sort of the inverse, if I'm seeing it correctly: adding one character triggers one inference, but you see steps 1, 2, 3, and 4 of that inference.
The "latent space" stuff became popular as a visual allegory, which accidentally muddied the technical term it originated from. There's nothing inherently visually smooth about it: it's not a linear interpolation in 3D space, it's a chaotic journey through a 3-billion-dimensional space.
Well, it ends up being a journey through different images pulled from the same noise, so yes: any smoothness comes more from the degree to which the sampling approach produces similar features when pulled towards slightly different target embeddings than from the images intrinsically being 'neighbors'.
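That "same noise, nudged embedding" effect is easy to see with off-the-shelf tooling. A hedged sketch using Hugging Face diffusers (not the demo's stack; the model id and prompts are just illustrative): pin the initial latents and lerp only the text embedding, so any feature drift comes from the moving target embedding rather than from fresh noise.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def embed(prompt: str) -> torch.Tensor:
    """CLIP text embedding for one prompt."""
    ids = pipe.tokenizer(
        prompt, padding="max_length", truncation=True,
        max_length=pipe.tokenizer.model_max_length, return_tensors="pt",
    ).input_ids.to("cuda")
    return pipe.text_encoder(ids)[0]


e0, e1 = embed("a cabin in winter"), embed("a cabin in summer")

# One fixed noise sample, reused for every frame.
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 64, 64),
    generator=torch.Generator("cuda").manual_seed(42),
    device="cuda", dtype=torch.float16,
)

for t in torch.linspace(0, 1, 5).tolist():
    image = pipe(
        prompt_embeds=(1 - t) * e0 + t * e1,
        latents=latents.clone(),
        num_inference_steps=25,
    ).images[0]
    image.save(f"frame_{t:.2f}.png")
```

The closer the two prompts are, the more shared structure survives frame to frame, which is exactly the sampling-dependent smoothness described above.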
These low-step approaches probably preserve a lot less of the 'noise' features in the final image, so latent-space cruising is probably less fun.
Sure, but it still has to result in a smooth interpolation. If the relation between latent and pixel space isn't continuous, you're gonna have problems during learning.