> So, if traditional game worlds are paintings, neural worlds are photographs. Information flows from sensor to screen without passing through human hands.
I don't get this analogy at all. Instead of through a human, the information flows through a neural network, which alters it.
> Every lifelike detail in the final world is only there because my phone recorded it.
I might be wrong here but I don't think this is true. It might also be there because the network inferred that it is there based on previous data.
Imo this just takes the human out of an artistic process - creating video game worlds - and I'm not sure that's worth achieving.
>I don't get this analogy at all. Instead of through a human, the information flows through a neural network, which alters it.
These days most photos are also stored using lossy compression which alters the information.
You can think of this as a form of highly lossy compression of an image of this forest in time and space.
Most lossy compression is 'subtractive' in that detail is subtracted from the image in order to compress it, so the kinds of alterations are limited. However, there have been non-subtractive forms of compression (e.g., fractal compression) that were criticised for making up details, which is certainly something a neural network will do.

That said, if the network is trained only on this forest data, rather than trained on other data and then fine-tuned, then in some sense it represents only this forest, rather than giving an 'informed impression' the way a human artist would.
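To make the 'subtractive' point concrete, here's a minimal Python sketch (purely illustrative, not any real codec - JPEG and friends quantize DCT coefficients, not raw samples):

```python
# "Subtractive" lossy compression: quantizing samples removes
# detail but never invents it.

def quantize(samples, step):
    """Snap each sample to the nearest multiple of `step`."""
    return [round(s / step) * step for s in samples]

samples = [0.12, 0.47, 0.51, 0.93]  # stand-in for pixel values
coarse = quantize(samples, 0.25)    # fewer distinct values to store

# The reconstruction error is bounded by step/2 -- a subtractive
# codec can blur, but unlike a generative model it cannot
# hallucinate detail that was never in the input.
assert all(abs(a - b) <= 0.125 for a, b in zip(samples, coarse))
print(coarse)  # [0.0, 0.5, 0.5, 1.0]
```

A neural codec has no such error bound: it fills in whatever its training distribution makes plausible, which is exactly the fractal-compression criticism again.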
>These days most photos are also stored using lossy compression which alters the information.
I noticed this in some photos I see online starting maybe 5-10 years ago.
I'd click through to a high res version of the photo, and instead of sensor noise or jpeg artefacts, I'd see these bizarre snakelike formations, as though the thing had been put through style transfer.
The article contains a paragraph about ads and it reads like a nightmare:
> The full-page advertisement you saw on the previous page is an example.
> the paginated website design opens up new possibilities for ad placement and presentation, allowing for innovative approaches in terms of ad type, location, size, and interaction compared to traditional scrolling.
> highlighted and annotated sections often represent the user's key interests, allowing for more targeted and relevant ad placements
> ads embedded in the website will be included in these downloaded PDFs, increasing ad exposure each time the user reviews the PDF file.
> Paid subscribers could download ad-free PDFs, while non-paying users receive PDFs with embedded advertisements, providing a tiered reading experience.
This describes how ads are displayed in paper magazines. Sounds fine to me; I prefer static ads in predictable spots that don't cause page jumps the way "dynamic ads" do on the web.
You're right: the site is SXG-enabled, and the point of the demo is to show that Google can prefetch and serve it even when you're offline — but only if you search for a specific phrase on Google. Random searches won’t trigger the right preload.
The 4-digit code is intentionally hardcoded — it's not a security check, but a way to make sure the user follows the instructions and gives Google a bit more time to prefetch the content before going offline.
The progress bar also helps with timing — making sure the preload has a chance to complete.
After reading your comment, I added a link to a blog post in the final step of the instructions, in case the demo doesn't work as expected. (If the page is cached from an earlier visit, a refresh helps — it's got a long browser cache.)
I saw the code as kind of a clever joke, a "gotcha" that obfuscates the process to make it seem relevant. That was my favorite part of the whole thing: realizing it's not magically sending data over while loading, it's just tricking you.
Love it or hate it, furries make the internet go. A lot of them are very early adopters and developers themselves. Source: I've had more than a few friends in and around the community since I was a teen.
Eh, lemmy is a bit of an exception because it is a left-wing split from reddit, and reddit has leaned very left in recent years. The most obvious one I was thinking of was Voat, which was shut down a while ago.
Yeah, for me, it's just a _little_ bit better of a search experience. Would not expect that to be the case for everyone. Was wondering if that would actually be the case for _most_ people. Glad to see some folks see value in it.
> But there was one downside to AI. AI was like a human which is why you could talk to it like a human. But because it was like a human, it could also be wrong like a human. So I had to make sure that it wasn't getting the answer wrong. For which I would sometimes use Google Search.
I really don't buy that this was written/created by an actual 9yr old.
But this might just be my unhealthy pessimism/skepticism when it comes to stuff on the internet.
Both parents are programmers, and have also written blogs. While the motivation/drive is his own, he has helpful guidance to accomplish what he wants to do.
The blog is his own words. We helped with the outline, and also provided multiple rounds of feedback to ensure good clarity of what he's trying to express. We tried not to interfere too much with his thoughts. The quoted thought "because it was like a human, it could also be wrong like a human" is something he was telling us when he discovered it hallucinated. He doesn't understand what an LLM hallucination is; those were his own words, and I asked him to share that in his blog post.
The code is written on his own. But when he gets stuck, he has us to give him hints. As programmers, we can speed him up significantly by steering him in the right direction.
I'm skeptical too, and I started at age 10 (C64, though, not html+css+javascript).
I dunno if such skepticism is healthy or not, but looking at the source I feel that it contains too many things that need explaining to a 9yo: `DOCTYPE` and all those `META` tags correctly set when they make no difference to the game, why `box-sizing` has to be specified, all those different `display` attributes correctly set for the display that is needed for that element, what the `ease-in-out` means ...
And that's without even getting to the JavaScript stuff: why use const vs let, why use backticks and interpolation, things commented out temporarily instead of removed, the way the code is modularised, etc.
In short, there are too many irrelevant-to-the-output best practices implemented that, I feel (after seeing what a lot of beginner/student programmers produce), demonstrate a level of experience that cannot come from "My First Game".
The signs of an experienced hand in the development are, to me, unmistakable.
Kid's dad here. There's no doubt he had a ton of guidance - both parents are experienced programmers. Many things needed explaining to a 9yo, the same things that would have needed explaining to any new developer.
We went through MANY iterations (test-play/code-review + feedback + dev) before it was released to the public, which meant there was a lot of discussions and lots of opportunities for him to correct many small issues.
Some thoughts:
* DOCTYPE & meta utf-8 - he learned from Khan academy
* meta viewport - I showed him how to test for mobile and pushed to make mobile a priority
* const/let/backticks - he uses prettier in VSCode, which does this automatically
* code modularization - as a result of discussions around maintainability
I started around the same age. Though sadly not with JS and modern browser tech.
The most difficult concepts in use here are arrays and function calls. So quite possible for a clever 9-year-old.
If he were using an entity component system or monads, I would be more skeptical.
After 10 years of mandatory school most states have what is called "Berufsschulpflicht" until you are at least 18 years old.
That means you have to train for a trade, which is not the same as working full-time and is still considered education.