Of course, training on synthetic data can't do everything! My main point is: it's been doing a bunch of surprisingly-beneficial things, contra the obsolete belief I was initially responding to – that model outputs are worthless (or even deleterious!) for further training.
But also: with regard to claims about what models "can't experience", such claims are pretty contingent on transient conditions, and expiring fast.
To your examples: despite their variety, most if not all could soon have useful answers collected by largely-automated processes.
People will comment publicly about the "vibe" & "people-watching" – or it'll be estimable from their shared photos. (Or even: personally-archived life-stream data.) People will describe the banana bread taste to each other, in ways that may also be shared with AI models.
Official info on policies, processing time, and staffing may already be public records with required availability; recent revisions & practical variances will often be a matter of public discussion.
To the extent all your examples are questions expressed in natural-language text, they will quite often be asked, and answered, in places where third parties – humans and AI models – can learn the answers.
Wearable devices, too, will keep shrinking the gap between things any human is able to see/hear (and maybe even feel/taste/smell) and that which will be logged digitally for wider consultation.