
>Quite impressive results nevertheless!

How does it remain impressive? Is this just deference to the institutions of the authors? Or the general aura around AI/ML/DL?

A couple of years ago, when FB trained some kind of architecture to compute derivatives by presenting it with problem-answer pairs, I called it out as "just rote memorization" and got promptly downvoted. Now here we have people using GPT-3 (a network widely recognized to be just memorizing) for a similar task, and people are still impressed? Like you said, their only insight was to augment the data and translate it into a computable (parsable) form.

I'm guessing people don't understand what a paper mill is, especially at these schools, which have access to the gobs of compute you need to make these projects go through. It's junk science - probably not reproducible, definitely not extensible, doesn't transfer out of the sample set - purely for the sake of incrementing the publication count for all involved (cf. the number of authors on the paper). And before people label me a hater: I speak from experience, as someone at one of these types of schools whose name is on several of the same sorts of "turn the crank" papers.

They probably weren't judging the unique contribution here, but rather the system as a whole. If you set out to solve this from scratch, it would be very difficult. It's just recognizing what has been achieved, not just by this group but by the entire field, which made this possible. This isn't an academic conference paper review session.


This is so charitable, thank you. But please tell me: are you this charitable when some derivative Node package gets published? Or Python library? Or when the 100th data science startup is announced?
