smlacy's comments

If you're curious what a "real" performance of this piece sounds like, here's a great version: https://www.youtube.com/watch?v=9QKnUzyh84U

For me, this transitioned the piece from "a fun academic exercise" to "an amazing piece of real music".

The video also includes a great explanation of the piece from a musical perspective.


Bruh is all you need

Just wait for the "Switch 2 Lite", which will be the same size as the original Switch and compatible with the original Switch Joy-Cons.

I find the "Can you ..." phrasing used in this demo/project fascinating. I would have expected the LLM to basically say "Yes I can, would you like me to do it?" to most of these questions, rather than directly and immediately executing the action.


If an employer were to ask an employee, "can you write up this report and send it to me" and they said, "yes I can, would you like me to do it?", I think it would be received poorly. I believe this is a close approximation of the relationship people tend to have with ChatGPT.


Depends; the 'can you' (or 'can I get') phrasing appears to be a USA English thing.

Managers often expect subordinates to just know what they mean, but checking instructions and requirements is usually essential and imo is a mark of a good worker.

"Can you dispose of our latest product in a landfill"...

Generally in the UK, unless the person is a major consumer of USA media, "can you" is an enquiry as to capability or whether an action is within the rules.

IME. YMMV.


I'm very curious why you think that! Sincerely. These models undergo significant human-aided training where people express a preference for certain behaviours, and that is fed back into the training process: I feel like the behaviour you mention would probably be trained out pretty quickly since most people would find it unhelpful, but I'm really just guessing.


What distinguishes LLMs from classical computing is that they're very much not pedantic. Because the model is predicting what human text would follow a given piece of content, you can generally expect them to react approximately the way that a human would in writing.

In this example, if a human responded that way I would assume they were either being passive-aggressive or were autistic or spoke English as a second language. A neurotypical native speaker acting in good faith would invariably interpret the question as a request, not a literal question.


In your locality.

I've asked LLM systems "can you..." questions. Surely I'm asking about their capability and allowed parameters of operation.

Apparently you think that means I'm brain damaged?


Surely there are better windmills for you to tilt at.


For sure.

It's basically an observation on expectations wrt regional language differences. HAND.


LLMs are usually not aware of their true capabilities, so the answers you get back have a high probability of being hallucinated.


So far, they seem to be correct answers.

I assume it's more a part of an explicitly programmed set of responses than it is a standard inference. But you're right that I should be cautious.

ChatGPT, for example, says it can retrieve URL contents (for RAG). When it does an inference, it then shows a message indicating the retrieval is happening. In my very limited testing it has responded appropriately. E.g., it can talk about what's on the HN front page right now.

Similarly, Claude.ai says it can't do such retrieval - except through API use? - and doesn't appear to do so either.
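
As a minimal sketch of how one might check such a claim (assuming the openai Python package and an OPENAI_API_KEY in the environment; the model name and the crude string check are purely illustrative): ask the model about a live page, then fetch the same page yourself and compare.

    # Ask a model about a live page, then fetch the page directly and
    # compare. If nothing from the model's answer appears in the real
    # HTML, the claimed retrieval was very likely hallucinated.
    import urllib.request

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Can you tell me the title of the top story on "
                       "https://news.ycombinator.com right now?",
        }],
    )
    claimed = reply.choices[0].message.content or ""
    print("Model claim:", claimed)

    # Ground truth: fetch the page ourselves.
    req = urllib.request.Request(
        "https://news.ycombinator.com",
        headers={"User-Agent": "capability-check/0.1"},
    )
    with urllib.request.urlopen(req) as resp:
        page = resp.read().decode("utf-8", errors="replace")

    # Crude check: which distinctive words from the claim actually
    # appear in the live page?
    hits = [w for w in claimed.split() if len(w) > 4 and w in page]
    print("Words from the claim found in the live page:", hits)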


Cute idea.

Not a single photo of this from any angle other than straight-on, so I presume it's very thick and that such off-angle/oblique photos would be unflattering to the product.


There's a video on the product page here, plus dimensions.

https://timestoptech.com/products/d-20-steel

Case Width: 37 mm
Thickness: 10 mm
Lug-to-Lug: 41 mm
Lug Width: 20 mm
Weight: 48 g


Is there a description or some explanation of what this is?


Maybe it was missing before, or just out of view and not obvious, but currently there’s a ‘more information’ link at the bottom of the page. Requires scrolling on iPad for me to see it.


Algorithmic art


negative times a negative is a positive


It’s just math, duh.


How many volts is it though?


1 volt, 275 amps


If you give a man a fish, he eats for a day. If you teach a man to fish, he eats forever.

These instructions are a fish; they don't really convey the core principles and knowledge that a developer should know.

Yes, it fixes the problem but doesn't really address the "Why?" part at all.


Which is basically a total rip-off of the Stuff Made Here version where the actual engineering process is the highlight, not Rober's shilling. https://www.youtube.com/watch?v=Gu_1S77XkiM


That video is linked literally in the first paragraph of the PDF.

The video also starts with him acknowledging the Stuff Made Here version, and the fact that the developer of that project said he wasn't totally satisfied with his solution.

I think calling it a rip-off is a bit harsh.


He also pointed out that the Stuff Made Here video came out well after they had already started.

