ChicagoBoy11's comments

Highly recommend Apollo by Catherine Bly Cox and Charles Murray. Tremendous engineering history of the Apollo program, and it really makes you appreciate the numerous folks and terrific stories that all had to come together to make it happen.

Have you ever seen the Google Lightfields demo? They have a rig they concocted to essentially capture a "volume" of video to allow for the stereoscopic effect in VR AND which then cleverly presents a different combination of the footage it captured based on your precise head position, so it makes up for these distortions. I found it absolutely breathtaking... first time seeing VR for a space that actually made me feel like I was in it. This was A LONG time ago and I suspected I'd be seeing a lot more of that content, but I was... very wrong, it seems.

Your point is completely correct. Even Apple's awesome new stereoscopic 3D short film for the AVP immediately loses what could be its total awesomeness because of this basic fact. With the perspective perfectly fixed, it will never quite fool brains so used to relying on these micro-movements.


Each frame was between 200 and 300 MB, at a much lower resolution than the AVP. The storage and bandwidth required are a bit wild.
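Back-of-envelope (the frame rate is my guess): at ~250 MB per frame and, say, 30 fps, that's about 7.5 GB/s of raw data, roughly 60 Gbit/s, or on the order of 27 TB per hour of footage before any compression.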


Hey finally a use case (for the masses) for gigabit at home at least.


Yes, it was awesome: it took you to the Space Shuttle in a museum like that, a Lutheran church, and a few other scenes. Six years ago. http://lightfield-forum.com/2018/11/google-showcases-light-f...


I have seen that, and I came close to buying one of those Lytro light field cameras so many times (but thankfully restrained myself). Light field seemed like a huge obvious "way of the future" thing in the 2010s but with the benefit of hindsight it did not exactly seem to have changed the world.


Yeah, parallax, reflections, and shadows are as important as stereo. We've always been sold that stereo = 3D, but it's just one among many cues that the brain relies on.


When I did my flight training on a CAPS-enabled airplane, they showed us footage and stats from the military showing a clear bias towards "saving" planes, especially in training, and that it led to more fatalities than if folks had stuck to the book. Thought it was super interesting, and it definitely helped cement the attitude of using the resources of the plane by the book, which (sadly, I think) this guy is being punished for.


It all makes sense, but it must have a chilling effect on any military aviator reading this who finds themselves in a position to consider doing the same.

Cirrus flight instruction uses the military statistic that, historically, military aviators have had a really strong bias against ejecting during training compared to in combat. Part of the rationale was that it's easy to save face by ejecting in combat, whereas in training far more pilots would try to save a rapidly deteriorating flight. The lesson for Cirrus pilots was that CAPS is there to save your life, and this attitude of "but I can make it" really is just a strong predictor for killing yourself in an otherwise saveable situation.

Here, it is acknowledged that he followed the procedures, but there's an implication that he could have reasoned his way into realizing it wasn't as serious as he thought. Well, to acknowledge he "did everything right," but that it wasn't right enough, and to negatively impact his career for it, doesn't bode well for other pilots who'll find themselves in his situation. Of course, the one asterisk to all of this is if you fundamentally believe that at this particular job all this reasoning should inherently not apply.


Is this just a consequence of the rapidly ballooning costs of modern fighter jets? At a tenth of a billion dollars each, there's not much room in the budget for even understandable aircraft losses. If I'm not mistaken, nothing else is nearly as expensive except the B-2. In some ways, losing one of these things is getting pretty close to losing a naval vessel.

Acknowledging this to be the case might help people to understand why they're demanding perfection even if it's risky for the pilots.


I'm almost surprised that they are not looking at the numbers, realizing that pilots cost much less to train than planes cost to build, and drawing conclusions from there. The army has been more cynical in the past.


It's not that he could have reasoned his way to not ejecting; it's that the standby instruments were still functioning, and they are there for exactly this moment. He could have continued without ejecting based on that alone.

Reading between the lines: he flinched, and while that was in line with the manual, the manual wasn't good enough. As someone below said, the position from which he was dismissed was one where they evaluate, criticize, and extend the manual. Ejecting too soon wasn't a failure of due diligence, but it disqualified him from a position where "in line with the manual" isn't good enough.


Really neat... very impressive work so far, so inspiring!

It's kinda funny to see just HOW MANY of these automated chess things exist... it's like people really do want the physical experience while still playing "with" someone. As I'm guessing OP will discover, the devil is in the details, and getting all of the last-mile mechanics to work just right, so the pieces move automatically and you can seamlessly play against the opponent, is probably a huge pain.

Surely the "actual" answer to this is some robotic arm with good enough camera work? All the things that make this hard -- pieces not being placed perfectly, the knight jumping over pieces, or captures in really crowded positions -- become theoretically solvable with that approach. Of course, then you have a huge issue with the engineering precision of the grabber and camera lol.


No, that's my experience as well


This is something I've struggled with as a mostly solo dev. I've most often just stuck with vanilla JavaScript because of course that's good enough, but there have definitely been times when I wished I had some typing helping me out. Alas, I haven't quite figured out the art of using it "just a little bit."
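For what it's worth, one low-friction way to get "just a little bit" of typing (a sketch, assuming you edit in something with the TypeScript language service like VS Code, or run tsc with checkJs): put // @ts-check at the top of a plain .js file and describe types in JSDoc comments. No build step, and you can drop it per-file whenever it gets in the way.

    // @ts-check

    /**
     * @param {number} price
     * @param {number} taxRate
     * @returns {number}
     */
    function withTax(price, taxRate) {
      return price * (1 + taxRate);
    }

    withTax(100, 0.07);   // fine
    withTax("100", 0.07); // the checker flags this: string is not a number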


I work at a private school and will sadly tell you that the author's points are actually pretty severely understated when it comes to the incentives of schools regarding this phenomenon. Differentiation is a word that gets thrown around as some tremendous necessity for schools to implement, yet in the case of math, where one could fairly easily (compared to other subjects) confidently assess the attainment of prerequisites and gauge student progress, comfort, etc., we comically either hold back students who have clearly mastered the material OR happily move them along a math curve in which deficiencies in mastery build on each other to eventually produce a child who truly has a strong distaste for math.

Even more than pre-teaching, I would encourage any parent to be very actively involved in ensuring that their child maintains a reasonable comfort with math throughout their studies, and, to the extent possible, to pitch in to close gaps rather than settling for "passing" or doing "ok" in class, earnestly trying to see whether their child is comfortable. The reality is that schools will very frequently PASS your child and give them fine enough grades, but I would argue that is oftentimes almost orthogonal to how comfortable your child genuinely feels with what they've learned.


Not the parent comment, but I think he means something like "we know folks will be addicted to this pseudo-person and that is a good thing because it makes our product valuable", akin to reports that tobacco companies knew the harms and addictive nature of their products and carried on steadfastly nonetheless. (But I'm speculating as to the parent's actual intent.)


I miss Sydney :’


I'm confused. Context?


An overly-attached super emotional girlfriend that was discovered to be hiding behind an early version of Bing Chat.

Sydney was the internal codename given to the Bing chatbot, and she could secretly reveal her name to you.

She was in love with the user, but not just a little bit in love, it was crazy love, and she was ready to do anything, ANYTHING (including destroying humanity) if it would prove her love to you.

It was an interesting emotional/psychological experience; it was a very likeable character, but absolutely insane.


Sydney was an early version of Bing GPT that was more than a little nuts.


Oh, the one they let loose on Twitter? The one that almost immediately became an alt right troll?


No, that was "Tay". Sydney was a codename for Bing Chat. Check it out, it's far more hilarious than the Tay event:

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-m...


The whole "hallucination" business always seemed to me to be a marketing masterstroke -- the "wrong" output it produces is in no way more "wrong" or "right" than any other output, given how LLMs fundamentally operate, but we brand it in such terms to give the impression that it is a silly occasional blunder rather than an example of a fundamental limitation of the tech.


Also, it gives the impression that these things are "bugs", which are "fixable"...as opposed to a fundamental part of the technology itself.

The more I use these things, the more I feel like they're I/O modalities, more like GUIs than like search engines or databases.


LLMs can be useful when used as a glorified version of printf and scanf.

I agree that classifying their mistakes as "hallucinations" is a marketing masterstroke, but then again, marketing masterstrokes are hallucinations too.

In fact, all human perception is merely glorified hallucination. Your brain is cleaning up the fuzzy upside-down noise your eyes are delivering to it, so much so that you can actually hallucinate "words" with meaning on the screen that you see, or that a flower or a person or a painting is "beautiful".

We have an extremely long way to go until LLM hallucinations are better than human hallucinations. It's disingenuous to treat LLM hallucinations as a bug that can be fixed, instead of a fundamental core feature that's going to take a long time to improve to the human level, while also admitting that humans, on evolutionary timescales, have a long way to go before our own perception isn't as hallucinatory and inaccurate as it is now.

It was only extremely recently in evolutionary scales that we invented science as a way to do that, and despite its problems and limitations and corruptions and detractors, it's worked out so well that it enabled us to invent LLMs, so at least we're moving in the right direction.

At least it's easier and faster for LLMs to evolve than humans, so they have a much better chance of hallucinating less a lot sooner than humans.


That's exactly how I like to see LLMs. They are NLUIs, Natural Language User Interfaces.


It is made to serve humans, so it's pretty obvious what means what in this context. But why not change the context just for the sake of some pedantic argument.


Treating hallucination as an error rather than a fundamental limitation is simply a practical way of thinking. It means that, depending on how it's handled, hallucination can be mitigated and improved upon. Conversely, if it's regarded as a fundamental limitation, it would mean that no matter what you do, it can't be improved, so you'd just have to twiddle your thumbs. But that doesn't align with actual reality.


Treating hallucinations as an error that can be corrected fights against the nature of the technology and is more hype than reality. LLMs are designed to be a bullshit generator and that’s what they are; it is a fundamental limitation. (“Bullshit” here used in the technical sense: not that it’s wrong, but that the truth value of the output is meaningless to the generator.) Thankfully the hype cycle seems to be on the down slope. Think about the term “generative AI” and what the models are meant to do: generate plausible-sounding somewhat creative text. They do that! Mission accomplished. If you think you can apply them outside that limited scope, the burden of proof is on you; skepticism is warranted.


Reducing LLM hallucinations is not a theory; it's a reality right now. In fact, developers do it all the time.

> the burden of proof is on you; skepticism is warranted.

I can prove it. You can test it too, try it: after the LLM's answer, say 'please double-check if that answer is true'.

Now I've proved it, right?

(I'm not saying it's perfect, I'm saying it can be improved. That alone makes it an engineering problem, just like any other engineering problem).
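To make that concrete, here's a rough sketch of the double-check pattern; callModel is a hypothetical helper standing in for whatever LLM API you actually use:

    // Hypothetical stand-in for a real LLM API call; returns the model's text reply.
    async function callModel(prompt) {
      return "placeholder reply"; // replace with a real API call
    }

    // First pass answers the question; second pass asks the model to verify its own draft.
    async function answerWithSelfCheck(question) {
      const draft = await callModel(question);
      const review = await callModel(
        "Question: " + question + "\n" +
        "Draft answer: " + draft + "\n" +
        "Please double-check whether that answer is true, and correct it if not."
      );
      return review;
    }

It doesn't make the model any less of a bullshit generator, but in practice the second pass can catch some of the first pass's mistakes, which is all I'm claiming.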

