claiir's comments

"Study [..] found that the drug can reduce brain function"

Causal language in news on correlational (case-control) studies should be a crime.

This is a brazen misrepresentation of the results. The direction of the causal arrow (cannabis -> dumb vs. dumb -> cannabis), or whether there even is a causal arrow (other factor(s) -> cannabis + dumb), is pure editorialization, born of a severe lack of journalistic integrity.


I would also be careful about equating "dumb" with lower brain activity. The study focuses on brain activity, and we can't say for sure that lower brain activity results in dumber life choices either. Does brain activity in those areas == intelligence?

Maybe cannabis users become efficient like MoE models and don't have to activate as many neurons at each inference step :)

Even worse: proponents of the Neural Efficiency Hypothesis[1] might interpret the "mean brain activation" values reported in the study[2] in the exact opposite manner. :)

[1]: https://en.wikipedia.org/wiki/Neural_efficiency_hypothesis

[2]: https://jamanetwork.com/journals/jamanetworkopen/fullarticle...


Glad someone said it. Unfortunately we are swimming against the tide...

"mogged" in an actual piece of journalism... perhaps fitting

> DeepSeek undercut or “mogged” OpenAI by connecting this powerful reasoning [..]


Carvana stock didn't seem to react much at all to that Jan 2 report (higher today than on Jan 1, even)? I wonder whether, and if so how much, Hindenburg lost on that trade.


It seemed to react very positively to the shutdown, though.


An alternative technique called "divergence" [1] (pulling your eyes apart) is significantly less straining on your eyes than crossing them ("convergence" [2]), while being equally effective at spotting differences, even on video. It's also what your eyes naturally do when you watch stereoscopic 3D with tinted glasses: the stereoscopic images are pulled out (divergence), not pushed in (convergence/cross-eyed). I've been doing this since childhood. If you get good at it, you can watch side-by-side 3D videos (e.g. VR footage) in 3D with just your naked eye! I believe there's a subreddit covering the more prurient variety of that.

[1] https://en.wikipedia.org/wiki/Vergence#Divergence

[2] https://en.wikipedia.org/wiki/Vergence#Convergence


This is what I do; the only issue is that I don't have nearly as much "range" with divergence as I do with convergence, so I have to make the pictures as small as possible when using it to line up two images (as opposed to autostereograms, which usually have a much smaller divergence offset).
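
Here's a rough Python sketch of that shrinking step, assuming Pillow; the file names and the 246 px cap (about 6.5 cm at an assumed 96 dpi, i.e. roughly a typical eye separation) are illustrative placeholders, not values from this thread:

    # Sketch only: put two images side by side, downscaled so the
    # centre-to-centre separation stays within ~246 px (~6.5 cm at 96 dpi),
    # since the eyes can't diverge past parallel.
    from PIL import Image

    def side_by_side(path_a, path_b, max_separation_px=246, gap_px=10):
        a, b = Image.open(path_a), Image.open(path_b)
        width = max(a.width, b.width)
        if width + gap_px > max_separation_px:
            # Shrink both images by the same factor so corresponding points
            # end up no farther apart than max_separation_px.
            scale = (max_separation_px - gap_px) / width
            a = a.resize((int(a.width * scale), int(a.height * scale)))
            b = b.resize((int(b.width * scale), int(b.height * scale)))
            width = max(a.width, b.width)
        canvas = Image.new("RGB", (2 * width + gap_px, max(a.height, b.height)), "white")
        canvas.paste(a, (0, 0))
        canvas.paste(b, (width + gap_px, 0))
        return canvas

    # side_by_side("left.png", "right.png").save("pair.png")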


Do you have a training method for divergence?

Something similar to the technique of moving a finger closer and closer to the bridge of your nose, which works for convergence.


It's a little more abstract, since you don't have a handy moving reference object like your finger, but: place the picture in front of something deep, like a long hallway. Look at something in the distance behind the picture, like the end of the hallway, and notice how the edge of the picture becomes a double image. Practice gradually resolving the edge of the picture from a double image back down to a single image, then do the reverse by looking down the hallway again and watching the picture split back into double vision. Keep practicing that until you get a feel for controlling your depth perception, then try holding the same depth as the hallway while you turn your gaze to the picture and repeat the same action with your eyes.


Damn! Reading this, I was surprised by how familiar it sounded.

I actually "practiced" a lot like this, because I was always amused to notice how we can basically "see through" objects with this double-image thing (see the experiment below).

So I decided to film myself... and I was actually already doing divergence, not convergence!

Thanks a lot for your comment, which made me realize that.

Experiment:

1. Place your phone (a handy size/shape for the experiment) in front of one eye (X), at about 20 cm.

2. Close the other eye (Y) and look at your phone.

3. Open Y and look straight ahead without focusing on the phone. By blinking Y, the double image should appear and disappear, as if it were unveiling what's behind your phone.

4. Now close X and, with Y open, look at your phone: you should see it displaced from where it was when X was open and Y was closed. The size of this displacement is equal to the width of the transparent part of the double image.
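
To put rough numbers on step 4: the jump you see when swapping eyes is the binocular parallax between the phone and the background, which is also the angular gap between the two half-transparent phantom images. A quick sketch, where the 6.5 cm eye separation, 20 cm phone distance and 5 m background are assumed example values:

    import math

    IPD = 0.065     # assumed interpupillary distance, metres (~6.5 cm)
    d_phone = 0.20  # phone held at ~20 cm, as in step 1
    d_wall = 5.0    # assumed distance to the background you fixate on

    # Small-angle binocular disparity between phone and background:
    # how far the phone appears to jump when you alternate eyes.
    parallax_rad = IPD * (1 / d_phone - 1 / d_wall)
    print(f"jump ≈ {math.degrees(parallax_rad):.1f} degrees")

    # Separation of the two phantom phone images, projected onto the wall.
    print(f"separation at the wall ≈ {IPD * (d_wall - d_phone) / d_phone:.2f} m")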


This is called "divergence" [1] and is less straining on your eyes than crossing them ("convergence" [2]), while being equally effective at spotting differences, even on video. It's also what your eyes naturally do when you watch stereoscopic 3D with tinted glasses: the stereoscopic images are pulled out (divergence), not pushed in (convergence/cross-eyed). I've been doing this since childhood. If you get good at it, you can watch side-by-side 3D videos (e.g. VR footage) in 3D with just your naked eye. I believe there's a subreddit covering the more prurient variety of that!

[1] https://en.wikipedia.org/wiki/Vergence#Divergence

[2] https://en.wikipedia.org/wiki/Vergence#Convergence


The only problem with divergence is that you can't diverge much farther than the distance between your eyes, whereas convergence works for larger images as well.


Convergence highlighted the differences for me in all four images.

Divergence only worked for me in the cat bear image. For the others, I could see a combined image but I could not see any differences highlighted, even though I knew what to look for.

