Vindication! I’ve spent over a decade of my life putting physical interactives into museums. I have preached to (and sold) many museums on the stance that they should offer unique experiences that can’t happen on an iPad at home, with varying degrees of success. The museums that have listened are the ones that continue to be wildly successful to this day.

They are hard to do right, though. I used to compete in combat robotics, and the stresses put on museum exhibits are higher. I tell my new engineers that if their exhibit can be dropped into a gorilla enclosure and survive, it is about halfway strong enough. Little makes up for raw experience in the art of building bomb-proof exhibits, and many companies have failed before getting good. The amateur-hour exhibits from low-bid newcomers that inevitably fail and/or need a lot of expensive maintenance have left a sour taste in a lot of museums' mouths. A lot of those museums have knee-jerked in the opposite direction, toward touchscreen exhibits, only to see their ticket sales slowly drop. Thankfully, I'm seeing the pendulum of the industry swinging back towards physical interactives again.


THANK YOU for fighting this fight. I hope the responses here might add some empirical weight to your arguments — some people apparently do care about this.

And I believe you on how hard the reliability/durability challenges must be in engineering these things — I've seen what the kids do to them.

BTW, I think the mechanisms themselves are no small part of the interest; kids don't just get to see whatever phenomenon is being demonstrated by the device, they get to poke at the thing that does it and try to figure out how it works, and that's a lot of fun for a curious kid; there are layers there.


It's amazing what adults do to things too.

I believe it's actually easier to cope with what kids will do (banging on it, poking at every nook, etc.) than with the many adults who put more force than needed on a common mechanism or button or whatever as they figure it out.

But ultimately, it's about wear and tear.


I think up until about 15 years ago, there was no such negativity against "screens", so it was genuinely seen as something modern to add them, with the added benefits of being more robust (no moving parts) and cheaper to update, so the content could be kept fresh.

Now that both adults and kids spend their days on screens, and are looking to limit their exposure, it suddenly makes less sense to have them in museums.


> A lot of those museums have knee-jerked in the opposite direction, toward touchscreen exhibits, only to see their ticket sales slowly drop.

According to what you've written here, something close to 100% of those touchscreen exhibits should be broken. Are they?


I think they say that because screens are really easy to make bomb proof. You just lock them in a big metal case. Even more points if you interact with them through Kinect because you can now make the layer of hardened glass in front of them a full centimeter thick.

They are probably referring to the much larger driver-facing curved touchscreens for CarPlay/Android Auto that merge with the screen used for the instrument gauges. Also, the driver-assist tech on the newer Kia/Hyundais is very good, especially for highway traffic, needing very little driver intervention.


Embrace, Extend, Extinguish.

Microsoft knew they would never get significant market share unless they offered open source alternatives that let you circumvent the telemetry in the early days of VS Code. Embrace. The acquisition of GitHub was part of this strategy. They built an ecosystem that sucked in a lot of plugin developer talent. Extend. Now the market share is firmly in their grasp and competitors have become weaker. Extinguish.


Microsoft couldn't have telegraphed their intentions more clearly if they tried, yet tons and tons of people and organizations fell for it (again!).

VS Code source is under MIT, but the built product is under an EULA - and all Microsoft extensions are under an EULA that requires the use of the EULA build.

As has been already posted multiple times here... https://ghuntley.com/fracture/


Yeah, the main reason I never switched from Emacs to VS Code is that I was worried about Microsoft's stewardship of it, particularly the fact that the extension ecosystem, which is so critical to a good editor, was encumbered. There have been a lot of discussions about VSCodium's use of the manifest files from the original VS Code without permission, and while that was never enforced, it was never really resolved either.

Sad to see it go in such a predictable direction.


This is it in a nutshell, with a lot of corps: IBM, Microsoft, etc. Be careful who you lie in bed with. Seems like newer companies like Facebook and Google have a much, much better track record. They may end a project, but they don't suck you in and then say "nah, it's proprietary now."


Android slowly became that.

AOSP used to be the complete Android system, more or less. And when you bought a Nexus device from Google, that's what you got. But they progressively abandoned the stock apps, replacing them with their proprietary counterparts, or ones tied to their online services.

Then, they replaced their Nexus line of phones with the Pixel line. Pixels are full of proprietary technology, and their last move was to make Android development private.


AOSP is still fully open source and allows you to build a complete Android system on it, though. There's the open source GrapheneOS, LineageOS, and /e/OS, and the closed source ones on Chinese domestic phones that have their own proprietary versions of Play Services.


Here's a pretty good Linus Tech Tips video where he installs stock AOSP on a Pixel phone and goes over how it's virtually unusable. Just like you say, while the Pixel UI may be Google's vision for how the Android platform should work, they've moved to keeping their UI development private just like every other Android vendor. Meanwhile, stock AOSP has basically been left to rot. https://www.youtube.com/watch?v=-hlRB2izres


However, GrapheneOS is thoroughly de-Googled and it regularly incorporates and benefits from new AOSP releases.


It's still a better experience than a pinephone.


The track record of Facebook and Google may be better because their open-source strategy is to never open things that are core to their business. Projects like React will not give you a competitive advantage to build a Facebook competitor. What a project like React gives to Facebook is marketing and a carrot to bring promising talent to the company.

The issue with VS Code is that it opened the door to many other editors, which, in a sense, drive people away from the Microsoft ecosystem. The combination of VS Code, GitHub, and TypeScript is ideal for MS: they win by attracting companies to GitHub services (which also offer Codespaces, based on VS Code); they also win by attracting users to Copilot, which helps them improve their tools. Creating an editor like VS Code is expensive; they are not paying the core developers because they like giving money away. They do it because it's part of their business strategy. You don't pay for VS Code; companies that subscribe to GitHub services do. A VS Code fork circumvents that strategy.


Eh. Google may be better than Microsoft in this regard, but this is basically what they're doing with Android. AOSP is now lacking a lot of core functionality that comes with Google Pixel phones, such as RCS messaging, emoji reactions to text messages, camera features and photo editing, voicemail transcription, and crash detection. Even the keyboard is worse in AOSP.


GPT 4.5 seems to get it right, but then repeats the 700 pounds:

"A woodchuck would chuck as much wood as a woodchuck could chuck if a woodchuck could chuck wood.

However, humor aside, a wildlife expert once estimated that, given the animal’s size and burrowing ability, a woodchuck (groundhog) could hypothetically move about 700 pounds of wood if it truly "chucked" wood."

https://chatgpt.com/share/680a75c6-cec8-8012-a573-798d2d8f6b...


That answer is exactly right, and those who say the 700 pound thing is a hallucination are themselves wrong: https://chatgpt.com/share/680aa077-f500-800b-91b4-93dede7337...


Linking to ChatGPT as a “source” is unhelpful, since it could well have made that up too. However, with a bit of digging, I have confirmed that the information it copied from Wikipedia here is correct, though the AP and Spokane Times citations are both derivative sources; Mr. Thomas’s comments were first published in the Rochester Democrat and Chronicle, on July 11, 1988: https://democratandchronicle.newspapers.com/search/results/?...


> Linking to ChatGPT as a “source” is unhelpful, since it could well have made that up too

No, it absolutely is helpful, because it links to its source. It takes a grand total of one additional click to check its answer.

Anyone who still complains about that is impossible to satisfy, and should thus be ignored.


I've heard the answer is "he could cut a cord of conifer but it costs a quarter per quart he cuts".


Whatever happened to the transflective LCDs that were popular in carputers in the 2000s? They seem to be a perfect fit for a tablet, and I have been puzzled that no one has jumped on using them in one.

From the transflective LCD Wikipedia page [1]:

"A transflective liquid-crystal display is a liquid-crystal display (LCD) with an optical layer that reflects and transmits light (transflective is a portmanteau of transmissive and reflective). Under bright illumination (e.g. when exposed to daylight) the display acts mainly as a reflective display with the contrast being constant with illuminance. However, under dim and dark ambient situations the light from a backlight is transmitted through the transflective layer to provide light for the display. The transflective layer is called a transflector. It is typically made from a sheet polymer. It is similar to a one-way mirror but is not specular."

[1] https://en.wikipedia.org/wiki/Transflective_liquid-crystal_d...


LEDs replaced CCFL tubes in backlighting. LED backlights were still too dim until around the iPhone 4S or 5, but once they became bright enough and automatic brightness control matured, they quickly eliminated the need for transflective displays.

Transflective displays are also generally "low quality" in the eyes of regular consumers. That drives down margins and eliminates the less flashy options.


The Daylight Computer is a transflective LCD with fancy marketing.


Thanks for pointing that out. I thought it was some sort of variation on an E-ink display because of the black and white limitation. Nothing about transflective tech limits it from full color other than price. I guess that leads me to an evolution of my question: Why are no* tablets using full color transflective displays?

*I did find the HannsNote2 [1] does, but it only came out last year, and this tech has been around for donkey's years.

[1] https://www.hannspree.com/product/hannsnote2


They don't showroom well, so were discontinued outside of nautical usage and special-purpose outdoor devices.

Unfortunate; my Fujitsu Stylistic ST-4110 w/ transflective display was one of my favourite devices ever.



This reminds me of one of my favorite scenes from "The IT Crowd"

https://www.youtube.com/watch?v=12LLJFSBnS4


I'm a big fan of debouncing in hardware with the MAX3218 chip. It debounces by waiting 40 ms for the signal to "settle" before passing it on. This saves your microprocessor's interrupts for other things. It will also work with 12 or 24 volt inputs and happily output 3.3 or 5 V logic to the microprocessor. It is pricey, though, at $6-10 each.


That chip is more expensive than having a dedicated microcontroller that polls all of its GPIOs, performs software debouncing continually, and sends an interrupt on any change.
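
Something like this, sketched in C. To be clear, PORT_IN, BTN_MASK, the ~1 kHz tick, and notify_host() are placeholders I made up for the sketch, not any particular vendor's API:

  /* Polled software debounce on a small dedicated MCU - a minimal sketch.
     Assumed setup: a button bit in a memory-mapped input register PORT_IN,
     a ~1 kHz timer tick calling debounce_tick(), and a hypothetical
     notify_host() that pulses an interrupt line to the main processor. */

  #include <stdint.h>

  #define DEBOUNCE_TICKS 40           /* ~40 ms of required stability    */
  #define BTN_MASK       0x01

  extern volatile uint8_t PORT_IN;    /* placeholder GPIO input register */
  void notify_host(uint8_t state);    /* hypothetical "raise interrupt"  */

  static uint8_t stable_state;        /* last debounced level reported   */
  static uint8_t counter;

  void debounce_tick(void)
  {
      uint8_t raw = PORT_IN & BTN_MASK;

      if (raw == stable_state) {
          counter = 0;                        /* input agrees, reset count */
      } else if (++counter >= DEBOUNCE_TICKS) {
          stable_state = raw;                 /* stable long enough        */
          counter = 0;
          notify_host(stable_state);          /* one edge per real change  */
      }
  }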


Its price is its biggest drawback, but it is also replacing any electronics used to run the switches at 12 or 24 V, which gets you above the noise floor if you are operating next to something noisy like a VFD. From the 6818 data sheet: "Robust switch inputs handle ±25V levels and are ±15kV ESD-protected" [1]

[1] https://www.analog.com/media/en/technical-documentation/data...


My thought is: this introduces latency that is not required (40 ms could be a lot, IMO, depending on the use). It's not required because you don't need latency on the first high/low transition; you only need to block the subsequent ones in the bounce period; there is no reason to add latency to the initial push.

Also (again, depends on the use), there is a good chance you're handling button pushes using interrupts regardless of debouncing.
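
Roughly what I mean, as a C sketch. millis() and the edge-interrupt hookup are stand-ins for whatever the target MCU actually provides, not a specific API:

  /* "Act on the first edge, ignore the rest" - a minimal sketch.
     Assumed setup: a falling-edge interrupt wired to button_isr(), and a
     free-running millisecond counter millis(). */

  #include <stdint.h>

  #define BOUNCE_MS 20u

  extern uint32_t millis(void);       /* assumed ms tick, not a real API  */
  void handle_button_press(void);     /* the actual work                  */

  static uint32_t last_edge_ms;

  void button_isr(void)
  {
      uint32_t now = millis();

      /* Only act if the line has been quiet for at least BOUNCE_MS, so
         the very first edge of a press is handled with no added latency
         and the bounce that follows it is ignored. */
      if ((now - last_edge_ms) >= BOUNCE_MS) {
          handle_button_press();
      }
      last_edge_ms = now;             /* every bounce restarts the window */
  }

One caveat: the release can bounce and produce falling edges too, so in practice you would also check the pin level in the handler or lengthen the window.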


I guess I should rephrase: it saves all the interrupts except the one triggered after the 40 ms delay. For every button press without hardware debouncing, you can have tens to hundreds of 1-to-0 and 0-to-1 transitions on the microcontroller pin. This is easily verified on an oscilloscope, even with "good" $50+ Honeywell limit switches. Every single one of those transitions triggers an interrupt and robs CPU cycles from other things the microprocessor is doing. The code in the interrupt gets more complex because now it has to do flag checks and use timers (bit bashing) every time it is triggered, instead of just doing the action the button is supposed to trigger. None of this is to say one way is the "right" or "wrong" way to do it, but putting the debouncing complexity into hardware specifically designed to handle it, and focusing on the problem I am actually trying to solve in firmware, is my personal preferred way of doing it.


That seems like real overkill - it's a full-blown RS232 receiver _and_ transmitter, including two DC-DC converters (with inductor and capacitor) that you don't even use... Also, "R_IN absolute max voltage" is +/-25V, so I really would not use this in a 24V system.

If you want slow and reliable input for industrial automation, it seems much safer to make one yourself - an input resistor, a hefty diode/zener, a voltage divider, maybe a Schmitt trigger/debouncer made from an op-amp if you want to get real fancy.


Thanks for pointing that out. I realized I called out the wrong chip. I was actually trying to call out the MAX6818.


That's a neat chip, especially the MAX6816/MAX6817 version in the SOT23 package!

But yeah, very expensive for what it does. If my MCU were really short on interrupts, I'd go with an I2C bus expander with 5V-tolerant inputs and an INT output - sure, it needs explicit protection for 24V operation, but it also only needs 3 pins and 1 interrupt.


Edit: I meant to call out the MAX6818, not the MAX3218.


In a monkeys-in-front-of-a-typewriter world, statistically, you are as likely to have a one-off event that matches a specific bit pattern in the underlying format as you would in the encrypted format. It would not be reproducible, though, since most encryption uses nonces.


I really like all the caveats and the time taken to explain things in the first part of that document, but later it starts to rush and gloss over important details and caveats. On page 151 of that link, when it starts talking about using parallax to measure the distance to nearby stars, it says "However, if one takes measurements six months apart, one gets a distance separation of 2AU." This is obviously incorrect, because the whole solar system is orbiting the galactic core, which itself is moving with respect to the CMB rest frame. I did a quick calc based on the 552.2 km/s galactic velocity value from the Milky Way wiki [1] and found that the solar system moves roughly an additional 58 AU in 6 months. I am assuming that this has been accounted for by scientists and is being simplified to make it more digestible for the reader, but it hides a rather large dependency for every higher rung on the cosmic distance ladder: a cosmic velocity ladder that seems to be based off of Doppler CMB measurements [2]. If we are indeed using measurements many months apart and under- or overestimating our velocity through the universe, even a little bit, every higher rung of the ladder would be affected, wouldn't it?
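
To make that quick calc explicit, here is a tiny C sketch (it assumes straight-line motion at the 552.2 km/s figure over half a Julian year, with the IAU value for 1 AU; a real treatment would project this onto the line of sight):

  /* Back-of-the-envelope: how far does the whole solar system travel
     between two parallax measurements taken 6 months apart, taking the
     552.2 km/s CMB-frame velocity from the Milky Way wiki at face value? */

  #include <stdio.h>

  int main(void)
  {
      const double v_km_per_s  = 552.2;                    /* assumed bulk velocity */
      const double half_year_s = 0.5 * 365.25 * 86400.0;   /* half a Julian year    */
      const double au_km       = 1.495978707e8;            /* 1 AU in km (IAU)      */

      double travelled_au = v_km_per_s * half_year_s / au_km;

      printf("textbook parallax baseline:    2.0 AU\n");
      printf("extra bulk motion in 6 months: %.1f AU\n", travelled_au);
      /* prints roughly 58 AU, far larger than the 2 AU baseline itself */
      return 0;
  }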

In the process of writing this, I thought, "Surely we have launched a satellite pair that can take parallax measurements at similar times in different places!" They could range off of each other with time-of-flight, be positioned much further apart than a few AU, and take parallax star measurements at more or less the same time without atmospheric distortion, but it doesn't seem like we have. Both Hipparcos and Gaia were satellites deployed to measure parallax, but not as a pair. My reading suggests they used multi-epoch astrometric observations (speed ladder dependent) to generate their parallax measurements, and it seems our current parallax and star catalogues are based on the measurements taken by these two satellites. New Horizons got the most distant parallax measurements by comparing simultaneous* Earth observations, but it was limited to Proxima Centauri and Wolf 359, far from a full star catalogue.

I would love it if someone more knowledgeable could steer me towards a paper or technique that has been used to mitigate the cosmic distance ladder's dependency on this cosmic speed ladder. Regardless of how certain we think we are of our velocity through the universe, it seems to me that sidestepping that dependency through simultaneous* observations would be worthwhile, considering how dependency-laden the cosmic distance ladder already is.

[1] https://en.wikipedia.org/wiki/Milky_Way

[2] https://arxiv.org/pdf/astro-ph/9312056

* Insert relativity caveat here for the use of "simultaneous". What I mean in this context is more simultaneous than waiting months between measurements.


I wish Supah Valves [1] were still available so I could recommend them. They use a smaller sprinkler valve as a pilot to move a much larger piston-based valve _very_ quickly. That lets a lot more of your reservoir pressure reach the barrel before your projectile has shot out the end. The result? More range. More grins.

[1] https://www.spudtech.com/store/index.php?main_page=product_i...

