This makes no sense... consider these scenarios:

A) You work for a US company, earn money from the US company, pay income taxes in the US, live and spend money (and thus sales taxes) in the US

B) You work for a US company, earn money from the US company, pay income taxes in the US, but live and spend money (and thus sales taxes) in Japan

Clearly (B) is better for Japan economically? I think these laws are mostly enforced out of inertia rather than for any rational reason.


> Clearly (B) is better for Japan economically?

Scenario B is amazing for the US. I don't see how it's clearly better for Japan. I don't know about you but I pay far more in income tax than sales tax. You spend money but you also consume government services and infrastructure while paying less in tax to Japan than a resident employed in Japan would.


But in scenario (B) you're spending money in Japan, basically you're directly injecting US money (your US salary) into the Japanese economy. Don't see why it's "amazing" for the US and not for Japan.


> Don't see why it's "amazing" for the US

Because you're paying US income taxes while consuming next to no US government services or infrastructure.

> But in scenario (B) you're spending money in Japan

Anyone who lives and works in Japan spends money in Japan. What's great about that? Most of those people also pay taxes.

> basically you're directly injecting US money (your US salary) into the Japanese economy

Japan might say: if this US company doesn't mind someone working from Japan and paying them an American salary, why not a person who already lives there and pays taxes there? That's obviously better than someone new who doesn't pay taxes there.


I think you're both right in a way, and what is maybe relevant is the duration.

If someone lives full time permanently in another country working remotely, they probably are already actually a tax resident of that country and would typically pay tax to that country.

What the country doesn't want is someone traveling and in the country for a few months and then taking a local job that could have been taken by a citizen while also not being a tax resident, which is what the work restrictions on visas are intended to prevent.

But if someone is traveling and in the country for a few months, and works remotely while there, it really makes little difference to the country compared to any other tourist, other than the fact that the visitor now has access to more funds to spend in the country while there. But visas don't support this case well.


Oh, I built something like this as a hobby project a few years ago! Still online (with a now-expired cert...), but very likely to go down with even a bit of usage : ) Still, here it is: https://fasterbadger.com


I remember actually looking forward to new OS versions as a teenager. DOS 6.00 was genuinely exciting! What a contrast to the forced updates of today.


Hindenburg jokes aside, how do you ensure safety? Even if you only inspect infrastructure away from human habitation, there's still the risk of forest fires and such if there is an accident.


I've flown hydrogen balloons before (just because it was cheaper than helium--although you do need different fittings for the tank). I've also lit them on fire just to see what happened.

I don't think they're as dangerous as people think. If they ignite they go up in a whoosh, not a bang. The only debris is your payload, now falling. So as long as your payload is not also made out of flammable material (as was the case with the Hindenburg), I don't think it's any more of a fire threat than having power lines near trees is in the first place. Up is conveniently the right direction for a ball of flame.

Of course all of this goes out the window if you let it become entangled in a tree...


That's awesome! In what situation did you have the opportunity to fly a H2 balloon?

Do you have any video you could share of lighting the balloon on fire?


A bunch of friends and I wanted to get video of a balloon's flight as it approached outer space, popped, and descended. In violation of FAA rules, we used a cellphone which we had embedded in a styrofoam box with hand warmers in order to prevent the battery from freezing.

We lost contact with the payload almost immediately and never recovered it, and we lost the enthusiasm to try a second time.

We had bought two balloons just in case we needed a second. Waited for a rainy day and took the second one camping. I wish I had grabbed a video, but some among us were overly paranoid about creating evidence.

It was just a big blue orb and a whooshing sound. Presumably there were some flaming bits of rubber involved, but we weren't able to recover them (we made the tether so long, out of fear it would be more dramatic than it was, that it was hard to get a good idea of what specifically went on).

Has there ever been a Burning Man effigy with a lighter-than-air component? That would be a good venue for exploring the dynamics of balloon fires (it could be tethered such that it wasn't above anybody when it went up).


Safety is the most important thing in the aerial industry. We are working with people who manufacture their own H2 gas balloons that literally fly with people in them. Check it out: https://balloonfiesta.com/Gordon-Bennett-2023 There are special antistatic materials and glues that protect against electricity and fire hazards. The people building their balloons use those kinds of materials, and it works!


Most people don't realize that the paint they used to seal the shell of the Hindenburg is a popular solid rocket fuel. The hydrogen was the least of their problems during that crash.


Citation needed. I googled it and it appears to be a common myth.

(Aluminum powder is used in propellant, it was used to coat the Hindenburg, therefore the Hindenburg was coated in rocket propellant.)


For me, the intuitive way of understanding it is: "how badly would a gambler lose in the long term if they keep betting on a game believing the probability distribution is Y when it is in actual fact X?" It also explains why KL divergence is asymmetric, and why it goes to infinity / undefined when the expected probability distribution has zeros where the true distribution has non-zeros.

Suppose an urn can have red, blue and green balls. If the true distribution (X) is that there are no red balls at all, but the gambler believes (Y) that there is a small fraction of red balls, the gambler loses a bit of money with every bet on red, but overall the loss is finite. But suppose the gambler believes (Y) there are absolutely no red balls in the urn, when in actual fact (X) there is some small fraction of them. According to the gambler's beliefs it would be rational to gamble potentially infinite money on the ball not being red, so the loss is potentially infinite.

There is a parallel here to data compression, transmission, etc. (KL divergence between expected and actual distributions in information theory): if you believe a certain bit sequence will never occur in the input, you won't assign it a code, so if it ever does occur you won't be able to transmit it at all ("infinite loss"). If you believe it will occur very infrequently, you will assign it a very long code, so if it actually occurs very frequently your output data will be very long (large loss, large KL divergence).
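
To make the gambling intuition concrete, here's a minimal Python sketch (the urn probabilities and the discrete-KL helper are mine, purely for illustration) showing both the asymmetry and the blow-up to infinity:

    import math

    def kl_divergence(p, q):
        # D_KL(p || q): expected extra log-loss when outcomes are drawn
        # from the true distribution p but you bet/code as if it were q.
        total = 0.0
        for pi, qi in zip(p, q):
            if pi == 0:
                continue             # 0 * log(0/q) contributes nothing
            if qi == 0:
                return math.inf      # a possible outcome you called impossible
            total += pi * math.log(pi / qi)
        return total

    true_x     = [0.05, 0.50, 0.45]  # red, blue, green: a few red balls exist
    believed_y = [0.00, 0.55, 0.45]  # gambler is sure there are no red balls

    print(kl_divergence(true_x, believed_y))  # inf   -> potentially unbounded loss
    print(kl_divergence(believed_y, true_x))  # ~0.05 -> finite: note the asymmetry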


It's great at log processing (generating commands for awk, sed, complex grep regexps, shell scripts to combine it all). Anything where I'm not an expert but need something very basic done quickly (e.g. the bulk of my day job is in C++, but I frequently need little bits of Python and ChatGPT is often the quickest way to get the answer I need).


The Mosh SSH client for intermittent connectivity ( https://mosh.org/ ) has definitely saved me at least 100 hours. Too bad that it's only available for Windows as a Chrome extension, and Chrome will discontinue support for it starting in the new year. Really not looking forward to having to search for an alternative...


> Too bad that it's only available for Windows as a Chrome extension

Looks like it's available under msys2 on windows: https://packages.msys2.org/base/mosh

As an aside: msys2 mingw64 and friends are > 100 hours saved if you are a linux-soul in a windows environment. I don't think msys gets the attention it deserves.


And if you work in locked down corporate windows environments, asking for Git for Windows ("for local version control only") is a sneaky way to get the basic Unix utils installed. It's a lot easier to ask for VSCode + Git than some open source tools that will be viewed with suspicion by the local support team.


I've switched from Mosh to Eternal Terminal (https://eternalterminal.dev) because of its excellent native scrolling support.


Eternal Terminal pitches itself as entirely superior to Mosh, but also describes itself as using TCP (Mosh uses UDP). I'm curious how that can actually cover the use cases Mosh provides?

Because Mosh uses UDP, a connectionless protocol, your endpoints can move (e.g. from WiFi to LTE, or vice versa) and, beyond a small hiccup, your connections remain alive and well.
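
A toy sketch of the idea (this is not mosh's actual protocol; the token scheme is invented for illustration): a UDP server can key the "session" on a token carried inside each datagram rather than on the peer's address, so when the client's address changes mid-session the server simply follows it:

    import socket

    # Toy UDP session server: identity comes from a token in the datagram,
    # not from the source address, so a client hopping from WiFi to LTE
    # keeps its session alive.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))
    last_addr = {}  # session token -> most recently seen source address

    while True:
        data, addr = sock.recvfrom(1500)
        token, _, payload = data.partition(b":")
        last_addr[token] = addr                    # roaming = just update this
        sock.sendto(b"ack:" + payload, last_addr[token])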


ET adds a layer between application and TCP sockets that persists connections. https://eternalterminal.dev/howitworks has more.

If you are mostly on unreliable and high-latency connections, mosh will likely feel better, but with no native scrollback.


To add on to that, I use iTerm2 with tmux control mode which combines a native UI frontend with a tmux backend on a remote server, meaning I can spawn new native tabs, windows, or panes and they're all tracked by the remote so I can reconnect to all of them at once if I disconnect.

I keep one laptop at home and one laptop at work and can seamlessly switch between the two without having to manage my active sessions at all. If I open a new tab at work and go home for the day it'll be there on my laptop at home.


I was using mosh+tmux earlier this year, but ended up switching to wezterm, which has native macOS and Linux versions and gives a native interface to the terminal. And I can reconnect to my session if I suspend my mac or go somewhere else.


I guess you can install it in WSL?


how is it different from autossh? never used mosh yet


In my ~20-year career I have never once been in a workplace setting where creativity wasn't valued. This is especially true of "technological creativity" (finding a new way of solving a complex technical problem, or better yet - finding some way to NOT have to solve the problem in the first place). But it is also true of "product / business creativity", "marketing creativity", etc. That does not mean that all ideas have equal merit. A new-grad hire may think their creative ideas aren't valued, but that is just because they don't yet truly know all the problem constraints - in a few years' time their creativity will shine.

I also had a situation a couple of times very early in my career, where as an intern I proposed some truly novel approaches, and was told that yes, they're potentially very good, but very risky and never tried before, and that I'm only there for X months and won't be around to deal with the consequences if they fail, so they're not going to do it - but I was told that if I was a full-timer, the decision may have been different.


> In my ~20-year career I have never once been in a workplace setting where creativity wasn't valued.

Mayhaps, but if true this is atypical and fortunate. Ninety-nine percent of people get shunted into subordinate labor with no creative meat, in which their job is to support the manager's career and that is all that matters--and this is also true in software, now that the Jira jockeys have taken over and turned it into ticket-shop day labor. Congrats if you've escaped the sprint work, but most people can't.


I disagree that "supporting the manager's career" is necessarily at odds with creativity. Once you reach a certain level of knowledge in a workplace (problem domain knowledge, codebase knowledge, etc), different options appear to deal with each incoming Jira. You can solve them naively, or you can solve them creatively to produce better results and set up the codebase for more success in the future, or you can even (surprisingly often!) question some assumptions and find a way to not even do the work as-originally-specced but rather substitute some simpler version. These latter options make your team (and thus your manager) look even better, there's no conflict of interest here. Sure, this doesn't work with grunt-level "move this button over here" tasks, but presumably after you've actually built up the knowledge I was talking about earlier, you're assigned more interesting and challenging tasks (which are never in short supply).


> Congrats if you've escaped the sprint work, but most people can't.

Are you not involved in designing the solutions and the tickets too? And is that process not creative?


Our experiences are different.

Once, on a job where I was full-time, not an intern, I proposed something new and my boss said, "That's a great idea. Please don't tell anyone. They'll want us to do it."

This was not the best boss I ever had but also not the worst.


The parallel port was wonderfully hackable. Having your PC control an LED light, or even a motor (with some effort), didn't take a whole lot of electronics knowledge. Or 8 LEDs, one per pin. And controlling it was as easy as sending text to a printer. We lost this in the age of USB...

When I was a kid my dad helped me hook up a pin to a motor. And not just any motor, but one inside a cassette tape player. I could then make little games in BASIC that had real voice! Of course it had to be linear, it basically played chunks of voice recording and stopped at the right times.
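
For anyone curious what "as easy as sending text to a printer" looks like today: on x86 Linux you can still poke the classic LPT1 data register through /dev/port. A minimal sketch, assuming your machine actually has a port at the traditional 0x378 base address and the script runs as root, which counts in binary on 8 LEDs wired to the data pins:

    import os, time

    LPT1_DATA = 0x378  # classic LPT1 base address; data register is base+0

    fd = os.open("/dev/port", os.O_WRONLY)  # I/O port space, needs root
    try:
        for i in range(256):
            os.lseek(fd, LPT1_DATA, os.SEEK_SET)
            os.write(fd, bytes([i]))   # each bit drives one data pin / LED
            time.sleep(0.05)           # slow enough to watch the count
    finally:
        os.close(fd)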


One of the most delightful and hilarious things I've ever seen anyone do with a computer involved a C64 and its parallel port. This guy I sorta knew in college had built a box with a bunch of relays and 120V AC power outlets. The parallel port would control the relays. Into this box, he plugged a bunch of lamps, then he'd turn out the other lights and play music on the computer, and the lamp would basically create a light show in time with the music.

He also had written some kind of memory-resident program (a TSR in DOS terms) that he could trigger while another program was running, and it would render pages of memory to the screen in real time. So he could start any program that had music, browse around through its memory space until he found a byte that seemed to be updating in time to the music, and then select that as the byte whose value got periodically copied to the parallel port.

So you could load up your favorite video game and then have a light show in sync with its music. Or any other activity of interest.


Parallel ports were very popular for connecting joysticks to MAME cabinets

The Covox speech thing was a nice hack as well

https://en.wikipedia.org/wiki/Covox_Speech_Thing


And homebrew SNES controller adapters! Good ol' "SNES controller on lpt1" in zsnes got me many hours of emulation fun back in the day.

https://www.raphnet.net/electronique/snes_adaptor/images/sne...


I remember learning about this from some company that made DOS shareware clones of PacMan and Galaga, but I can't remember the name. Ring a bell?


I found it - ChampSoftware and the ChampCable


and ModPlay: https://awe.com/mark/dev/modplay.html

Amazing software for that time.


This sounds almost exactly like my first taste of computing with my dad: 8 low-voltage filament bulbs (and some buffering) wired to the parallel port, along with some simple code to count in binary. Sure, I was a kid and more excited just to have flashing lights, but in my more studious moments I did learn so much about binary, how things are represented in computers, programming, interfaces etc etc.


It seems like all of our modern abstraction layers have led to a dearth of tinkerers on the edge between metal and software. There isn't any easy GPIO on your standard Chromebook or desktop, for instance. Yeah, you can get a USB-to-GPIO adapter or make one. But being able to sit down at any computer and do something low level was nice.


On the contrary, any device which has DDC support and a VGA, DVI or HDMI output will have i2c pins in the connector. Even DP carries i2c in its aux channel. https://hackaday.com/2014/06/18/i2c-from-your-vga-port/
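
For example, on Linux with the i2c-dev module loaded, the DDC wires show up as a regular /dev/i2c-* device, and the monitor's EDID EEPROM conventionally answers at i2c address 0x50. A small sketch using the smbus2 library (the bus number here is an assumption; check `i2cdetect -l` for yours):

    from smbus2 import SMBus

    DDC_BUS = 1       # assumption: adjust to whichever bus is your DDC adapter
    EDID_ADDR = 0x50  # standard EDID EEPROM address on the DDC bus

    with SMBus(DDC_BUS) as bus:
        header = [bus.read_byte_data(EDID_ADDR, reg) for reg in range(8)]
        print(header)  # a valid EDID starts 00 ff ff ff ff ff ff 00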


That is a cool hack, but it's just using i2c to talk to an ATmega and have it do the GPIO. You could just as well use the ATmega over USB. Or an ESP over WiFi.


When I got my first Internet connection at home (kinda late, as I was "big times" into BBSes before that), I was already using Linux. But we were too cheap to have a network at home, so... I'd use "PLIP" (parallel port IP) to share our Internet connection between me and my brother: a parallel cable between my beige PC (a 486? Pentium? Don't remember) and a crappy old laptop. And so the two of us were simultaneously using Netscape or something.

It made me sad when computers started shipping without a parallel port. I mean... even laptops had these back in the day!


I've used my parallel port as a JTAG (IEEE-1149.1) interface to reflash/debug embedded targets. It's slow, but it works.

http://zoobab.com/simple-jtag


When I was young, I used it for a morse code decoder, and then later to decode magstripe data from a reader circuit I made with spare parts. QBasic made it trivial to interface with! I remember being able to achieve toggles at nearly 1 MHz with QBasic, since it was just a memory write.

Having that many IO, ready to go, was really great. I bought a separate ISA expansion card so I wouldn't burn out my motherboard's port. Now, you can get a little USB powered micro python board for the same price!


PIC and AVR microcontrollers can be programmed using a parallel port; passive adapters with nothing more than resistors used to be common.


I once made a homebrew GraphLink parallel port cable so that I could upload games and programs to my TI graphing calculator. Made it a lot easier to write programs than to tap them out on the calculator itself.

A friend and I wrote a chat program that used the calculator-to-calculator GraphLink cable to send messages back and forth so we could chat in class. This was a neat hack, but considering the cable that came with the 85 was like 2 feet long, it was kinda useless in practice unless we were sitting at the same table. So I made a longer cable at home out of stuff from RadioShack.

Everything was cool until we got caught using the homebrew cable in class. Fortunately, once we showed the principal what we'd done, we were told that it was very clever, but not to have the cable in class anymore. :)

You know, I think I might still have my TI-85 and homebrew cables in a box somewhere. I wonder if any of my old programs are still there.


I recall dragging my massive second-hand Zenith 286 "laptop" into school and serving out zshell assembly programs to people with my homebrew GraphLink. I even sold a couple of them. Many a math class spent playing tetris on my TI-85...


> We lost this in the age of USB..

And got it back with the RPi and its true GPIO pins


Well, if you're ok with the non-realtime characteristics of the RPi's GPIO via linux userspace, one might as well just buy one of the dozens+ of USB->GPIO adapters all over ebay/amazon/aliexpress/etc with windows/linux/etc drivers.

Or if you need something more advanced than basic LED blinking/etc, just pick up a wemos D1, or any of the dozens of other similar microcontrollers that can be had for a dollar or two that have GPIO and a USB serial interface and do your bitbanging on the ESP8266/etc in an environment that isn't susceptible to a heavyweight OS failing to schedule your task for long periods of time. One can hang that device off a PC and talk to it with simple serial programming/commands or just plug it into the target device as a wifi adapter and do all the control over Wifi+JSON/etc. Which it turns out are basically what most of the USB->GPIO adapters are anyway.
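
As a sketch of that serial approach (everything here is hypothetical: the "SET <pin> <0|1>" command format depends entirely on what the firmware you flash implements), the PC side is just a few lines of pyserial while the microcontroller handles the timing-critical bits:

    import serial  # pyserial

    # Assumed: a Wemos D1 / ESP8266 on /dev/ttyUSB0 running firmware
    # that understands a simple line-based GPIO command protocol.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        port.write(b"SET 4 1\n")   # e.g. drive GPIO4 high
        print(port.readline())     # firmware's acknowledgement, if any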

So, in the end, using an RPi for this is really suboptimal on every front: cost, performance/accuracy, complexity, power utilization, etc.


An RPi is $70-110 in the United States nowadays. There has got to be a cheaper, widely available alternative, like some sort of NodeMCU?


There are many cheap system-on-a-chip microcontrollers that have enough computing power for many basic uses for which RPi is overkill, and some GPIO and built in wifi and/or bluetooth wireless, and they support dev frameworks like arduino or micropython.

Another comment mentioned a bunch of other models, but I had some fun with the Espressif ESP32 platform where the bulk chips are like $4 each (depending on quantity) but there are some nice beginner-friendly devkits using the same chips for $15 like https://docs.m5stack.com/en/core/atom_matrix or https://shop.m5stack.com/collections/m5-controllers/products... that are sufficient for a bunch of tasks.


Just because demand is high doesn't mean you have to pay more. https://twitter.com/rpilocator tweets stock alerts - and they are pretty regular.


Indeed, my first PC to experimental hardware interfacing was through the parallel port with QBasic. It helps that the parallel port is a DB25-F and you can just poke wires into it :P


The best thing was using it as a DAC. A bunch of resistors and off you go! That was one of the greatest things I ever made as a kid.

https://hackaday.com/2014/09/29/the-lpt-dac/
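
A sketch of what the software side can look like, reusing the /dev/port approach from earlier in the thread (same assumptions: x86 Linux, root, port at the traditional 0x378; time.sleep() timing is far too jittery for clean audio, so treat this purely as an illustration). The resistor ladder outside the port turns each 8-bit value into a voltage:

    import math, os, time

    LPT1_DATA = 0x378
    RATE = 8000   # nominal samples per second
    FREQ = 440    # sine frequency in Hz

    fd = os.open("/dev/port", os.O_WRONLY)
    try:
        for n in range(RATE):  # one second of tone
            # scale sin() from [-1, 1] to an unsigned byte [0, 255]
            sample = int(127.5 + 127.5 * math.sin(2 * math.pi * FREQ * n / RATE))
            os.lseek(fd, LPT1_DATA, os.SEEK_SET)
            os.write(fd, bytes([sample]))  # 8 data pins feed the resistor DAC
            time.sleep(1 / RATE)
    finally:
        os.close(fd)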


Yeah, I remember banging together a very simple ADC to take readings from various sensors straight into the parallel port - and being amazed at how easy it was to build something that could talk directly to PC hardware.


I have used a parallel port to drive a JTAG port on a different device to program a boot ROM.

