Mastodon for Apple II (colino.net)
222 points by kirubakaran on Oct 2, 2023 | 48 comments



Now I kind of want to buy a SuperSerial card for my II+ to try this thing out! Too bad any kind of Apple II hardware is really hard to find where I live.

Edit: Now that I look at this closer, it looks to me like I'd also need to buy an 80-column card to run this on my system.


Hi! Author here. That was a requirement until this weekend, but I've since done some work to support the II+ without an 80-column card. The menu is toggled instead of being shown on the left.


Very cool that you added support for that too! Now I'll 100% be sourcing a SuperSerial card for my II+!



This is impressive!


Save for encryption and video playback, if social media and chat protocols were public, we could still comfortably use Facebook, WhatsApp, LinkedIn and the rest from computers from the '80s.


On one hand, this project just uses an Apple II as a front end for something running on a Raspberry Pi.

On the other hand, the 80386 and 68000 are both '80s-era CPUs. The 80486 just barely squeezed in, in 1989, with the first systems available in the fourth quarter. Super VGA video cards were available by 1987. Macintosh system software had a TCP/IP stack in 1988. I would think that you could access modern social media from an '80s-era computer if you went with the latter half of the decade. If you are talking about the first half? Not likely. A 1 MHz 6502 would have a hard time keeping up with SLIP over 9600 baud, let alone 56k.
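Back-of-the-envelope, to make that concrete (the figures below are rough assumptions, not measurements):

```c
/* Rough cycle budget for a 1 MHz 6502 receiving a serial byte stream.
 * All figures are assumptions for illustration, not measurements. */
#include <stdio.h>

int main(void) {
    const double cpu_hz = 1000000.0;              /* 1 MHz 6502            */
    const double bits_per_byte = 10.0;            /* 8 data + start + stop */
    const double rates_bps[] = { 9600.0, 56000.0 };

    for (int i = 0; i < 2; i++) {
        double bytes_per_sec   = rates_bps[i] / bits_per_byte;
        double cycles_per_byte = cpu_hz / bytes_per_sec;
        printf("%6.0f bps: %5.0f bytes/s -> ~%4.0f cycles per byte\n",
               rates_bps[i], bytes_per_sec, cycles_per_byte);
    }
    return 0;
}
```

Roughly a thousand cycles per byte at 9600 baud has to cover the serial interrupt, SLIP unframing, IP/TCP checksums and the application itself, with typical 6502 instructions costing 2-7 cycles each; at 56k it drops below 200.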


One of the beauties of USENET was its store-and-forward model. As such, I had no real issues using USENET and email back in the day with a 2400 baud Hayes SmartModem.

It also helped that it was all lightweight text messages and, naturally, I didn't download the entire feed, just the few topics I was interested in.


But would all those social networks then exist at all, with their massively broad user base?

I’d love to believe they would - we still have SMTP and HTTP.

But all open source social networks seem to believe complexity is a benefit, so they put off most people, and the network effects don't materialise.


There were social networks in the 80s: BBSes. And the scaling limitations of BBSes had more to do with the existing network infrastructure than with client capability.


How well did FidoNet work? It's not quite as fast as federation over TCP/IP, but it looks like it solved a lot of the same issues, at least on paper.


Ironically, many of the things that people are learning and experiencing with the Fediverse have straightforward precursors in FidoNet, such as the idea of supporting one's local sysop; the idea that not everyone uses the same off-line reader program or BBS software; or the idea that it's important to know which country's laws apply to a node.

FidoNet at its height was tens of thousands of nodes worldwide, with each node supporting user bases ranging from single digits to triple digits. Discussions could and did span the planet.

And yes, there was all the same culture shock, then as now, for people coming from the worlds of BIX and CompuServe, where a business provided and regulated a single centralized system, to a world where tens of thousands of people worldwide supported a decentralized system, sometimes in their spare time and out of their own pockets. One had to (gasp!) pick a sysop that one liked and trusted, or go the whole hog and run a node oneself with no-more-complex-than-hobby-level software and equipment that was readily available.


It worked surprisingly well (as did some of the other BBS message networks of the time). Sure, it took a while for messages to traverse the network, but most of that was to keep per-minute long-distance costs down.


Usenet was also a social network. And it was much bigger than any BBS.

There will never again be a social network of any size with an average IQ that high.


Where does the complexity of Mastodon and other Fediverse social platforms come from? The fact that there are instances? Mastodon seems to streamline that experience pretty well. Honestly curious for feedback.


I see people get held up by the instances all the time. Not sure what's complicated about it, since it works exactly like email, but everybody seems to want there to be a single instance. They don't want to choose, as if it matters at all.


It's not exactly the same as email, though. Email servers don't have a culture, and you don't generally interact with other users of your email server unless they have given you their email address. And most people don't choose their email server: for personal email they use Gmail/Yahoo/Outlook, and at work they use the provider their IT department chose.


People come to Mastodon, in general, with experience from other, centralized social media networks, so they're bringing those expectations. Whether it works like email is irrelevant because it doesn't work like Twitter. Here are a few key ways in which it doesn't work like Twitter that are off-putting, and why they are hard to change, even if we wanted to.

1) Following another user because you liked one post: when that user isn't on your instance, you usually have to do a three-step dance: click on the user to get to their timeline (on another instance), click "follow," and then enter your username into a dialog box because you're on another server. That last step is weird and off-putting. It's also completely necessary, because different servers don't have a cross-domain trust to pass your username around, so (as we web devs know) you must tell the other server who you are; the architecture is protecting your anonymity by design by not divulging that data. But, relative to Twitter, that's weird. I don't think it is fixable without a change to the domain-based trust model. And, of course, this is yet another example of how people say they want privacy and anonymity online, but when you implement it for them they get frustrated at the usability tradeoffs those concerns demand.

2) A defederation split between your node and another node means you could lose access to people you follow. There is an analogy in the Twitterverse... Someone you follow could get banned. But that's different than someone you follow going away because their "neighbor" was being a Nazi and your server admin axed the whole node in response. Twitter users don't have "neighbors." Everyone's a neighbor. Entirely new mental model moving to the Fediverse. This is, again, a feature... But it's a feature some people find extremely valuable and others find off-putting complexity.

(Sidebar: I actually got into running my own node for this reason: I realized a good friend of mine wasn't followable from my first account because years ago my server admin had decided on a "no furries" rule and my friend's server happened to be furry-content friendly. Twitter doesn't make you build a red-string-map of historical drama to figure out what node to join).

3) Smaller nodes change the risk model. There are tradeoffs to decentralization: if your node goes dead, the whole network hasn't gone dead. But if your node goes dead, that's hugely inconvenient... And with no money on the table and individual communities being smaller, nodes go dead more often than Twitter goes dead (the fact that Twitter is still there in spite of everything that's happened to it is a strong example of the stickiness of a corporate-backed venture with a war chest). Of the three, this is the least-concerning one... If you sign up for an account at mastodon.social, you'll probably be fine. But in general, the system working as intended asks the user to trade out the security of a large, capital-backed network for the responsibility of being aware of the ambient health of their own digital neighborhood. It's nothing more complex than the ancient BBS model, but that's the thing... A whole generation or two of computer users never used a BBS. They aren't used to having to find something else to do with their browsing habits because Frank is having a bad year and decided to shut down his server for his own mental health.

It is worth noting, of course, that (2) and (3) aren't issues if you self-host. But I don't even think I need to put down a bullet point on why "You need to administer your own Ruby on Rails, Sidekiq, postgresql, and (fourth server I can't even remember right now) service behind its own public domain name" would be a non-starter for people.


1) Clients solve this, and there are also web extensions that solve it. I think this could actually be fixable with the current stack. (I use 'Graze for Mastodon' on Firefox.)

2) Choose your 'neighborhood' wisely. Some of these smaller to mid-sized Mastodon instances, especially those that espouse strong free-speech doctrines, might get you banned from federating with some other instance because of the actions of one of your neighbors, when the 'HOA' (your neighborhood admin) refuses to do anything about them.

3) This goes with #2, choose your neighborhood wisely.

As you discuss in your postscript, I am one of those who chose to run my own 'neighborhood', just for myself at this point, though I could see opening the door to a couple of close friends. I've been running my own mail/web/etc. services for many years now. I will say that the main Mastodon software kinda sucks for this; it's built to scale somewhat, hence Sidekiq and Redis and all the rest, and that kinda sucks. They have some Docker options that make it a little less of a pain, but I would love to see a more streamlined version, or a fully API-compatible piece of software that is cleaner to run (maybe compiled into binaries so I don't have to deal with Ruby)...


Twitter doesn't require the user to either use a custom client or a web extension. When you require either for an improved user experience, you can chop N% of potential users off your projections.

Neither Twitter nor Facebook require you to choose a 'neighborhood.' It's all one tent. Having to build a red-string map of relationships to pick one is a real chore and a turn-off for potential users. Chop another N% off the projections.

That having been said... It's entirely possible that all of that is fine! We don't need every user on the planet; this isn't a VC-driven startup idea, and we don't need unlimited growth to make a stock market and some money-suits happy. And the things 1, 2, and 3 give users have value (privacy and anonymity, the ability to choose who you trust with your private information while still using the service, and not being obligated to rub elbows with Nazis because daddy Musk or papa Zuckerberg have either not noticed they're Nazis or chosen not to care, because, hey, Nazis are part of that N% of total possible users too).

Of all of them, (1) is the only one where I feel some change to the infrastructure of the web might be worth discussing. It would have to be done very carefully to preserve user privacy and anonymity, but I think a case can be made that the current domain-centric security model actually makes for soft incentive to centralize services (fewer auth bridges to build), which may not actually be a categorical good for the overall health and future of the web as a technology.

Maybe we should expand the client-side trust model to allow for trusting a federation (in a way better than the [related website sets](https://developer.chrome.com/docs/privacy-sandbox/related-we...) proposal, which in fact hyper-centralizes the understanding of trust behind the browser's control and is basically a way for the FAANG sites to link together their user experience across YouTube, Google, Blogger, et al. without a lot of complicated server-side state passing).


Twitter does require you to deal with a bunch of fucking white power and libertarian assholes though so ... i'll take the tradeoff.


Doesn't this statement just mean "all 80s computers were good for was rendering text"? Encryption is a pretty big advantage of modern computing.


You said it -- encryption is a big deal but images and video aren't. Social media and chat are fine being entirely text based. Consider it a commentary on social media and chat, not on 1980s era graphics.


> images and video aren't. Social media and chat are fine being entirely text based

... I mean, speak for yourself. I think most social media and chat users would find lack of images to be a significant loss.


And here we are, discussing this on hacker news.


Meanwhile, something like three or four orders of magnitude more people are scrolling on TikTok.


I vehemently disagree. If I couldn't send images and video to my friends, my experience would be orders of magnitude poorer.

Case in point, I literally just finished making my foam cutting CNC machine two minutes ago, and all my friends around the world have already seen a video of it in action. That's worth a lot to me.


Even "text-based" gets iffy if you want to support i18n and proper text rendering of non-Latin scripts. We take that stuff for granted today, but it's probably infeasible on pre-mid-1990s hardware.


If you're into this idea, here's an example: https://www.youtube.com/watch?v=NentMKyVGog


Just curious, wouldn't networking be a problem as well?


Ethernet cards and TCP/IP stacks are available.

ip65 (https://github.com/cc65/ip65) for the 6502 supports 3 chips, 6 cards, and 3 platforms (Apple II, Atari, and C64).

For under $5 you can buy a WIZnet chip that has an onboard hardware TCP/IP stack.
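For a sense of what the application side looks like once a library or an offload chip owns TCP, here's a minimal sketch; the net_* functions are placeholders standing in for whatever the real stack (ip65, or a WIZnet driver) exposes, stubbed out so it builds on its own, and are not the actual APIs:

```c
/* Shape of a client once TCP/IP lives in a library (ip65) or on a WIZnet
 * chip.  The net_* functions are illustrative placeholders, NOT the real
 * APIs; they are stubbed here so the sketch builds standalone. */
#include <stdio.h>
#include <stdint.h>

static int      net_init(void)                          { return 0; }              /* probe card / chip */
static uint32_t net_resolve(const char *host)           { (void)host; return 0; }  /* DNS lookup        */
static int      net_connect(uint32_t ip, uint16_t port) { (void)ip; (void)port; return 0; }
static int      net_send(const char *buf, uint16_t len) { printf("%.*s", len, buf); return 0; }

int main(void) {
    if (net_init() != 0) {
        printf("no supported ethernet card found\n");
        return 1;
    }
    uint32_t ip = net_resolve("example.org");            /* placeholder hostname */
    if (net_connect(ip, 80) == 0) {
        const char req[] = "GET / HTTP/1.0\r\n\r\n";     /* plain-text protocol  */
        net_send(req, (uint16_t)(sizeof req - 1));
    }
    return 0;
}
```

The point being that the 6502-side code only shuffles plain-text bytes; the heavy lifting (ARP, IP, TCP state) sits in the library or in silicon on the WIZnet chip.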


Well, you could hook it to an AppleTalk network w/ stock hardware and then set up a Mac gateway to get it onto an Ethernet network. I think I've got a setup for that around here somewheres...


What would stop 80s computers from using encryption?


Speed. A computer from the '80s wouldn't be able to encrypt/decrypt web traffic quickly enough to be usable.


Now I'm curious to know the actual difference in wall-clock time between decryption on a modern device and on an Apple II processor. Seconds, minutes, hours?


According to the people behind Crypto Ancienne (https://github.com/classilla/cryanc), a 25 MHz 68030 needs about 22 seconds of maths to complete the handshake with a modern TLS server. During that time, most servers close the connection.

So on a 1 MHz 6502, I think it'd be minutes just for handshaking.
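A very rough scaling of that data point, assuming the 68030 gets about 5x more cryptographic work done per clock than an 8-bit 6502 grinding through multi-byte arithmetic by hand (that factor is a guess, not a benchmark):

```c
/* Scale the reported 22 s handshake on a 25 MHz 68030 down to a 1 MHz
 * 6502.  The work-per-clock factor is an assumption, not a measurement. */
#include <stdio.h>

int main(void) {
    const double handshake_68030_s = 22.0;   /* Crypto Ancienne data point   */
    const double clock_ratio       = 25.0;   /* 25 MHz vs 1 MHz              */
    const double work_per_clock    = 5.0;    /* assumed 32-bit vs 8-bit edge */

    double est_s = handshake_68030_s * clock_ratio * work_per_clock;
    printf("estimated 6502 handshake: ~%.0f s (~%.0f minutes)\n",
           est_s, est_s / 60.0);
    return 0;
}
```

That lands in the tens of minutes for a single handshake, long past the point where any server on the other end would have given up.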


Thanks for that. I was going to point out that it's the public key and cert validation that are going to be the problem, more than the actual data encryption. I had this problem a couple of years back with a project on an ESP8266, which was taking on the order of 5 seconds at 160 MHz to set up a TLS connection. And it got worse with longer key lengths and validating a cert chain.

So, ballpark, it probably takes multiple minutes, and probably consumes most of the RAM for the intermediate steps with longer keys.

OTOH, I switched to an ESP32 because it has RSA offload, and something like that could be attached to an Apple ][ fairly easily, to provide a connection-offload accelerator.


I wonder if it would be possible to make a usable dedicated hardware encryption card for the Apple II using 80s tech.

(Of course, it has the downside that upgrading to a new protocol would require a new card, but hey... we're just having fun musing on retro-futurism here!)


The 68030 is also a 32-bit processor with 8 general purpose registers. The 6502 is an 8-bit processor with one accumulator and two index registers, though it could use the first 256 bytes of memory (zero page) as pseudo-registers.


Yes, although the zero page is quite cramped; there are only about 8 bytes free there if you don't want to overwrite anything. Accessing the zero page only saves 1 cycle out of the 4 needed to access non-zero-page memory locations anyway, so that's only a 25% performance gain, and only in very limited applications.


Not that anyone would find this useful or practical, but I wonder if it would make sense to define an alternate protocol where the handshake is asynchronous and doesn't require the server to hold a continuously open connection while the client performs the encryption. This might be a non-starter for interactive applications, but for batch things like downloading emails (where you could get away with checking for new mails every hour or so) this could be tolerable.


That does make sense. I was assuming that HTTPS would be possible but slow on '80s hardware; I was not considering that the slowdown would be so massive that modern hardware would consider it a lost connection.


It should be possible to drop in a coprocessor board to handle the encrypt/decrypt. It's compute bound rather than bus bound so it should speed up nicely.


Most of us throw that in the form of an HTTPS-stripping proxy on a Pi :)


I mean, that would be the sensible approach. I was just thinking that designing an FPGA board for my Apple II might be...fun.


It does sound fun, although I shudder at trying to prove that such a thing would be free of side channel attacks.


There was a discussion about this before, and from what I read, TLS 1.3 isn't possible on an 8-bit micro like a 6502. I'm assuming this is because of the timeout in the handshake.

https://news.ycombinator.com/item?id=32116761


Might double the number of people who use Mastodon.



