I'd like to clarify that my code was running only on machines that were otherwise idle. Not many people were in the lab late in the evenings. MPQS processing nodes could be added and removed dynamically, so if somebody needed a computer that was part of my cluster, they could just quit my program and everything would go back to normal.
Also, once the number theory professor learned of what I had implemented, he worked out an agreement with the lab manager to give me legitimate access to the machines. :)
Right about that exact same time I commandeered an entire lab (30-40?) of SGI Indigo 2's at Ohio State to do distributed raytracing. It wasn't nearly as educational or diplomatic, but I did have fun with it until I got shut down for using twice as much storage in my home directory as the entire rest of the class combined. Between that, Usenet (of course), and trolling CU-SeeMe reflectors all over the world from the odd-smelling Mac lab, I didn't get much studying done.
It was pretty awesome. I commuted to school and my 2400 baud modem wouldn't cut it. But lots of students would run clients or 'reflectors' from their dorm rooms, and I would stay in the labs until the wee hours just hanging out. I never got into MUDs, but 'talkers' were similar in concept: just themed text chat. I would hang out on one called 'Oceanhome' and make dumb faces on CU-SeeMe. It felt like another universe.
People weren't that much different back then than they are today.
This provides a pretty good impression of what it was like. This was 30 years ago.
> I could enter that program in the system monitor, but I needed a way to run it.
The simple way: in that mini debugger, the ‘G’ command (for ‘Go’) takes an optional address and jumps to it. “G 40F6D8”, for example, continued execution at address 40F6D8.
I don't remember the exact details, but I did try the G command, and it didn't work out. The problem was something like the program had nowhere to return to, so just ending with RTS would crash, and ending with a call to ExitToShell() would just restart At Ease and put you right back in the secure environment. I had to trick the computer into executing the program as a subroutine from inside another running program, which is accomplished by using the drag hook.
I thought the _Launch call to start a new program killed the current program, so I didn't consider how to return to the original program. However, they may have changed that when MultiFinder was introduced.
I guess the proper way would have been to JMP to wherever a bare ‘G’ would have returned, not an RTS (if there is a proper way to do this kind of thing. I’m not sure the system guaranteed what you could do at that time. You might be in a memory manager call, which meant any drawing calls were off limits)
Ah, good times. Our lab ran sshd on each workstation, so I launched CLI processes on every system, in the daytime, with dozens of students still working on their projects. I thought running as "nice -n 19" would be safe.
Wrong. My program ran out of memory, systems started thrashing, desktops froze up. Confused students, panicking about unsaved changes, swearing, rebooting. Meanwhile I was frantically trying to kill my processes, but even sshd became unresponsive so I couldn't stop the madness. They never found out it was me :)
What a wild story. The first thing that came to my mind reading through wasn't a picture of a modern Mac but of TempleOS. It's amazing how the way we study computing has changed since then, in that we're focusing more on learning about all the required plumbing and duct tape than the actual machines. Not that this isn't fun too, but it has essentially shifted everything. Getting into "mainstream" computers back then was such a different world from what it is now.
I suppose you could get the same type of "fun" today by playing with breadboards, but you'd have to seek it out on purpose. Sitting in front of a modern machine, there's just so much more abstraction that gets in the way.
You can also get some of the same near-the-machine fun by going into security. Shellcode has to be short and close to the OS, and smashing the stack (stringing together gadgets, and so on) by its nature subverts the higher abstraction layers.
I was neither idealizing nor nostalgically lamenting, but thanks for the negativity. I was merely pointing out the differences in how "getting results" has moved up the stack.
In 1982 or '83, I worked at Lockheed Shipyard in Seattle, in a sort of tech writing capacity on a Navy shipbuilding project, based on my recent experience in the Navy. The business closed with completion of the ships.
We had to fill out lots of forms that documented what we called "analysis," and we'd often have to change them based on some factor changing. One change could cascade through the whole form.
Paper forms. Green see-through plastic letter guides. Whiteout. Lots of whiteout.
We had access to a department mini computer, don't remember what it was. The language might have been Basic-like, but I didn't know enough to recognize it as such.
I figured out how to write and run programs, someone showed me how to print, and I wrote a program that would accept all of a form's fundamental values, cascade them through the calculations for the dependent values, and print out the filled-in form. You could save it, update a value, re-cascade, and print it out again. No more whiteout.
Part of the calculation involved sorting. I didn't know anything about sorting, so I implemented what I later learned was bubble sort. Because that's obviously how you'd do that.
The system administrator noticed more load when people ran my program. He found me, and told me not to do that again.
I learned that there was a thing called a system administrator. He might have given me a better canned sort, don't remember.
I eventually thought it would be a good idea to quit and go to school, so I did.
(I took a number theory class, but had to drop it. I don't have the math nature.)
In the same time frame, I worked at Boeing doing design work on the 757. I didn't know how to do complex numbers using drafting tools, but I knew how to use a computer. The only computer in the building was an '11 in a special locked room with raised floor and A/C. It was under the control of Boeing Computer Services, a totally separate division from Boeing Commercial Airplane Company, which I worked for.
Anyhow, I befriended the sysadmin and he on the sly gave me an account and the code for the door. I wrote a bunch of numerical integration matrix programs, and showed the results to my lead, who told me computer results were all bullshit (he'd gotten bad numbers from them before) and called over his best draftsman, and told him to show me up (all in a friendly way, he was a great fellow).
The draftsman worked for two days, and came up with a sheet full of numbers to 3 significant digits. Mine were to 6. We compared, and one was wrong. He said my number was wrong. I knew it wasn't, because continuous functions don't have anomalous behavior. He good naturedly agreed to redo that one, and in a while came back and said my numbers were right. After that, my lead trusted my work, and I got all the math work in his group.
Eventually, some manager in BCS discovered I was making unauthorized use of the '11, and went several management levels over my head to demand I be censured. My lead (one of the engineering treasures at Boeing) went to bat for me. That went many levels up, and the BCS manager got a smackdown and was ordered to legitimize my usage.
In another incident, I got the job of writing some test plan specifications. The usual technique was to hand write them and hand them off to the secretary pool to type up. I get writer's cramps after handwriting 5 or 6 words. The secretary pool was in a special doored room full of Wang word processors. All women, and the supervisor was a woman. I asked for access to a Wang, as it was faster for me to type than hand write. She said ok, but I'd have to take the 2 week Wang course first. I asked if I could just have a look at the manual. She laughed and said sure. She was sure I'd fail.
I looked through it for a few moments, fired up the Wang and started editing. It was, after all, just a text editor. The supervisor was annoyed, but she kept her word.
If I'd stayed at Boeing I'd have likely been the first engineer to bring in my own PC for my desk, at my own expense.
A friend of mine did something similar, although in a tax office.
Ploughed through weeks of work in days (he is really fast at typing too).
When he came back with his work the manager of course wouldn't believe him, set two men to verify it for weeks, and they concluded it was not only correct but even had fewer errors.
After that he was "promoted" to a customer facing job (since he was "unqualified" to program).
My dad told me this story when I was a kid. It's how I ended up as a developer and why I still am.
Delighting customers by making weeks of boring work vanish into thin air is still fantastic.
Edit: I've been luckier than him. People can't always be helped, but I was never punished.
> In the same time frame, I worked at Boeing doing design work on the 757.
After I graduated I worked at Boeing in a test group, and wrote control software for test rigs, mainly to test Fuel Quantity Indicating Systems on the 757 and 767, the 747-400, and the 777.
That was in Les Carpenter's group, and they still had some test rigs lying around built with relays mounted on wooden boards.
This brought back a memory. In the late 80s I got a job at a very high-tech science and engineering firm. A colleague told me to be careful about writing code. His supervisor had walked by his office one day and saw him writing code on a terminal. He chewed him out for "typing" because "typing is for secretaries." Apparently the correct procedure was to hand-write your code onto a paper form and hand it to a secretary for entry into the computer. I assumed my colleague was joking but checked out the official procedure and it was true.
I wonder how many of those women had to learn to code to fix the typos?
The level of structural sexism back then was off the charts, but at least it was overt. Now we just tell people there is no glass ceiling and it must be their problem.
Worked for a medium-size software company in the 90s. When our company got a new CEO someone at an all-hands asked him to describe his experience with computers. He was very clear that he didn't use them, wouldn't have one in his office and that he had an executive assistant for that sort of thing.
I was tasked with migrating a pool of Wang ladies to the Wang PC. It was horror (the Wang PC was not as good as their Wang word processor). The meeting turned into a nightmare for me. Since some of them were also the typists for all the top guys in the group, the Wang PC did not go over well. I survived. But I still have dark feelings about a room full of Wang ladies.
> But I still have dark feelings about a room full of Wang ladies.
They all either ignored me or were nice to me.
The interesting thing is the supervisor, who sat facing the workers like a teacher in a classroom. When she found I could use the Wang as well as anyone by just looking at the manual for a few minutes, she realized that her fiefdom was doomed. Her moat had been the 2 week training course.
Great story. I wonder if somewhere on a site like the daily wtf you can find the sys admin’s retelling where he relays in horror how a bubble sort almost brought down the system!
It sounds like you essentially invented the spreadsheet. Nice work!
> Part of the calculation involved sorting. I didn't know anything about sorting, so I implemented what I later learned was bubble sort. Because that's obviously how you'd do that.
I would like to politely contest the authenticity of this specific detail. In my fairly strong collection of evidence, people just do not bubble sort without being told how to do that.
People's natural mental sorting algorithm is most often insertion sort. That's what you use when you hold a hand of playing cards, for example. Sometimes something else (radix sort with monopoly money, for example), but I've never seen anyone invent bubble sort without first being told about it one way or another.
(Which is a good thing, because bubble sort is such a stupid idea I don't know why we teach it. In my life, I've only ever been shown one good use case for it: when you need to converge slowly on an ordering while doing the minimum amount of computation at each step.)
> I would like to politely contest the authenticity of this specific detail. In my fairly strong collection of evidence, people just do not bubble sort without being told how to do that.
I taught a C programming class for first-year students for a few years at the end of the 90s, where one of the tasks was to sort an array of words, with no hints given as to what kind of algorithm they should implement. Most students came up with bubble sort. Insertion sort was much less prevalent, and the few who did it typically wrote a kind of out-of-place insertion sort. (I also saw an O(N^4) algorithm once.)
> I would like to politely contest the authenticity of this specific detail. In my fairly strong collection of evidence, people just do not bubble sort without being told how to do that.
FWIW, my first sort was selection sort. "Find the smallest element, place it at the start, repeat" seems like the obvious way to do it, without the fussing about with shuffling already sorted elements out of the way that insertion sort requires.
Perhaps it's a thinking top-down vs bottom-up kind of thing? People who think bottom-up stumble on bubble sort, people who think top-down stumble on selection or insertion.
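For concreteness, "find the smallest element, place it at the start, repeat" comes out something like this (an illustrative Python sketch, not anyone's actual code):

    def selection_sort(a):
        # find the smallest remaining element, put it at the front, repeat
        for i in range(len(a)):
            smallest = i
            for j in range(i + 1, len(a)):
                if a[j] < a[smallest]:
                    smallest = j
            a[i], a[smallest] = a[smallest], a[i]  # one swap per pass
        return a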
I just now realised that when I talked about insertion sort I really meant selection sort. How embarrassing of me to lecture others and then go on to make such a basic mistake myself!
What confuses me is: isn't the data and loop management for bubble sort even more complicated?
Insertion sort is literally just "keep track of the index beyond the last sorted element, then scan for the next smallest element and swap it in, repeat until you have gone through the entire array." This is about as simple as a comparison sort gets. (And it's what literally everyone I've watched does when they get dealt a few playing cards.)
Bubble sort, on the other hand, requires you to reason about consecutive adjacent swaps, keep track of whether the current iteration dirtied the array, and so on. (Just the idea of bubble sorting playing cards gets me sweaty. It would take long, be cumbersome, and hard to keep track of.)
Edit: who is embarrassed now? What I'm describing is apparently not insertion sort but selection sort.
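Edit 2: for penance, here's what insertion sort actually is -- take the next unsorted element and shift it backward into the sorted prefix (an illustrative sketch, untested against any playing cards):

    def insertion_sort(a):
        # the sorted prefix is a[:i]; shift each new element back into it
        for i in range(1, len(a)):
            x = a[i]
            j = i
            while j > 0 and a[j - 1] > x:
                a[j] = a[j - 1]  # shuffle bigger elements one slot right
                j -= 1
            a[j] = x
        return a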
But you're speaking from knowledge and analysis. I only had the perspective at the time of looking at the world through a paper towel roll. Anyway, it was the first idea that bubbled up for me.
> Bubble sort, on the other hand, requires you to reason about consecutive adjacent swaps, keep track of whether the current iteration dirtied the array, and so on.
You only need to keep track of whether or not the current iteration dirtied the array if you want to stop as soon as a pass leaves the array sorted. If you don't want to do that (or don't think of it) then it is just something like:
    for pass = 1 to n
        for index = 1 to n-1
            if array[index-1] > array[index]
                tmp = array[index-1]
                array[index-1] = array[index]
                array[index] = tmp

There is less extra to keep track of than in insertion or selection sort.
Bubble sort was perfectly natural to me before I learned about algorithms. I think your “fairly strong collection of evidence” is weak. We teach it because it’s a study in how not to sort, most of the time.
> People's natural mental sorting algorithm is most often insertion sort. That's what you use when you hold a hand of playing cards, [...]
I agree about the playing cards. But bubble sort was the natural sorting algorithm for me to come up with, too, when I was dealing with qbasic back in the 1990s.
Bubble sort is just fine if you're sure you'll only have a few items, or if it's likely the input is already sorted, but it's still embarrassing to admit to implementing it. Even Barack Obama knows that!
>The only significant advantage that bubble sort has over most other algorithms, even quicksort, but not insertion sort, is that the ability to detect that the list is sorted efficiently is built into the algorithm. When the list is already sorted (best-case), the complexity of bubble sort is only O(n). By contrast, most other algorithms, even those with better average-case complexity, perform their entire sorting process on the set and thus are more complex. However, not only does insertion sort share this advantage, but it also performs better on a list that is substantially sorted (having a small number of inversions). Additionally, if this behavior is desired, it can be trivially added to any other algorithm by checking the list before the algorithm runs.
>In popular culture: In 2007, former Google CEO Eric Schmidt asked then-presidential candidate Barack Obama during an interview about the best way to sort one million integers; Obama paused for a moment and replied "I think the bubble sort would be the wrong way to go."
Here's an implementation of bubble sort written in PostScript by Sam Leffler, renowned BSD/Lucasfilm/Pixar/SGI/Alias/Softimage/VMWare hacker, which he sent me and gave me permission to distribute in 1988, but which he was too embarrassed to sign his own name to, so he used the pseudonym "Bobo Leffler".
Also here's a PostScript heap sort by Owen Densmore of Sun, and a PostScript insertion sort by John Warnock of Adobe, both of whom used their real names:
What is so revealing about that Obama quote, the way I see it, is how dangerous it can be to "teach what not to do". There's a significant risk the student ends up just remembering the wrong way and none of the right ways.
Ah yes, good memories. I'm having a really hard time right now remembering the name of that program that blocked access to the Finder; I think it was "Easy"-something. Google fails me on the name too. Breaking it was a hobby.
Some school labs had left access to Hypercard available through that program, so you could just pull up Hypercard and make a new stack that would tell the program to quit.
The other way to get around it -- or around many other misbehaving programs in the cooperative multitasking system -- was to bring up the programmer's interrupt like the author describes, and enter "SM 0 A9F4", followed by "G 0". This would write the _ExitToShell trap word to memory location 0 and then resume execution from there, which immediately terminated whatever was in the foreground.
The rest of the system would often be a little unstable after that though, so you only had a few moves left before a restart would be needed.
To youths with curious mindsets, an anti-authoritarian streak, and seemingly limitless amounts of free time, little restrictions like these only improved our skills. Systems with challenging but imperfect security are a great way to foster new young talent.
> To youths with curious mindsets, an anti-authoritarian streak, and seemingly limitless amounts of free time, little restrictions like these only improved our skills. Systems with challenging but imperfect security are a great way to foster new young talent.
I miss the days when the stakes for testing boundaries and experimenting were lower.
Love it. Could be fun to toss up a WEP-'secured' SSID on an untrusted DMZ VLAN that doesn't have filtering. Figuring out how to use the aircrack suite on an ancient ThinkPad to bust an SSID was what really got me interested in computers as a kid.
You could also trigger the NMI via Apple key + power button, and I remember the magic code to poke memory location zero and kill the Finder being: SM<space>0<space>A9F4<return>G<space>0<return>
> At the ">" ROM debugger prompt, type the following lines, pressing Return after each:
> SM 0 A9F4
> G 0
> In the first line, the "SM" stands for "Set Memory", the "0" signifies memory location "0", and the "A9F4" is the trap number for the "ExitToShell" trap. This line puts "A9F4" at memory location "0". In the second line, the "G" stands for "Go" and the "0" stands for memory location "0". This line tells the computer to execute the instructions starting at memory location "0". Since "A9F4" is at memory location "0", the "ExitToShell" trap is executed.
In the late 1980s, we did some Mac consulting for a largish ad agency, and one lower-tier executive wanted to demonstrate the importance of his work by having us install some password protection on his Mac.
Suspecting that this would not end well, I installed a very lightweight solution; he promptly forgot his password within a week, called me in again, and I bypassed the password prompt with the above NMI+ExitToShell() method. He was suitably impressed and asked me to write down the magic incantation on a Post-it note that he attached somewhere on his desk...
What can I say? They always promptly paid their bills!
My school's computer lab ran Windows 3.1 for Workgroups on top of DOS. The network login prompt was in DOS. There were some hackers in upper grades who had installed games like DOOM, Descent, Duke Nukem 3D, etc. in a folder named Alt+255 on many of these computers, which looked like a space in DOS but was invisible to Windows Explorer.
The lab manager found out about our Ctrl+Break hack and put in an event listener/handler for that keycode in the DOS login program. So I spent several hours trying other key combinations and eventually figured out that Alt+3 sent ASCII character 3 (ETX, the same code Ctrl+C generates), which had an effect equivalent to Ctrl+Break in DOS -- and we were back in business.
No, it was on the west coast... but I bet a lot of places were running similar setups. Network admins at the time (at least the good ones) were likely receiving the same periodicals and reading the same books.
Breaking things with ResEdit was pretty much my jr. high school hobby. It took a bit before I figured out how to restore things when I was done.
Head "tech" guy in the building eventually figured out it was me, showed me a few things and ended up just giving me an un-limited login and a few rules. Also taught me how to intelligently use ResEdit instead of the brute-force method(s) I was used to.
You didn't even need a boot disk and ResEdit. Just hold down the Shift key while the Mac was booting from the hard drive, and system extensions including Foolproof would be disabled.
I remember AtEase from when I was a kid. Somehow I discovered it on our Macintosh Performa at home and was totally fascinated by it. Kinda like Mouse Practice (https://en.wikipedia.org/wiki/Mouse_Practice), which I must have played a gazillion times.
From what I remember, MiniFinder was written for MFS, and did not support hierarchical directories. You couldn't use applications unless they were at the filesystem root.
It was At Ease. On the earlier Macs, those with a button to trigger a debugger in firmware, you could completely exit At Ease by entering the debugger and typing “G H”.
I know it worked on System 7, and I think there was a variation that worked in System 8 as well.
Apart from using hacking to overcome imposed limitations such as these, I am amazed by the number of times I had to use my chops to overcome what was simply a bug. Opening the web inspector is such a normal moment of dealing with other people's websites, I have no clue how people manage without it.
I was very... surprised when I managed to productively use "javascript:(some code)" in my phone browser's URL bar once, about 10 years ago, when I was stuck in an airport and needed to do something that just wasn't working normally. Now that we're used to smartphones it probably sounds quite basic, but it was absolutely weird back then.
About a year ago I was attempting to book a room through the Hilton payment system but my card kept getting declined.
Opening up the web inspector revealed that the JS was stripping any leading 0's from the CVV. I ended up manually crafting a curl request to hit the GraphQL endpoint and was able to successfully book the room.
I did reach out to a Platform Architect on LinkedIn describing the bug, but never heard back about it. Was a total shot in the dark, though, so no surprise there.
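If I had to guess at the mechanism (pure speculation -- I never saw their source), the CVV was being round-tripped through a numeric type somewhere, which eats leading zeros:

    # hypothetical illustration of the bug, not their actual code
    cvv = "012"
    cvv = str(int(cvv))   # "12" -- three digits in, two digits out
    print(cvv)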
>>Opening the web inspector is such a normal moment of dealing with other people's websites, I have no clue how people manage without it.
As a C++ programmer who has absolutely no idea how to even open the dev console in the browser -- I just close the website. If I can't scroll it, if it has stupid popups that I can't dismiss, if it lets me get halfway through checkout and then mysteriously empties my basket? I'll maybe give it one more try and then just close the website.
The web is broken. Someone's website wouldn't scroll (I don't remember the details, and I don't think I wanted to find out --- maybe it was overflow: hidden, but it could've been something else), and the only way I could figure out to deal with it was to open up the devtools responsive design mode and pretend the screen was bigger than it was so that the full page could show, and then use the browser's scrollbar from within that view.
I’m one of the few weirdos who actually seeks out and reads the ToS & privacy policy posted on sites I visit and I can’t tell you how many times I’ve run into a site with infinite scrolling and a footer (with all the “important” legal stuff) that a user will never see but for a fleeting moment. A feature or a bug? I’m never quite sure.
In the footer, there's also sometimes the language picker, the "desktop version" link (often acting as a button with the same URL, so you can't just save it), and the dev docs links. There's a site where I have to specifically open a page that isn't infinite to make sure I've enabled desktop mode before going back to whatever I actually came to do.
I remember the DSI bookmarklet community that utilized "javascript:". That was so long ago and I was so young, I don't even remember how I found out about them.
>Opening the web inspector is such a normal moment of dealing with other people's websites, I have no clue how people manage without it.
I occasionally use it to debug a faulty script submission or stuck websites or some paywall banner (as a consumer), but one great way to manage without it is to avoid those websites in the first place.
If I see an incompetent business website, I take my business elsewhere...
I did a nasty hack a little like this when I was in college. College wired internet had all sorts of filters on it; I can't remember the details exactly. The important thing was it didn't allow the UDP ports for Counter-Strike from the wired connection we had in a society room. The computer science department had their own web proxy, which was only accessible from inside their network. CS students could SSH into special lab machines running Linux, which were on the CS network. Crucially, you could connect to them from the general college network, not just from another CS network IP. These machines had a series of predictable hostnames, IIRC lg12l$hostnumber (lg12 was the lab's name). So the solution was in two parts: first, an SSH tunnel to the CS dept proxy; second, connecting to OpenVPN on a VPS somewhere using a mode OpenVPN has that allows use through an HTTP proxy. This happened on a machine running a DHCP server, which then acted as a NAT router for the whole room.
The lab machines I was using were physically accessible to students, who could reboot them, shut them down, whatever. So I wrote a script that would iterate through the hostnames until it found one that it could SSH tunnel through (see the sketch below). Mostly it used lg12l0, the first host. One day I got an email from the CS dept sysadmin telling me that lg12l0 was a test host in his office, not a real lab machine, could I please use another one? And no complaints about what I was actually doing with it. I still wonder if he noticed. This setup was running for at least a year, maybe two. Sorry about that mate :D
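The script was roughly this shape (a from-memory reconstruction in Python; the hostnames, proxy address, and ports are illustrative):

    import subprocess

    HOSTS = ["lg12l%d" % n for n in range(30)]  # lg12l0, lg12l1, ...

    def open_tunnel():
        for host in HOSTS:
            # Forward a local port to the CS department proxy via the
            # lab machine. BatchMode + ConnectTimeout fail fast on a
            # machine somebody has rebooted or shut down.
            proc = subprocess.Popen([
                "ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
                "-N", "-L", "3128:proxy.cs.example:3128", host,
            ])
            try:
                proc.wait(timeout=5)   # exited quickly => tunnel failed
            except subprocess.TimeoutExpired:
                return host, proc      # still running => tunnel is up
        raise RuntimeError("no lab machine reachable")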
Wifi from those days wasn’t nearly as good as a hard wire. Also, it might have just not been available. It wasn’t nearly as ubiquitous 20 years ago as it is now.
Play Counter-Strike on wireless? Hahaha, no way man. CS is a highly latency-sensitive game (as is any competitive FPS). Wifi is much too prone to disturbance and packet loss, so you will experience lag/jitter/loss.
Everything interesting on twitter takes more than one tweet.
I wonder if there's a place for "fast Usenet", where you can write a message of arbitrary length with some markdownish syntax, let it be flooded to anybody subscribed to that group, and automatically drop into a searchable no-new-comments archive after, say, a week.
Curious why your comment got collapsed, it's very interesting.
Genuine question: why archive?
While I don't really have a substantive list, I very often come across archived threads on here and elsewhere that I'd like to respond to. "Wow this is awes--" "oh it's from 2016".
I guess I just don't understand the hate against necroposting. If there's a thread out there that has exactly the context and disposition I was looking for, _let me respond to it_. In a lot of cases I don't actually care if any reply(s) take 4 years to come through!
Why the anti-necroposting provision? Because if it's really relevant, having you jump through the additional hoop of quoting the good bit and copy-pasting into a new thread not only forces you to put a little bit of thought into it, but also prevents threads from being months or years (or decades) long and thus only being accessible to people who have read the whole darn thing.
In my experience in Usenet, conversations go off topic remarkably quickly.
Maybe HN should append [twitter tale] or something like that to the title as they do for [video] and [pdf]. Then I would first try to find a one-page transcript in a comment...
I have a similar story, but I was not quite so clever. Back in 1990 and 1991, I used a cluster of IBM RS/6000 workstations at my university to train neural networks. I had previously tried to get backpropagation to work on a Connection Machine but found it too frustrating to work with. The cluster of RS/6000s actually ran my code (written in C) very fast, and I was able to get some good results and graduate with my PhD. I built a distributed queuing system to ensure that each workstation would pull the next job from a central queue.
The main problem was that it was hard to get time on our CM-2 to run jobs. You had to sign up in advance and it was a nuisance. Also, the parallel version of C for the CM-2, called C*, was pretty hard to debug.
There used to be something called PVM - Parallel Virtual Machine - a fairly simple library for distributed computing. Had to use it once back in Uni, it wasn't bad.
For anyone still at or associated with a university, how would they react to behavior like this today?
When I was in school (circa 2000), the IT offices were starting to crack down on students (with some threats of expulsion) for activities like this, though it wasn't yet typical or uniform. I know at GT there were a few computer labs where, if you paid a bit of attention, you could easily get SSH access to every computer in the cluster, and then using nohup or screen (this was pre-tmux) you could have your program run as long as the system was up. I had to ssh in a couple times because I'd forgotten to log out and didn't want to get "baggy pantsed".
In a public lab that is used frequently, I believe this would be viewed quite negatively. Not from a "you hacked our systems" perspective but a "it could have been damaging to other students' education if things had gone wrong". To make the differentiation somewhat more clear, filling up a bucket of water from the bathroom sink to clean something vs. removing all the shower heads and making a super funnel of hoses to spray wash your car outside. The former activity is clearly much less likely to cause issues with other students. If the computer lab was in some basement used by 3 students throughout the year, I don't think anybody would care.
Building on that, many universities have computational resources for any level of need, as long as it's justified. Free network storage to last a lifetime and access to computer clusters come with your university email. Upgraded computer clusters for class projects, personal research (like this), or really any legitimate need (as long as you don't say something like "I want to mine bitcoin") are an email away from university IT. The next level up is the university's supercomputer, which generally needs a short proposal justifying the academic purpose. With these systems in place now, it is kind of hard to justify this type of thing.
Edit: The author of the post put a comment basically confirming that things haven't really changed. He was being respectful (low usage lab, allowed people to stop his programs if they needed a computer, low impact to others, etc.). He was given access to more legitimate resources afterwards at the request of the professor. Not as easy as it is now, but that's Moore's law.
I graduated last year, and I worked for my college's IT department. We used to have to come knocking at the door if a student even had their own printer set up.
It definitely would not go well.
That being said, I did know people who clogged the CS department's machines with batch jobs to train really expensive ML models that did God knows what. It's not the same as bypassing a security system, but it is an instance of people having the ability to run whatever code they'd like in certain circles.
Funny, because I remember bypassing my university's print quotas in grad school a few years ago simply by printing directly to the IP of the printer. I didn't set out to do this. It was just an unintentional side effect, and one I never abused, but I did happen to notice that whenever I printed, my quota allotment never went down.
I did this out of pure convenience, and actually never experienced the university's Windows account system at all (would I have had to sign up for an account? who knows). I had a list of strategically-located printers in my printcap, chose one on the way to whatever class I was headed to, kicked off the print job, and collected it as I passed by to class. In class I would get the previous week's assignment back, take the staple out with my pocket knife, and reuse it to staple the current week's homework before turning it in.
I couldn't imagine a situation where if someone knocked on my dorm room door asking if I had a printer in there, I would not have told them off with very unflattering language and telling them to perform fellatio upon themselves.
How many students are just rolling over and saying sorry? I would imagine most would give you a very hard time.
When I was in university IT ~2008, our main concern was people accidentally serving their own DHCP onto the network and colliding with ours, generally by plugging in a router LAN-port-out. I could imagine a decree with this goal saying "no personal devices" to keep it simple.
> How many students are just rolling over and saying sorry?
Probably most/all. IT infrastructure is critical and there's a very good chance that the IT staff have a process in place for dealing with kids who won't let them do their jobs. And it's probably a pretty effective process that has been refined over the past decades.
Replying all gets you booted off the school network for a certain amount of time; it's in the TOS, so they can do it. I imagine people value network access over having their own printer.
In 1993 I was one of about 3 people on a dorm floor of ultraturbonerds who had a printer in my room; it was a Canon BJ130, I think, which had the distinction of being a tractor-fed 300dpi black-and-white inkjet, and thus much much quieter than my friend's IBM ProPrinter 24XL. How much quieter? It wouldn't wake up people in the next room.
From what my college-going friends tell me, some colleges want students to pay for a higher print quota instead of having their own printer. More money for the college that way.
A primary mission of the Pittsburgh Supercomputing Center is to train students, including undergraduates, in high performance computing. To this end, PSC offers Coursework allocations which are grants of free supercomputing time to supplement other teaching tools.
Typically, Coursework allocations have been used in heavily quantitative subjects, such as numerical methods, computational fluid dynamics, and computational chemistry. But we encourage all fields, including the social sciences and humanities, to take advantage of Coursework allocations.
I think that it totally depends on the definition of 'behavior like this today'.
If 'behavior like this' is creatively toying with lab equipment that you have been given physical access to, bending the rules a little bit in the process, to participate in an academic challenge? If it were up to me, a slap on the wrist would suffice, now just as well as presumably in 1993.
But if 'behavior like this' means breaking into computers, which nowadays are essential parts of a university's infrastructure, in straight violation of rules and conventions on hacking that have been in place since before you were born (say 2000), just to get some computer time to, say, mine whatevercoins? Then a little more than a slap on the wrist would be completely fine by me.
Hah, fair enough, HPC clusters are certainly more common than when I went to school. In that case, take the "like" to be a generalization (how I meant it): A student (or students) pushing the limits of the existing rules and infrastructure to accomplish something productive or interesting. Yes, it breaks some rules, but what's a typical reaction anymore? Do they get hired by the university or pushed to work with particular researchers like when I was starting school or do they get pushed out or punished like when I was wrapping up college?
Today the sysadmin sends you a frustrated email asking you not to do that again. You are using someone else's stuff that is carefully maintained by other people; it's polite to abide by the rules those maintainers have set up, since it's their job and their free time on the line when things go wrong.
Not even that. You can reach those kinds of levels with a GPU these days (and GPU programming is sufficiently obscure that most people won't even attempt to go that route).
Though... with the whole GPU shortage going on right now, maybe it'd be easier to steal computer time from someone else's lab right now!!
When I was in uni a few years ago it wasn't exactly unusual for students to use the PC pool for things like this, but you didn't need to circumvent any security to do it (which presumably would have drawn some ire - some poking at security was ok, but always a bit uneasy). Nobody cared unless you really got in the way of people wanting to work on the machines.
At my uni, it's pretty much solved by having the students' computer society own the most capable machines on campus (sponsored by Facebook, if I've understood things correctly). If anyone wants something done, they just ask the society's root and move on with their newfound computer resources. This access can range from a tiny VPS (automatic access for all students) to shell access to the (small but capable) HPC cluster.
My university had a lab specifically for computer security research. So you could do anything you wanted in there, including screwing with network infrastructure (though you were asked to plug all the machines back into the appropriate ports after you were done).
There were apparently scripts set up that would reset everything every Sunday.
True, you're not going to get this extreme level of access to the machine. But even doing things like ssh'ing in to many machines to use it as a personal distributed cluster, even if you did everything to make your program run at the lowest reasonable priority, got some people in trouble at one university I was at.
People who are smart as hell using a tool built around what feels like mental whiplash to tell a relatively complex story.
I actually think there is value in Twitter applying pressure toward economy of words (I would fall into the category of over-explainer), helping folks like myself be more concise, but can't the story just go here?
Thinking out loud, I'm sure there is an approach to hitting multiple networks simultaneously, but it'd really be nice if linking to a Twitter thread roll-up just automatically dumped the whole text here without needing to click into and pull in ${threadReaderTooling}.
Back in 1992 I was using a neural network library in Pascal. I did not know the admins were killing long-running nohup tasks (rather than the program failing), so I could not run it on the UNIX boxes in the background; hence lots of foreground runs and Usenet reading. The neural network use was applied -- 'how do the number of nodes and number of layers matter?' -- to analysing fetal heart rate data. Backpropagation was built in. I was recovering from a nervous breakdown, so my virtual neural network having breakdowns was a little bit strange. I got a 2:1 for the project, i.e. good, but a 2:2 for the whole final year, i.e. very average.
Amazing how times have changed. Took just over 15s inside Mathematica on an old Intel MacBook Air with dozens of other apps running. Didn't even spin the fans up.
The twitter thread and this comment section is a good reminder I don't belong on HN. I feel like the most complex coding I have done is write a CRUD web app haha.
As long as you can be thoughtful and curious about tech-related stuff and communicate that thoughtfulness well in text, you belong on HN. There's certainly a tilt toward career SV types for obvious reasons, but this place has everything from genuine, jaw-dropping geniuses to people who are just hobbyists, and a lot in-between.
The HN community is awesome. But the comment thread web UI is a big duh! I'm not comparing it with Twitter, as I haven't used that much. I can't use HN without Materialistic on Android. It's so much easier to collapse and expand comments on mobile than having to click on a small + sign.
FWIW, I'm from 1990 as well and I prefer to read plain websites with no JS/CSS. But the HN comment threading interface could use some love.
My highschool physically removed the right mouse button from the mice, because we were right clicking to make text files that we'd rename as batch files to get a command prompt open.
Some people would just bring their own mouse in to get past the defences.
Oh and it wasn't. That was the least of the school's problems though.
They also used a surveillance system called LanSchool, which sent out all of its commands entirely unencrypted and unauthenticated, so people would spoof the remote takeover command and steal exams from teachers' accounts. It ended up being a whole thing my senior year.
This thread is making me remember how incompetent my old school district used to be with technology. Once upon a time, a friend of mine got detention for opening a command line and running `tree`. The teacher said that he was hacking the computer.
Ah, the era of primitive DIY security. Those were the days. It's kind of sad that today's kids have to deal with industry-grade encryption and professional best practices, instead of going toe-to-toe with an underpaid, self-taught, middle-aged "computer guy" like my generation did...
I had a few amusing tactics. Even with 'professional best practices' (debatable), the human element was still vulnerable.
One example was that my school refused to give out the Wi-Fi password as a form of security, so they'd demand to manually type it in themselves on your machine. A simple keylogger and now the whole school just ignored that rule for a few months.
Another is that while they tried to block things like SSH, VPNs, etc... to get around the school's internet filtering, turns out you could just run SSH over port 80 and have a tunnel out :)
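The tunnel half was nothing exotic; with sshd listening on port 80 on a box outside, it was roughly this (a sketch only, hostname invented):

    import subprocess

    # The filter sees a connection to port 80 and waves it through.
    # -N opens no remote shell; -D gives a local SOCKS proxy to
    # point the browser at.
    subprocess.run([
        "ssh", "-p", "80", "-N", "-D", "1080",
        "me@outside-box.example",
    ])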
Our computer guy was pretty cool; he set up Starcraft LAN parties during recess. However, he also used passwords like Spring1998, so everyone could get year-round admin rights if they knew his pattern.
When I was in college, we would set the process name of the MUD client to whatever WordPerfect used on the VAX so we wouldn't get busted for playing games during non-gaming hours. Of course the sysadmins saw right through the charade, but it was fun to feel like you were getting away with something for half a minute.
My current (government) employer blocks installation of Firefox and Chrome setup exes, but it's still just a hardcoded block on Chromesetup.exe and can be bypassed by calling it literally anything else.
In my high school IT class we got to play Halo CE if we got all of our work done early. I made a "play game" batch script that started the game, but also planted a secret batch script with an unsuspicious name into your startup folder, which would open and close your CD drive at random intervals. The teacher let it go on for a week or two; he thought it was hilarious.
In 1967, while I was in school, I reverse-engineered the Montreal Metro system transfer tickets. It used a custom paper tape-type punched code. I was able to manufacture tickets. Was real popular with my classmates.
In 1983 I reverse-engineered the Xerox 9700 laser printer font format. Xerox were real SOBs and refused to share technical info with their customers, who were paying $12K/mo rent on the huge printers (120 impressions/min., two-sided). I altered random bits in the font files, and displayed the distorted results. My apotheosis was selling new fonts to the local Xerox branch.
Back in high school (which wasn't really all that long ago), just about every computer on the network had a really massive set of content and program restrictions, but it was all very surface level. Very little security if you knew what you were doing, and thankfully the login script used for your district-wide account was run in PowerShell.
Not nearly as interesting as OP's hack here, but there was plenty you could do if you just knew what to use PowerShell for. When the machines got upgraded to Windows 10 from 8, every computer you logged into would use the Win10 default background instead of the blank blue one they used previously, and in my CS class I found it rather distracting. Of course I went over the top, and since you were disallowed from changing the background picture in Settings or with the right-click menu, I created a little tool my freshman year to change the value of my user account's registry key for the background to whatever else I wanted.
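The core of it was just writing one per-user registry value and telling Windows to re-read it; something in this spirit (a latter-day Python sketch rather than what I actually ran, path invented):

    import ctypes
    import winreg

    WALLPAPER = r"C:\Users\me\Pictures\winter.jpg"  # any image you like

    # write the per-user wallpaper value...
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Wallpaper", 0, winreg.REG_SZ, WALLPAPER)

    # ...then ask Windows to re-read it (SPI_SETDESKWALLPAPER = 20,
    # SPIF_UPDATEINIFILE | SPIF_SENDCHANGE = 3)
    ctypes.windll.user32.SystemParametersInfoW(20, 0, WALLPAPER, 3)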
I understand IT's reasoning against giving highschoolers that access (preventing sexually explicit backgrounds), but I just wanted a nice winter landscape, and so when they eventually caught wind they didn't really mind, though they did give me the 'with great power comes great responsibility' speech.
Over my next three years, the tool grew to give me command-line access, bypass certain parts of the filter, play a bit of music over a pair of headphones -- the works -- and I was always the cool guy with the one non-standard background in class. It probably looked much more like I was "hacking the mainframe" than what I was doing in reality: playing MUDs over telnet during boring presentations.
> But they were running some kind of secure software that didn't give you access to the Finder, only specific software, so I couldn't pop in a disk and run my own programs.
That software would be AtEase. And I used to run circles around it trying to break out into Finder and gain access to the nascent, to me, Internet. One of the first things I installed was NCSA Mosaic. Good Times!
I once had a modest hack with an immediate payoff. We were using a troublesome third-party C++ library (back in the cfront days) whose source wasn't available.
I took care of one critical bug this way:
#define private public
#include <foo.h>
Now all the innards of the foo class were available to be fixed!
And nothing has changed. It's probably even worse. In most modern software companies you are blocked by a lack of permissions at some point. Hardware and software have become more and more hardened and complex. Thus, tinkering with them to find workarounds requires luck or very, very deep knowledge.
I wonder what the alternatives are. I think systems that provide fully-enabled sandboxes are the way to go. Also, self-service instead of tickets (are the humans approving your tickets any better at detecting unauthorized requests?). I think that's why containerization is so popular.
When I did a university security course a few years ago that involved a contest to factor products of large primes, those of us who wanted to be competitive had to buy AWS processing time...
I have memories of the kiddie version of this, which was using "locked" down computers at stores or libraries and abusing the hell out of the context menu from, say, an arbitrary print window that shouldn't appear, which would normally be enough to get you a separate Explorer (or File Explorer) window, from which you could do whatever. Or enough to launch a command prompt as well, depending on the system.
I remember discovering Cain & Abel in high school, then discovering that my principal (who had gone to the same high school years earlier) had logged into the computer I had for my keyboarding class. Turns out, he still used the same password a decade later… I had way too much fun with that password.
Brings back memories of my friend borrowing the lab computers over the weekend to do some long running Monte Carlo simulation runs (so trivially parallel). Except he had a bug that implicitly gave all the runs the same RNG seed. Took a bit of head scratching to work out what the hell had happened.
A quick lookup of CPU speeds of yesteryear shows the Motorola 68040 in those Macs did about 35 MFLOPS [1].
If you could parallelize this program to use all 2.6 TFLOPS in a Macbook Air M1's 8-core GPU, you're seeing about 75,000x speed improvement. So a 12 hour "overnight" job may take just over half a second.
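The back-of-the-envelope arithmetic, for anyone who wants to fiddle with the assumptions:

    m68040_flops = 35e6      # ~35 MFLOPS, per [1]
    m1_gpu_flops = 2.6e12    # ~2.6 TFLOPS
    speedup = m1_gpu_flops / m68040_flops      # ~74,000x
    seconds = 12 * 3600 / speedup              # 12-hour job -> ~0.58 s
    print("%.0fx, %.2f s" % (speedup, seconds))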
I generated two random 35 digit primes, multiplied them, and then tossed the product into the first hit for "factor integer online": https://www.alpertron.com.ar/ECM.HTM
I'd very much like to revive my old code (which I still have) at some point and see how well it runs on modern computers. My guess is that a typical 64-bit ~3 GHz quad-core machine could accomplish the same task in a few minutes today.
Hate to be the choosing beggar, and I appreciate all the answers, but ... everyone is answering about how long that 70 digit job would take now, when what I meant was, how big a number could you factor “overnight” with similar cost hardware.
The purpose of the FaaS (Factoring as a Service) project is to demonstrate that 512-bit integers can be factored in only a few hours, for less than $100 of compute time in a public cloud environment.
On that note, I'd like to share a clip about how fast modern machines are relative to early 90s supercomputers (the relevant part is in the first 5 minutes):
Back in 1969 I was taking a number theory class, and I remember being taught that it was the purest of pure math subjects with (essentially) no application. It was simply a subject with beautiful results and beautiful proofs—good training for a budding mathematician.
Ironically, now the results of number theory are an important foundation for essential computer science.
I love the stories here. Mine isn't as cool or flashy, but it did save me some money in college. My university implemented a $0.05/sheet printing system. We found out that they didn't lock down the printers themselves. We could print a test sheet at the printer, get its IP address, then add it directly to our personal machines (see the sketch below). We saved a ton of money and hassle, since buying printer credits was a pain.
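(Most networked printers accept raw jobs on TCP port 9100, the JetDirect port, so "adding it directly" can be as little as this sketch -- IP and file invented:)

    import socket

    PRINTER_IP = "10.1.2.34"   # read off the printer's own test sheet

    with socket.create_connection((PRINTER_IP, 9100)) as s:  # raw/JetDirect
        with open("notes.ps", "rb") as f:  # printer must speak the format
            s.sendall(f.read())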
From around 1996-1997, when Linux desktop use became popular, some university departments had a distributed computing system installed on both classroom computers and staff desktop computers, to run distributed jobs when the computers were not otherwise in use.
Meanwhile, in my number theory class we just learned how to write loops in Maple.
Not sure if this was some graduate class, or if there really are undergraduates with enough CS + number theory knowledge who could be expected to pull off something like that.
SM 0 A9F4 writes A9 F4 into memory at address 0.
G 0 starts executing at address 0.
A9F4 is the ExitToShell A-trap, which terminates the current process.
With those commands you could terminate the foreground task on System 7 and were (usually) given a chance to save or recover something before rebooting.
It could also be used to kill At Ease on most school setups.
Wow, blast from the past. I learned so much poking around on my mac with Macsbug and later, TMon. I think that most of the time I was just looking through games for where they stored things like experience points.
Just curious, could you just put the address for Finder into that hook instead of the auxiliary program? Or even better, just put the main MPQS program?
Playing Roblox and Minecraft? Well, maybe doing some remote system hacking or flashing LineageOS on their smartphone. But the fact is that systems are much more complex and secure today. There is a high barrier to entry and too much to learn.
This sort of thing is really impressive. But at least for me, it stops being impressive and starts becoming tedious pretty quick. Again at least for me, it's important to know that I am capable of doing hard/tedious/impressive things, but it's not important to keep doing them over and over. Because the tedium just kills it.