I remember my freshman year of college in 1994, when I took C++ and asked the professor how to write a GUI program. She told us about the Motif toolkit. I got a book from the library and typed in a hundred lines of boilerplate code just to make "Hello World" in Motif and C.
It then took about 5 minutes to compile on a DECstation 5000 with 32MB RAM running Ultrix.
Someone in the computer lab saw me with the Motif toolkit book and said "Why don't you use Tcl/Tk?" I said "What's that?" and they showed me. I had "Hello World" in 3 lines and could edit and rerun my new version in a couple of seconds, instead of waiting minutes for a compile.
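From memory, it was something like this (running under wish, the Tk shell):

    #!/usr/bin/wish
    label .hello -text "Hello World"
    pack .hello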
After I graduated I got a job in the semiconductor industry, which ran on Solaris and HP-UX, and now Linux. All our digital tools from Cadence and Synopsys started using Tcl as the built-in scripting language. I ended up teaching my older coworkers how to program in Tcl. I was still writing Tcl yesterday. There are a lot of things I hate about it, but it has stuck around.
The text before the interview mentions AFS, which I did not think Ousterhout had anything to do with. I thought it was from Carnegie Mellon and then IBM Transarc. We used AFS at my school and I still miss it. Going to NFS in the industry seemed like a step backward.
We only had an 8MB quota for our entire Unix account in college. It was great when working on a group project that I could make a dir in my account for that project and then add the 3 other classmates' login IDs to the group dir's ACL.
These days at work, depending on the size of the company, we have to send an email to the sysadmin/devops team saying "Add user ABC to project group ID XYZ" and then wait hours or days. At my current job we can request Unix group access through a web site, but I still think it is some human manually running commands to add users, because it takes about 24 hours.
> The text before the interview mentions AFS which I did not think Ousterhout had anything to do with. I thought it was from Carnegie Mellon and then IBM Transarc.
Ya - I wonder if the article writer actually meant LFS (log-structured file system)[0], which was part of the Sprite operating system project.
I use tcl in EDA regularly also. It's quirky, but useful. I'd not choose it for anything non-EDA today, but it works well in that niche.
The most horrific thing I discovered about tcl is that curly braces within comments can affect flow control. This is because comments are interpreted as a command that does nothing, not the same as being completely ignored. Finding that out the hard way was not fun.
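A minimal sketch of the kind of thing I mean (hypothetical proc):

    proc demo {} {
        # this comment has an unbalanced brace }
        puts "you'd expect this to be inside demo"
    }

The } in the comment is counted while Tcl scans for the end of the braced body, so demo's body ends at the comment, the puts runs at definition time, and the trailing } becomes an "invalid command name" error.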
> The most horrific thing I discovered about tcl is that curly braces within comments can affect flow control.
This is not true. Once Tcl sees a valid comment, the rest of the line up to the newline is treated as a comment. An odd number of trailing backslashes suppresses the newline for interpretation purposes. Within the comment you can use braces or whatever without affecting control flow.
Tk is what made people use Tcl, because it was rare to have a scripting language for GUIs on Unix workstations.
The Tcl language is actually really clever and interesting, but the textual evaluation model (a much smarter approach than the Bourne and C shells) still made it poor for representing and manipulating graphs.
I was lead for the extension language in a new CASE system, which had rich semantic graphs, including cycles. Tcl ended up being chosen by a cofounder, so I wrote up a doc on language limitations and how to work around them.
(I'd also argued in the early/mid-'90s that the high-level extension language should also be the application implementation language, such that we'd develop much of the application functionality ourselves in this high-level language, atop a C++ or C core API for the foundations that needed it. Python eventually went this route, but hadn't yet. At the time, people in these sectors (which were very different from MIS developers) were thinking of extension languages as things customers could use for simple reports, crude customizations, or integration glue. Python and Tcl were new candidates for extension languages, and some big-ticket CAD/EDA/publishing tools had already been using their own more powerful Lisp-variant implementations.)
> We only had an 8MB quota for our entire Unix account in college.
This brings back some memories for me. In high school we only had a 10MB quota. I remember bringing in stacks of floppies and using file-splitting programs to work around the quota so I could download junk off their T1. One of my friends even blew all his McJob money on an external Zip drive. By the time I started college, USB thumb drives came into vogue and made everything so much easier.
> One of my friends even blew all his McJob money on an external Zip drive.
Ah...I recall doing this too. Funny to remember so much of that money going to buy the best possible hardware for the least amount of money.
Like I thought I won the lottery when I got a deep discount on an OEM AWE32 card that came loose in a cardboard box, literally bare and rubber-banded to a pile of floppy disks. Just so I could have low-latency MIDI and load SoundFonts into the card.
The Zip drive was frustrating though. I got to college and had all my HS portfolio work on a Zip disk but no drive anymore. So I bit the bullet and bought a drive, thinking it would be a while before that format was useless.
Functionally I was right, but for practical purposes I never used the drive much after copying my files over. Especially after hearing about the... "click of death" issue, I think it was called?
Brings back memories for me. Started college in '95; the system had a 10MB quota on AFS, and I also made use of the ACLs for projects with others. Systems ranged from Solaris to IRIX to AIX. My first GUI app was in Perl/Tk since I already knew Perl for scripts; it was a mail-watcher utility that showed recent messages in a window.
Thanks for this article. A few years ago I had a task involving reading some data files to understand whether they were corrupted or still usable. The software to read the files usually came as part of a proprietary package with a price tag high enough that it made no sense to license for this one small task.
I asked myself: "how hard can it be?"
I decided to search online for easy and free ways to read the files. It had been several years since I last programmed anything, though I had made a few modifications to some perl/awk/sed scripts. I found Tcl/Tk after a bit of digging and decided to give it a whirl.
In all I spent about 16 hours dinking around with Tcl and had a nice, short program written to read everything, display the data for a QC check, and write it to disk. The documentation was detailed and concise. Of all the programming languages I have used over the years, the fact that I could sit down, find the right set of simple commands, and assemble all of it into a working program in a couple of days with no learning-curve pain or friction was amazing.
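The core of it was not much more than this kind of pattern (file names and the 8-byte header peek here are made up for illustration):

    # read a binary file, peek at it for a QC check, write a copy
    set in [open input.dat rb]
    set data [read $in]
    close $in

    binary scan $data H16 header   ;# first 8 bytes as hex
    puts "header: $header, size: [string length $data] bytes"

    set out [open output.dat wb]
    puts -nonewline $out $data
    close $out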
I managed to completely mangle that post via phone editing. Meant to say: I re-read it whenever I fall into a technical hole, as it reminds me that design and craftsmanship are the most important aspects of building great software.
John Ousterhout ... Tcl may be the thing most associate with his name, but to me he's Mr Magic[1]. Magic is the research program that (helped) beget efabless[2] and gives me hope of making my own chip some day, just after this next major project is over and done with.
I've spent around 10 years building software with and around Tcl, embedding Tcl as command processors in a distributed system; lots of fun.
Some decades later, I see that Ousterhout (among others, of course) is still contributing to the community, e.g. the Raft consensus algorithm, even as the steam may have left Tcl.
The whole "is Rust relevant" feels like that needs the Ousterhout "A little bit of slope makes up for a lot of y-intercept" graph[1] stapled to the same.
I'm not sure what the y-intercept of Rust is, but its slope seems to be pointed towards more code being written in it than ever before. For example, compiling to Wasm is probably the clincher for my toy projects, because I like to build stuff to run on a website but I don't always like writing JS.
You just have to look at domains where C++ rules to see that, while the community is aware of Rust, hardly anything is happening in HPC, HFT, GPGPU, Khronos APIs, LLVM/GCC, MPI, OpenMP, AUTOSAR[0], Unreal/Unity/Godot, PlayStation, Xbox, Switch, official Apple and Windows APIs, ...
So while Rust is being adopted, it is at the same scale that C++ was against C around the early 1990s, in regards to market-size adoption.
[0] - They are looking into it, but there is still no standard update that allows for Rust.
There is an official work-in-progress Windows binding, still far behind what the existing C# bindings are capable of, or legacy toolkits like MFC.
Also, given how the team managed the C++/CX transition to C++/WinRT with lesser tooling stuck on C++17, and dropped the modern C++ bindings [0][1] before moving on to the next shiny thing, I wonder how long they will keep at it.
It doesn't matter if the project is driven by Microsoft or not, the cat (of automatically generated language bindings) is out of the bag. E.g. Zig is using the same approach without being an official MS project: https://github.com/marlersoft/zigwin32, and Apple has an automatically generated C++ API for Metal (https://developer.apple.com/metal/cpp/).
In the future, the question won't be "what language do I need to learn to code on this platform", but instead "where are the language bindings for my favourite language".
I think it's just a balancing of where work is needed.
C++/CX (and its predecessors) was bad in requiring special compiler support, whilst WinRT had already seen real adoption, and cppwin32 didn't really give any benefits apart from another backend. So they seem to have concluded that C++ devs would be more easily supported by something more mature (that needed support anyhow), and then just focused the win32metadata project (that's still alive) on "new" languages.
Adoption by WinDev; nobody else cares after they screwed their customers.
They should all have been fired if it was up to me; we don't pay for VS licenses to have our workflows killed like that.
WinRT is a Windows-only technology; who cares about extensions?
Plenty of people don't have any issues with GCC and clang extensions, or macOS-specific ones like Objective-C++, or TI, or the ARM SDK, or whatever fancies their party; only MS ones are bad.
Rust is being used where it makes sense, generally security. So, gamedev won't use it for main loops but will use it for networking code. The crypto Ponzi brigade chose Rust for a reason. Some of us who write communication stacks (TCP/IP, CAN bus, etc.) try to use Rust whenever possible.
However, you are correct that Rust's popularity is growing, but it's going to be slow. If you look at the TIOBE index, there is a limit to what Rust can really displace. A lot of the top languages are now GC-based; you're not going to reach for Rust if you can use those. So, Rust can realistically only displace C, C++, and ... nothing else. That means almost 75% of programming language use cases aren't even in scope. And C programmers really aren't going to gravitate to Rust, as it's more of a C++ replacement.
So, you need a specific use case (security) or a greenfield project that would otherwise use C++. That's going to be a slow climb.
This is one of my favorite books on software development. It focuses on timeless principles, but uses concrete examples to ground everything he teaches.
It is an excellent book, and a great example of a low-BS human being striving to get the point across rather than dressing things up to look smart.
I don’t think threads are really a concept that’s applicable to Verilog/VHDL.
Stuff operates in parallel in those languages, because you’re essentially describing little machines that all run independently of each other. You probably have some handoff from one machine to another. It’s more like Factorio and less like multi-threading.
Everyone is talking past each other here at different semantic levels.
From the old HN convo, which I agree with:
> Structured threads aren't that hard (e.g. task-based systems, thread pools).
> Unmaintainable raw-pthread messes are a nightmare sequel from the director of Endless GOTOs.
I think @Amelius is overgeneralizing about parallelism, while Ousterhout is talking specifically about fine-grained locking as a particular way of implementing parallelism.
Richard often calls sqlite itself “a Tcl extension that escaped into the wild.”
It’s no accident that a lot of sqlite tests are written in Tcl, or that a “weird language like Tcl” has first-class sqlite support. In fact, for a while Richard also sat on the Tcl Core Team (TCT), shepherding its development.
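For anyone who hasn't seen it, the Tcl interface is remarkably terse; a minimal sketch:

    package require sqlite3

    sqlite3 db :memory:    ;# creates a new Tcl command named "db"
    db eval {CREATE TABLE t(x INTEGER, y TEXT)}
    db eval {INSERT INTO t VALUES(1, 'hello')}
    db eval {SELECT x, y FROM t} {
        puts "x=$x y=$y"   ;# column names become Tcl variables
    }
    db close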
The C source has been translated ('transpiled') to Go source. That's not what is usually meant by 'implemented in', but I guess it's not wrong, since there now exists an implementation in Golang.
Ah, ok, that makes more sense then. The "implemented in. Not a clone" made it sound like the actual, core language implementation had been switched over to Go... and clearly that didn't sound right.
Right, the reference Tcl implementation is still in C. This project takes the C code and produces Go, along with a supporting libc emulation in Go. It's still the original C code, not a clone like Jim or Picol.
Thanks for the clarification. I don't use Tcl for normal work anymore (though I still use it for small things), but it still has a very special place in my heart. It's one of my favorite languages to program in.
I don't want to like tcl at all, but it's absurdly easy to embed in C, has very powerful constructs once you get used to them, and has no Global Interpreter Lock like Python.
There was a discussion recently about why people still use C or similar, and I think that C+Tcl is a potent combination where you can mix C and Tcl in a very natural way, such that you never do in C what Tcl does much more easily, but you can still get excellent performance where it counts. It's all theoretically possible in Python, but it just feels like much more work.
This is not true. Journaling file systems trace their roots at least as far back as Cedar ("Reimplementing the Cedar File System Using Logging and Group Commit") in the late 1980s. Ousterhout and Rosenblum worked on log-structured file systems in the early 1990s, which are markedly different from modern journaling file systems (a journaling FS uses a log to ensure crash consistency, but metadata/data end up in a fixed place; an LFS puts everything into the log and keeps it there).
I am fortunate enough to have had John as a professor. He truly cares about his teaching and his students. Each Friday, he would end the class with a life lesson/a question to think about over the weekend unrelated to the course content, just trying to help us think about how to lead more fulfilling lives. I have a lot of respect for him as a human being, on top of his contributions to the field.
Echoing this. One of his favorite such lessons was "a little bit of slope makes up for a lot of y-intercept". It's easy to get demoralized encountering someone way more skilled than you. But the right response is to learn from them and accelerate your own growth.
Ah, Tcl. A madman's idea of Lisp-style metaprogramming power. But nothing is faster for building GUIs than Tcl/Tk, especially for wrapping already existing Unix functionality. Not even something like Visual Basic. So it has earned a special place in my heart, even if it's not my first choice for generic application development.
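For instance, wrapping an existing Unix command in a GUI takes a handful of lines (a made-up example):

    # a button that dumps "df -h" output into a text widget
    button .run -text "Disk usage" -command {
        .out delete 1.0 end
        .out insert end [exec df -h]
    }
    text .out -width 80 -height 12
    pack .run .out -fill both -expand 1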
Sorry, I didn't mean to imply all of OEM was based on Oratcl, just the Tcl extension used to access the database. I was listed in one of the OEM manuals in the copyright statement, along with the RBOC where I worked when I created Oratcl. I don't recall the names of the people at Oracle who contacted me to tell me they were using Oratcl, perhaps 'Mario' or 'Eric' ??
The funny part was when I got an email from someone at Oracle, perhaps in legal, who wanted me to sign an "indemnify and hold harmless" statement for my code, code that I shared freely, received no compensation for, and had no control on how it was used by Oracle in their product -- Rrriiight. I replied "no thanks", and if Oracle wanted to use my software, they simply should comply with the BSD license that it shipped with, including the attribution clause.
There's also a 'ps' listing on page 50, with a user id 'aholser', if that rings any bells.
I haven't used Oracle since about 1998, and never Oracle's Enterprise Manager or whatever agent, so no idea how well it worked. I'll read your article.
TCL is awful. I am unfortunately forced to use it because the entire EDA industry has settled on it as the scripting language of choice (silicon engineers tend to be pretty terrible programmers in my experience; despite silicon design and verification being programming, they often excuse shitty practices by saying "we aren't software engineers").
Performance isn't such a big deal - even though he says it isn't as fast as a really slow language like Python, the scripts tend to be very short.
The issue is "everything is a string". Anyone who has written any significant Bash or CMake will be familiar with how shitty that is. TCL is actually a bit nicer than CMake or Bash, but it still suffers from the fundamental problems of stringly based languages.
Quoting is insane and hard to get right. Type errors are very common. I would say TCL has had a negative effect overall, at least on the EDA industry. Maybe if it didn't exist we could have had a slightly nicer embedded language like Lua. Or even just a C API.
This really makes me wonder if you’ve used Tcl. Quoting in Tcl is so much simpler than in e.g. bash or cmake. It’s all about knowing when a string will be evaluated or not.
> Type errors are very common.
In my experience, no more so than in any other dynamic language, e.g. Python or Lua.
Again in my experience, if one wants a scripting language — i.e., one used to orchestrate a number of commands — then Tcl is exactly what the doctor ordered.
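E.g. a little glue like this (made-up log files) is about as short as it gets:

    # count lines in every log file and flag the big ones
    foreach f [glob -nocomplain *.log] {
        set n [string trim [exec wc -l < $f]]
        if {$n > 10000} { puts "$f is getting big ($n lines)" }
    }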
Funnily enough I'd never used Tcl until I started tinkering with FPGAs, and while initially I hated it, I went through a process of developing a grudging respect for it, and now actually quite like it!
What I will say is it's absurdly easy to write a Tcl extension - I wanted to talk to a soft CPU using a Xilinx Platform Cable USB (which isn't supported by OpenOCD). I ended up creating a simple Tcl extension around xc3sprog and it worked a treat - I'm pretty sure the engineering effort to do the same for a different scripting language would have been much greater.
On the contrary, compared to e.g. bash, unless you're doing something super unusual or wrong, you rarely have to think about quoting in Tcl. Use "" when you want substitution, {} when you don't, {*} when you need to pass a list as multiple words, construct lists and dicts using their appropriate constructors and never by hand — and you're safe. Unlike bash, quoting in Tcl is dead simple, predictable and never surprising.
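A minimal illustration of those rules:

    set name world
    puts "Hello, $name"     ;# quotes allow substitution -> Hello, world
    puts {Hello, $name}     ;# braces suppress it        -> Hello, $name

    set xs [list a b c]
    set ys [list x {*}$xs]  ;# {*} splices: ys is the four-element list x a b c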
I do prefer tcl quoting to bash quoting, but you do have some mental overhead when writing procedures, since unpaired braces inside quotes may do surprising things if you're thinking "writing a command" rather than "raw strings". Comments are similar.
That being said, those are very much edge cases.
More damning, from my POV, is that you can't get ref counting of things like C objects or file handles, since they're just string handles. But there are a lot of uses that don't need that.
Yes, it's not as bad as some other stringly-typed languages, but compare it to literally any other language with proper types. Even terrible ones like PHP don't have those issues.
Before Lua got popular, Tcl was the only sane option to add scripting to a project. The interpreter was trivial to embed and modify (for instance, I created my own stripped-down "minitcl" library which compiled to around 20-30 KB).
I actually really like TCL as a language. It’s very internally consistent, and quite powerful. Has a bit of a LISP flavor.
I agree that its syntax is pretty weird when compared to any other language. And a lot of stuff works differently than you’d expect if you’re coming from Python.
For me, there’s always a mental context switch that needs to happen before I can get in the flow with TCL. Once I’m there though, it’s a very fast/powerful language to implement stuff with.
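For example, since control structures are just ordinary commands, you can roll your own (a toy example):

    proc repeat {n body} {
        for {set i 0} {$i < $n} {incr i} {
            uplevel 1 $body    ;# run the body in the caller's scope
        }
    }
    repeat 3 { puts "hi" }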
As a network engineer, Tcl/Expect/bash scripts made it really easy for me to automate some of my tasks back in the day (e.g. backing up a router config). I since moved on to Python + Netmiko. I can still see learning Expect being useful for someone trying to dip their toes into simple automation.
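An Expect script for that kind of task is only a few lines; a sketch from memory (hostname, prompt, and env var are made up):

    #!/usr/bin/expect -f
    spawn ssh admin@router1
    expect "assword:"      ;# matches "Password:" or "password:"
    send "$env(ROUTER_PW)\r"
    expect "#"
    send "show running-config\r"
    expect "#"             ;# output is now in $expect_out(buffer)
    send "exit\r"
    expect eof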
I always forget his name, but never forget Magic. There were so many cool features of Magic. I wrote scripts so that if you made a block cell, like for a multiplier, you could then script it out to any size you wanted by replicating cells, but renaming wires and pads for export to Spice tools. So nice.
I was into Tcl (and Tk) in the mid-'90s. There was a great database called Qddb (quick and dirty database) that I think had Tcl bindings. I used these to hack together a few web cgi-bin applications. One was a super fast library catalogue (something like 10 times quicker than the proprietary one I imported the catalogue data from). The other one I like to think of as my "almost invented Facebook" moment. It was a template-driven web application that allowed you to publish your profile page (and of course edit it in the browser) and also project pages. It got some use within the multinational resource company I was in, as a way of improving collaboration in those early web days.
TCL remains my favorite scripting language of all time, and I did a port of it that was the very first software I had a hand in that was adopted by a major corporation (albeit with no compensation).