Six programming languages I’d like to see (buttondown.email/hillelwayne)
481 points by johndcook on July 13, 2022 | 441 comments



>> A serious take on a contract-based language

Ada has design-by-contract as part of the language:

https://learn.adacore.com/courses/intro-to-ada/chapters/cont...

Since Ada is used for safety-critical systems programming you could argue that it is very serious about it.

>> And tool integration! One of the coolest things Eiffel sorta did was use contracts to infer tests. If you have contracts, you can use a fuzzer to get integration tests for free.

How about tools that extend design-by-contract to formal verification?

https://learn.adacore.com/courses/intro-to-spark/chapters/01...

SPARK is limited to a subset of Ada, so it is not without limitations, but it can be very useful depending on what you are trying to do.


If Adacore had published a full featured free compiler for students/universities by the time Ada 95 standard was published, I am sure that Ada could have occupied the space C++ takes today.

Mostly what Ada lacks is not the features but the community effect; we saw that when comparing feature parity with C++, and we see it again when comparing feature parity with Rust.

There are a lot of brilliant features and tooling in Ada, but unless there is a community effect (or a gigantic industry sponsor, e.g. golang) it is difficult to convince people to switch over.

I can't remember who said it, so I will paraphrase: the best language/target to program in is whatever your friends/colleagues are using.


>> If Adacore had published a full featured free compiler for students/universities by the time Ada 95 standard was published, I am sure that Ada could have occupied the space C++ takes today.

Agreed. If there had been more free (gratis) or low-cost Ada compilers at that time Ada would be far less niche than it is today.

There is increased interest in Ada today thanks to Rust. There is also interplay between Rust and Ada/SPARK, with Rust getting some Ada features and SPARK getting some Rust features.

Ferrous Systems is working with AdaCore on the Ferrocene Language Specification to formally document the Rust subset that Ferrocene will use:

https://ferrous-systems.com/ferrocene/

https://ferrous-systems.com/blog/ferrocene-language-specific...

It is exciting to see Rust mature so it can one day be used in safety-critical work.


This seems to be a common theme among certain languages, software, etc. from the 90s. I've heard similar things about Smalltalk.


Long before the digital era that common theme would have been expressed as —

Don't look a gift horse in the mouth.


> If Adacore had published a full featured free compiler for students/universities by the time Ada 95...

Didn't they? I wasn't around then, so I'm working with what information I can find online. According to Wikipedia it was fully validated in 1995[1], and I'm pretty sure it was always a free compiler based on GCC. I think the dust had settled on the battlefield of the language wars long before Ada 95. It was/is still common for European universities to teach Ada as part of the computer science curriculum. That still hasn't managed to give Ada the industry presence it deserves. In my opinion it's by far the superior language. It's just losing out for human reasons.

> ...gigantic industry sponsor e.g. golang...

It had the biggest industry sponsor at one stage: The US Department of Defense. Back in the late 70s, and early 80s when the DoD was directly sponsoring Ada's development, they had incredibly deep pockets. I'd say this is why Ada is so comprehensive today. They had the money, and motivation to design the language by committee down to the most minor details.

[1] https://en.wikipedia.org/wiki/GNAT


Ada 83 had long had a reputation for stunningly high prices when my first job chose C++ in 1992 (when templates were pretty new and sort-of worked).


I guess that prior to the FSF releasing GNAT, and the DoD dropping its Ada mandate, compiler vendors had a captive audience, and could charge a serious premium for their software.

I've read that the hardware required to use an Ada compiler was far more expensive than its competitors as well. I've heard more than one person say that this was a serious barrier to its adoption in the wider industry.


> If Adacore had published a full featured free compiler for students/universities by the time Ada 95 standard was published, I am sure that Ada could have occupied the space C++ takes today.

The same can be said about Eiffel.


Or Delphi


Yup, also the commercial Ada IDEs were brilliant, even compared to Borland, but oh-so-expensive.


The D programming language, too.

https://dlang.org/spec/function#contracts


+1 on D. It even has contracts for members.


Forgetting about SPARK was a big whoops for me.

(Lots of formal verification languages are based on contracts, like SPARK, Frama-C, and Dafny. I'm interested in uses for contracts that aren't just formal verification, though!)


>> I'm interested in uses for contracts that aren't just formal verification, though!

Ada 2012 design-by-contract features aren't just for formal verification, they can be used in a general purpose way similar to Eiffel:

https://learn.adacore.com/courses/intro-to-ada/chapters/cont...

Can you elaborate a bit more on the features you would want for a language with semantic relations?


An example off the top of my head would be writing a serializer/deserializer pair and explicitly saying they are inverses of each other, as opposed to having to implicitly say that in a round-trip property test. Another idea would be being able to say that two data representations are isomorphic under a given transformation, so I can define a function for one transformation and call it on the other. I don't know how useful this would be in practice, but it seems interesting to me!
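
For concreteness, the "implicit" round-trip property test being contrasted here might look something like this (a minimal Python sketch using the hypothesis library; the serialize/deserialize pair is hypothetical and JSON just stands in for any format):

    import json
    from hypothesis import given, strategies as st

    # Hypothetical serializer/deserializer pair; JSON stands in for any format.
    def serialize(record: dict) -> str:
        return json.dumps(record)

    def deserialize(blob: str) -> dict:
        return json.loads(blob)

    @given(st.dictionaries(st.text(), st.integers()))
    def test_round_trip(record):
        # The inverse relationship is only implied by this property;
        # the wish above is to state "deserialize is the inverse of
        # serialize" directly in the language and get such checks for free.
        assert deserialize(serialize(record)) == record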


>> writing a serializer/deserializer pair and explicitly saying they are inverses of each other, as opposed to having to implicitly say that in a round-trip property test.

That capability would be very useful. There are many cases where I have had to define serialization and deserialization functions for numerous objects and had to create so much "boilerplate" unit test code like you describe. It made me wish I was using a Lisp with built-in reader and printer functions.

>> being able to say that two data representations are isomorphic under a given transformation, so I can define a function for one transformation and call it on the other.

This strikes me as very Haskell-ish, but could be quite useful to ensure certain properties or invariants.


I use fuzzing for this. In fact that's the main case I use it for.


Better still would be to take progressively typed languages, combined with contracts and property-based testing, and iterate toward a more precise contract for all of your code whenever it became necessary or whenever you got bored.

What are the consequences of deciding that no Foos are allowed to be red? Hmm, that's interesting/unintended...

The main difficulties with statically typed languages seem to be both 1) quickly arriving at a workable prototype, and 2) dealing with the consequences of regrettable decisions made early on. Those two being in tension contributes a lot of drama, and not just in analysis paralysis.


Nim also has contracts and supports formal proofs: https://nim-lang.org/docs/drnim.html


A more practical one might be this contracts library https://github.com/Udiknedormin/NimContracts


It's not a public language, but at Risk Management Systems I worked on the Contract Definition Language, an external DSL for computable insurance and reinsurance contracts. It was used to simulate insurance losses when catastrophic events happen (earthquakes, hurricanes).


Any good reference for that CDL? The idea seems quite useful for what I'm working on. All I found is a wishy-washy doc with poorly chosen examples [0]

[0] https://www.riskdataos.org/html/HelpCenter/Content/CDL/CDL_S...


RMS never played nice on interoperability. I wouldn’t bet on the CDL to become a useful standard, because it is under-powered, under-specified and stagnant.

The “open” alternative is OASIS. Their Open Exposure Data standard [1] is complete and supported by many industry participants. Unfortunately it’s a database-based format rather than a text-based DSL, but still better than the alternatives.

[1] https://github.com/OasisLMF/OpenDataStandards/tree/master/Op...


Thank you! I wasn't aware of OASIS. Reading up now.


When I read "contract-based language" I immediately thought of Ada. I could be wrong, but I think SPARK is incorporated into the Ada 2022 standard at this point, so it could really be thought of as part of the language.

It's the furthest thing from a "dynamically typed language" but it can probably also check the "math" requirement here too.


Fuzion has contracts as well:

https://flang.dev/tutorial/pre_post_conditions

https://github.com/tokiwa-software/fuzion

Disclaimer: I work on the Fuzion team.


Java has a similar “extension” as well: https://en.m.wikipedia.org/wiki/Java_Modeling_Language


Similarly, Ruby had RDL - which implemented pre/post contracts and type checking prior to Sorbet. It was actually pretty nice, if slow.

https://github.com/tupl-tufts/rdl


I'm a little surprised that nobody has pulled off the "VB6 of javascript". I don't mean one of the purely no-code products, but like a literal "create a new page, drag a button, then double click it to hop into the code for a new route and a new component and go straight into the onclick handler".

Maybe that's harder than I can imagine, or it exists and just isn't a very good idea in practice (how would you even do responsive?) so it's not that popular.

VB was a garbage language by modern standards but I always liked that they gave you a visual builder, but didn't try to hide the coding from you too much.


VB6 was a gem. To a young me who was starting out programming, there was nothing more empowering than dragging a bunch of different buttons, resizing them with the mouse, fiddling with their labels, and creating a calculator within half an hour. I crave a similar dev experience, but no language today seems to possess that simplicity. Today, it's all container widgets of different kinds that I have no idea how to use.


I wonder how much that has to do with the fact that software is expected to run on a much wider range of screen sizes these days? Doing a drag-and-drop sort of thing when you have to support everything from phones up to big desktop displays tends to result in the container widgets you describe.

On the web side I bet the right person could do something quite intuitive with CSS Grid though. You could drag out where you wanted various content blocks to appear at different screen sizes and generate a `grid-template-areas` [0] property to match it.

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/grid-templa...


But isn't WinForms basically VB6 RAD via .NET?

WinForms satisfies my VB6 craving.


This. I feel exactly the same; we are probably about the same age :) When I started learning HTML & CSS it was awkward for me that I needed to specify stuff in pixel coordinates. For me it was a step backwards in terms of technical progress :)


I'd say it's probably on par with the 2020s web. The web is more potent in theory, but for a large share of needs, VB+forms was actual RAD. If you needed something more complex, you dropped down to other tools.


XOJO? It's quite a feature-rich OO BASIC with VB-like GUI tools, IMHO. It can compile for various platforms or generate HTML. I used it a bit at work to create an internal web app (it uses Bootstrap under the hood for that target). I found the GUI-based inspector really lacking in that it does not expose all of the properties of a visual component, so expect to go hunting in the API docs.

But for network/web service tools, I like that it compiles into a single executable where its internal web server generates the HTML dynamically. It allows me to write small, single focus tools quickly.

[1] https://www.xojo.com


XOJO appeals to me, it's free for non-commercial use, and very reasonably priced for commercial use. The user community seems nice. Free Pascal/Lazarus seems nifty and capable, but good luck finding thorough documentation on the Lazarus GUI designer. Of course, Delphi is reputed to be absolutely awesome, but Good Grief! the price is not something for individual users.


Lazarus on the FreePascal compiler does exactly that, borrowing from Delphi's heritage. It copies VB6 almost literally, in many ways, but can output binaries for win/mac/linux.

I've written a few toy apps in it and in my limited exploration it seems to be a joy to work with, once you get used to Pascal's quirks... and I'm not a fan of the ancient MDI workflow it imposes either.

I also learned to program with VB and Lazarus is the closest thing I've found. The WinForms designer in Visual Studio would be the next closest.

Both GTK and Swing have various GUI designers that are similar, but have their own design and use different layout metaphors. If you're looking for a more modern take on what's left of RAD maybe check them out.


Small correction. If I remember correctly, VB copied Delphi.

That's why it looks so similar.

I still remember the great VB vs Borland Delphi wars of the 90's.

But feel free to correct me if I'm wrong.


Another input for VisualBASIC was Apple's MacBASIC:

https://www.folklore.org/StoryView.py?story=MacBasic.txt


Interesting... Last time I brought this up there were some very fervent Delphi fans that seemed to agree with you. My first dev environment was VB, so I am probably biased.


> but can output binaries for win/mac/linux

I took a cursory look and I don't think it supports Apple silicon out of the box. Considering that Intel Macs are a dead end, it's not too encouraging.


Hmm, they mention it at the top of their Mac portal (https://wiki.lazarus.freepascal.org/Portal:Mac) but I don't see it anywhere else.

Emulation works pretty well from what I hear, is the lack of support that big of a deal right now?


Given how well Rosetta 2 works, it's probably not a show stopper.


I think Plasmic is doing that for React.

https://www.plasmic.app/



> I'm a little surprised that nobody has pulled off the "VB6 of javascript". I don't mean one of the purely no-code products, but like a literal "create a new page, drag a button, then double click it to hop into the code for a new route and a new component and go straight into the onclick handler".

This is kind of what WebDev (by PC Soft) does. I worked with it in an enterprise environment. It's nice for internal products, though the code I worked with was old code made by people that didn't have any formal education and not much experience in software engineering.

My favorite part of this was not the GUI, but the deployment part: you click on a few buttons and it just deploys stuff to the server and works. My least favorite part was that sometimes this deployment didn't work and troubleshooting it was long and tedious.

All in all, it wasn't that bad to use, but a big problem is that it's not that well known, so when you look up how to do things, you have the official documentation which isn't great, the forums that are helpful from time to time, or doing it by yourself.


The problem of designing adaptively for different screen sizes (much less orientations) was one even VB6 had no answer for.


VB6 could make apps that adapted themselves to window size, which was more flexible than different screen sizes.

It was no Apple Cassowary, but neither is HTML/CSS.


Some of the Wordpress, Drupal and Squarespace theme designers do the drag/drop GUI part where you can drop into the html, but it's more for graphic designers than programmers. I have watched a graphic designer be super productive with the WP Divi theme builder. But graphic designers are used to Adobe-like products with visual designers.


Yet the things that were built have been used.


Unfortunately the days of assuming everyone has a mouse, keyboard, and at least 4”x6” of screen to place a window on are long gone.


And yet apparently they've never been good enough, because people keep asking for it.


Coming to Windows programming from x86 assembly language at the lowest level and Clipper at the highest, VB6 was awesome. It made rapid application development really easy.


There's GuidedTrack[1], which is a simple way to program complex surveys or tests, but can be used to create apps as well. It has been used to create Mind Ease[2] and UpLift[3].

There is a quick demo[4] on the site to show how it works.

Full disclosure: I'm a developer on GuidedTrack. Any feedback is appreciated.

[1]: https://guidedtrack.com/

[2]: https://mindease.io/

[3]: https://www.uplift.app/

[4]: https://www.guidedtrack.com/demo


It's Python and not JS, but to me Anvil has always felt a bit like VB for the web:

https://anvil.works


SproutCore tried to do that, I believe. They were developing a framework called... Greenhouse(?) that would do exactly that, drag and drop GUI and auto write boilerplate.

It never took off because SproutCore turned into Ember and then died, I think, but not before my company had written numerous things in both SC and Ember...


Greenhouse was Sproutcore's response to Atlas which was a Cappuccino product developed by YC company 280North.

In principle it worked very differently though; as you said, it was mostly a code generator. Atlas was a port of Cocoa's Interface Builder (which uses state/value serialization as opposed to code generation) to the web.


I see what you did there... "dying ember" :)


Someone has reimplemented it: https://www.radbasic.dev/components.php :)


Have you tried https://impulse.dev/ ?


Didn't ColdFusion work like that?


Kinda. But IIRC it didn't generate much (if any) dynamic code, mostly HTML. That feels like ages ago and isn't in any way useful anymore today, I fear.


Not in any way, no.


You don’t have to do responsive. Responsive UI is not at all needed for amateur GUIs.


It does exist. I am building it at yazz.com


Depending on if you count it as a serious programming language, Racket has pretty much all the contract stuff. It has dedicated syntax for contracts, tight integration with the module system, and contract-random-generate attempts to generate a value satisfying a contract so writing an auto fuzzer wouldn't be too hard. In fact, I think Racket's system predates Clojure since there was the 2002 paper "Contracts for Higher-Order Functions" discussing it and Clojure first appeared in 2007.

The only reason I would ever use Clojure instead of Racket would be if I needed to work with the JVM ecosystem or the browser via Clojurescript (which are compelling reasons).

Totally agree about a good calculator language.


Clojure mentions Racket's contracts as prior art: https://clojure.org/about/spec


As more of a Sys Admin / scripting type, I felt happy and sad at the same time at the actual mention of Visual Basic.

VB is often scoffed at, and probably to an extent rightfully so.

But those (i.e. VB1 through VB6) were the ONLY languages that I ever managed to create useful, finished tools and programs in. Maybe it's partly psychological, but I needed the approach where I created a neat UI first, got my methods pre-populated for me, and had something tangible to look at early in the process. Then I went on to fill it all in with custom code. Yes, it's BASIC, but it was extensible. People I knew wrote OCX and similar control elements, and code routines in DLLs in C/C++, and integrated them into VB projects, so the sky was still the limit.

Staring at an empty text document with possibly some header file statements never evoked the same creativity with me that VB did.

p.s. my proudest creation was a Windows 3.1 UI around the MS-DOS packing program "arj", for those who remember. Had I been a better businessman, that might have predated WinZip :)


A company I used to work for took roughly this path:

1. 1 mildly technical, mostly business-domain person, a spreadsheet + auto-clickers

2. 1.5 persons still only mildly technical and mostly business-domain-y, a VB gui and some basic network integration

3. [some time and quite a bit of revenue passes]

4. they hire some java programmers to rewrite it properly in Swing, since the VB versions had literally become impossible to modify without breaking (there were forked versions where you used version A for some feature, version B for another, version C for another - and people had tried to merge them together while keeping the whole thing working, but never managed it).

It's now been, oh, over a decade I suppose, and while I understand that there are currently some efforts underway to port the java applications to an SPA web app, it's more because the owner wants to get rid of java (and only have to hire python devs) than because the apps don't work or are hard to maintain.

My point is that without VB, the company probably wouldn't have existed to even need the Swing rewrite. So while I personally find VB to be quite caustic to my senses, I cannot deny its ability to create business value, especially because it's so approachable to only mildly technical folks.


The company I used to work at started out in MS BASIC sometime in the early 80's. The founder is a mechanical engineer who started programming all the formulas he was using instead of cranking them by hand every time. In the 90's, he ported it to VB and turned it into a GUI application for Windows 3.1. In the early 00's, the company had to switch over to using C++ for a fair amount of the UI, but the core of the application is still that library of VB code (now VB.NET) that was started in the 80's, and the RAD tools for VB were what allowed them to stand out as a mechanical design program earlier than most of the other tools.


> But those (i.e. VB1 through to VB6) were the ONLY languages that I ever managed to create useful, finished tools and programs in.

Have you tried Delphi?

https://en.wikipedia.org/wiki/Delphi_(software)

- my experience was in the 90s, but it has a similar ‘quickly build native Windows UI programs’ feel to it.


If you liked Delphi I recommend giving FreePascal and Lazarus a try.

https://www.lazarus-ide.org/


Both of those are still pretty nice to work with and I'd say can still be used for a variety of different pieces of software! They also have a Software Gallery page that shows a few different programs developed with them: https://wiki.freepascal.org/Projects_using_Free_Pascal

Though the language is also showing its age and perhaps isn't for everyone, like how C++ might also be an acquired taste. Some things like talking to RabbitMQ, Redis, gRPC, GraphQL or MongoDB (just as examples) might be a bit more difficult than in other languages that get more love.


CLOG and the full UI builder CLOG Builder do exactly what you are looking for.

It already offers plugin controls that work with the builder or just code.

Can deliver native local apps, websites, webapps, iOS and Android apps, etc.

Database controls like in VB and Delphi - and much more to come.

It is programmed in and with Common Lisp; however, you can easily program events in JavaScript, with Python coming soon too.

https://github.com/rabbibotton/clog


I probably wrote my best "software" in VB too, i.e., stuff that hung together well enough for others to use over a long time period. One of my programs is still used daily in the factory, with virtually zero failures in 14 years.

What happened was that I lost interest in creating that kind of thing. And the business decided that such programs have to go through a formal approval and release process, so it left me with no advantage over letting the software department maintain that stuff at considerably greater expense. Today, I'm willing to burden my users with stuff that requires a bit more care and feeding on their part, such as plain Python scripts. Also, VB turned into VB-dot-net.

I'm equally creative in C, but on the embedded side. Something about working within severe limitations. I'm reminded of a quote from Richard Feynman: “The game I play is a very interesting one, it’s imagination in a tight straitjacket.”


Gambas and Lazarus cover a lot of ground, but I also miss a system where you can easily go from a UI designer directly to a small self-contained executable, and then start to add in code.

Visual Go, Visual Rust, Visual Nim and Visual Crystal, where are you?


In the Ruby realm, this seems like what you are looking for...

https://github.com/AndyObtiva/glimmer


Those do sound cool. Visual Go would be really nice too! I’ve spent some time using CLOG recently, and I’m aware that it can produce a self-contained executable, but haven’t gotten that far in the tutorials yet. Anyway, just wanted to share.

https://github.com/rabbibotton/clog


I don’t understand the purely psychological disdain for Basic.

I developed in an odd language…LabVIEW…and I get what you’re talking about with first-class GUI support and being able to see something each step along the way.


Some developers seem to react angrily towards anything which makes software development more accessible.

That said - while VB had an amazing IDE and overall developer experience, the VB language itself had some serious warts.


I could regale you with tales of the silliest conversations I’ve had with programmers who claimed that a graphical programming language couldn’t possibly do “real things” without even trying it.

Your point is spot on.

Wait until someone shows “real” programmers PLC ladder logic and they become aware of just how much critical infrastructure runs on a programming language that was literally designed to allow electricians and mechanical engineers to automate real-time safety-critical controls.


I recall most of the complaints had to do with the lack of inheritance and exception handling.

On Error Resume Next anyone?


Same. Well, GUI based projects that is. When I learned C++ the effort to recreate some of my VB work just seemed so mindnumbingly long and boring. Which was probably to my detriment...if it were the only way, what fun. But when you know the easier way, you question why.

Myself and a lot of kids probably grew up on VB. I was one of those annoying kids writing AOL 3.0 'progs.' I don't think I've ever had as much fun coding as those days, but that's probably due to age and community moreso than VB's greatness.


I was a little late full-on BASIC, but enjoyed writing little TI-BASIC programs on my calculator.

I don't know why people would scoff at BASIC, tons of people make their careers writing Python, which is 100% going to be regarded as the BASIC of 201X-202X.


TI Basic was great - as a high-schooler it was much more approachable than Z80 assembly (or trying to figure out the custom gcc for the TI-83).


VB was the peak of computing, modern dev wishes it was as capable.

One of the truely great things about the drag and drop form builder was being able to sit with a user and go through requirements with them, to wire it all up later.

"Ok, so when you type in the customers details, we should have a button to save them into the database. Will we put it here?"


I got into programming with VB when I was a kid and made a hundred little projects in VB6. It was so much fun browsing Planet Source Code looking for cool stuff.


My first-year uni roommate had a similar background: numerous cute projects in VB, along with some real deep math. I was different: as my dad was a geek, I never had anything but Linux at home.

I fondly remember trying to explain why Emacs doesn't have a green run/build button :-)

VB was easy in a way *nix never was!


Me too.

>Planet Source Code

So many good memories of browsing that auto-scrolling list of new projects, downloading and trying something different and learning by seeing new code.


This part of the experience didn't fundamentally change between VB6 and .NET with WinForms - when you create a WinForms app project in VS, you still get a blank form to place controls, double-click on them to create the appropriate event handlers etc.

And this all still works today in VS 2022.


Was it because of the language or because of the IDE/environment/library?

My impression is that VB the language sucks, but VB the entire package (forms, form editor, IDE, etc.) was pretty awesome.

So arguably a similar IDE with a better language would be a huge win.


Yes, VB as a language was terrible, in the same way that almost every BASIC is "terrible". There were some old ideas in the BASIC heritage that didn't age well. IIRC, it wasn't until VB6 that you no longer had to suffix string variables with $. But by that point there was an ecosystem issue of old documentation, legacy code, etc.

WinForms absolutely captured most of the same feel of VB6 from strictly the perspective of the tooling. I think it took a while to really catch on because, being a from-scratch rewrite, it took a while to gain complete feature parity, while also lacking the subcomponent market that a lot of VB6 projects had grown to rely on.

But then, GUI requirements also evolved. HighDPI displays, internationalization, accessibility, bigger and bigger projects needing better organizational methods. You can solve most of these issues in WinForms by being super fastidious about componentization, but it's not the happy path of just double-click-on-the-form-designer.

All of the XAML-based GUI systems that Microsoft developed since then have tried to invert that issue. They wanted to start people off by being considerate of componentization. Unfortunately, it came at the sacrifice of the usability of the tools.


Just FYI, you didn't have to use the $ to denote string variables from VB2 upwards. It had been a QBasic thing, pretty sure it ended up in VB1, and then was dropped (though you could still optionally use it). Option Explicit... the first line that should have been in every VB program.

I had an entire career built on developing back end tooling in VB for businesses, some of which (.. 25 years later?) are still doing their jobs. It was sometimes painful but it really did set a bar for quick productivity. Fond memories of those days!


Yes, so use Delphi because Pascal is better than the BASIC in Visual Basic.


I made good coin coding in vb6. Usually I was automating reports (that clients paid for). People would do a job then sit down and create 30 spreadsheets to send out to clients. I created software to pump all these out in seconds. Excel sheets with graphs, intricate layouts and formulas. 15 years later I think they finally have a working web version. I got consulting asks for a decade.

The weirdest thing (I have ever seen in software): Once VB.net hit, all VB devs quit. VB.net was a totally different language. I guess we all moved on.


VB was basically the Excel of GUI programming: straightforward to get shit done, but exponentially harder to manage with more features.


And Lotus Notes was the Excel of workflow, making use of its own @formulas, vbscript, and eventually incorporating Java and JavaScript into the platform.


Excel has integrated JavaScript these days: https://docs.microsoft.com/en-us/office/dev/add-ins/referenc...


Doesn't the C# IDE basically do what VB6 did?


Same thing for Qt Creator, although license may be an issue as always (although if it is then it's also probably not a problem, but I digress)


When I was a student, our teacher told us to just drag and drop GUI elements in Visual Studio and write the functionality in C#. At that time I didn't know that other ways to create user interfaces existed.


Winforms designer is pretty much it. But who's building windows apps these days :) ?



I'm sure there's plenty of legacy apps out there that are getting maintained - but I seriously don't see anyone picking windows desktop as the only target for a new project these days.


Games, laboratory, healthcare and factory automation devices, kiosks.


Even automation stuff is moving to web interfaces and headless network connected controllers. Unless you're doing legacy stuff or niche stuff you're probably not writing stuff for windows desktop.

Game tooling has also moved to cross platform these days with popularity of MacOS for content creators and iOS as a target market requires MacOS for development tools.


Good luck with web interfaces on air-gapped devices.

Mac platform is about 15% of world desktop market, hope they can manage all requests from game studios all around the globe.


If you want that same GUI/RAD style, Delphi is still around.


Have you tried CLOG?! It apparently matches the exact use case you described above. It’s a GUI first approach but its aim is to be highly extensible. I only ask that you be gentle. It’s being built by a passionate developer in his spare time, but he’s making great progress.

https://github.com/rabbibotton/clog


I've seen quite a lot of people use C# + winforms to do similar things, most "professional" devs kind of turn their noses up at winforms, but, you can bang out functional little apps without too much effort. I haven't tried with the latest .net 6 stuff, but I think you can pretty much compile the whole thing into a single executable now.


lha, arj, pak, rar, pkzip. these were magic.


Young me always thought arc, then arj, then rar. But arj brings back fond memories of unpacking very large files


> A language designed around having first-class GUI support

Delphi and friends[1] are still out there. There's also Lazarus[2] if you don't want to fork out a few grand.

[1] https://www.embarcadero.com/products

[2] https://www.lazarus-ide.org/


I'd say Dart[0] fits the description.

[0]https://dart.dev/


Yes it does[1], and it works on desktop, mobile and web.

JavaFX has SceneBuilder[2], which also fits the description, but for some reason it never caught on. Gluon seems to have it working not only on desktop but also on mobile.

[1] https://docs.flutter.dev/development/tools/devtools/overview

[2] https://gluonhq.com/products/scene-builder/


Can you confirm that Dart desktop apps are actually desktop apps and not Electron apps or similar?


Of course... it's a very different tech. Flutter's graphics are based on Skia[1], same as used in Android and Chrome. Dart compiles to actual executable binaries, so it's not at all like Electron.

[1] https://skia.org/


That's just for web apps.


Flutter can be used for both mobile and desktop apps, although mobile is definitely the primary focus. In my view, Flutter basically equals Dart. I don't know what else people use Dart for.


I use Dart for custom CLI tools and recently deployed a small service that feeds data to my backend service. It's simple and nice to write, with no learning curve to speak of.


Imba is a compile-to-JavaScript language that makes HTML and CSS first-class parts of the language, along with React-like custom components. It's the fastest way to create UI that I've found.

https://imba.io


There is also Vala, which was made for GTK/GObject development. Racket also has pretty good out-of-the-box GUI support AFAIK.


Difference between language and IDE.


Delphi (and freepascal) have good support in the language for visual components.


In the library, not so much in the Pascal implementation, though the Pascal does support the library very well.


The reactive programming language one is really interesting. A compiler ought to be able to compile "normal" straight-line code into conventional efficient code. It's hard to know how that sort of thing would "pollute" the rest of the code, though, like, will every program turn into a rat's nest of dependencies such that it's just impossible to manage or what? Hard to tell without trying, especially since developing such a language would also require significant effort into developing a standard library and best practices to go with it even if the language was in hand today. And one hell of a debugger. Circular dependencies are a problem, too; I'm inclined to ban them but that's probably beyond what a type system can help you with, or at least, any type system I currently know, so it's going to be an adventure.

Still, there's a lot of things that would be more useful with such a language. In-memory highly-correct cache invalidation would become trivial code to write. In fact I think the cache would actually update itself live. Out-of-memory cache invalidation might be pretty easy. Certain auditing things would be easy. UIs would probably take some work to get right but would be interesting, at the very least. Game programming would also be interesting; a lot of game patterns would be incorporated into the language at that point.

Probably need to be lazy; I suspect proactively recalculating everything all the time would be a bad idea. Haskell has pushed lazy programming a long way, but "re-thunkifying" a previously calculated cell would be a new frontier.

Come to think of it, while I think the performance could be improved and you'd want to lift up to a full language eventually, I think a good Haskell programmer could bash together a prototype of this in a few days and start playing with it. It's going to need a lot of playing with before someone casts it into concrete as a language specification and corresponding compiler and runtime.
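
To make the spreadsheet analogy concrete, here is a minimal sketch (illustrative Python only, not a real implementation) of cells that invalidate their dependents when an input changes and recompute lazily on the next read:

    # Computed cells mark themselves dirty when an input changes and
    # recompute lazily ("re-thunkify") the next time they are read.
    class Cell:
        def __init__(self, value=None, formula=None):
            self.value, self.formula = value, formula
            self.dependents = set()
            self.dirty = formula is not None

        def get(self):
            if self.dirty:                       # lazy recomputation
                self.value, self.dirty = self.formula(), False
            return self.value

        def set(self, value):
            self.value = value
            self._invalidate()                   # cache invalidation for free

        def _invalidate(self):
            for dep in self.dependents:
                if not dep.dirty:
                    dep.dirty = True
                    dep._invalidate()

    a = Cell(2)
    b = Cell(formula=lambda: a.get() + 1)
    a.dependents.add(b)          # dependency wiring is manual in this toy version
    print(b.get())               # 3
    a.set(10)                    # marks b dirty; nothing recomputes yet
    print(b.get())               # 11, recomputed only because its input changed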


I'm attempting to build such a language (you can find a link in my profile if you're interested. I don't want too much attention yet since I'm planning on a first release later in the year). You're right it can become difficult to follow code when it's not presented in a linear fashion. The flip side though is that you can write code in smaller chunks that can do a lot, so you write far fewer lines overall. The other thing is that it opens the door to more advanced tooling than we're used to as developers: things like time travel debugging, saving program state and querying it, or running hypothetical execution paths to find the best one. Bret Victor is famous for demoing prototypes of these debugging tools in one of his talks, and they are finally becoming real.

My language turns out to be very fun to write UIs and games in, so I'm glad you honed in on those two examples. Actually one way to view it is that it's a programming language with a game engine as a runtime. But robotics is the primary application I'm targeting.

In fact I had to choose between lazy and eager evaluation, and I've chosen eager because I want latency guarantees. With a lazy evaluation scheme, I was unsatisfied with the windup that would occur in some situations that would totally blow performance. And we don't have to recalculate everything all the time; we only have to recalculate the paths that have an input change. This may lead to recalculating everything, but most of the time we only have to focus on the execution path that is "dirtied" by the updated input.

Anyway if you want to test drive such a language as you're imagining, send me an email and I'll give you a tour/demo. The language isn't really user-friendly yet so there are some pointy edges that make it unusable to new users at this time (which is the main reason I don't want more attention on it until that is resolved.)


You might like optimistic evaluation for this. Once you've got a thunk, send it into a thread pool. When you need the result, it's either been handled in the background already or you compute it on demand (and ideally cancel the in progress copy).


The reactive programming idea reminded me of Ken Tilton and "Cells", which exploits the flexibility of CLOS (the Common Lisp Object System) to create a reactive programming language on top of Common Lisp.

https://github.com/kennytilton/cells

and he has slides from a talk

https://github.com/kennytilton/cells/blob/main/Lisp-NYC-2018...

to give context.


For what it's worth, Svelte https://svelte.dev/ is a Javascript-to-javascript compiler that adds reactive statements to the language (by using the $ label):

   $: b = a + 1
Since it's a compiler, it knows what inputs were used in that statement, and what outputs were mutated. The statement is re-run every time its inputs change.

This is then used to make a web development environment where "react to input changing" is much more natural than with e.g. React.


I think the Lisps, and especially Common Lisp, in addition to Smalltalk gives you a lot of the things you mention here. That interactive development environment where you can compile any code at any time is almost a given in these, with example programs such as GNU Emacs and Nyxt, that can be reprogrammed on the fly in release mode too. It is pretty interesting to look at and play with, if you fire up something like Emacs + Slime/Sly and SBCL for Common Lisp or the bundled IDE for a Smalltalk such as Squeak or Pharo.


Specifically, for reactive "spreadsheet-like" code in Lisp, Kenny Tilton's Cells project comes to mind: https://github.com/kennytilton/cells


The reactive language was how I thought Mathematica notebooks should work when I first encountered them, and I was very disappointed to find out they don't. Maybe there's a Jupyter extension to do it?


The Julia notebook called Pluto.jl is reactive! But unfortunately is otherwise very limited compared to Jupyter Lab.

Reactivity in Jupyter would probably have to be provided by the kernel, not by the Lab frontend.


I don't know much about Mathematica but doesn't Dynamic work a bit like that?



Mobx works fairly well like this, and in our case we use it for the UI, where it works well. At the centre you have your pure data and then a bunch of transformations on the way out to your interface (e.g., you have a list of all cars, but you have a computed function of all cars available for sale sorted by price). Then at the top, only stuff that’s actually currently used for rendering needs to be computed and kept hot.


My experience with reactive systems is they work well for small projects but once the data gets large the overhead of being reactive kills perf.


Mathematica is a decent general-purpose language (though closed-source) and its "map the factorial function over the list" is exactly the same length as the quoted J code:

    #!&/@l
It has the same "everything is a list" problem - it's really verbose at handling strings, for example, and last I checked its date-time handling was not only verbose but also [flat-out wrong](https://mathematica.stackexchange.com/q/239480/30771). But it does support the dynamism mentioned immediately below, if you use `SetDelayed` (`:=`) rather than `Set` (`=`).


None of these are top of my list. Mine is a better PHP.

To be clear, I like PHP. It actually has many attributes that make it almost ideal as a Web development language, most notably:

1. Pretty much everything is request-scoped. This makes it a lot harder to leak resources when everything is torn down;

2. A stateless functional core. This avoids a lot of class loading that has dogged Java (as one example);

3. No ability to start threads. This is actually a positive. Most application code should avoid creating threads like the plague.

But PHP comes with a lot of historical cruft. Its type system is also primitive (Hack is a better example of this).

Where I think this could really shine is in data analysis (ie the numpy/scipy realm).


You might want to revisit PHP. It is getting better with every release. We have enums, match expressions, union types and much more now!

As for the type system, I haven't worked with Hack, but I absolutely love the gradual typing experience that PHP offers. Yeah, there were some things that could not be expressed, but they are slowly being added; again, we finally have union types!

I would even say PHP might be the only mainstream language that offers a first class gradual typing experience.

In Python your type hints are a lie, as they are not enforced at all, different linters will give you different results, and there is no standard.

In JS, you have to use a whole other language that compiles down to it, slowing you down with an extra compile step.

Meanwhile PHP just works. Just use PHPStan for linting, but even if you don't, you at least get runtime checks.


PHP might get better. But currently every new PHP release breaks compatibility with some things extensively used in old code bases. It's so much work! I wouldn't use it for new projects just because of that.


PHP tends to be remarkably conservative in this regard. Things which are scheduled to be deprecated will typically just throw a warning for a major version or two before being removed entirely, and the migration documentation is very thorough for each major and point release. Yes, the older a code base is, the more of a pain it will be to migrate it to work with a major release without warnings, but that sort of technical debt will be the same regardless of language.


I don't think that is true sadly. I have never seen a change as offensive as https://www.php.net/manual/de/function.implode.php, where they simply swapped the parameter order of a core function. That is not conservative.

For other languages, Ruby for example feels a lot more stable. I'd expect that Tcl, Perl and most Lisps are as well. But in fact I am still searching for alternatives for the times I don't want to use Ruby.


They swapped the parameters (after a long period of either order being valid, then the old order being deprecated) to more closely match those of explode(). It was a bug fix if anything; the old form should have been more "offensive" than the change.


It broke programs for some sense of aesthetics. First rule of infrastructure is to not do that, like Linus' "Don't break userspace" command. By definition that can't be a bugfix.


> 1. Pretty much everything is request-scoped.

I don't know if it's changed recently, but that's something I really appreciated when I wrote PHP: it was unabashedly singular in its purpose as a web scripting language. The source article talked about VB6 as being a language where the GUI was a first-class thing; for PHP the only focus was on making applications around web requests. No useless bloat from trying to be a general-purpose shell scripting language; no delusions of grandeur of trying to be an enterprise business-logic-tier language... it did its one task, did it well, and left other languages to other tasks. I think we would be better served by languages with small standard libraries focused on specific tasks rather than gigantic one-language-to-do-everything designs about which nobody could possibly know everything (ahem C# ahem).


> 1. Pretty much everything is request-scoped. This makes it a lot harder to leak resources when everything is torn down;

I write PHP every day (just took a break from that to be here) and the request-scoped model is both a nice thing and a pain in the ass sometimes. It's great that each request is on its own; really handy for cleanup like you mentioned. There are some things I want to be shared though, like a DB connection pool, a caching layer (one layer below my Redis layer), and clients for various other services.

Not the end of the world, but as an example, the AWS Secrets Manager libraries for Python support secrets caching, but they can't offer that in PHP since we aren't able to share the objects. You can use the file system, but that comes with its own host of quirks.

That said, PHP really is a fine language. I'm personally not a fan of the use of truthy/falsy values, but they've really dove head first into solid type support. Lots of good progress has been made with it.


Why not use APCu?


I hadn't seen this before, but I'll absolutely look into this, thank you!

As for why AWS doesn't use it (if that's your question), I assume they didn't want to deal with the need for a system dependency. That's something that is kind of a pain to me with PHP as well: some PECL, some Composer makes mentally tracking and managing deps a bit trickier.


Saw this “PCP” project in Clojure which you might enjoy https://github.com/alekcz/pcp


Inform 7 is the closest thing I can think of to "everything is a graph". Basically, objects in the world are vertices and relations are edges. When you first encounter relations they don't necessarily seem that exciting, but it turns out that having relationships between objects be a first-class concept really changes the way you think about and design systems.


Did you know Inform is now (finally) open source?! https://github.com/ganelson/inform


Ada is already mentioned here.

You also want to look at Dafny for a contract-based language: https://github.com/dafny-lang/dafny

Since it has verification support it also covers the second point about semantic relations.


Dafny is a really fun language and I've done some stuff in it before. I was really sad when MSR stopped working on it, but I think AWS has picked up the slack?


Yes, it's under heavy development, and is getting various improvements for usability. You probably want to take a look at the release notes.


glee


I kind of thought Coco/R was a dead project but it seems dafny is using it as the parser generator library.

…have to poke around a bit methinks.


Nice article! I'm building a language (https://flame.run/) in Rust that aims to have WebGPU+GUI support built-in. I'm playing with refinement types, which you effectively described in the second-to-last section. I'm not familiar with contract-based languages (other than Solidity), but I think refinement types would allow specifying function parameter requirements in a similar fashion to what you described in your contract-based language section.


> A serious take on a contract-based language

In Shen, the contract rules applied to each function are themselves a Turing-complete language:

https://thestrangeloop.com/2014/shen-a-sufficiently-advanced...

https://shenlanguage.org

"static type checking based on sequent calculus"

"one of the most powerful systems for typing in functional programming"


I had hopes for Shen, but it was way too ambitious with way too little design (in the sense of UI/UX and art).

Shen never became a serious language. Its highest point was when @deech gave a talk about it, which also promoted its KLambda, but it had constant problems in every single aspect of the system and the design, from code size blowing up as macros expanded to unusable APIs due to the inconsistent design of libraries and features the author collected from everywhere. I even bought the book and felt deceived by its unsuitability for anything other than academic examples of how to implement itself.


Joy (implemented in Prolog) could tick some of these boxes.

> window2 is an optimized version of window1 and should have the same outputs for every input. I should be able to encode that in the language, and have the tooling generate checks showing the two are the same, and also run benchmarks testing if the optimized version actually is faster.

That should be possible with Joy; it's certainly something I want to explore. The interesting semantic relationships are those that let the machine automatically deduce optimizations.
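
(Today the closest approximation to that relation is again a property test asserting the two implementations agree; here is a rough Python sketch with hypothetical window functions, using the hypothesis library, rather than anything Joy-specific:)

    from hypothesis import given, strategies as st

    # Hypothetical pair: window2 is meant to be an optimized rewrite of window1.
    def window1(xs):
        return [sum(xs[i:i + 3]) for i in range(len(xs) - 2)]

    def window2(xs):
        acc, out = sum(xs[:3]), []
        for i in range(len(xs) - 2):
            if i:
                acc += xs[i + 2] - xs[i - 1]   # slide the window incrementally
            out.append(acc)
        return out

    @given(st.lists(st.integers(), min_size=3))
    def test_same_outputs(xs):
        # The equivalence is only checked, not declared; the ask above is to
        # encode "window2 behaves like window1" as a relation in the language.
        assert window1(xs) == window2(xs)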

> I also like the idea of modifying function definitions at runtime. I have these visions/nightmares of programs that take other programs as input and then let me run experiments on how the program behaves under certain changes to the source code. I want to write metaprograms dammit

Lotta metaprogramming in Joy. Many functions work by building new functions and running them, it's a natural idiom in Joy.

- - - -

> A language designed around having first-class GUI support

Red? ( https://www.red-lang.org/ )

> Visual Interface Dialect ... is a dialect of Red, providing the simplest possible way to specify graphic components with their properties, layouts and even event handlers. VID code is compiled at runtime to a tree of faces suitable for displaying.

https://github.com/red/docs/blob/master/en/gui.adoc

> You can’t work with strings, json, sets, or hash maps very well, date manipulation is terrible, you can barely do combinatorics problems, etc etc etc. I want a language that’s terse for everything.

That also sounds like Red.


Imagine if the computing world would have standardized on Lisp and Smalltalk 40 years ago.

If Alan Kay had got microprocessor companies to steal the Xerox microcode design like they got Apple to steal the GUI design, and the genius compiler engineers had then optimized the heck out of it all...

I can only shudder at what the supercomputer in my pocket would be capable of.


We'd all be dead by now courtesy of GAI; at least with Java and other abominations our generation is still fine, ant'ing our way through, feelin' happy fixing shit.


The computing world has never standardized on anything. And Lisp/Smalltalk aren’t the magic bullets you seem to think they are.


I think the supercomputer in your pocket would be capable of the exact same things it can now, which are plenty cool as is.

Lisps are not the God-languages.


Also, the bottleneck that limits pocket computing is mostly HCI. It's really hard to design a good user interface for tiny screens and clumsy input devices.

Lisp ain't gonna help with any of that.


I'd suggest Scheme instead of Lisp.


I'd like to see a language that can clearly model concurrency using a Petri net-like structure, i.e. a dataflow language with flexible state/transition handling.

Technically, the runtime is the easy part (relatively speaking). Modelling an array of swimlanes on a plain-text 2D page is the real challenge here IMO.


Can you say more about this or have any particular references or applications? This seems interesting, and I'd like to know more. Is there any reason why this couldn't be a full 2D visual language, if done right?


Yes, it certainly could be (and probably has been) a visual language. And it could also be a plain text-based language too.

But a visual language either requires specialist tooling which I think deters a large number of people, or if implemented in plain text, degenerates into ASCII art, which is very hard to manipulate in a simple text editor (believe me, I've tried!)

A structured non-visual language is also possible, but that turns fairly quickly into a functional language which per se is not bad, but is not helpful when trying to present the overall flow of data around the network. You still have to jump around the file from function to function to try and understand what is going on.

There must be a middle ground out there somewhere. Simple plain text presentation of multiple streams of processing.


Thanks for the thoughts. Any particular references or applications of Petri nets? I found the book Petri Net Theory and the Modeling of Systems by James Peterson. I have either never heard of Petri nets or had but didn't pay attention.


Also check out https://statebox.org/


I'm doing something along these lines.


>A really dynamically-typed language

Raku, previously known as Perl6, has this[1].

    sub MAIN(Int $a, Int $b where 10 < $a < $b > 20) { say "OK: $a $b"; }

You can also pull the "where" clause out of the function and use it to define a new type.

Raku is a very awesome language in general, for more reasons than one.

[1] https://andrewshitov.com/2020/08/14/the-pearls-of-raku-issue...


But does it generalize? $a > 10 is easy but what if you want "where $a and $b are coprime"? Or a subtype of the integers of all prime numbers.


Yes, "$a > 10" isn't special; any boolean expression in its place is perfectly legal. Define coprime(Int a, Int b) -> Bool and request that coprime($a,$b) holds in the where clause, or define prime(Int a) -> Bool and request that it holds for both $a and $b.

The same machinery that runs "<" or ">" at runtime would just as well run coprime and prime.
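
(The same idea sketched outside Raku, in plain Python, just to show that the constraint is an ordinary runtime predicate; the function f is hypothetical:)

    from math import gcd

    def coprime(a: int, b: int) -> bool:
        return gcd(a, b) == 1

    def f(a: int, b: int) -> int:
        # Plays the role of `where coprime($a, $b)`: an arbitrary boolean
        # predicate checked when the routine is called.
        assert coprime(a, b), f"contract violated: {a} and {b} share a factor"
        return a * b

    f(9, 10)   # fine
    f(9, 12)   # raises AssertionError, like a failed `where` constraint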


About "Everything is a Graph": I tried to design programming languages for graph processing a few years back. The problem here is the "language" part, as language always implies sequence (because that is what text is). Sequences of statements just don't cut it. In order to efficiently work with graphs, you need to work in graphs. Thus, it becomes a program graph rather than a programming language. And at that point you run into issues of general tooling: every single tool ever invented around programming is used in combination with text and sequences.

In retrospect it is funny how everybody implicitly assumes that the next big thing will be yet another programming language. And I am not talking about no-code or AI code synthesis.


The point about Lisp is that, lists aren't just the main data structure: programs themselves are lists. Many years ago, I began developing a language in which programs are directed graphs (http://fmjlang.co.uk/fmj/FMJ.html), and on which I still work when I find the time. One of my aims was to make the language fully homoiconic, like Lisp. The problem I had, and which I've only solved recently, is that directed graphs in mathematics just have vertices and arcs, each of which can be labelled, and that's all, but programs in FMJ have vertices (but with ordered inputs and ordered outputs) and arcs, but also constants and enclosures. So programs are extended directed graphs, and the extensions are only really useful for writing programs. Now I have shown that programs are homeomorphic to directed graphs in the mathematical sense - graphs with just labelled vertices and arcs can be converted to programs and vice-versa. I have also added graph processing primitives, but haven't got much further as I've been busy with other things. I do intend to update my web pages soon, with descriptions of the graph processing and the other features I've added.


After the introduction regarding programs that should be different, etc., I expected something innovative, but... how is fmjlang different from any Lisp? It seems like nothing more than visual Polish notation with Lisp semantics. Perhaps the "parallel" part is just a different evaluator for a Lisp, but I see no other differences, to be honest.


FMJ is a statically typed dataflow language. Programs are directed graphs. It has no program counter or stack. There are queues of tagged values at vertex inputs, and vertices execute asynchronously when all their inputs have values with the same tag.

Lisp is a dynamically typed impure functional language. Programs are nested lists. Its interpreter runs by calling eval and apply, which call each other. After compilation, code runs on a machine with a program counter and stack.

I don't see much similarity myself, and I wrote FMJ and know Lisp well.


I think you should try more lisps. Still good that someone tries to experiment with visual Polish notation, typed or not.


Most APLs work just fine with n-dimensional arrays, even though the language only lets you write 1-dimensional array literals. So in practice, we write 1-dimensional arrays and reshape them as needed.

My point is that just because you use a sequence-based input method (a programming language) doesn't mean you can't express structured data.

For example jq[1] does a decent job of expressing queries and transformations on JSON (JSON data are trees rather than arbitrary graphs). But trees are a very important type of graph, and I think a language with first-class support for trees would be in a better position to handle arbitrary graphs than a typical programming language.

[1]: https://stedolan.github.io/jq/


Hypertext, which is inherently a graph. There's been a lot of tooling (code browsers, IDEs, version control) that's been focused around navigating a codebase as a graph rather than as plain text.


Not totally sure that languages are sequential (they are recursive and Turing complete… although declarative as well).

The actually sequential thing is a consequence of states s0, s1, etc., and the only deviation from sequence would be another timeline or a state that was not produced.


> A better calculator language

Especially with reactive programming as an ask there, I would recommend Julia with Pluto.jl (Pluto is a reactive notebook).

> A really dynamically-typed language

Julia ;) It is really dynamic, has meta-programming (macros + staged functions), solid semantics around eval/invokelatest that still allow for optimizations and you can add types at runtime (not modify them though).


Try APL for 6 months and tell me Julia is not too verbose for a calculator...


Well, there’s always APL.jl.


If you mean, by a graph language, a complex data structure language, you might have a look at https://github.com/FransFaase/DataLang, which gives some ideas about modelling complex data structures and where I talk about the different kinds of references that you might want in such a language.


I'd like to see more languages with built-in, language level support for unit tests. Pyret (https://www.pyret.org/) does this but is considered a "learning language". I'm aware that to many people co-locating units tests with the functions they're testing is "bad" but I find it to be quite the opposite. The two are so tightly coupled that having the unit tests in a separate file, sometimes in a separate source tree, is counterintuitive to me.


Racket has language-level support for contracts (https://docs.racket-lang.org/guide/contract-boundaries.html) as well as unit tests (https://docs.racket-lang.org/rackunit/quick-start.html).

It's true that Racket unit tests are an optional module within the standard library, but the command line tools and package system work directly with it. You can place unit tests right alongside the regular code but within a specially-named "test" submodule, so the unit test code doesn't run at runtime by default, except in certain contexts (package installation, "raco test" command, or running the file from within the Racket IDE).


Yea, I like this about Racket unit tests. It really lowers the barrier to writing tests, and you don't have to go creating separate test files in specific locations.


Elixir has ExUnit, which is a part of the language: https://elixirschool.com/en/lessons/testing/basics

You can create comments above functions that act as tests.


You can do this in Rust. Just put a namespace tagged as I think #[cfg(test)] or the like and they can live in the same file but won't be compiled into the non-test binary.


Technically you don't even have to `cfg`-gate it, it's just good practice to avoid e.g. unused code warnings in the test module.


It could be interesting to think about unit tests and contracts together. In theory a function is valid for anything allowed under its contract, which could make generative testing (aka property-based testing) more powerful and automatic.

It would also be nice to have system-level contracts. Like, say, "every item in list X should have an entry in dictionary Y". Or "no button should be located outside the visible viewport". These are the sorts of things that people procedurally unit test, but expressed in a declarative way you can imagine all sorts of tooling.
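
To make the generative-testing point concrete, here is a minimal Python sketch using the hypothesis library; the pre/post functions are hand-written stand-ins for a real contract system, and all the names are invented:

    # A requires/ensures pair treated as a contract and checked against
    # generated inputs. In a language with real contract support, `pre` and
    # `post` would be attached to withdraw() declaratively.
    from hypothesis import given, strategies as st

    def withdraw(balance, amount):
        return balance - amount

    def pre(balance, amount):             # requires: 0 <= amount <= balance
        return 0 <= amount <= balance

    def post(balance, amount, result):    # ensures: result >= 0
        return result >= 0

    @given(st.integers(min_value=0, max_value=10**6),
           st.integers(min_value=0, max_value=10**6))
    def test_withdraw_contract(balance, amount):
        if pre(balance, amount):          # only exercise inputs the contract allows
            assert post(balance, amount, withdraw(balance, amount))

    test_withdraw_contract()              # hypothesis supplies many generated cases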


Here, OOP seems to be actually a solution with its encapsulation. And in a way, certain parts of it already exist in the form of JML ( https://en.m.wikipedia.org/wiki/Java_Modeling_Language )


D[1] and Zig[2] also have built-in unit testing.

[1] https://dlang.org/spec/unittest.html

[2] https://ziglearn.org/chapter-1/


I'd love to see a language with first class relational data types. This would basically be Turing-complete SQL with better syntax and obviously with a very modern language outside the relational stuff. Bonus points for providing hooks to allow transparent persistence of data, making the program and its database potentially the same thing and allowing easy persistent state programs with complex data.

This could theoretically be added to other languages with libraries (with varying elegance depending on the language), but a real first class language would be carefully designed so that everything is serializable and deserializable with excellent performance. Things that fundamentally are state like network connections would also have to be carefully handled in some way where their essence was persisted such that the program could restore them and rebuild their state.

This would pretty much give you an ORM without the impedance mismatch and would eliminate a TON of "CRUD" code and boilerplate.


I am thinking maybe Datalog? It seems to match all your features, including being available in other languages as a library.

https://en.wikipedia.org/wiki/Datalog


I helped build a language like this called Eve: https://github.com/witheve/Eve

It's defunct now, but it was indeed a Turing-complete SQL (but actually relational) with Prolog-like syntax and set semantics. We supported persisting data, and indeed you could think of it as programming within a database. Even though it's not worked on anymore, the last version in the repo worked pretty well IMO. I built a Spotify clone, and even a robot in the language. If you're interested in these ideas, you should give it a shot! Hopefully someone will pick up these ideas and run with them even further.

It's a pretty mind-bending experience to program this way, and I promise you'll have a new perspective on programming as a practice once you grok it (that was common feedback from our users).


Cell?


The "reactive calculator" language the author wants is just Julia running in a Pluto notebook: https://github.com/fonsp/Pluto.jl


> When I’m trying out lots of different equations, keystrokes matter a lot.

That's a pretty weird way to do math, honestly. Shouldn't you figure out what equation you need and then just type it? If you're off by one or whatever, adjust it then, you're still only typing it twice.


Elixir function matching feels kind of like contracts. You specify different variants of the same function for different contracts of what the inputs thrown at it are.

I found that for much of my code it really separated the “where to next” from the “what to do”. And I became acutely aware of how many of the statements in other languages are more about “where does the code go next” and less about getting stuff done.


> and basically no languages have an implication operator

Prolog does, but more intriguingly, so does Nix!


I was surprised to find that Algol 60 had implication among its Boolean operators. It makes sense given the formal background, but it never got picked up by its many descendants.


Also Visual Basic!


Most PC BASIC dialects had the IMP operator - it was already there in GWBASIC, for example.

But it's not quite the same thing, because all "boolean" operators in BASICs of old were actually bitwise! It worked in practice because the convention was to represent true as all-bits-set (i.e. -1 in two's complement). No short-circuit evaluation, of course - but back then, even languages that had dedicated boolean data types, like Pascal, didn't do it either.
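
A quick Python sketch of that bitwise IMP, with true represented as all-bits-set (-1) as in those BASICs:

    # Bitwise implication: a IMP b == (NOT a) OR b, applied bit by bit.
    def IMP(a, b):
        return ~a | b

    TRUE, FALSE = -1, 0          # old-BASIC convention: true is all bits set
    print(IMP(TRUE, FALSE))      # 0  (true does not imply false)
    print(IMP(FALSE, FALSE))     # -1 (false implies anything)
    print(IMP(TRUE, TRUE))       # -1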


And Coq.


His Python code can be shortened to

  from math import *
  prod(map(factorial, l))
which isn't much longer than his J code, especially if you count the number of tokens rather than the number of characters, as the J code uses ridiculously terse names.


You don't type tokens. Maybe if your keyboard has a 'factorial' button that's better, but mine doesn't. (Also J is extremely useful in a variety of other ways that would be much more complicated to replicate in python)


I'll keep my moaning about VB6 and JavaFX to myself. These days I think HTML5 and a decent theme is probably enough for everything. Yes it can sometimes suck, but there doesn't seem to be much love for heavy app development anymore.


> there doesn't seem to be much love for heavy app development anymore.

I bet that changes eventually when we reinvent personal computing again (currently we are reinventing mainframes).


Hah, I like to think that we're finally wrapping up with mainframes redux, but someone has to make the next phase happen.


You should try Dart with Flutter with IDE support. It's not a visual language, but its main purpose is exactly that.


> I also like the idea of modifying function definitions at runtime. I have these visions/nightmares of programs that take other programs as input and then let me run experiments on how the program behaves under certain changes to the source code. I want to write metaprograms dammit

I've been working on an extremely dynamic programming language for a while. Ricing away on the syntax to make it as flexible as possible, whilst maintaining readability.

One of the things that already works is some insane metaprogramming. It doesn't really have functions. It has callable lists of things. Which means you can modify them at runtime (still working on the names for the functions for doing that).

It is stack-based, so it's a little quirky, but as a taste of some very simple meta-programming:

    block
        varname:
        add! 1 $ $varname
        : $varname
    end
    inc-env:

    a: 1
    inc-env! a
Bare-words are a symbol-type. So they only resolve from the environment when you ask for it with the $ instruction, which is how the above works. A little tedious, but does enable all kinds of dynamism.


Neat! Reminds me of the Lisps still around that have f-exprs, like NewLisp (f-exprs are functions that take their arguments unevaluated, functions are mutable lists of code you can overwrite, etc.)


R (yes, the statistics language) has exactly this.

You can literally extract the body of a function as a list of "call" objects (which are themselves just dressed-up lists of symbols), inject/delete/modify individual statements, and then re-cast your new list to a new function object. Or you can construct new functions from scratch this way.

I've written utilities before that will inline specific function calls within the body of another function, or that implement function composition by constructing a new function from scratch.

I don't know why the original devs thought this was necessary or even desirable in a statistics package, but it turns out to be a lot of fun to program with. It has also made possible a wide variety of clever and elegant custom syntaxes, such as a pipe infix operator implemented as a 3rd-party library without any custom language extensions [0]. The pipe infix operator got so popular that it was eventually made part of the language core syntax in version 4.1 [1].

[0]: https://magrittr.tidyverse.org/

[1]: https://www.r-bloggers.com/2021/05/the-new-r-pipe/
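
A rough Python analogue of the same trick, using the ast module (clumsier than R, where a function body really is just a list of calls; the function names here are invented):

    # Parse a function's source, rewrite its body as a tree, and compile a new
    # function from the result. Run as a script so inspect can find the source.
    import ast, inspect

    def double(x):
        return x * 2

    tree = ast.parse(inspect.getsource(double))
    func = tree.body[0]
    func.name = "triple"
    for node in ast.walk(func):                 # rewrite the constant 2 into 3
        if isinstance(node, ast.Constant) and node.value == 2:
            node.value = 3

    namespace = {}
    exec(compile(tree, "<generated>", "exec"), namespace)
    print(namespace["triple"](10))              # 30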


And Rebol.

Arguments sent as `[blocks]` are always unevaluated but you can also explictly set a function argument to not be evaluated...

    f: func ['a] [
        print ["you gave me: " a]
        print ["and this is what it evaluates to:" get a]
    ]

    val: "hi!"

    f val
Outputs:

  you gave me:  val
  and this is what it evaluates to: hi!
And the function body is mutable...

    f: func [a] [
        print a
    ]

    f 10   ;; => 10

    append second :f [* 2]

    f 10   ;; => 20


Absolutely! That and Vau Calculus are the inspirations. My vague thought that started it all was that if you take an f-expr, and apply it to a stack machine, you might be able to take a stack-based language from a toy and turn it into something that could look and act professional.

PostScript is one of the very few stack-based languages that is used anywhere, and it's mostly used in anger. Forth has a bit of a following, but the syntax isn't as regular as Lisp, which makes metaprogramming a tad annoying at times. So I was curious if I could use the f-expr to change both of those. Thus far... I think it's a workable concept.


The fexpr idea is at the very heart of Lisp, which I think makes it truly transposable to other paradigms. Alan Kay credits the fexpr as one of the inspirations for Smalltalk[0] taken from Lisp:

> My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said "take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it". That was the promise of LISP and the lure of lambda—needed was a better "hardest and most profound" thing. Objects should be it.

You can see the basic idea for the Vau calculus in that quote. Shutt (RIP)[1] later elaborated the theory and proved they were not "trivial" and anathema to compilation [2]. It's great to see new interpretations and uses of the fexpr!

[0] http://worrydream.com/EarlyHistoryOfSmalltalk/

[1] https://en.wikinews.org/wiki/Wikinews_mourns_loss_of_volunte...

[2] https://fexpr.blogspot.com/2011/04/fexpr.html


Oh cool, another Vau Calculus enthusiast! There are dozens of us :D F-exprs for stack machines is very interesting, I'll have to ponder on that - do you have somewhere I can follow your work on your language?


> ...do you have somewhere I can follow your work on your language?

Not really. Not committing to anything publicly has let me feel more free to tear down and start from scratch an embarrassing number of times, and experiment a bit more. Makes it easier not to get over-invested in any one technique.

I have written a little bit, as I've slowly changed my approach. But it's haphazard. Not a whole heap. You can, though, see a few of the previous versions, that do "work":

+ 2 years ago: https://git.sr.ht/~shakna/esom

+ 3 months later: https://git.sr.ht/~shakna/stackly

+ 6 months ago: https://git.sr.ht/~shakna/james_scriptlang

It's still evolving. You can see where some things have stuck, and others have been tossed by the wayside. The current incarnation isn't public at all, yet.

If I ever get satisfied that I've got the right approach, I'll probably do a big write-up, then.


Or you could just use a lisp. Someone had to say it.


Well, now it's been said... Scheme is my all-time favourite language. The regularity of Scheme's syntax, and the way scoping works in the language, are certainly influential on the way I'm going about designing this weird thing.


Is Scheme scoping any different from any other language with lexical/static scoping?


It is simpler in some ways, mostly because scheme has a "syntax tower" where even the most basic things can be understood in terms of other things. Let can be defined in terms of lambda. Let* in terms of let. Letrec* is also not that hard.

Once you know this and the difference between definition contexts and bodies (which is simple), there isn't much to add. The top level works mostly like letrec*. Internal definitions work like letrec*. Libraries work like letrec*.


Or Smalltalk.


Yes please to contracts.

Dependent types and things like that are probably great. Being able to prove correctness is great. But I would settle for a way to declaratively express invariants and then let the language for the most part take care of inserting checks (in debug or using a special mode perhaps, or all the time) and generating tests/property tests. Why isn’t this more of a thing?


Refinement types is exactly what you are looking for. Languages like F* (F-star), Dafny, and Liquid Haskell are good examples.


> Why isn’t this more of a thing?

Because that's a half-assed way of using and leveraging a good type system? DBC always strikes me as a "three blind men and a type system worth using" parable.


Languages like Dafny and Ada show that contracts are great for augmenting a good type system, by letting you express really complicated invariants that are tough to express in types. I didn't include them because they're not "mainstream", IMO.


Dependent types are extremely difficult to use in practice for non-experts, and more or less don't work at all on floating-point numbers. Idris and a few other languages are working on it, but I think we are a long way off from there. Dynamic contracts solve the problem well enough for most use cases, even if they don't solve it fully.


Unit tests don't really work on those either.


To my mind contracts only complement a type system. I might want to effectively assert that a function returns a sorted list but I don’t want to assume it is true just based on the assertion never failing; I don’t want the sorted property to be reflected in the type.

A type that proves that all values that inhabit it are sorted would also be nice. But if either the language doesn’t support dependent types or it is too difficult to prove then a contract could be used instead (again: not as a replacement).

So to my mind (repeating the idiom) it can work parallel to the type system. What’s not to like (honest question :))? It doesn’t have to be intrusive, as far as I can see.


> A type that proves that all values that inhabit it are sorted would also be nice. But if either the language doesn’t support dependent types or it is too difficult to prove then a contract could be used instead (again: not as a replacement).

This is really easy in a dependently typed language. If something is hard to prove by construction you can do

    sort: List -> List
which sorts a list without proving it to be sorted, and then write a function

    isSorted: List -> Maybe SortedList
then just get your safe sort function

    safeSort: List -> SortedList
    safeSort l = case isSorted (sort l) of { Nothing -> panic; Just x -> x }
Of course, you don't even need dependent typing to do this. If SortedList is a new nominal type and you trust your isSorted function, everything works out.
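
A small Python sketch of that last point, with a nominal SortedList type whose only (trusted) constructor checks the property at runtime; all names are invented:

    # "Nominal type plus checked constructor": downstream code that receives a
    # SortedList can rely on the property without re-checking it.
    from typing import List, NewType

    SortedList = NewType("SortedList", list)

    def as_sorted(xs: List[int]) -> SortedList:
        if any(a > b for a, b in zip(xs, xs[1:])):
            raise ValueError("not sorted")
        return SortedList(xs)

    def safe_sort(xs: List[int]) -> SortedList:
        return as_sorted(sorted(xs))

    print(safe_sort([3, 1, 2]))   # [1, 2, 3]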


The not-so-easy part arises when you need to operate on a SortedList and want to enforce that it remains sorted after said operation. Maybe in the specific case of SortedList it's not that bad, but my (limited) experience is that the complexity in such cases can accumulate surprisingly high and surprisingly quickly.

I have frequently found myself out of my depth when trying to write what I thought should be straightforward dependently-typed code. I got the hang of it after a lot of practice (as well as a lot of reading, note-taking, and outside help). But now you're asking programmers to learn an entirely new set of skills on top of everything else they're expected to know.

I love the idea of dependent types, and I've had a lot of fun messing around with Idris 2, but I think dependently-typed languages have a lot of work to do in order to bridge the ergonomics gap.

I'm not convinced that, for garden-variety software development, dependent types are substantially safer than extensive use of contracts and property-based testing.


That's exactly what my framework does. As long as you're okay with runtime panics (which would happen anyway in a non-dependently typed language), you just have any function you don't know how to prove go from SortedList -> List. Then you have a check List -> Maybe SortedList. And finally a function with a panic that lets you write the SortedList -> SortedList version.


> a function with a panic that lets you write the SortedList -> SortedList

Ah, I think I see what you mean. Yes, that's also possible, but you are losing the full safety of dependent types, and are just back to contracts!


This is adjacent to one of my main complaints with Racket's contracts. If I want to use Typed Racket (which I almost always do) then it becomes significantly more difficult to attach non-type contracts to code. I find I often run into cases where I want to encode something as a contract that is impossible to describe with refinement types. Especially when gradually typing an old untyped codebase.


It’s not exactly a language but the Emacs calculator (aka GNU Calc) can do a lot of the things J does for the author. Product of factorials would be:

  v M ! v R *
Where the v may be capitalised if you like and the spaces are the way the commands are written but aren’t typed. That decomposes into some easy mnemonics: Vector Map factorial, Vector Reduce multiplication.

It has a bunch of mathematical features, a quick interface for plotting simple graphs (with gnuplot), some symbolic algebra, and it sort-of has the reactive feature too. (I think there’s also a mode for using it in files).

Date handling is OK but simple (a date/time is internally represented as a Julian day, so if you add 1 to a date/time you get the next day; times go into the fractional part). Leap seconds / time zones aren’t handled. There is an option to configure when your calendar switched from Julian to Gregorian.

It’s usually fast enough for what I use it for but it can be limited by performance.


The items "A serious take on a contract-based language" & "A language with semantic relations" are covered pretty nicely by Ada's SPARK subset/tools... and the really great thing is that the "aspects" are part of the code and don't "go stale" like annotated comments do.


Interesting to see the complaints on J. I've been playing with the Q language (basically K with some sugar) that is part of KDB+ and I think it solves all your problems in that it is a lot cleaner/simpler, very fast, support for dictionaries, strings, JSON, datetimes (it is a database too), very terse file IO...etc. The only drawback is it's a commercial product. Who knows though. Maybe they would give you a pretty cheap license if you agree to use it mostly like a calculator (not for large-scale financial analysis). The whole install was basically a single executable and a license file. Very elegant. The doc is finally pretty good too.

Edit: I tried using J and the install was pretty large iirc and the whole language is just too big and complicated. I know you can do cool stuff with it, but it just seemed to me that the cost was too high relative to benefit. Your mileage may vary.


  input = 1

  out1: input + 1
  #out1 is now 2

  input = 4
  #out1 is now 5

  out2: out1.replace(+, -)
  #out2 is now 3

  # let's pull an APL
  input = 4 2
  #out1 is 5 3
  #out2 is 3 1
The reactive programming idea a la Excel but text-based looks really neat. I'd love to play around with that kind of thing.
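
A rough Python sketch of that behaviour, treating formula cells as thunks that are recomputed every time they are read (the Sheet class and its methods are invented for illustration):

    # Plain cells hold values; formula cells are recomputed from the current
    # values on every read, so changing an input "updates" everything downstream.
    class Sheet:
        def __init__(self):
            self._values = {}
            self._formulas = {}

        def set(self, name, value):            # input = 4
            self._values[name] = value

        def define(self, name, formula):       # out1: input + 1
            self._formulas[name] = formula

        def __getitem__(self, name):
            if name in self._formulas:
                return self._formulas[name](self)
            return self._values[name]

    s = Sheet()
    s.set("input", 1)
    s.define("out1", lambda s: s["input"] + 1)
    print(s["out1"])     # 2
    s.set("input", 4)
    print(s["out1"])     # 5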


I haven't used it beyond playing with the tutorial, so I don't know how well it works in practice, but Svelte does this with JavaScript: https://svelte.dev/tutorial/reactive-declarations


I've been using Svelte in production for ~3 years now. Hard to go back once you get used to reactive programming.


I have a vague recollection that the language used by MS PowerApps might work like that:

https://docs.microsoft.com/en-us/power-platform/power-fx/ove...


Doesn't interactive Julia (is that what it's called?) do this? I remember seeing an example, but I might be misremembering things.


Pluto.jl[1] works like this. It’s a reactive notebook. It’s a really neat project.

[1]https://github.com/fonsp/Pluto.jl


Ah yes, that's what I was thinking about. Thanks! It's so cool xD


Yes. And Julia also does the "optimized version of...". It dispatches on types.


This looks like hell to me. It's side effects to the tenth power, which must make debugging an absolute nightmare.


Not really – you won't get into an inconsistent state when everything is reactive. One thing that might lead to catastrophe is having cyclic dependencies, but a reasonable runtime/compiler can detect those.


> you won't get into an inconsistent state when everything is reactive

That's like saying "There will never be bugs".

There will be bugs.

And when you come across one and you can change a variable and have another variable affected in some other part of your program and you have no idea how these two variables are connected, you are going to have a nightmare of a time debugging.


You would only have no idea those variables are connected if the tooling for the language can't help you figure that out. Ideally, in such a language you could just query the program state to determine everything that would be influenced downstream. Also, step-by-step execution and record/replay of program state can help debug those tricky issues. New paradigms of programming call for new paradigms of tooling and debugging, and figuring out what those are is part of the fun!


Of course there will be bugs. But they won't be caused by the system NOT updating the variable when it should be. But it is still up to the programmer to create a valid reactive graph.

> you have no idea how these two variables are connected

Simply traverse the reactive graph to reveal how/if two variables affect each other.

Cycles and dynamically changing bindings are still problems, but they can be tackled.


This would probably be terrible for application code. But when you are doing quick math/science exploration with mostly linear program flow and want to change a few things upstream, it could be useful. Everything has its place


Pyret has interesting contract functionality.

Lua is based around tables, which aren’t quite graphs but seem relatively close enough. What is a graph if not a collection of points associated with their edges?

As far as semantic function relations, I could swear Ada has something like this, but it’s been more than a decade since I’ve touched any.


“A serious take on a contract-based language“

Not ambitious enough. We already have dependent type systems and what I will call “property based types” since I forget the proper name e.g. https://ucsd-progsys.github.io/liquidhaskell-blog/.

The compiler can check anything from silly out of bounds conditions to more complex assertions.

Even without that, a good type system will allow you to alias types but have a constructor do a check. Once you use the Percentage0_100 type you know that it must be in that range, and the method doesn’t need to check it again.

I think Haskell is “too much” for a lot of teams, but sprinkling in some compile-time assertions and making that ergonomic with linting and so on would be a boon on par with async/await.

“Unit test?” no need, I have a proof!


Even better: Dependent Types combined with Refinement Types. See F* (F-star).


Refinement Types… thats the phrase I was looking for. Thanks!


All six features listed are very redundant or useless, in my opinion.

Contracts? How are they different or less verbose than plain asserts? What do they do better?

"reactive programming"? if remove that strange code editing "replace", just a chain of definitions instead of variables in, say, ruby, gives you basically the same effect.

etc.

What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

And, somewhat related (both use backtracking), adopting some of the ideas of https://en.wikipedia.org/wiki/Icon_(programming_language), not at the whole-language level but in some scoped generator context, would be nice for some tasks I had to do.


I'm taking "reactive" here to mean the language tracks data dependencies, so expensive computations are automatically made incremental. Facebook had a research language attempting to do "reactive" programming called Skip (originally Reflex), which is now defunct. The runtime made the language like textual statically-typed Excel.

The use-case was to abstract away having to manually figure out where to put caches and how to invalidate them on front-end servers. Rather, have the runtime figure out how to cache content and page fragments when e.g. rendering the Facebook timeline. However, it was too difficult to bridge the gap to the existing Hack codebase, iirc in particular to the entity system used. There were also a lot of headaches trying to figure out how to power the underlying cache invalidation system.

https://www.youtube.com/watch?v=AGkSHE15BSs

https://web.archive.org/web/20200219222902/http://skiplang.c...

The author I think means something slightly different though, closer to Prolog, where you define facts and then ask the runtime to make an inference about those facts.


There are a ton of reactive languages though? QML is a mainstream one, used in plenty of UIs and shipped as part of Qt (https://qmlonline.kde.org); there's also ReactiveML, Céu...


My take on a reactive language was a tiny AST manipulator language. Since `a=b+3` assigned the AST `b+3` to `a`, you would implicitly get `a==4` when `b=1`. There was also an "eval" operator for when you really wanted `a=b+3` to just assign the evaluation of `b+3` (a single number) to `a`.


> Contracts? how they're different or less verbose than plain asserts? what they do better?

How do you turn asserts into generative tests, or assign blame? Clojure's spec has support for the former (https://clojure.org/guides/spec#_generators), and Racket's contracts have support for the latter (https://www2.ccs.neu.edu/racket/pubs/popl11-dfff.pdf). Also, many popular languages have a pretty broken assert statement (C, C++, Python come to mind) which conflates optimization levels and/or debugging with assertion support. Rust gets this right.

> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

Lua sort of does this with LPeg. There's also https://rosie-lang.org/, which is more special-purpose.


Perl6 is now Rakulang.

https://raku.org/


When I need to parse text with Lua, the first thing I reach for is LPeg. It's great when you can create a standalone expression to parse, say, an IPv4 address, then reuse that in a larger expression. And the data can be transformed as it's being parsed (say, converting a string of digits into an actual integer value).

I have a bunch of Lua modules based around LPeg: https://github.com/spc476/LPeg-Parsers


Contracts, if done right, could be used by the compiler or some dedicated linter or tester before execution. This could open up safety guarantees that are way beyond what we can currently use. The question, of course, is what kind of constraint language is both useful and solvable. Unfortunately, people have been focused on the "dynamic" typing (i.e., no type checks) side of things for so long that static checking lags behind in usability (Rust is well on its way to improving things, though).

Regarding first-class grammars, you have to understand that they basically prohibit any other tool from parsing your language. This means everyone has to fully implement the language to create any kind of small helper tool. In turn, your language might easily fall into the "exotic" camp (like, e.g., TeX - this language effectively has first-class grammars, albeit at a low level).


> Unfortunately, people have been focused on the "dynamic" typing (i.e., no type checks) side of things for so long that static checking lags behind in usability

The longer I see things like mypy and typescript evolve, I’m actually really glad that people have been focused on gradually typing dynamic languages. As far as I can tell, really useful contracts (or really flexible types) are super burdensome in the most general cases (e.g. `refl` brain twisters in dependent types), but still insanely useful in frequent cases. It reminds me of what people say about static typing’s benefits and burdens.

So I’m hoping to see the gradual typing equivalent for contracts and verification. Start with usefulness and add a spectrum of optional safety that the programmer can ignore or use as they see fit. Personally, at least, that would be my ideal scenario.


Kotlin has some functionality for this that helps the compiler. A good example is the isNullOrBlank() extension function on String? (nullable String). It has a contract that treats the string as not null after it returns false. So if you do a null check, it smart-casts to a non-nullable string without generating a compile error.

There are a few more variations of that in the Kotlin standard library and you can write your own contracts as well. There's just not a whole lot you can do with it other than stuff like this. But it's useful.


Types in Idris sound like the contracts you mention. I learnt about them in the book "Type-Driven Development with Idris".


There is a fairly close relationship between a dependently typed language, like Idris, and 'contracts' (really, pre- and post-conditions plus other propositions) in languages like Ada/SPARK, Dafny, and Frama-C.

The major differences are that most (all?) dependently typed languages are functional and require the programmer to prove the correctness of the contract in (an extension of) the language itself, while the others typically use special annotations in normal Ada/C/a generic ALGOLish language and dump the proof onto external tools like SMT solvers, all resulting in a more 'normal' programming experience.


I think liquid types, à la Liquid Haskell, are a preferable middle ground in this scenario. The SMT solving is built into the type checker and the refinements are limited to a linearly decidable proof fragment. Dominic Orchard has done some work generalizing the capabilities of these kinds of refinements by showing that the refinement fragment needs only to be a semi-ring structure and the SMT solver can still resolve it. This would cover a large portion of contracts and would not greatly impact the development process.


There is ATS as an example of an imperative language with dependent types.


> Contracts? how they're different or less verbose than plain asserts? what they do better?

The difference from asserts is that they express a property over two points in the execution of a program, whereas an assert only states a property at one point in the execution of the program. Practically, that means that in the post-condition of some code you can refer to the state before that code in addition to the state after it.


This reminds me of @NotNull/@NonNull annotations in Java. Those annotations may trigger, depending on how the method is called.

Then there's just a plain old notNull() static method, which will trigger.


Xerox had xfst, which replaced regular expressions by regular _relations_, and offered named sub-expressions, which solved the regex "write only" problem.

Xerox' original XRCE (Research Centre Europe) pages are gone, but other sites offer a glimpse, and FOMA is an open source implementation of the same language:

[1] https://sites.google.com/a/utcompling.com/icl-f11/home/xfst-...

[2] https://dsacl3-2018.github.io/xfst-demo/


> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

Recently I've been tinkering with an indie Clojure-like Lisp called Janet. It implements Parsing Expression Grammars as a core library module: https://janet-lang.org/docs/peg.html


In SPARK contracts can actually be proven (though not all, and not all without effort). So they are a step above plain asserts, or really several steps. Contracts in SPARK are also checked at runtime, which can be disabled if you have proven them either with an automated prover or via some other mechanism (sufficient testing, manual analysis of the code, etc.). Though what can be proven right now is limited so you don't really get the full scope of Ada, but they are constantly working on extending what it can handle.

If you write tests now, the extra notations SPARK introduce aren't much more code than you're already writing, and really it's just entering into the code what you (hopefully) have in your head or on paper elsewhere.


One of the really fun experiences I've had with SPARK/Frama-C/some dependently typed languages was moving runtime tests into the contracts.

Your function only works on arrays of a certain length? Rip out the test-and-return-error code. Skip asserts (that get removed if you optimize). Put the check in the contract and the code won't compile unless the array is appropriate---which might involve a test at a higher level, where the error is easier to handle---and you get zero run-time cost.


> Inheritance and interfaces are relationships between classes. But what about relationships between functions?

Classes ARE functions. Conceptually, they are closures with some automatically generated features and language semantics.

https://www.youtube.com/watch?v=mrY6xrWp3Gs
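
For example, a quick Python sketch of the closures-as-objects point (all names invented; the dict of functions just stands in for method dispatch):

    # A "constructor" function closes over its local state and hands back the
    # "methods"; no class machinery involved.
    def make_counter(start=0):
        count = start

        def increment():
            nonlocal count
            count += 1
            return count

        def value():
            return count

        return {"increment": increment, "value": value}

    c = make_counter()
    c["increment"]()
    c["increment"]()
    print(c["value"]())   # 2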

Classical inheritance is BAD and you wouldn't want that kind of relationship for functions. Ofc, what the article suggests is basically assertions - why execute code once, when you can do it twice!?

Modern languages should have richer function signatures. After I write tests (a full testing framework should also be native to a language), being able to get a function signature which ensures a pairing with a specific set of tests would be great. No more side effects added in without breaking tests: you would have to make a test change even when introducing some new side effect that doesn't break an existing test.


> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

There is recent research in this area.

https://conservancy.umn.edu/handle/11299/188954


> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

Why not parser combinators as a library? The language has to be sufficiently advanced to allow that, but many are these days. E.g. F# has FParsec.
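
As a taste of the library approach, here is a tiny parser-combinator sketch in Python (far less complete than FParsec; a parser here is just a function from (string, position) to (value, new position) or None):

    def char(c):
        def parse(s, i):
            return (c, i + 1) if i < len(s) and s[i] == c else None
        return parse

    def alt(*parsers):                      # first parser that matches wins
        def parse(s, i):
            for p in parsers:
                r = p(s, i)
                if r:
                    return r
            return None
        return parse

    def many1(p):                           # one or more repetitions of p
        def parse(s, i):
            out, r = [], p(s, i)
            while r:
                out.append(r[0])
                i = r[1]
                r = p(s, i)
            return (out, i) if out else None
        return parse

    digit = alt(*[char(d) for d in "0123456789"])
    number = many1(digit)
    print(number("123abc", 0))              # (['1', '2', '3'], 3)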


ANTLR can embed “normal” code into the grammar itself; it is a really great tool that can be like a superpower for some specific problems.


Refinement types (for example) can be used to prove the code correct before even running it. Assertions typically can’t do that.


  input = 1
  
  out1: input + 1
  #out1 is now 2
  
  input = 4
  #out1 is now 5
  
  out2: out1.replace(+, -)
  #out2 is now 3
  
  # let's pull an APL
  input = 4 2
  #out1 is 5 3
  #out2 is 3 1
Not quite as sleek syntax, but in APL with my Lazy library[1]

        input ← 1
  
        ]lazy out1 ← input + 1
        out1
  2
        input ← 4
        out1
  5
        ⎕FX'out1' '\+'⎕R'out2' '-'⎕NR'out1'
        out2
  3
        input ← 4 2
        out1
  5 3
        out2
  3 1
[1] https://github.com/abrudz/Lazy


I want a language for safe collaboration. Suppose I create a GPL-licensed library for decoding mpeg4 that I publish somewhere. Someone should be able to fix a bug in that library and publish a new version with minimal involvement from my side and minimal overhead work. I shouldn't have to review pull requests or anything. It should all be automatic and the language should protect against malicious users inserting flawed code.

Copyright would be handled using a block chain so every commit's author would be publicly visible to users of the library.

The "source code" would be stored as a call-flow graph and perhaps nodes would have different permissions to sandbox the effect of untrusted contributors changes.


> Copyright would be handled using a block chain so every commit's author would be publicly visible to users of the library.

I think you might not want a blockchain: what you need is a merkle tree, which is basically what backs both git and blockchain


Honestly, Ada fits a LOT of that bill.

The type-system and forced spec/implementation split both work well to catch errors; you can go further with SPARK [proving] and using Pre- and Post-conditions, type-invariants.


I'm curious what the author means by "everything is a graph". Are graphs the primitives of the language? How are graphs defined or specified by the programmer, without using an even lower level primitive?

The author says:

> Give me a language where key-value maps are emulated with directed bipartite graphs

So this means you can't define graphs using key-value dictionaries (like Smalltalk or Javascript with their object primitives). So how exactly are these graphs defined? RDF format?

Something I've found while trying to design languages is that the hardest part is running into contradictions and circular logic. And I feel this "everything is a graph" idea will very quickly run into this issue.


It's not really different than Lisp. Lists are not a native data representation on any computer other than a Lisp Machine, and S-exprs are a textual representation of a list, not a list itself. Also, most realistic Lisps still admit other primitive types like numbers, strings, symbols, and booleans. Many also have hashtables, structs, and binary blobs. Nobody programs with numbers represented by Peano axioms.

Nevertheless, Lisp is still a very useful primitive for understanding computation, because the means of combination of these primitives are all themselves represented with lists. That makes for a very succinct definition for the rules that define computation, because there are fewer special cases to handle. Likewise, it makes it easy to write program-manipulating-programs, because there are fewer special cases to handle.

In theory a graph-based language could have similar benefits. Most compilers already represent programs as graphs internally: there are call graphs, control flow graphs, data flow graphs, type graphs, etc. You could imagine a language that tries to make graph construction & manipulation very succinct, and then see how far it gets you. Sure, at some point you have to implement on real hardware, but you can hide the actual implementation, just like the sophisticated compiler techniques used to transform lists into efficient machine code are hidden from a Lisp programmer who just wants to write some macros.


Maybe the author implies APL-style programming: in APL, your base datatypes are "array of number" or "array of character" and there's first class support for multidimensional arrays and/or nested arrays. But the main thing is that all the operators take arrays as input and return arrays as output.

With "everything is a graph", all your operators would expect to consume and produce graphs. So you'd have operators for parent/sibling relationships, operators for various kinds of traversals, extracting graph shape information to build higher-level operators like subgraph isomorphism (or maybe that would be a built-in operator).


So the idea is to first define some static graphs and then use operators to construct larger (possibly dynamic) graphs? But to define those small static graphs in the first place, you have to either use key-value mappings (how Smalltalk/Javascript does it) or maybe RDF format or something.


That is how I imagine it, yeah.

If I was designing an "APL for graphs" I would probably take a lot of inspiration from NetworkX[1]. Especially: generators for various graph types, the ability to read dicts/lists as graph data, and a huge suite of graph-related functions.

But I would also want dedicated notation for the most common/composable operations like APL. I particularly like how Jq[2] "functions" are inherently combinators, and Jq has some nice semantics for mapping and filtering operations on trees, but I don't really like its syntax that much.

[1]: https://networkx.org/documentation/stable/tutorial.html

[2]: https://stedolan.github.io/jq/
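
For reference, the kind of NetworkX usage being pointed at; this is just ordinary library code, not an "APL for graphs":

    import networkx as nx

    ring = nx.cycle_graph(5)                       # generator for a standard graph type
    g = nx.Graph({"a": ["b", "c"], "b": ["d"]})    # adjacency dict -> graph

    print(sorted(g.nodes()))                       # ['a', 'b', 'c', 'd']
    print(nx.shortest_path(g, "c", "d"))           # ['c', 'a', 'b', 'd']
    print(nx.is_isomorphic(ring, nx.cycle_graph(5)))  # True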


>> Everything is a Graph
>> Lisp: everything’s a list

Actually, the universal 'cons' cell data type of LISP (which is used to build lists) can also be used to represent graphs. Take, for example, Common Lisp. There are read-macros which make this possible: the expression #1=(hello . #1#) builds a circular list that only contains the symbol "hello" [1]. This example can be extended to build graphs.

[1]: https://letoverlambda.com/index.cl/guest/chap3.html#sec_5


> The standard bearer for contracts used to be Eiffel, which people stopped caring about in the mid-90’s. Nowadays the only mainstream language to get serious about contracts is Clojure. Most languages have a contracts library, which mean the language doesn’t have affordances to use contracts well. Take the predicate is_sorted(l)

Some languages, though, have contracts as a library where that library uses the tools of the language to actually change the language: languages with capable macro systems! Just because something is a library does not mean that it will not integrate into the language, even beyond the level of library functions. In a language with a capable macro system, one can create new keywords and forms which are just as much part of the language as the predefined things. It allows one to create one's own "affordances". With all the interest in language features, I am surprised that this was overlooked.

The post inspired me to try myself at creating a macro in Scheme / GNU Guile, which implements the "requires" and "ensures" idea, although evaluating at runtime: https://notabug.org/ZelphirKaltstahl/guile-examples/src/d749...


For contract-based programming, I'm personally planning on experimenting with Prusti: https://github.com/viperproject/prusti-dev

The withdraw example would look something like

    impl Account {
        #[requires(amount <= self.balance)]
        #[ensures(self.balance >= 0)]
        pub fn withdraw(&mut self, amount: u64) { ... }
    }
and Prusti has a good story for going from this to larger proofs.


I would love to see a programming language with some sort of bitemporal data store built in. Let me define dataflow graphs, including highly stateful logic over streams of data, but let me go back and change data, or fix bugs and replay from the beginning of time and view the state of the system (in a nice REPL) at any point in time. Gimme a fast, distributed runtime that can scale to any level of complexity or data size.

I can imagine it being Materialize who build this first but in my heart of hearts I’d love it not to be SQL.


I want an IaC- and NixOS-level language with Prolog inspiration. Let me express how to go from facts to goals with constraints.

It would probably make more sense than HCL or Pulumi.

Agent-based languages are close in theory, but they are really targeted at specific simulations and not at explaining what went wrong.

It would also make it easier to do partial application and resumption of it. And probably make constraints for security easier.

Probably make composition of infra level work easier too.


I've been working on something like this (though written using Clojure & core.logic)!

I'm glad to see there's someone else out there who thinks it's useful.


The dynamic typing example doesn't have anything to do with dynamic typing and is in fact already a feature of some statically-typed programming languages.


Common Lisp can do that:

    CL-USER> (defvar i 1)
    I
    CL-USER> (declaim (type (integer 1 10) i))
    (I)
    CL-USER> i
    1
    CL-USER> (setf i 8)
    8
    CL-USER> i
    8
    CL-USER> (setf i (1+ i))
    9
    CL-USER> (setf i (1+ i))
    10
    CL-USER> (setf i (1+ i))
    ; Evaluation aborted on #<TYPE-ERROR expected-type: (INTEGER 1 10) datum: 11>.
    CL-USER> i
    10


in pike:

    #pragma strict_types
    void main()
    {
      int(1..10) i = 3;
      i += 7;
      i += 1;
      write("%O\n", i);
      i = 11;
    }
output:

    typetest.pike:6: Warning: An expression of type int cannot be assigned to a variable of type int(1..10).
    typetest.pike:8:Bad type in assignment.
    typetest.pike:8:Expected: int(1..10).
    typetest.pike:8:Got     : int(11..11).
    Pike: Failed to compile script.


As TFA mentions, Smalltalk probably does it as well, but Common Lisp has all of the features for "A really dynamically typed language"


Well, since all graphs can be represented as binary graphs, and those can be comfortably represented in lists, Lisp is de facto the friendliest graph language. But I agree with the sentiment, especially in regards to the syntax... something new has to be imagined in order to represent a language oriented toward graph traversal and manipulation.


I'm a little late to the party, but humbly submit my thoughts/design from a while back for your review/feedback: https://github.com/vinodkd/halo/blob/master/doc/UserGuide.md...

Edit: I've posted this same comment in multiple places in this conversation to get feedback from specific commenters, not as spam (it's an old personal project)


Give me a language like Rust that allows me to define an efficient concurrent garbage collector as a library.


As I noted elsewhere here, but since it is relevant to your comment I think: What I would like to see, along these lines, is a language that is as easy to learn/use as python/ruby/etc, but based on rust and rust-like, using rust libraries if at all possible, with garbage collection, so one can have an easy entrypoint to Rust, and if ever needing the full performance etc of rust, change a compiler switch (or something) and the full rust kicks in.

In other words, a step-by-step learning process from simple to hard, maybe 2-4 levels, and you never have to throw away what you already learned and start over, when going to the next step.


OCaml is "sort of like Rust, but with a GC." It could serve as a good introduction to functional programming, but without having to worry about the borrow checker.

(I know that's not quite what you're saying -- it sounds like you're asking for a "Rust--", which is Rust with some complicated stuff turned off. But it's a start.)


Thanks. I think I read on Wikipedia that Rust itself was originally written in OCaml.

To slightly clarify my earlier comment: I am learning rust and plan to use it, but in the long run it seems it could save time for the world overall, if there were something to recommend to others, say, members of a team who want or need something easier, but is easily transitioned to full rust, with as much shared knowledge between the two as possible.


You might be interested in Cone [0] which is centered around letting their users build custom memory management strategies, and then making them all work together with each other and a borrow checker.

[0] https://cone.jondgoodwin.com/


Very interesting, thanks for the link!


Re the "everything is a graph" idea, Mathematica has been building primitives for some time toward this idea where literally everything is a graph: https://www.wolframphysics.org/


We use an internal language at work that's a graph-based language with first-party GUI support. Some interesting info about it:

* There is just a few primitive node types (Class, Method, Attribute, Relationship, GUI Bucket) with the rest of the system being derived off of that.

* Underneath is a replaceable layer of java (+js/html for UI).

* Still not sure if it's quite front or back end code, as it contains business logic, GUI info, and largely runs on the server.

* Back end improvements happen automatically and system wide. The limiting nature of the graph ensures minimal regressions.

* Most of the SKU product code is contained within this graph, making for definable relationships between any part of the code.

* Internal languages mean lots of internal tools that enable better management of the graph.


I would like to see a proper language with built in support for Unix Shell commands. Similar to bash or perl with a use Shell import, but without the limitations of bash or the craziness of perl.

I really like the idea of zx, but still doesn't look clean enough to me.


Have you seen oilshell.org?


This list of wanted language features is quite interesting.

> A serious take on a contract-based language

As the article itself notes Clojure does support contracts in the language and with libraries. But there has been a ton of research about that, e.g. JML [1].

> A language with semantic relations

They can be expressed in contracts (e.g. Clojure) or with property-based test frameworks (e.g. Haskell, Clojure, and many others nowadays).

> Everything is a Graph

See graph rewriting, e.g. [2]. That was just my first google hit with the proper search term.

> A better calculator language

That idea looks to me like automatic transformation from offline to online algorithms [3]. I am not aware of any compilers being able to do that. But dataflow graphs can be used to design such systems. A list of such languages or libraries (e.g. for Clojure) can be found in this Stack Overflow question [4].

> A really dynamically-typed language

As some other answers here already mentioned, modern static type systems are able to express new types at runtime, e.g. with Scala's path-dependent types. Again, Clojure has an interesting entry here: clojure.spec [5], a runtime-represented specification language which can be used together with contracts, to formulate property-based tests, or anything else you would like to do with it.

Btw, I've only toyed with Clojure so far and I used to be an avid proponent of Haskell/ML-style static type systems.

[1]: https://www.cs.ucf.edu/~leavens/JML/index.shtml

[2]: https://link.springer.com/chapter/10.1007/978-3-642-32211-2_...

[3]: https://en.wikipedia.org/wiki/Online_algorithm

[4]: https://stackoverflow.com/questions/461796/dataflow-programm...

[5]: https://clojure.org/guides/spec


As far as graph-based languages and languages with arbitrary metadata and relationships between objects are concerned, I've been mulling over a language where expressions are represented as RDF graphs and that has built-in support for manipulating RDF graphs. I've used the concepts as an intermediate representation for [mostly numeric] functional expressions in a few different systems (including Factorio's map generator), but haven't yet had the motivation to really flesh it out into a full-blown language. https://github.com/TOGoS/TOGVM-Spec


For calculator languages, I think there are several choices. Depends a bit on what you know, and what you need...

Frink (https://frinklang.org/) has been around for ages, and is rooted in physical unit conversions

Calca (http://calca.io/) has come up a handful of times. It looks pretty reasonable

R, if that's your flavor

Anything with a REPL. Though the OP suggests these are cumbersome, I'd counter-argue that anything you know well is usually more efficient than learning something new.

A very long list of commercial tools: Matlab, Mathematica, WolframAlpha, STS, SAS, etc


This is very nonambitious, but I’d love to see a Clojure with less LISP. That is - something like JS syntax but with immutable by default types and const only bindings. Macros would be great too, even if they require a different compilation mode. Top it off with great standard library that has _all_ the collection operations you’ll ever need (e.g. partition-all and dedupe in Clojure). nREPL support or the like is a hard requirement.

Spending the last 6 years writing Clojure has been great. That said, the parens don’t add positively to the experience. They feel like semicolons in C - “the compiler is too dumb to understand this so I have to help it.”


I'm wondering if that has to do with the time spent writing Clojure code?

I have only written Lisp code during uni and on pet projects, and I always feel like the parentheses make things easier to process visually. The AST is explicit and basically right before my eyes, and it looks nice because of the functional style.


Did you try structural editing and using keyboard shortcuts to select, cut, and paste the whole inner contents of a parens scope? For me that made the whole difference, and I never found a similarly good feel in other languages.

Fair to mention that Dart (Flutter) in VS Code has refactoring options which remove, add, and modify widgets, but it falls short in comparison to Lisp.


Structural editing is basically a hard requirement with Clojure - I don't think you can realistically live without it. Paredit is fine, but it does take 2-3 months for your fingers to adapt. It would be even better if they didn't have to though.


Elixir feels similar, being built on macros and doing transformations on immutable values.


What I really miss is a lang that can be used to write code efficiently on smartphones.


> A language with semantic relations

This is a cool idea, especially around a formal structure to express relationships between parts of your code. As to the example in the article on this, it seems property-based testing is a decent way to achieve something like this with existing languages.

A simple example of a property-based test for a `const add = (a, b) => a + b` function would be that, for positive a and b, the result of the invocation is always larger than either argument, i.e.

  expect(add(a, b)).toBeGreaterThan(a);
  expect(add(a, b)).toBeGreaterThan(b);
And then you can randomly generate or fuzz the values for a and b, and the test should always pass (provided the generator sticks to positive numbers).


> I miss VB6.

Me too. But there is something nice on Linux called GAMBAS: http://gambas.sourceforge.net/en/main.html


A guy I know actually did make a graph programming language, implemented on top of neo4j. It was very slow, but he did use it successfully for analyzing SQL stored procedures and finding fraudulent transactions.


What I would like to see is a language that is as easy to learn and use as Python/Ruby/etc., but based on Rust and Rust-like, using Rust libraries if at all possible, with garbage collection. That would give an easy entry point for anyone who wants one, and if you ever need the full performance of Rust, you flip a compiler switch (or something) and full Rust kicks in.

In other words, a step-by-step developer growth process from simple to hard, maybe 2-4 levels, and you never have to throw away what you already learned and start over, when going to the next step.


In D you can do everything with automatic memory management (garbage collection) if you like. It does make for quick development.

You can also do explicit memory management, which of course takes more programmer effort.

It turns out most programs wind up using a mix of the two, as GC is better for some things, and explicit is better for others.


Thanks, that is interesting. I mentioned Rust because of its broader use and apparent growth trajectory, i.e. to be likely usable in the most places with minimal re-learning when moving to a new codebase. Or to increase the odds of that, at least (scripting, but also Linux kernel use, etc.).

I will keep D more in mind though... is it on OpenBSD? (Edit: I see it is probably there via GCC, at least.)


D support for OpenBSD via GCC and dmd should be ok. Brian Callahan is a name to look for in the blogosphere.

Also the D in gcc is full fat D, no "at least" required.


PS, to clarify: I am learning Rust, but in the long run it seems it could save time overall if there were something to recommend to others, say, members of a team who want or need something easier but that transitions easily to full Rust.


A language designed around having first-class GUI support: Tk dialect of Tcl


Dart is a language optimized for use in the Flutter GUI framework. Well done.


> A language designed around having first-class GUI support

That's Imba: https://imba.io

And it's fast, enjoyable and productive


Imba looks interesting, and promises full JS interoperability, but I went through the docs and could not see anything about how that is done (specifically, using existing JavaScript libraries from Imba).



For the really-dynamically typed language, that for sure is covered in Smalltalk.

And for the first-class GUI support, in its own way too, but also Swift in XCode is very very good.


Yes, Swift UI is awesome, if you don't fear making a UI in code as opposed to a drag-and-drop UI editor (the former always "clicked" better with me than the latter anyway, perhaps due to my web dev background). Really wish it had broader cross-platform support - but that's not exactly something VB was known for, either.


The language I want is a specification language (a bit like Pluscal or P) which allows me to specify distributed system behavior, then ask both correctness (liveness/safety) questions and quantitative behavior questions. I wrote a bit about it here: https://brooker.co.za/blog/2022/06/02/formal.html


For a contract based language and a "really dynamically typed language", I'm working on https://letlang.dev

And it's dynamically typed because I haven't yet thought about how to do static type checking with such a feature.

I haven't got any time to work on it in the past few weeks, and I'm the only dev (would really love some help). So, it will be ready when it will be ready :P


For reactive programming, Verilog does that as its primary function. You can write imperative code as well. But it is a very primitive language that doesn't even support structures, and it has loads of historical cruft. SystemVerilog improves on it greatly. VHDL is contemporaneous with Verilog and is much more principled because it was modeled on Ada. I have never heard of anyone using Verilog for anything other than modeling logic.


The reactive programming idea is implemented by the Kotlin Compose project as a compiler plugin so it behaves as a language feature. Compose is confusingly named in that there is a more popular Android UI toolkit with the same name built on top of the reactive feature. But Compose the compiler plugin is available by itself without the UI thing.

I've been playing around with it and find it can really simplify a lot of code.


Some versions of K have reactive programming -- they call them views.

normal assignment:

var : expression

view:

var :: expression

https://code.kx.com/q/learn/views/

https://code.kx.com/q4m3/9_Queries_q-sql/#911-views
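For readers who don't know q/K: a view is a value defined by an expression over other variables, and reading it gives a result consistent with their current values. A toy Python sketch of the idea (simplified; real kdb views cache their result and recompute only when a dependency changes, whereas this recomputes on every read):

    class Cell:
        def __init__(self, value):
            self.value = value

    class View:
        def __init__(self, fn):
            self.fn = fn              # the defining expression over other cells

        @property
        def value(self):
            return self.fn()          # recomputed on read

    a, b = Cell(2), Cell(3)
    total = View(lambda: a.value + b.value)   # roughly: total :: a + b

    print(total.value)   # 5
    a.value = 10
    print(total.value)   # 13 -- the view tracks the new value of a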


+1 for "I miss VB6"

Using a dynamic web tooling product at the moment, and at first I was enthusiastic, but it's crippled compared to VB6


I'd like to see a language with baked in support for dependency injections, so that my functions can take two types of parameters: its "real" parameters, which it needs to perform its function, and "dependencies", that are passed implicitly by the runtime.

Basically, a formalization of Dagger/Guice into a language.
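A minimal sketch of that split in plain Python (my own toy, not Dagger/Guice, and without the compile-time wiring checks a real language feature could give): parameters whose names are registered as dependencies get filled in by the runtime, the rest come from the caller.

    import inspect

    REGISTRY = {}                      # dependency name -> provided value

    def provide(name, value):
        REGISTRY[name] = value

    def inject(fn):
        sig = inspect.signature(fn)
        def wrapper(*args, **kwargs):
            # fill in any parameter that is registered and not supplied by the caller
            for name in sig.parameters:
                if name in REGISTRY and name not in kwargs:
                    kwargs[name] = REGISTRY[name]
            return fn(*args, **kwargs)
        return wrapper

    provide("db", {"users": ["alice", "bob"]})

    @inject
    def list_users(prefix, db):        # `prefix` is a "real" parameter, `db` a dependency
        return [u for u in db["users"] if u.startswith(prefix)]

    print(list_users("a"))             # ['alice'] -- db was supplied implicitly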


You can do dependency injection with dynamic binding. See another post in this thread: https://news.ycombinator.com/item?id=32085346

An approach to do it statically is with implicit parameters. Scala has them, for example.


Monadic functional languages have native support for dependency injection, and typeclasses/interfaces provide interoperability and easy mocking or alternative implementations, with type-checking and propagation of which dependency is being used (so as to avoid mismatches between mocks and a real implementation, for example).

https://itnext.io/hmock-first-rate-mocks-in-haskell-e59d7c3b...


This article is just about mocks and doesn't really touch on dependency injection at all, and Haskell simply doesn't have the mechanisms to supply the kind of implicit passing that useful DI requires.


Sounds like you are talking about ReaderT, which is a common way of implicitly passing more data to functions. There are many other ways, though.


`Reader` and `ReaderT` are not implicit at all, they still need to pass parameters explicitly.

What other ways do you have in mind?

I am not aware of any Haskell functionality or compiler flag that enables implicit parameter passing, like Java, Kotlin, and Rust allow.



People have written drafts of a design for this for Rust, calling it capabilities.


As a part of the language standard / standard library, or as a third party crate? I'm confused how this would work, since, as far as I'm aware, making this a language-level feature would require significant changes to the ethos of the language.

There are some examples, e.g. the Bevy ECS doing more or less dependency injection for you (and ensuring against things like aliased mutable references), which is pretty neat.



Fascinating. Not sure what my opinion is of it yet.

Reminds me of implicit parameters in Scala - https://docs.scala-lang.org/tour/implicit-parameters.html

Edit: I think I like it. It doesn't solve dependency injection like e.g. Guice, but the examples given, e.g. providing implementations for traits defined in external crates that take arbitrary state, are compelling.


Is there a relationship between contracts and proof languages like Idris, or are these completely different approaches?


I thought Pascal's ability to specify the bounds of an array were cool:

    vector = array [1..25] of real;


Fortran has that:

   real :: vector(-2:5)
declares an 8-element array with bounds -2 and 5.


> A better calculator language

See Calca: http://calca.io


I miss VB too. It's amazing how simple and intuitive it was comparing to how we create UIs nowadays.


Is there a good programming language and environment for kids? I started learning coding on my own around 10 years old with QBasic. Scratch is super awesome, but what is the next level up from that, one that actually involves typing the code out instead of moving puzzle pieces?


Microsoft MakeCode will feel very familiar coming from Scratch, but compiles into pretty readable (and editable) JavaScript, so it might be a good way to move up: https://arcade.makecode.com/

And it still gives you the image editing tools (pixel editing for MakeCode), asset management, game loop, etc – all of which are too much to take on for a new programmer, but are really helpful, and hopefully make it feel like less of a regression.


Logo? Racket?


Recently I have been shopping for a configuration language that has static types and is easy-to-use and exports to common formats. Dhall came close but I was surprised to find that it does not support recursive types in a simple way. Maybe I should just use Haskell.


what kind of software configurations may require recursive types? I've never had to encode trees in my configs so far.


Is there a language where everything is observable? Like RxJS all the way down. Can that exist?


The one I want to see is an analytics PL where dataframes are first-class citizens. The only language that does that right now is SQL. But really, pandas shouldn't be a library in Python; it should be in Python itself with a specific syntax.


That's R (and S/S+ before it), for better or worse.


Dataframes aren't first class in Julia? I guess it's provided by a library. But Julia has really good vector and matrix support with broadcasting, like R and unlike Python, which relies on NumPy for that (Pandas builds on top of NumPy).

The R and Julia syntax are better than Python for vectorized operations, although NumPy and Pandas mostly make up for it using the magic methods.

In terms of keystroke efficiency, nothing beats the array languages. One might think APL is a little too terse, whereas Pandas is a little on the verbose side.


As it's been mentioned, that's pretty much R. Unfortunately if you need to do anything that can't be coerced into a dataframe it's quite ugly.


What about R?


Array languages (APL/J/K/Q) all do this well, particularly kdb


With respect to the everything is a graph language, I think SetL [1] is pretty close.

[1] https://en.wikipedia.org/wiki/SETL


I'm sorry but these all seem superficial to me.

E.g. contracts language could mean better understanding of substructural stuff so we can handle "real resources". But coming up with some syntax for this is the last step.


> A language designed around having first-class GUI support

Definitely qt, right? https://www.qt.io/

And HTML/CSS :downtrodden-developer-with-woozy-face:


Qt is not a language though?


Qt Quick is its own language (and a really nice one at that)


> A better calculator language

Well, there are many of them, like MATLAB, IDL, Scilab, R, and others. Python is verbose because it's not a calculator language but a general-purpose language. I find J unreadable.


R is my go-to for this, and one I recommend often. It makes a nice calculator language (with only a little weird syntax), has a very rich standard library of functions, and is reasonably fast.

The great thing about R is that it scales well with task difficulty. It works well for a couple numbers. Then you can use data frames, and the Tidyverse (https://www.tidyverse.org/) packages to do a lot of data analysis (especially dplyr, which is extremely powerful). Finally, if you want to look at your data, the built-in plotting capabilities are solid, and ggplot2 is both extremely powerful and reasonably easy to use.


Indeed, the example:

> import math prod([math.factorial(x) for x in l]) No! Bad python! In J it’s just / ! l, quite literally an order of magnitude fewer keystrokes.

I don't know anything about `match.factorial()` or `prod()` but I can deduce what it's doing. `/ ! l` is nonsense.


I don't need you to be able to deduce what the J is doing. What I need is to write the computation as fast as possible so I can get the answer I need to know and then get back to whatever else I was doing. If I was writing something I needed to share with someone else, for sure I wouldn't use J!


match? definitive proof that verbosity introduces bugs ;) (also you can't deduce what ! means? it's literally the factorial symbol)


> A better calculator language

https://calca.io/


> You know how annoying linked lists are? Graphs are 1000x worse.

I thought linked lists were one of the easier data structures to implement? Is he saying they're annoying to use or implement here?


> I thought linked lists were one of the easier data structures to implement?

Unless you are a Rustacean…


There are plenty of really dynamically-typed languages. io is probably the most so of them, but I'd argue Smalltalk and even Objective-C are pretty up there.


What’s the point of contracts? Can’t you use tests for that?


Think of them as richer versions of asserts. And in languages where they are first-class parts of the language (that is, they become meaningful metadata on functions, variables, classes, etc.), you can end up deriving tests from them or running them through a prover (not always possible, or it might restrict you to a subset of the language). They are complementary to the type system and test systems, just like asserts are, but richer in potential than plain asserts. See the recent AdaCore blog post:

https://blog.adacore.com/i-cant-believe-that-i-can-prove-tha...

HN discussion: https://news.ycombinator.com/item?id=31975507
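As a concrete (if toy) illustration of "richer asserts plus metadata", here is a hand-written Python sketch, not any particular contract library: the conditions are checked at runtime, and they are also kept on the function, which is the hook that lets tooling derive fuzz tests or feed a prover.

    def contract(pre=None, post=None):
        def deco(fn):
            def wrapper(*args, **kwargs):
                if pre is not None:
                    assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
                result = fn(*args, **kwargs)
                if post is not None:
                    assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
                return result
            wrapper.__contract__ = (pre, post)   # metadata other tools could consume
            return wrapper
        return deco

    @contract(pre=lambda xs: len(xs) > 0,
              post=lambda r, xs: r in xs)
    def maximum(xs):
        return max(xs)

    print(maximum([3, 1, 2]))   # 3
    # maximum([]) fails the precondition instead of blowing up inside max()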


“Everything is a graph”

This is sort of an odd thing to want. You need a diversity of data structures. And BTW, the acronym notwithstanding, lisp represents graphs not lists.


Under the hood, almost every data structure is a graph.

Linked lists, trees, and heaps are all just graphs with certain requirements about their structure.

Everything else can be represented as a graph, just not as elegantly.

Also as a data point, I really really wish there was an everything is a graph language. I work in graphs all day (in the private industry) and I have to keep modeling them as arrays. It's a pain.


I'm a little late to the party, but humbly submit my thoughts/design from a while back for your review/feedback: https://github.com/vinodkd/halo/blob/master/doc/UserGuide.md...

Edit: I've posted this same comment in multiple places in this conversation to get feedback from specific commenters, not as spam (it's an old personal project)


How so? The cons data structure only points to a single object. You could do singly-linked lists and trees (only with kids pointing to parent) and that's about it, right? And I guess simple loops?


The cons is a directed edge.


To have a graph, in general, you have to have multiple directed edges from a given object. A cons does not offer this, and the best you could do would be to treat cons cells in an extremely convoluted way as binary hypergraph nodes.

You said: "lisp represents graphs not lists". I think you meant that Lisp's defining data structure, based on the cons, was for general graphs, not lists. But that is not really true. If what you meant was "Lisp isn't just restricted to linked lists", sure, but that's true of every programming language with object arrays. So what?


Well, that's part of my point. However, in Lisp you get to macro the language into whatever you like, so you can make lisp into the desired graph language, layered on CONSs, but not so much with other languages because only in Lisp do you get to create an entirely new language. (People think of Lisp wrongly as a programming language, whereas it's really a meta-programming language.)


> A serious take on a contract-based language

This already exists: programming languages that support refinement types. Examples are Liquid Haskell and F* (F-star).


I want a programming language that takes dynamic scope seriously. Not because I think it would be good, but because I think it would be interesting.


Kernel is very interesting. It has fexpr-like combiners (operatives) which implicitly receive a reference to the dynamic environment of their caller. The environments are first-class and modelled as a DAG, with the local bindings in the root node plus a list of parent environments. The list of parents, and any bindings in the parent environments, cannot be modified; only the local bindings can. This constrains the operatives a bit - they can't just mutate anything in the dynamic environment, but only the local bindings of an environment for which they have a direct reference.

Additionally, Kernel has dynamic binding built-in, and done in a nice way. You never explicitly access a dynamic variable but you only have an accessor function to it. A new scope is created when binding a dynamic variable, and the accessor will return whatever was bound anywhere in the dynamic extent of this scope.

    ($define! (with-str get-str) (make-keyed-dynamic-variable))

    ($define! f
        ($lambda ()
            (print (get-str))))

    (with-str "Hello, world!"
        ($lambda ()
            (f)
            (with-str "Goodbye, world!" f)
            (f)))
Prints "Hello, world!" "Goodbye, world!" "Hello, world!"

Perhaps the only downside to this is that a runtime error will be raised if `get-str` is called before a value is bound. I think it would be a bit nicer to include a default binding as an argument to the call of `make-keyed-dynamic-variable`.

http://web.cs.wpi.edu/%7Ejshutt/kernel.html
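A rough analogue of the same dynamic-extent idea in Python (my own sketch using the standard-library contextvars module, not anything from Kernel): the binding is visible to whatever runs inside the extent and is restored when the extent ends.

    from contextvars import ContextVar
    from contextlib import contextmanager

    current_str = ContextVar("current_str")

    @contextmanager
    def with_str(value):
        token = current_str.set(value)
        try:
            yield
        finally:
            current_str.reset(token)

    def f():
        print(current_str.get())

    with with_str("Hello, world!"):
        f()                                  # Hello, world!
        with with_str("Goodbye, world!"):
            f()                              # Goodbye, world!
        f()                                  # Hello, world!

Incidentally, ContextVar takes an optional default (ContextVar("current_str", default="...")), which is exactly the fix for the unbound-access problem mentioned above.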


Like Common Lisp? You get dynamic scope through what it terms "special variables". They work as expected, and can be handy.


Lisps often have dynamic scoping. elisp used to only have dynamic scoping.


If I understand what you mean, elisp has dynamic scope. That's turned out to be useful to me a couple of times... balanced against the hundreds of times that it did something I didn't want.


You can opt into lexical scope for a script by starting it with:

  ;; -*- mode: Emacs-Lisp; lexical-binding: t; -*-
You can then opt into dynamic scope for single variables, when it makes sense:

  (defvar variable)


Perl has dynamic scoping (along with lexical scoping).


For the contract language, I’d recommend checking out Daml! It’s a Haskell-based smart contracts language with some pretty neat behaviors


> I want a language that’s terse for everything. Maybe you could namespace operators.

Haskell and lenses is pretty nice here.


Solidity (ethereum cryptocurrency language) is based on contracts.

Svelte (javascript framework) has reactivity build in.


The closest examples I can think of to these, in sequence:

- Idris (dependent types rather than contracts, but squint and the UX is the same)

- Python's decorators enable some of this, but aspect-oriented programming may also be what they are looking for. Boomerang is totally different but also an exploration in this space.

- Io, or even JavaScript, fits this description

- There's a calculator like this whose name I am forgetting at the moment

- This looks like dependent typing to me, see Idris above


Liquid Haskell is much closer to contracts than Idris, if you haven't heard of that already. I'm interested in exploring contracts independently of types, though.


I have, but Idris feels much more like a dynamically typed language. Just pretend they are saying contract when they say type.


I feel VB6 appeals to sys admin types who don't get kept up at night by misaligned padding or a misplaced pixel. It's similar to the MS Access crowd that happily pump out functional abominations without a second thought. My first UIs were made using AutoHotkey in the early 2000s and while I felt empowered, I ditched it as fast as I could.


For GUI-focused languages, we have C# and Java.

For a fully dynamic scripting language, look no further than Python.

Function inheritance? We have decorators in Python already, and most functional languages support seamless function composition. Take your pick between Haskell, Lisp, Scala and even Rust.

Designing a new language needs a pragmatic purpose and, sometimes, a valid ideology.


> (Someone’s gonna tell me this is 100% smalltalk for sure)

IIRC Objective-C can bind new methods at runtime


I'd like to see one like rust, but with an AI-enabled compiler that can help you by suggesting implementations, and also estimating time/space complexity for you, detecting infinite loops, etc. So rust + copilot, but more, and designed into the language from the start, not cobbled together from public repos.


Graphs are really common data structures but there hasn’t yet been an “everything’s a graph” language

Nearly all common languages like Python, JS, Java, OCaml, etc. let you express graphs with records / objects. Everything is a graph!

If you want a homogeneous graph, you just declare a single node type, like

    class Node:
      edges: List[Node]  # or maybe Dict[str, Node] if you want them to be named
      payload: int
- Or you can have a heterogeneous graph with many different node types.

- If you want to label the edges, you can reify them as their own type (see the sketch just below)
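A sketch of that last bullet (hypothetical Node/Edge names, Python again): the edge becomes its own object so it can carry a label or weight.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Edge:
        label: str
        weight: float
        target: "Node"

    @dataclass
    class Node:
        payload: int
        edges: List[Edge] = field(default_factory=list)

    a, b = Node(1), Node(2)
    a.edges.append(Edge(label="knows", weight=0.9, target=b))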

The whole heap is graph-shaped!

It's true that many programs in these languages are more tree-like than graph-like. And sometimes imperative code to manipulate graphs is hard to visualize.

But I think there doesn’t need to be a separate language of graphs for this reason.

---

If you want to see graphs done in plain C, look at this DFA code by Russ Cox:

https://swtch.com/~rsc/regexp/regexp1.html

e.g. Implementation: Compiling to NFA

(copy of lobste.rs comment)


"Everything is a graph" doesn't mean you can make a graph from other data structures, it means you have to make other data structures from a graph.

A graph isn't defined as a list of nodes. A list is defined as a directed graph where each node has one edge in each direction.


But why? What would that make easier?

A list is defined as a directed graph where each node has one edge in each direction

That definition works equally well for existing languages, e.g.

    struct Node {    
      struct Node *prev, *next;
      int payload;
    };
The types give you some constraints on the shape of the graph.


I want to see an emulated hardware design lang to make desktop apps.


I’d like to see a modern BASIC-like language with line numbers :)


And that weird hybrid of line editor and full screen editor that eight-bit micro BASIC typically had.


Aren't contracts just a worse form of dependent types?


Java ecosystem but C# language.

There needs to be a J# project lmao


> keystokes matter a lot

Meta


Regarding graphs, I have also wondered why they are not part of the regular suite of collections / literal syntaxes in most languages. In fact, they are really rare to even see in day-to-day programming, even though most of our problems could make use of them in some way. I think part of it must come down to the fact that you can't come up with a satisfactory way to encode graphs of data in a text-based literal. You will have to do a lot of repeating of nodes, depending on how you want to express it. Take this simple graph from Wikipedia for instance https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/6n...

We could express this as an adjacency list like

    Graph{  
    1 <-> 2  
    1 <-> 5  
    2 <-> 3  
    2 <-> 5  
    3 <-> 4  
    4 <-> 5  
    4 <-> 6  
    }
So there's already a lot of repetition of the node labels, but in real-life applications you would probably have actual data associated with each node (like maybe a user_id, or a name, etc, whatever the "elements" of your graph are). Actually, you would in most cases have some key that identifies the node (like user_id), and some other data associated with the node (like date of birth, locale, etc). So the above approach would force some kind of notation like:

    ("user1", (1970, USA)) <-> ("user2", (1980, UK))  
    ("user1", (1970, USA)) <-> ("user5", (1990, RU))  
    ...
So now the notation would require not only repetition of the node name, but also the data itself. This would be even more of a problem if the data was not a literal but was computed, like: ("user2", (getDateOfBirth("user2"), getLocale("user2")))

So then the way around this would be to use the programming language's own variable/identifier system, like

    var user1 = ("user1", (1970, USA))  
    var user2 = ("user2", (1980, UK))  
    var user5 = ("user5", (1990, RU))
    
    Graph{  
      user1 <-> user2,  
      user1 <-> user5,  
      ...  
    }
or perhaps some other node registration system, and then just repeat the labels

    Graph{  
      Nodes {  
        ("user1", (1970, USA)),  
        ("user2", (1980, UK)),  
        ("user5", (1990, RU)),  
        ...  
      }  
      Edges {  
        "user1" <-> "user2",  
        "user1" <-> "user5",  
        ...  
      }  
    }
So now we've avoided repeating data, but we still have to repeat nodes. In some situations, a different way of expressing a graph would also be preferable to adjacency lists (you might want an adjacency matrix, but in sparse graphs the list is more efficient).

But by the way, for your graph system to be generally useful, you'll have to come up with a syntax and flexibility of design that ergonomically allows graphs that vary in:

- directedness: support for undirected and directed edges

- edge weights and data: edges could be identical, or they could have weights or even keys and other attributes associated with them

- graph/multigraph: can there be multiple edges between the same 2 nodes?

- in statically typed languages, the data type of the "key" of the nodes, the "value" of the nodes, and the "key"/"value" of the edges all have to be represented to some degree in the language's type system, so at least 4 generic parameters in a graph

So, it seems that there is just a huge design space and a variety of different use cases, which makes coming up with a simple solution to the general problem very difficult.
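For what it's worth, library-level graph APIs mostly punt on the literal-syntax question and make those choices explicit instead. Roughly how networkx (a Python library, not a language feature) covers the design space, with class choice for directedness/multigraph and attribute dicts for node/edge data:

    import networkx as nx

    G = nx.Graph()        # nx.DiGraph for directed, nx.MultiGraph for parallel edges
    G.add_node("user1", born=1970, locale="USA")
    G.add_node("user2", born=1980, locale="UK")
    G.add_edge("user1", "user2", weight=0.5)

    print(G.nodes["user1"]["locale"])             # USA
    print(G.edges["user1", "user2"]["weight"])    # 0.5

Which rather supports the point: the hard part is the design space, not the notation.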


I'm a little late to the party, but humbly submit my thoughts/design from a while back for your review/feedback: https://github.com/vinodkd/halo/blob/master/doc/UserGuide.md...

Edit: I've posted this same comment in multiple places in this conversation to get feedback from specific commenters, not as spam (it's an old personal project)


Basically the man wants two opposite things.

He wants contracts, which are basically dependent types. These rules live in types, already exist in Agda, Idris, and Coq, and come with a range of tradeoffs.

Essentially these languages can enforce static checks and "contracts" so powerful you don't need unit tests. These "contracts" cover more ground and are safer than tests. The tradeoff is you need a big brain and lots of time to write these things.

Then he wants a language that is truly dynamic. Which is like the opposite.


Actually he wants 6 different things. Some of those six things might live in the same language, some of them wouldn't, but wanting six different things is in no way problematic. Different tools/toys for different jobs/hobbies.


No, he wants two things. Those 6 different things have isomorphisms and can be reduced down to two things.

Think of it like 1*2 && 2: both expressions are isomorphic. Dependent types literally encompass everything he wants.


He doesn't want the building blocks though. I could build those 6 things in any number of languages. What he wants are languages that explore making them ergonomic at the language level. That's like saying everything is doable in assembly. Which, I mean, sure, but that doesn't mean we don't want more ergonomic abstractions of those things.


> That's like saying everything is doable in assembly

Dependent types are not doable in assembly though. Assembly has no notion of types. The claim that all languages are equivalent because they're Turing complete has absolutely no relevance here. While true in terms of capability, the syntax and semantics of languages do make them different. One can add a contract library to Idris. You cannot do that with assembly code, because a contract library for assembly is nonsensical: assembly's compilation model has no notion of type checking.


A dependently typed language is executed on a computer, therefore it is expressible in assembly code. Like sentences are expressed with words which are expressed with, say, letters. Letters are not sentences, but sentences do guide the expression of letters and vice versa. And sentences can indeed be made out of letters.


Neural networks are expressed and represented by mathematical symbols, yet those mathematical symbols alone cannot make a prediction or do anything useful.

No one is saying assembly language cannot be used to write a dependently typed language system, with a type checker and runtime (which are one and the same in a DT language anyway). However, that does not make assembly language dependently typed.

You are committing the logical error of assuming that because a process produces a result, the inputs to the process are equivalent to its outputs. Just because I could take grain and make it into bread doesn't mean grain is equivalent to bread.


When talking about these things there is a meta level we are targeting that you're not understanding. Types exist at a meta level and are part of the language.

Assembly language does not have types. What you can do is create a new language with assembly and have that language have types. The reason why it has to be done this way is because these types of checks CANNOT happen at runtime. It's a pre-runtime check that prevents the code from even running if it's not correct.

Think about it. Can you add types to assembly language without creating a new language? No.


>What he want's are languages that explore making them ergonomic at the language level.

This is exactly what dependently typed languages like coq and idris are attempting to do. What he wants ALREADY exists and these things are trying to fulfill EXACTLY his intent. I am not attempting to talk about some obscure isomorphism.

You also cannot build all 6 things in any number of languages unless you are using those languages to build new languages. These checks exist at the type level, not at runtime but at compile time.


I think what you are missing is that the author of the article does not consider the affordances of those languages as sufficient exploration of the space. Just because they can express these things and are meant to fulfill his intent doesn't mean they succeed in the ways he wants.


Importantly, he doesn't want these conflicting things in one language.



