I remember reading this essay when it first came out. To try and reword it using modern terms: The author wishes that programming languages had database persistence capabilities as 1st-class built-in syntax instead of cumbersome bolted-on API functions.
Examples where database syntax (i.e. SQL syntax) is 1st-class, without the noisy syntax of function calls, command strings in quotes, etc.:
- business languages like COBOL
- programming languages in ERP systems like SAP ABAP, Oracle Financials
- stored-procedure languages inside the RDBMS engine, such as T-SQL in MS SQL Server, PL/SQL in Oracle, and stored procedures in MySQL
In the above, the "database" is the world the programming language is working in.
The more general-purpose programming languages like C++, Java, JavaScript, Python omit db manipulation as a core language feature. This 2nd-class status requires 3rd-party libs, which means extra ceremony syntax: #includes, imports, function calls with parentheses, etc. Some try to reduce the syntax friction with ORMs. In contrast, in something like SAP ABAP the so-called "ORM" is already built in, processing data tables without any friction.
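To make the friction concrete, here is roughly what a trivial query costs in Python with the stdlib sqlite3 module (a minimal sketch; the `parts` table is made up), versus the single inline SELECT that ABAP or PL/SQL would let you write:

    # Hypothetical example: the ceremony a bolted-on db API requires for
    # what ABAP or PL/SQL would express as one inline SELECT.
    import sqlite3                                   # explicit import

    conn = sqlite3.connect("app.db")                 # connection object
    cur = conn.cursor()                              # cursor object
    cur.execute("SELECT name, qty FROM parts WHERE qty > ?", (10,))  # SQL in a string
    for name, qty in cur.fetchall():                 # manual tuple unpacking
        print(name, qty)
    conn.close()                                     # explicit cleanup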
The author works a lot on CRUD apps so a language that has inherent db syntax would enable "Table-Oriented-Programming".
But we can also twist the author's thesis around. A programmer coding in SAP ABAP or a MySQL stored procedure wonders why raw memory access (changing contiguous blocks of RAM through pointers) is not easy to code in those languages. So an essay is written about the advantages of "pointer-oriented-programming", because direct memory writes are really convenient for frame buffers in video games, etc.
In any case, I don't see any trend where a general-purpose programming language will include DB SQL as 1st-class. Even the recent languages like Rust and Zig don't have basic SQLite3 db persistence as convenient built-in syntax. If anyone proposed to add such db syntax, they would most likely reject it.
I think you are missing the big twist. It's not just tables as 1st class citizens, but allowing logic to be driven by the tables.
Instead of config files, you update the table. Changes to the processing flow? Update the table, including dates for when the new rules apply. The tables held code which drove the processing, along with tables holding data.
It's not just orm or persistence, and not just programming in the database as stored procedures. It was an odd melange of all of this.
I ran into this in the 90s, and it was great for RAD. But it felt odd to have to code into tables, and each tool was proprietary, such that moving off of the table system was a full rebuild. They usually allowed migration to new database systems to scale, but that was all they had.
I don't expect to see a language like this come around again anytime soon, but the ideas were really interesting in a world before git-ops and yaml configs.
>, but allowing logic to be driven by the tables.
> Instead of config files, you update the table. Changes to the processing flow? Update the table, [...]
I didn't miss that angle; I think it's actually a minor part of his thesis. If you look at the entire essay, the vast majority of his bullet points and supporting examples are about the ergonomics of builtin syntax to manipulate tables. If his ideal _language_ (aka the syntax) did that, it would naturally support table-oriented-programming (aka the philosophy). He starts the essay with a critique of OOP-the-syntax.
But to your point about config and code itself being persisted in the database, the SAP ABAP environment already works like that. SAP has over 10,000 db tables for configuration -- instead of YAML or JSON files. Change the values in the config tables to alter behavior instead of modifying IF/THEN/ENDIF statements in code. And when ABAP programmers hit "save", the code gets saved to a database table instead of a text file. So if one squints a certain way, the SAP system is a giant million-line stored procedure in the database.
> In any case, I don't see any trend where a general-purpose programming language will include DB SQL as 1st-class. Even the recent languages like Rust and Zig don't have basic SQLite3 db persistence as convenient built-in syntax.
There is a trend, but you’ll have to look further off the beaten path than even Zig to find it. Languages like Eve [0] tried to do this circa 2015 in the tradition of Datalog. Code was written in “blocks” that resembled Prolog horn clauses, but which featured set semantics on selected records. Natural joins happened automatically on records using identifiers. The whole language was actually a database!
Eve died [1], but you'll see many projects with the same ethos in communities on the web, such as this one [2].
There aren’t a lot of users of these languages, but this is where a lot of big ideas are percolating right now.
And we can verify it’s a trend because the hallmark of all CS trends, the formation of a conference, has made itself known in this area [3].
> I remember reading this essay when it first came out. To try and reword it using modern terms: The author wishes that programming languages had database persistence capabilities as 1st-class built-in syntax instead of cumbersome bolted-on API functions.
This reminds me of M/MUMPS, used by Epic to power the biggest EHR system by market share in the US.
Perhaps the big difference is that the M "database" is key-value structured. True tables are flat and do not distinguish part of the tuple as the "key" and part as the "value".
I wonder if this is the source of the oft-discussed "mismatch" between the programmer's model of data and the relational model of data. Programmers like to assign values to things, while relational DBs like to do CRUD operations on records. (This is sometimes called the "object-relational impedance mismatch" but I've always found this term silly - needlessly jargon-laden and scoped overly narrowly to the OO paradigm.)
There's clearly some kind of "isomorphism" or translation between the two models, but they're not quite the same.
Is this what ORMs are about? Translating between the programmer model of data and the relational DB model?
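If so, a toy sketch of that translation (Python, hypothetical schema) would be: the ORM's job is turning attribute assignment in the object world into an UPDATE in the relational world.

    # Toy sketch of the translation an ORM performs (hypothetical schema).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    class User:
        def __init__(self, row_id, name):
            self.id, self.name = row_id, name

    user = User(1, "alice")
    user.name = "bob"   # programmer model: assign a value to a thing
    # relational model: a CRUD operation on a record
    conn.execute("UPDATE users SET name = ? WHERE id = ?", (user.name, user.id))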
> Such tools existed and were popular in 1990s: DBase, Clipper, FoxPro.
Also Crystal Reports, Paradox...
> They worked pretty well in their domain: data entry and report generation, with lightweight transaction processing and general computation.
Having bought Ashton-Tate, Borland got DBase and Interbase in addition to Paradox, and built data access components into the VCL class library (in effect "almost-first-class citizens" of the language), which IMO made Delphi the natural and superior successor to those languages: not just "lightweight", but fully advanced (i.e., ~C++-level) general computation. (And with transaction processing built into the RDBMS connection components.)
> Then happened the internet and client-server architectures, and these do not map as neatly onto local, single-user, single-transaction tables.
Weeelll... Seen the spate of recent posts on here about how SQLite is good enough for pretty much anything? :-) And arguably, that's where Delphi was at too, over twenty years ago: AFAICR, there was a "Fishbase" (facts about tropical fish) demo included with Delphi, which in one variant could be built as a standalone Web service / server.
Also, AFAICS, that's where Free Pascal / Lazarus is at now, only using SQLite / Firebird / MySQL / PostgreSQL (and lots of other DBMSes) instead of DBase / Clipper / FoxPro / Crystal Reports / Paradox. (I've been planning to look into that a bit closer myself, but haven't got around to it. Procrastinating away too much of my time on Hacker News, I suppose. :-( )
One of my favorite programs ever was built on DBase in the mid-90s.
Installing or moving the entire application and database to a new PC was as easy as "drag folder onto flash drive" -> "drag folder off of flash drive".
Instant loading, tiny file sizes, but it became too complicated for users when 64-bit Windows pushed 16-bit usage into VMs.
I have a ton of games, bought or downloaded from a variety of places, some even for different systems and some more than once (e.g. Steam or GOG or sometimes some Humble Bundle bundle giving a game i already had) and in some cases a different version. Most of them are on my external HDD.
One thing i want to do is to make a database of all of them since they're easily more than a thousand and i want it on my PC. I want to be able to have a title, description, tags, screenshots, overall category, target platform info, links to the setup/archive files, the versions i have, extras like wallpapers, ringtones, music or even perhaps any box pictures or ads if they are available (some sites like Zoom Platform do provide those with the games you buy), reviews, patches (official and unofficial), links to folders with any mods i might have, etc.
This sounds like something that back in the day would be perfect for dBase or FoxPro, except for the part where i want it to be graphical (remember: i want screenshots, wallpapers, etc). But if those were made with a GUI in mind, i'd expect them to be perfect. So basically, i think Access would be it. But AFAIK Access is now dying; last time i checked, the UI seemed like a massive downgrade compared to what i remember from when i first saw it back in Access 97 days (look, some stuff really works better with nested windows / MDI applications - having a two-field form or 5-field table take up the entire available application space when 90% of it is empty space makes no sense when you could have a bunch of windows with those things visible at the same time). And it is limited to Windows, and TBH it looks like a massive program anyway.
Some alternatives mentioned are too much "programming" and not enough "database-ing", IMO. Ideally i should only need to bother with code to cover any missing functionality not provided by the GUI, but most of the DB and UI design (for the forms, etc.), and ideally most simple behavioral stuff, should be done from the GUI without writing any code.
But i don't think there is anything really like that and the modern computing world feels way too "incompatible" with what i have in mind - e.g. whenever i mention this to some friends of mine they start thinking in terms of wiring together stuff like SQLite (or even worse, PostgreSQL), Python/PHP/whatever, some web-based framework, etc and other "mass-of-unrelated-software-held-together-by-duct-tape" solutions, when what i really want is a completely self contained program with no external dependencies (aside from the basic stuff for showing a GUI, etc, i mean not requiring stuff like setting up a PostgreSQL DB), no servers, etc, just a binary/EXE, saving the DB somewhere on the filesystem (ideally in a single file like Access so it is easy to copy/backup/pass around), being able to interact with the rest of the OS (remember that bit about keeping track of setup programs / archives /etc ? I'd like being able to run those directly from the DB GUI), etc.
Well, one of the 28378423 things i want to work on at some point in the future. Hopefully those life extension studies that are posted on HN now and then will eventually move on from rats :-P
I'd be curious to see what other options exist, but the one that I know is FileMaker Pro. I'm only familiar with the standalone version, and while expensive, to me it sounds exactly like what you're talking about wanting.
I've never used it (and the fact that you need to fill out a form to download a demo means that chances are i'll never use it :-P), but judging from the Wikipedia page it looks like it might have been close at some point; somewhere around the late 90s/early 2000s it succumbed to the need to justify its price point and got too bloated and enterprise-y (i mean, apparently even in 1997 it would act as an FTP server...? :-P).
What i have in mind is more like something between dBase and VB (VB3 at most), but with a fully integrated database that understands graphics (i.e. in addition to data types like text, number, etc it also has graphics), a more GUI-driven workflow for basic stuff (e.g. instead of placing a button in a form and double-clicking it to enter a code editor where i type something like "form2.show" to show a second form, i have a dialog appear that provides some common tasks like "show a dialog, add/revert/delete/etc record, open external file, etc", with "runs script" being an option for when the others won't do), being able to either infer table structure from form fields or automatically create forms from tables, etc.
Also completely self contained, no need to install separate "database drivers", or run any sort of server (even locally), just unzip some archive somewhere and run the program from there.
Another relevant software category is the statistical analysis languages, including SAS, Stata and SPSS.
Old-school SAS included only two data types (floats and character strings), but allowed for SQL and sequential data-steps to live together. Persistence was baked in. The floats could be used to represent dates, datetimes and other formats. I particularly appreciated being able to use macros to define a data-step view to split the follow-up for an individual from a table. Such a view could then be collapsed using SQL. More recently, R tools such as dplyr have brought together data-frames and relational operations. However, I miss the sequential coding in SAS, using macros as higher-level tools to define the logic, including corner cases.
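For anyone who hasn't used those tools, a rough pandas analogue of that "data-frames plus relational operations" style (column names invented) looks like:

    # Rough, hypothetical analogue: join per-visit follow-up to subjects,
    # then collapse per subject, as a data-step view plus SQL would.
    import pandas as pd

    visits = pd.DataFrame({"id": [1, 1, 2], "days": [10, 5, 7]})
    people = pd.DataFrame({"id": [1, 2], "group": ["a", "b"]})

    followup = (visits.merge(people, on="id")              # relational join
                      .groupby(["id", "group"], as_index=False)["days"]
                      .sum())                              # collapse, like GROUP BY
    print(followup)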
For strictly typed records, I have always wanted to spend more time with SML# [0], which allows for record updating, with close ties to SQL -- an under-appreciated version of SML.
> In any case, I don't see any trend where a general-purpose programming language will include DB SQL as 1st-class.
Which is a pity, I'd say. The new languages you mention would benefit most from such a feature. I believe they won't do it not because it's not useful, but because it's difficult to get right (efficient, safe, natural, scalable) and then maintain forever.
There are many benefits to having this functionality work out of the box (and a few disadvantages, obviously). Many (if not most) apps are just CRUD apps with some added functionality. But a standard way of connecting the language to a database is still missing. The great success of ActiveRecord back in the day shows that this is something many developers would benefit from (it was/is good, but still not ideal). And I don't believe patching the situation with a multitude of incompatible ORMs solves anything.
Rust has rather sophisticated macros, which let you do stuff like this outside the core language implementation, which is IMO very much where such things belong.
> In any case, I don't see any trend where a general-purpose programming language will include DB SQL as 1st-class. Even the recent languages like Rust and Zig don't have basic SQLite3 db persistence as convenient built-in syntax. If anyone proposed to add such db syntax, they would most likely reject it.
What poor old BottomFeeder missed was that with a good object library / framework, you can get so close that it almost doesn't matter. I tried to convince him to even try Delphi, with its marvellous TDataSet descendants in the VCL... But AFAIK he never even downloaded the free version I pointed him to, whatever it may have been called back then.
The K language of kdb also counts, but being proprietary and having a fairly impenetrable, alien syntax hasn't helped it branch out of the niche where it is very successful.
I have to use it at work, helping quants try to actually maintain production code rather than just vomit horrible one-time scripts they patch together in an endless stream of layers on top of layers.
This thing should be banned. Everything is one letter, it's impossible to Google, it goes against every imaginable convention, and I have yet to see someone who can read his own stuff 2 weeks later. Not to mention the free query tools force-expire every 3 months (QPad, grrr), and as you said it's so closed they have to take webdevs like me just to help them maintain it all eventually.
It's risky for the bank (and we're not a small one :s), it's expensive for the programmer, and it's misery for the quants (but they all feel like geniuses spending weeks on simple stuff in kdb, so their misery is only visible to the people watching them waste their brains like that).
> but they all feel like geniuses spending weeks on simple stuff on kdb
They are not experienced in the environment then? It is not that hard. Sure it is spartan as the tooling is not what you expect in 2021, but this seems a bit over the top?
> I have yet to see someone who can read his own stuff 2 weeks later
http://nsl.com/ can, years later even (as can I but he has an impressive portfolio which makes it obvious).
On the things it does well: being a memory-mapped column store, it hardly had any competition a few years ago (ClickHouse might be getting there these days, perhaps).
And if you're fluent in K, one-off scripts and queries are significantly shorter and easier to get right.
But it is not good as a general purpose language in a commercial setting - not because of the language itself which is fine if a bit spartan - but because it is hard to find people who are willing and able to work with it.
Much like its predecessor APL, it’s a tool for thought more than a tool for implementation.
I think many people would work in it, but the job market for it is limited and driven by who knows who, as far as I've noticed. I hope Shakti will make things better: so far it's nice and its price point is definitely a lot better. Not sure where it will go.
If you use k a lot, you start to automatically recognize idioms/stanzas; it just becomes automatic to recognize a group of chars. And then it becomes readable. The throwaway nature, in my experience and opinion, comes not from it being unreadable (as many people who never used languages like this seem to think), but rather from the fact that the code you are changing is so short that thinking it up and typing it in (by combining idioms) is faster than placing your cursor and actually editing existing code. Also, you are in a repl or an editor that can run code at the cursor, so interactive development makes that case even stronger: before you know it, you have produced a new version of a function without ever looking at the old one.
Yeah, but there's low value in throwaway, and it's not succinct, it's 1-letter. ONE, lol, for every imaginable language keyword.
For instance, to parse a binary encoded dictionary from a table column: -9!'columnName (I remember it because it's the first time I spent a day on something so useless yet so indispensable yet so fucked up). I challenge you to Google it.
> Examples where database syntax (i.e. SQL syntax) is 1st-class without noisy syntax of function calls, without command strings in quotes
There are more examples which I think qualify but don't quite fit into your categories:
* The E language runs all code in "Vats", each of which is a single threaded compartment with transparent persistence.
* Taking inspiration from E, the Waterken server did this for Java, but required annotating mutable fields in a certain way so the persistence layer could track them.
With some kinda dynamic naming system, you could probably load SQLite table schemas at runtime and provide bindings to columns automatically. Or maybe that's best done once at the compile step. Either is possible with racket :-)
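Sketching that in Python rather than Racket (the `games` table is made up), the runtime version is surprisingly small:

    # Read the schema at runtime and generate column bindings from it.
    import sqlite3
    from collections import namedtuple

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE games (title TEXT, platform TEXT, year INTEGER)")
    conn.execute("INSERT INTO games VALUES ('Doom', 'DOS', 1993)")

    cols = [row[1] for row in conn.execute("PRAGMA table_info(games)")]
    Game = namedtuple("Game", cols)      # record type generated from the schema
    for g in (Game(*row) for row in conn.execute("SELECT * FROM games")):
        print(g.title, g.year)           # columns become attributes, no string keys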
Although it's not a relational database, MUMPS is another example of a language where there is nothing special whatsoever about manipulating the database compared to manipulating the same data but stored in a variable in RAM.
Pascal's records are basically like C's structs: you can use them to make a database (and many did), but there isn't any real support from the language, though some implementations may have had their own extensions, since the original Wirth Pascal had no support for files at all and the standard had only very limited support.
Not as any kind of standard, or even a popular extension.
Microsoft BASIC implementations for DOS, however, had ISAM in some of the editions (PDS and VB for DOS), which was similar: you'd declare a struct type, and then open a file with that type as the record type.
I wrote CRUD apps that did this in Turbo Pascal back in the days of MS-DOS. I copied the idea of screen fields that I had seen in DBASE II and made a set of field editors for each record type; I could put together a database pretty darned quick back in the day.
I actually work in a table-oriented language, Harbour, a child of Clipper/xBase mentioned in the article. There are a few issues I've found with a table-oriented architecture:
1. Managing state is a bit of a nightmare. Harbour is based on DBF databases, which are essentially flat files of a 2D array, and it keeps your record number within any given db. You can then query a field with the arrow operator (table->field), but you have no guarantee that a subfunction is not changing state.
2. DBMS lock-in. Because you're operating in a totally different paradigm, moving DBs is actually rather challenging. Harbour has a really nice system of replaceable database drivers (RDDs), but when your code is all written assuming movement in a flat file, switching to a SQL-based system is challenging. I'm currently in the process of writing an RDD to switch us to Postgres, but translating the logic of holding state to the paradigm of gathering data then operating on it, in an established code base, is quite a challenge.
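For anyone who hasn't used an xBase dialect, here is a conceptual sketch of the state problem in point 1 (Python standing in for Harbour; this is not Harbour code):

    # A shared "record pointer" that any subfunction may silently move.
    table = [{"price": 10}, {"price": 99}]
    recno = 0                         # global cursor, like a DBF work area

    def lookup_discount():
        global recno
        recno = 1                     # a "seek" buried inside a helper

    def total_price():
        lookup_discount()             # caller has no idea the cursor moved
        return table[recno]["price"]  # now reads the wrong record

    print(total_price())              # 99, not the 10 the caller expected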
For people like me, that worked in FoxPro, this is the dream.
Despite the claim that this kind of tool is for "basic CRUD", they could do much more, much better, precisely because they can deal MUCH better with the most challenging kind of programming:
CRUD apps.
Making apps for finance, ERPs, business, etc. is far more complex and challenging than building chat apps, where the scope is clearer and the features fewer.
"Simple" crud apps NEVER stay simple.
NEVER.
If you allow it, in no time you are building a mix of your own RDBMS, programming language, API orchestration, authorization framework, inference engines, hardware interfaces and more...
then it must run on "Windows, Linux, Mac, Android, iOS, Web, Raspberry, that computer that is only known here in this industry", "please?"... and it will also chase all the fads, all the time.
The request/features pipeline never ends. The info about what to do is sketchy at best.
The turnaround to bring results is measured in HOURS/DAYS.
So, no.
No language without this is, in fact, good for this niche.
This is pretty cool. I've had thoughts (or dreams, more accurately :) of a language like this every time I get a runtime type error in q. I gotta say, I prefer q's syntax, though :)
q with static typing and a sensible pricing model would be amazing.
I do think that q's main strength is not its speed, but the fact that qSQL statements are a first class citizen in the language - no network hops, no awkward marshalling and unmarshalling of data, no awkward mismatch around how to use nulls, nans, tz-aware timestamps etc.
I started Empirical with the goal of "q like Haskell". The end result went in a radically different direction, but the guiding light has always been to have a statically typed language where tables and queries are a first-class operation.
The source code is publicly available under AGPL with the Commons Clause:
The hardest thing is the load() function, particularly in the REPL. It looks dynamic, but is actually static. Pulling off this sleight-of-hand requires both type providers and automatic compile-time function evaluation on arbitrary expressions.
F# is the only other language I know of that has type providers. They invented the concept.
As for CTFE, languages like Zig and D require the user to indicate when to evaluate something ahead of time. I wanted this to happen automatically and still be available for compound expressions, user-defined functions, user-defined types, etc. Doing that requires tracking purity (no state or IO) in an expression, plus a mechanism to actually do the evaluation. I've never seen a language take it to the extreme that Empirical does.
So an existing statically typed language would need (1) a REPL interface, (2) purity tracking, (3) compile-time function evaluation, (4) some kind of types-as-parameters setup, and (5) array notation. Most existing statically typed languages don't have a REPL; the ones that do generally lack array notation. I couldn't find a language that did all of that plus type providers and automated CTFE on arbitrary expressions.
I've written something similar in Julia; you can see the record type used in https://www.juliapackages.com/p/namedtuples. The full library, which is not open source, uses this type for time series analysis. It's all type safe and allowed expressions such as x = vwap( ts, 5) - l1( vwap( ts, 5)) through to a time-moving PCA. Julia makes writing this sort of thing short and quick. The total impl was only a thousand lines or so of code.
I checked your website; do you have an example of how to load data from a file into NamedTuples? Specifically, can NamedTuples infer type from an external source?
Also, do you have an example of what a displayed table looks like? Julia has a DataFrames package that can display a table. I am curious to know how your time-series library displays a table.
Unfortunately, I don't have access to that code anymore. I wrote a number of loaders for different data set types, including CSV. The time series were all modeled as forward-iterating streams of tuples, so there is no specific table abstraction. There is an implicit assumption that the stream is ordered by the join key, in a time series this being the timestamp, though nothing in the implementation enforced that.
Joins are always n-way merge joins, so you can write something like y = 2x^2 - 3z + c and fold that into a single streaming operation y = f( x, z, c ) where y, x, z and c are time streams.
When rendered to screen they looked very similar to your examples. With plugins in the IDE you could directly plot an array of time series as a chart.
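If it helps to see the shape of it, here is a minimal Python sketch of that streaming fold (stream contents are made up; like the original, it assumes the streams are ordered by timestamp and that the keys line up):

    # Fold y = f(x, z) into one pass over timestamp-ordered streams.
    x = [(1, 2.0), (2, 3.0)]          # (timestamp, value)
    z = [(1, 1.0), (2, 5.0)]

    def merge_join(*streams):
        # assumes every stream is ordered by, and aligned on, the timestamp
        for rows in zip(*streams):
            ts = rows[0][0]
            yield (ts,) + tuple(v for _, v in rows)

    for ts, xv, zv in merge_join(x, z):
        print(ts, 2 * xv**2 - 3 * zv)  # y computed without materializing a table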
I don't think I get it. I do a lot of pandas in a bank, so I recognize your dataframes for what they are, but what advantage do you have over python+pandas?
I hate Python (I'm a Java dev helping quants), but it's that or KDB, and I think I could murder the creator of KDB :D And I have to admit Pandas is intuitive and Python is easy enough to extend, so what are you doing that's so important you made a language for it?
Empirical is statically typed. Python and q/kdb+ are dynamically typed.
I spent years using those products in finance. I would set up a simulation that would crash after four hours because of a misspelled column name. Empirical prevents that by refusing to run a script that has a type error or unresolved identifier. No more crashed overnight sims!
You should say this under the question of how it's different than Julia.
It's not enough to say it's statically typed, since not everyone is convinced of the benefits based on the context they're coming from.
I just saw a talk by Rich Hickey about Clojure, and he eschews static typing, since he thinks of it as a coupling in a language. And based on the types of programs he writes and runs, he hasn't seen a benefit.
So when you're specific about what static typing buys you in the context of the job Empirical does for you, I think it's more convincing.
I can answer for the type-stable Julia case: if you have a struct in Julia that is composed only of primitive types, it is stored as a C struct with zero overhead and fixed byte length. An array of these is then crazy efficient when it comes to streaming into the CPU, etc. If you dig around the GPU support in Julia you can see this used to good effect.
I'm of the opinion that tables would make a lot of sense as first-class citizens for shell environments. Lots of data typically handled in shells is inherently tabular in nature (for example the outputs of ls and ps, etc.), and some of the common tools are also intended for tables (awk in the forefront, but also cut and sort, for example). But in practice a lot of it is currently very ad hoc and handles any sort of edge case poorly.
osquery already demonstrates that a lot of info can be structured into tables, but what I feel is missing is a more convenient, shell-like language environment to work with such data.
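As a tiny illustration of the gap, structuring ps output by hand in Python takes real work and is still fragile (this sketch breaks on command names containing spaces, for instance):

    # Turn `ps` output into an actual table instead of ad-hoc text (Unix-ish).
    import subprocess
    from collections import namedtuple

    out = subprocess.run(["ps", "-eo", "pid,comm,rss"],
                         capture_output=True, text=True).stdout
    header, *rows = out.strip().splitlines()
    Proc = namedtuple("Proc", header.lower().split())
    procs = [Proc(*r.split(None, 2)) for r in rows]

    for p in sorted(procs, key=lambda p: int(p.rss), reverse=True)[:5]:
        print(p.pid, p.command, p.rss)   # top 5 by resident memory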
I think Microsoft Powershell [0] sort of approaches what you’re describing. It’s not exactly table-oriented, but object-oriented such that there’s a lot more structure to data than in traditional command line environments. For example, their equivalent of ls returns an array of objects (i.e. rows) which you can filter, sort, etc. based on the properties of those objects.
> The proliferation of field types has made data more difficult to transfer or share data between different applications and generates confusion. ITOP has only two fundamental data types: numeric and character, and perhaps a byte type for conversion purposes. (I have been kicking around ideas for having only one type.) The pre- and post-validators give any special handling needed by the field. A format string can be provided for various items like dates ("99/99/99"), Social-Security-Numbers ("999-99-9999"), and so forth. (Input formats are not shown in our sample DD.)
> Types like dates and SSN's can be internally represented (stored) just fine with characters or possibly integers. For example, December 31, 1998 could be represented as "19981231". This provides a natural sort order.
This is very nineties and I must disagree. The datetime-as-string example shows it most clearly: wanting to sort by full date is only one thing you want to do with calendar data; often you will want to compare, say, things that happened on Mondays vs things that happened over the weekend, or things that happened within so-and-so many hours around a given point in time, and so on, not to mention the complexities of DST and timezones. You can do all that with text-based strings, but you'd have to write quite a bit of logic that gets applied to strings over and over again, or else store the results of parsing a date string into separate fields. Dates expressed as text also don't allow you to validate "19990229" or "20020631" in a very straightforward manner.
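To make the validation point concrete (Python here, but any real date type behaves this way):

    # A string happily stores an impossible date; a date type refuses it.
    from datetime import datetime

    d = "19990229"                    # stores fine, sorts fine, and is wrong
    try:
        datetime.strptime(d, "%Y%m%d")
    except ValueError as e:
        print(e)                      # day is out of range for month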
I think our collective and by now decades-old experience with duck/weakly-typed languages like Python, JavaScript, Ruby and so on clearly shows that what you gain in simplicity you lose in terms of assured correctness.
The way to deal with dates is not by having separate fields. It's by having a single value represent the time (in Linux it's time_t). Every other format gets translated to time_t, all processing is done with time_t, and then the time_t gets translated to the desired output format.
Any other scheme is doomed to working 99% of the time, and that last 1% will be impossible to fix.
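A minimal sketch of that discipline (Python, with invented formats and values): parse everything to an epoch timestamp at the boundary, compute there, format only on output.

    # Normalize every input format to epoch seconds; do all arithmetic there.
    from datetime import datetime, timezone

    def to_epoch(s, fmt):
        return datetime.strptime(s, fmt).replace(tzinfo=timezone.utc).timestamp()

    t1 = to_epoch("19981231", "%Y%m%d")
    t2 = to_epoch("1999-01-02 12:00", "%Y-%m-%d %H:%M")
    print((t2 - t1) / 3600)   # hours between them, trivially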
> It's by having a single value represent the time
I have limited experience dealing with human originated time references, but from my encounters, the various idiosyncratic forms of date storage often seem to arise out of an aversion to commit to well defined intervals of uncertainty / margins of error. Coercing people of limited mental bandwidth or interest beyond immediate gratification to go through the pain of constricting their mental models to time_t levels of precision seems to basically be a non-starter.
This is of course exactly what I've been meaning to say here. We want more specific datatypes (e.g. a true date(time) ADT) with better functionality (say interval computation) and constraint checking (AKA domains, such as 'let n be an even, positive integer gt zero').
This idea of "the database should be invisible!" abstraction was widely pursued back in the 90's when people were still obsessed with "the network should be invisible!" and Remote Procedure Calls (RPC). A lot of ORM's still reflect this obsession, and some programmers still get angry that they should have to deal with "this low-level SQL nonsense!"
Attempts to make I/O invisible failed and failed and failed again, and continue failing and failing again because it turns out that I/O is incredibly fundamental and not something you can just wave off as "low-level details". A networked database is a massive abstraction in its own right, and if invisible I/O is a doomed abstraction, forget invisible databases. Well, first go fail a few more times, then forget it, because we're not quite there yet on this one, are we...
The bigger the abstraction, the more it leaks. Sometimes you have enough headroom to go further, and sometimes you have to recognize that you've gone way too far.
> Fundamental and Consistent Collection Operations
I recently discovered that the Scala collection library was designed with this exact goal in mind.
The interface of collections is highly consistent across the various types, and you can create custom collections using the same interface with very little custom code.
We do a thing where we project all of the domain state (i.e. for a given user's session/work) into an in-memory database and then execute the business's SQL queries against it in order to determine logical outcomes.
I wouldn't really call it low/no code, since developing effective queries is non-trivial for many cases, but it does make it much more feasible for a non-developer to add incremental value to our product.
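A stripped-down sketch of the pattern (table and rule invented), using an in-memory SQLite as the projection target:

    # Project state into an in-memory db; let stored SQL decide the outcome.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, 120.0, "DE"), (2, 40.0, "US")])

    # the "business rule" lives as data, not as IF/THEN in application code
    rule = "SELECT id FROM orders WHERE total > 100 AND country = 'DE'"
    print([r[0] for r in conn.execute(rule)])   # -> [1]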
My history with ****e *****s goes way back... we must have had more than a decade of pro-vs-contra-OOP flame wars on various Web fora, starting over twenty years ago; but in the last ten or so I haven't heard (directly) from him.
In the meantime, I have softened my stance and can admit that traditional inheritance-based OOP may not be the ultimate panacea, but I doubt he has softened his anti-OOP stance at all. :-)
He is / was present on at least Slashdot and, I noticed, the original (now archived, i.e. read-only) C2 Wiki, and probably a few others I'm forgetting right now, under the names "Tablizer", "TopMind" (or sometimes, IIRC, just "Top").
My wish-list for my ideal (non-system) programming language:
- first-class tables and named tuples as the primary data structure. Includes the full set of relational operations, and transaction support. Optional persistence. Not everything is a table, though. Tables are great, but pragmatism trumps dogmatism.
- structural typing (ties neatly with the above) and support for row polymorphism
- shared-nothing, distributed multiprocessing, except for explicitly shared tables, as transactions allow for safe, controlled mutation of shared tables. Messages are just named tuples, and row polymorphism should allow for protocol evolution. Message queues and streams can be abstracted as one-pass tables.
- Async as in Cilk, not JS. No red/green functions. Multiprocessing can be cheap: just spawn a user thread. The compiler will use whatever compilation strategy is best (cactus stacks, full CPS transform, whatever).
- seamless job management, pipelines, graphs. Ideally this language should be a perfectly fine shell replacement. But with transparent support for running processes on multiple machines. And better error management.
A bit more nebulous and needs more thoughts:
- exceptions, error codes and optional/variant results are all sides of the same coin and can look the same with the right syntactic sugar.
- custom table representation. You can optionally decide how your table should be physically represented in memory or disk. Explicit pointers to speed up joins. Nested representation for naturally hierarchical data. Denormalized
- first-class graphs. Graphs and relational tables are dual, and with the above point it should be possible to represent them efficiently. What operations do we need?
- capabilities. All dependencies are passed to each function, no global data and code. You can tell if your function does IO or allocates by looking at its signature. Subsumes dependency injection. Implicit parameters and other syntactic sugar should make this bearable.
- staged compilation via partial evaluation. This should subsume macros. Variables are a tuple of (value, type), where type is a dictionary of operation-name -> operation-implementation. The first stage is a fully dynamic language, but by fixing the shape of the dictionary you get interfaces/traits/protocols with dynamic dispatch, and by fixing the implementation you get static dispatch. Again, significant sugar is needed to make this workable (a rough sketch follows after this list).
edit:
missed an important element:
- transparent remote code execution: run your code where your data is. Capabilities are pretty much a requirement for security.
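Since the staged-compilation point is abstract, here is my attempt at a toy Python rendering of "a variable is (value, dict-of-ops)" (entirely hypothetical, just to show the dispatch story):

    # Toy sketch: a value carries its operations; dispatch goes through the dict.
    int_ops = {"add": lambda a, b: a + b, "show": str}

    def var(value, ops):
        return (value, ops)            # variable = (value, type-as-dict-of-ops)

    def add(x, y):
        v, ops = x
        return var(ops["add"](v, y[0]), ops)   # fully dynamic dispatch

    a, b = var(1, int_ops), var(2, int_ops)
    s = add(a, b)
    print(s[1]["show"](s[0]))          # -> "3"
    # Fixing the dict's *shape* gives an interface (dynamic dispatch); fixing
    # its *contents* at an earlier stage allows inlining, i.e. static dispatch.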
I'm no longer convinced of the need for row or record polymorphism. It encourages passing around types that have no clear domain or purpose, so I think it inhibits understanding in general. Do you have any examples where it's indispensable?
I don't think it is indispensable; I think it is convenient, and still better than what is done today, where types without clear domain and purpose are already passed around.
At the very least, with row polymorphism a function can declare which subset of the type it actually cares about, instead of taking an unwanted dependency on the whole blob.
In particular, I'm considering the scenario where a large application (or better, a collection of applications) evolves without a central plan and messages tend to grow to accommodate orthogonal requirements (the alternative is splitting the messages, but that has performance, complexity and organizational overhead).
In theory the alternative is message inheritance, but in my experience it has never worked well and it is very hard to retrofit anyway.
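Not a concrete production example, but Python's structural typing gives a feel for the "declare only the fields you care about" idea (names are hypothetical):

    # A loose analogue of row polymorphism via structural typing.
    from dataclasses import dataclass
    from typing import Protocol

    class HasAmount(Protocol):
        amount: float                  # the only field this code depends on

    def total(items: list[HasAmount]) -> float:
        return sum(i.amount for i in items)

    @dataclass
    class Order:                       # a "message" free to grow other fields
        amount: float
        customer: str = ""

    print(total([Order(9.5), Order(3.0, "acme")]))   # -> 12.5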
> At the very least, with row polymorphism a function can declare which subset of the type it actually cares about, instead of taking an unwanted dependency on the whole blob.
This is the argument I no longer find convincing. Do you have an example where this is so much clearer than alternate, simpler ways of doing it?
For instance, in principle you could easily rewrite a function that works on a record with 3 fields to just accept 3 parameters. The only additional "burden" is that the caller has to pass in those 3 fields, where before they could just pass in the record.
The row typed function is also less reusable for that reason.
If you have two fields of compatible type such that you can confuse which one to pass as a parameter, then it's likely you're not making enough domain specific type distinctions that would disambiguate these fields.
If these fields were really compatible domain specific types, then it's more likely you would want to be able to use that function with both fields at some point. Row typing then either hinders this reuse (not good), or requires you to refactor to encapsulate both fields in a new record with compatible fields and pass that in (maybe good?). This is code you wouldn't have to write without row types.
But as I said, I would like a concrete example to discuss if anyone has one. Speaking in abstract like this isn't likely to be convincing either way.
You're taking indispensable too literally. If you have to commonly write 1,000 lines of code without a feature, but the feature permits you to reduce this to 1 line of code, I'd consider that to be pretty indispensable.
Where the indispensable line is, is debatable, hence my request for an example.
This idea (or at the least, nostalgia for xBase) pops up every now and then, and while it certainly isn't describing Prolog, I think the idea would be a lot more interesting if the authors had enough familiarity to compare and contrast.
Oh, that kind of table. I was expecting decision tables.[1]
"Smart contracts" for Etherium should have been decision tables. But no, they had to make it Turing-complete. A good thing about decision tables is that there's a finite and small number of cases, so they can be exhaustively tested. Also, they're readable. That's what you want for contracts. Not Solidity programs, which are expensively insecure.
I remember this page from GeoCities... It opened my eyes to some ugly aspects of OOP. But without proper marketing, and without some luck, a lot of ideas have to be rediscovered again and again.
And maybe the table-oriented programming ideas are too common-sense, and therefore not a good kind of differentiator compared with other smart people...
I recall debating this on Slashdot back in 2002. (I was a Bertrand Meyer OO convert back then). Good memories.
Functions and data are like spacetime and gravity. Beneath the emergent behavior in any software system, they are the things you find lurking underneath.
Just wondering about the meaning of the statements from the article: "Arrays are evil! Arrays are the Goto of the collections world". Anyone know exactly what it means? Is it referring to raw arrays with pointers in C/C++, or to arrays in C++ collections?
I took the meaning as: Array processing can cause hidden bugs (like GOTO). The bugs are usually introduced when the size of the array changes while processing.
Also ‘find’ing and ‘filter’ing arrays is more error prone (like not properly handling a miss on a find).
I don’t completely agree with the author, but the idea that more care needs taken when dealing with arrays is worth considering.
Arrays will always have important uses. Tuples, enumerations, and the like. However, ‘n’ database records shoved into an array and then iterated over in a for-loop is a cumbersome way to structure a program.
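A small Python illustration of the kind of hidden bug the comparison is pointing at:

    # Classic pitfall: the array changes size while the loop is walking it.
    items = [1, 2, 2, 3]
    for x in items:
        if x == 2:
            items.remove(x)   # mutates the list mid-iteration
    print(items)              # [1, 2, 3]: the second 2 was silently skipped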