Do they offer any meaningful differences to an otherwise "mainstream" language - anything beyond syntax, tooling ecosystem and the usual functional vs. procedural vs. OOP paradigms?
Anything I can't just whip up in modern C#, C++, Java or Python if I wanted to constrain myself to a specific subset of features or a specific paradigm?
I get the "it's fun to try something new every once in a while" part but people tend to forget that the distinctions between programming languages have blurred significantly over the last 10+ years.
Yes, there are plenty of meaningful differences. It's true most mainstream languages in industry are essentially syntax swaps of each other, but Prolog and Haskell are not that at all.
Imagine if you could run a Python function backwards: feed something into the return value and have Python tell you the parameters that match. That's the uncommon, twisty difference of Prolog relations. In Python, a test of whether a given side length and area make a valid square:
def test_square(side_length, area):
    return (side_length ** 2) == area
which says imperatively "square the side length and compare it to the area, return the result" becomes a Prolog relation:
:- use_module(library(clpfd)).

square_side_area(X, A) :-
    X #> 0,
    X * X #= A.
which says declaratively "square_side_area holds for X and A if X is positive and X squared equals A". This can be used the same way as the Python code, to ask "do length 5 and area 25 make a valid square, yes/no?", but it can also be queried in other directions: "given length X = 5, can this relation hold?", and it answers yes, with A = 25. Or "given area 25, can it hold?", and it answers yes, when X is 5. Or "are there any solutions at all?":
?- square_side_area(5,25).
true.
?- square_side_area(5,AREA).
AREA = 25.
?- square_side_area(SIDE,25).
SIDE = 5.
?- square_side_area(SIDE, AREA).
SIDE in 1..sup,
SIDE^2#=AREA,
AREA in 1..sup.
Then, unlike the Python, you can combine this with extra conditions from outside, e.g. "(that), and AREA is between 1 and 25", and have it enumerate all the solutions in that range:
?- square_side_area(SIDE, AREA), AREA in 1..25, label([SIDE,AREA]).
SIDE = AREA, AREA = 1 ;
SIDE = 2,
AREA = 4 ;
SIDE = 3,
AREA = 9 ;
SIDE = 4,
AREA = 16 ;
SIDE = 5,
AREA = 25.
If you can spare 30 minutes for a video, this Sudoku Solver in Prolog[1] shows this style of thinking and its consequences quite well. As you watch, consider what you would write in C# to solve sudoku (and how much code), and notice how little there is here in the way of variables, loops, recursion, indexing, control flow, etc. This kind of thing, "state the constraints of a combinatorial problem, and through constraint propagation find the valid solutions", is one of Prolog's strengths.
It isn't /only/ a built-in brute-force search, or a "do everything with recursion" computer science lesson timewaster.
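For a rough sense of the size comparison, here is a plain backtracking sudoku solver in Python. This is my own sketch, not the clpfd approach from the video; it hand-codes exactly the search and constraint checking that the Prolog version gets from constraint propagation:

```python
def ok(grid, r, c, v):
    """Check that value v can go at row r, column c without repeating
    in that row, that column, or the enclosing 3x3 box."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill a 9x9 grid (lists of lists, 0 = blank) in place by
    backtracking search. Returns True if a solution was found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if ok(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next value
                return False  # no value fits this blank: backtrack
    return True  # no blanks left: solved
```

Even this compact version has to spell out the iteration order, the undo step, and every constraint check by hand, which is the contrast the video draws out.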
Well, Prolog for one seems very different. In functional/procedural/OOP languages you have to write a program describing every step of the way. In Prolog you just describe the destination.