Functional core, imperative shell. This seems to be a good approach, especially when you push pretty hard to stuff as much as possible into the functional core.
Writing tests is much easier and requires much less code, and bugs are much easier to track down (since you have less surface area of "what might have modified this object I now hold?").
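Roughly, the shape looks like this (a minimal F# sketch with made-up names):

// Functional core: pure and trivially unit-testable, no mocks needed.
let applyDiscount (threshold: decimal) (rate: decimal) (total: decimal) =
    if total >= threshold then total - total * rate else total

// Imperative shell: all the I/O stays out here at the edge.
let run () =
    let total = System.Decimal.Parse(System.Console.ReadLine())
    printfn "Total to pay: %M" (applyDiscount 100M 0.1M total)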
Since a functional approach also goes well with the single-responsibility principle and composability, you can end up with several times more actual functions. They're all (ideally) conceptually very simple, but the larger number of more specialized functions means you have to work a lot harder on choosing meaningful names, and meaningful names for such specific functions can end up quite long. That's actually quite OK, since it makes things much more readable, but some people have strong negative reactions to seeing 4-6 word function names. At least in this case (Java being the most famous counter-example) each function probably carries real value, since it's not just an artefact of OO layering.
One thing that I have seen in the Julia ecosystem, where multiple dispatch is pervasive, is that function names can get away with being only verbs. Without dispatch, you often see an approximation of this by baking the name of a type into the function name, Hungarian-notation style.
This is most clearly seen with conversion functions between types defined in different libraries. A collection class might have a method like "from_Array".
One of the takeaways is that he prefers the simplicity of C but uses operator overloading for similar reasons as you describe. One example includes addition of vectors, which is certainly a thing you do often in game and graphics related code.
You can find this pattern in a lot of places if you are a little bit flexible with your definition of "functional".
For example, why would we not consider some combination like SQL and PHP to be hinting at this exact same kind of archetype? You've got a crusty outer shell of yucky hackarounds (PHP) that talks to this (ideally) well-normalized & clean data store (SQL). Assuming you "stuff as much as possible" into the core part of the solution, you could swap the outer shell for anything you desire without much headache.
PHP is more like “imperative core, functional shell”. After all, a PHP request is a pure stateless function call. The only way it can modify state anywhere is with IO (eg store something in the database, write something to the session store, etc) and all data you accumulate in your PHP code gets purged after each request.
Throwing away what you know with every request is orthogonal to being functional. You really have to squint and turn your head sideways to think of PHP+SQL as being functional, even though PHP is stateless.
From the outside it's stateful, because well you are interacting with a DB.
From within the PHP codebase, it's imperative and requires discipline and extended knowledge of any API you call to understand whether something mutates.
The only time you can view it as actually stateless and somewhat functional is when you test or debug an integration from idempotent requests to responses without interleaving mutations.
That's a hard disagree. The absolute VAST MAJORITY of SQL complexity is in the select queries (joins, groups, aggregation, window functions, cursors, filtering, procedures...), which are mostly not about mutation.
You’re right about SQL the language, but I think the role of MySQL in a PHP application is much more to manage state than to provide interesting SELECT queries. The popularity of ORM in this world shows that what 95% of applications are looking for is native-ish collections that can be mapped to disk and updated concurrently. PHP’s shared nothing architecture means it must externalize this responsibility, and MySQL/Postgres are the customary choices.
With actual SQL, most people don’t get beyond JOIN, which they mainly need in order to deserialize nested or pointer structures from normal form. I learned aggregation, grouping, and windowing working with Spark and Hive on warehouse replicas of my DBs/topics to troubleshoot and analyze my stuff. Never used one in an actual request handler.
IMO you need to see how SELECT queries are 100% functional to properly understand the power of SQL, which comes from its relational algebra and the closure property of that algebra. The basic relational algebra has since been augmented with a lot of stuff, but it's still functional in the end.
This is my perspective as well. I look at SQL as a defacto functional-relational programming language precisely as advertised in Out of the Tar Pit - Chapter 9.
Mutation is fine in pragmatic functional programming and IMO is not the defining feature what makes it so good for industrial use cases. Functional programming must allow easy immutable programs, and you can do that in SQL by never erasing anything (which will require a bit more bookkeeping for sure).
I don't think it is "all about mutation". Indeed it has create/update/delete, but there are a ton of functional capabilities. So much of what SQL offers is related to data shaping. You can have a huge read-only data store and do amazing things with it.
SQL is a weird beast. Very much on the contrary, the language has a relational core (note that functional is a special case of relational), at least in principle, but it’s been grafted with so many imperative-style hacks over time.
This approach works very well. Side effects are pushed out of core and no mocks are required for end to end tests. It's quite hard to "sell" to fellow devs, but when it clicks, the code is very clean.
Can anyone explain why not "functional core, functional shell"? Is this partly because the shell has to mostly interact with things like UI APIs, file system APIs, network APIs and database APIs that are usually implemented in an imperative way already so you'd make your life harder this way? What if the APIs were written in a functional style?
For example, I find in TypeScript, most libraries and the language itself are not written with immutability in mind so there's constant gotchas. So while I strongly prefer immutable data structures, going against the grain to use immutability everywhere usually isn't worth the frustration. That for me doesn't mean immutability is a bad idea though, it's more that the libraries and languages support is lacking.
> Can anyone explain why not "functional core, functional shell"? Is this partly because the shell has to mostly interact with things like UI APIs, file system APIs, network APIs and database APIs that are usually implemented in an imperative way already so you'd make your life harder this way? What if the APIs were written in a functional style?
Fundamentally the user isn't functional - if you ask them what they want to do they'll say different things at different times. You can have a 100% purely functional programming language that works like a calculator (with no memory) - the user can put in expressions and the language will evaluate them - but generally users want their language to be able to do things that are non-idempotent and non-time-symmetric and so on. You can, and should, push that part to the edge, but it needs to be there somewhere.
What you say is true, but FP has many different ways of dealing with user input, environment, IO, that are no more complex than imperative procedures (arguably the FP equivalent is simpler).
Such as? Haskell-style IO types have no real denotational semantics (they don't even have a notion of equality), they're useful for jamming imperative instructions into an expression model but they don't actually make them any less imperative, and the only other models of user input I've seen are even more obtuse and harder to work with. If something can only be understood operationally, representing it as an imperative procedure is probably the least bad approach IME (and diving into a "the C language is purely functional" style tarpit is far worse; look at SBT for where that leads).
Not an issue with F#. It’s trivial to exit ’purely functional’ domain, write it like ’pythonic C#’ and still end up with a program that is correct out-of-the-box.
1) IMO as a community we still haven’t quite figured out the best abstractions for functional UIs. Various FRP and Elm-like libraries are good, but they have drawbacks.
2) Not all frontends have functional APIs, you may be stuck writing a lot of FFI glue code.
3) Front end code often gets pushed to devs with different experience and skill sets, in my experience they’re probably less likely to have functional or hard CS backgrounds.
4) Harder recruiting if all the devs need to be proficient with FP.
I personally don’t find any of these reasons compelling enough to negate all of the benefits of FP.
Answers saying FP isn’t suited for IO or for interaction with users are rhetorically interesting but contradict my experience.
Your comments suggest a frontend perspective. But on the backend there's a ton of code (including business logic, which theoretically should be the most important bit).
I wouldn't be qualified to argue the merits of FP on frontend, but on the backend it really seems to have value.
Pure functions can't perform side effects, but they can describe them.
For a functional shell, your program's entry point would be composed of pure functions describing effectful actions to take. The necessary computations get bubbled up to the entry point during evaluation, at which point the runtime / compiled program executes them.
(At least, this is the abstracted perspective you should view the purely functional source code from. The final program itself in practice is, of course, effectful throughout).
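A hedged sketch of that idea in F# (the Effect type and interpret function here are made up, not from any particular library):

// Pure core: decides *what* should happen and returns it as data.
type Effect =
    | WriteLine of string
    | SaveOrder of orderId: int

let checkout orderId total : Effect list =
    [ SaveOrder orderId
      WriteLine (sprintf "Order %d charged %M" orderId total) ]

// Imperative shell: the only place the effects are actually performed.
let interpret effect =
    match effect with
    | WriteLine s -> printfn "%s" s
    | SaveOrder id -> () // e.g. write to the database here

checkout 42 99.95M |> List.iter interpret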
It's also fun to note that the IO monad in some perspectives is an abstraction for "functional core, imperative shell" moving as much of the imperative "order matters" computations to an outer "shell". The IO monad in some ways is the way of mathematically representing the "shell" of an application.
The problem with monads is that they're a poor abstraction. Need to output: WriterMonad. Need input: ReaderMonad. Need to store something: StateMonad. Local mutation: MonadST. IO/global mutation: MonadIO. Transactions: MonadSTM. And to combine them: a stack of monad transformers. How delightful!
This is why there's now more focus on algebraic effects as an alternative for keeping a functional core, imperative shell. Monads/transformers are just an ugly solution that even hardcore functional programmers don't want to use.
In .NET, most UI stuff is easy in C# and semi-painful in F# because of limited tooling and examples. I don't see any theoretical reason why a usable functional language could not be used everywhere. The ecosystem maturity and tooling are the limiting factors.
Your functional core also includes an imperative core inside it. Most of your “pure” functions modify the process page table and GC roots. Basically all the runtime is imperative. Finally some of your functions are written in something equivalent to the ST monad to have reasonable performance.
The fact that they've added all of this great stuff and have STILL not added real discriminated unions is a damn travesty. It would just so drastically improve the language.
It's missing the killer feature though, which is the compiler warning you when you haven't done an exhaustive match. That is the magic which makes adding new values easy instead of hunting through code to find if you missed the new case anywhere.
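To make that concrete, a small F# sketch (hypothetical Shape type): add a Triangle case later and the match below immediately gets an "Incomplete pattern matches" warning pointing at every place you need to update.

type Shape =
    | Circle of radius: float
    | Square of side: float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square s -> s * s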
I used to think that.... but actually, in practice, and I've been using them for a long time now, it doesn't make much difference, good tooling will generate the cases and help you find all instances pretty quick (I use Rider).
I've only needed discriminated unions inside C# when I'm using P/Invoke to C DLLs. I would think Roslyn supports some compiler warning for them, but I haven't checked that.
This is the kind of OG way to do it, from even before we had patterns. There are a couple of problems with it, not least of which is that I can't then use one or both of those in a wider union elsewhere, because their definitions are bound to that parent class. Ideally I'd like to see something very similar to the linked OneOf library. That allows you to do both in-line definitions OR subclass from OneOf<T1,T2,T3...> to reify the union as a class as well. If that is done and integrated properly into the pattern matching system, I believe it will yield very powerful expressiveness.
It is kind of a pain, particularly when working across TypeScript projects. OneOf is cool, but it DOES NOT work well with null and thus optional parameters.
Given few people anticipated ValueTuple and C# adding a more direct tuple syntax, I feel like it is only a matter of time before C# adds discriminated unions.
One of the problems here is that C# and F# interop isn't always as easy as the article implies. F# has a lot of types which, when exposed in a public API, are ugly as sin in other .NET languages.
That issue is not F# or C#'s fault - it's a limitation of the expressiveness of the CLR's type system (it doesn't support higher-kinded types, variadic type parameterization, or non-typename type parameters).
The hope and the expectation is that the CLR will gain support for first-class representations of F# concepts to allow for greater interop with C# scenarios. But the CLR's development has always been tied to C#, with other CLR languages like VB.NET and C++/CLI only exerting minor influence on the CLR's design, and most of their language-specific idiosyncrasies are handled by library code and compile-time tricks instead. For example, VB.NET's "On Error Resume Next" statement is implemented by having the compiler wrap each individual statement in a try/catch rather than by having the CLR specifically support it (though in this specific case that's probably a good thing, as On Error Resume Next is a horrible idea, I'm sure we all agree).
Minor correction, only Managed C++ and C++/CLI have the full power of the CLI, many of the performance improvements in C# have been related to exposing those MSIL capabilities to C# as well, which before required generating bytecode directly.
Which is kind of ironic given how they usually leave C++/CLI out of the picture, including the cross platform story.
Did you check the recent blog posts about C# 12? Clearly the team has run out of ideas for meaningful improvements to the language, while ignoring discriminated unions, which is the most important concept missing from the language.
I’m not arguing there is more important stuff, but DUs would enhance the language on a wide, fundamental level, accelerating a lot of other advancements.
The only thing I really missed has been adopted in recent years: pattern matching.
Languages are not used in isolation; a great IDE experience and mature libraries for every use I can think of are more valuable than grammar and semantics.
It's also a reason why I would rather do FP in C++23 than Haskell, even with all the warts and paper cuts that entails: ecosystem.
If only the tooling was at the same level as C# and Java.
Additionally, they add friction to a development stack: now everyone needs to be comfortable with two language stacks, and most of the time it isn't really worth it.
It's because they are easy to do in C# without explicit support. The proposal is still in the works, but they argue a lot about the syntax and about exhaustive type checking for all the edge cases.
FWIW, this is a switch expression rather than a switch statement.
But in any case I really love this addition to the language but the inability to have multi-line or block expression arms is a constant annoyance for me.
You can even combine these with the new one line record syntax to create a poor man’s discriminated union.
Your pedantic correction is actually important for another reason: the switch expression (unlike the switch statement) is defined at the language level as an expression (evaluating to/"returning" a value). That would be fine, except C# doesn't have a void/unit type at the language level, meaning the switch expression has to return an actual value, which limits the places you can use it, compared to the F# counterpart (or the match expression from Rust, etc.), to very specific cases, usually those performing an assignment.
The workaround for that is the same as the workaround for the really lame one line limitation: you need to call a (preferably (static) local) function in the handler portion and then return something like `true` assigned to a discard. Hacks all around!
E.g.
_ = foo switch
{
    a when … => CaseA(foo),
    _ => CaseB(foo)
};
With CaseA and CaseB returning bool in order to call a function depending on the value of foo rather than assign a value.
The blog post is indeed the worst example you could pick for this case. As you pointed out, modern C# would be about the same length (which undercuts his "wow" effect about the functional core), and the F# shown doesn't really have higher readability than the C# code shown.
I want to like F#, but the syntax is just bonkers to me. There are keywords which are used in only one context, such as `then`, and sometimes an operator means something in one context but something else in another, e.g. equality/assignment:
let a: int = 1 // Assignment
a = 3 // Testing equality, will give you an error
a <- 4 // Assignment, valid
Some of the complaints in the article are certainly valid (why would there be early-return in the context of computation expressions, but not in ordinary functions?), but some I claim are actually features, and some are false.
* Linearly ordered files. Several times in F# I have tracked down an issue simply by bisecting the codebase, which is impossible in C# because there's no meaningful way to halve the code. I think I've only once ever had a problem with the linear ordering that wasn't solved simply by reordering some files.
* Explicit type conversions. I've only ever found this awkward when interfacing with C# code that is designed around C#'s extreme laxity. Explicit is better than implicit!
* "No struct tuples" is false - that's what the `struct` keyword is for. (It may have been true when the article was written.)
* Dot notation for indexing is now no longer necessary.
It's certainly true that OO idioms are often clunky and feel strangely like they were constructed by Frankenstein, but then I almost never find myself trying to use them anyway. You don't notice oddities in features which you never think of using! Similarly, the number of times I use `<-` is so low that I don't think of it as being incongruously odd - why shouldn't there be a baroque syntax to indicate the place in your code that's likely to have a 50% higher chance of bugs?
I have been informed that computation expressions do not have early return. For example, `async` does allow you to express what appears to be an early return, but only of the `unit` type, and in fact it doesn't return early; it simply proceeds straight past the `return` keyword. Raised https://github.com/dotnet/fsharp/issues/15759 .
You can create true "early return" behavior by writing a lazy computation builder [0]. It feels esoteric but taking the time to craft a domain specific computation builder can drastically simplify the rest of your code base. I don't know if there's a more modern (more ergonomic) way to do this since this has been around since ~2012 or so.
Alternatively you can take the more functional approach and rewrite your function logic to use `if x then (value) else` throughout instead of `return (value)` or `yield (value)`. Early return isn't really a thing in f# but if-else is.
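For example (hypothetical clampPercent), the "early returns" just become branches of an expression that yields a value either way:

let clampPercent (p: int) =
    if p < 0 then 0
    elif p > 100 then 100
    else p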
Most of the time early return can be replaced by function calls. What bites is that F# has no way to force TCO, so sometimes it's safer to write a loop with a mutable flag.
> "No struct tuples" is false - that's what the `struct` keyword is for. (It may have been true when the article was written.)
Quite probably. The underlying CLR ValueTuple type is a relatively recent .NET addition (and partly only exists because C# asked for it). (It was added in Fx 4.7 / Core 1.0, whereas the reference type Tuple was added way back in Fx 4.0.)
> but I just want something closer to Scala, but for .Net
That's what I have been working toward with my language-ext library [1]. Including a ZIO like effects system [2], Haskell-like Pipes [3], Clojure-like concurrency primitives [4], the fastest immutable data-structures in .NET [5], and lots of other common FP bits.
Obviously more support for expression based programming would be welcome (and higher kinds), but you can do a lot with LINQ and a good integrated library surface.
That's cool, but question: why is the language-ext library documentation using Haskell's Haddock to render the pages? Wouldn't C#'s native documentation system be better?
It isn't Haddock, it's simply inspired by Haddock. It's a library I wrote called best-form [1]. The README gives context:
"This is a C# doc-gen tool. It was primarily built to support my Language-Ext project. Which is non-idiomatic in its approach. I couldn't find documentation generators that did it justice. I also really liked the Hackage documentation style from Haskell, so have taken a styling approach from there (even if it looks a little dated now, it was always the documentation I felt most comfortable reading)."
language-ext isn't idiomatic C# and so idiomatic C# doc-gens have quite a bad time with this library. One of the major benefits over other doc-gens is it grabs the README.md from each folder and prepends it to the relevant section - allowing me to write some contextual documentation outside of the auto-generated API documentation. This makes it a much more approachable reference.
It also supports markdown in the XML documentation, avoiding a lot of the need for lots of ugly XML special characters [in code] and extending the layout options compared to the common C# doc-gens.
This is a compiler error because you haven't made your variable mutable.
Also, I actually like that initial assignment and equality testing both use =. I think of `let a = 1` to be less like assignment and more like `Assume that a = 1 is true`, so using the same operator (e.g. `a = 3`) makes sense for comparisons.
And `<-` as a separate operator is good because mutability should be exceptional for many (most?) codebases.
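A tiny illustration of the split:

let limit = 10            // immutable binding; `limit <- 11` would not compile
let mutable counter = 0   // mutability has to be opted into explicitly
counter <- counter + 1    // ...and uses the visually distinct <- operator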
I just don't read code as I would a mathematical proof. I think in terms of what memory locations are equal to what in the stack, or the heap, and what is the lifecycle of that data in that memory address.
When I read "int a = 1" in C#, I implicitly translate that to "take a 4 byte piece of memory on the stack, and set it equal to 1". I don't think in the abstract sense of a formula.
When I see a class like:
class Foo {
    int x = 2;
    string xyz = "Hello";
}
var foo = new Foo();
I read this as: allocate a chunk of memory big enough in the heap to hold a 4 byte integer and an 8 byte pointer, then set that 8 byte pointer equal to a static chunk of memory where the "Hello" string is pooled.
> When I read "int a = 1" in C#, I implicitly translate that to "take a 4 byte piece of memory on the stack, and set it equal to 1".
I think that's overspecifying a bit. It could be kept in a register rather than on the stack. And due to the Static Single Assignment (SSA) transformation that modern optimising compilers do, variables don't correspond exactly to registers anymore; each time you modify the variable, it becomes a new variable, and then the compiler removes or changes extraneous modifications and dead code. It only keeps track of the values that move through the code. You could really only count on variables corresponding exactly to stack slots or registers before SSA form existed.
> I just don't read code as I would a mathematical proof.
That's fine, although I'd say that FP is probably just not for you. It's very much a style of programming that lends itself more towards "programming is akin to proofs" than "programming is about manipulating things on a von neumann architecture". Neither is an incorrect view of the world, but they do represent different ways of reasoning about things and it's better to use a language more suited towards one way of thinking.
To be honest, when I read e.g. Haskell code, I don't get the sense of a mathematical proof at all, not compared to proof assistants like Isabelle/HOL, Coq, or Lean. It looks more like abstract, high-level wizardry: casts that transform the output of one function into the correct type so that it can become input to another. And yet it's very abstract typing, nothing like proof-assistant tactics (though I admit I'm not all that experienced with proof assistants either).
I think immutability is more about making it easier to reason about the code than mathematical proofs specifically.
This is the biggest hurdle for a lot of people, at least for me.
The immutability.
If you have grown up doing objects, or C#, and thinking in variables, then 'a=1' means a memory location for variable a holds a 1, and you should be able to change that. But it is really more like a function that returns a 1.
'let a = 1' is not assigning the value 1 to variable a.
'a' is a function that returns a 1.
I think this is the biggest reason why people trying to learn functional programming in languages that don't enforce immutability have a harder time than with languages that do enforce it.
It's like moving to another country where the people around you purposely don't speak English, so you have to learn the language.
If they did speak English to help you, then you wouldn't learn the language.
Well, most people learn about variables [1] before variables [2], so I find it funny that you think of the mutable ones as English.
Mutable variables are better explained as slots or cells, I think. Both OCaml and Rust have a concept of ref cells, for example (and they entirely replace mutable local variables in OCaml).
I got the concept of immutability, but I inevitably ended up recreating OOP patterns whenever I tried working with F#. I do ASP.NET Web API projects, and trying to do dependency injection ends up just translating C# to F# code rather than writing idiomatic F#. I just don't know if HTTP is something F# is really good at. I can see it being really useful if you were writing something math-heavy.
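FWIW, the idiomatic F# substitute for constructor injection is usually just passing dependencies as function parameters and partially applying them at the composition root. A rough sketch with made-up names:

// Core logic takes its dependency as a plain function parameter.
let greetUser (loadUserName: int -> string option) (id: int) =
    match loadUserName id with
    | Some name -> sprintf "Hello, %s" name
    | None -> "Not found"

// At the composition root (the ASP.NET side), bake in the real dependency once.
let greetUserFromDb = greetUser (fun userId -> Some (sprintf "user-%d" userId))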
Any particular documents/tutorials you recommend? Feels like SAFE doesn't have a ton of either but I might just be looking in the wrong places. Like I admit I didn't try the Dojo yet because that sounded like a second round of learning not the very basics, but I'm also only eh at web development
Appreciate the links. Fable/SAFE has been something I'm curious about for a while. And I already meant to look at the elmish book but had not gotten around to it.
I've actually gone through most of the Dojo tonight except for the reset button.
I tried using the safe stack, but it seemed like it wasn't up to date with the latest version of .NET. Is that normal? Or did I hit a weird edge case in the ecosystem? (Or was it a case of PEBKAC?)
Yes you're right that they are a little behind. Currently on .Net 6. That is a shame as .Net 7 seems to have made significant improvements to performance.
That said I don't think there are any features that I'm missing out on being at .Net 6.
I think there is a limited number of maintainers of the SAFE template so they might not be able to keep on the bleeding edge. But it's not that hard to update the components to the latest versions. As can be seen on this waiting pull request https://github.com/SAFE-Stack/SAFE-template/pull/564
I should also mention that when I've hit issues with packages not working on the latest .Net version I've asked in the F# slack and twice had the maintainers fix things within a day so it would run at the latest version.
I found the Slack channel to be pretty responsive and friendly. Still, I was a bit bummed because I kept running into issues. Some documentation links were broken and I tried to fix them, but I couldn't get the project to build on Debian. I tried safe stack and had the wrong version of .NET. Once I got the right one, I had issues running the project (turned out the path did not support the "#" character). A lot of the documentation assumed you knew the .NET ecosystem, but I was coming from python. It seemed like I kept hitting sharp edges.
I'm still curious about F#. It has a lot of neat looking features (units of measure, computation expressions, active patterns) and has some of the easiest to read code I've seen. I just don't know if my experience was representative or if it was abnormal. :/
Yeah the .NET learning curve is steep. It's a little worse than some other languages out there. I found the same thing. I came from a Linux background, PHP, Perl and similar.
Ah, yeah. If you are doing web programming, then for F# you would use a library that is really like a DSL for the web. In practice, the most useful F# libraries are written as DSLs, so each is like a language extension for doing 'whatever'. To really get these web-based ones, you need to understand the Elm architecture.
In any case, I get it. If you have done ASP.NET and C#, then suddenly this F# way of building web pages by combining functions is a hard hump to get over. Your brain has to change how it thinks through the whole flow.
Really, F# on the web is like Elm.
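The Elm architecture itself is tiny; stripped of any UI library, it's basically just this (minimal sketch):

type Model = { Count: int }
type Msg = Increment | Decrement

let init () = { Count = 0 }

// update is pure: the framework feeds it messages and re-renders from the new model.
let update (msg: Msg) (model: Model) =
    match msg with
    | Increment -> { model with Count = model.Count + 1 }
    | Decrement -> { model with Count = model.Count - 1 }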
Giraffe is nice because it is itself built "just" as ASP.NET Core Middleware so it plays a bit more nicely than Suave with a mixed stack of C#-defined Middleware.
It's more likely you accidentally fall back into just translating C# patterns to non-idiomatic F# with Giraffe, but it's also nicer when in that case of needing to live in both worlds and use a mixture of libraries built for C# ASP.NET projects.
F# is severely hampered by its standard library, which is object-oriented by design. Some newer things like fluent APIs or object initializers are more amenable to F#'s functional style, but on the whole my experience matches yours: as soon as you start interfacing with any .NET library, it feels like you're writing in a foreign language.
The reason this bothers you is because you're trying to apply your imperative experience to a functional language. Once you stop doing that, F# will make a lot more sense. For example:
let a: int = 1
This isn't assignment; it's a binding that permanently associates the identifier `a` with the value `1`.
I used Fable, which is like an F# transpiler: it compiles F# to other languages.
The Python seems hard to read, but could be my unfamiliarity with Python.
This is such a weird example to me, mostly because I've yet to come across a language that doesn't have awkward gotchas, and that one is one of the easiest ("ah yeah...= is only assignment in a let..weird"). The article cited ends with -
> That’s it for now. Hopefully, some of these issues will be addressed in future versions of F#, though as I said, most of this is simply annoying / inconsistent. It shouldn’t stop anyone from using this awesome language
and as others have pointed out, some of this is complaining about features or about not working with the language. Mandatory file ordering should probably exist in more languages (the amount of debugging/code-reading time it saves is insane), and complaining about explicit typing because it's slowing down your contest code is just eye-rollingly silly to me.
I felt the same, but once you get used to the syntax, it's much nicer for functional first coding. If you find yourself fighting the syntax a lot, you're probably using it in an imperative style that is not idiomatic in f# anyway.
Lines 1 and 2 there don't mean different things. They're both attempting to bind/unify `a` with a value. It's just that in the second case, `a` already has a different value, so the binding fails.
In what way are the algebraic types more complete in F# than Scala? I was interested in F# back in the day, but that was when Mono was not really a thing and I was not really going to go back to Windows.
While some of the complaints in the article are reasonable, the author seems to miss that F# began as essentially "OCaml for .NET". Its syntax is ML-derived and the .NETisms came later, which is why there are sometimes two ways to do the same thing.
In early versions of F# you had to stick `#light` at the top of a file to use lightweight syntax, which is now the default. The verbose syntax, which is closer to ML was originally the default.
> 1. You have to manually order your files in F# projects in order they must be processed by the compiler.
This is one of my favourite features of F# because it forces you to design your code in a way that minimizes mutually recursive types, since all mutually recursive types must live in the same code file. You end up with cleaner code which is easier to test, maintain and reason about.
> 2. No break and continue for loops. 3. No do-while loops.
I've never seen this as a problem because I rarely write loops. It's so rare that I use them that I end up having to double-check the syntax. I always reach for .map, .fold, .filter, etc. first, and I can't remember a time I couldn't express what I wanted using these.
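For example, the usual loop-with-continue and loop-with-break patterns come out roughly as (made-up data):

let evensDoubled =
    [ 1; 2; 3; 4; 5 ]
    |> List.filter (fun x -> x % 2 = 0)   // "continue" becomes a filter
    |> List.map (fun x -> x * 2)          // the loop body becomes a map

let firstLarge =
    [ 3; 8; 42; 7 ] |> List.tryFind (fun x -> x > 10)   // "break" becomes tryFind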
> 6. Awful type constructor syntax
The primary constructor syntax makes sure all other constructors must call it. It prevents you from instantiating a class incorrectly and is a better fit for immutable-first types. It could perhaps use some improvements, but the common case becomes more terse to write.
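For reference, a sketch of what that looks like (hypothetical Point type); any secondary constructor has to delegate to the primary one:

type Point(x: float, y: float) =
    member _.X = x
    member _.Y = y
    new() = Point(0.0, 0.0)   // must call the primary constructor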
> 7a. a) array type can be declared as “int[]” or “int array”; same for list and seq; moreover, it’s the same for any other generic type with one type argument.
Comes from ML, where multi-parameter generics are also written `(x * y) type`. I think it was a good idea to make F# closer to .NET and use `Type<X, Y>`, but they should have done the same for single-parameter generics too to make it consistent. I never use `X array` syntax and prefer `array<x>`, though I would prefer if they also made these case consistent and I could write `Array<x>`, but Array is a module. Not sure how to fix this.
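All of these spell the same type, which is what makes the inconsistency grate:

let xs: int[] = [| 1; 2; 3 |]
let ys: int array = xs    // ML-style postfix form
let zs: array<int> = xs   // .NET-style angle-bracket form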
> 8. “fun x y z -> …” syntax for lambda expressions
Again from ML. I imagine there are syntactic ambiguities if you drop the `fun`, as `->` is also used in pattern matching and other places.
I dislike the idea of spaces being significant, and there are already gotchas in F# where this is the case. `x().y` and `x ().y` are not always the same thing.
> 12. “<-” for assignment. Another personal thing, maybe, but “:=” seems easier to type.
`:=` was used for ref cell assignment, but is now deprecated.
Yeah, my bad, but the code from the grandparent is also incorrect, as for it to be compilable the variable must be declared as mutable.
That confused me, as I assumed the code there compiles. But the only way for it to compile is shadowing, so yeah, I kind of assumed that to shadow is to use this syntax. I haven't worked with F# for some time, so I should've checked before posting.
There's another benefit to using the same underlying runtime environment as well: no bridging overhead. If your specific problem set involves a lot of boundary crossing, bridging overhead can easily dominate all else.
Long ago I integrated IronJS (written in F#) into a project written in C#. IronJS at the time was a very primitive JS compiler though it was a real compiler using the runtime machinery to emit Expression objects that were JIT'd into native code.
I was very excited about switching to V8 for the vastly improved performance. To my shock and horror once I got it working the performance was worse. It turned out most scripts called our API a lot (the equivalent of the DOM in a browser). Bridging between CLR types and V8 (C++) types completely erased V8's superior performance. No amount of lazy bridging and zero-copy abstractions were enough and I eventually abandoned that branch.
It was a humbling experience that I learned a great deal from.
There is a nuance I love to bring up: .NET's biggest strength in the apps market today is the best support for native interop, i.e. bridging .NET to Swift/Objective-C or .NET to (Android) Java. Every time there is a layer in between, you lose: either performance (your case) or capabilities (like the typical Cordova, React Native or Flutter plugin).
Btw, that's the work of Miguel de Icaza, via the Xamarin acquisition.
I love seeing F# mentioned on Hacker News, but this article in particular is a bit dubious. Yes, you can use F# from C#, but it gets frustrating quickly.
Fortunately, almost anything you can do in C# can be done better in F# anyway, so there's really very little need to use C# at all once you take the plunge into functional programming.
I agree - I guess this article is more about a (very) gradual way to adopt F# into an existing C# code base, rather than an inherently useful combination of C# and F#.
True, but for any serious integration the F# library should expose an API that's designed to be more idiomatic for C# callers. When I see `ListModule.OfSeq` being called from C#, I cringe a little.
Would you mind sharing some of the frustrations you have had using F# from C#? I haven't hit anything too hard yet but I also haven't worked on any large projects that are setup like that.
In regards to the C# vs F# issues. Have you tried using WPF or any desktop GUI frameworks from F#? If so how was the experience?
It's been a few years, but yes, I've implemented WPF apps in F# and found it quite doable. FsXaml was very helpful: https://fsprojects.github.io/FsXaml/
Reactive Programming I guess is the key to GUIs from F#/FP perspective. It is the modern way of doing things anyway but requires then a UI stack which is optimized for it.
C# is required for the UI programming, though. I don't know of any F# module that interacts with and manipulates UI elements. Yes, creating a web UI is possible, but not a desktop one.
I'd love it if someone could point the way (because until today, I've had to combine both languages for desktop apps).
I developed a very non-trivial ASP.NET Core application mixing F# and C#. I learned you have to take the best parts of both languages to get maximum benefits. Specifically you're going to want a project graph that looks like this:
1. Your top-level project should be a csproj where all your "web" stuff is. Controller routes, DTOs, startup code, etc... This project is responsible for converting data into nice objects that are passed to the F# code.
2. A mid-tier project written in F#, an fsproj. This is where you can be nice, pure, functional, live out your best life pretending C# doesn't exist. However you will eventually run into issues interfacing with external libraries, EF Core, lack of standard library support for whatever you're doing. That's ok! The final layer will solve it.
3. A foundational project written in C#. This project's job is to hide the ugly and make your F# project into the functional programming nirvana you know it can be. Wrap those dependencies into pure functions, write your entity framework junk, use all the latest C# language and runtime features and expose them to your F# project.
So something like:
C# -> F# -> C#
Ultimately, I decided that C# really is good enough and I didn't see enough benefits to continue using F#.
> Ultimately, I decided that C# really is good enough and I didn't see enough benefits to continue using F#.
I've heard this story repeated so many times over the years. The responsiveness of the C# language team in particular (as opposed to the runtime or ASP teams, etc.) is remarkable, and the thoughtful inclusion of functional-inspired features over the years has really dramatically improved QOL for C# devs. (With the exception of some annoying carve-outs, like going too far in trying to please JS/node refugees by not only introducing support for top-level statements but also pushing it as the default in the various templates, the botched delivery of global usings, and other such changes; but to be fair, many of these were implemented correctly and without any complaints on my end in the language itself, and my quibbles are more with the delivery, integration, and polish in the IDE and downstream by teams like the ASP.NET Core one.)
After a long pause during the denouement days of the .NET Framework, the C# of today would be unrecognizable to C# devs of yesteryear, and the changes keep coming.
(All this is from the POV of a developer that’s been using C# since 2002; back when J# was a thing and I was slowly adapting to writing code in a case-sensitive manner after years of VB.)
Yes, that is the ultimate problem with F#: Microsoft behaves as if it was a kind of mistake to add it to Visual Studio 2010. The experience always lags behind C# and VB, Microsoft leaves it to the community to do most of the work, and now with stuff like code generators being used everywhere it's getting worse.
Funny, I did exactly the opposite! I really enjoyed f#'s functional-style web framework Suave, but put it on top of services written in OO style, which all used immutable f# records and lists under the hood. It ended up being a lot easier to do all this in f# than mix the two languages, since f# is dual purpose. I called it sandwich-oriented programming.
Using a framework like Suave probably reduces the "impedance mismatch" between ASP.NET and F#. If I had used that, I probably wouldn't have needed to split the code base into C# and F# parts.
I’ve tried to use “ASP.NET Core with F# underneath” for several projects, thinking it would make for an easier bridge for the rest of the team members who were very comfortable with asp.net already.
And each time, it turned out to be a mistake.
Things went way more smoothly by just using an F#-native web framework like Falco or Suave or Giraffe or …
Surely C# qualifies more as "multi-purpose" than F# does? It's certainly more of a multi-paradigm language - my own C# code freely mixes functional, imperative and OO- style (multiple implementations of the same interface etc.) techniques, the latter two of which I struggle to envision being improved by rewriting in F#.
Yes, but from what I've seen, many OO patterns in F# are actually a good deal more verbose than they are in C# (whereas the reverse is true for functional patterns). And writing procedural/imperative-style logic in F# feels like it's fighting against what the language was designed for.
Don Syme makes a distinction between OOP and "object programming" various places on the internet. Object programming is quite natural in F#; possibly even moreso than in C#. And it's certainly not more verbose in F#.
That's what I meant by "OO services" in original comment: a class (or composition of classes) that maintains some state e.g. a connection, maybe some cached data or whatever, and wraps some third party services or DB calls, etc, and optionally create an interface it implements, for IoC and/or unit testing. This works quite well in F#. F#'s anonymous classes are a great conciseness aid there too (C#'s aren't sufficient; they're really just anonymous records).
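For anyone unfamiliar, the F# "anonymous classes" being referred to are object expressions, which implement an interface inline; a minimal sketch with a hypothetical IClock:

type IClock =
    abstract Now: unit -> System.DateTime

let fixedClock =
    { new IClock with
        member _.Now() = System.DateTime(2024, 1, 1) }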
It's just all the class inheritance hierarchy stuff that is uglier in F#.
> It's just all the class inheritance hierarchy stuff that is uglier in F#.
Yes, and that makes it harder for F# code to effectively re-use existing C# libraries. I know it is _possible_, fsc can reference DLLs no problem but unless that code was written to be idiomatic F# then I will have to awkwardly write some classes with a worse syntax than C#.
F# deserves a project like Phoenix that the entire community can get behind. World-class docs, zero getting started friction(docs, examples, and libraries all just work), and bomber F# APIs over any ubiquitous C# libs it might need.
I'd argue .NET isn't actually that great of a multi language platform. We have a mixed F# and C# solution at work, and that split sometimes causes a lot of friction. If you have some C# code that depends on F# code, that constrains your ability to have F# code depend on C# code. You need to be very careful about how you organize your work to get a good solution (more than on other platforms.)
In the JVM world, mixing languages works a lot better, because you can compile .class files instead of whole assemblies. So mixing clojure and Java, for example, is very easy to do in any order.
The F# is certainly nice, but the C# is odd. This is a textbook example of either inheritance or interfaces. I did inheritance here but it would be trivial to change it to an interface.
The class implementations add a bit of upfront verbosity but the calculation at the end is arguably nicer to read: that method is concerned with applying the discounts, but in the F# case it has to know how to apply each type of discount, while the C# encapsulates that in the Discount implementation.
let subtotal = List.sumBy (fun ol -> ol.Product.Price * (decimal ol.Quantity)) orderLines
let totalDiscount =
    discount
    |> Option.map (fun d ->
        match d with
        | Percentage p -> subtotal * (decimal p / 100M)
        | FixedAmount f -> f)
    |> Option.defaultValue 0M
subtotal - totalDiscount
Though I'm also pretty sure F# could do the same thing with the classes if you wanted. And someone else pointed out you can write some C# that looks the same as the F# if you use a switch statement. And I'm pretty sure you could make it look even more F# if you used more Linq.
I guess the F# matching is pretty much like interface/class checking as follows? I don't see the functional difference:
if (discount is IPercentage percentage) { .. subtotal * percentage.value / 100 ...
else if (discount is IFixed f) { .. subtotal - f.value ...
And you could do even more with some newer C# features like anonymous classes etc.
I'm not left feeling like I'm missing out on anything without discriminated unions in particular. I'd be interested to see some cases where the functional nature of F# does make a large improvement over the equivalent C# though.
The main difference is the totality checking. A discriminated union guarantees that you've handled the entire modelled domain (and nothing else); repeated interface checking merely guarantees that you've handled a subset of it (and also you may have handled a bunch of other stuff).
It's so comforting to know up front that you've done everything you need to do!
let calcDiscount =
    function
    | Percentage p -> subtotal * (decimal p / 100M)
    | FixedAmount f -> f
And then:
let totalDiscount =
    discount
    |> Option.map calcDiscount
    |> Option.defaultValue 0M
F# is missing some of the convenience operators, but you could easily add these yourself if you wanted:
let (?) x f = Option.map f x
let (|?) x y = Option.defaultValue y x
and so totalDiscount becomes:
let totalDiscount = discount ? calcDiscount |? subtotal
The benefit of DU's over inheritance for me is that you can't tie yourself to any additional details that get stuck onto Discount (since you can't extend DU's). If you have loosely coupled data you won't have sort-of-relevant methods possibly mutating shared state. The discount calc only takes a simple thing hidden away by the Discount DU, and so there are fewer plates you're trying to keep spinning in your head (all of "what could be").
Inheritance isn’t necessary here. You achieve the same loose coupling with interfaces and you gain the benefit of your Order type not having to know how to calculate every single discount.
I know it's a really trivial matter, but I actually got to use the tuple swap thing the other day and it felt so good.
(ItemA, ItemB) = (ItemB, ItemA)
~50% of our methods don't even have block bodies anymore... Switch expressions are our bread & butter now. Once you learn how to combine expressions + switch + LINQ, you have the triforce of poor-man's functional programming inside the perfect imperative realm by which to protect it.
F# is nice, I used to work with traders using it...
Then I moved to a so-called startup in France and the CTO laughed when I spoke about F#... This is why people don't use it more: they never learned FP... So sad.
To be fair, you'd still need a private constructor in Discount, otherwise the sumtype is not closed, like it is in the F# case. And the percentage Value should be `float` :)
Ideally you would want to write `abstract sealed` for `Discount` too, but C# does not allow it. CIL does support it however, and it is how F# implements sum types. There's no reason I see for C# to continue enforcing this restriction, and lifting it would allow us to write proper DUs.
While I tend to make the SUM type cases sealed, it is theoretically not required to make the sum type closed. All subtypes of a case belong by definition to that case. So any code that handles the defined cases also handles the subclasses implicitly.
Many believe that F# is better for functional code and C# is better for imperative code. Hence an F# functional core and a C# imperative shell.
But I think this is a myth. F# the language is better than C# at most things and certainly better overall. If you are going to use F#, you may as well go all in and get all of the benefits.
For web services, Giraffe or Suave combinators are much easier to reason about than ASP.NET MVC patterns.
While there's a thread on F#, I'd like to complain about how I attempted to create/publish an F# dotnet app entirely from the terminal (i.e. without the bloat of Visual Studio) and it doesn't seem to be supported, which was unfortunate.
dotnet new webapp -lang F# -o MyFSharpWebApp -f net8.0
was the command I used. This was in Windows 10. I'll probably attempt it again in WSL2 when I have time, though I'm not expecting better results.
This is, unfortunately, a consequence of the annoying naming Microsoft does (I am allowed to complain about this because I worked on the .NET team -- nobody can name things we can't).
You'll note that in the list of default templates:
ASP.NET Core Empty web [C#],F# Web/Empty
ASP.NET Core gR... grpc [C#] Web/gRPC
ASP.NET Core We... webapi [C#],F# Web/WebAPI
ASP.NET Core We... webapp,razor [C#] Web/MVC/Razor Pages
ASP.NET Core We... mvc [C#],F# Web/MVC
ASP.NET Core wi... angular [C#] Web/MVC/SPA
ASP.NET Core wi... react [C#] Web/MVC/SPA
"webapp" is actually a flavor of MVC with Razor Pages to define the UI, which is itself a flavor of UI that's different from the typical HTML/JS/react setup most people use.
I think the cleanest way to do web development with .NET is to leave .NET-isms out of the UI (.NET UI is, ironically, its weakest link now) and keep it on the server.
I had a great time writing a static site generator for my website. Initially I wrote it all in C#, but I started writing specific components in F#. Currently the only F# portion is the sync-over-FTP feature, but I do plan to convert a lot of the data processing portion to F# where it makes sense.
I do look forward to having opportunities to mix the two. C# + a gui framework and F# for the non-gui logic works really well in my experience.
Does F# still require you to manually order all your source files for the compiler? Sounds like a thing that should've been figured out a while ago (and OCaml doesn't have that limitation anymore with Dune etc)
Required ordering encourages you to organize your code into an almost directed acyclic graph by minimizing cyclic dependencies between types/functions. The only way you can have cyclic dependencies is to put them in the same code file and use `type ... and ...`, or `namespace rec`/`module rec`, which are non-default and not encouraged.
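E.g. the escape hatch looks like this (hypothetical Employee/Department pair), which keeps the cycle visible and confined to one file:

type Employee = { FullName: string; Dept: Department }
and Department = { Title: string; Head: Employee }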
I have a ~300kloc project with zero cyclic dependencies between any types or functions, and it's delightful to maintain and test. If I had written this in C# I would've taken the easy way out many times and just made cyclic dependencies between types, which become non-unit-testable without creating mock types just to test.
What if the compiler detected the order itself, found cyclic dependencies and refused to compile if such dependencies exist? It can do that without requiring you to specify your order manually. Ordering by hand or C# are not the only two choices.
I could implement this with a super simple prototype where you list dependent files as comments at the top of each file. I'd honestly rather do that than reorder items in XML by hand.
Absolutely, it had better be a decimal. These numbers in .NET are decimal floating-point; they can represent all these 12.34% values without any loss of precision.
> Where C# is the most dominant language in the .NET world
Why would we want to live in the ".NET world"? I say adopt a programming language, or languages-ecosystem, that's not useful only in the "[some niche] world", but more-or-less everywhere. Whether it's older and well-trodden or newer and up-and-coming, I don't see the benefit of limiting yourself that way.
It is not really a "niche". The ".NET world" is more-or-less everywhere: it is cross-platform, open source, and runs on Linux/Windows/macOS/iOS/Android/more.
The ".NET world" is comparable to the "JVM world" or even the "LLVM world": a virtual machine target with a varied language ecosystem and a lot of real machines supported at runtime (in a cross-platform, open source way today).
".NET world" is defining a complex language ecosystem, and it probably isn't as "limited" an ecosystem as you seem to think.
In practice, most of the RFPs that we get for UNIX deployments would rather have us provide solutions in ecosystems born out of the UNIX world, and aren't keen on hearing about .NET, even if it supports UNIX platforms nowadays.
Back in the .NET Core 3.1 days, I had a migration project from .NET Framework to Java, as the customer didn't want to stay in the .NET ecosystem, and given the amount of code they had to rewrite to be fully functional in .NET Core, they decided to move elsewhere.
They aren't alone. Sitecore, once the lighthouse of enterprise .NET CMSes, is now a polyglot platform, where most of the new products are written in a mix of Java and JS/TS.
Given the recent efforts in WCF Core compatibility and System.Web wrappers, it is quite clear that the decision to create a Python 2 / 3 schism in the .NET world is taking its toll.
> Given the recent efforts in WCF Core compatibility and System.Web wrappers, it is quite clear that the decision to create a Python 2 / 3 schism in the .NET world is taking its toll.
We've had very different experiences. I've never had a problem replacing dependencies on WCF or System.Web, have yet to ever see a use case for WCF Core, and have yet to find System.Web dependent middleware that wasn't trivial to rewrite as modern ASP.NET (Core) Middleware. (ETA: Or toss. Some of what I find done in System.Web has only that one destination, because it is obsolete or was a bad idea in the first place or there's an easier way to do it.)
.NET is big for mobile app development, particularly when you include Unity. It's really one of the best solutions for writing Android + iOS apps, where you can share a significant amount of code and still have native-looking apps. You don't need to depend on clunky electron which makes your apps huge. .NET apps are smaller and faster than electron based apps. Swift is probably a common choice these days, but wasn't an option a few years ago.
Unity is used by a huge number of mobile games and uses C#. The Godot engine also supports C#.
On desktops not as much, but it is sometimes used for cross platform development, and plenty of games on Steam are written in Unity and run cross platform.
Linux desktop could have had a better story when MonoDevelop was gaining popularity, but Linux/desktop support basically got sidelined when Xamarin shifted focus to mobile development, and the acquisition by MS killed mono.
Since MS went fully open source on .NET, the situation for Linux is improving, but it's not yet a choice for most developers on Linux and may likely never be, because everyone wants to write web apps.
Many of the Linux desktop apps written with Mono have been abandoned or are barely maintained.
Even for those not using containers, Linux website hosts are generally cheaper (including on most cloud providers). In addition to new apps, I've seen some older web apps pressured into .NET upgrades just to gain Linux runtime support so they could run on cheaper servers.