In my experience PMs often work at a very high level. How things should work is defined inconsistently once you take into account all the user flows, subtleties, and restrictions of other systems. So programmers end up doing a significant chunk of the work by refining the specs so that the thing actually makes sense.
I will believe this theory if someone shows me that the ratio of scientists to engineers on the leading teams of the leading companies deploying AI products is greater than 1.
I don’t think the dichotomy between scientists and engineers being established here makes much sense in the first place. Applied science is applied science.
I think it's the part where he says you need an additional separate module signature, or to take two modules as parameters. With objects, OCaml will infer the object's signature for you; with modules, you need to (directly or indirectly) pass it explicitly in arguments.
In the `show_content` example, using first-class modules: `module H : ...`. The type needs to be specified, and since module composition is hard, it leads to a very verbose solution.
>> ARM has supported such capability via the standard CoreSight Program Trace Macrocell (PTM)[3]/Embedded Trace Macrocell (ETM)[4] since at least 2000.
Where are the performance tools that wrap those capabilities? IPT has magic-trace; what is the equivalent tool for ARM?
Green Hills Software Path Analyzer[1] and newer History[2] which likely invented or at least popularized the modern trace/callstack visualization used by Perfetto, Firefox Profiler, etc.
Shrug, I find them more helpful than Pydantic models for lots of canonical cases.
I have had good success with DRF model serializers in Django projects with 100+ apps (was the sprawling nature of the apps itself a problem? Sure, maybe). Got the job done.
As with anything, you gotta build your own wrappers around these things to get value in larger projects though.
Raw GEMM computation was never the real bottleneck. Feeding the matmuls, i.e. memory bandwidth, is where it's at, especially on the newer GPUs.
I'm spared having to deal with a full-blown DBMS daemon accessed through a network boundary unless I actually need one, and I have approximately zero pain, because I read the documentation and set up my database schemas to work on both solutions (and use WAL mode, so sqlite is rock stable). I also know how to open a file on the command line if I need raw database access for some reason (very rare with django anyway; django's console is just too good not to use). I also design my apps not to deadlock on unnecessarily long transactions and don't turn every request into a database write, so I can scale out pretty far before I have to worry about write performance. And if I do, I can still use postgres. Until then, I can do unified, consistent backups of all state by snapshotting the filesystem containing the uploads and sqlite files.
So I dunno why people insist on spreading so much FUD.
It's not FUD. For all the trouble you claim Postgres causes, I've experienced none of it in the last 4 years. The only extra thing for a simple setup is a couple of lines in your docker compose files, which is completely amortised because you already have a multi-process architecture with Python anyway (proxy + web server + workers). The upfront cost is so small that, for me, the expected total cost will rarely favour sqlite, even if you assume your application has only a 1% chance of scaling beyond what you can do with sqlite.
If all monad instances work differently, what is the value of the Monad interface? What kind of useful generic code can one write against the Monad interface?
The more constrained your theory is, the fewer models you have of it and also the more structure you can exploit.
Monads, I think, offer enough structure in that we can exploit things like monad composition (as fraught as it is), monadic do/for syntax, and abstracting out "traversals" (over data structures most concretely, but also other sorts of traversals) with monadic accumulators.
There's at least one other practical advantage as well, that of "chunking".
A chess master is more capable of quickly memorizing realistic board states than an amateur (and equally good at memorizing randomized board states). When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly. People familiar with monads often can hand-wave a set of unknowns in a problem by recognizing it to be a monad-shaped problem that can be independently solved later.
> There's at least one other practical advantage as well, that of "chunking".
> When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly.
This is one thing I've observed about Haskell vs. other languages: it more readily gives names and abstractions to even the minutest and most trivial patterns in software, so that seemingly novel problems can be quickly pattern matched and catalogued against a structure that has almost certainly been seen before.
One example: I want to run two (monadic) computations, and then somehow combine their results (with some binary operation). Such a trivial and fundamental mode of composition, yet it seems to lack a name in almost every other programming language. Haskell has a name for this mode of composition, and it's called liftM2.
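Concretely (a minimal sketch; `sumBoth` and `twoLines` are just illustrative names):

    import Control.Monad (liftM2)

    -- liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
    -- Run two Maybe computations and combine their results with (+):
    sumBoth :: Maybe Int
    sumBoth = liftM2 (+) (Just 1) (Just 2)   -- Just 3

    -- The same shape in IO: read two lines and pair them up.
    twoLines :: IO (String, String)
    twoLines = liftM2 (,) getLine getLine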
Never again will you have to re-write this pattern for yourself, leaving yourself open to error, now that you have this new concept in your vocabulary. Other languages will happily let you reinvent the wheel for the umpteenth time, or invent idiosyncratic patterns and structures without realizing that they are just particular applications of an already well-studied and well-worn concept.
Every monad is also an applicative and liftA2 does/is the same thing as liftM2. The only reason they both exist was due to Monad being popularized in Haskell earlier than Applicative and thus not having it as a superclass until the Functor-Applicative-Monad Proposal in Haskell 2014. It was obviously correct, but a major breaking change that also got pork barreled a bit and so took a while to land.
Yes you're absolutely right. I had a bit of a brain fart moment here. If they were Applicative operations then you would not be able to use `liftM2`, not the other way around.
This is my fear when I think about doing actual work in a team using a functional language: that there's an imbalance in understanding between participants, turning every discussion about problem x into a pattern-matching exercise against problem y. Like "is this liftM2 or liftA2?"
I've only had a couple of months of experience working with Scala before the team switched to Java. The reasons were many, but one of them was that the external consultant who was most knowledgeable in "thinking with functions" was kind of a dick, making onboarding a horror show of "go look up YouTube video x before I can talk about this functionality" delivered in a condescending tone. So within a month he was let go, and then no one in the remaining team really had the courage to keep developing it further. Some thought they could maybe maintain the current functionality, but the solution was only about half complete. (In the consultant's mind it was complete, because it was so generic you only needed to add a couple of lines in the right place to implement each coming feature.)
That said, I would love to work in a hardcore Haskell project with a real team, one with a couple of other "regular" coders that just help each other out when solving the actual problems at hand.
Well, I can't speak to your experience, but in the case of liftM2 vs liftA2 I have never even seen liftM2 get used. It's more of a historical oddity that it is available.
Traverse (or foldM) is probably a good start: likely the most useful monad-generic (or applicative-generic) function, simple but incredibly powerful.
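A minimal sketch of why traverse earns that description (`parseAll` is an illustrative name):

    import Text.Read (readMaybe)

    -- traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
    -- Parse every string, failing the whole batch if any single parse fails:
    parseAll :: [String] -> Maybe [Int]
    parseAll = traverse readMaybe

    -- parseAll ["1","2","3"] == Just [1,2,3]
    -- parseAll ["1","oops"]  == Nothing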
More generally, monads essentially support function composition between monadic functions, so you can use them to write code that is agnostic to the monad it runs in. This lets you write, e.g., production code that runs in IO or Async or Maybe, but run it in Identity for unit testing.
Also, monads allow syntax sugar such as do notation, which makes the code clear to work with even when you do know which monad you're working in.
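A minimal sketch of that idea (names are illustrative): the very same function runs in IO in production and in Identity in tests.

    import Data.Functor.Identity (runIdentity)

    -- Written against any Monad m, so 'm' is chosen by the caller:
    greetTwice :: Monad m => m String -> m String
    greetTwice getName = do
      a <- getName
      b <- getName
      pure (a ++ " & " ++ b)

    -- Production: run it in IO.
    main :: IO ()
    main = greetTwice getLine >>= putStrLn

    -- Test: run the same code in Identity with a canned value.
    unitTest :: String
    unitTest = runIdentity (greetTwice (pure "alice"))   -- "alice & alice"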
Your basic problem is that your programming language can’t express the concept cleanly. You need what’s called “Higher-Kinded Types”.
To give you a concrete example, in C#
Func<A, B>, List<A> -> List<B>
Func<A, B>, Task<A> -> Task<B>
Func<A, B>, Func<C, A> -> Func<C, B>
Can’t be expressed using a generalisation. But in Haskell, you can write
(Functor F) => Func<A,B>, F<A> -> F<B>
One of the biggest things that makes monads hard to understand is that the type systems of most languages can't represent them. Annoyingly, that includes the "typeless" ones.
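Spelled out in actual Haskell, the single generalisation of the three C# signatures above is fmap (a sketch; the specialised names are just for illustration):

    -- fmap :: Functor f => (a -> b) -> f a -> f b
    listVersion :: (a -> b) -> [a] -> [b]
    listVersion = fmap

    ioVersion :: (a -> b) -> IO a -> IO b   -- IO standing in for Task
    ioVersion = fmap

    funcVersion :: (a -> b) -> (c -> a) -> (c -> b)
    funcVersion = fmap                      -- for functions, fmap is just (.)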
I'm sorry, I'm not sure I understand entirely what you are trying to express by
Func<A, B>, List<A> -> List<B>
That said, in C#, you can write:
List<A> listA;
Task<A> taskA;
Func<A, B> func;
List<B> listB = from i in listA select func(i);
Task<B> taskB = from t in taskA select func(t);
And if it can resolve a method on List<T> called 'Select' that takes a Func<T, S> and returns a List<S>, and a method on Task<T> called 'Select' that takes a Func<T, S> and returns a Task<S>, this will compile.
In other words, I kind of think that Select methods (which can be Select extension methods, of course) amount to functors in C#?
and define a signature for it where `xs` and `fn` are the only input arguments, so that it accepts both `listB` and `taskB` without a compilation error.
Yes, but you can’t write something that’s generic over “things that support Select” because that’s not expressible via the type system.
So you can’t write a function once and then get a version that works on lists, a version that works on tasks, a version that works on nullables, and a version that works on parsers. One of the big secrets of monads is that Haskell users don’t spend very much time thinking about them, while people without monads have to think about them all the time.
> Right, ‘some type that has a Select(Func<T, S>) method available’
not just a Select(Func<T, S>), but a Select(Func<T, S>) that stays within its original container: calling Select on an F<T> has to give you back an F<S>, not leak into some different type that happens to have its own Select.
>One of the biggest things that makes monads hard to understand is that the type systems of most languages can't represent them. Annoyingly, that includes the "typeless" ones.
Well, yeah: since a monad is a type, a "typeless" PL will not be able to represent it.
See for instance the MonadIO typeclass from Haskell [0]. Constraining against this typeclass allows one to write monadic code / do-notation that works with any monad, so long as that monad supports the execution of IO statements.
Now for instance, arbitrary effects (error handling, resource management, etc) can be composed on top of an IO monad (e.g. via monad transformers), and MonadIO code, that is written to only depend on the IO effects, can still be executed in these contexts with more effects layered on top.
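A minimal sketch of what that looks like (`logLine` is an illustrative name):

    import Control.Monad.IO.Class (MonadIO, liftIO)

    -- Constrained by MonadIO rather than pinned to IO, so it runs in any
    -- monad stack that can perform IO:
    logLine :: MonadIO m => String -> m ()
    logLine msg = liftIO (putStrLn msg)

    -- Usable directly in IO, but also unchanged inside e.g. a StateT or
    -- ExceptT stack layered on top of IO.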
Here is an analogy. List is a container whose elements can be any type. There are general operations that apply to a list, e.g. map, reduce, filter, find, etc. These same operations work regardless of the element data type (int, float, or bool).
It’s similar for monads. If you can provide a unit constructor to turn a plain value into a monadic value, and a “map” operation that unwraps a monadic value, applies a function to it, and wraps the result, you have monadized the type. Your objects can then participate in any algorithm that operates on monads.
The monad algorithms are the same. The only things different are the unit constructor and the mapping function.
> a “map” operation that unwraps a monad value, applies a function to it, and wraps the result
It can be misleading to think of "unwrapping" a monadic value, since the monad interface does not support it. For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
What monads do provide is `join`, which turns nested monadic values into flat ones, like `List<List<T>> -> List<T>`. Even this seemingly trivial example is interesting though, since there are many ways to "flatten" a List<List<T>> into a List<T>: we could concatenate (e.g. depth-first), interleave (e.g. breadth-first), diagonalise (to support infinite lists), operate on chunks at a time (e.g. iterative deepening), etc.
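A small sketch of that point (the `interleave` alternative is hypothetical, not the list Monad's actual join):

    import Control.Monad (join)

    -- The standard list join concatenates depth-first:
    -- join [[1,2],[3,4]] == [1,2,3,4]
    flatten :: [[a]] -> [a]
    flatten = join

    -- But other flattenings exist, e.g. round-robin interleaving:
    interleave :: [[a]] -> [a]
    interleave xss = case filter (not . null) xss of
      []  -> []
      yss -> map head yss ++ interleave (map tail yss)
    -- interleave [[1,2],[3,4]] == [1,3,2,4]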
> For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
This is called a catamorphism, i.e. folding. The opposite transformation is called an anamorphism: generation from a seed value.
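Concretely, in Haskell terms (a sketch with illustrative names):

    import Data.List (unfoldr)

    -- Catamorphism: fold a structure down to a summary value.
    total :: [Int] -> Int
    total = foldr (+) 0

    -- Anamorphism: unfold a structure up from a seed value.
    countdown :: Int -> [Int]
    countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))
    -- countdown 3 == [3,2,1]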
Lots of useful generic code. `mapM` is a version of `map` that works with any monad, `sequence` works with any monad, and so on. These are used very frequently.
But the bigger benefit is when syntax sugar like `do` notation comes in. Because it works for any Monad, people can write their own Monads and take advantage of the syntax sugar. That leads to an explosion of creativity unavailable to languages that "lock down" their syntax sugar to just what the language designers intended. In other words, what requires a language change elsewhere can often be just a library in Haskell.
It's not about what a monad can do, it's about a property of the language: referential transparency. Haskell has referential transparency, Python doesn't. That's a technical condition but here's a simple consequence: effect typing. In Haskell you can know what possible effects an operation has from its type. Here's an example from my effect system, Bluefin:
foo ::
_ =>
Exception String e1 ->
State Int e2 ->
Eff es Bool
foo = ...
We know that `foo` produces a `Bool` and the only effects it can do are to throw a `String` exception and mutate an `Int` state. That's it. It can't yield anything to a stream, it can't make network connections, it can't read from disk. In order to compose these operations together, `Eff` has to be an instance of `Monad`. That's the only way `Monad` turns up in this thing at all.
So, that's what you get in Haskell that Python doesn't give you.
Those annotations create a compile-time enforced type-level relationship between the input and output values of the function. Python doesn't have mandatory type-checking, so it can't do that. But parent wasn't referring to type-checking. They claimed monads have expressive power unavailable in other languages.
> Those annotations create a compile-time enforced type-level relationship between the input and output values of the function. Python doesn't have mandatory type-checking, so it can't do that
Python doesn't have (mandatory) compile time type checking, no, but in principle a dynamically typed language could still be referentially transparent, and then it would (or at least could) still be the case that the only effects that a particular operation can perform are those arguments that are passed into it.
> But parent wasn't referring to type-checking. They claimed monads have expressive power unavailable in other languages.
That's true. But think of other syntax sugar like async/await. That comes for free in Haskell with monads and do notation.
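A sketch of that, assuming the `async` package: plain do notation already gives the async/await shape other languages needed new syntax for.

    import Control.Concurrent.Async (async, wait)

    fetchBoth :: IO (String, String)
    fetchBoth = do
      a <- async (readFile "one.txt")   -- start both tasks concurrently...
      b <- async (readFile "two.txt")
      x <- wait a                       -- ...then "await" their results
      y <- wait b
      pure (x, y)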
Monad is a weird type that a lot of languages can't properly represent in their type system. However, if you do what dynamically-typed scripting languages do, you can do any fancy thing that Haskell does, because it is weakly typed in this sense. (The sense in which Python is "strongly typed" is a different one.)
What you can't do is get the guarantees Haskell gives you by blocking the type-unsafe things, like making sure that calling "bind" on a list returns a list and not a Qt window or an integer or something.
> Monad is a weird type that a lot of languages can't properly represent in their type system.
While true, a lot of FP-inspired libraries in the majority of languages that don't have HKT will just implement one or several specific monads, plus the common operations on them. This creates some redundancy and slight inconsistency, but often the shared vocabulary is still strong enough to carry expectations around, even if it's not explicitly enforced by the type system. That's how you can have sequence(): List<Either<L,R>> -> Either<L, List<R>> in Kotlin, for example.
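For reference, the Haskell function that Kotlin signature mirrors (a sketch; `collect` is an illustrative name):

    -- sequence :: (Traversable t, Monad m) => t (m a) -> m (t a)
    collect :: [Either l r] -> Either l [r]
    collect = sequence

    -- collect [Right 1, Right 2]    == Right [1,2]
    -- collect [Right 1, Left "err"] == Left "err"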
Even in Scala, where you actually can define a monad typeclass (trait), there are very popular libraries like ZIO that effectively give you a monad without actually adhering to any Monad trait. I believe they do this for type inference reasons.
Haskell monads have been described as "programmable semicolons" because they specify ways to interpret "do" blocks.
In some sense they are a little bit similar to Python classes. A Python class is a block of code, which runs in a normal Python way, and then the variables put in scope by that code are passed to a metaclass constructor which creates some kind of object based on them. Monads are nothing like that, but they are similar in that user code is interleaved with framework code to produce an effect similar to a DSL. Monads run one statement at a time, interleaving one statement execution with one monad join operation.
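A small illustration of the "programmable semicolon" point: the same do-block shape, two different sequencing behaviours (names are illustrative).

    -- List monad: each step tries every combination.
    pairs :: [(Int, Int)]
    pairs = do
      x <- [1, 2]
      y <- [10, 20]
      pure (x, y)        -- [(1,10),(1,20),(2,10),(2,20)]

    -- Maybe monad: the first Nothing aborts the rest of the block.
    safeSum :: Maybe Int
    safeSum = do
      x <- Just 1
      y <- Nothing
      pure (x + y)       -- Nothing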
This may be a dissenting opinion, but... Haskell tried to avoid mutable state. "Local state manipulation" was not really a thing you could do in Haskell, deliberately. Then someone figured out that you could (ab)use a monad to do that. And because that was the only way, whenever they needed to manipulate state, Haskell programmers reached for a monad.
So it's not "what can a Haskell monad do that a Python class cannot". It's "what can a Python class do in a straightforward way that Haskell has to use a monad for, because Haskell put the programmer in a straightjacket where they couldn't do it without a monad". It's basically a pattern to get around the limitations of a language (at least when it's used for state).
This is not historically how Haskell was developed. Haskell didn't try to "avoid mutable state". Haskell tried to be (and indeed succeeded in being) referentially transparent. Now, it turns out that you can't uphold referential transparency whilst having access to mutable state in the "traditional" way, but you can access mutable state if you introduce monads as a means of structuring your computation.
So, they're certainly not a means of getting around a limitation of the language. If it was just a limitation that limitation would have been lifted a long time ago! It's a means of preserving a desirable property of the language (referential transparency) whilst also preserving access to mutable state, exceptions, I/O, and all sorts of other things one expects in a normal language.
But historically, wasn't there a fair period of time between Haskell insisting on referential transparency (and therefore not allowing traditional mutable state) and monads being introduced as a way to deal with it? That was my understanding of the history.
And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
> But historically, wasn't there a fair period of time between Haskell insisting on referential transparency (and therefore not allowing traditional mutable state) and monads being introduced as a way to deal with it? That was my understanding of the history.
Yes, although there were solutions in the meantime. I/O was performed in the original version of Haskell through input-output streams and continuation passing style. It turns out that both approaches could have been given monad interfaces if "monad" as an abstraction had been understood at the time, but it wasn't, so they had ad hoc interfaces instead.
> And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
I mean, sort of, but that seems more of a judgement than a fact. Would you say that function calls in C were a way to "get around the limitations imposed by not allowing global jumps"?
In both cases I'd just say they're a useful abstraction that lets you achieve a well-specified goal whilst preserving some desirable language property.
The truth is that it’s not a very useful abstraction in and of itself.
You can build some generic tooling on top of monads and applicatives and that tooling is useful and can give familiarity to new data structures but objectively that’s true mostly because monads are so common in Haskell. Thinking monads are common for this reason is reversing cause and consequence.
So why are monads so prevalent in Haskell, you will ask. Because there is sugar to make their use easy. And why is there sugar? Because I/O uses a monadic interface. That was Haskell's new idea: you can easily keep track of side effects with the type system if you use a monadic interface and some sugar.
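Concretely, the sugar is a mechanical rewrite into >>= (a minimal sketch):

    -- This do block...
    echo :: IO ()
    echo = do
      line <- getLine
      putStrLn line

    -- ...desugars to this:
    echo' :: IO ()
    echo' = getLine >>= \line -> putStrLn line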
> If all monad instances work differently, what is the value of the Monad interface? What kind of useful generic code can one write against the Monad interface?
Code that composes a bunch of operations, for whatever kind of composition those operations need (some people call Monad "programmable semicolons", because it's a lot like sequencing). E.g. traversals of datastructures, or some kind of "do this in a context" operation. Essentially any function you pass a "callback" to should probably be written in terms of monads so that it can accept callbacks that need different kinds of composition beyond just being called at different points in the control flow.
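A sketch of that shape (`forEach` is an illustrative name; it's really just Control.Monad's forM):

    import Control.Monad (forM)

    -- Takes a monadic callback, so the caller chooses the kind of
    -- composition: IO for effects, Maybe for failure, and so on.
    forEach :: Monad m => [a] -> (a -> m b) -> m [b]
    forEach = forM

    -- forEach [1,2,3] print                                       -- IO
    -- forEach [1,2,3] (\x -> if x > 0 then Just x else Nothing)   -- Maybe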
As I so often do, I find it helpful to analogize Monad to Iterator for questions like these, because it's a typeclass/interface/etc. that people are more used to and does not have that aura of "if I feel like I understand it I must not understand it" attached to it that blocks so much learning.
You extremely often use iterators in a context where there's no way you could usefully slot in just "any" iterator and have some useful code. Suppose you have an iterator that iterates over the links that appear in an HTML document, and write some code to fetch the HTTP resources so referenced. Well, obviously, "writing against the iterator interface" doesn't do you any good in that case. It's not like you can slot in an iterator that iterates over prime numbers to such code and get anything out of it.
What you can do with the Iterator interface is provide extremely generic tools that can be used against any Iterator, like, take the first x, skip every other one, reverse the iterator list (if finite and for a price), filter the results against a type-specific acceptance function, all kinds of things: https://docs.python.org/3/library/itertools.html These tools do not depend on the details of what the iterator is or how it works, only that it is one. In this case you might even use something as powerful as "give me an iterator and a function to run against the value that comes out of the iterator and I will run it in a parallel map and limit the number of workers and handle errors in this specific way", but all that code has no specific knowledge about URLs or fetching things from the web or anything like that. It just knows it has an iterator and a matching function for the value coming out.
Similarly, "writing to the Monad interface" gives you access to a wide variety of tools that work across all things that implement the monad interface: https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr... What exactly they do depends on the underlying monad implementation. It happens that they turn out to be very useful in practice a lot of the time.
You can also create new compositions of the tools that only pay attention to the interfaces, like, "drop the first x values and then filter the rest" for an iterator, though often the libraries ship with the vast majority of what you need.
Writing against the interface specifically, you can only use exactly what is in the interface. But you also have the concrete types to work with, with whatever it is they do. Just as you can't really do much "real work" against just "something that provides a next value" when you have no idea what that value is, yet iterators are very useful with specific types, monads are the same way.
(You can then later work up to code that allows swapping out which monad you are using depending on how it is called, but I prefer to start here and work up to that.)
This is a cool example but I think it is missing the perspective of what the interface can abstract. For example if I program a data structure to provide an Iterator I get to use these itertool functions for free no matter how complex the data structure is underneath.
The trouble I have with monads is that what you get for free doesn't seem very exciting. It feels like I'm stuck in the world of a particular monad like State or Promises, and then to do anything remotely useful you have to bring in all of this monad transformer machinery to switch worlds again.
Actually, it sounds to me like you largely have it.
"The trouble I have with Monads is that what get for free doesn't seem very exciting."
I think there's a lot of truth to that, actually.
One of the persistent myths about "monad" is that it somehow "adds" to a datatype: that the datatype was able to do X and Y, but now that it's a monad it can do X and Y and Z and M and N. But that's not true. Monad is an interface that can be implemented on things. Once you implement it, you get a lot of little tools, but individually none of them are necessarily mindblowing, and pretty much by definition it can't be anything the data type couldn't already do.
(Likewise, I'd suggest that what you get with iterator isn't really all that "exciting" either. Useful, oh yes beyond a shadow of a doubt. But it's not exciting. Iterator qua iterator doesn't let you do anything you couldn't do without it.)
The convenience comes in that they're now the same across all monads. mapM does what it does and you no longer need to consult the specific type you are currently using for what it does, and so on for each thing.
If one "removed" monad from Haskell, that is actually what would happen. It's not the Haskell wouldn't be able to do any fewer things. It's just that you'd have to consult each data type for these functions, and they'd be named different things. (And you wouldn't be able to abstract over these operations in different datatypes without basically rebuilding monad in the process.)
I think the standard for "knowing" monad isn't that you can type a bit of a do block and get it to do something, or that you can understand what a particular block of code using list as a monad does; it's when you completely naturally are programming along in Haskell, and you realize "Hey, I've typed
do
x <- something1 arg1
y <- something2 x
z <- something3 y
t <- something4 z
out, I bet there must be something to do that in Control.Monad" and you go and look it up and find out that yes there indeed is, and add >=> to your bag of tricks.
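For a concrete version of that chain (a sketch with illustrative names standing in for somethingN):

    import Control.Monad ((>=>))
    import Text.Read (readMaybe)

    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    -- The something1/something2/... chain collapses into Kleisli composition:
    parseThenHalveTwice :: String -> Maybe Int
    parseThenHalveTwice = readMaybe >=> half >=> half

    -- parseThenHalveTwice "12" == Just 3
    -- parseThenHalveTwice "6"  == Nothing   (3 is odd)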