Hacker News | still_grokking's comments

What the parent describes is pretty simple: it's just how the compiler transforms some code.

Do-notation in Haskell and for-comprehensions in Scala are just syntactic sugar for nested calls to `flatMap`, `map`, and (in Scala) `withFilter`.

I think this article shows it nicely:

https://www.baeldung.com/scala/for-comprehension#for-compreh...

In Scala you can add the needed methods to any type, and then it will "magically" work in for-comprehensions. In Haskell you need to implement a Monad instance, which then does the same trick.
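To sketch that in Scala (with a made-up minimal `Box` type, not any particular library): anything with `map` and `flatMap` of the right shapes participates, because the compiler mechanically rewrites the comprehension into calls to those methods.

```scala
// Minimal made-up wrapper type; any type with these method shapes
// participates in for-comprehensions.
final case class Box[A](value: A) {
  def map[B](f: A => B): Box[B] = Box(f(value))
  def flatMap[B](f: A => Box[B]): Box[B] = f(value)
}

object ForDesugaring {
  def main(args: Array[String]): Unit = {
    // The for-comprehension...
    val sweet: Box[Int] =
      for {
        a <- Box(1)
        b <- Box(2)
      } yield a + b

    // ...is mechanically rewritten by the compiler into:
    val desugared: Box[Int] =
      Box(1).flatMap(a => Box(2).map(b => a + b))

    assert(sweet == desugared) // both are Box(3)
  }
}
```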

The concrete implementations of these methods need to obey certain algebraic laws (the "monad laws") for the data structure that defines them to be called a monad. But that's pretty much it.

In my opinion, all the Haskell in most "monad tutorials" just obscures a concept that is in principle very simple.

The in-practice relevant part is that a monad can be seen as an interface for a wrapper type: it has a constructor that wraps some value (whether a flat value, some collection, or even a function makes no difference), it does not expose an accessor to this wrapped value, and it has a `flatMap` method. It also inherits a `map` method from an interface called "Functor". Such a type usually also comes with a `combine` method, which takes another object of the same type and returns a combination of, again, the same type; the classical example is string concatenation, if we say that `String` implements that interface. (In libraries like Cats this `combine` interface is called "Semigroup"; "Applicative" is the related interface between Functor and Monad that adds the wrapping constructor.)
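A rough Scala sketch of these interfaces (simplified and hypothetical, not the actual Cats or Scalaz definitions):

```scala
// Simplified sketch of the interfaces described above.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

trait Monad[F[_]] extends Functor[F] {
  def pure[A](a: A): F[A] // the wrapping constructor
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  // map comes "for free" from pure and flatMap:
  def map[A, B](fa: F[A])(f: A => B): F[B] =
    flatMap(fa)(a => pure(f(a)))
}

// The combine-things-of-the-same-type interface; in Cats this
// one is called Semigroup.
trait Semigroup[A] {
  def combine(x: A, y: A): A
}

object StringConcat extends Semigroup[String] {
  def combine(x: String, y: String): String = x + y
}
```

Here `StringConcat.combine("foo", "bar")` yields `"foobar"`.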


Scala's new effect library Kyo just uses `map`. See:

https://getkyo.io/#/?id=the-quotpendingquot-type-lt

All pure values are automatically lifted into the Kyo monad, so `map` is effectively `flatMap`.

From the linked docs:

> This unique property removes the need to juggle between map and flatMap. All values are automatically promoted to a Kyo computation with zero pending effects, enabling you to focus on your application logic rather than the intricacies of effect handling.

In the end it makes a lot of sense, I think. What you do is manipulate values inside some wrapper. Whether this wrapper is a monad or not shouldn't matter: just do something with the value(s) inside, and that's mapping.
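A toy illustration of the idea (emphatically not the real Kyo API): with a Scala 3 union type, a single `map` can accept a function returning either a plain value or an already-wrapped computation, lifting the plain case automatically.

```scala
// Toy wrapper, not Kyo. The function passed to `map` may return
// either a plain B or an Eff[B]; plain values get lifted, so one
// method does the job of both map and flatMap.
final case class Eff[A](run: () => A) {
  def map[B](f: A => (B | Eff[B])): Eff[B] =
    Eff { () =>
      f(run()) match {
        case e: Eff[?] => e.asInstanceOf[Eff[B]].run() // already a computation
        case b         => b.asInstanceOf[B]            // pure value, lifted
      }
    }
}

@main def kyoLikeDemo(): Unit = {
  val viaPure = Eff(() => 1).map(_ + 1)                 // function returns Int
  val viaEff  = Eff(() => 1).map(a => Eff(() => a + 1)) // returns Eff[Int]
  assert(viaPure.run() == 2 && viaEff.run() == 2)
}
```

(This toy breaks down if `B` itself is an `Eff`, which is why real libraries use more careful encodings.)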


> the user is responsible to obey the license of the original work

The user is in this case the "AI" company.

They almost certainly already violated the law by copying stuff from the net in the first place. (Almost certainly, as by now it does not look like this was fair use, and there is no other means to make what they did legal.)


That's not true.

Some people (like the NYT¹) are in fact demanding in court the destruction of the trained models, as they obviously contain copyrighted content.

¹ https://pressgazette.co.uk/media_law/new-york-times-open-ai-...


But why should things work differently for "AI"?

One could of course say that "IP" ("intellectual property") is an ill-conceived concept altogether and abolish it. But then without exceptions! If it's not a valid concept in regard to "AI", it's not a valid concept in general. But if we want to keep that concept, "AI" needs to bend to the same rules that apply to everybody else. Simple as that. Law is universal. (At least it should be: everybody is equal before the law.)


As you say, "China" is here only a placeholder.

This "logic", that you need to do obviously illegal stuff (under current legislation) because otherwise someone else would do it and take the loot from that raid, remains highly questionable no matter what country you put in that placeholder.

The same "logic" would, for example, dictate that you should ignore patents, as otherwise "someone in China" could do it before you. Somehow nobody would agree that this is a valid approach in that case…


Huh? No, I'm saying if China builds the ability to teleport soldiers around a field, and we're stuck getting consent waivers from everyone because teleportation technically means you're being killed, we're gonna lose the war.

If you’re the leader of a nation, that’s the reality you need to work within. I really don’t care, as someone responsible for 330 million souls, if everyone is pretty equally robbed to support the whole nation. That’s like… taxes, man.


Rust? Since when is Rust the pinnacle of static type safety?

After working for some time with a language that can express even stronger invariants in its types than Rust (Scala), I no longer see that property as a clear win regardless of circumstances. I no longer think "stronger types == better, no matter what".

You have a price to pay for "not being allowed to make mistakes": explorative work becomes quite difficult if the type system is really rigid. Fast iteration may become impossible. (Small changes may require re-architecting half your program, just to make the type system happy again![1])

It's a trade-off. Like with everything else. For a robust end product it's a good thing. For fast experimentation it's a hindrance.

[1] Someone described that issue quite well in the context of Rust and game development here: https://loglog.games/blog/leaving-rust-gamedev/

But it's not exclusive to Rust, nor game dev.


> You have a price to pay for "not being allowed to do mistakes": Explorative work becomes quite difficult

This is a huge deal for me.

At the beginning of most "what if...?" exercises, I am just trying to get raw tuples of information in and out of some top-level-program logic furnace for the first few [hundred] iterations. I'll likely resort to boxing and extremely long argument lists until what I was aiming for actually takes hold.

I no longer have an urge to define OOP type hierarchies when the underlying domain model is still a vague cloud in my head. When unguided, these abstractions feel like playing Minecraft or Factorio.


I can't remember if I came up with this analogy or not, but programming in Rust is like trying to shape a piece of clay just as it's being baked.


> Explorative work becomes quite difficult if the type system is really rigid

Or to put it another way, the ease of programming is correlated with the ease of making undetected mistakes.


I'm not sure you tried to understand what I described.

As long as you don't know what the end result should look like, there are no "mistakes".

The whole point of explorative work is to find out how to approach something in the first place.

It's usually impossible to come up with the final result on the first try!

Once you actually know how to approach something, tools which help to avoid undetected mistakes in the implementation of the chosen approach are really indispensable. But before this general approach is figured out, too much rigidity is not helpful but a hindrance.

To understand this better, read the linked article. It explains the problem very well over a few paragraphs.


"Rust as both language and community is so preoccupied with avoiding problems at all cost that it completely loses sight of what matters, delivering an experience that is so good that whatever problems are there aren't really important."

Eh, I'm being mugged by Rust-lovers. But as soon as I read Dijkstra's snotty remark about how making mistakes is the opposite of easy (!?) I had an intuitive reaction of "balls". Maybe that was a mistake, but it came easy.


By chance I came just today across GitHub's SpecLang:

https://githubnext.com/projects/speclang/

Funny coincidence!

I leave it here for the nice contrast it creates in light of the submission we're discussing.


> isn't that just copilot "explain", one of the earliest copilot capabilities. It's definitely helpful to understand new codebases at a high level

In my experience this function is quite useless. It will just repeat the code in plain English. It will not explain it.


I was actually positively surprised at how well even qwen2.5-coder:7b managed to talk through a file of Rust. I'm still a current-day-LLM-programming skeptic but that direction, code->English, seems a lot safer, since English is ambiguous anyway. For example, it recognized some of the code shapes and gave English names that can be googled easier.


Haven’t tried copilot but cursor is pretty good at telling me where things are and explaining the high level architecture of medium-largeish codebases, especially if I already vaguely know what I’m looking for. I use this a lot when I need to change some behavior of an open source project that I’m using but previously haven’t touched.


AI can predict how some code behaves when run?

So AI can predict whether some program halts?

Seriously?


https://www.researchgate.net/publication/357418662_Profile_G...

What exactly do you think PGO data looks like? The main utility is knowing that (say) your error-handling code is cold and your loops are hot, which compilers otherwise have to guess at.

This is indeed unknowable in general but clearly pretty guessable in practice.


Well spotted! :)


Barely

