Minimal APIs at a glance in .NET 6 (hanselman.com)
152 points by ingve on Sept 13, 2021 | 111 comments


This is part of a larger push to make ASP.NET Core and .NET easier for beginners. The difference between Node.js + Express and ASP.NET Core used to be a lot of boilerplate just to return the first Hello World. That friction has been a real problem for .NET adoption.

Do not take three-line minimal APIs as the end product for robust, safe and reliable endpoints. We all know better than to trust three-line demos.

As a seasoned .NET programmer I have a love-hate relationship with this new set of features (there are more!): the simplicity for rapid prototyping (or beginners) vs. the organized OOP template. The .NET 6 release will be very controversial. .NET will not get worse, however a lot of (existing .NET/Java) people will complain about the new project/coding style attached to it.


I've recently started using the new .NET 6 templates and honestly I don't miss the OOP boilerplate. Yes it's a bit of a shock at first, but you quickly realize that having your ASP.NET Core startup code split into two files and three functions has no real-world benefit.

I'll still be using traditional controllers for my endpoints, however the syntactic sugar in C# 10 does improve the experience by reducing nesting and noise in general (file-scoped namespaces & implicit global usings).

Overall I'm pretty happy with it.


>code split into two files and three functions

You could always put multiple classes in a file if you wanted to organize that way. The new language changes just reduce nesting and boilerplate the IDE already took care of (which isn't nothing). The framework changes to focus on lambdas are pretty nice, though.


I'm happy to get rid of the "organized OOP template", which is really just a bloaty mess of unnecessary classes. Given that all the IoC and request binding systems are supported in these minimal APIs, I currently have no reason to believe I'll ever have to write a Controller class in .NET 6 again. And I won't miss it. The MVC pattern from ASP.NET MVC never did make sense for web requests.


I never considered this OOP ... they are just names to give some organization. I think the only OOP is the instantiation of the controllers ... which are instantiated per request => pointless ;).

So you are right, it's easy to transfer, but in the medium term we will see new file/naming-based structures showing up, once the first teams working with minimal APIs hit a massive number of endpoints.


> I think the only OOP is the instantiation of the controllers ... which are instantiated per request => pointless ;).

Which has nothing to do with OOP. That is IoC, and controllers are (by default) scoped per request, but they don't need to be; singleton controllers are fine (AddControllersAsServices). By the way, controllers by default do not get "built" by IoC: the helper ActivatorUtilities.CreateInstance&lt;T&gt;(IServiceProvider, Object[]) uses the container to resolve the controller's dependencies. If you add AddControllersAsServices, the IoC container is used directly, skipping ActivatorUtilities.CreateInstance (and the controller's IServiceProvider as created with the IServiceScopeFactory), and you can basically override the behavior however you want via IControllerFactory.
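
For illustration, a minimal sketch (not from the thread) of opting controllers into container-based activation with the .NET 6 hosting model:

    // By default controllers are activated via ActivatorUtilities;
    // AddControllersAsServices registers them in DI so the container constructs them,
    // and IControllerFactory can be replaced to customize this further.
    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddControllers()
        .AddControllersAsServices();

    var app = builder.Build();
    app.MapControllers();
    app.Run();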


The bigger issue for me will be the way that these kinds of things tend to split communities.

Cool new features don't do big enterprise devs much good if there's no path towards using them in our 20-year-old legacy applications that keep the business running.


If you have a server-side web app, the migration process from a legacy .NET Framework app to .NET 5/6 has gotten _a lot_ easier in the past few years.

The biggest barrier for most, in my experience, has been getting through the project file (*.csproj) changes needed. Even today, there is still no fully automated way to do it. Thankfully, after ~4 years of community complaints, Microsoft finally acquiesced and made a CLI tool that does about 80 percent of that work:

https://github.com/dotnet/upgrade-assistant

Before the release of this tool, you had to rely on community-made tools/guides or just rewrite the project file by hand to the new format. I inherited a project with over 600 large net4x csproj files; prior to this tool, Microsoft's official stance was "rewrite them by hand". This goes down like a cup of cold sick at many enterprise software shops with hundreds or thousands of project files to update.

I still think the upgrade process needs some more polish/wizards in the VS UI, for junior developers especially, but it's gotten much better. The vast majority of important third-party dependencies are net5-ready now in my experience, so often just replacing the csproj file and setting the target framework to net5 is enough.
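
For reference, the SDK-style format you end up with is roughly this (a generic sketch, not any particular project):

    <!-- Minimal SDK-style project file: the target framework plus whatever packages you need. -->
    <Project Sdk="Microsoft.NET.Sdk.Web">
      <PropertyGroup>
        <TargetFramework>net5.0</TargetFramework>
      </PropertyGroup>
    </Project>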

If you have a legacy WinForms/GUI app, however, and want the full cross-platform benefits of net5... the cost to move to a modern UI framework can be significant. CLI apps and web apps are by far the easiest to port to net5.


I don't think moving from legacy .NET Framework to modern .NET is the issue here. I have a web API application that has moved from .NET Framework 4.5 to .NET Core to .NET 5 and just updating for the newer framework has been the simplest part of each upgrade. The real problems come with the changes to the ASP.NET libraries and the changed expectations regarding how such applications are built.

This push towards the "minimal APIs" for .NET 6 may be great for trivial or just-bigger-than-trivial projects but it starts falling apart whenever you have something more complex. The setup class system of older ASP.NET editions, while decried as boilerplate, actually makes it easier to manage the configuration and structure of larger web app projects.

I don't expect to ever move any of the ASP.NET projects that I work on to these minimal APIs, and I expect that any new projects will quickly be adapted away from them, out of the new project templates that Microsoft ships.


When it is purely about ASP.NET, getting rid of the old style Startup class is ok for me.

The code is more compact and concise. Most of the startup code isn't touched in any meaningful way after the initial setup.


The old convention-based model for controllers was really difficult to understand and debug. You never knew exactly which routes were discovered by reflection. It was just too much magic and too many conventions.

I hope this will make it easier. Reminds me of the good old Nancy times.


More importantly, if the default is to depend heavily on conventions, MS should have put a big glaring warning/notification on the introduction page summarizing all the conventions used. It would have saved many beginners a lot of time.

I am an experienced programmer and was very frustrated having to search for all the conventions used in the framework.


Absolutely. I've wasted hours doing things that I'd solve in 10 minutes if I'd been shown the way Microsoft wanted me to do them. Instead I spend a ton of time trying to figure out why the new field on my model isn't binding correctly, because I didn't realize it needs to be a property.


That always drove me nuts. You need to learn all those conventions from the examples. And then you always got the strangest surprises and still needed to use attribute based routing.


Something that drives me crazy about the current ASP.NET Core documentation is that it is overly example based. I wish there were more comprehensive coverage of the subsystems.


Totally. Most of the time I don’t even check the docs, I go straight away to the source code on GitHub.


True story. We developers are an interesting species. Documentation is never sufficient for us, because we want to know how the sausage is made, down to the last if statement.

And for good reasons.


What about attribute based routing - that makes things nice and explicit?


It is still some kind of reflection magic.

Also, most frameworks have moved away from attributes and towards builders, for example Entity Framework or Microsoft.Extensions.DependencyInjection.


What's so bad about using reflection and attributes in this context?


1) Startup time: reflection can take a ton of time to run. It only needs to happen once, but especially as more and more web servers move to idle models where cold startup times matter, rather than just holding on to a warm service, it's not always as amortized as before.

2) AOT compilation support/size: relying on reflection APIs means that much more metadata/symbol data has to be preserved during an AOT compilation. This impacts the metadata that needs to be passed to the AOT pass and the final build size. Some web applications are interested in delivering full, streamlined AOT builds rather than typical .NET CLR applications (with JIT compilation).


Attributes have to be at one specific spot (before the method/class declaration).

Functions and builders you can compose. You can, for example, loop over something and add multiple routes programmatically. If you load the routes from a database, it's really hard to do with attributes: you would need to compile and load assemblies on the fly, or do some reflection black magic.
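
As a rough sketch of what the builder style allows (routeStore is a hypothetical repository, and Pattern/Content are illustrative properties):

    var app = WebApplication.Create(args);

    // Routes built up in a plain loop, e.g. from rows loaded out of a database;
    // attributes can't express this.
    foreach (var route in routeStore.GetCustomRoutes())
    {
        app.MapGet(route.Pattern, () => Results.Text(route.Content));
    }

    app.Run();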


"if you load the routes from a database"

Serious question (honestly not being snarky) - why would you want to do that?


I don’t know, nobody needs more than 640K of RAM, right?

Maybe you have multiple modules, and depending on the configuration the routing changes. Or feature flags that enable a breaking API change. Or a stupid customer that pays you a million dollars for custom routes.

But what benefit do the attributes bring? You need to learn them by heart; with builders, IntelliSense can help you.


Well yes - if you need some kind of dynamically configurable routing then attributes aren't a good fit. I like the fact they are there right next to the relevant method...


Another thing is that attributes make your code opinionated: you pull a framework into your code.

If you - for example - write one controller you want to use with multiple frameworks (ASP.NET Core or Nancy), you need one controller per framework, just for the attributes.

I always found them more annoying than helpful. Most people seem to see it that way. Most frameworks are moving away from attributes for a good reason.


CMSes do that! Any kind of editorial content being served is a use case.


By the way, in .NET Core you can use either attributes or builders to create your routes (and more), at least since (I think) 3.0. And if neither is a great fit, you can actually build your own routing by creating a middleware (the endpoint routing stuff is itself just a middleware).
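
A tiny sketch of hand-rolled routing as plain middleware, assuming the .NET 6 minimal hosting model:

    var app = WebApplication.Create(args);

    // Any middleware can answer a request itself before endpoint routing ever runs.
    app.Use(async (context, next) =>
    {
        if (context.Request.Path == "/ping")
        {
            await context.Response.WriteAsync("pong");
            return;            // handled; short-circuit the rest of the pipeline
        }
        await next();          // otherwise fall through
    });

    app.Run();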


Another difference is probably that attributes have to be added to the classes themselves, while builders can receive types through a method call. If you can't change the code, you can't use attributes.

Construction time is also probably the safest place to inject dependencies.


Still too much magic.


It never takes long when throwing together toy examples with these kinds of APIs to introduce a security risk:

    var uploads = Path.Combine(uploadsPath, file.FileName);
Where file.FileName appears to be drawn from the Content-Disposition header of the request. MS's own ASP.NET docs on file uploads say:

“Use a safe file name determined by the app. Don't use a file name provided by the user or the untrusted file name of the uploaded file.”
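
The safer shape (a sketch, not the article's code) generates the stored name server-side and treats the user-supplied name as untrusted metadata at best:

    // file and uploadsPath as in the snippet above; never use file.FileName for the on-disk path.
    var trustedFileName = Path.GetRandomFileName();
    var destination = Path.Combine(uploadsPath, trustedFileName);

    await using var stream = File.Create(destination);
    await file.CopyToAsync(stream);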

I’m always wary of APIs designed to look good in a DevRel evangelist’s presentation, to show how you can do something in ‘just a few lines of code’, when the reality is that actual applications will need to deal with a bunch more concerns anyway and this terse little API is not going to make one jot of difference to the essential complexity you’re trying to express.

Whether you have a Node/Express-style server where you're able to declare all your controller mappings inline using lambdas, or a Spring-style router where controllers are in separate classes and methods, makes no real difference to the fact that you still have to write the controller body, and in any real application it is not going to be handling everything from request body handling to file system operations in one place anyway.

At least when the controllers are all classes I can unit test them.


> At least when the controllers are all classes I can unit test them.

There is normally not such a big benefit in unit testing them, as controllers should only call some service behind them. That service you can unit test.

Controllers are better integration tested, with a mock service behind them. That way you actually test whether your routing and response codes are picked up correctly by the framework.

There is TestServer for ASP.NET Core. It gives you a very easy solution for integration testing; set up correctly, such tests are nearly as fast as plain unit tests.


If it has a mock service behind it, how is that an integration test?


You test the integration of your code with the web framework. An integration test doesn't have to be end-to-end; it tests the integration between multiple components. In this case that's your code integrating with ASP.NET Core.

A controller usually doesn't contain any business logic; it does the HTTP part. So you test serialization, model binding, return codes, etc.

So you use a mock to check whether the framework does the expected model binding: whether the values you post via JSON end up in the right properties for the service call.

A typical controller method would look like this:

   IActionResult StoreProduct(Product p) {
      var success = service.Store(p);
      return success
         ? Ok("Stored")
         : BadRequest("Product doesn't contain all necessary fields");
   }
There is no reason to unit test this controller, because the code is trivial. But it makes sense to integration test it and verify that the serialization and binding of Product work, and that the correct status codes and content types are returned. That's what could go wrong.

It is often easier to do that with a mocked service, but you can also use the real service for that. Often this would hit the database and make those tests much slower and harder to set up.
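
A rough sketch of such a test using WebApplicationFactory from Microsoft.AspNetCore.Mvc.Testing and xUnit (IProductService, FakeProductService and the /products route are illustrative names, and the app's Program class is assumed to be visible to the test project):

    using System.Net;
    using System.Net.Http.Json;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc.Testing;
    using Microsoft.AspNetCore.TestHost;
    using Microsoft.Extensions.DependencyInjection;
    using Xunit;

    // The fake replaces the real service so the test controls the outcome.
    public class FakeProductService : IProductService
    {
        public bool Store(Product p) => p.Name is not null;
    }

    public class ProductApiTests : IClassFixture<WebApplicationFactory<Program>>
    {
        private readonly WebApplicationFactory<Program> _factory;

        public ProductApiTests(WebApplicationFactory<Program> factory) =>
            _factory = factory.WithWebHostBuilder(builder =>
                builder.ConfigureTestServices(services =>
                    services.AddSingleton<IProductService, FakeProductService>()));

        [Fact]
        public async Task StoreProduct_WithValidJson_ReturnsOk()
        {
            var client = _factory.CreateClient();

            // Exercises routing, model binding and serialization end to end, in memory.
            var response = await client.PostAsJsonAsync("/products", new Product { Name = "Widget" });

            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }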


Interesting. I would still consider that a unit test (I don’t insist that a unit test must directly call the method it’s testing - it could be it needs to put it in some sort of test harness to get it executed)

Much how if a piece of code uses a regex, I wouldn’t mock the regex library when unit testing it, nor consider a test that uses the real regex library as an integration test that tests my code’s integration with the regex library, I wouldn’t consider my controller code’s interactions with a web framework as ‘integrations’ to test. They are part of how the unit is implemented.

But this is just a naming debate - I’m in agreement that that is a useful kind of test to write.

I just reserve ‘integration test’ for contexts where I am testing that multiple individually tested components of my own work together as they’re supposed to when combined - not that they integrate together correctly with something else I didn’t build.

There’s a danger if you go down that path of ‘testing the framework’.


> At least when the controllers are all classes I can unit test them.

I always had great difficulty testing ASP.NET MVC controllers, given their use of completely closed-off properties like HttpContext: sealed classes with cyclical dependencies.

Helper libraries made it somewhat easier to set them up but they were always heavy tests as a direct result of the large object graph needed.


But why? Generally you don't test controllers directly.

You test "service"/"handler" classes that handle those requests.

If you want to test controllers, then you write E2E tests that send HTTP requests.

Here's an example of how it might look:

https://docs.microsoft.com/en-us/aspnet/core/test/integratio...


If there's any critical behavior you're reliant on the controller (and, in frameworks like this, its attribute-driven configuration) to provide for you - throwing exceptions for bad requests, authentication or authorization, returning particular cache headers for particular response cases... I'd recommend looking for some way to write tests that, in isolation, go some way to let you assert those behaviors.

Don't bother writing the test case that asserts that, if the controller is called with a valid body, it calls the right service and passes on the request data. Do write the test case that asserts that, if it's called with a body that's way too big, it throws the appropriate 400 exception.

One of the biggest problems with web frameworks which let you configure a lot of stuff like caching, auth and parameter validation using attributes, DI, or magic syntax, is that often the only way you can verify those behaviors are in place is with integration tests. And integration tests for failure modes are hard to write, so they don't get written. Which means it's easy to deploy a service which, say, is supposed to respond with stale cached data if a dependency is unavailable - but it actually doesn't because nobody was able to express a test case that verified that the controller caching attributes actually work the way they thought it did.


For Selenium tests I've been setting up an actual instance of the application under test with a different Startup (e.g. SQLite instead of the real database) and running it on a different port.

This way you can test those fancy things, but I agree it's kinda "hard" to get it working the first time.


And wasteful. Starting up Selenium et al. is painfully slow compared to tests that run in under a second.


While I do agree, you don't have to run Selenium tests on your PC; let them be run by e.g. CI/CD.

It also depends on how many pages you want to test with Selenium: if you want to test e.g. 5 pages, then it'll be fast enough, I guess.

There are a few tricks/configurations that make Selenium significantly faster - https://stackoverflow.com/a/57720610

A lot of it also comes down to how you write your "testing infrastructure": maybe you could reuse the engine/browser instance, or execute tests in parallel?

There's also an alternative to Selenium from Microsoft: https://github.com/microsoft/playwright

AFAIK it's way more reliable than Selenium (fewer false errors).

__________

Overall you can still start the application inside tests with a different startup, without Selenium, and send HTTP requests, so you'll have fast end-to-end tests.


Relying on the CI server to run tests is even slower :) Note I am not saying "don't use CI"; it is still an invaluable tool.

The waste I am referring to is the time it takes to get that feedback while the work is being actively developed.

A simple feature of a controller (to borrow someone else's example) is to change the response based on an attribute's configuration. It is wholly astonishing that one needs a _web-server_ to test that.


Controllers still control, believe it or not.

I want tests to assert that I return a file type. There's a method on the base controller for doing that. It, too, is protected. So I need to wrap my controller just to test it.

Need to assert that the response differs based on the content of a header, or a cookie? Need to test the controller.

etc.

The most basic assertion I want tested is that the controller returns/uses the right view. That _should_ be trivial, but no. Instead I must rely on a super-heavy integration test, or worse, a Selenium or other test that requires a user agent and an actual web server, when all I want is to assert that "View X" is returned for "Scenario Y".


But there are multiple ways to return a file type. There are also multiple ways to define a route. You can name the method Item[] GetItem() or use attributes: [HttpGet] Item[] Items().

If you do integration testing, you verify what will actually be returned to the caller over HTTP. For that you don't need Selenium. Just use TestServer; it's just a few lines of C# code.


Unit testing and integration testing are not mutually exclusive. I use both. I also want the feedback from the tests/build of what I am developing to be as immediate as possible. Failing early with a test that takes a second to spool up and run, giving a message like "expected File got Content", is more valuable than waiting for a much longer test to give the same message (or a more obscure one, buried deep in the stack, that will need a debug session).

TL;DR: Running an entire webstack to test trivial things is waste.

> Just use testserver, it’s just a few lines of c# Code.

An orthogonal metric is orthogonal.

    await Task.Delay(TimeSpan.FromHours(1));
is just one line of C#. That doesn't mean it's going to run any quicker.


TestServer is great. And if you use Alba[1], it gets even better.

[1]: https://jasperfx.github.io/alba/guide/hosting.html


In ASP.NET Core the HttpContext is very mockable.
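
For example (ProductsController and FakeProductService are illustrative names, not from the thread):

    // DefaultHttpContext is a plain, constructible HttpContext, so controller code
    // that reads from it can be unit tested without any server.
    var httpContext = new DefaultHttpContext();
    httpContext.Request.Headers["X-Requested-With"] = "XMLHttpRequest";

    var controller = new ProductsController(new FakeProductService())
    {
        ControllerContext = new ControllerContext { HttpContext = httpContext }
    };

    var result = controller.StoreProduct(new Product { Name = "Widget" });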


And is very much appreciated.


> At least when the controllers are all classes I can unit test them.

So I've literally just jumped into this world, having never touched asp.net or similar before.

I looked into how to test the controller side of things, and came across this[1] post arguing that unit testing controllers isn't as useful as integration testing them.

Integration testing would definitely be useful, and something I will do, but I assume it's also a lot more time-consuming, so perhaps not something we could do on a per-commit basis.

Would highly appreciate views on this topic.

[1]: https://andrewlock.net/should-you-unit-test-controllers-in-a...


Unit tests are valuable if you are doing some sort of "computation". If all the code in question does is plug x into y, then unit tests are useless and integration tests are where the payoff is.

If you have code that both plugs and computes, you should refactor it so that computing and plugging don't happen in the same place. Doing so will reduce the amount of hard-to-test code.

In general controllers are plugging code not computing code, so integration tests tend to be the more appropriate choice.


Integration testing, as I'm used to it, takes less time to write than unit tests since you spend less time mocking things (mainly just mocking external requests). Of course the initial setup requires more time, but there are many ready-made examples you can copy from, for example the official documentation.

https://docs.microsoft.com/en-us/aspnet/core/test/integratio...


> At least when the controllers are all classes I can unit test them.

You can usually very easily transform lambas into functions, so testing shouldn't be a problem.


But if I'm not writing my controllers using inline lambda syntax, but as functions in classes in a controllers directory, what's the value of this minimal route declaration syntax over a 'heavierweight' solution that... also puts my controller declarations in methods of classes in a controllers directory?


Lambas, the famous Elvish mathbread?


>At least when the controllers are all classes I can unit test them.

Do what on "controllers"???

heresy



I do too! One thing I do hope happens is that Giraffe is able to use some of the newer APIs to simplify configuration. Config is generally something you stuff in a file and don't look at much, but it can sometimes feel kinda hairy.


Suave is a cleaner API in my view but Giraffe has better performance on the edges.


When I was doing F#, I loved both of these projects. I don't think we should always view ExpressJS/Sinatra (from Ruby) as "the standard", but there's also something to not having to do an hour of work before we can get endpoints serving responses.


Suave can do an API server in a few lines in an F# script file - I think it’s even terser than ExpressJS, which requires a package.json.


Is there any example of how this would work with dependency injection? Or is it expected that you manually call the service provider inside the lambda?

Or if I just add my interfaces to the lambda parameters, will they be automatically resolved?

edit: It supports DI using the `[FromServices]` attribute!

https://gist.github.com/davidfowl/ff1addd02d239d2d26f4648a06...

edit: Apparently this was in the article, but I overlooked it..


Just pass them as parameters for the lambda. Good Example: https://github.com/DamianEdwards/MinimalApiPlayground/blob/m...
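
A small sketch of both styles (ITodoService and InMemoryTodoService are illustrative names):

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<ITodoService, InMemoryTodoService>();

    var app = builder.Build();

    // Registered services are resolved automatically from the lambda's parameters...
    app.MapGet("/todos", (ITodoService todos) => todos.GetAll());

    // ...or you can be explicit about it with [FromServices].
    app.MapGet("/todos/{id:int}", ([FromServices] ITodoService todos, int id) => todos.Find(id));

    app.Run();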


The target audience for it, though, is probably us folks who want to remove dependency injection altogether. It was going against the grain before because the whole framework was built around Controllers and the part of the pipeline that had a monopoly on constructing them. With these changes it looks like it will be fully supported to write server-side HTTP code as if it were a computer program. Like you can in any other language. Better late than never!


That's interesting, I've been avoiding .Net for web projects because of the DI. Time to take another look.


I love this, not because of the "it's for beginners" framing, but from a "I want less boilerplate" framing. I look forward to a future .NET that gets less verbose each release, regardless of language chosen.


Same! Your Simple Programs proposal in that area is really great: https://github.com/dotnet/designs/pull/213

I am really looking forward to the day when I can use C# for all my scripting needs instead of Bash/Python/PowerShell. We’re getting there.


On the one hand, I've been using approximately this style for defining routes for years using NancyFx. It's easy, it's explicit, it's much better than either the attribute or convention routing ASP MVC and WebApi have used.

On the other, I hate how this is telescoping all the configuration and setup code into a blackbox. Any kind of real project, even minimal microservices, is going to have to deal with authentication and caching and logging and monitoring middlewares, and it's bad that these samples obfuscate the way that one would do that.


Things start getting a little verbose with everything on 1 line thrown inside a function.

  app.MapGet("/todos/{id:int}", [Authorize("AdminsOnly")] (int id) => "This endpoint is for admins only");
Attributes make things cleaner with everything having its own line. Also you lose the ability to group a bunch of methods in 1 controller together under the same authentication policy.


There is a multitude of ways in which you could format that code:

  app.MapGet(
      "/todos/{id:int}",
      [Authorize("AdminsOnly")](int id)
          => "This endpoint is for admins only");
Or for people who don't like expression bodied members:

  app.MapGet(
      "/todos/{id:int}",
      [Authorize("AdminsOnly")](int id)
      {
          return "This endpoint is for admins only");
      });
Extracted to a method:

  app.MapGet("/todos/{id:int}", GetTodos);

  [Authorize("AdminsOnly"]
  string GetTodos(int id) => "This endpoint is for admins only";
And more...

You could also put these methods in a class and put the attribute on the class, which would solve your "grouping" problem.


Translation for non-web-people: how to write a minimal web service in dotNET. I probably missed when the acronym "API" was hijacked by the web people to describe a custom web service protocol.


I think the point is that the .NET API for writing a minimal web service is now more minimal, thanks to being designed around lambdas instead of heavy use of OOP. The web server API is just an example of this style of .NET API.


Considering these are http wrappers around program A (the web service) that allows program B (a client) to interact with program A, I'd say API is the correct term for what this is.


Yeah, it's like 'front-end' by default means a bunch of js and html mashed together, rather than native UI.


Also the word "public" is missing. Cute examples, but I guess you're supposed to set up firewall for auth.


You can just add auth middleware in there. This is still full-fledged ASP.NET Core; they just wire it up using a different style. There is a chapter in the referenced gist about adding middleware.

But I guess you meant the example more than the actual tech.
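
A rough sketch of what that wiring looks like in the minimal hosting model (assumes the Microsoft.AspNetCore.Authentication.JwtBearer package; configuration details elided):

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer();
    builder.Services.AddAuthorization(options =>
        options.AddPolicy("AdminsOnly", policy => policy.RequireRole("admin")));

    var app = builder.Build();

    app.UseAuthentication();
    app.UseAuthorization();

    app.MapGet("/admin", [Authorize("AdminsOnly")] () => "Hello, admin");

    app.Run();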


If you are interested in following an architect on the .NET team at Microsoft, follow @davidfowl. I met him once IRL, and he is the most amazing person.

He is the most influential architect within the .NET team at Microsoft and basically created ASP.NET Core together with Damian Edwards as PM. For the non-Core edition you need to search higher up the ranks ... Scott Guthrie ... boss of Azure.


I don't know what to think about it. Half-baked idea: being able to use free functions in only one file per project is meh.

Modern Kotlin, Scala and F# can and do use free functions and reduce boilerplate to a minimum. The C# developers want to attract newbies, juniors, fans of JS, Python etc., and simultaneously not disturb the old users, fans of C#/Java and enterprise development.

"It's only for quick scripts, prototyping, for teaching". Choose a side, don't pretend that you are modern and cool.

Modern C# is fine, but some features are only sugar, and I'm still waiting for proper sum types with exhaustive pattern matching. Slow-progressing Java has this in version 17.


The real reason is to attract and onboard people onto the platform. Part of me always felt they should have just put a functional wrapper on their APIs, offered an API that avoids the mutative fluent interface, and then shown some F# examples. Especially for people outside the .NET ecosystem, it would show how terse, yet performant and feature-rich, the platform actually is. It would compare well to examples from other languages/ecosystems.

In the workplaces I've been in, newer developers typically try and then abandon the OO paradigm. They've seen they can get by without it and get decent performance, and there is a learning curve which experienced people have already paid for and then typically take for granted (patterns, DI, class structure, interfaces vs. abstract classes, etc.).


So basically this is expressjs for .NET.


best of both worlds


Nancy and Sinatra anyone? :)


As someone who isn't familiar with .NET, I wish this post had an example with a JSON request body and response.
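
Not from the post, but roughly what it looks like: complex parameters are bound from the JSON request body, and returned objects are serialized to JSON automatically. A sketch:

    var app = WebApplication.Create(args);

    // POST a body like {"id":1,"title":"Buy milk","done":false} and get JSON back.
    app.MapPost("/todos", (Todo todo) =>
        Results.Created($"/todos/{todo.Id}", todo));

    app.MapGet("/todos/{id:int}", (int id) =>
        new Todo(id, "Buy milk", false));

    app.Run();

    record Todo(int Id, string Title, bool Done);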



These sort of efforts are commendable.

I still don't like the state of parts of ASP.NET Core. 9/10 times I will just write my own Middleware and directly work with HttpContext. Blazor handles 99% of our former MVC concerns today, so we've basically done away with controllers.


Looking forward to this. I quite enjoyed C#, but the boilerplate of ASP.NET always turned me off.


Everything in a single file, without namespace brackets and other boilerplate. Good.


Shoehorning OOP into building HTTP APIs was the original sin IMO.


So now that they have gotten bored copying Java Spring, they start copying Express.js?


What would be the alternative?

Not integrate and adopt useful features and smart paradigms that other frameworks have?


So I guess we've had 'JavaScript but like C#', in the form of TypeScript; so I suppose now we get 'C# like JavaScript'? I must admit, I find this amusing.

I suppose this raises two questions: "Why use C# like this in place of Express or Koa on Node, which are idiomatically similar?", and "Why use C# like this instead of C# like that (the way all other ASP.NET Core apps do it)?"


They neither broke the type system to make it weaker nor did they stick a browser engine where it doesn't belong.


1. The web server component could be fronting a middle-tier written in C#. There are numerous advantages to doing this in .Net over Node.

2. Performance.

3. Native AOT is coming in 6.0. Meaning, you get a native platform specific app which can run without the framework - like golang. But with GC, so not like C/C++/Rust.


I don't get why people have always hated (tracing) GC. Unless you are working on a hard real-time system such as a DSP or audio synthesizer, having some kind of GC is always a boon.

Not only can you run code faster in some cases (by suspending the GC in critical code, doing plain allocation, and resuming the GC later, which is also how you make it more real-time compatible), it can also help curb fundamental security issues such as dangling pointers and use-after-free.

GC is also not to blame for the bloated size of your app. Nim [1], for example, is a language with a GC/runtime while being lightweight. Another honorable mention would be Haxe [2] for gamedev, where it can generate quite compelling C++ code whose binary size is only slightly bigger than what you'd normally get with C++, while needing far fewer lines of code. And it has a tracing GC.

So I do think a GC-enabled language can be like C/C++/Rust. Even if there is a binary-size and performance difference, it won't be huge. But the way it makes your program safer by a huge margin, and your programmers suffer less mental pain, makes having a GC a huge difference.

By the way, even smart pointers/move semantics and ownership are a kind of GC (leveraging linear/affine logic to ensure resources do not outlive their controllable phase, commonly called a lifetime). So most of the time what I mean by GC is more specifically tracing GC, where mark-and-sweep algorithms that usually need to stop the world are indeed a fundamental design problem, especially on the multi-core platforms that are increasingly common nowadays.

[1]: https://nim-lang.org [2]: https://haxe.org


There is definitely a terminology problem. "Automatic memory management" is a more precise term than "garbage collection". It refers to the more user visible trait, and pairs better with the antonym of "manual memory management" which is much more commonly seen. I would encourage people to use "AMM" as a term even if in their coding they do not use it. :-) "AMM" also begs the right questions such as "automatic -- in what sense, exactly?" It also tracks with the primordial C "auto" for stack variables (the automatically managed part of C - yes I am aware modern C++ repurposed it). It might even make some discussions have less talking past each other. This kind of oceanliner ship is hard to turn, though.

There are a few high-profile languages and environments (Java/Go/old Lisp with stop-the-world collectors, etc.) where (some) people struggle with "GC". These struggles often give AMM itself a bad reputation, while C stack variables (often coupled with what people call "value types" today) are often the way to be very, very fast (yes, mostly because of CPU-private stacks.. even so).

Nim is very fast and has many choices in AMM, from none, to Boehm-Demers-Weiser, to its own tracing variant, to the new extremely low-overhead ARC and ORC which have more Rust-like aspects (but copy in some cases to be safe, which is just about 10x easier to use in deviousmeters). With a TinyC/tcc backend Nim yields near interpreter/REPL-like edit/compile/test cycles. It's really a joy to write code in most of the time. People should look into Nim more.


I've been using C# since it was released, and I'm a huge fan.

But I can almost guarantee that native AOT doesn't appear with .NET 6 - they've been promising it for several years, and it's become Microsoft's Duke Nukem Forever.


Yeah, and the sad thing is that without NativeAOT there won't be the WebAssembly story we would like. The way it works now is way too lazy for anything real-world (compiling the runtime to wasm and loading DLL files...).


NativeAOT is supported in more places in .NET 6; there has been a tremendous amount of work to ensure that libraries are trimmer- and linker-friendly. But I believe ASP.NET is targeting .NET 7 for that.


I think the fact that this maintains type safety, IDE integration and other useful traits of modern C# like zero-allocation primitives gives it a leg up over TypeScript/JS in some cases.


This


Funny, that David ends this discussion.

Thanks!

PS: oh no, now I am ending it :)


It is about attracting new developers to .NET. Boilerplate does not help. So it boils down to what the cool kids consider simple: Express + Node.js => ASP.NET Core Minimal API.

Evolve or die. Looking at the releases over the last 5 years, C#/.NET is trying to evolve and stay relevant.

Do we need to like that: No. Is it necessary for .NET: Yes.


You still get C#/.NET, with its superior performance and type safety, but without all the OOP bloat.


Wait, you think this is not OOP?

    var app = WebApplication.Create(args);
 
    app.MapGet("/", () => "Hello World");
 
    app.Run();
You're creating an object and calling methods on it..


This is more functional. C# is a multi-paradigm language.

There is not a big difference between an FP module and an OOP static or single instance class.


I think the word OOP has lost all meaning and just means "crusty old code I hate".


In the OOP way you create controller classes. In the FP way you pass lambdas/functions.

So yes, it’s not 100% functional. But probably 80%?


You're creating an instance of an app class, calling methods on it, and passing objects in that method (string and lambda are objects).

I just feel like the term OOP can't win here. If it's concise people will just claim it's FP.


Of course there is still OOP involved. Also, most FP languages support objects to some extent, and records and their modules have some similarities to OOP. And FP can never be completely pure; there always has to be state.

But this approach is just more functional than the one before.


Also MapGet seems to be changing the state of the WebApplication object... which doesn't seem to be very functional?


The same can be done in pretty much any other language, it's just a HTTP server wrapper which maps specific URL paths to callback functions which return a HTTP response.

E.g. in Go: https://gobyexample.com/http-servers, or in Python via Flask: https://flask.palletsprojects.com/en/2.0.x/quickstart/#routi...


Because you don't want to work with a broken module system?

I'm talking about you, ESM, for all the headaches you caused me with TypeORM and NestJS. I really miss the proper design of Java packages and .NET namespaces.



