TurboTax and other tax-prep companies don't just lobby to prevent a free filing solution; they also lobby for more complicated tax laws to create a greater need for their product.
Literally every special interest group lobbies for more complicated tax laws.
"We need to incentivize [green tech|solar panels|electric cars|homeownership|child care|education|capital investments|...]" just translates to more complicated tax laws.
There is a material difference between all of those examples and Intuit though.
It is normal for governments to use incentives to drive their policy objectives. If they want to incentivize home ownership for example, the resulting increase in complexity of the tax code may be a reasonable trade off.
Intuit is just trying to get a government guarantee that their business model will continue to exist, without there being any positive public policy angle to compensate for the negative externality. In other countries free tax filing is the norm, because although they have the same types of incentives to promote policy objectives, their governments have not allowed themselves to become captive to someone like Intuit.
For example, I have filed my taxes myself (for free) in the UK for about 20 years now. It used to be a (simple) paper form. Then around 15 years ago it changed to web or paper (your choice), and now they strongly incentivise web over paper. For the last 2 or 3 years all the important details of my income and pension have already been filled out automatically from the record they get from my employer, and I only need to fill in the capital gains and charitable giving parts of my return.
What's the GP's argument? That Intuit lobbies? In 2024 that amount was ~$3.8M.
Even if 100% of that goes toward making taxes complicated, is their lobbying somehow driving the complicated tax code? And if we just tell them "cut it out", or have the government spin up its own TurboTax (something other private companies have not been able to do successfully at scale), will it somehow stop?
I don't get the point, other than that people don't like paying for stuff they think should be free. But my point is there is a real cost to helping people fill out their taxes, and I'd prefer that's done on the private side, as government isn't historically good at running tech companies. Either way you pay; it's just that with one you have a choice and it's explicit how much it costs, while the other is another murky, inefficient government-run agency.
The argument is that Intuit engages in corrupt practices to boost their profits at the expense of American taxpayers' time and money, by complicating the tax code and limiting the development of government-funded solutions. Your comparison to other forms of lobbying doesn't change that, and neither does sowing doubt about the efficacy of said lobbying; these practices are unethical and wrong regardless of your ideologically motivated justifications.
Well, the counterexample is when you have a program you know runs correctly but has to run many times. If all you have to do is run it through GNU parallel, that's a close-to-zero development cost.
How does this work under the hood? Does Ruby keep a giant map of all strings in the application to check new strings against to see if it can dedupe? Does it keep a reference count to each unique string that requires a set lookup to update on each string instance’s deallocation? Set lookups in a giant set can be pretty expensive!
Even if it didn't dedupe strings, mutable string literals mean that it has to create a new string every time it encounters a literal at runtime. If you have a literal string in a method, every time you call the method a new string is created. If you have one inside a loop, every iteration creates a new string. You get the idea.
With immutable string literals, string literals can be reused.
You make an arrow function that takes an object as input and calls another with a string and a field from the object, for instance to populate a lookup table. You probably don't want someone changing map keys out from under you, because you'd break resizing. So copies are being made to ensure this?
This would have `fooLit` frozen at parse time (think `fooLit = "foo"` versus a runtime-built `fooVar = "f" + "o" + "o"`). In this situation there would be "foo", "f", and "o" as frozen strings, and fooLit and fooVar would be two different strings, since fooVar was created at runtime.
Creating a string that happens to be present in the frozen strings wouldn't create a new one.
> How does this work under the hood? Does Ruby keep a giant map of all strings in the application to check new strings against to see if it can dedupe?
1. Strings have a flag (FL_FREEZE) that is set when the string is frozen. This is checked whenever a string would be mutated, to prevent it.
2. There is an interned string table for frozen strings.
> Does it keep a reference count to each unique string that requires a set lookup to update on each string instance’s deallocation?
This I am less sure about; I poked around in the implementation for a bit, but I am not sure of the answer. It appears to me that it just deletes it, but that can't be right, so I suspect I'm missing something. I only dig around in Ruby internals once or twice a year :)
There's no need for ref counting, since Ruby has a mark & sweep GC.
The interned string table uses weak references. Any string added to the interned string table has the `FL_FSTR` flag set on it, and when a string is freed, if it has that flag the GC knows to remove it from the interned string table.
The keyword to search for in the VM is `fstring`; that's what interned strings are called internally:
The way it works in Python is that string literals are stored in a constant slot of their parent code object, so at runtime the VM just returns the value at that index.
Though since Ruby already has symbols which act as immutable interned strings, frozen literals might just piggyback on that, with frozen strings being symbols under the hood.
They do, but that article also mixes “transpile” and “compile” often enough that it is near impossible to deduce what different meanings they might ascribe to each.
I believe there's no other allocator besides jemalloc that can seamlessly override macOS malloc/free the way people do with LD_PRELOAD on Linux (at least as of ~2020). jemalloc has a very nice zone-based way of making itself the default, and manages to accommodate Apple's odd requirements for an allocator, which have tripped up other third-party allocators trying to override malloc/free.
What are the advantages of this over using higher order functions? In Ruby I can do list.map { }.select { } …. That feels more natural (doesn’t require special language support), has a very rich set of functions (group_by, chunk_while, etc.), and is something the user can extend with their own methods (if they don’t mind monkeypatching)
LINQ is higher-order functions - Ruby `map` is `Enumerable.Select`, Ruby `select` is `Enumerable.Where`, etc.
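For example (a hedged sketch with made-up data), the Ruby chain `list.select { |x| x > 2 }.map { |x| x * 10 }` is spelled:

    using System;
    using System.Linq;

    var list = new[] { 1, 2, 3, 4 };
    // Where = Ruby's select, Select = Ruby's map
    var result = list.Where(x => x > 2).Select(x => x * 10);
    Console.WriteLine(string.Join(", ", result)); // 30, 40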
The special syntax is really just syntactic sugar on top of all this that makes things a little more readable for complex queries, because e.g. you don't have to repeat computed variables every time after binding them once in the chain. Consider:
from x in xs
where x.IsFoo
let y = Frob(x)
where y.IsBar
let z = Frob(y)
where z.IsBaz
orderby x, y descending, z
select z;
If you were to rewrite this with explicit method calls and lambdas, it becomes something like:
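(A hedged sketch: the compiler's actual lowering uses transparent identifiers, approximated here with named tuples.)

    xs.Where(x => x.IsFoo)
      .Select(x => (x, y: Frob(x)))                // carry x along with y
      .Where(t => t.y.IsBar)
      .Select(t => (t.x, t.y, z: Frob(t.y)))       // carry x and y along with z
      .Where(t => t.z.IsBaz)
      .OrderBy(t => t.x)
      .ThenByDescending(t => t.y)
      .ThenBy(t => t.z)
      .Select(t => t.z);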
Note how it needs to weave `x` and `y` through all the Select/Where calls so that they can be used for ordering in the end here, whereas with syntactic sugar the scope of `let` extends to the remainder of the expression (although under the hood it still does roughly the same thing as the handwritten code).
I tend to use the query syntax a lot for this exact reason.
It would be even better if it supported exposing pattern matching variables and null safety annotations from where clauses to the following operations, but I guess it's hard to translate that to methods.
Something like this:
from x in xs
where x is { Y: { } y }
select y.z
Another feature I'd like to see is standalone `where` without needing to add `select` after it like in VB.net.
Pattern matching shouldn't be hard to translate, really. It would not be a 1:1 mapping to the existing IEnumerable methods, but it's a straightforward translation to a single Select that returns a Nullable<ValueTuple> (to indicate match success/failure and pass the data in the former case) followed by a Where that would remove failures. Better yet, they could always add a new extension method to IEnumerable for this, and then translate to that.
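A minimal sketch of that translation, using hypothetical Outer/Inner types (not what the compiler emits today):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    var xs = new List<Outer> { new(new Inner("a")), new(null), new(new Inner("b")) };

    // `from x in xs where x is { Y: { } y } select y.Z` could lower to a Select
    // that encodes match success in a nullable tuple, then a Where dropping failures:
    var zs = xs
        .Select(x => x is { Y: { } y } ? ((Outer x, Inner y)?)(x, y) : null)
        .Where(t => t.HasValue)
        .Select(t => t.Value.y.Z);

    Console.WriteLine(string.Join(", ", zs)); // a, b

    record Inner(string Z);
    record Outer(Inner? Y);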
One feature I'd like to see is integration with foreach so that you don't have to repeat the variable and come up with a different name to work around shadowing rules. I.e. instead of:
foreach (var x in from x0 ...)
it would be nice to be able to write simply:
foreach (from x in ...)
and have it "just work", including more complicated cases with multiple nested from-clauses, let etc (effectively extending the scope of all of those into the body of foreach).
As mentioned, GroupBy etc. all operate on functions; map/reduce/filter/etc. are just named differently (Select/Aggregate/Where/etc.).
What makes people love Linq is that it handles two different cases with identical syntax but different backing objects:
1: The in-memory variant does things lazily: Select/Aggregate/Where produce enumerator objects, so after the chain you add ToArray, ToList, ToDictionary, etc., and the final object is built lazily with most of the chain executed on as few objects as possible (thus if you have an effective Where at the start, the rest of the pipeline will do very little work and very few allocations).
2: The compiler also helps the libraries by providing syntax trees: database Linq providers just translate the Linq to SQL and send it off to the server, letting the server do the heavy lifting _with indexes_, so that we can query tables of arbitrary size with regular C# Linq syntax very quickly, without most of the data ever going over the network. (Rough sketches of both points below.)
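A minimal illustration of the deferral in point 1 (made-up data):

    using System;
    using System.Linq;

    var primes = new[] { 2, 3, 5, 7, 11, 13 };

    // Nothing executes here; Where/Select just build wrapper enumerators.
    var query = primes.Where(p => p > 5).Select(p => p * p);

    // Only the terminal ToArray() drives the chain, one element at a time,
    // so Where drops elements before Select ever sees them.
    var result = query.ToArray(); // { 49, 121, 169 }
    Console.WriteLine(string.Join(", ", result));

And the mechanism behind point 2: the same lambda text yields either compiled code or a syntax tree, depending only on the declared type:

    using System;
    using System.Linq.Expressions;

    Func<int, bool> compiled = x => x > 5;          // IL you can invoke
    Expression<Func<int, bool>> tree = x => x > 5;  // data a provider can walk

    Console.WriteLine(tree.Body); // prints "(x > 5)"
    // an IQueryable provider walks such a tree and emits e.g. SQL "WHERE x > 5"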
Linq came out as part of a set of features that addressed the comforts of languages like Ruby. I don't know if they considered Ruby a threat at the time, but they put a bunch of marketing power behind the release of Linq. The way I understand it, as someone who jumped from C# to Ruby just around the time Linq came out, is that it's a DSL for composing higher-order functions as well as a library of higher-order functions.
I always liked how the C# team took inspiration from other language ecosystems. Usually they do it with a lot more taste than the C++ committee. I suppose the declarative Linq syntax gives the compiler some freedom of optimization, but I feel Ruby's block syntax makes higher-order functions shine in a way that's only surpassed by functional languages like Haskell or Lisp.
LINQ methods are higher-order functions? LINQ syntax is just sugar, and probably a design mistake (a dead-weight feature that I've only seen abused to implement monadic composition by insane people).
And Ruby doesn't even enter this conversation if we're talking about these kinds of optimizations - it's an order of magnitude away from what you're aiming for if you're unrolling sequence operations in C#.
> LINQ syntax is just sugar and probably a design mistake
I find that the term "LINQ" these days tends to mean the extensions on IEnumerable/IQueryable, and not the special query syntax. Whatever the term meant when it was launched is now forgotten. Almost no one uses the special query syntax, but everyone uses the enumerable/queryable extension methods like Select() etc., and calls it "Linq".
> "Whatever the term meant when it was launched is now forgotten"
Language INtegrated Query. The SQL query isn't written inside a string, opaque and uncheckable, it's part of the C# language which means the tooling can autosuggest and sanity check database table names and field names against the live database connection, it means the compiler is aware of the SQL data types without manually building a separate ORM/layer.
That people who don't use C# think it's just a Microsoft way to write a lambda filter on an in-memory list is sad.
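A hedged toy example of what "integrated" buys you (an in-memory stand-in for a table, with a hypothetical Customer type; a real provider would translate the expression tree to SQL):

    using System;
    using System.Linq;

    var customers = new[] {
        new Customer("Ada", "Oslo"),
        new Customer("Bob", "Bergen"),
    }.AsQueryable();

    // Checked C#, not an opaque SQL string: rename City on the type and
    // this query stops compiling.
    var names = from c in customers
                where c.City == "Oslo"
                select c.Name;

    Console.WriteLine(string.Join(", ", names)); // Ada

    record Customer(string Name, string City);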
Source? I'm in multiple current, active development projects with companies, they are all using the LINQ query syntax.
Not to mention all of the legacy code out there that is under active maintenance.
To say almost no one uses it feels very much antithetical to my (albeit anecdotal) experience. I can't imagine I'm the only C# consultant with multiple clients that use LINQ queries extensively throughout their applications.
To crudely estimate the popularity, I tried a regex search for C# statements ending with `select ___;` vs calls to `Select(`/`Where(`.
It was (only) a factor of 10x, which was way less of a difference than I thought, to be honest. I have seen and used query syntax in the wild just a handful of times during 22 years of C#. But it might also vary with industry. It's likely a lot more common in fields where there are databases than where there are not.
I'm actually surprised reading some of those LINQ queries: in the first few pages, none of them (IMO) should be using query syntax. Hell, I hate query syntax to begin with, and use the extension methods at any opportunity I'm allowed.
But I would also say that the companies using query syntax are also vastly underrepresented on GitHub public repositories.
> It's likely a lot more common in fields where there are databases than where there are not.
I think beyond that, it's probably more common with developers that came from doing SQL queries but "needed" type safety.
I've worked with developers that didn't even know the extension methods existed. They went from SqlCommand to LINQ to EF or LINQ to SQL.
The only time I think query syntax is better than the extension methods is when dealing with table joins.
> but everyone uses the enumerable/queryable extension methods like Select() etc, and calls it "Linq"
Most likely because those extension methods are all under the System.Linq namespace. Really they should've gone under System.Query or something like that.
IIRC it was to implement a constraint solver, which I couched in monadic terms somehow, don't remember the details. Not sure if I'd do it the same way again, but I did get it to work.
If it's some small isolated part or personal project I guess it doesn't matter - but I've seen a mature codebase that was started that way - and it was among the worst codebases I've seen in 20 years (of similar scale at least).
Few people even knew how to use it or what monads were; it was a huge issue when onboarding people. When the initial masochist who inflicted this on the codebase left and stopped enforcing the madness, half of the codebase dropped it and half kept it, while new people kept onboarding and squinting through it. This created huge pieces of shit glue code isolating the monadic crap everyone was too afraid to touch. Worst part was that even if you knew monads and were comfortable with them in other languages, they just didn't fit - and it made writing the code super awkward.
Not to mention debugging that shit was a nightmare with the Result + Exceptions - worst of both worlds.
It's basically writing your own DSL by repurposing LINQ syntax - DSLs are almost always a bad idea, abusing language constructs to hack it in makes it even worse.
Is this like the Karatsuba algorithm, where it's theoretically faster but not actually faster when run on real hardware?
Btw, it's worth noting that if you know that the result will be symmetric (such as is the case for X * X^T), you can make things faster. For example in cuBLAS, cublas*syrk (the variant optimized for when the result is symmetric) IME isn't faster than gemm, so what you can do instead is just do smaller multiplications that fill in one of the two triangles piece by piece, and then copy that triangle to the other one.
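Not what cuBLAS does internally - just the idea, in a plain, unoptimized sketch: evaluate only one triangle of C = X * X^T and mirror it, since C[i,j] == C[j,i]:

    // Computes C = X * X^T for an n-by-k matrix X, doing ~half the dot products.
    static double[,] XXT(double[,] x)
    {
        int n = x.GetLength(0), k = x.GetLength(1);
        var c = new double[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j <= i; j++)   // lower triangle only
            {
                double sum = 0;
                for (int t = 0; t < k; t++)
                    sum += x[i, t] * x[j, t];
                c[i, j] = sum;
                c[j, i] = sum;             // mirror into the upper triangle
            }
        return c;
    }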
They mention 5% improvements and small matrices in the abstract, so my gut says (I haven't read the actual paper yet) that it probably is a practical-type algorithm.
Since this is 2x2, there are SIMD instructions that can do this (or it can be done with two SIMD dot products), both on CPU and inside each GPU core. So with current hardware you won't beat writing this out manually.
I figured it might, but I think this is a top-of-mind question for people and would be nice to make clear in the comments of the post too. So often there's some theoretical improvement on multiplication that isn't actually practical. Regardless, they don't seem to have posted results for CUDA, which is arguably more important than CPU multiplication, which is what they tried.
Aren't there some scripting languages designed around seamless interop with Rust that could be used here for scripting/prototyping? Not that it would fix all the issues in that blog post, but maybe some of them.
I'm surprised this isn't getting more love. My experience with other debuggers and Rust has been quite poor; I hope this one fares much better. For example, I couldn't call functions with previous debuggers.