
If the contents of the PDF were any more vague, they could be replaced by whitespace.


To add to your comment: I think you could probably post an article of pure whitespace under the title "AI is going to kill [fill in the blank]" and it would at least initiate a discussion these days.


You could probably start a company with that as your mission statement and get a couple million in VC funding.


My morals keep me poor, it seems.


It's already about 80% whitespace.

I think it doesn't even pass the bar of an undergraduate research paper.


It's from a "totally independent, not-for-profit" research/consultancy/think tank which is "funded predominantly on a project to project basis". So this is a work product, implying all the constraints that go with that.


[flagged]


Idk about putting hidden messages into our minds or BOM chemtrails, but being part of a lizard people overlay network is plausible - https://arxiv.org/abs/2502.12710


It might've been written by an LLM. I'm not joking.


Nice! This comment has the potential to become the next "If my grandmother had wheels, she would have been a bike."


Can you tell us about the tech stack you used and why?


FYI newer builds of GCC have this functionality.


Note that you will always have this problem, because the data it is trained on has its own biases.


It is amazing how well community moderation works for Wikipedia, but it doesn't seem like the same model translates well to SO.


10/10


I wonder how this will affect latency.


Not really. For a trivial function-fitting problem, a KAN will let you visualise the contribution of each basis function to the next layer of your network. Still, these trivial shallow networks are the ones nobody needs to introspect. A deep NN will not be explainable using this approach.
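
A minimal sketch of what that introspection looks like on a toy problem (this is not a real KAN implementation, just an additive model in the same spirit; the target function and cubic basis are made up for illustration):

    import numpy as np

    # Toy target: y = sin(x1) + 0.5*x2^2  (made-up example)
    rng = np.random.default_rng(0)
    x1, x2 = rng.uniform(-2, 2, 500), rng.uniform(-2, 2, 500)
    y = np.sin(x1) + 0.5 * x2**2

    # One learned 1D function per input (here: cubic polynomials),
    # summed to form the output -- a KAN-like additive structure.
    def basis(x):
        return np.stack([x, x**2, x**3], axis=1)

    X = np.hstack([basis(x1), basis(x2)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    f1 = lambda x: basis(x) @ coef[:3]   # learned contribution of x1
    f2 = lambda x: basis(x) @ coef[3:]   # learned contribution of x2

    # Plotting f1 and f2 over a grid shows each input's curve directly;
    # once such functions feed into further layers of functions, that
    # per-edge readout stops being a global explanation.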


Yeah. I'm not sure if anything with millions or billions of parameters will ever be "explainable" in the way we want.

I mean, imagine a regular multivariable function with billions of terms, written out on a (very big) whiteboard. Are we ever really going to understand why it produces the numbers it does?

KANs may have an order of magnitude fewer parameters, but the basic problem is still the same.


Good points.

Personally I'm still basically with Geoff Hinton's early conjecture that people will have to choose whether they want a model that's easy to explain or one that actually works as well as it could.

I'd imagine the really big whiteboard would often be understandable in principle, but most people wouldn't be very satisfied at having the model go "Jolly good. Set aside the next 25 years in your calendar then, and tell me when you're ready to start on practicing the prerequisites!".

On the other hand, one might question how often we really understand something complex that has ostensibly been "explained" to us, rather than just glossing over real understanding. A lot of the time people seem to act as if they don't care about really knowing it, and just (hopefully!) want to get an inkling of what's involved and make sure that the process could be demonstrated not to be seriously flawed.

The models are being held to standards that are typically not applied to people nor to most traditional software. But sure, there are also some real issues about reliability, trust and bureaucratic certifications.


I came across "Learning XOR: exploring the space of a classic problem" the other day: https://www.maths.stir.ac.uk/~kjt/techreps/pdf/TR148.pdf

Even something with three units and two inputs is nontrivial to understand on a deep level.
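
For a concrete feel of how small a network that is, here is one classic hand-wired solution with exactly two inputs and three units (two hidden, one output) using step activations; the paper's point is that even characterising the space of all weight settings that solve this is surprisingly involved:

    # Hand-built 2-2-1 XOR network with threshold (step) units.
    step = lambda z: 1 if z > 0 else 0

    def xor_net(x1, x2):
        a = step(x1 + x2 - 0.5)      # hidden unit A: fires on OR
        b = step(x1 + x2 - 1.5)      # hidden unit B: fires on AND
        return step(a - b - 0.5)     # output: OR and not AND = XOR

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, xor_net(x1, x2))   # 0 0 0, 0 1 1, 1 0 1, 1 1 0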


> Are we ever really going to understand why it produces the numbers it does?

I would expect so, because we can categorize things hierarchically.

A medium-sized library contains many billions of words, but even with just a Dewey decimal system and a card catalog you could find information relatively quickly.

There's no inherent difficulty in understanding what a billion terms do, if you're able to just drill down using some basic hierarchies. It's just about finding the right algorithms to identify and describe the best set of hierarchies. Which is difficult, but there's no reason to think it won't be solvable in the near term.


KANs have an O(N^(-4)) scaling law, where N is the number of parameters. MLPs have O(N^(-1)) scaling or worse.

Where you need an MLP with tens of billions of parameters, you may only need a KAN with thousands.
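
A back-of-envelope illustration of what those exponents imply, if you take them at face value and ignore constant factors (which in practice matter a lot):

    # If error_MLP ~ N^(-1) and error_KAN ~ N^(-4), then matching the
    # error of an MLP with N parameters needs a KAN with roughly N^(1/4)
    # parameters (constants deliberately ignored here).
    n_mlp = 20e9            # "tens of billions" of MLP parameters
    n_kan = n_mlp ** 0.25
    print(round(n_kan))     # ~376 -- hundreds to low thousands once
                            # constant factors are accounted for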


I found these articles very interesting in the context of future ways to understand LLMs/AIs:

https://www.astralcodexten.com/p/the-road-to-honest-ai

https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...


When you have a tax bill like Alphabet's, these investments/write-offs seem less foolish.


Are they foolish otherwise?

They’re making 50,000+ rides per week already, with a very limited rollout.

They only have to match the cost and convenience of Uber in order to utterly dominate the market.

Once they fully solve self driving, which they inch ever closer towards doing, they can focus on cost reductions. Given that they don’t have to pay drivers, the potential profit margin is incredible.

They will be fully uncontested soon. The only possible competition is Tesla, which has very impressive models, but is still far away from deploying actual robotaxis.

I wish I could invest.


The total investment is now approximately $10B, against projected revenue of $50 million. Not great.

Yeah, they don't pay drivers, but their cars' hardware costs mean that they won't break even with human-driven taxis for quite some time (unless they manage to decentralise the computing).

To put some rough numbers to this, the sensor set in each car is probably ~$75K (it was initially $150K).
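
A crude way to frame the hardware-versus-driver trade-off; every number below is a placeholder, not an actual Waymo figure:

    sensor_suite_cost = 75_000     # per car, from the rough estimate above
    annual_driver_cost = 50_000    # hypothetical fully loaded driver cost

    payback_years = sensor_suite_cost / annual_driver_cost
    print(payback_years)           # 1.5

    # The real break-even is less favourable than this suggests: it
    # ignores the rest of the vehicle and compute cost, fleet
    # utilisation, remote operators, depots, and the ~$10B already sunk.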


Imagine thinking Alphabet views Waymo as a tax write-off.


It does. You can tax-loss harvest from companies that you own, so it makes sense to have a bunch of bets that are money pits.


Imagine thinking a 0.5% return on investment is a good use of money when treasury bonds return almost 5%.


> Imagine thinking a 0.5% return on investment is a good use of money

What is Waymo's valuation? How much has Alphabet invested in Waymo? $10 billion? What could they sell it off for today? $30 billion? More? What would the ROI be?

> when treasury bonds return almost 5%

And yet Alphabet invests in Waymo? I wonder why. Oh, that's right: Alphabet created Waymo as a tax write-off scheme. Absolute genius.

It's insane how ignorant HN commenters are when it comes to finance, investment, and technology. With every earnings report and every piece of financial news, it's the most ignorant who rant about nonsense most confidently. But then again, according to the geniuses of HN, Tesla, Meta, Bitcoin, etc. would have imploded years ago.


If it were publicly traded, it would have a P/E of 600.

I'll help you out, since you don't read: a normal P/E is 20-30. It is unequivocally not a great financial investment. It is effectively a write-off for the foreseeable future.
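
For the arithmetic behind that figure, using the rough numbers floated elsewhere in this thread (and charitably treating the ~$50M as earnings, when it is closer to revenue):

    valuation = 30e9             # hypothetical sale price mentioned above
    earnings = 50e6              # the ~$50M figure
    print(valuation / earnings)  # 600.0, versus a "normal" P/E of 20-30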

But that is not to say you shouldn't invest in difficult problems and try to solve them, even if they don't make money. This is one of them. I hope people do the same for other challenging problems.


P/E is a good measure for value stocks, where you're looking for consistent returns. Waymo still has to scale; the number means nothing.

Tesla, for example, had a P/E ratio of 1000 two years ago. OK, it's still potentially overvalued and the share price has declined, but its P/E ratio has dropped dramatically.


Waymo is 15 years old.


And it's nascent.


A common equation you will find in aerodynamics texts is:

Drag = 1/2 * fluid density * velocity^2 * C_d * Ref. Area

It approximates the drag experienced by objects as they move through a fluid (such as the atmosphere). You can see that drag is proportional to the square of velocity, so going twice as fast induces four times the drag.

Ergo, when you speed up, you produce a lot more drag. This will slow you down until you reach an equilibrium between thrust and drag (unless you apply more thrust).
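
A quick sanity check of the velocity-squared relationship, with made-up numbers (roughly a cyclist-sized frontal area at sea level):

    def drag_force(rho, v, c_d, area):
        """Drag equation: F_d = 1/2 * rho * v^2 * C_d * A."""
        return 0.5 * rho * v**2 * c_d * area

    rho = 1.225            # sea-level air density, kg/m^3
    c_d, area = 1.0, 0.5   # made-up drag coefficient and frontal area (m^2)

    f1 = drag_force(rho, 10.0, c_d, area)   # ~30.6 N at 10 m/s
    f2 = drag_force(rho, 20.0, c_d, area)   # ~122.5 N at 20 m/s
    print(f2 / f1)                          # 4.0 -- double the speed, 4x the drag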


So you're just agreeing with what I said:

> I think what he's getting at is that drag increases with the square of speed


