snickmy's comments | Hacker News

that's not exactly how public companies work :)

I mean ... 0.6% against a massive tech sell-off feels like a very well managed fund to me :D

This was inevitable and we'll see it playing out all over Europe.

You have a desire to be relevant in an important technological shift.

On one side, you have big tech companies laser-focused on attracting the best talent and putting them in a high-pressure cooker to deliver real business outcomes, under a leadership group that has consistently proven effective for the last XX years.

On the other side, you have universities, led by the remnants of that talent pool—those who were left behind in the acquisition race—full of principles and philosophical opinions but with little to no grounded experience in actual execution. Instead, you find a bunch of PhD students who either didn’t make the cut to be hired by the aforementioned tech companies or lack the DNA to thrive in them, actively avoiding that environment. Sprinkle on top several layers of governmental bureaucracy and diluted leadership, just to ensure everyone gets a fair slice of the extra funding.

I'm surprised anyone is surprised.


I don't think universities should become industry. I mean, that is exactly what we have industry for. If you want to be put in a pressure cooker under leadership focused on business outcomes, great, do industry.

The problem really is that universities are treated as if they have the same mandate as industry. Government people shouldn't tell a professor what kind of research is interesting. They should let the best people do what they want to do.

I remember an acquaintance becoming a professor, promoted from senior reader, and he was going to be associated with the Alan Turing Institute. I congratulated him, and asked him what he was going to do now with his freedom. He answered that there were certain expectations of what he would be doing attached to his promotion, so that would be his focus.

This way you don't get professors, you turn good people into bureaucrats.


Yes. The demand for increasing control, driven by the "taxpayer's money!" lot evident in this thread, strangles almost all state-funded research because it demands to know up front what the outcome will be. Which instantly forces everyone to pick only sure-bet research projects, while trying to sneak off to do actual blue-sky research in the background on "stolen" fractions of the funding. Like TBL inventing the WWW at CERN: that wasn't in his research brief, I'm sure it wasn't something that was funded in advance specifically for him to do.

Mind you, it was evident to me even twenty years ago when briefly considering a PhD that CS research not focused on applying itself to users would .. not be applied and languish uselessly in a paper that nobody reads.

I don't have a good answer to this.

(also, there is no way universities are going to come up with something which requires LLM like levels of capital investment: you need $100M of GPUs? You're going to spend a decade getting that funding. $10bn? Forget it. OpenAI cost only about half of what the UK is spending on its nuclear weapons programme!)


That doesn't sound like a fair appraisal of university research at all. How much do we rely on day to day that came out of MIT alone? A lot of innovation does come from industry, but certain other innovation is impossible with a corporation breathing down your neck to increase next quarter's profits.


The person you replied to is talking about the UK and Europe. I suspect that funding for research works differently at MIT and in the US generally.


Europe also seems to hand out PhDs like candy compared to the US (you can earn one faster, and you're less prepared for research), and there's a lot more priority put on master's degrees, which are largely a joke in the US outside a few fields like social work and fine arts.


European academia is not as uniform as in the US.

Where I'm from, master's was the traditional undergraduate degree. Bachelor's degrees were introduced later, but the society was reluctant to accept them. For a long time, the industry considered people with a bachelor's degree little more than glorified dropouts.

Our PhDs also used to take really long, being closer to a habilitation in some European countries than what is currently typical for a PhD. But starting in the 90s, there was a lot of pressure towards shorter American-style PhDs.

These days, the nominal duration of studies is 3 years for a bachelor's, 2 years for a master's, and 4 years for a PhD, but people usually spend at least a couple of years more. Which is pretty comparable to how things are done in the US.

The other end of the spectrum is the British system, where you can do a 3-year PhD after a 3-year bachelor's. But they also have longer PhD programs and optional intermediate degrees.


I would argue a European PhD prepares you for research better than a US one. You're expected to hit the ground running with required prior research experience and you have no classes or teaching obligations which explains why they're typically 3-4 years long.


US universities (the usual suspects) have a substantially different approach to industry integration than European ones.

Yet, European leaders have not got the memo, and expect the same level of output.


Your rhetorical question begs an answer -- I can't think of anything more recent than the MIT license.

What DO we rely on that has come out of MIT this century? I'm having a real hard time thinking of examples.


https://news.mit.edu/2024/fifteen-lincoln-laboratory-technol... Here are a number of things we will rely on presently.


I think when talking about university research output it's pretty clear that the objective of university research is to produce output that is much earlier in the stack of productisation than something that comes out of a corporate entity. The vast majority of university research probably won't impact people on a day-to-day basis the way that running a product-led company will, but that's not to say it isn't valuable.

Take mRNA vaccines for instance - the initial research began in university environments in the 80s, and it continued to be researched in universities (including in Europe) through the 00s until Moderna and BioNTech were started in the late 00s. All the exploratory work that led to the covid vaccine being possible was driven through universities up to the point where it became corporate. If that research hadn't been done, there would have been nothing to start a company about.

It's the same in computing - The modern wave of LLMs was set off by Attention is All you Need, sure, but the building blocks all came from academia. NNs have been an academic topic since the 80s and 90s.

I suspect that in 2050, there will be plenty of stuff being built on the foundations of work conducted in academia in the 00s and 10s.

I wouldn't expect to see that many groundbreaking innovations being useful in day-to-day life coming out of contemporary university research. You have to wait several decades to see the fruits of the labour.


Same; the best I could think of that they've done since the 90s is MIT OpenCourseWare


The problem is the "desire to be relevant in an important technological shift".

There's loads of worthwhile research to do that has nothing to do with LLMs. A lot of it will not or cannot be done in an industrial environment because the time horizon is too long and uncertain. Stands to reason that people who thrive in a "high-pressure cooker" environment are not going to thrive when given a long-term, open-ended goal to pursue in relative solitude that requires "principles and philosophical opinions" that aren't grounded in "actual execution". That's what makes real (i.e. basic) research hard and different as opposed to applied research. Lots of people in industry claim to be researchers or scientists who are anything but.


"actual execution" in the business world seems to be more and more synonymous with recklessly and incompetently fucking things up. See also: doge.


Shhh you’ll let the cat out of the bag, me and a whole crap ton of people put food on the table doing just that


they see, "actual execution", I see, "failed upwards in a ZIRP economy"


A lot of businesses basically just steal from their future selves in perpetuity until the interest they’ve accumulated is too great and they implode.

It’s the GM and Intel school of business. Constantly choose what makes money now, and avoid advancements and infrastructure. Wait, now the competition is 20 years ahead? So you have 20 years' worth of infrastructure to pay off, right now? Oh…

Everyone is just faking it until the chickens inevitably come home to roost. Then they walk away from the explosion and go to another company and say “see? Look how much shareholder value I made! Never mind that the company got destroyed shortly after I left!”


Yes, this is so telling:

> For example, neither the key advance of transformers nor its application in LLMs were picked up by advisory mechanisms until ChatGPT was headline news. Even the most recent AI strategies of the Alan Turing Institute, University of Cambridge and UK government make little to no mention of AGI, LLMs or similar issues.

Almost any organisation struggles to stay on task unless there's a financial incentive or another driver, such as exceptional staff/management in place. Give them free money - the opposite of financial incentive - and the odds drop further.


I’m sorry to read this — it just doesn’t feel grounded in my own lived experience.

Many of the best Engineering and Computer Science departments, around the world, operate a revolving door for people to go in and out of industry and academia and foster the strongest of relationships bridging both worlds.

Look at Roger Needham’s Wikipedia page and follow his academic family tree up and down and you’ll see what I mean.

https://en.m.wikipedia.org/wiki/Roger_Needham


> remnants of that talent pool—those who were left behind in the acquisition race—full of principles and philosophical opinions but with little to no grounded experience in actual execution.

I do believe that these people at universities do have experience in the actual execution - of doing research. What they obviously have less experience in is building companies.

> Instead, you find a bunch of PhD students who either didn’t make the cut to be hired by the aforementioned tech companies

Or because they live in a country where big tech is not a thing. Or because these people simply love doing research (I am rather not willing to call what these AI companies are doing "research").


Jesus… are you this judgmental about everyone in society? Some people just value the university environment. It doesn’t mean they’re incompetent and had no other options. Not everyone values money above all else, nor does choosing to opt out of the private sector mean people are “remnants”.


From my perspective it's almost exactly opposite. Almost all of the people I consider exceptionally talented are vying for positions in academia (I'm in mathematics), and the people who don't make it begrudgingly accept jobs at the software houses / research labs.

I'm frequently and sadly reminded when I visit this website that a lot of (smart) people can't seem to imagine any form of success that doesn't include common social praise and monetary gain.



"Academia is an absolute fucking cesspool of political corruption, soul crushing metrics gaming and outright fraud functioning mostly as a jobs program for nerds that only produces valuable science completely in spite of itself, not thanks to it, because it manages to trap some genuinely smart and hard working people there like a venus fly trap its prey and keeps them alive to suck more “citations” and “grants” out of them."

This is not my experience with academia. Rather, my experience was that a lot of very idealistic people tried to make their best out of the complicated situation set up by incompetent politicians.


> This is not my experience with academia

Glad to hear that. I shared this with my former academic friends and every one of them groaned in agreement.

Henry Kissinger said that academic politics were worse than White House politics.


Another point re: grounded experience: good professors/researchers make a point of taking sabbaticals to work in industry for that purpose.


Have met lots of professors who are glorified managers doing no actual research, taking sabbaticals for a fat paycheck. I doubt very much they do any real work during these sabbaticals either. If I had to guess, I would bet that these sabbatical positions are frequently sinecures.


Could be the case for some, but in the cases I know of, it was nothing like that - they took it seriously and used it as I said.


Went to the demo and played some basic counterpoint, probably some poorly recalled lines from Bach. I'm always surprised by how well the music of that time fits the instrument's capabilities.


This is a great way to put it. That said, it was not obvious to me that the attention space (as it is structured in LLMs) is a frequency domain.


I wrote down the following in our internal Slack chat on 01.06.2025, though of course actually carrying out the work is much more than writing it down.

Large language models (LLMs) operate in a high-dimensional token space, where tokens (words, subwords, or characters) can be viewed as discrete signals covering a multi-dimensional knowledge space. FFT methods can then map these token signals into the frequency domain, trading time-domain complexity for a frequency-domain representation with the aim of reducing computational cost. This transformation lets us analyze token dynamics, such as their frequency of occurrence, temporal correlations, and interactions across contexts, efficiently.

In this approach, embeddings are treated as signals, and their relationships in sequence are captured as patterns in the frequency domain. The FFT could decompose token streams into dominant frequency components, revealing periodic or recurrent patterns in language usage. These patterns repeat across human-generated knowledge and generally follow a predefined set of rules, so the signals are not just white noise; they are predictable. By emphasizing high-energy components in the frequency spectrum, predictions of the next token can be made while reducing noise and focusing on statistically probable outcomes.

Used this way, the method could cut computational overhead during training and inference by enabling lightweight spectral analysis rather than heavy attention mechanisms, especially for long-context or repetitive sequences. Classical signal-filtering techniques (LPF, HPF, band-pass) could also help align model behavior with human linguistic patterns, refine token embeddings, and improve efficiency in both training and inference.
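To make the "emphasize high-energy components" part concrete, here's a toy numpy sketch (my own illustration, nothing from an actual model): one embedding dimension over a sequence is treated as a 1-D signal, and keeping only the strongest frequency bins recovers the periodic structure while dropping most of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "token signal": a periodic pattern (recurring linguistic structure)
# plus noise, standing in for one embedding dimension over a sequence.
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
noisy = signal + 0.3 * rng.standard_normal(n)

# Move to the frequency domain and keep only the highest-energy bins.
spectrum = np.fft.rfft(noisy)
k = 8  # retain the 8 strongest frequency components, zero the rest
keep = np.argsort(np.abs(spectrum))[-k:]
filtered = np.zeros_like(spectrum)
filtered[keep] = spectrum[keep]

# Back to the sequence domain: the periodic structure survives,
# most of the noise does not.
denoised = np.fft.irfft(filtered, n)
err_noisy = np.mean((noisy - signal) ** 2)
err_denoised = np.mean((denoised - signal) ** 2)
print(err_denoised < err_noisy)
```

The whole pipeline is O(n log n), which is the appeal versus quadratic attention, though whether real next-token statistics are this spectrally sparse is exactly the open question.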


A cartoon:

To form a coherent idea you need to coordinate a lot of tokens. In other words, ideas are long-distance correlations between tokens. Ideas are the long-wavelength features of streams of tokens.

Is it exactly right? No. But as a cartoon it can motivate exploring an idea like this.


Right. This makes sense. But why Fourier space in particular. Why not, for example, a wavelet transform.


> Why not, for example, a wavelet transform.

That is a great idea for a paper. Work on it, write it up and please be sure to put my name down as a co-author ;-)


Or for that matter, a transform that's learned from the data :) A neural net for the transform itself!


That would be super cool if it works! I’ve also wondered the same thing about activation functions. Why not let the algorithm learn the activation function?


This idea exists (the broad field is called neural architecture search), although you have to parameterize it somehow to allow gradient descent to happen.

Here are examples:

https://arxiv.org/abs/2009.04759

https://arxiv.org/abs/1906.09529
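As a minimal sketch of the "parameterize it somehow" point (my own toy example, not how those papers do it): give the activation a shape parameter and let gradient descent tune it. Here a Swish-style activation x*sigmoid(beta*x) learns, via a numerical gradient, to sharpen toward ReLU when ReLU is the target.

```python
import numpy as np

def swish(x, beta):
    """Swish-style activation with a learnable shape parameter beta.
    beta -> 0 gives x/2 (near-linear); beta -> inf approaches ReLU."""
    return x / (1.0 + np.exp(-beta * x))

# Toy objective: tune beta so the activation matches ReLU samples.
x = np.linspace(-4, 4, 200)
target = np.maximum(x, 0.0)

def loss(beta):
    return np.mean((swish(x, beta) - target) ** 2)

beta, lr, eps = 0.5, 0.5, 1e-5
for _ in range(200):
    # Central-difference gradient; a real setup would use autograd.
    grad = (loss(beta + eps) - loss(beta - eps)) / (2 * eps)
    beta -= lr * grad

print(beta > 0.5 and loss(beta) < loss(0.5))
```

Same trick as making any hyperparameter continuous and differentiable; the NAS papers above generalize it to whole architectural choices.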


Mostly because of computational efficiency, IIRC; the nonlinearity doesn’t seem to have much impact, so picking one that’s fast is a more efficient use of limited computational resources.


Now you’re talking efficiency -- certainly a wavelet transform may also work. But wavelets tend to be more localized than FTs.
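Here's what I mean by "more localized", as a toy numpy sketch (hand-rolled one-level Haar transform, just for illustration): a single spike touches only a couple of wavelet coefficients, but every Fourier bin.

```python
import numpy as np

def haar_step(x):
    """One level of the (orthonormal) Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # averages (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # details (high-pass)
    return a, d

# A single localized spike in a length-16 signal.
x = np.zeros(16)
x[5] = 1.0

a, d = haar_step(x)
haar_nonzero = (np.count_nonzero(np.abs(a) > 1e-12)
                + np.count_nonzero(np.abs(d) > 1e-12))
fourier_nonzero = np.count_nonzero(np.abs(np.fft.fft(x)) > 1e-12)

print(haar_nonzero, fourier_nonzero)  # 2 16
```

So for token streams, the trade-off is roughly: Fourier is the natural basis if the correlations are globally periodic, wavelets if they come in localized bursts.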


This way you end up with time dilated convolutional networks [1].

[1] https://openreview.net/pdf?id=rk8wKk-R-


I like this. Anything that connects new synapses in my skull via analogy is a good post.


This is really a very interesting way of visualizing it.


Exactly. Exploiting the structure of the matrix (e.g., it is well approximated by a circulant matrix) is natural if there is structure to exploit. If everything in the preprint holds up, that might suggest some symmetries (e.g., approximate stationarity in time) in the data at hand.
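For anyone who hasn't seen why circulant structure matters: circulant matrices are diagonalized by the DFT, so the matvec collapses to elementwise multiplication in the frequency domain, O(n log n) instead of O(n^2). A quick numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
c = rng.standard_normal(n)  # first column of a circulant matrix
x = rng.standard_normal(n)

# Dense circulant matrix: each column is a cyclic shift of the first.
C = np.column_stack([np.roll(c, k) for k in range(n)])

# C @ x is a circular convolution, i.e. elementwise multiplication
# of the two spectra in the frequency domain.
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

print(np.allclose(C @ x, y_fft))  # True
```

That's exactly the symmetry point: circulant = shift-invariant, so the approximation holding up would indeed suggest approximate stationarity in the data.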


We still have some units, but they're available only in the UK.


UK closed its ports and airports or what?


I'm not aware that the new Nest has haptic feedback for the detents (those tiny steps as you rotate) to create force resistance.


Plus 1! Really well done website, where the 3D bits don't detract from the overall experience.


because the guy spent more time shipping the product than iterating on the website that describes the product. (Deadly sin of every startup wannabe)


what problem are you solving?

Discovery ? Ease of arrangement and management?

If discovery, you are fighting against word of mouth in a very cliquey group of people, who have plenty of other ways to exchange information that leads to discovery.

If it is ease of arrangement, you are fighting against SMS/text-messages.

TLDR: there is no business opportunity here, but it's a cool CRUD app to build and I'm sure you'll learn something in the process.

If you disagree with the above, do some user research with your target demographic. If you agree with the above, do some user research with your target demographic :)


Thanks, helpful response. I'm a high school student, and our school brings in students from anywhere between a 5-minute and an hour's commute. It's also not cool to take a bus as an upperclassman. If I'm someone from, say, a neighborhood with 1k homes going to a large high school with over 5k kids, I wouldn't know which of those kids live near me, and I wouldn't know their phone numbers when I lose my ride for the day. Plus we could reduce our carbon footprint through ride sharing with people who are driving towards the same destination.

My app will let kids from the same school see each other's requests to either offer a ride or ask for one, while ensuring safety. Thoughts?


> It's also not cool to take a bus as upper classmen.

Say you did something uncool. Others might follow. It is the best solution.


Thanks for your feedback.

