Hacker News | dvse's comments

This is already supported via listChanged. The problem is that >90% of clients currently don’t implement this - including Anthropic’s, https://modelcontextprotocol.io/clients

The model in question is "convince small and medium businesses that they need to buy my software in order to do things the same way as large companies X and Y, and so have a hope of remaining competitive". For Oracle, X and Y were banks, retail and logistics companies; for the new generation of "big data" vendors, it is Google and Facebook.


But the poster in question didn't say "X" and "Y", did they? This feels like an exercise in pedantry now, but he really did say "exactly" and "Google".


Ok, here's some "conversational language" insight:

He said: "Exactly this business model" -- that is, as it pertains to its essence.

NOT to be read as:

"Exactly this business model as it pertains to inconsequential details, like which big company they should be imitating".


I wonder what proportion of "pure" CS algorithms can be conceptualized as an application of one of the fixed point theorems and/or properties of monotone operators. There has got to be _something_ useful about the functional analysis perspective and people doing serious work in algorithms are certainly familiar with it as a group.
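To make the fixed-point angle concrete, here is a minimal sketch (my own illustration, not drawn from any particular text): graph reachability computed as the least fixed point of a monotone operator F(S) = {start} union {v : (u, v) in edges, u in S} on the lattice of vertex sets, via Kleene iteration.

```python
# Reachability from vertex 0 as a least fixed point (Kleene iteration).
# The edge set and start vertex here are arbitrary illustrative choices.
edges = {(0, 1), (1, 2), (2, 1), (3, 4)}

def step(reached):
    """One application of the monotone operator F."""
    return {0} | {v for (u, v) in edges if u in reached}

reached = set()
while True:
    nxt = step(reached)
    if nxt == reached:  # fixed point: no new vertices discovered
        break
    reached = nxt

print(sorted(reached))  # vertices reachable from 0
```

Because F is monotone on a finite lattice, the iteration is guaranteed to terminate at the least fixed point; many dataflow analyses and closure algorithms have exactly this shape.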


> I wonder what proportion of "pure" CS algorithms can be conceptualized as an application of one of the fixed point theorems and/or properties of monotone operators.

You might enjoy http://www.amazon.com/Graphs-Dioids-Semirings-Algorithms-Ope.... It has some cool ideas, but you'll first have to wade through a sea of abstract nonsense a la Bourbaki.


Have you checked the "Articles" section on betterexplained.com? Clearly the author of the article is trying to put his ideas into practice - not perfect, but rather helpful in fostering something like actual understanding of topics that are all too often presented as a sequence of rote manipulations (more or less all of K-12 and many "service" university courses follow this).


dvse, see my response to kalid. I thought "Math Wars" was a reference to a specific set of criticisms levied against Khan Academy recently, when perhaps it was not a reference to that at all. If so, the whole thing could be a misunderstanding on my part. Waiting for kalid to clarify.


Some really nice suggestions in the article that for whatever reason are not often brought up while discussing maths education.

A good way to bootstrap the "math avengers" website would be to get people to write up running commentary to some classic maths texts, e.g. Silvanus Thompson (out of copyright), Halmos, Rudin etc.

Essentially all high-school maths programs are, to put it politely, less than great, and are often created by people with a rather limited appreciation of the subject. Better university textbooks are not readily accessible without an instructor. Running commentary from several authors giving additional motivation, examples, clarifications or alternative derivations can be of great help to students, and a wikipedia-style platform would be a great way to organise such a project.

Sites like wikipedia and mathoverflow / stackexchange are great for specific questions but lack structure - centering the efforts around certain "canonical" texts can help to organise the material which otherwise would be overwhelming.


Thanks for the ideas. Exactly, something like running commentary / different approaches could work. I greatly prefer text to online videos [not sure why, I think I get impatient that I can't read at the exact speed I want?] and would go through those.

Wikipedia is self-described as a reference [not teaching tool], stackexchange is good for point-fixes [q&a], but some more "guided tour" could be useful. Especially to help appreciate math as an art/journey vs. pure problem solving.


As pointed out in the comments on blogspot, CS is hardly the degree with biggest focus on optimization. Try operations research, systems & control, economics or even the MBA!


If you don't already understand them, it's probably a good idea to skip the detailed derivations and look for the big picture. The course is really quite hard to follow closely if it's your first exposure to the material.


These "best paper" lists are almost comical and a very poor way to find important papers. Have a look at citation statistics for the older "winners", they are only marginally better than the average for the venue - essentially nobody still reads them today and they've had hardly any impact all.

To find _important_ papers you want at least 5-10 years of hindsight - look for those that are still being cited a lot correcting for citation rings, dubious journals/conferences etc. As a side benefit, these can almost always be found online on some course website without requiring IEEE / ACM subscriptions.


What's happening to hackernews? While there's still a handful of insightful comments and submissions, the culture feels like it's starting to shift with everyone having their noses in the air. I'm starting to use this site less and less, and it's a shame. Oh well.


How exactly is your rant relevant to the insightful parent, which also provides arguments for what it says?


How does one get a list of the most cited papers of the last 5 years? Or, see trends on how a paper has been cited? Is there a site that does this?


Find a semi-recent advanced textbook (preferably one used in at least one of the better schools) and work outward from there with Google Scholar (both the papers cited in the text and those citing it).

The formal bibliometric tools such as Scopus, Thomson Reuters etc. are hugely misleading, to say the least, given the ever-growing avalanche of publications over the last decade (an increasing number of people are being paid bonuses for each publication in an "international" venue). See this character, who according to Thomson Reuters is a "rising star" of computer science [1] and who also happens to be a collaborator of El-Naschie [2].

[1] http://sciencewatch.com/inter/aut/2008/08-apr/08aprHe/

[2] http://www.timeshighereducation.co.uk/story.asp?storycode=42...


There are a number of really quite good applied maths courses on YouTube (roughly in this order within each group):

MIT 18.06, 18.085, 18.086, 6.262, 6.450

Stanford EE263, EE364A, EE261

Profs Strang, Boyd and Gallager are quite a bit better with maths than the typical engineering lecturer, even though their courses are not exactly at the level of Rudin, Breiman et al.


My favorite math class was probably 18.310, intro to applied math, which is on ocw: http://ocw.mit.edu/courses/mathematics/18-310c-principles-of...

It's not comprehensive by any means, and you probably need to know at least calculus to be ready for most of this, but it covers some pretty cool stuff, including RSA.


Not surprising at all - Kaggle (as currently implemented) is a fundamentally broken model. On top of the rather unpleasant "everybody pays" auction / "winner takes all" system, they have a severe problem with metrics: the majority of the datasets are nowhere near large enough to give stable out-of-sample error estimates, which means that in many cases the "winners" are barely better than random.
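A quick simulation makes the instability point vivid (my own toy illustration, not Kaggle's data): give two models identical true skill, score each on a modest held-out test set, and see how often sampling noise alone manufactures a seemingly meaningful leaderboard gap.

```python
# Toy illustration: leaderboard noise from a small test set.
# The accuracy (0.70), test size (500) and "gap" (2 points) are
# arbitrary assumptions chosen for the sketch.
import random

random.seed(0)

def observed_accuracy(true_acc, n_test):
    """Accuracy as measured on a finite test set of n_test examples."""
    return sum(random.random() < true_acc for _ in range(n_test)) / n_test

trials = 2000
flips = 0
for _ in range(trials):
    a = observed_accuracy(0.70, 500)  # model A, true skill 0.70
    b = observed_accuracy(0.70, 500)  # model B, identical true skill
    flips += abs(a - b) > 0.02        # a 2-point "gap" from noise alone

print(flips / trials)  # roughly half the trials show a spurious gap
```

With equally skilled models and 500 test examples, a two-percentage-point difference - the kind of margin that decides many small competitions - appears in roughly half of the trials purely by chance.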

Perhaps they might be onto something with "kaggle prospect", but unless they pivot in some creative new direction, it's hard to see the service being very useful.


(Disclaimer: I work at Kaggle.)

We're actually using public competitions as a way to find out who some of the world's strongest data scientists are. Once somebody has performed well in several public competitions, we start inviting them to private competitions where: 1. we invite 15 members; 2. the prize money is generally much higher; and 3. everyone wins something (the higher the position, the more one wins).

We're aiming to give many of the world's data scientists the opportunity to earn great full time incomes by competing in our competitions.


Thanks for the reply. If you are already running these private competitions, it might be useful to advertise them a bit more openly (though you would then need some formal criteria for joining, e.g. those in the top 100, to avoid user resentment). You are right that the system as-is can help to find people who have some combination of persistence, general intuition for dealing with data and a modicum of modelling skill - "the world's best data scientists" is doubtful, but it certainly beats interviewing.

