
Well, he managed to turn the Twitter blunder into literal political capital with Trump, so it all worked out in the end.

Yeah, what's a general tutorial for this? Is there a website that keeps track of the keywords to follow? Also, is there one that summarizes core NN tech and the frontier stuff that's promising?

There are geostationary satellites. They loiter.


What they don't have is high-accuracy pointing. If you want to listen to activity on 128 MHz at a specific military base from GEO, you're going to be getting a huge amount of interference from everything else in CONUS on the same frequency. At LEO you can do much, much better pointing with your antenna.
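Rough numbers make the point. A quick sketch, where the 10 m aperture is a made-up assumption and only the ratio between the two footprints matters:

    # beam footprint ~ altitude * (wavelength / dish diameter)
    c, f, D = 3e8, 128e6, 10.0     # speed of light, frequency, assumed 10 m dish
    beam = (c / f) / D             # ~2.3 m wavelength -> ~0.23 rad beamwidth
    print(36_000e3 * beam / 1e3)   # GEO: ~8,400 km footprint, wider than CONUS
    print(500e3 * beam / 1e3)      # LEO: ~117 km footprint, one metro area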


Geostationary is only possible in one very specific orbit: over the equator, at one altitude. Any other orbit cannot be geostationary.
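The "one altitude" falls straight out of Kepler's third law. A quick sketch:

    # solve GM * T^2 / (4*pi^2) = r^3 for the radius matching one sidereal day
    import math
    GM = 3.986004418e14                          # Earth's gravitational parameter, m^3/s^2
    T = 86164.1                                  # sidereal day, s
    r = (GM * T**2 / (4 * math.pi**2)) ** (1/3)  # ~42,164 km from Earth's center
    print((r - 6.371e6) / 1e3)                   # ~35,786 km above the surface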

A wall isn't necessary; you just need the police, security guards, and legislation to chase out the homeless / make them miserable.


Indeed, all physical walls in our world are ultimately psychological walls.


Kinda interesting: every single CS person (especially PhDs), when talking about reasoning, is unable to concisely quantify, enumerate, qualify, or define reasoning.

People with (high) intelligence talking about and building (artificial) intelligence, yet never able to convincingly explain aspects of intelligence; they just talk ambiguously and circularly around it.

What are we humans getting ourselves into, inventing Skynet? :wink

It's been an ongoing pet project of mine to tackle reasoning, but I can't answer your question with regard to LLMs.


>> Kinda interesting: every single CS person (especially PhDs), when talking about reasoning, is unable to concisely quantify, enumerate, qualify, or define reasoning.

Kinda interesting that mathematicians also can't do the same for mathematics.

And yet.


Mathematicians absolutely can, it's called foundations, and people actively study what mathematics can be expressed in different foundations. Most mathematicians don't care about it though for the same reason most programmers don't care about Haskell.


I don't care about Haskell either, but we know what reasoning is [1]. It's been studied extensively in mathematics, computer science, psychology, cognitive science and AI, and in philosophy going back literally thousands of years with grandpapa Aristotle and his syllogisms. Formal reasoning, informal reasoning, non-monotonic reasoning, etc etc. Not only do we know what reasoning is, we know how to do it with computers just fine, too [2]. That's basically the first 50 years of AI, that folks like His Nobelist Eminence Geoffrey Hinton will tell you was all a Bad Idea and a total failure.

Still, somehow, the question keeps coming up: "what is reasoning". I'll be honest and say that I imagine it's mainly folks who skipped CS 101 because they were busy tweaking their neural nets who go around the web like Diogenes with his lantern, howling "Reasoning! I'm looking for a definition of Reasoning! What is Reasoning!".

I have never heard the people at the top echelons of AI and Deep Learning - LeCun, Schmidhuber, Bengio, Hinton, Ng, Hutter, etc. - say things like that: "what's reasoning". The reason, I suppose, is that they know exactly what it is, because it was the one thing they could never do with their neural nets that classical AI could do between sips of coffee at breakfast [3]. Those guys know exactly what their systems are missing and, to their credit, have made no bones about it.
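If anyone wants the flavor of it, the classic syllogism fits in a few lines of naive forward chaining. A toy sketch, nothing like a real inference engine:

    # "Socrates is human; all humans are mortal" as naive forward chaining
    facts = {("human", "socrates")}
    rules = [("human", "mortal")]          # if X is <lhs>, then X is <rhs>
    changed = True
    while changed:                         # iterate to a fixpoint
        changed = False
        for lhs, rhs in rules:
            for pred, arg in list(facts):
                if pred == lhs and (rhs, arg) not in facts:
                    facts.add((rhs, arg))
                    changed = True
    print(facts)                           # derives ('mortal', 'socrates')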

_________________

[1] e.g. see my profile for a quick summary.

[2] See all of Russell & Norvig, for instance.

[3] Schmidhuber's doctoral thesis was an implementation of genetic algorithms in Prolog, even.


I have a question for you, which I've asked many philosophy professors, but none could answer satisfactorily. Since you seem to have a penchant for reasoning, perhaps you'll have a good answer. (I hope I remember the full extent of the question properly; I might hit you up with some follow-up questions.)

It pertains to the source of the inference power of deductive inference. Do you think all deductive reasoning originated inductively? Like, when someone discovers a rule or fact that seemingly has contextual predictive power, obviously that can be confirmed inductively by observations, but did that deductive reflex of the mind coagulate from inductive experiences? Maybe not all derived deductive rules, but the original deductive rules.


I'm sorry but I have no idea how to answer your question, which is indeed philosophical. You see, I'm not a philosopher, but a scientist. Science seeks to pose questions, and answer them; philosophy seeks to pose questions, and question them. Me, I like answers more than questions so I don't care about philosophy much.


Well, yeah, it's partially philosophical; I guess my haphazard use of language like "all" makes it more philosophical than intended.

But I'm getting at a few things. One of them is neurological: how do deductive inference constructs manifest in neurons, and is it really, inadvertently, an inductive process that creates deductive neural functions?

The other aspect of the question is, I guess, more philosophical: why does deductive inference work at all? I think clues to a potential answer can be seen in the mechanics of generalization, where generalized antecedents predict (or correlate with) certain generalized consequences consistently. The brain coagulates generalized coinciding concepts by reinforcement, and it recognizes or differentiates including or excluding instances of a generalization by recognition properties that seem to gatekeep identities accordingly. It's hard to explain succinctly what I mean by the latter, but I'm planning to write an academic paper on it.


I'm sorry, I don't have the background to opine on any of the subjects you discuss. Good luck with your paper!


>Those guys know exactly what their systems are missing

If they did not actually, would they (and you) necessarily be able to know?

Many people claim the ability to prove a negative, but no one will post their method.


To clarify, what neural nets are missing is a capability present in classical, logic-based and symbolic systems. That's the ability that we commonly call "reasoning". No need to prove any negatives. We just point to what classical systems are doing and ask whether a deep net can do that.


Do Humans have this ability called "reasoning"?


Well, let's just say I think I can explain reasoning better than anyone I've encountered. I have my own hypothesized theory of what it is and how it manifests in neural networks.

I doubt your mathematician example is equivalent.

Examples fresh on my mind that further my point: I've heard Yann LeCun baffled by LLMs' instantiation/emergence of reasoning, along with other AI researchers. Eric Schmidt thinks agentic reasoning is the current frontier and people should be focusing on that. I was listening to the start of an AI/machine learning interview a week ago where a CS PhD, asked to explain reasoning, could muster nothing better than "you know it when you see it"... not to mention the guy responding to the grandparent, who gave a cop-out answer (all the most respect to him).


>> Well, let's just say I think I can explain reasoning better than anyone I've encountered. I have my own hypothesized theory of what it is and how it manifests in neural networks.

I'm going to bet you haven't encountered the right people, then. Maybe your social circle is limited to folks like the person who presented a slide about A* to a dumbstruck roomful of Deep Learning researchers at the last NeurIPS?

https://x.com/rao2z/status/1867000627274059949


Possibly; my university doesn't really do AI research beyond using it as a tool to engineer things. I'm looking to transfer to a different university.

But no, my take on reasoning is really a somewhat generalized reframing of the definition of reasoning (which you might find in the Stanford Encyclopedia of Philosophy), reframed partially in the axiomatic building blocks of neural network components/terminology. I'm not claiming to have discovered reasoning, just to redefine it in a way that's compatible with and sensible for neural networks (ish).


Well, you're free to define and redefine anything as you like, but be aware that every time you move the target closer to your shot, you're setting yourself up for some pretty strong confirmation bias.


Yeah, that's why I need help from the machine-interpretability crowd, to make sure my hypothesized reframing of reasoning has a sufficient empirical basis and isn't adrift in la-la land.


Care to enlighten us with your explanation of what "reasoning" is?


Terribly sorry to be such a tease, but I'm looking to publish a paper on it and still need to dig deeper into machine interpretability to make sure it's properly couched empirically. If you can help with that, perhaps we can continue this convo in private.


A trickle of people, sure, but most people never accidentally stumble upon good evaluation skills, let alone reason themselves to that level, so I don't see how most people will have even the semblance of an idea of a realistic trajectory of AI progress. I think most people have very little conceptualization of their own thinking/cognitive patterns, at least not enough to sensibly extrapolate it onto AI.

It doesn't help that most people are just mimics when talking about stuff outside their expertise.

Hell, my cousin, a quality college-educated individual with high social/emotional IQ, will go down the conspiracy-theory rabbit hole so quickly based on some baseless crap printed on the internet. Then he'll talk about people being Satan worshipers.


You're being pretty harsh, but:

> I think most people have very little conceptualization of their own thinking/cognitive patterns, at least not enough to sensibly extrapolate it onto AI.

Quite true. If you spend a lot of time reading and thinking about the workings of the mind, you lose sight of how alien it is to intuition. In high school I first read, in New Scientist, the theory that conscious thought lags behind the underlying subconscious processing in the brain. I was shocked that New Scientist would print something so unbelievable. Yet there seemed to be an element of truth to it, so I kept thinking about it and slowly changed my assessment.


Sorry, humans are stupid, and what intelligence they have is largely impotent; if this weren't the case, life wouldn't be this dystopia. My crassness comes not from trying to pick on a particular group of humans, but from disappointment in recognizing the efficacy of human intelligence and its (meh) ability to turn reality into a better reality.

Yeah, I was just thinking how a lot of thoughts which I thought were my original thoughts were really made possible by communal thoughts. Like, I can maybe have some original frontier thoughts that involve averages, but that's only made possible because some other person invented the abstraction of averages, and then that was collectively disseminated to everyone in education; not to mention all the subconscious processes that are necessary for me to will certain thoughts into existence. It makes me reflect on how much cognition is really mine, versus (not mine) an inevitable product of a deterministic process and a product of other humans.


> only made possible because some other person invented the abstraction of averages, and then that was collectively disseminated to everyone in education

What I find most fascinating about the history of mathematics is that basic concepts such as zero and negative numbers and graphs of functions, which are so easy to teach to students, required so many mathematicians over so many centuries. E.g. Newton figured out calculus because he gave so much thought to the works of Descartes.

Yes, I think "new" ideas (meaning, a particular synthesis of existing ones) are essentially inevitable, and how many people come up with them, and how soon, is a function of how common those prerequisites are.


Sounds like your cousin is able to think for himself. The amount of bullshit I hear from quality-college educated individuals, who simply repeat outdated knowledge that is in their college curriculum, is no less disappointing.


Buying whatever bullshit you see on the internet, to such a degree that you're re-enacting the satanic panic of the '80s, is not "thinking for yourself". It's being gullible about areas outside your expertise.


How about an extra-large dose of your skepticism: is "true intelligence" really a thing, and not just a vague human construct that tries to point at a mysterious, unquantifiable combination of human behaviors?

Humans clearly don't know, unambiguously, what intelligence is. There's also no divinely ordained objective dictionary one can point to as a reference for what true intelligence is. A deep reflection on trying to pattern-associate different human cognitive abilities indicates human cognitive capabilities aren't that spectacular, really.


My guess as an amateur neuroscientist is that what we call intelligence is just a 'measurement' of problem-solving ability in different domains. It can be emotional, spatial, motor, reasoning, etc.

There is no special sauce in our brain. And we know roughly how much compute there is in our brain, so we can estimate when we'll hit that with these LLMs.
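The usual back-of-envelope goes something like this; every number here is a rough, contested estimate, not a measurement:

    # ballpark only: synapse count and firing rate are order-of-magnitude guesses
    synapses = 1e14             # ~100 trillion synapses
    avg_rate = 10               # ~10 Hz average firing rate
    print(synapses * avg_rate)  # ~1e15 "synaptic ops"/s, i.e. petaflop-scale territory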

Language is important in human brain development as well. Kids who grow up deaf end up vastly less intelligent unless they learn sign language. Language allows us to process complex concepts that our brain can learn to solve, without having to be in those complex environments.

So in hindsight, it's easy to see why it took a language model to be able to solve general tasks that other types of deep learning networks couldn't.

I don't really see any limits on these models.


Interesting point about language, but I wonder if people misattribute the reason language is pivotal to human development. Your points are valid. I see human learning behavior as 90% mimicry and 10% autonomous learning. Most of what humans believe is taken on faith and passed from the tribe to the individual; rarely is it verified even partially, let alone fully. Humans simply don't have the time or processing power to do that. Learning a thing without outside aid is a vastly slower and more energy- or brain-intensive process than copy-learning, or learning through social institutions by dissemination. The stunted development from lack of language might come more from reduced access to the collective learning process that language enables or greatly enhances. I think a lot of learning, even when combined with reasoning, deduction, etc., is really at the mercy of brute-force exploration to find a solution, which individuals are bad at, but which a society that collects random experienced "ah hah!" moments and passes them along is actually okay at.

I wonder if LLMs and language don't so much allow us to process these complex environments as preload our brains to get a head start in processing them once we arrive in them. I think LLMs store compressed relationships of the world, which obviously involves information loss relative to a neural mapping of the world that isn't just language-based. But those compressed relationships, i.e. knowledge, don't exactly map backward onto the world without a reverse key. Like artificially learning about real-world stuff in school, abstractly, and then going into the real world: it takes time for that abstraction to snap-fit onto the real world.

Could you further elaborate on what you mean by limits? Because I'm happy to play contrarian on what I interpret you to be saying there.

Also, to your main point about what intelligence is: yeah, you sort of hit on my thoughts. It's a combination of problem-solving abilities in different domains, an amalgam of cognitive processes that achieve an amalgam of capabilities. While we can label alllllll that with a single word, that doesn't mean it's all a single process; it seems like a composite. Moreover, I think a big chunk of intelligence (but not all) is just brute-forcing the discovery of associations and then encoding them via some reflexive search/retrieval. A different part of intelligence, of course, is adaptability and pattern-finding.


Well, Gary Marcus, a non-layperson, is helping spread word that AI winter is again upon us.

But maybe statistical learning from pretraining is near its limit: not enough data, or not enough juice to squeeze more performance out of averages.

Though with all the narrow AIs, it does seem plausible you might be able to cram everything these narrow AIs can do into one big goliath model. I wonder if reinforcement learning and reasoning can manage to keep the exponential curve of AI going, even if there are still hiccups in the short term.

The difficulty of shoehorning LLMs, as they are, into any and every everyday task without a hitch might be behind the temporary dying-down of the hype.


The beauty of LLMs isn't just that these coding objects speak human vernacular, but that they can be concatenated with human-vernacular prompts, and that itself can be used sensibly as an input, command, or output without necessarily causing an error, even when a given combination of inputs wasn't preprogrammed.

I have an AI textbook with agent terminology that was written in pre-LLM days. Agents are just autonomous-ish code that loops on itself with some extra functionality. LLMs, in their elegance, can more easily self-loop out of the box just by sensibly concatenating language prompts (the textbook's agentic diagram is just a conceptual self-perpetuation loop). They are almost agent-ready out of the box by this very elegant quality, except...

Except they fail at a lot, or get stuck on hiccups. But here is a novel thought: what if an LLM becomes more agentic (i.e., more able to sustain autonomous prompt chains that perform actions without a terminal failure) and less copilot-y, not through more complex controlling wrapper code for the self-perpetuation, but by training the core LLM itself to function more fluidly in agentic scenarios?

A better agentically performing LLM, one that isn't mislabeled with a bad buzzword, might reveal itself not in its wrapper control code but simply in performing better in a typical agentic loop or environment, whatever the initiating prompt, control wrapper code, or pipeline that kicks off its self-perpetuation cycle.
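To make the self-perpetuation loop concrete, a minimal sketch; llm() here is a hypothetical stand-in for whatever model call you'd actually use:

    # minimal agentic self-loop: each step's output is concatenated into the next prompt
    def llm(prompt: str) -> str:
        # hypothetical stand-in; swap in a real model call
        return "FINISH: stub answer"

    def agent(goal: str, max_steps: int = 10) -> str:
        history = "Goal: " + goal + "\n"
        for _ in range(max_steps):
            step = llm(history + "Next action, or FINISH: <answer>\n")
            history += step + "\n"
            if step.startswith("FINISH"):   # the model ends its own loop
                return step
        return history                      # step limit hit: the "hiccup" failure case

    print(agent("book a flight"))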


Small talk isn't pointless if done with an aim; this video helps explain that: https://www.youtube.com/watch?v=Hw5CPtEyedU. Seduction as applied to conversation.

Also, rather mundane conversation can be fun if done in the right manner; this video talks more about the right manner: https://www.youtube.com/watch?v=IRG-YubP1rw

