"If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practice, it appears very difficult to define what are necessary and sufficient properties that make AGI.
Yeah, I was thinking that while modern social media has made the "cost of entry" lower, and everyone can theoretically reach more people than ever, it's hard to even describe most of it as "fame" anymore. I mean, does content even "go viral" anymore, with users subdivided into the tiniest niche communities or audiences? Even if something gets wider traction for a while, there's so much competition from so much other content that everything seems to get drowned out quickly and then can't even be found again later through search.
"The real problem is the ROI on AI spending is.. pretty much zero. The commonly asserted use cases are the following:
Chatbots
Developer tools
RAG/search"
I agree with you that the ROI on _most_ AI spending is indeed poor, but AI is more than LLMs. Alas, what used to be called AI before the onset of the LLM era isn't deemed sexy today, even though it can still deliver very good ROI when it is the appropriate tool for the problem at hand.
AI is a term that changes year to year. I don't remember where I heard it, but I like the definition that "as soon as computers can do something well, it stops being AI and just becomes standard tech". Neural networks were "AI" for a while - but if I use a NN for risk underwriting, nobody will call that AI now. It's "just ML" and not exciting. Will AI = LLM forever now? If so, what will the next round of advancements be called?
There is a video on the page in which Bret Victor explains what it is all about. I find it very difficult to summarize, but my best attempt would be something like: transforming computation into an activity that a community of people performs by manipulating real-world objects.
This reminds me of what I learned about myself during my years at university. I observed that in the morning my brain is better at understanding new concepts. Mornings were the best time for me to practice and improve problem solving, but I tended to remember fewer details of what I came across. At about 2pm, however, my brain seemed to switch into memorizing mode: I struggled with problem solving compared to the morning, but I would remember a lot more of what I read. I structured my learning around this observation. Even to this day (I'm 46) I can feel the same tendency; e.g., if a problem seems somewhat difficult, I just wait until the next morning, if I can, only to find it easy to come up with a solution that seemed out of reach the previous evening. Also, I try to do most of my reading at night (well, life with a family doesn't leave a whole lot of options for timing anyway).
> an attacker passively eavesdropping a GSM communication between a target and a base station can decrypt any 2-hour call with probability 0.43, in 14 min
The authors give the above example in the abstract. It does not look like the typical use case for embedded systems. I would think embedded systems send and receive small amounts of non-critical data over GSM, hopefully encrypted, as the parent pointed out. But I may be wrong here - is there a real use case for attacking embedded systems using this method?
I read his book on relativity theory, which I would characterize as one written for popular consumption [1]. I recommend reading it if you have not done so yet. I found the explanation of the special theory in the book easily accessible and enlightening, less so the explanation of the general theory, although it did help me understand it better.
"As the article states, no sensible application does 1-byte network write() syscalls." - the problem that this flag was meant to solve was that when a user was typing at a remote terminal, which used to be a pretty common use case in the 80's (think telnet), there was one byte available to send at a time over a network with a bandwidth (and latency) severely limited compared to today's networks. The user was happy to see that the typed character arrived to the other side. This problem is no longer significant, and the world has changed so that this flag has become a common issue in many current use cases.
Was the terminal software poorly written? I don't feel comfortable making such a judgement. It was designed for a constrained environment with different priorities.
Sure, but we do so over much better networks than in the 80s. The extra overhead isn't going to matter when even a bad network nowadays is measured in megabits per second per user. The 80s had no such luxury.
Not really. Buildout in less-developed areas tends to be done with newer equipment. (E.g., some areas in Africa never got a POTS network, but went straight to wireless.)
Yes, but isn't the effect on the network a different one now? With encryption and authentication, your single character input becomes amplified significantly long before it reaches the TCP stack. Extra overhead from the TCP header is still there, but far less significant in percentage terms, so it's best to address the problem at the application layer.
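To make the application-layer point concrete, a rough sketch of coalescing keystrokes before they reach the (possibly TLS-wrapped) socket - the function name and the 20 ms flush window are made up for illustration:

    import socket
    import time

    FLUSH_INTERVAL = 0.02  # illustrative: batch input for up to 20 ms

    def send_coalesced(sock: socket.socket, keystrokes) -> None:
        """Buffer small inputs and send them in batches, so each
        encrypted record/packet carries more than one byte of payload."""
        buf = bytearray()
        last_flush = time.monotonic()
        for ch in keystrokes:  # each ch is a bytes object, e.g. b"a"
            buf += ch
            now = time.monotonic()
            if now - last_flush >= FLUSH_INTERVAL:
                sock.sendall(bytes(buf))  # one write -> one record on the wire
                buf.clear()
                last_flush = now
        if buf:
            sock.sendall(bytes(buf))  # flush whatever is left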
It was not just a bandwidth issue. I remember my first encounter with the Internet was on an HP workstation in Germany connected to South Africa via telnet. The connection went over a Datex-P (X.25) 2400 baud line. The issue with X.25 networks was that they were expensive: the monthly rent was around 500 DM, and each packet sent also cost a few cents. You would really try to optimize the use of the line, and interactive rsh or telnet traffic was definitely not ideal.
We derive most of our other units from time, so differences in time accuracy translate into metrology improvements more generally - the metre, for example, is now defined as the distance light travels in 1/299 792 458 of a second, so a better second gives you a better metre.
Existing atomic clocks, based on electron transitions, are extremely sensitive to the surrounding magnetic and electrical environment - accuracy is limited, for example, by collisions with other atoms, which is why state-of-the-art atomic clocks use optically trapped clouds in high vacuum. Beyond limiting their accuracy, this generally makes the instruments very complex.
One could imagine an optical nuclear clock in entirely solid-state form on a single chip, with minimal support equipment, achieving stability superior to a room-sized instrument.
If atomic clocks become a few orders of magnitude better than the current state of the art (see optical lattice clocks), then such clocks could make direct gravitational-wave measurements and measure some fundamental constants.
The latter matters in physics for determining whether these constants are truly constant across space and time, which is a big assumption we make about the universe.
Sounds interesting. If you don't mind me asking, what sort of computation requires synchronization across data centers? And why couldn't it be done with NTP?
Essentially distributed consistency and coordination. NTP isn't accurate or consistent enough, in part because the finite speed of light limits how tightly you can synchronize clocks over a network. This matters for applications like network management and large-scale control.
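A toy sketch of why clock uncertainty matters for ordering events across data centers (interval timestamps in the spirit of Spanner's TrueTime; the epsilon values are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Timestamp:
        t: float    # local clock reading, in seconds
        eps: float  # worst-case clock error bound, in seconds

        def definitely_before(self, other: "Timestamp") -> bool:
            # Safe to order two events only if their uncertainty
            # intervals [t - eps, t + eps] do not overlap.
            return self.t + self.eps < other.t - other.eps

    # With NTP-class uncertainty (~10 ms), two commits 5 ms apart are unordered;
    # with sub-microsecond sync the order is unambiguous.
    a = Timestamp(t=100.000, eps=0.010)
    b = Timestamp(t=100.005, eps=0.010)
    print(a.definitely_before(b))    # False: 10 ms error swamps the 5 ms gap

    a2 = Timestamp(t=100.000, eps=1e-6)
    b2 = Timestamp(t=100.005, eps=1e-6)
    print(a2.definitely_before(b2))  # True: tighter clocks expose the order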
If I remember correctly, GPS is affected, but the ultra-precise version the government uses can error-correct pretty well. I would expect this to mean greater GPS precision at a lower cost?
On your point about tech not solving social problems: I fully agree. Moreover, I think tech _aggravates_ some of these problems. Case in point: why do some people think they can run around and interrupt others at any time, for any reason? I think it could be related to having tech in our pockets constantly interrupting us throughout the day, which makes us operate in a new normal where attention spans are shorter and where it's considered OK to quickly check notifications or send a quick message while in a live, in-person conversation with someone.
A Faraday cage may become an important part of taking time to relax.