
Instead of brutalist, I'd call it "NASA Revivalist" as it is very reminiscent of the 1970s NASA graphic design style manual[0].

Having personal experience designing in the context of / restoring brutalist architecture (the kind people live and work* in), I submit with gratitude that this tool misses some key aspects of the style:

1. No concrete used in construction, and therefore no concrete smell, aka "eau de mid-century Americana."**

2. No sense of impending arrest by secret police around every corner.

3. Does not require regular pressure washing to avoid looking like a set-piece from a post-apocalyptic horror movie.

* for certain values of "live" and "work"

** sans cigarettes

0 <https://www.nasa.gov/wp-content/uploads/2015/01/nasa_graphic...>


FWIW, Pavel, one of the authors, has devoted considerable time to one of the very, very few attempts at a formal specification for CSS (the static/float layout fragment, cf. [1]). It's a Racket program generating Z3 SMT solver code for verifying an instance layout (which also looks like Scheme), so it's not for the faint-hearted ;) but maybe just what an FP fan on HN is looking for as a challenge.

[1]: https://pavpanchekha.com/blog/css-floats.html


This is the list of discussion topics from the Dartmouth Workshop on Artificial Intelligence (1955) where the term was first introduced:

  The following are some aspects of the artificial intelligence problem: 

  1. Automatic Computers

  If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

  2. How Can a Computer be Programmed to Use a Language

  It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.

  3. Neuron Nets

  How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.

  4. Theory of the Size of a Calculation

  If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.

  5. Self-Improvement

  Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.

  6. Abstractions

  A number of types of "abstraction" can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.

  7. Randomness and Creativity

  A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.
From:

https://web.archive.org/web/20070826230310/http://www-formal...

So, no, the fundamental difference is not that "AI is trained, algorithms are not". Some hand-crafted algorithms fall under the purview of AI research. Modern examples are graph-search algorithms like MCTS and A*.
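To make the point concrete, here is a toy sketch (not from the comment above; the grid world and names are made up for illustration) of textbook A*: a completely hand-crafted, untrained algorithm that is nonetheless a staple of AI research.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Textbook A*: best-first search ordered by cost-so-far plus an
    admissible heuristic estimate of the remaining cost."""
    frontier = [(heuristic(start), 0, start, [start])]
    best = {}  # cheapest known cost to reach each node
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best and best[node] <= cost:
            continue  # already expanded more cheaply
        best[node] = cost
        for nxt, step in neighbors(node):
            heapq.heappush(
                frontier,
                (cost + step + heuristic(nxt), cost + step, nxt, path + [nxt]))
    return None

# Hypothetical 5x5 grid world, 4-connected, unit step cost.
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)  # admissible on this grid
path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
print(len(path) - 1)  # → 8 steps, the optimal Manhattan distance
```

Nothing here was learned from data; the "intelligence" is entirely in the heuristic and the search strategy.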


A long time ago, I created a version of this challenge called "Cheryl's Murder."

My notebook not only solves logic puzzles like "Cheryl's Birthday," but it also generates them.

https://github.com/shaungallagher/cheryls-murder/blob/master...


Yes. Here's an example: https://research.upjohn.org/cgi/viewcontent.cgi?referer=&htt...

Excerpt:

> "The computer industry, in turn, is an outlier and statistical anomaly. Its extraordinary output and productivity growth reflect the way statistical agencies account for improvements in selected products produced in this industry, particularly computers and semiconductors. Rapid productivity growth in this industry—and by extension the above-average productivity growth in the manufacturing sector—has little to do with automation of the production process. Nor is extraordinary real output and productivity growth an indicator of the competitiveness of domestic manufacturing in the computer industry; rather, the locus of production of the industry’s core products has shifted to Asia"

The whole document is well worth a read.

Here's another article: https://qz.com/1269172/the-epic-mistake-about-manufacturing-...




I’m willing to bet that I can reduce your costs by at least 10x. I’d go so far as to say this thing should be able to handle HN front page traffic at < $300 / month, including all real-time vector search.

That is, if this 6k number is actually true. Part of me (forgive me) is in fact wondering if maybe this is an advertisement for your SaaS and you’re inflating this number to make people think there’s no way they can build a thing like that themselves. But, giving you the benefit of doubt, if you are truly paying this, you’re overspending by more than an order of magnitude. Most likely too many middlemen.

Email is in my profile if you want to talk about it.


People do not understand the value of classifying alerts as useful after the fact.

At Netflix we built a feature into our alert systems that added a simple button at the top of every alert that said, "Was this alert useful?". Then we would send the alert owners reports about what percent of people found their alert useful.

It really let us narrow in on which alerts were most useful, so that others could subscribe to them, and which were noise, so they could be tuned or shut off.

That one button alone made a huge difference in people's happiness with being on call.
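The reporting side of such a system can be sketched in a few lines; the alert names and feedback events below are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical feedback events: (alert_name, was_useful) per button click.
feedback = [
    ("disk-full", True), ("disk-full", True), ("disk-full", False),
    ("cpu-spike", False), ("cpu-spike", False),
]

def usefulness_report(events):
    """Aggregate 'Was this alert useful?' clicks into a per-alert percentage."""
    tally = defaultdict(lambda: [0, 0])      # alert -> [useful_count, total]
    for alert, useful in events:
        tally[alert][0] += int(useful)
        tally[alert][1] += 1
    return {alert: round(100 * u / n) for alert, (u, n) in tally.items()}

report = usefulness_report(feedback)
print(report)  # → {'disk-full': 67, 'cpu-spike': 0}
```

The hard part in practice is the cultural one — getting people to click the button — not the aggregation.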


I have seen many people downplaying the complexity of a datetime library: "just use UTC/Unix time as an internal representation", "just represent duration as nanoseconds", "just use offsets instead of timezones", and on and on.

For anyone having that thought, try reading through the design document of Jiff (https://github.com/BurntSushi/jiff/blob/master/DESIGN.md), which, like everything burntsushi does, is excellent and extensive. Another good read is the comparison with (mainly) chrono, the de facto standard datetime library in Rust: https://docs.rs/jiff/latest/jiff/_documentation/comparison/i...

Features like DST arithmetic (that works across ser/de!), roundable durations, timezone-aware calendar arithmetic, retrospective timezone conflict detection (!), etc. all contribute to making the library correct, capable, and pleasant to use. In my experience, chrono is a very comprehensive and "correct" library, but it is also rigid and not very easy to use.


I brought SVG to WebKit, back in 2005. So I guess I'm part of the problem.

I generally agree with the article: SVG is bloated and confused about what it's trying to do.

You're not alone in wanting something better. We had this same problem in Flutter. Ian Hickson (of HTML5 fame) made some attempts at envisioning a better future that you might be interested in: https://docs.google.com/document/d/1YWffrlc6ZqRwfIiR1qwp1AOk...


Super interesting. Thanks for sharing the detail.

The trouble you’ve probably felt is that CRMs are sold to sales execs and, as such, focus on sales execs’ needs.

You’d be better served not by a CRM, but by a personal contact management system.

Something like https://www.monicahq.com/


There's an old 3D pong game from the Flash era where you put spin on the ball.

https://www.crazygames.com/game/curve-ball-3d


LLM-based AIs have lots of "features", which are kind of synonymous with "concepts" - these can be anything from `the concept of an apostrophe in the word don't` to `"George Wash" is usually followed by "ington" in the context of early American history`. Inside the LLM's neural network, these are mapped to some circuitry-in-software-esque paths.

We don't really have a good way of understanding how these features are generated inside of the LLMs or how their circuitry is activated when outputting them, or why the LLMs are following those circuits. Because of this, we don't have any way to debug this component of an LLM - which makes them harder to improve. Similarly, if LLMs/AIs ever get advanced enough, we'll want to be able to identify if they're being wilfully deceptive towards us, which we can't currently do. For these reasons, we'd like to understand what is actually happening in the neural network to produce & output concepts. This domain of research is usually referred to as "interpretability".

OpenAI (and also DeepMind and Anthropic) have found a few ways to inspect the inner circuitry of LLMs and reveal a handful of these features. They do this by asking questions of the model and then inspecting which parts of the LLM's inner circuitry "light up". They then ablate (turn off) circuitry to see if those features become less frequently used in the AI's response, as a verification step.

The graphs and highlighted words are visual representations of concepts that they are reasonably certain about - for example, the concept of the word "AND" linking two parts of a sentence together highlights the word "AND".

Neel Nanda is the best source for this info if you're interested in interpretability (IMO it's the most interesting software problem out there at the moment), but note that his approach is different from OpenAI's methodology discussed in the post: https://www.neelnanda.io/mechanistic-interpretability
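For a feel of what "ablating" means mechanically, here is a toy NumPy sketch — nothing like a real LLM, just a hypothetical two-layer network standing in for one block — that zeroes out one hidden "feature" and measures how much the output moves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 8 inputs -> 16 hidden units ("features") -> 4 outputs.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, ablate=None):
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden activations
    if ablate is not None:
        h = h.copy()
        h[..., ablate] = 0.0         # ablation: silence the chosen unit
    return h @ W2, h

x = rng.normal(size=(1, 8))
y_full, h = forward(x)

# Ablate the most active hidden unit and see how much the output changes.
unit = int(np.argmax(h))
y_ablated, _ = forward(x, ablate=unit)
effect = float(np.linalg.norm(y_full - y_ablated))
print(f"ablating unit {unit} changed the output by {effect:.3f}")
```

Interpretability work does this at scale on real models: if silencing a candidate "feature" reliably suppresses the behavior it supposedly encodes, that's evidence the feature label is right.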


Whatever the reason, the VC objection is nonsense of the fortune-cookie-wisdom variety. We (venture-backed founders) tolerate it because we need the inane spiritual blessings of these dubious kingmakers to make our world go around. So we smile, nod, grit our teeth, and move on to the next one. But none of them are secret geniuses with a unique skill for predicting the future based on random, made-up, nonsensical “signals”.

They’re herd animals where survivorship bias has a reinforcing function until a point where being luckiest longest makes it possible to put a finger on the scale of outcomes to make raising subsequent funds easier, makes it possible to set the trend the herd follows, and makes it possible to somewhat curate outcomes (“soft landings” instead of insolvency) for your portfolio.

Every single unicorn and/or significantly exited startup has a pile of VC rejections that’s miles high. The trick as the founder is to just figure out how to find better aligned investors. Typically ones who aren’t high on the ego trip of being an accidental kingmaker. Take the “feedback” like that of the VC in the post for what it is… complete nonsense from someone who’s accidentally successful enough to get away with such a silly criteria because nobody wants to insult the cult clergy to their face in case you might need their blessings at some point yourself… and move on.

Successfully raising money is first an exercise in qualifying which investors/funds are right for what you’re doing as a venture, and second an exercise in how many shots on goal you can take in as short a period as possible until you find one “Yes”. Full stop.


The right question is "Why would anyone use ChatGPT?" The answer is https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.

Slanting this towards a specific brand doesn't change that much. Some yes, but not that much.


Microsoft makes “good enough” products that integrate smoothly. That’s what enterprises want, and it’s what smaller vendors can’t provide by their very nature.

Thanks! - the author of PySheets

very, very well said.

just from one language learner to another, so i hope you can appreciate this small grammatical correction:

> and neither is not easy to come by in Guatemala.

...the infamous 'double negative', which is correct in Spanish and other languages, but incorrect in English!

it would seem to be true, and i would certainly defer to you, that altruistic-minded and not toothless regulators, plus a whole lot of money, are both necessary—and *neither* is easy to come by in Guatemala.

again, great explanation of what might be difficult to comprehend from the perspective of others in this modern, interconnected world.


Gosh, it's telling that as early as 1983 (!) the inventors of the spreadsheet thought that spreadsheets were 'done' and they needed to move to more important things. This is like Rickenbacker in 1938 deciding that electric guitars were 'done' and moving to, I don't know, Theremins or something.

Very cool. Here's another style! https://getavataaars.com/

Hah, I did something similar at https://ari.blumenthal.dev/!/-2/-1/three-body after reading the book last year.

Source at https://github.com/zkhr/blog/blob/main/static/js/three.js


Steve Jobs has a great quote about this, from The Lost Interview:

“… it's the disease of thinking that a really great idea is 90% of the work, and if you just tell all these other people "here's this great idea" then of course they can go off and make it happen.

And the problem with that is that there is just a tremendous amount of craftsmanship in between a great idea and a great product. And as you evolve that great idea it changes and grows. It never comes out like it starts because you learn a lot more as you get in the subtleties of it.

And you also find there's tremendous trade-offs that you have to make. There are just certain things you can't make electrons do. There are certain things you can't make plastic do or glass do. Or factories do, or robots do.

And as you get in to all these things, designing a product is keeping 5000 things in your brain, these concepts. And fitting them all together and kind of continuing to push to fit them together in new and different ways to get what you want. And every day you discover something new that is a new problem or a new opportunity to fit these things together a little differently.

It's that process that is the magic."


1-bit weights have been a thing since at least 2016:

https://arxiv.org/abs/1606.06160
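The core idea is easy to sketch. Below is sign-based binarization with a per-tensor scale, in the spirit of the binary-weight-network line of work (the exact scaling scheme varies by paper; this is an illustrative sketch, not any one paper's method):

```python
import numpy as np

def binarize(w):
    """1-bit weight quantization sketch: keep only the sign of each weight,
    plus one shared scale factor (the mean absolute value) so overall
    magnitudes are roughly preserved. Each weight then needs one bit of
    storage plus the single shared float. (np.sign maps exact zeros to 0;
    real implementations pick a sign for them.)"""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.array([[0.7, -0.2], [-1.1, 0.4]])
wb = binarize(w)
print(wb)  # every entry is ±0.6, since alpha = mean(|w|) = 0.6
```

Matrix multiplies against such weights reduce to additions/subtractions (plus one final scale), which is where the speed and memory wins come from.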


> However, we are still at a stage of divergent opinions without any definitive conclusion.

Okay, I know picking people's sentences apart has fallen out of fashion, but:

"Divergent opinions" are ... opinions. A "definitive conclusion" is ... a conclusion.

I see more examples, but I wanted to make a point: I miss the days when fewer words conveyed more meaning. From the classic https://en.wikipedia.org/wiki/The_Elements_of_Style: "Make every word count."

About brevity of expression, I must add this (possibly apocryphal) story about Ernest Hemingway. In the 1920s Hemingway and his Paris friends had a contest: who could write the shortest readable short story? Hemingway won with this entry:

For sale. Baby shoes. Never worn.


This is great. In a similar vein, I implemented GPT entirely in spreadsheet functions, with accompanying video tutorials:

https://spreadsheets-are-all-you-need.ai/


Logically it doesn’t, but in actual good-faith communication people usually follow Grice’s relevance maxim[1]: the points they mention are relevant to the conversation and the point they’re making. Thus, if neither Linux nor Windows support are planned, and the question is about Windows support, saying that Windows will come after Linux would be (vacuously) true, but the mention of Linux would be irrelevant.

(Notably, communication coming out of or through legal counsel cannot be assumed to be good-faith, the premise of the court system being that the best we can achieve is two bad-faith adversaries and a neutral arbiter. But that’s not what we are dealing with here.)

Pedantry aside, I think I remember one of the developers saying they do plan on Linux support at some point in one of the previous Zed threads here. There were also some “small team” and “laser-focused” and “best possible experience” in that comment, but they did say outright they were planning on it. Though plans change, I think that’s the best we could hope for at this point, as I doubt even they themselves know more about their future.

[1] https://en.wikipedia.org/wiki/Cooperative_principle


While it doesn't appear to have been updated in many years, Microsoft built a similarly useful tool[1] that lets you browse the structure of a given Office document and see C# code that generates various components of it.

[1] https://github.com/dotnet/Open-XML-SDK/releases/tag/v2.5



You can use Rust with QML[1].

QML is actually pretty amazing. I've been building my block editor's[2] view entirely in QML while the model is in C++. This separation of logic and presentation works great. And yes, there are some crashes sometimes (which I find quite easy to debug thanks to the built-in debugger), but take, for example, a similar app built with Rust and Dart[3]: in my testing there were still memory leaks that caused my computer to hang. It's better to know you have a bug than for it to be hidden from you.

I agree with the parent commenter that these cross-platform frameworks will end up supporting the least-common-denominator set of features. But I've found that with external open source libraries, the community is catching up very fast. For example, you want the awesome translucency macOS apps have for your Qt app? Here you go[4]. Many such cases. It's also pretty straightforward to add your own custom OS-dependent code, especially if someone has already open sourced their approach. I recently wanted to move the traffic-light buttons on macOS for my app, but couldn't figure out the Objective-C code for that. I ended up looking at either the Tauri or Electron source code and found my answer.

[1] https://github.com/woboq/qmetaobject-rs

[2] https://www.get-plume.com/

[3] https://www.appflowy.io/

[4] https://github.com/stdware/qwindowkit

