A lot of what you said is intuitively/directionally correct, but it misses some important physics related to heat transfer in buildings and operational questions about space-heating equipment.
This is your most accurate/relevant point:
> All of this is ignoring the fact that it’s easy to create a tiny personal heated environment around an individual (it’s called a woolly jumper).
Whereas this is plainly wrong:
> It’s much easier, and consumes less additional energy, to heat an occupied space, than to cool it.
And then the following is correct but the marginal reduction in load is minimal except in relatively crowded spaces (or spaces with very high equipment power densities):
> Thanks to the fact that your average human produces 80W of heat just to stay alive.
The truth is that it is generally easier to cool than to heat, once you account for the energy input needed to achieve the desired movement on the psychrometric chart - assuming by “ease” you mean energy (or emissions) used, and that you are conditioning a large volume of air. Which does align with your point about the jumper, to be fair!
Generally speaking, an A/C uses approx. 1 unit of electricity for every 3 units of cooling it produces, since it uses heat transfer rather than heat generation (simplified ELI5). It is only spending energy to move heat, not make it. On the other hand, a boiler or furnace generally delivers around 0.8-0.9 units of heat for every 1 unit of input energy, and electric resistance heat delivers essentially 1:1. Heat pumps achieve similar coefficients of performance to A/Cs, because they are effectively just A/Cs operating in reverse.
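If it helps, here is a back-of-the-envelope sketch of those ratios in code (the COP values are the illustrative ones from above, not measurements):

```python
# Back-of-the-envelope: input energy needed per unit of delivered
# heating/cooling, using the illustrative efficiencies from above.
cop = {
    "A/C (cooling)": 3.0,        # ~3 units of heat moved per unit of electricity
    "heat pump (heating)": 3.0,  # similar COP; it's an A/C run in reverse
    "gas boiler/furnace": 0.9,   # ~0.8-0.9 units of heat per unit of fuel
    "electric resistance": 1.0,  # essentially all input becomes heat
}

demand_kwh = 10.0  # arbitrary thermal demand for some space

for system, c in cop.items():
    print(f"{system}: {demand_kwh / c:.1f} kWh input for {demand_kwh} kWh of demand")
```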
Your point about a jumper is great, but there are local cooling strategies as well (though not as effective), e.g. using a fan or an adiabatic cooling device (e.g. a mister in a hot, dry climate).
> So cooling a living space is always more costly than heating a living space.
Once you move to cost, it also depends on your fuel prices, not just your demand and system type. For instance, in America natural gas is so cheap that, even with its inefficiencies relative to a heat pump, heating might still be cheaper than cooling per unit of thermal demand when electricity is expensive (this is true in MA, for instance, since electricity is often 3x the price of natural gas). On the other hand, if electricity is less than roughly 3x the cost of natural gas, then cooling is probably cheaper than heating per unit of demand, assuming you use natural gas for your heating system.
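To make that concrete, a rough sketch (the prices are placeholders chosen to match the 3x ratio above, not actual MA rates):

```python
# Rough cost per kWh of thermal demand under the fuel-price scenario above.
# Placeholder prices, chosen so electricity is ~3x the price of natural gas.
elec_price = 0.30  # $/kWh of electricity (hypothetical)
gas_price = 0.10   # $/kWh of natural gas (hypothetical)

gas_heating = gas_price / 0.9  # gas furnace/boiler at ~90% efficiency
hp_heating = elec_price / 3.0  # heat pump at COP ~3
ac_cooling = elec_price / 3.0  # A/C at COP ~3

print(f"gas heating:       ${gas_heating:.3f}/kWh of heat")
print(f"heat pump heating: ${hp_heating:.3f}/kWh of heat")
print(f"A/C cooling:       ${ac_cooling:.3f}/kWh of cooling")
# With electricity at 3x gas, gas heating (~$0.111) and electric cooling
# (~$0.100) come out close; below that ratio, cooling wins per unit of demand.
```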
Everyone seems to be making the same mistake here. As you say:
> Generally speaking, an A/C uses approx. 1 unit of electricity for every 3 units of cooling that it produces since it uses heat transfer rather than heat generation
You know you can use a heat pump to heat a space as well, right? Then you get to move 3 units of heat into the space, plus you also get the extra unit of energy used to power the heat pump, because that work ends up as waste heat inside too! (After all, energy can’t be destroyed, so it has to go somewhere.)
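In other words, the energy balance looks like this (a quick sketch using the illustrative COP of 3 from above):

```python
# Energy balance for a heat pump in heating mode: the heat delivered indoors
# is the heat moved from outside PLUS the compressor work, which also ends
# up indoors as heat.
work = 1.0                          # unit of electricity driving the heat pump
heat_moved = 3.0 * work             # cooling-mode COP of ~3, run in reverse
heat_delivered = heat_moved + work  # 4 units delivered per unit of input

print(f"effective heating COP: {heat_delivered / work:.1f}")  # -> 4.0
```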
Daubechies wavelets are such incredibly strange and beautiful objects, particularly for how deviant they are compared to everything you are typically familiar with when you are starting your signal processing journey… if it’s possible for a mathematical construction to be punk, then it would be the Daubechies wavelets.
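If you want to see the strangeness for yourself, PyWavelets makes it easy to compute and plot one (a quick sketch; ‘db4’ is just one arbitrary member of the family):

```python
# Plot the Daubechies-4 scaling and wavelet functions with PyWavelets.
import matplotlib.pyplot as plt
import pywt

# wavefun() iteratively refines the functions; higher level = finer detail.
phi, psi, x = pywt.Wavelet("db4").wavefun(level=8)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))
ax1.plot(x, phi)
ax1.set_title("db4 scaling function (phi)")
ax2.plot(x, psi)
ax2.set_title("db4 wavelet function (psi)")
plt.show()
```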
Highlight related to an analemma: the figure the sun traces if you make a parametric plot of its position in the sky at a fixed time T as a function of the day of the year d, i.e. f_T(d) = (azimuth, elevation).
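A rough sketch of that parametric plot in Python, using standard low-accuracy approximations for solar declination and the equation of time (the latitude and clock time are arbitrary assumptions; fine for the shape, not for precision work):

```python
# Parametric plot of the sun's (azimuth, elevation) at a fixed clock time
# across the year, i.e. f_T(d), using low-accuracy standard approximations.
import numpy as np
import matplotlib.pyplot as plt

lat = np.radians(42.0)  # observer latitude (assumption: ~Boston)
clock_hour = 12.0       # the fixed time T, in local solar-ish hours
days = np.arange(1, 366)

# Approximate solar declination (radians) and equation of time (minutes).
decl = np.radians(-23.44) * np.cos(2 * np.pi / 365 * (days + 10))
b = 2 * np.pi * (days - 81) / 365
eot = 9.87 * np.sin(2 * b) - 7.53 * np.cos(b) - 1.5 * np.sin(b)

# Hour angle at the fixed clock time, corrected by the equation of time.
hour_angle = np.radians(15 * (clock_hour + eot / 60 - 12))

elevation = np.arcsin(np.sin(lat) * np.sin(decl)
                      + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
azimuth = np.arctan2(np.sin(hour_angle),
                     np.cos(hour_angle) * np.sin(lat)
                     - np.tan(decl) * np.cos(lat))

plt.plot(np.degrees(azimuth), np.degrees(elevation), ".")
plt.xlabel("azimuth (deg from south)")
plt.ylabel("elevation (deg)")
plt.title("Analemma: sun position at a fixed time T over a year")
plt.show()
```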
> without a human going through the necessary thought processes and problem solving steps to build the theory of the software as described in this paper
We might not be there yet (well, we definitely are not), but it does not seem out of the question that within a generous 10 years we will have systems which can leverage graphs, descriptive language, interpreters, and so on to plan out, document, iterate on, and refine the structure of a problem and its architectural solution in tandem with developing the solution itself, at a very effective level, given a sufficient explanation of the goals/problem. Or, more importantly, phrased another way: following the initial theory of the problem formulated by the human. The kind of documentation produced by such systems can also be more easily ingested by other non-human systems, potentially remedying some of the challenges humans have with outlining/documenting/transferring the theory of a problem.
And what prevents a human from doing code review on such a system’s outputs? Now maybe your point was that the simple expense of a human’s time is the barrier, especially given that you were talking about the context of companies using LLMs to speed up their code production (read: eliminate cost centers). But in that case, the errors that may come from poorly designed, procedurally generated codebases just read like bad project management to me, for which the chickens will ultimately come home to roost; the companies which can successfully integrate such codegen engines while still maintaining strong design principles, maintainability, simplicity, etc. ought to outcompete their competitors’ slop in the long run, right?
Having said all that, I think the more important loss is that the human fails to build as much intuition for the problem space themselves by not being on the ground, in the weeds, solving the problems with their own solutions, and thus will struggle to develop their own effective theories of the problem (as indicated by the title of the article in the first place).
What you're describing is the siren call of No Code, which has been tempting manager-types for decades and which has so far failed every single time.
The trouble with No Code is that your first paragraph is already my job description: I plan out and document and refine the structure of a problem and its architectural solution while simultaneously developing the system itself. The "sufficient explanation of the goals/problem" is the code—anything less is totally insufficient. And once I have the code, it is both the fully-documented problem and the spec for the solution.
I won't pretend to know the final end state for these tools, but it's definitely not that engineers will write natural-language specs and the LLMs will translate them, because code (in varying degrees of high- and low-level languages) is the preferred language for solution specification for a reason. It's precise, unambiguous, and well understood by all engineers on a project. There is no need that gets filled by swapping that out for natural language unless you're taking engineers out of the loop entirely.
> The "sufficient explanation of the goals/problem" is the code—anything less is totally insufficient.
somewhat in that spirit, I like Gerald Sussman's interpretation of software development as "problem solving by debugging almost-right plans", in e.g. https://www.youtube.com/watch?v=2MYzvQ1v8Ww
> First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
I mostly agree with what you were saying, but I don’t think I was advocating for “no code” entirely, and certainly not the elimination of engineers entirely.
I was trying to articulate the idea that code generation tools will become increasingly sophisticated and capable, but still be tools that require operation by engineers for maximal effect. I see them as just another abstraction mechanism that will exist within the various layers that separate a dev from the metal. That doesn’t mean the capabilities of such tools are limited to where they are today, and it doesn’t mean that programmers won’t need to learn new ways of operating their tools.
I also hinted at it, but there’s nothing to say that our orchestration of such systems needs to be done in natural language. We are already skilled at representing procedures and systems in code, like you said; there’s no reason to think we wouldn’t be adept at learning new languages specialized for specifying higher-order designs to codegen systems in a more compact but still rigorous form. It seems reasonable to think that we will start developing DSLs and the like for communicating program and system design to codegen systems in a precise manner. One obvious way of thinking about that is specifying interfaces and test cases in a rigorous manner and letting the details be filled in. Obviously, attempts at that today exhibit lots of poor implementation decisions inside the methods, but that is not a universal phenomenon that will always hold.
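As a toy illustration of the “interfaces plus tests as the rigorous spec” idea (hypothetical, and in plain Python rather than any dedicated DSL): the interface and test cases below would be the human-authored spec, and the codegen system’s job would be to fill in an implementation that satisfies them.

```python
# Hypothetical spec handed to a codegen system: a typed interface plus
# executable test cases. The implementation is the part left to generate.
from typing import Protocol

class Deduplicator(Protocol):
    def dedupe(self, items: list[str]) -> list[str]:
        """Return items with duplicates removed, preserving first-seen order."""
        ...

def check(impl: Deduplicator) -> None:
    assert impl.dedupe([]) == []
    assert impl.dedupe(["a", "b", "a"]) == ["a", "b"]
    assert impl.dedupe(["x", "x", "x"]) == ["x"]

# A generated implementation would then just have to make check() pass:
class GeneratedDeduplicator:
    def dedupe(self, items: list[str]) -> list[str]:
        return list(dict.fromkeys(items))  # placeholder body

check(GeneratedDeduplicator())
```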
The DSL paradigm is generally how I go about using LLMs on new projects, i.e. use the LLM to design a language that best represents the abstractions and concepts of the project. Once the language is defined, the LLM can express use cases with the DSL and ultimately convert them into an existing high-level language like Python.
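A toy sketch of what that workflow can produce (the DSL line and the helper functions here are entirely made up, just to show the shape of it):

```python
# Toy illustration: one statement in a project-specific DSL, and the kind
# of ordinary Python a codegen step might lower it to.
from collections import Counter

dsl_source = "pipeline: ingest -> clean -> count"

def ingest(records):
    return list(records)

def clean(records):
    return [r.strip().lower() for r in records]

def count(records):
    return Counter(records)

# The Python the LLM might emit for dsl_source:
def run_pipeline(records):
    return count(clean(ingest(records)))

print(run_pipeline([" A", "a ", "b"]))  # Counter({'a': 2, 'b': 1})
```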
That is a great idea. I’ve used ChatGPT to help me define the names of the functions of an API. Next time I face a problem that calls for a DSL I will give it a try.
Earlier an HN user gave an example of using Prolog as an intermediate DSL in the prompt to an LLM, so as to transform declarative English -> imperative code - https://news.ycombinator.com/item?id=41549823
In general, we already have plenty of mechanisms for specifying interfaces/API specs, tests, relationships, etc. in a declarative but more formal manner than natural language, which would probably all work here, and I can only imagine we will continue to see the development of more options tailored to this use case.
> It seems to me that one consequence of the "Theory Building View" is that: instead of focusing on delivering the artifact or the documentation of said artifact, one should instead focus on documenting how the artifact can be re-implemented by somebody else. Or in other words, optimise for "revival" of "dead" programs.
Arguably, this is the entire spirit of academia, which mildly serves as a counterexample, or at least illustrates the challenges with what you are describing: even in a field where the stated goal is reproducibility, you still have a replication crisis. Though to be fair, I think part of the problem there is that, like you said, people focus too much on “documenting the artifact” and not “documenting how to produce the artifact,” often because the process is “merely” technical and not theoretical (and thus not publishable), despite being where most of the hard work, problem solving, edge-case resolution, and so on happened.
Edit: oh, and I would also mention that the kind of comment you’ve described, which focuses on why some process exists in the form it does to better explain how it does what it does, aligns closely with Ousterhout’s notion of a good comment in A Philosophy of Software Design.
I can’t help but feel that every different flavor of formalization of the process of collaborative [software] development feels… nearly the same? At the end of the day.
Whether there is a self-conscious attempt to avoid being accused of dogmatism, or alternatively a fully embraced biblical dogma, it all feels like the same basic advice repackaged in new language. Sometimes the formalism is meant to create a framework to help the team organize better, but…
I feel like it all just comes down to:
- above all, be consistent but not too rigid
- talk with your colleagues, keep up to date with each other
- think about your constraints and goals frequently and check in with how you are relating to them
- work hard but not too hard
- have some systems, any systems in place, to check for poor work product and highlight good work product. What is good work product? Well, to use the famous quote, you’ll know it when you see it!
To be fair, some frameworks may obviously help some teams more than others. But like any tool evaluation and selection process, it always seems highly contingent which process-management method will work well or not; many might work equally well, and most of it comes down to the team and the people more than it does the system…
I could be totally wrong as I don’t have much experience in teams or orgs beyond 5-10 people, but still… just my surface level understanding of all this.
I’m curious what others on here with far more experience and exposure to a variety of organizational frameworks have found and think about my assessment.
> I can’t help but feel that every different flavor of formalization of the process of collaborative [software] development feels… nearly the same? At the end of the day.
All of the attempts to pitch brand-name development practices also feel the same to me. It's always the same bullet points:
- This new process is going to be amazing and incredible and change your life!
- We're not going to force it, wink wink, but also here's a guide to downplaying your team's objections so you can force it upon them
- If anyone doesn't like it, it can't be because it didn't work for them. The only explanation is that they were doing it wrong and/or rejected it for invalid reasons.
- Vague acknowledgment that it's kind of painful and exhausting for the developers, but a promise that it's worth it [for the manager]
> I’d say it’s a mouse-driven IDE with a YAML-like code. But in its current guise, I can see people itching to use the keyboard for speed once they know what text pieces they want.
It’s useful to keep in mind that a “mouse-driven” IDE could have some real benefits from an accessibility perspective. I agree that this generally might miss the mark on a variety of goals, especially compared to graph-based dataflow programming languages (see my other comment in this thread), and that to really maximize accessibility as a goal you would need to design for it from the ground up. But it’s still good to see experiments with other modalities that raise the question. At the end of the day, the fraction of programmers who have difficulty typing is low… but that doesn’t mean it should be ignored, especially considering that one day almost all of us on here will have trouble typing, but likely still have the urge to code!
Accessibility is a good point. Though I can’t think of any disabilities where I couldn’t type but could drag and drop components, which requires a much more precise and sustained physical motion.
I suspect there are better ways of coding in those scenarios. Like perhaps the kind of word entry that Stephen Hawking used.
Or if you were to go totally touch UI (I’ve known one individual with motor disabilities who used an old touch-screen CRT with a stylus in their mouth), then you’d want those blocks a little taller, but with something more akin to Miro’s infinite zoom rather than a text UI with mouse input.
That said, every project has to start somewhere. So this might be that proof of concept needed to fully flesh out accessibility tweaks.
Yeah, good points. You can easily imagine composing graph-based/dataflow programming languages with just your mouth, i.e. “create node:<type>:<name>; connect node:<name>:<outlet> to node:<name>:<inlet>”, as a relatively easy way to synchronize a visual programming language’s traditional mouse-based construction engine with speech-based construction, with realtime visual updates and so on.
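A minimal sketch of what parsing that kind of spoken grammar into a graph could look like (the command syntax is just the hypothetical one above):

```python
# Minimal parser for the hypothetical spoken-construction grammar above:
#   "create node:<type>:<name>"
#   "connect node:<name>:<outlet> to node:<name>:<inlet>"
import re

nodes: dict[str, str] = {}                   # name -> type
edges: list[tuple[str, str, str, str]] = []  # (src, outlet, dst, inlet)

def handle(command: str) -> None:
    if m := re.fullmatch(r"create node:(\w+):(\w+)", command):
        nodes[m[2]] = m[1]
    elif m := re.fullmatch(r"connect node:(\w+):(\w+) to node:(\w+):(\w+)", command):
        edges.append((m[1], m[2], m[3], m[4]))
    else:
        raise ValueError(f"unrecognized command: {command}")

handle("create node:oscillator:osc1")
handle("create node:filter:lp1")
handle("connect node:osc1:out to node:lp1:in")
print(nodes)  # {'osc1': 'oscillator', 'lp1': 'filter'}
print(edges)  # [('osc1', 'out', 'lp1', 'in')]
```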
As I said in another comment, I think the part of visual languages their devs are blind to (pun intended) is the input method(s).
We can all agree that keyboards a) are large and b) necessitate being able to type (i.e. need hands), but we don't have that many high-frequency, out-of-the-way input methods in computing generally. I want to see an editor that can a) re-use existing code (libraries, concepts, etc) but also b) use mouse, keyboard, touchscreen, hand gloves, <insert new input method here>.
Drawing boxes on screens and sticking text inside them is just not impressive, and it's clunky to boot. Whatever worth there is lies in the input methods provided, and TBH I wouldn't expect this to be any easier on e.g. mobile - probably worse, in fact.
Yeah, the high-frequency input capabilities of hands/fingers on a keyboard are pretty remarkable and hard to beat, especially in conjunction with navigation tools like vim, and especially when you start considering things like stenography applied in a programming context (intellisense/copilot etc. can in some ways already be considered a form of that).
I suspect natural language speech is too slow, but I do sometimes wonder if the only other part of our body capable of such high throughput (and the requisite fine motor control) is our mouth/tongue… I’ve always wondered what a programming language optimized for spoken input would look/sound like.
Edit: I mentioned this in another comment, but I think graph-based visual programming languages would actually be pretty well suited to speech entry, especially with shorthands for common operations like “create node” := “kurr” or “connect” := “kuh”.
Typing speed seems like a rather junior concern. In practice most time tends to be spent thinking or cycling through edits and inspecting test results. I think putting your tests in a watch -n 2 on the second screen would be a much better improvement than fiddling with some reinvention of stenography.
I agree that typing is probably not the limiting factor in most cases and for most people, so the conversation might seem a little silly… but that’s precisely because typing is fast. Once you know what you are doing, you start typing, you are done relatively quickly, and you probably spend much of the time typing thinking about what you will do next. But we are specifically talking about a scenario where typing is the limiting factor, or even entirely infeasible, for a variety of reasons.
That part is still not helped by other input methods either, and stenography helps with the navigation too... Ctrl-clicking through symbols is all well and good, but you often want or need to do much more complicated things than that.
So, yes, input speed is still a problem for reading and understanding code, too.
Just going to highlight some of my favorite graph/flow based programming environments:
- Max/MSP - originally for audio/visual synthesis, but widely used for generative art (in the meaning of generative that existed before the AI boom of the 2020s)
- Grasshopper & Dynamo: structural/geometry/architectural generative design
- Modelica/Dymola/OpenModelica etc: symbolic equation modeling and diffeq solving, widely used in HVAC/systems modeling, automotive, aeronautical design
- Modular synthesizers - analog audio synthesis (and these days, digital as well!)
It’s interesting to see that almost all of the examples mentioned above are abstractions on top of lower-level programming frameworks tailored to specific tasks. That is what lets them offer fantastic, intuitive interfaces that are easy to learn and quick for iterating on incredibly complex designs within their domains. The example linked here, by contrast, is not really domain-specific and is much closer to traditional programming (or something like Scratch).
The YouTube channel "Ussa Design" has some really neat process videos of using the Rhino3D + Grasshopper parametric design approach to build a whole bunch of different stuff. But there's not much in the way of explanation or tutorial - mostly just a watch-along and chill-out kind of vibe.
I really like the idea of parametric CAD design via flow programming - you can go back and change any parameter later. Versus typical 3D modeling or CAD workflows, where you are basically working with digital versions of shop tools like lathes, routers, extruders, etc., and a fillet or cut you made 10 hours ago can't be changed without "rebuilding by hand".
There are many, many dedicated Grasshopper modeling channels, both in the “watch and chill” style and the explanatory style. Once you’ve done lots of Grasshopper programming and can recognize component icons from both the base library and the cornucopia of Grasshopper plugins, you can skip through those videos at like 3x speed and very rapidly absorb the graph structure.
One of the most interesting parts of Grasshopper is how components operate over tree-based data structures - i.e. the rough equivalent of numpy broadcasting.
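For anyone who hasn’t touched Grasshopper, the numpy analogy in a nutshell (a loose sketch - Grasshopper’s data-tree matching rules are richer than plain broadcasting):

```python
# numpy broadcasting: pairing a list of values against a single value or
# against another list, much like Grasshopper components match data trees.
import numpy as np

radii = np.array([1.0, 2.0, 3.0])

# one-to-many: a single height paired with every radius
volumes = np.pi * radii**2 * 5.0

# many-to-many via broadcasting: every radius against every height,
# like grafting one branch against another in Grasshopper
heights = np.array([[5.0], [10.0]])
grid = np.pi * radii**2 * heights  # shape (2, 3): all combinations

print(volumes)
print(grid)
```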
Awesome! I had no idea - for some reason I just assumed this was a niche topic no one was interested in. Do you have any recommendations for how to learn Grasshopper, for a casual like me who only feels comfortable in SketchUp?
It really really depends on what your goals are and what type of design domain you are working in, eg jewelry, architecture (and what within architecture), fabrication, experimental sculpture, etc.
At the end of the day though… play. Lots of play! Playing within Grasshopper and just making crazy things is a great way to build intuition.
At the same time, it is of course very helpful to have directional play, i.e. problem solving, frequently in the form of trying to recreate - or parametrize/generalize/abstract - a specific pre-existing design.
It also helps a lot if you are pretty familiar with modeling in Rhino (i.e. working with construction planes, curves, etc), since a lot of Grasshopper work is just a matter of formally capturing a sequence of operations that you might perform in Rhino into a codified DAG. Unfortunately, the naming of some Rhino operations does not align with their corresponding Grasshopper components (and the corresponding components might not perfectly align their API with the corresponding Rhino commands), but developing that geometric intuition is essential.
There’s also some stuff that is ultimately more efficient to program in Python/C#/Visual Basic within Grasshopper, so always keep that in mind… but it can become another layer of complexity (or simplification…) to learn at the same time.
Percussion synthesis is a deep and wonderful practice within electronic music, especially techno and especially in its more experimental manifestations. I’ve never encountered anything specifically claiming to emulate specific cymbals but I’m sure it’s out there galore. In any case, cymbal and hihat synthesis is a classic task in techno production.
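For a taste, here is a minimal sketch of the classic 808-style closed hihat recipe (a stack of inharmonic square waves through a highpass filter with a fast decay envelope; the oscillator frequencies below are commonly cited 808 tunings, used loosely):

```python
# Minimal 808-style closed hihat: six inharmonic square waves, highpass
# filtered, shaped by a fast exponential decay.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

sr = 44100
t = np.arange(int(0.25 * sr)) / sr

# Inharmonically related oscillators give the metallic character.
freqs = [205.3, 304.4, 369.6, 522.7, 540.0, 800.0]
metal = sum(np.sign(np.sin(2 * np.pi * f * t)) for f in freqs)

b, a = butter(4, 7000 / (sr / 2), btype="highpass")  # strip the low end
hat = lfilter(b, a, metal) * np.exp(-t / 0.03)       # ~30 ms decay

hat /= np.max(np.abs(hat))
wavfile.write("hihat.wav", sr, (hat * 32767).astype(np.int16))
```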