jakewins's comments

Respectfully, if you don’t know how long a 2x4 is, I think it would be very reasonable to look this up, as it will make you much better equipped to make this argument.

I generally agree with what you are saying, and frequently haul 2x4s without my truck - but the solution to that is a long flatbed trailer, not a Thule hitch attachment.


The Thule hitch attachment was responding to GP saying they throw their bikes in the bed of their truck.


It does depend a lot on what you buy it for, but obviously 8' is a good benchmark.

But honestly... at 8' I'm not sure why you're bothering with anything (unless you're getting a lot of them); I usually just threw 8-footers in my Honda Fit and closed the hatch.


Ironically, most pickup truck beds are shorter than 8', and most likely an 8' piece of lumber would have to lie diagonally, sticking out over one of the edges.

Still good for occasional piece of furniture, lots of lumber, or plywood.


The shortest-bed F-150 you can buy is 5.5ft, with a 2ft tailgate, trivially hauling 8ft lumber with just a few inches of overhang with the tailgate down, and easily doing 10ft lumber.

Again, I think pickup trucks are idiotically oversized and dangerous to pedestrians, but arguing against them by repeating things that anyone who uses a pickup knows is nonsense is not helping win over any detractors.


To be clear, I am not arguing against pickup trucks. The reason I bring up the bed length is a personal pet peeve. I have some amount of OCD going on, and I will be damned if I will ever approve of a truck whose bed can't fit, with the tailgate closed, a piece of lumber that fits into a Ford Fiesta with the trunk closed.

I am fully aware of why and how people use pickup trucks and I have no beef with that on cultural grounds. But if I were to get one it would be a long bed truck and I would sacrifice the cab space if needed.


A hybrid car trivially improves the total energy input needed, since it replaces braking that dissipates energy as heat with braking that stores energy to be reused later.

The same should be true here, right? The added energy needed to carry the weight of the motor would be easily overcome by the gains from regenerative braking?


Only if the motor were in the hub of the wheel, which, given the typical size of the hubs, seems even less likely. Remember that bicycle drivetrains are typically one-way due to the ratchet, so you can't apply braking force via the chain.


Broadly speaking electric bikes don't use regenerative braking. It's not possible with a road bike drive train.

In any case, the weight of the motor is overcome by the motor itself, using the power stored in the battery.


Even on the descents, these guys are not using their brakes nearly enough to make up for the amount of power they would use on the climbs.


Are you saying the physics of a bicycle are somehow different than a car going up and down hills? Or are you saying actually hybrid cars use more gasoline driving in hilly terrain as well, and their benefits only accrue in stop-go city traffic?


Physics and practical concerns are way, way different. You want to go as fast as possible down the descents in a bike race. You don't want to lose any kinetic energy and fall behind your opponents, so the only time you'd be using it is when you actually want to slow down. In a car, you might be braking/slowing down going downhill anyway, so that energy is better captured than wasted.

There's also the matter of mass: there's a lot more momentum/energy to be gained from a 1500kg car than from a 70kg bike + rider. That said, the bike also needs less energy for the motor, so I don't know how the math works out there.

Edit: all of this is moot anyway because of the point zettabomb made as well.
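To put the mass point in rough numbers (all figures below are illustrative, not measurements from the thread):

```python
# Kinetic energy E = 1/2 * m * v^2 is the upper bound on what regenerative
# braking could recover in a single stop. Compare a car and a bike + rider
# at the same speed (~50 km/h), using invented but plausible masses.

def kinetic_energy_kj(mass_kg, speed_ms):
    return 0.5 * mass_kg * speed_ms**2 / 1000  # kJ

car = kinetic_energy_kj(1500, 14)
bike = kinetic_energy_kj(80, 14)

print(f"car: {car:.0f} kJ   bike: {bike:.0f} kJ   ratio: {car / bike:.1f}x")
```

So per stop there is roughly an order of magnitude less energy on the table for the bike, before accounting for how rarely a racer brakes at all.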


I think they are saying: Imagine 100 games were sold on Steam, and 50 of them were unplayed.

If each game was bought by a different person, then it’s both true that most games bought go unplayed and that most people never play their games.

However, it’s very likely a small minority of people take advantage of the insane sales Steam has sometimes - like if 50 people bought a game each and played it, and one person bought 50 separate $1 arcade games and never touched them.

You get the same statistics in both cases for the number of games played, but it’s two very different scenarios in terms of how humans use Steam.

For me, I imagine it’s a third case: people like me often take advantage of the $1 deals and never end up playing most of those old or arcade titles.
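A tiny sketch of the first two scenarios, showing that the aggregate stat can't distinguish them (all numbers invented):

```python
# Two hypothetical cases, both with 100 purchases of which 50 go unplayed.
# Each entry is (status, number_of_games_bought_by_that_person).

# Scenario A: 100 different buyers, half of whom never play their one game.
scenario_a = [("played", 1)] * 50 + [("unplayed", 1)] * 50

# Scenario B: 50 buyers each play their one game; one sale-hunter buys 50
# cheap arcade games and touches none of them.
scenario_b = [("played", 1)] * 50 + [("unplayed", 50)]

def unplayed_share(purchases):
    total = sum(n for _, n in purchases)
    unplayed = sum(n for status, n in purchases if status == "unplayed")
    return unplayed / total

print(unplayed_share(scenario_a))  # 0.5
print(unplayed_share(scenario_b))  # 0.5 - same stat, very different humans
```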


I don't claim to know the details of reactive power management, but the primary mechanism for grid stability in the EU is the "cascade" of services the TSOs procure:

- Fast Frequency Response (FFR), sub-second power adjustment following frequency table

- Frequency Containment Reserve (FCR), ~second power adjustment following frequency table

- Automatic Frequency Restoration Reserve (aFRR), ~second energy production following TSO setpoint signal

- Manual Frequency Restoration Reserve (mFRR), ~minute energy production following TSO activation signals

My understanding is the primary failure in Spain was that 9 separate synchronous plants that had sold aFRR(?) to the TSO then failed to deliver, so when the TSO algorithms tried to adjust the oscillations, nothing happened. Everything else was kinda "as designed".
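To illustrate the frequency-table part of that cascade, here is a toy droop-style controller in the spirit of FCR; every parameter is invented, not any real TSO's:

```python
# Minimal sketch of an FCR-style response: the plant adjusts power output
# proportionally to the frequency deviation, per a pre-agreed table.
# Deadband, activation band, and capacity are illustrative numbers only.

NOMINAL_HZ = 50.0
DEADBAND_HZ = 0.01     # no response within +/- 10 mHz
FULL_ACT_HZ = 0.2      # full activation at +/- 200 mHz deviation
CONTRACTED_MW = 5.0    # capacity sold to the TSO

def fcr_setpoint_mw(freq_hz):
    dev = freq_hz - NOMINAL_HZ
    if abs(dev) <= DEADBAND_HZ:
        return 0.0
    # Linear ramp between deadband and full activation, saturating at capacity.
    frac = min((abs(dev) - DEADBAND_HZ) / (FULL_ACT_HZ - DEADBAND_HZ), 1.0)
    # Under-frequency -> inject power (positive); over-frequency -> absorb.
    return -CONTRACTED_MW * frac if dev > 0 else CONTRACTED_MW * frac

print(fcr_setpoint_mw(49.9))   # under-frequency: inject ~2.4 MW
print(fcr_setpoint_mw(50.25))  # over-frequency: absorb the full 5 MW
```

aFRR and mFRR differ in that the setpoint comes from the TSO's signal rather than from a local frequency measurement like this.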


> 9 separate synchronous plants that had sold aFRR(?) to the TSO then failed to deliver, so when the TSO algorithms tried to adjust the oscillations, nothing happened.

Oof. This sounds like a classic of "it's only needed in emergencies, so it's only in emergencies that we find out it doesn't work".


I don't know about the Spanish market, but at least in the markets I'm involved in, aFRR is an "always on" product: the TSO controls your plant with a setpoint that updates in near-real-time throughout the period you've sold to them. It's not clear to me that the product that wasn't delivered was actually aFRR though; maybe it was something else less frequently called upon.


It looks to me like a major factor was that the grid failed to control voltage, not frequency. Frequency control should be unaffected by transformers.


> caused by the reaction of renewable generation to those conditions

No, that is not what the report says. It says, just like you say, that renewables reacted to market prices, causing a generation drop. It then says explicitly that synchronous generation caused oscillation, while PV plants showed a flat non-oscillating pattern.

From your comments I worry there are emotional factors clouding how you're reading the report - this was a systemic failure involving many separate technologies:

- Market signals - negative prices - caused a drop in PV generation (as frequently occurs)

- Synchronous plants caused oscillations as a side effect

- Plants procured to dampen exactly those oscillations did not deliver as requested

- TSO then took measures using interconnections to stabilize via another balance area

- This caused - presumed - overvoltages in distribution grids

- PV inverters then shut off, as mandated by regulatory requirements in response to overvoltage

You're absolutely right that PV played a large role here, but that point is diminished by making it out that PV is both the source of the initial generation drop and the source of the oscillations; it is neither.

The market design caused the generation drop, synchronous generators caused the oscillations, TSO action caused distribution overvoltages, and regulatory requirements on PV firmware design in response to overvoltage caused the final blackout.


The core idea isn’t pods. The core idea is reconciliation loops: you have some desired state - a picture of how you’d like a resource to look or be - and little controller loops that indefinitely compare that to the world, and update the world.

Much of the complexity then comes from the enormous amount of resource types - including all the custom ones. But the basic idea is really pretty small.
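For what it's worth, the loop itself fits in a few lines. This is a toy sketch, with a dict standing in for the cluster; it is not real k8s API machinery:

```python
import time

# The reconciliation-loop idea in miniature: a desired state, an observed
# "world", and a controller that indefinitely nudges the world toward the
# spec by acting on the diff.

desired = {"replicas": 3}
world = {"replicas": 0}

def reconcile(desired, world):
    """One pass of the loop: diff spec against reality, act on the delta."""
    diff = desired["replicas"] - world["replicas"]
    if diff > 0:
        world["replicas"] += 1   # start one "pod"
    elif diff < 0:
        world["replicas"] -= 1   # stop one "pod"
    return diff == 0             # True once converged

while not reconcile(desired, world):
    time.sleep(0)  # a real controller would watch/poll with backoff

print(world)  # {'replicas': 3}
```

Everything else - deployments, services, operators - is this same shape with richer resource types.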

I find terraform much more confusing - there's a spec, and the real world... and then an opaque blob of something I don't understand that terraform sticks in S3 or your file system, and then... presumably something similar to a one-shot reconciler that wires that all together each time you plan and apply?


Someone saying "This is complex but I think I have the core idea" and someone responding "That's not the core idea at all" is hilarious and sad. BUT ironically, what you just laid out about TF is exactly the same - you just manually trigger the loop (via CI/CD) instead of a thing waiting for new configs to be loaded. The state file you're referencing is just a cache of the current state, and TF reconciles the old and new state.


I always had the conceptual model that terraform executes something that resembles a merge using a three-way diff.

There’s the state file (the base commit: what the system looked like the last time terraform successfully executed), the current system (the main branch, which might have changed since you “branched off”), and the terraform files (your branch).

Running terraform then merges your branch into main.

Now that I’m writing this down, I realize I never really checked whether this is accurate; tf apply works regardless, of course.
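That mental model, sketched as a toy dict merge. This is purely illustrative of the base/main/branch analogy, not how terraform providers actually diff resources:

```python
# Three-way merge over flat dicts of resource attributes:
#   base   = state file (last successful apply)
#   world  = the real infrastructure (may have drifted)
#   branch = your .tf files

def three_way_merge(base, world, branch):
    merged = dict(world)
    for key in set(base) | set(world) | set(branch):
        if branch.get(key) != base.get(key):
            # You changed it relative to base -> your change wins.
            if branch.get(key) is None:
                merged.pop(key, None)    # deleted in config
            else:
                merged[key] = branch[key]
        # else: config untouched, keep whatever the world drifted to.
    return merged

base   = {"size": "small", "ami": "v1"}
world  = {"size": "small", "ami": "v2"}   # ami drifted out-of-band
branch = {"size": "large", "ami": "v1"}   # you changed size in config

print(three_way_merge(base, world, branch))
# {'size': 'large', 'ami': 'v2'}
```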


and then the rest of the owl is working out the merge conflicts :-D

I don't know how to have a cute git analogy for "but first, git deletes your production database, and then recreates it, because some attribute changed that made the provider angry"


> a one-shot reconciler that wires that all together each time you plan and apply?

You skipped the { while true; do tofu plan; tofu apply; echo "well shit"; patch; done; } part since the providers do fuck-all about actually, no kidding, saying whether the plan could succeed


To me the core of k8s is pod scheduling on nodes, networking ingress (e.g. nodeport service), networking between pods (everything addressable directly), and colocated containers inside pods.

Declarative reconciliation is (very) nice but not irreplaceable (and actually not mandatory, e.g. kubectl run xyz)


After you’ve run kubectl run, and it’s created the pod resource for you, what are you imagining will happen without the reconciliation system?

You can invent a new resource type that spawns raw processes if you like, and then use k8s without pods or nodes, but if you take away the reconciliation system then k8s is just an idle etcd instance


Since they had the reconciliation system because they decided the main use case was declarative, it makes sense that they used it to implement kubectl run. But they could have done it differently.

Imagine if pods couldn't reach each other and you had to specify all networks and networking rules.

Or imagine that once you created a container you had to manually schedule it on a node. And when the node or pod crashes you have to manually schedule it somewhere else.


Primary reason for my choices was I didn’t want the other company.

Do I want to work for Instacart? Not really. Do I prefer working for Instacart over Tesla? Well, I’d rather go back to hand digging trenches in the rain than work for Tesla, so yeah I prefer Instacart.


Yeah, same for me. Choosing the less bad one.


Same here.

The first ~10 companies I saw were a dumpster fire.

I think that US companies in general are better on average than this.


Any language that offers a mechanism for libraries has formal or informal support for defining modules with public APIs?

Or maybe I’m missing what you mean - can you explain with an example an API boundary you can’t define by interfaces in Go, Java, C# etc? Or by Protocols in Python?


The service I'm working on right now has about 25 packages. From the language's perspective, each package is a "module" with a "public" API. But from the microservices architecture's perspective, the whole thing is one module with only a few methods.


But why would users of the module care about the dependency packages? You could still have a module with only a few methods and that's the interface.


If “everyone would just” restrict themselves to importing only the package you meant to be the public API, sure, it would work. But everyone will not just.


I'm not sure why you would bother, though. If you need the package, just import it directly, no? (besides, in many languages you can't even do that kind of thing)


i’ve seen devs do stuff like this (heavily simplified example)

    from submodule import pandas
why? no idea. but they’ve done it. and it’s horrifying as it’s usually not done once.

microservices putting a network call in the factoring is a feature in this case, not a bug. it’s a physical blocker stopping devs doing stuff like that. it’s the one thing i don’t agree with grug on.

HOWEVER — it’s only a useful club if you use it well. and most of the time it’s used because of expectations of shiny rocks, putting statements about microservices in the company website, big brain dev making more big brain resume.


True - but most languages make it much easier than Python to disallow this kind of accidental public API creation. Python inverts the public API thing - in most (all?) other mainstream languages I can think of you need to explicitly export the parts of your module you want to be public API.

You can do this in Python as well, but it does involve a bit of care; I like the pattern of a module named “internal” that has the bulk of the module’s code in it, and a small public api.py or similar that explicitly exposes the public bits, like an informal version of the compiler-enforced pattern for this in Go.


We use solvers throughout the stack at work: solvers to schedule home batteries and EVs in people's homes optimally, solvers to schedule hundreds of thousands of those homes optimally as portfolios, solvers to trade that portfolio optimally.

The EU electricity spot price is set each day in a single giant solver run; look up Euphemia for some write-ups of how that works.

Most any field where there is a clear goal to optimise and real money on the line will be riddled with solvers.
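A toy version of the first of those problems - scheduling one battery against known hourly prices. Real systems hand this to an LP/MIP solver; brute force over a tiny horizon keeps this sketch dependency-free, and all numbers are invented:

```python
from itertools import product

# Pick charge/discharge per hour to maximise profit against known prices,
# subject to capacity and power limits.

prices = [20, 15, 60, 55]        # EUR/MWh per hour (illustrative)
capacity_mwh, power_mw = 2, 1    # 2 MWh battery, 1 MW (dis)charge rate

best_profit, best_plan = float("-inf"), None
for plan in product((-1, 0, 1), repeat=len(prices)):  # -1=charge, 1=discharge
    soc, profit, feasible = 0, 0, True
    for action, price in zip(plan, prices):
        soc -= action * power_mw            # discharging drains the battery
        if not 0 <= soc <= capacity_mwh:
            feasible = False                # can't over-fill or over-drain
            break
        profit += action * power_mw * price # sell high, buy low
    if feasible and profit > best_profit:
        best_profit, best_plan = profit, plan

print(best_plan, best_profit)
# (-1, -1, 1, 1) 80  -> charge the two cheap hours, sell the two expensive ones
```

The production version adds forecasts, degradation costs, grid constraints, and tens of thousands of assets, which is exactly why it becomes a solver problem.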


I thought this article seemed like well articulated criticism of the hype cycle - can you be more specific what you mean? Are the results in the Apple paper incorrect?


Gary Marcus always, always says AI doesn't actually work - it's his whole thing. If he's posted a correct argument, it's a coincidence. I remember seeing him claim real long-time AI researchers like David Chapman (who's a critic himself) were wrong whenever they said anything positive.

(em-dash avoided to look less AI)

Of course, the main issue with the field is the critics /should/ be correct. Like, LLMs shouldn't work and nobody knows why they work. But they do anyway.

So you end up with critics complaining it's "just a parrot" and then patting themselves on the back, as if inventing a parrot isn't supposed to be impressive somehow.


I don’t read GM as saying that LLMs “don’t work” in a practical sense. He acknowledges that they have useful applications. Indeed, if they didn’t work at all, why would he be advocating for regulating their use? He just doesn’t think they’re close to AGI.


The funny thing is, if you asked “what is AGI” 5 years ago, most people would describe something like o3.


Even Sam Altman thinks we’re not at AGI yet (although of course it’s coming “soon”).


Marcus has been consistently wrong over the many years predicting the (lack of) progress of the current deep learning methods. Altman has been correct so far.


Marcus has made some good predictions and some bad ones. That’s usually the way with people who make specific predictions — there are no prophets.

Not sure I’d agree that SA has been any more consistently right. You can easily find examples of overconfidence from him (though he rarely says anything specific enough to count as a prediction).


You need to read everything that Gary writes with his particular axe to grind in mind: neurosymbolic AI. That's his specialism, and he essentially has a chip on his shoulder about the attention probabilistic approaches like LLMs are getting, and their relative success.

You can see this in this article too.

The real question you should be asking is whether there is a practical limitation in LLMs and LRMs revealed by the Towers of Hanoi problem, given that any SOTA model can write code to solve the problem and thereby solve it with tool use. Gary frames this as neurosymbolic, but I think it's a bit of a fudge.
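For context on why tool use trivializes this particular benchmark: the solver a model would need to emit is a textbook recursion, a few lines long:

```python
# Classic Towers of Hanoi: move n disks from src to dst via aux.
# Any n is solved in exactly 2**n - 1 moves.

def hanoi(n, src="A", aux="B", dst="C", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)  # move n-1 disks out of the way
        moves.append((src, dst))            # move the biggest disk
        hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks back on top
    return moves

print(len(hanoi(10)))  # 1023 moves, i.e. 2**10 - 1
```

So a model that can emit this has, for any practical purpose, "solved" the puzzle for arbitrary n; the interesting question is what that does and doesn't tell us about its reasoning.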


Hasn't the symbolic vs statistical split in AI existed for a long time, with things like Cyc growing out of the former? I'm not too familiar with linguistics, but maybe this extends there too, since I think Chomsky was heavy on formal grammars over probabilistic models [1].

Must be some sort of cognitive sunk cost fallacy, after dedicating your life to one sect, it must be emotionally hard to see the other "keep winning". Of course you'd root for them to fall.

[1] https://norvig.com/chomsky.html


>with tool use

An LLM with tool use can solve anything. It is interesting to try and measure its capabilities without tools.


I don't think the first is true at all, unless you imagine some powerful oracle tools.

I think the second is interesting for comparing models, but not interesting for determining the limits of what models can automate in practice.

It's the prospect of automating labour which makes AI exciting and revolutionary, not their ability when arbitrarily restricted.


Isn't the point of automating labour, though, to automate that which was not already automated?

It would draw on many previously written examples of algorithms to write the code for solving Hanoi. To solve a novel problem with tool use, one needs to work sequentially while staying on task, notice where one has gone wrong, and backtrack.

I don't want to overstate the case here - I'm sure there is work where there's enough intersection with previously existing stuff in the dataset, and few enough sequential steps required, that useful work can be done. But I don't know how much you've tried using this stuff as a labour-saving device; there's less low-hanging fruit than one might think, though more than zero.


There is a decent labour savings to be had in code generation, but under strict guidance with examples.

There's a more substantial savings to be had in research scenarios. The AI can read more and synthesize more, and faster, than I can on my own, and provide references for checking correctness.

I'm not confident enough to say that the approaches being taken now have a hard stopping point any time soon or are inherently bound to a certain complexity.

Human minds can only cope with a certain complexity too and need abstraction to chunk details into atomic units following simpler rules. Yet we've come a long way with our limited ability to cope with complexity.


Search is already a pretty powerful oracle for deferring an answer to a human, and a common tool most AI systems use today.

What current models can automate is not what the paper was trying to answer.


What current models can automate is why they are exciting, and the paper is getting attention because of how it cuts into this excitement. It follows logically that the attention is somewhat misplaced.

