Jevons Paradox (wikipedia.org)
113 points by rememberlenny on Jan 11, 2022 | 90 comments


This seems similar to my anecdata from when the US migrated from dialup to broadband internet service. In theory you could finish downloading everything you needed in a fraction of the time, so the bandwidth should have remained unused for longer.

In practice, people started downloading everything. I just need this one Linux distribution, but here are a dozen more I can now grab in minutes and try out, distributions that were prohibitively slow to download before. People started downloading video game demos like crazy, setting up file servers on their home systems, etc.

It turns out the new efficiencies tend to be quickly gobbled up by existing processes that were stonewalled before. As long as there are problems bound by some technological constraint, any efficiency gains disappear rapidly as they're integrated into existing or new processes. You didn't use that much bandwidth before because it was scarce, but now you have a fire hydrant, so let's see what we can use. Humans are pretty great at quickly finding ways to gobble up resources whenever they're freed up. Once you break one of those barriers, it may turn out that the other barriers weren't as insurmountable, which leads to a rapid rise in demand. That's my 2 cents.


I think that would fall under a slightly different but very similar concept known as induced demand. Basically, it occurs when an increase in supply leads to an increase in demand.

Roads are the most common example of induced demand. Adding new roads rarely decreases travel times because new roads encourage more people to drive, which leads to more traffic that soaks up any benefit the new roads added.

The impact is the same regardless of whether the supply increase is real in a quantifiable way or whether the actual amount is fixed but the same quantity is able to satisfy more buyers due to an increase in efficiency.

Your example is more induced demand because we weren't making more efficient use of existing bandwidth. More bandwidth was now being supplied which caused demand to also increase.


> Roads are the most common example of induced demand. Adding new roads rarely decreases travel times because new roads encourage more people to drive, which leads to more traffic that soaks up any benefit the new roads added.

The problem with "induced demand" is that it's misnamed. It's really suppressed demand.

Suppose you have a congested road. It has two lanes and currently has enough traffic to require three lanes. So you add a third lane. Then you turn around and it's congested again.

Because the congestion suppresses demand. The third lane relieved some of the congestion which increased use. To satisfy the demand that exists when there isn't any congestion, you would need four lanes.

But not necessarily five, because once the congestion is gone, adding more lanes doesn't make it any more gone. If you built a twenty lane road there, traffic to fill it wouldn't just magically appear. (China has done the experiment.)

This is important because it applies to anything else you want to use to relieve the congestion. "Don't build more roads, that's induced demand. The only solution is more mass transit." But if you build the mass transit and it removes one lane worth of traffic from the road, that's the same result as adding the third lane. Which isn't enough for the same reason -- the reduction in congestion increases use of the road. Because the demand isn't being induced by a wider road, it's being suppressed by the existing congestion.

Internet speeds are a great example. In the dial-up days, the low speed heavily suppressed demand. Your connection was pretty much maxed out whenever you were using it. Now connections are 1000 times faster or more, and we use what, 20 times more bandwidth? Maybe 100 times? The speed increases outpaced the usage increases. It's possible to do that.
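To put very rough numbers on that (my own back-of-the-envelope figures, not anything measured):

    # Hedged sketch: 56 kbps dial-up vs. a 1 Gbps fiber line, with a guessed usage growth.
    dialup_bps = 56_000
    fiber_bps = 1_000_000_000
    speed_increase = fiber_bps / dialup_bps          # roughly 18,000x faster
    assumed_usage_increase = 100                     # generous guess at how much more we download
    print(speed_increase, speed_increase / assumed_usage_increase)
    # Even assuming 100x more usage, the speed increase outpaces it by ~180x,
    # which is the point: the demand suppressed by dial-up turned out to be finite.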


For roads specifically, it's not necessarily suppressed demand. Commute times are a significant element in people's calculations about where to live. Increase road capacity and living further away (with cheaper land, bigger houses and so on) becomes more attractive, which adds to traffic on the road. The road capacity attracts the demand through second-order effects.


It's not just suppressed demand, though. E.g. most people in big cities tend to measure commutes in units of time rather than distance. Reduce congestion and the region people consider commutable increases, and as a result the region becomes more desirable as a place to conduct business because you have access to a larger labour pool.

It certainly is possible to outpace the increase, but the point remains that you need to consider that there likely will be an increase beyond any measurable suppressed demand.

It also matters because sometimes it points to alternative strategies: for roads it may turn out it is better to seek to induce demand elsewhere to alleviate congestion, for example.

One of my pet peeves, living in London, is that instead of alleviating congestion in the centre it'd be better to improve commutability between the towns around the edge of London to draw demand out of the centre. Another way of putting that is that when you have capacity issues, you should look for ways to leverage induced demand to spread the load whenever that is cheaper than increasing capacity at the core of the system, and often spreading the load can be a lot cheaper.


I read this mostly as a semantic difference. Both supply and demand generally exist as curves and changing the position of the supply curve changes the position it intersects with the demand curve. The demand curve isn't actually changing. Whether we call that move from the first intersection to the second "induced demand" or "release of suppressed demand" doesn't really matter in my opinion. The concept is the same.

Jevons Paradox is a little different in that it doesn't change how we traditionally model the supply curve (that is probably why the two graphs on that Wikipedia page only show the demand curve). But the end result is the same in that we have now moved to a different spot on the demand curve.
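To make the distinction concrete, here is a toy model of my own (illustrative numbers, nothing from the article): treat demand as a curve for the underlying service (lumens, ton-miles, page loads) and resource use as service delivered divided by efficiency. Whether total resource use rises with efficiency then comes down to the price elasticity of demand, which is the crux of Jevons Paradox.

    # Toy Jevons sketch; all numbers are made up for illustration.
    # Demand for the service follows a constant-elasticity curve: Q = k * price**(-elasticity).
    def resource_use(efficiency, fuel_price=1.0, k=100.0, elasticity=1.5):
        service_price = fuel_price / efficiency       # efficiency makes each unit of service cheaper
        service_demanded = k * service_price ** (-elasticity)
        return service_demanded / efficiency          # resource actually consumed

    for eff in (1.0, 2.0):
        print(eff, round(resource_use(eff), 1))
    # With elasticity > 1, doubling efficiency increases resource use (the Jevons case).
    # With elasticity < 1, resource use falls even though service consumption still rises.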


Yes, it's all about moving to a different place on the demand curve.

There are two things that regularly get missed with induced demand.

The first is the assumption of infinite demand. If two lanes isn't enough, and you add a third and that isn't enough, by induction (pun intended) you wouldn't have enough even with infinity lanes. But in practice it will be finite and might only be four.

The second is that you can "induce" demand by removing demand: taking some traffic off the road reduces congestion, which in turn increases use of the road. If your goal is to eliminate congestion then one way or another you have to satisfy the demand that exists without congestion.


Another blind spot is people using it as a reason/excuse to push public transport. They somehow see a new lane as inducing demand, but not train tracks or a seat on a bus.


If more train lines or more bus seats increases the number of people using public transit, that's an unequivocal victory and is in fact the whole point of public transit.


Not if they go unused or underused, or otherwise end up being less efficient and/or more costly overall than the alternatives. Then it's the worst of both worlds.

Theoretical efficiency and real-world efficiency are different things.

Public transit isn't inherently good by virtue of existing, absent real-world benefits. (To assume or believe it good without proof would be dogmatic faith, not science.) And the benefits gained need to be greater than any and all offsetting costs.

There are far more real-world scenarios where busses and trains end up being far less efficient than driving and flying than vice-versa.

The difference in logistical degrees of freedom between the two sets makes bus and rail far too reliant on ideal geographic and operating conditions.

Yet it's always these few ideal scenarios that are the basis for most cost and efficiency comparisons & claims, and are what get presented as justification for building/implementing.

And the countless public transport systems that never get the ridership or revenue that their pre-construction cost/benefit analysis or environmental impact statements predicted is proof that they rely on overly optimistic assumptions that rarely pan out.


By shifting the bottlenecks elsewhere, in the case of bandwidth. People would totally use way more download bandwidth, for videoconferencing / streaming if nothing else, but you run into latency and upload limits first, rendering the idea of nice 4k streams impractical.


That's what it is for everything, no? Something is "enough" when it moves the bottleneck somewhere else.

In the end it becomes humans. What's the resolution of your eyes? How many streams can you watch at once? It's less bandwidth than fiber has, isn't it?


> But if you build the mass transit and it removes one lane worth of traffic from the road, that's the same result as adding the third lane.

Mass transit can handle much more than one lane worth of traffic.


> Mass transit can handle much more than one lane worth of traffic.

It's not about how many people you can physically fit on a train car. They all have to be going the same way or it doesn't work.

There could be one lane worth of people who are doing that and the others are going to thousands of different places.


It seems like Jevons Paradox is just another example of induced demand, the demand is just induced due to increased efficiency.

Roads and bandwidth can be structured as a Jevons Paradox as well, no? The roads were made more efficient by being able to transport more cars. The Internet connection was made more efficient by allowing more bandwidth.


I think the difference is a real vs nominal increase in supply. Is the quantity of supply higher or is the supply just being used more efficiently? It is the difference between roads being added and the speed limit being increased.

Lots of topics in economics are talked about as if they occur in isolation, but that rarely happens in practice. The bandwidth example includes both because we were increasing efficiency of existing data lines as well as adding more lines, but I think the latter was much more impactful than the former.


Cars are just very space inefficient compared to motorcycles, bicycles, or even pedestrians, though.

Saturation comes much faster for cars, so it's more efficient to use rail and other forms of transport.


They are very time efficient, though, and people tend to care about time efficiency too, not just about space efficiency.


But the space inefficiency forces time inefficiency rather quickly (because cars take so much space, it's relatively easy for them to create traffic and/or parking congestion). Not to mention the fact that all of the space dedicated to fitting the space-inefficient cars is wasted, and wastes time for humans (e.g. when having to cross a huge stroad or parking lot).


Time efficiency, maybe, but time utilization definitely not. Based on current times from Google Maps, driving from New York City to Boston is 3h45m. Taking a Greyhound bus is 4h20m, but you can read a book, play video games, etc during that time. One of them uses the majority of your attention for the full duration, while the other uses barely any of your attention, leaving you free to do something else.

And that's not even counting trains, which would make the same trip in 3h30m.


Okay, and how do you get from the Greyhound station to where you actually want to be? With a car, you can get directly from your home to where you want to be, which makes an enormous difference in practice, especially on shorter, intra-city routes.


Cars only work if someone builds infrastructure for them and the land use pattern is built around them. Think about gas stations, garages, etc.

We could use bikes for last mile transport, or make streetcar suburbs an automatic assumption.


> Cars only work if someone builds infrastructure for them and the land use pattern is built around them.

That's a fully general argument against everything. Buses and streetcars also work only if someone builds infrastructure and the land use pattern is built for them.


Of course. So the idea that the car is time efficient doesn't seem to really hold water even when our urban landscape is designed for it, especially in my lived experience.

Is it really going to be all that useful for me if I only walk 5 minutes as opposed to driving 5 minutes to a local convenience store? Especially given the cost on society?

If you can get to a store by a car, you could get to a store by a motor bike.

I have a really hard time justifying myself that I need a one or two ton electric vehicle that can seat five people just to get a drink at that corner shop only accessible by driving.


In congested areas cars are not particularly time efficient. For most journeys up to ten kilometers or so I'm almost as fast on my bike as I would be in a car, sometimes I'm even a bit faster. And that's despite traffic lights not really being optimized for bike traffic.


"Induced demand" usually happens when the product you are supplying is free, like roads.

What you are talking about is more like latent demand.


In the roads case, the demand was always there. Increased capacity means that more demand is satisfied, even if the QoS for the satisfied demand doesn't go up.

Consider the person who lived close to work because a longer commute wasn't feasible. With increased capacity, living elsewhere becomes an option. They wouldn't exercise that option unless they preferred it. That demand predates the availability of the option.


That's just the paradoxical thing about it though. With induced demand congestion doesn't improve so travel times don't improve, so there is the same (dis)benefit to moving as there was before and living elsewhere doesn't, or rather shouldn't, become an option.


Yet additional demand is still being satisfied, even if QoS returns to a certain equilibrium.

The problem is that the added capacity still isn't sufficient to satisfy the extant demand of people wanting to get to town.


I remember a time when I kept that DSL link saturated, because there was enough bandwidth to download anything I wanted, but it might take twelve hours. With fiber, I can download just about anything in seconds, and as a result I don't download anything until I actually need it.

Over-abundance has the upside that there is no need to over-consume, as there is in the case of a generous but still limited resource.

Of course, on the other hand, over-abundance easily leads to excessive waste, which is where metering plays a role. Even for a very cheap resource.


How long did that trend last, though? It sounds to me a bit like the way people who've grown up during a famine often keep heavily stocked food stores at all times. Barring industry-driven changes like video streaming and other content delivery mechanisms, I bet the average 18-year-old downloads far less today than the average 18-year-old a decade ago. Why download and store everything when you can just get it later if you need it?

(On the flip side it's always funny when driving with younger colleagues and they announce that the music might get choppy because mobile reception is patchy. I tend to explain in a Princess Bride grandpa voice about how once upon a time Spotify was called mp3s and it worked even without mobile data.)


1 word: greed


And in this corner: Braess's Paradox, "the observation that adding one or more roads to a road network can slow down overall traffic flow through it."

https://en.wikipedia.org/wiki/Braess%27s_paradox


I can't find the paper now, but when researching mesh networks and onion routing circa 2001-2005 I remember a paper which showed that in a distributed, decentralized mesh network (plus or minus some other constraints that may or may not have mattered, like uniformity) each additional node increased aggregate throughput, but the increase asymptotically approached zero.


I don't know if it was this paper, but this was a classic piece of research on that topic from that era:

https://doi.org/10.1145/381677.381684


They have taken this approach in my city, arguing that improving the road network will not improve congestion and that, if anything, reducing the number of roads is what's needed. What I don't understand about this argument is that the point surely isn't to solve for congestion or travel time, but for transport efficiency, or at a higher level, economic output, as transport efficiency likely correlates with that.

Even if the number of cars on the road would double if you had twice as many roads, if the number of people now able to travel to their destination would also approximately double it would seem to me that this would be preferable?

Also does this argument ever get made for any other form of transport? For example when subway systems don't have enough capacity doesn't this paradox apply? Is it therefore pointless to expand the capacity of subway systems because doing so would just increase their use?

It's also interesting to me that this logic isn't ever used to argue against building new homes because it seems it would apply far more to that. I can only "consume" one road or subway service at a time, but my consumption potential of real-estate is unlimited. If we follow this logic then as you build more homes the demand for homes will increase without an upper limit, therefore building homes to solve the problem of a housing shortage is always a pointless endeavour and if anything houses should be demolished to reduce the demand for housing.

Honestly, I haven't looked into any of this that much so I'm sure I'm demonstrating the Dunning–Kruger effect right now, but how this paradox is used politically always confused me. It just doesn't seem to make much sense to me to argue we need to remove roads to fix congestion, while also arguing to build houses to fix home shortages, but this is what the politicians here do.


There's a number of key differences, mostly in how things scale with increased use.

* Car transit is typically scaled through increased space allocation. Public transit is typically scaled through increased utilization of the same space.

* Car transit is limited to about 2000 vehicles/lane/hour. Public transit is mainly limited by the number of buses/trains on a route. (e.g. a typical bus can carry 30-100 people; replacing a lane of car transit with a dedicated bus lane drastically increases the throughput of that lane. See the rough numbers sketched after this list.)

* If the number of highway lanes is doubled, followed by induced demand using that capacity, the result is a much less enjoyable driving experience. If the frequency of buses doubles, followed by induced demand using that capacity, the result is a much more enjoyable experience taking the bus, as the wait for the next bus is shorter.

* There's a phase transition in public transit utilization that happens when buses/trains/subways arrive more often than once every 15 minutes or so. If a bus arrives once every hour, you need to carefully plan your day around that particular bus, arrive early to make sure you don't miss it, and waste a lot of time waiting. If a bus arrives once every 10 minutes, there's no need to plan ahead, and you just get on whichever bus arrives next.
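For a sense of scale on the second point, a rough throughput comparison (my own illustrative assumptions, not figures from any study):

    # People per hour through one lane, under assumed occupancy and frequency figures.
    car_lane = 2000 * 1.5        # ~2,000 vehicles/hour at ~1.5 occupants each -> ~3,000 people/hr
    bus_lane = 60 * 50           # one bus per minute at ~50 passengers        -> ~3,000 people/hr
    bus_lane_peak = 120 * 80     # a bus every 30 seconds at ~80 passengers    -> ~9,600 people/hr
    print(car_lane, bus_lane, bus_lane_peak)
    # The car lane is near its physical ceiling; the bus lane's ceiling depends
    # mostly on how many buses you run and how full they are.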

The YouTube channel "Oh the Urbanity" has a pretty good video on induced demand as it applies to both car transit and public transit [0], which I highly recommend. Most everything by the "Not Just Bikes" channel [1] is also pretty good.

[0] https://www.youtube.com/watch?v=8wlld3Z9wRc

[1] https://www.youtube.com/c/NotJustBikes/videos


Thank you so much. That video basically addressed my exact confusion! The internet is amazing.

I think I'm correct in thinking this paradox does apply to other modes of transport such as subways, but that by itself doesn't mean that we should therefore build more roads because in general infrastructure spending would be better allocated to other modes of transport for numerous reasons. I do agree with that argument, but it seems there would be nuances there. I'd agree it's generally more efficient in the inner city, but less so as population density decreases. In areas with low population density the car is probably by far the best mode of transportation and there its use should probably be accepted.

I think my confusion comes from how it's often sold politically as a black and white thing where `car=bad`, because this clearly isn't true as a general rule. If the argument is simply that within densely populated areas the use of public transportation should be promoted over extra capacity for cars then I completely agree, but where I live I would argue some questionable city planning decisions have been made. Specifically, lanes have been removed from roads to make way for bike lanes and speed limits have been reduced almost universally. While I'm all for promoting cycling, these bike lanes go almost entirely unused because it's very hilly and things are too far away for cycling to be a practical mode of transportation in most cases here. In this case the pros and cons don't appear to add up, and I suspect the city planners are probably just out of touch with the differing transportation requirements between where they are in the inner city and areas such as mine.


I think that frequently, a city will be so optimized for cars that (1) no other infrastructure can effectively work around it, and (2) any changes away from that optimized solution feel aggressive toward cars.

For example, consider curb radius at intersections. The smoother the curve, the nicer the experience for cars. If you can take the turn at 35 mph, then you don't even need to slow down. The car behind you doesn't need to slow down as you turn, and it's altogether more enjoyable to drive. But that means that the intersection is much larger, to accommodate the smoother turn. That means cars are making a right turn at higher speeds, leading to more dangerous collisions with pedestrians.

And this happens for pretty much every tradeoff. That might be why you see it as an impasse between different groups, because either tradeoffs need to be made, or the overall cost goes up significantly (e.g. moving all parking spaces underground).

* Bike lanes are adjacent to car traffic, decreasing usage. We could have separated bike lanes, but that space was already used for more car lanes.

* Bike lanes take circuitous routes, increasing distance. We could have more direct routes, but that space was already used for wide roads.

* There's no walkable area in the city center, because the shops have large set-backs, and are spaced far from each other. We could have storefronts closer together, but that wouldn't allow for as much parking space.

* Bike parking is rare, and typically pretty far from your destination. We could have bike parking at every storefront, but that would require cars to be parked further away. (Though, given that you can fit 10-15 bicycles into a single car spot, the impact even there would be minimal.)


Except this is mostly theory that doesn't work out in practice.

For example, a lane at capacity with full busses is more efficient than a lane of cars.

But in reality, busses aren't full and there aren't enough to utilize 100% of a lane.

Inefficient cars fully utilizing that lane capacity ends up being more efficient overall than potentially efficient busses inefficiently filling seats and leaving lane capacity unused.


Swap car and bus in that sentence, and it's still a good argument. Anyway.

My old boss Eric Carlsen had a degree in City Planning. His thesis was modelling LA with busses running on every major cross-street on 15-minute schedules, and funding it with a gas tax. Ridership would be low at first, but as folks came to prefer busses over cars it could change.

The thing was, at every stage of adoption it was a better system than we have now. Less congestion, less cost per capita, and so on.

So we can make up excuses for 'everybody needs a car' but we can also do the numbers.


If cars had no externalities just building more and more roads would probably be a good strategy. But cars have high externalised costs. There is of course the environmental aspect and accidents, but, perhaps less obvious, due to poor space utilization, more car infrastructure makes cars all but required since density becomes so low that walking or cycling are right out and public transport has no chance of being remotely cost-efficient (or convenient, who wants to walk ten minutes through a parking lot to reach the store after getting there by bus?)


> For example when subway systems don't have enough capacity doesn't this paradox apply?

Not really; Braess' Paradox is about adding edges to a graph, but subway expansion usually doesn't work by digging new tunnels between existing stations.


Along a similar vein:

https://nautil.us/issue/13/symmetry/want-to-get-out-alive-fo...

Essentially, by obstructing emergency exits, you can make them channel people out more effectively.


A theory I've often had: the Jevons paradox explains why desktop software remains so slow despite the massive improvements in hardware speeds. When you improve the efficiency of the hardware, software starts to demand more CPU cycles, since there's less of an incentive to optimize.


> ...software starts to demand more CPU cycles, since there's less of an incentive to optimize.

Node and Electron and crew would agree.

A "modern" chat client (Element, Signal, etc) will lag on a quad core 1.5GHz system in terms of "typing into the text input box" (RasPi4, yes, I've set the governor to performance), when a single core 66MHz 486 could manage that trick without any lag in a range of chat clients. Hexchat still works fine, at least.

What we need to do is stop letting developers use high end, modern systems, and put them on something a decade old for daily use. Then maybe they'll stop writing crap that only works well on a 1-2 year old Xeon workstation. Google, I'm looking at you. Buy all your devs a Pi4 and make them use it once a week for a day.


Some of it is explained by bloat but some of it is explained by things like 4K-5K displays, pretty antialiased fonts, support for multiple languages and unicode, support for inline images and videos, emojis, support for all kinds of inline objects like polls or HTML documents, transport encryption everywhere (sometimes multiple layers of it like Signal ratchet over TLS), scrollback history with search going back to the beginning of time, real time sync between devices, and so on.

High resolution displays for example impose an exponential cost right out of the gate. Just printing "hello world" requires more pixels as a function of the square of the increase in resolution. That's more memory, more memory bus traffic, larger bitmaps, scalable vector graphics formats that take more processing power to draw, antialiasing, ...
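For a sense of the scaling (standard resolutions, simple arithmetic of my own):

    # Pixel counts grow with the square of the linear resolution.
    resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160), "5K": (5120, 2880)}
    base = 1920 * 1080
    for name, (w, h) in resolutions.items():
        print(name, w * h, round(w * h / base, 1))
    # 4K is 4x the pixels of 1080p and 5K is ~7.1x; every one of those pixels has to be
    # composited and pushed over the memory bus each frame.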

Again I am not writing off bloat. We could probably shrink most software by at least 2X and most Electron stuff by 3-4X. I doubt we could shrink it 10-100X though without sacrificing some of the things I listed up there, especially all the eye candy and rich interaction media.


In my particular context, it's a Raspberry Pi on a 1920x1200 display, 1:1 pixel scaling, no antialiasing I know of, and... the rest shouldn't matter that much in terms of how long it takes a key pressed at the keyboard to show up in the input text area. If it does, something is quite wrong in the architecture, IMO, and it's probably 4 layers down in nested libraries nobody actually understands, buried under something else.

I don't think pixel density has increased as much as you seem to think it has in the 1x scaling space.

I ran some very nice 1600x1200 21" monitors back in the day - ~95 ppi.

A 27", 2560x1440 monitor (my preferred native size) is 108 ppi.

The 24", 1920x1200, is 94 ppi.

I understand the issues with scaling a display, but those are more OS level, and shouldn't impact key-to-screen delays noticeably. It certainly doesn't on more recent Apple hardware, even if you're using some screwball non-integer scaling.

But the reality remains, I now have 6GHz of CPU cores, 8GB of RAM, and things are objectively slower than 66MHz with 24MB. That's not progress.


Part of that is interrupt availability and pre-emption; also see https://danluu.com/input-lag/


AFAIK you can turn some of that off if you like compiling kernels and I think there are some sysctl tuning parameters you could play with.


And don't forget accessibility. I don't know exactly how much accessibility contributes to the total size of Chromium (singling it out since Electron is everyone's punching bag), but it's something. In my AccessKit project [1], I'm probably going to spend at least a few weeks working on the accessibility of text editing alone. And that's just multi-line plain text; hypertext is way more complicated.

[1]: https://github.com/AccessKit/accesskit


A square would be quadratic, not exponential.


This is known as Wirth's law, which the page links to as well as to the ancient quip "what Andy giveth, Bill taketh away."

A less cynical take would shift towards the perspective of Gustafson's law. Modern systems attend to more stuff and have more capabilities and affordances than older, trimmer, but also simpler software.

On Jevons Paradox, a countervailing force is ephemeralization or dematerialization, a large aspect of which is pushing away from resource-intensive builds toward more energy- and information-processing-intensive arrangements.


New developers optimize for time to get a well-paying job, so technologies that are quick to learn are optimal. Companies optimize for worker replaceability, so technologies with a constant supply of fresh candidates are optimal.

Those two aren't the only things those groups optimize for, but they are two of the main things. Which is why we have software bloat: neither group cares that much about user experience.


I think a better explanation of this phenomenon is that most users don't really care about performance as long as it is above a certain threshold, and care more about features and cost.

Given this, as long as your software is efficient "enough" (a bar that is easier to reach as computers get more powerful), you are better off optimizing for developer productivity on the desktop.

Now this dynamic is completely different on mobile and server, since efficiency directly translates to battery use (mobile) and electricity and hardware costs (server).


> When you improve the efficiency of the hardware, software starts to demand more CPU cycles, since there's less of an incentive to optimize.

Or rather, there’s always more work to be done.


For example, spelling and grammar correction. When I was young, the browser didn't have it. Now it fix most of my posts on HN.


*fixes


Desktop software only gets optimized if it's annoyingly slow on the developer's machine. That's why, contrary to popular opinion, devs should NOT be testing (only) on the latest and greatest machines. It's far too easy to not notice your accidental quadratic or even exponential runtime performance on a lightning fast machine.
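A contrived example of my own showing how this slips through: a linear scan hiding inside a loop is quadratic, and on a fast developer machine it never gets annoying enough to notice.

    # Accidentally quadratic: deduplicating with a list instead of a set.
    def dedupe_slow(items):
        seen, out = [], []
        for x in items:
            if x not in seen:     # O(n) scan inside an O(n) loop -> O(n^2) overall
                seen.append(x)
                out.append(x)
        return out

    def dedupe_fast(items):
        seen, out = set(), []
        for x in items:
            if x not in seen:     # O(1) average lookup -> O(n) overall
                seen.add(x)
                out.append(x)
        return out
    # Both feel instant for small inputs on a recent workstation; on decade-old
    # hardware the slow version is where the lag lives.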


It's a really old observation, Joel on Software did it too: "you could spend 6 months carefully optimizing your application, or you could spend this time playing in an amateur rock band, the end result would be the same: 6 months later, your application will run significantly faster".


In data science and machine learning you can see this paradox in the fact that no amount of data is enough: once you get enough data to run a simple model, you will want to make the model as complicated as the available data allows and beyond. And you're short on data again.


Just optimize the search for interesting!


I think this is probably why we'll eventually have some sort of carbon rationing if we really want to take limiting carbon emissions seriously. There's only so much making things more efficient can accomplish.


Right, we have to make energy usage more efficient, and at the same time make it seem less efficient to consumers by adding a tax in between. Like most of Europe does with car fuel.

If you want the market to solve the problem you need to give the market the incentives to solve it.


I’m less optimistic. Carbon rationing is the only solution. But what political power is there that would be able to simply declare and enforce a maximum amount of fossil fuels to extract per year in the entire world?


I’ve given this some thought with regards to small-scale off-grid solar and battery use. Last summer I installed 540Ah (12v) in a houseboat and was able to power a small air conditioning unit overnight to allow our family to sleep comfortably when the temps are >80F/26C. Most nights we would wake with about 40% charge, allowing us to cook breakfast on an induction cook top.
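For reference, the rough energy budget that setup implies (my arithmetic, with an assumed run time; the draw figure is inferred, not measured):

    # 540 Ah at a nominal 12 V, waking at ~40% charge after an assumed ~8 hour night.
    capacity_wh = 540 * 12              # ~6,480 Wh of storage
    used_wh = capacity_wh * 0.60        # ~3,900 Wh consumed overnight
    avg_draw_w = used_wh / 8            # ~490 W average, plausible for a small A/C cycling
    print(capacity_wh, round(used_wh), round(avg_draw_w))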

At least in my small world of river glamping, having an efficient battery/solar/appliance system leads me to use and rely on it more.

If you have it you may as well use it, especially if it’s renewable once it’s installed.

This may all seem totally obvious but keep in mind one of the first steps to building out this kind of system is tallying up your intended energy usage. We didn’t “need” to have A/C but it makes the entire experience that much more accessible/enjoyable with small children in small spaces.

The spreadsheet of usage can quickly snowball as watt-creep sets in and you think: “well if I’ve got this efficient refrigerator to store breakfast then I’ll probably need a kettle for coffee” etc.

Can’t wait for summer!


I believe this explains the modern proliferation of paperwork. Computers made it more efficient to administer and complete, but because of Jevons Paradox we spend more time on it instead of less.


Yes! Also ease of printing means that even the local tennis club manages to produce a 10 page sign up form.

I think this is also caused by the incentives present: the form-designer is incentivised to make sure info isn't "missed". The wasted time from filling in a long form / filling in your name for the Nth time doesn't matter to the form-designer.


Yet, domestic electric power use is in sharp decline. It turns out people have as much light as they want.

It will be rising again sharply as people start charging their cars, and replacing furnaces with heat pumps.


I'd hypothesize that Jevons Paradox only manifests if there is significant unfulfilled demand or if new uses for something manifest at a lower price point. If neither of these conditions is satisfied, I don't see why higher efficiency would increase use.

LEDs for example reduce the energy use of lighting very significantly. It has not caused most people to use more lighting. The demand for lighting was already largely satisfied prior to the LED revolution.

Transportation on the other hand... I imagine if something created a 10X improvement in the efficiency of rapid long distance travel (bullet trains, electric aircraft, much higher efficiency aircraft, etc.) you'd see quite a bit more of it. People would start taking weekend trips to places they might otherwise visit only rarely, meeting in person more frequently in remote jobs, and so on. There's probably a ton of unfulfilled latent demand for travel.


And even if people do use more lighting, there's an upper limit on how much lighting is tolerable. LED bulbs are significantly more efficient (in watts per lumen) than the incandescents they replaced. Someone would have to use 6-10x more LED bulbs to consume as much electricity as the incandescents they replaced, which would make for an unbearably bright space (for most people). I had a friend in college who did exactly that; it was a remarkably unpleasant space to be in as a consequence (not sure if he ever toned it down, I rarely went to his home).
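Rough numbers behind that multiple (typical retail bulb figures, approximate):

    # A "60 W equivalent" LED produces ~800 lumens from roughly 8 W.
    incandescent_w, led_w, lumens = 60, 8, 800
    print(round(incandescent_w / led_w, 1))   # ~7.5x fewer watts per lumen
    # Matching the old 60 W draw would take ~7-8 LED bulbs, i.e. ~6,000 lumens in one
    # room, which is why the rebound in lighting use hits a ceiling well before the
    # efficiency gain is eaten.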


Increased demand here is often exhibited by:

- Providing lighting where it wasn't previously. Most especially outdoors, for display lighting, or for advertising.

- By operating lighting that would otherwise have been turned off.

- By the introduction of lighting to populations or sectors which previously had none.

A 1 MW generator might power 20,000 households each with a 50W incandescent light bulb.

That same generator could power 200,000 households each with a 5W LED bulb. That's an additional 180,000 households which would have effective access to lighting.

(Other loads, transmission/distribution losses, etc., ignored. The point is that efficiency greatly expands the possible service population.)

Now factor in that some fraction of the initial 20k households might still be able to afford an additional 45W of power, say, to charge or operate a tablet or laptop computer and router, and the total power draw will in fact increase.


> Yet, domestic electric power use is in sharp decline.

Sharp decline? EIA shows about 4.0 trillion kWh in USA and slightly increasing over the past 20 years with a few little dips here and there. 2020 saw a dip, but that's largely attributed to COVID.

Lights may be set, but bitcoin electricity use has been skyrocketing.

https://www.eia.gov/energyexplained/electricity/use-of-elect...


There are ~50 million more people in the US in the same period. Maybe when normalized for population there's a drop?


The Jevons Paradox is not just an individual adjustment, but a collective one.

Total domestic power consumption tends to a certain level of satisfaction --- enough light, heat or cooling, and other services.

Some of those, notably in the area of electronics, have been increasing over the past few decades, from a baseline of zero, though arguably "instant-on" television and radio sets (which achieved this result by keeping tubes warm when the set was "off") were an early harbinger.

But we're also seeing electrical power being used in areas previously not serviced: increasing amounts of outdoor lighting, heating or cooling of spaces not previously heated or cooled, overall larger floorplans.

And at the other end of the scale, people or organisations not previously making use of electricity are now doing so, both within advanced countries and amongst those which are presently developing. In some cases electricity is only available for a portion of the day, in others, partial power is now more reliable, or supplanted by generators, such that households and businesses have access to power 24/7/365.

All of which means that as electrical generation and utilisation increase in efficiency, the overall net effect is greater consumption.


Tell that to my parents' neighbour who thought it a good idea to decorate his garden as if it were a plane landing strip. Anecdata of course, but also a perfect illustration of Jevons' paradox.


Is the cost of power going down? Jevons paradox only applies if the cost goes down.


The cost of lighting has come down by a large amount. So looking at lighting in isolation (and its attendant power consumption), we can see Jevons Paradox hasn't borne out.

From where I'm sitting as I write this, I can see 9 light bulbs turned on. These are bulbs that would have been 60W apiece 20 years ago, but are now about 7W each. They're all on and drawing a total of 63W. If they were incandescents, I'd probably have 4 of them off, and still be pulling 300W.

So I definitely use more lighting now than I did when it was expensive, but the efficiency improvement is so good that my current "wasteful" consumption is 1/5th of what my "thrifty" usage would have been.

At nighttime, I leave on my porch lights, patio lights, driveway lights, and lights in rooms I don't occupy. The entire house draws about 200W for lighting, which is what one room would have done in the old days.

I'm not saying Jevons is wrong or anything, but there are efficiency gains so great that some other limiting factor will prevent increased utilization from erasing those gains. In the case of light, my rooms can only be so bright. If light were literally free I don't think I'd add any more lamps.


It is a paradox since it goes against what you normally expect, so normally it doesn't apply. It is just something to keep in mind; if you don't, you can make horrible mistakes. Let's say you have a "cost centre" at your company, evaluated based on how many resources it can save.

Cost centres are the main example of Jevons' paradox not being heeded in the wild. When a department is seen as a cost centre and rewarded for reducing costs, it will try to become as slow and cumbersome as possible to reduce others' reliance on the department. If it became more efficient, people would use it more, so demand goes up, costs go up, and the department gets lambasted for being wasteful.


I have friends whose habits were established by incandescent lighting. In a room with a floor lamp, cabinet lighting, third alcove lamp, and a desk lamp, they'll often turn off all but the latter.

The joke is that it's the desk lamp at 50W that draws as much power as all the other lighting (LED) combined.


Yup, here’s an article talking about this as applied to crypto, like Bitcoin: https://tftc.io/martys-bent/issue-678/

> We haven't even touched on the future dynamics of mining, which provides the long-term security that people are so worried about.

> As oil and gas companies continue to wake up and realize they can use all of the FREE gas they are currently wasting on their fields to mine bitcoin, this will severely reduce the price at which your average miner can mine profitably. An externality that many are blind to at the moment.


There is no paradox if one is familiar with the modus operandi of the human species: "given the opportunity to consume, humans will consume until there is nothing left to consume."


As far as all this goes: first, there are timers for power switches. I have one that splits off into three; I run mine on 15 and 15, and have a second set for 30 minutes and 2 hours. As for the road system, we have yet to take full advantage of all the possibilities. We have yet to take advantage of "up"; we are still only capable of driving on the flat. Yes, we do have planes, and are able to use about 0.001% of their capabilities. They do decongest our roadways, but not enough to clear, say, New York or L.A. We are our own future, and even as slowly as we are progressing, it IS progression.


I imagine the following: if the price of clean energy catches up with the price of fossil fuel, the price of fossil fuel will fall and its use will increase.


Exploration will decline, and drilling too, as expectations of demand fall. We can expect substantial fluctuations as numerous structural boundaries are crossed. Coal demand is collapsing, but price is not, because there is no cheaper way to mine coal.

At some level of consumption the cheapest existing wells will suffice. But the owners of those wells have got used to spending a great deal more than the natural price for that oil. Meanwhile, people with more expensive wells have debt payments, and have to sell at whatever the market offers.

So, it is more complicated than Econ 101, and will stay that way.


The economic, pricing, and market dynamics are somewhat counterintuitive, though interesting.

Note that what we describe with "price" actually encompasses three things:

- Consumer value: the benefit derived from consumption.

- Producer costs: the actual opportunity costs required to produce some resource.

- Market price: the market-clearing price of a good. This is almost always somewhere between cost and value, C <= P <= V. Where price falls between these values depends on whether we're discussing commodity goods (C ~= P) or rents (P ~= V). That is, for a commodity, price tends toward costs of production (plus some "normal" economic profit), and for rents, the price tends toward all consumer value (think San Francisco apartment rents and start-up / tech-market salaries / compensation).

A few years ago, looking at a set of prices for mediaeval Britain, one fact that struck me was that the cost of fuelwood and coal was the same, expressed in equivalent energy output. I quickly realised that this was a pretty obvious circumstance: if you could get the same amount of heat at lower cost from one fuel than the other, with handling and burning characteristics being similar, the two goods were perfect substitutes. It's as easy to burn wood as coal and vice versa.

(There's a similar item on HN now: https://medium.com/@zavidovych/what-we-can-learn-by-looking-... https://news.ycombinator.com/item?id=29882389)

What adjusts instead is the supply of each fuel. Where it's more difficult to provide wood (say, forests are being cut down faster than they're replenished), foresters are less willing to sell wood (presuming they can find alternative income and/or livelihood), and the net balance of fuel switches to coal. If coal becomes more expensive to mine (say, larger pumps and more fuel are required to drain water), then those costs of production mean less coal is included in the mix.

Energy is a fundamental physical attribute, and allowing for capital costs of generating plants, the end consumer doesn't care if electrons are motivated by light, wind, water, coal, gas, oil, atoms, or ambitious hamsters. You turn the switch, you get light.

Generators and electrical providers are interested in revenues (billing rates per kWh) and costs (capital + fuel + other expenses). And though it takes time to bring new generating capacity online, given time, it's the capital + fuel costs which tend to dominate.

The providers of fossil fuels face their own fixed extraction costs, plus other costs of production (usually interest service on debt for initial drilling or mining, or the purchase of a going concern). Moreover there isn't a single extraction price that covers all fossil fuel sources (any of coal, oil, gas, shale, etc.), but rather, each individual well or mine has a cost structure associated with it. Some are very low (Ghawar Oil Field, the Number One Well, Bahrain, say), some are quite high (a recently-fracked well, a deep-water offshore oil platform). Providers who can extract at low cost are receiving a large natural resource rent, effectively, which is the difference between their own extraction cost and the market price. The marginal producer has the problem that if the market price falls, it will be below their own cost of extraction, and they're selling product at a loss.

(I'm ignoring externalities in the form of both the pollution caused by the combustion of fossil fuels, which most people are well aware of in causing global warming and climate change, and the natural-resource cost of formation, which can be expressed as the difference between the time it's taken to create fossil fuels and the time it takes to consume them. This latter would amount to an increase in the price of some fossil fuels, such as petroleum, by a factor of about five million. A rational economic system would include such factors; ours does not.)

What happens as the cost of renewable resources, most especially solar and wind power, falls over the long term is that these become increasingly viable and competitive with fossil-based energy sources, and increasingly substitute for them, especially in electrical generation. And whilst the price of fossil-fuel-derived energy falls, the cost function for suppliers does not. Instead, high-cost providers are driven from the market, and their extraction operations (wells or mines) are shut down. Depending on the operation, such shut-downs may be reversible or not.

Put another way, a falling price for renewables (an increase in their supply function) induces a reduced demand function for energy (the market price paid will fall), and so against a constant supply function for fossil fuels, the quantity provided of fossil fuels ... actually falls.

For there to be an increase in the consumption of fossil fuels, either price or demand would have to increase.

The demand increase could occur through more efficient applications of energy. This effectively means that a fixed quantity of energy provides more value (equivalent to a price decrease in energy), and hence a demand increase.

But that's not what you've described.


This reminds me of programming through the decades: going from binary/punch cards to assembly to high level languages to even higher level languages (and libraries and frameworks): at each jump, programmers can do the same thing as before with less effort and time. Did this mean less demand for programmers?

Nope! Actually the opposite: per-programmer productivity rising meant more demand for programmers, because programmers became more useful, and it turns out “manipulate information” is a skill with nearly endless uses.


Economists don't know how to divide?

It appears to be increased consumption, if you don't do it 'per capita'.


Is this really a paradox, or an acknowledgement that efficiency will adopt the most cost-effective inputs?


This is why we can’t rely on technological progress alone to stop global warming.



