6DM's comments | Hacker News

In Firefox, adding a couple of raises causes the site to crash for me when trying to add my salary.

Edit: the site layout seems completely messed up in Chrome.


This whole morning I tried entering the data from oldest date to newest and it kept crashing, but if you start with your newest data points and work backwards it should work.


Hm, I never even thought about this style of art, I had to look it up: https://en.wikipedia.org/wiki/Corporate_Memphis


Doubt it will do anything to curb the current level of spam, not to mention it's transitioning from calls to texts now that people have learned to just not answer their phone.


A2P/10DLC registration seems to have curbed SMS spam for me significantly. As annoying as it was to comply with, I think it's useful (so far?)


I think it just comes in waves. Since the beginning of the year, I've received more spam texts and calls being silenced by my phone than I did all of last year.


What is that?


A2P = Application-to-Person: essentially any SMS/MMS sent to a phone by an app or automated process, and not by a human just texting.

10DLC = 10-digit longcodes; in the US (well, NANP), "regular" phone numbers are 10 digits long. This is as opposed to shortcodes, which are usually 5- or 6-digit numbers (though there are some that are shorter) that are sold to specific customers after an approval process where all US mobile carriers have to sign off on their use cases. If you spam, you get your shortcode revoked. They generally cost on the order of $1000+ per month, while you can usually get a longcode for $1/mon or less.

Over the past few years the US telcos have (due to regulatory action, not of their own choice) started requiring that anyone using 10DLCs for A2P use cases register: who they are, how responsible parties can be contacted, and what they plan to use the numbers for. Don't do this and your messages will likely be silently dropped.
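
For a concrete picture of what an A2P message over a 10DLC looks like in practice, here's a minimal sketch using Twilio's Python SDK (the credentials and numbers are placeholders, and the long code would need to be registered to an approved A2P campaign first, as described above):

    # Minimal A2P SMS send over a 10-digit long code (10DLC).
    # Placeholders only: real use needs a provider account and a long code
    # that has been registered to an approved A2P 10DLC campaign.
    from twilio.rest import Client

    client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

    message = client.messages.create(
        from_="+15550100001",  # registered 10DLC long code
        to="+15550100002",     # recipient's handset
        body="Your order #1234 has shipped.",
    )
    print(message.sid)  # unregistered traffic is likely to be silently dropped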


All my "unknown sender" texts get filtered to a separate view, and do not create a notification. If I'm expecting a text from an unknown number (such as a 2FA code) I'll just open the "Unknown Senders" view. The rest of the unknown messages are ignored.


Unfortunately, on iPhone you can't set some messages to unknown senders, and some spam calls seem to end up in known senders.


Do you have “filter unknown senders” turned on?


Yes, and that is how I know it does not work. There are senders I cannot get into the known list.


To an extent, having a polished, established look is about building trust.

If the website looks poorly executed, customers will walk. Think about it: if a site has zero reputation and looks really janky, would you enter your credit card?

Your average consumer will say no.


Yeah, but what's changed? Back in the day the big names I mentioned didn't have any polish at all at first, but people wanted what they had. Now people look for polish.


I have tried to evangelize unit testing at each company I've worked at and most engineers struggle with two things.

The first is getting over the hurdle of trusting that a unit test is good enough; a lot of them only trust end-to-end tests, which are usually very brittle.

The second is, I think, that a lot of them don't know how to systematically break a test down into pieces to validate, e.g. I'll do a test for null, then a separate test for something else _assuming_ not null because I've already written a test for that.

The best way I've been able to get buy-in for unit testing is giving a crash course on a structure that has a test suite per function under test. This allows for much lower LOC per test, which makes each test much easier to understand.

When they're ready I'll give tips on how to get the most out of their tests with things like boundary value analysis, better mocking, IoC for things like DateTime, etc.
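
For what it's worth, here's roughly what that structure looks like, sketched in Python's unittest (the names are made up; the same shape applies in NUnit/xUnit), including injecting the clock instead of reading the system time inside the code under test:

    # Sketch: one test suite (class) per function under test, with "now"
    # injected (IoC) so tests never depend on the real system clock.
    # is_expired() is an illustrative function, not from a real codebase.
    import unittest
    from datetime import datetime, timedelta


    def is_expired(expires_at: datetime, now: datetime) -> bool:
        """Something is expired once 'now' is at or past its expiry time."""
        return now >= expires_at


    class IsExpiredTests(unittest.TestCase):
        """Suite dedicated to is_expired(); boundary values get their own short tests."""

        def setUp(self):
            self.now = datetime(2024, 1, 15, 12, 0, 0)  # fixed, injected "now"

        def test_before_expiry_is_not_expired(self):
            self.assertFalse(is_expired(self.now + timedelta(seconds=1), self.now))

        def test_exactly_at_expiry_is_expired(self):
            self.assertTrue(is_expired(self.now, self.now))

        def test_after_expiry_is_expired(self):
            self.assertTrue(is_expired(self.now - timedelta(days=1), self.now))


    if __name__ == "__main__":
        unittest.main()

Each test stays a few lines long, and the boundary cases (just before, exactly at, just after) read directly off the test names.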


I've evangelized against unit testing at most companies I work at, except in one specific circumstance. That circumstance is complex logic in stateless code behind a stable API where unit testing is fine. I find this usually represents between 5-30% of most code bases.

The idea that unit testing should be the default go to test I find to be horrifying.

I find that unit test believers struggle with the following:

1) The idea that test realism might actually matter more than test speed.

2) The idea that if the code is "hard to unit test" that it is not necessarily better for the code to adapt to the unit test. In general it's less risky to adapt the test to the code than it is the code to the test (i.e. by introducing DI). It seems to be tied up with some sort of idea that unit testability/DI just makes code inherently better.

3) The idea that integration tests are naturally flaky. They're not. Flakiness is caused by inadequate control over the environment and/or non-deterministic code. Both are fixable if you have the engineering chops.

4) The idea that test distributions should conform to arbitrary shapes for reasons that are more about "because Google considered integration tests to be naturally flaky" than anything else.

5) Dogma (e.g. Uncle Bob's or Rainsberger's advice) vs. the idea that tests are an investment that should pay dividends and should be designed according to the projected payoff rather than to fit some kind of "ideal".


> The idea that unit testing should be the default go to test I find to be horrifying.

Kent Beck, who invented the term unit test, was quite clear that a unit test is a test that exists independent of other tests. In practice, this means that a unit test won't break other tests.

I am not sure why you would want anything other than unit tests? Surely everyone agrees that one test being able to break another test is a bad practice that will turn your life into a nightmare?

I expect we find all of these nonsensical definitions for unit testing appearing these days because nobody is writing anything other than unit tests anymore, and therefore the term has lost all meaning. Maybe it's simply time to just drop it from our lexicon instead of desperately grasping at straws to redefine it?

> It seems to be tied up with some sort of idea that unit testability/DI just makes code inherently better.

DI does not make testing or code better if used without purpose (and will probably make it worse), but in my experience when a test will genuinely benefit from DI, so too will the actual code down the line as requirements change. Testing can be a pretty good place for you to discover where it is likely that DI will be beneficial to your codebase.

> The idea that test realism might actually matter more than test speed.

Beck has also been abundantly clear that unit tests should not resort to mocking, or similar, to the greatest extent that is reasonable (testing for a case of hardware failure might be a place to simulate a failure condition rather than actually damaging your hardware). "Realism" is inherent to unit tests. Whatever it is you are talking about, it is certainly not unit testing.

It seems it isn't anything... other than yet another contrived attempt to try and find new life for the term that really should just go out to pasture. It served its purpose of rallying developers around the idea of individual tests being independent of each other – something that wasn't always a given. But I think we're all on the same page now.


> Kent Beck, who invented the term unit test, was quite clear that a unit test is a test that exists independent of other tests

Kent Beck didn't invent the term "unit test"; it's been used since the '70s (at minimum).

> I am not sure why you would want anything other than unit tests?

The reason is to produce higher quality code than if you rely on unit tests only. Generally, unit tests catch a minority of bugs, other tests like end to end testing help catch the remainder.


> other tests like end to end testing help catch the remainder.

End-to-end tests are unit tests, generally speaking. Something end-to-end can be captured within a unit. The divide you are trying to invent doesn't exist, and, frankly, is nonsensical.


> End-to-end tests are unit tests, generally speaking.

Generally, in the software industry, those terms are not considered the same thing, they are at opposite ends of a spectrum. Unit tests are testing more isolated/individual functionality while the end to end test is testing an entire business flow.

Here's an example of one end to end test (with validations happening at each step):

1-System A sends Inventory availability to system B

2-The purchasing dept enters a PO into system B

3-System B sends the PO to system A

4-System A assigns the PO to a Distribution Center for fulfillment

5-System A fulfills the order

6-System A sends the ASN and Invoice to system B

7-System B users process the PO receipt

8-System B users perform three way match on PO, Receipt and Invoice documents


> Here's an example of one end to end test

Bad example, perhaps, but that's also a unit test[1]. Step 8 is dependent on the state of step 1, and everything else in between, so it cannot be reduced any further (at least not without doing stupid things). That is your minimum viable unit; the individual, isolated functionality.

[1] At least so long as you don't do something that couples it with other tests, like modifying a shared database in a way that will leave another test in an unpredictable state. But I think we have all come to agree that you should never do that – going back to the reality that the term unit test serves no purpose anymore. For all intents and purposes, all tests now written are unit tests.


Every step updates shared databases (frequently plural). In the case of the fulfillment step, the following systems+databases were involved: ERP, WMS, Shipping.

Typically, in end to end testing, tests are run within the same shared QA system and are semi-isolated based on choice of specific data (e.g. customers, products, orders, vendors, etc.). If this test causes a different test to fail, or vice-versa, then you have found a bug.

If we call that entire sequence of steps a "unit" test, would you start with testing the entire sequence of steps, or would you recommend testing the individual steps first?

And if we did test the individual steps first, we would give that testing a different name? Like maybe "sub-unit" testing?


> Every step updates shared databases (frequently plural).

That's fine. It all happens within a single unit. A unit should mutate shared state within the unit. Testing would be pretty much useless without it.

> If we call that entire sequence of steps a "unit" test, would you start with testing the entire sequence of steps, or would you recommend testing the individual steps first?

For all intents and purposes, you can't test the individual steps. All subsequent steps are dependent on the change in inventory state in step 1. And the product of step one is undoubtedly internal state, so there is no way for the test to observe the state change in isolation (unless you do something stupid). You have to carry out the subsequent steps to be able to infer that the inventory was, in fact, updated appropriately.

After all, the whole reason you are testing those steps together is because you recognize that they represent a single instance of functionality. You don't really get to choose (unless you choose to do something stupid, I suppose).

> And if we did test the individual steps first, we would give that testing a different name?

If the individual steps can be tested individually (ignoring a case of you doing something stupid), it's not actually an end-to-end process, so your example would make no sense. Granted, we have already questioned if it is a bad example.


> For all intents and purposes, you can't test the individual steps.

Sure you can, and we did (that is a real example of an end-to-end test from a recent project), which also included testing the individual steps in isolation, which was preceded by testing the individual sub-steps/components of each step (which is the portion that is typically considered unit testing).

For example, step 1 is broken down into the following sub-steps which are all tested in isolation before testing the combined group together:

1.1-Calculate the current on hand inventory from all locations for all products

1.2-Calculate the current in transit inventory for all locations for all products

1.3-Calculate the current open inventory reservations by business partner and products

1.4-Calculate the current in process fulfillments by business partner and product

1.5-Resolve the configurable inventory feed rules for each business partner and product (or product group)

1.6-Using the data in 1.1 through 1.5, resolve the final available qty for each business partner and product

1.7-Construct system specific messages for each system and/or business partner (in some cases it's a one to one between business partner and system, but in other cases one system manages many business partners).

1.7.1-Send to system B

1.7.2-Send to system C

1.7.3-Send to system D

1.7.N-etc.

> And the product of step one is undoubtedly internal state, so there is no way for the test to observe the state change in isolation

The result of step 1 is that over in software system B (an entirely different application from system A) the inventory availability for each product from system A is properly represented in the system. Meaning queries, inquiries, reports, application functions (e.g. Inventory Availability by Partner), etc. all present the proper quantities.

To validate this step, it can be handled one of two ways:

1-Some sort of automated query that extracts data from system B and compares to the intended state from step 1 (probably by saving that data at the end of that step).

or 2-A user manually logs in to system B and compares to the expected values from step 1 (again saved or exposed in some way). This method works when the number of products is purposefully kept to a small number for testing purposes.

> If the individual steps can be tested individually (ignoring a case of you doing something stupid), it's not actually an end-to-end process, so your example would make no sense. Granted, we have already questioned if it is a bad example.

Yes, the individual steps can be tested individually. Yes, it is an end-to-end test.

> Granted, we have already questioned if it is a bad example.

It's a real example from a real project and it aligns with the general notion of an end to end test used in the industry.

More importantly, combined with the unit tests, functional tests, integration tests, performance tests, other end to end tests and finally user acceptance tests, it contributed to a successful go-live with very few bugs or design issues.


> Kent Beck, who invented the term unit test, was quite clear that a unit test is a test that exists independent of other tests

I vaguely remember him also complaining that there were too many conflicting definitions of unit tests.

Maybe that can be solved with another definition?

https://xkcd.com/927/

or maybe not.

I don't know many people who would describe a test that uses Playwright and hits a database as a unit test just because it is self-contained. If Kent Beck does, then he has a highly personalized definition of the term that conflicts with its common usage.

The most common usage is, I think, an xUnit-style test which interacts with an app's code API and mocks out, at a minimum, interactions with systems external to the app under test (e.g. database, API calls).
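
To make that concrete, a rough sketch in Python of that common usage (the class and API names are invented for illustration, not from any real app):

    # "Unit test" in the common sense: exercise the app's own code API and
    # mock out the external system (here, a payment API client).
    # PriceService and the gateway are invented names.
    import unittest
    from unittest.mock import Mock


    class PriceService:
        def __init__(self, gateway):
            self.gateway = gateway  # external dependency, injected

        def charge_with_tax(self, customer_id: str, amount: float) -> float:
            total = round(amount * 1.20, 2)          # pure logic under test
            self.gateway.charge(customer_id, total)  # external call, mocked in tests
            return total


    class ChargeWithTaxTests(unittest.TestCase):
        def test_charges_gateway_with_tax_added(self):
            gateway = Mock()  # stands in for the real payment API client
            service = PriceService(gateway)

            total = service.charge_with_tax("cust-1", 10.00)

            self.assertEqual(total, 12.00)
            gateway.charge.assert_called_once_with("cust-1", 12.00)


    if __name__ == "__main__":
        unittest.main()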

He may have coined the term but that does not mean he owns it. If I were him I'd pick a different name for his idiosyncratic meaning than "unit test" - one that isn't already overburdened with too much baggage.


> He may have coined the term but that does not mean he owns it.

Certainly not, but there is no redefinition that is anything more than gobbledygook. Look at the very definition you gave: That's not a unique or different way to write tests. It's not even a testing pattern in concept. That's just programming in general. It is not, for example, unusual for you to use an alternative database implementation (e.g. an in-memory database) during development where it is a suitable technical solution to a technical problem, even outside of an automated test environment. To frame it as some special unique kind of test is nonsensical.

If we can find a useful definition, by all means, but otherwise what's the point? There is no reason to desperately try to save it with meaningless words just because it is catchy.


The definition I gave is the one people use. Hate it or love it, you're not going to change it to encompass end-to-end tests, and neither will Kent Beck. It's too embedded.


> you're not going to change it

I might. I once called attention to the then-prevailing definition of "microservices" also not saying anything. At the time I was treated like I had two heads, but sure enough now I see a sizeable portion (not all, yet...) of developers using the updated definition I suggested that actually communicates something. Word gets around.

Granted, in that case there was a better definition for people to latch onto. In this case, I see no use for the term 'unit test' at all. Practically speaking, all tests people write today are unit tests. 'Unit' adds no additional information that isn't already implied in 'test' alone and I cannot find anything within the realm of testing that needs additional differentiation not already captured by another term.

If nothing changes, so what? I couldn't care less about what someone else thinks. Calling attention to people parroting terms that are meaningless is entirely for my own amusement, not some bizarre effort to try and change someone else. That would be plain weird.


Well, I don't regard unit tests as the one true way. I don't force people on my team to do it my way. When I get compliments on my work, I tend to elaborate and spread my approach. That's what I mean by evangelize, not necessarily advocating for specific criteria to be met.

I find that integration tests are usually flaky; it's my personal experience. In fact, at my company, we just decided to completely turn them off because they fail for many reasons and the usual fix is to adjust the test. If you have had a lot of success with them, great. Just for the record, I am not anti-integration or anti-end-to-end tests. I think they have a place, and just like unit tests shouldn't be the default, neither should they.

Here are the two most common scenarios where I find integration tests (usually end-to-end tests that get called integration tests) become flaky:

1) DateTime: some part of the business logic relies on the current date or time and it wasn't accounted for.

2) Data: it changes, gets deleted, expires, etc., and the test did not first create everything it needed before running (see the sketch below).
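
A sketch of what I mean for the second case (hypothetical names; the client is an in-memory stand-in only so the snippet runs on its own, whereas a real suite would hit the shared QA system). The point is that the test creates the data it depends on and cleans it up, rather than assuming a record that may have since been deleted or expired:

    # The test owns its data: create it in setUp, remove it in tearDown,
    # and use a unique id so tests sharing an environment can't collide.
    import unittest
    import uuid


    class OrdersClient:
        """Hypothetical stand-in for a client against the shared QA system;
        in-memory here only so the sketch is self-contained."""

        _store = {}

        def create_order(self, order_id, sku, qty):
            self._store[order_id] = {"sku": sku, "qty": qty}

        def get_order(self, order_id):
            return self._store[order_id]

        def delete_order(self, order_id):
            self._store.pop(order_id, None)


    class OrderLookupIntegrationTests(unittest.TestCase):
        def setUp(self):
            self.client = OrdersClient()
            self.order_id = f"it-{uuid.uuid4()}"  # unique per run
            self.client.create_order(self.order_id, sku="WIDGET-1", qty=3)

        def tearDown(self):
            self.client.delete_order(self.order_id)  # leave no shared state behind

        def test_created_order_is_retrievable(self):
            order = self.client.get_order(self.order_id)
            self.assertEqual(order["qty"], 3)


    if __name__ == "__main__":
        unittest.main()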

Regarding your points,

1) "Realism": that is what I referred to as trusting that a unit test is good enough. If it didn't go all the way to the database and back, did it test your system? In my personal work, I find that pulling the data from a database and supplying it with a mock are effectively the same thing. So it's not only real enough for me, but better, because I can simulate all kinds of scenarios that wouldn't be possible in true end-to-end tests.

2) These days the only code that's hard to test is from people who are strictly enforcing OOP. Just like any approach in programming, it will have its pros and cons. I rarely go down that route, so testing isn't usually difficult for me.

3) It's just been my personal experience. Like I said, I'm not anti-integration tests, but I don't write very many of them.

4) I didn't refer to google, just my personal industry experience.

5) Enforcing an ideal is a waste of time in programming. People only care about what they see when it ships. I just ship better quality code when I unit test my business logic. Some engineers benefit from it, some harm themselves in confusion; not much I can do about it.

Most of this is my personal experience, no knock against anyone and I don't force my ideals on anybody. I happily share what and why things work for me. I gradually introduce my own learning over time as I am asked questions and don't seek to enforce anything.

Happy coding!


> I'll do a test for null, then a separate test for something else _assuming_ not null because I've already written a test for that.

Honestly, this pedantry around "unit tests must only test one thing" is counter-productive. Just test as many things as you can at once; it's fine. Most tests should not be failing. Yes, it's slightly less annoying to get 2 failed tests instead of 1 fail that you fix and then another fail from that same test. But it's way more annoying to have to duplicate entire test setups to have one that checks null and another that checks even numbers and another that checks odd numbers and another that checks near-overflow numbers, etc. The latter will result in people resisting writing unit tests at all, which is exactly what you've found.

If people are resisting writing unit tests, make writing unit tests easier. Those silly rules do the opposite.


Just to clarify, I am not advocating for tests to only test one thing, rather that after you have tested for one scenario you don't need to rehash it again in another test.

Breaking a test down helps to clarify what you're testing and helps to prevent 80-LOC unit tests. When I test for multiple things, I look for the equivalent of NUnit's Assert.Multiple in the language that I'm in.

The approach I advocate for typically simplifies testing multiple scenarios with clear objectives and tends to make it easier when it comes time to refactor, fix, or just delete a no-longer-needed unit test. The difference, I find, is that now you know why, vs. having to figure out why.
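
For illustration, the rough Python equivalent of that idea, where unittest's subTest plays approximately the role of NUnit's Assert.Multiple: several related checks share one setup and every failing check gets reported, not just the first (classify() is an invented example function):

    # Several related checks in one test, all reported even if an early one fails.
    import unittest


    def classify(n):
        """Invented example: label a value as missing, even, or odd."""
        if n is None:
            return "missing"
        return "even" if n % 2 == 0 else "odd"


    class ClassifyTests(unittest.TestCase):
        def test_classify_covers_the_main_cases(self):
            cases = [
                (None, "missing"),
                (2, "even"),
                (7, "odd"),
                (2**31 - 1, "odd"),  # boundary value near a 32-bit overflow
            ]
            for value, expected in cases:
                with self.subTest(value=value):
                    self.assertEqual(classify(value), expected)


    if __name__ == "__main__":
        unittest.main()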


Your comment made me go look up my old reviews. It's crazy; most of them were deleted. I would have expected to receive some kind of notification.

I thought I could sift through the rubble to discern fake from real but now I know Amazon reviews are absolutely worthless.


Wonder if we could make a browser extension that stores and presents Amazon reviews.

Could be useful to fill gaps on other sites too, for example when an uploader turns off comments on YouTube.

It would be a pretty big middle finger to Amazon if the primary site for Amazon seller reviews was not actually Amazon, but a third-party site/app.


Yeah I've been wanting to have that "comment on anything" for a while.

It's possible in technical theory. But sharing the comments would be hard. Federation might work. Spam moderation could be hard. And most websites refuse to go into iframes, so your options are limited to browser extensions, SOCKS proxies, and Opera Mini style apps that re-render the page. Which means you'll get less than a percent of a percent of people to use it.


Ditto. I imagined it like leaving a post-it note on a page. If you found a fix for a product you could leave a note on the company's FAQ page. The problem is spam, in all its forms. Can you imagine political pages?


This has been floated out a couple of times, but there’s one more factor you’ve missed: The companies really want comments/reviews done by a platform they control and they will fight against 3rd party methods.


If it's done as a browser extension, the way Keepa is (it's used to monitor price history on amazon.com), there's absolutely nothing the companies can do about it.


Tactics that have been used against similar tools:

- Changing the generated HTML so that the extension's selectors no longer grab the right data (this becomes a cat-and-mouse game and there's a non-zero chance that the extension authors tap out, especially if there's not enough money to sustain the workload)

- Filing nuisance complaints (DMCA is a standard one here, but there are lots of others)

- Filing a copyright lawsuit in court

- Filing other court lawsuits (lost income, slander, etc)

In many cases the business knows they are fighting a losing battle but all they need is more resources, more time, or more patience.


Yeah, we already have a few extensions similar to this for viewing price history, such as Keepa. I use this one all the time since prices jump around so much; when I look at a product page on Amazon, Keepa inserts a big window with a price graph and other pickable info.


Welcome to the world of enshittification of platforms.


Lower down, the article explains why:

> Yet despite experiencing large seasonal growth this year, the ozone hole is still decreasing in size overall. "Based on the Montreal Protocol and the decrease of anthropogenic ozone-depleting substances, scientists currently predict that the global ozone layer will reach its normal state again by around 2050," said Claus Zehner, ESA's mission manager for Copernicus Sentinel-5P.


So in other words, more clickbait doomer-ism titles. It's disappointing to see this type of junk "journalism" be submitted to, and upvoted on, HN.


I wonder: would clickbait-style articles be submitted if there were no karma points?


So if we can survive the next thirty years, we're good. I can barely look forward to the next five.

Edit: Woah. Humor doesn't translate well on this site, does it?

I'm not saying the world is ending. I'm just saying the next five years look bloody bleak; at least for me, turning 40 doesn't look fun.

With wars and the environmental mess going on, the world looks very dystopian compared to the vision where it could be utopian in thirty years. Good always triumphs; you just have to fight past the evil first, and that could happen.


This sort of ridiculous, silly rhetoric needs to end. It's not just unproductive, it's counter-productive.

"ThE wOrLd Is EnDiNg!" is such a tired line, used by religious fanatics since the beginning of time. Modern-day climate activism has indeed become it's own sort of religion, complete with the same sort of doom-and-gloom "repent now before it's too late" rhetoric and increasingly not based on facts or science but instead emotional ploys, fearmongering, and faith based arguments.

No.. the world isn't ending tomorrow, within 5 years, or even in the next few decades. Yes, we should do better to protect the environment.

We just don't need the sensationalism - it turns people off and away from all the silliness.


If you are over, say, 25, then you know that the last 10 years have been _exceptional_ years for our climate, and that the time for normal non-sensational warnings was 40 years ago or 50 years ago.

Look at a graph of CO2 emissions and you realize that we are in a place where what was alarming in the 80's was the result of only _half_ of the CO2 pumped into the atmosphere since 1900 ( https://ourworldindata.org/co2-emissions )

And now we are using energy to cover for the effects of using energy a decade ago.

Things are worse, and over the next five or ten years they will reach the point where heat events and cold events become more deadly for more people, and the fact that you're tired of hearing about it isn't the problem.

The problem is that people haven't been taking the danger seriously and now it's an imminent and deadly danger.


The climate over the last 10 years hasn't felt or been exceptional. The news has certainly been sensational.


Are you kidding? How many temperature records were broken? How many "once in a hundred years" events happened? Texas had two such winter storms in less than 10 years. There have been record droughts in France, and then record rainfalls. An unprecedented heat dome in the Pacific Northwest. Vast areas in the Middle East and South Asia, and hell, the US too, are approaching unliveable wet-bulb temperatures in the summer.

If that isn't exceptional to you, what would be?


Happens all the time. Media wants you to believe otherwise so the rich can get richer and have more control.


Which part exactly happens all the time? Each year setting temperature records? "Once in a lifetime" climate disasters? Wildfires getting worse and worse?

On the wet bulb temperature front, it's literally unprecedented and quite impactful. Millions of people would be unable to survive where they currently live and would need to move.


Isn't it amazing how actual people can put their heads in the sand and say "Oh it's just media conspiracy and lies to fuck over us poor people!" about anything they don't like.

Added irony that the person is a commenter on HN, who is probably in the top 10% of the richest people in the world.


Monday this week, it was 24°C here, a full 9°C above the seasonal average for Berlin for the beginning of October.

I'm aware this is anecdata, but I'm not used to needing to wear shorts in September, let alone October.


Meanwhile, the global troposphere is averaging 0.56C higher than it was four decades ago.


> If you are over, say, 25, then you know that the last 10 years have been _exceptional_ years for our climate

And if you know anything about statistics, it's that anecdotes are not the plural of data and that humans have a severe recency bias.

The facts about climate change are damning enough; I don't know why people feel the need to resort to poor logic and fallacies when describing climate change. The truth is actually on your side. Let's present that instead.


> The facts about climate change are damning enough; I don't know why people feel the need to resort to poor logic and fallacies when describing climate change. The truth is actually on your side. Let's present that instead.

In fairness, that demonstrably hasn't worked on the politicians for the last hundred years.


> And if you know anything about statistics, it's that anecdotes are not the plural of data and that humans have a severe recency bias.

> The facts about climate change are damning enough; I don't know why people feel the need to resort to poor logic and fallacies when describing climate change. The truth is actually on your side. Let's present that instead.

I don’t think that the majority of people deal well when confronted with statistics.


The long-term and accelerating increase in extremes while averages rise is exactly the _data_ I am describing.


"The problem is that people haven't been taking the danger seriously and now it's an immanent and deadly danger."

Taking something seriously is something very different from a doomsday cult.


It's slowing down at least.


The rate of increase in emissions is slowing down. Which means emissions are still increasing.


Yeah, the third derivative is negative. Just needs to continue, perhaps speed up, but at least something's going in the right direction.


Especially because we are talking about the ozone layer, not climate change.

The ozone layer is in the process of being fixed. It is not going to kill us (any more than the increased cancer has in the past). Yes, it would be bad if we started releasing CFCs again, but the status quo on this one is good. Literally one of the best success stories of environmentalism.

The biggest problem with environmentalists is that there is a lunatic fringe who have no idea what they are talking about and spout nonsense; this makes the real issues look like BS too, even when they are not.


> We just don't need the sensationalism - it turns people off and away from all the silliness.

By contrast, this is the exact sort of haughty condescension that pushes people further away, often right into the hands of real extremists.

The sensationalism is there because the problems of our ecology are literally sensational. Dismissing that is the real turnoff here.


> literally sensational

The zealots pushing the sensationalism often don't comprehend the forces at play - both natural and human-made. It's just religious fervor regurgitated because they passed some sort of faith purity test and were rewarded by other zealots with internet points.

There's conflicting motives here - one that says the California coastline should always remain exactly how we enjoy it today, and one that says all of California used to be ocean floor.

Any affirmative action we take to "preserve" the environment how we like it is in itself destructive to natural forces.

With that said, any human-made actions that accelerate natural forces or create un-natural forces should indeed be minimized or removed.

The problem is the time scale. Zealots like to scare everyone into believing the world ends tomorrow - just like actual religious zealots tend to do to encourage conversion. If they can scare you enough to join them, they see that as a win.

The world is getting greener by the day - but these things take time. We're just not ready to have a 100% renewable system yet, but one day we will be there. The incentives to get there cannot be allowed to be fear - it must be logic. A greener future has to be the logical move.


This is a straw man argument: those who don't agree with me must be ignorant religious fanatics, surely not people who have been listening to what scientists working on the subject have been saying for more than 30 years.

Nobody really believes the world is going to disappear under a big wave; it's of course a shorthand for "I have strong reasons to believe that my living conditions will degrade terribly over the next few years (and I think we are collectively ignoring the problem?)"

I feel like you are denying the level of denialism the topic gets, which to me is a much bigger concern than people being overly alarmist. I am not sure that level of denialism is the fault of fear-mongers rather than the fact that most people want to bury their heads in the sand, not change anything about their way of life, and carry on business as usual.

> The world is getting greener by the day - but these things take time. We're just not ready to have a 100% renewable system yet, but one day we will be there. The incentives to get there cannot be allowed to be fear - it must be logic. A greener future has to be the logical move.

Sure, but that's sweeping a lot of important questions under the rug. The climate doesn't care whether we are trying. Most scientists say it's not going fast enough and that not going fast enough will put us in big trouble. Another question is how much fossil fuel will be necessary for the transition; are we sure we are spending most of our fossil energy in order to make the switch?


> The zealots pushing the sensationalism often don't comprehend the forces at play - both natural and human-made

This is the exact sort of haughty condescension that pushes people further away, often right into the hands of real extremists.


The world won’t end for billions of years. In the next few decades, a whole bunch of humans will be financially devastated, physically relocated, and made dangerously jealous, defensive, and angry by climate change. Spin it how you like.


> In the next few decades, a whole bunch of humans will be financially devastated, physically relocated, and made dangerously jealous, defensive, and angry by climate change.

Yes, or dead.


Sure, but the original post wasn't talking about climate change.


Totally agree. The world is the best it has ever been by nearly any metric.

My father had to _literally_ practice hiding under his desk at school with a Geiger counter in preparation for nuclear war. He was then _drafted_ into Vietnam... I don't think I have it so bad.


"The world is the best it has ever been by nearly any metric."

How about fertility of the soil, area of land covered by desert vs forest, number of insects and in general diversity of wildlife, amount of fossil fuels burned every minute, amount of opioids intake, antidepressant consumption, number of days you have to wait to see a specialist, inflation rate, ...

So good to hear that you are currently doing well. And we surely could be worse off, and many things certainly did improve. But give it some more geopolitical escalation and you getting drafted as well in the near future remains a very real possibility.


> amount of opioids intake

*embarrassed British cough regarding the Opium Wars*

That said, your list is half global, half national. We're not all suffering from the mistakes of the USA in over-prescribing things.


"That said, your list is half global, half national."

Yes .. like the post I was replying to.


I mean, if we look at history, climate change (even much smaller than what we are facing) tends to trigger wars. Suddenly land that was valuable isn't, and vice versa. Even in the most optimistic scenario it will probably destabilize the world order and result in conflicts.

Like, e.g., weather events are thought to be one of the likely factors in the late Bronze Age collapse. There's evidence that unusual climate events coincided with the fall of Rome. Obviously a lot of other factors were at play; climate just pushed something brittle to its breaking point.


And yet there's a proxy war, some would even call it a direct war, between Russia and NATO that has some non-zero probability of going nuclear.


I agree, but people have been ringing the bells about climate change for 40 years. When will we actually do something about it? I think the increased sensationalism is, at the very least, a sign that people are becoming afraid about our lack of action.


This is a false dichotomy. We are doing things about it - but those things take time to mature and become ready. The zealots just cannot fathom things not being good enough to act on right now.

We simply are not ready yet to ditch fossil fuels and go full-electric - among other things. There will be a day, yes, and that day might be within our lifetimes (hopefully), but forcing it right now when it's clearly not ready is so incomprehensibly short-sighted that it borders on insanity. We cannot even keep the lights on year-round as it is.

Imagine going full-electric with everything today. The amount of pain the nation would feel would lead to an irreversible backwards slide of all the progress that's been made. It would set the climate activist agenda back decades.

My point is, the cooler heads need to prevail here. We are marching ever-towards a cleaner future - we just aren't there yet. We need to stop the silly posturing and tribal signals, such as plastic straw bans and what-not, and focus on what actually matters... and not force it until it's actually ready.


I strongly disagree that we are doing something about it. We are doing the bare minimum, if that. Until we start heavily regulating corporations on their carbon impact and taxing further carbon from being extracted from the ground, we aren't truly doing something about climate change.


You are saying we are getting there, but don't give evidence that the pace of change is fast enough. Instead you call everyone who says the pace isn't fast enough (which includes most of the scientists working on climate change btw) "sensationalists" and "zealots".

Do you have any evidence that the current pace of change is sufficient to avert the most damaging aspects of climate change?


I couldn't disagree with you more. We're barely scratching the surface of what needs to be done to preserve some stability in our climate and by extension our way of life for the next decades.

So yes of course we should ban plastic straws, even though it looks a bit silly given the much larger sources of environmental damage that have not been banned yet. We should seek to ban these too, and as soon as possible.

And indeed, doing this will cause some pain. Things will become more expensive (or more fairly priced, if you consider externalities). We may be able to buy less junk. Would that be an "irreversible backwards slide of all the progress that's been made"?

I guess this makes me a Zealot in your book.


> We just don't need the sensationalism - it turns people off and away from all the silliness.

And flat out ignoring it turns you into a Republican, and clearly hasn't accomplished anything so far.

You can't complain about, say, food prices, and also not recognize how the weather in the last few decades is a contributor to food yields.

But no, expensive food prices are the President's fault (either current or prior), and talking about it is sensationalism.


> No.. the world isn't ending tomorrow, within 5 years, or even in the next few decades.

Ah yes, not ending “in the next few decades.” What a reassurance.


The people saying we're close to tipping points are scientists, with research papers, not religious nuts.


Honestly always fascinated by this kind of line. Like pedantry aside, it always reads like we are all working together towards some big press briefing in the future, to let everyone else know about climate change for the first time. Like who is the audience here? The swing voters here aren't going to decide the election!


I'm also turning 40, but I'd much rather be like that than a young person right now..


Sounds like we need another term than "normal" state if an ozone layer with a hole is all I've known my entire life.


Your life is a hell of a lot shorter than the life of the ozone layer, so it's kinda weird to constrain it to that timeframe


Yeah, but people do this. Consider for a moment that most environmentalism is really just self-preservation... consider the experience of the "earth" over its lifetime, compared to humanity's...

Feel the need to add: *just a thought, I'm not some anti-climate-change nut job...


It's unlimited but throttled during peak usage so that the primary network's customers get priority. I think Cricket Wireless is an MVNO; see the drawbacks of an MVNO here:

https://utilitiesone.com/comparing-mobile-virtual-network-op...


I used to live in WV and can attest that coal mining jobs have been on the decline for some time. The reason is that the equipment is bigger and better and they can mine more effectively, resulting in needing far fewer workers. You can skip to the conclusion in the linked article [1], which states:

> What happened to coal jobs is even simpler. It is the same thing that happened throughout much of the country — productivity gains led to fewer workers needed to produce the same output.

Sure, some people will re-train and join a new industry, but we still need heavy equipment operators for other construction sites. There is potential that a big mining site for lithium could open up [2]; then we would need miners yet again. There will unfortunately be those who don't want to move or retrain and will instead resort to just doing drugs.

Personally I think we should welcome phasing out coal and, as policy, offer help to those whose jobs were displaced to prevent the creation of meth towns.

[1] https://siepr.stanford.edu/publications/policy-brief/what-ki... [2] https://www.theverge.com/2023/8/30/23849619/lithium-ev-batte...


I imagine that getting a heavier car up to speed in stop-and-go traffic would be more energy-intensive. Also, driving it uphill has to use up a lot more battery since it's so much heavier. The weight reduction should improve [1] tire longevity and reduce maintenance costs. Overall I think it could help with range and costs, so it seems like a worthy thing to focus on.

[1] - https://arstechnica.com/cars/2022/12/heres-why-electric-vehi...


Yeah, weight also has all sorts of knock-on effects on other components. Suspension, tires, brakes, and frame all need to get beefed up for extra weight and/or wear quicker.


Unless it's a steep grade you'll recoup a lot of that uphill energy going downhill. The energy used going up the hill is just stored as potential energy.


Regen efficiency is nowhere near 100% though. You still lose a lot due to friction/conversion losses/etc.


You don't need to do regen to get that energy back. Your car faces significant air and rolling resistance going down the highway; it's having to put in energy to keep the car going 40+ mph. Using that stored energy by just rolling down the hill is extremely efficient so long as you're not having to do any braking, and unless you're rolling down a mountain you probably don't need to "ride the brakes". Hence my "unless it's a steep grade".

The energy does not need to go back into the battery. It offsets the energy needed from the battery to keep the car going at speed.

Regen efficiency makes no difference on most hills, because you probably won't be using the regen for it.
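
As a rough back-of-envelope (all of these numbers are illustrative assumptions, not measurements):

    # What an extra 300 kg costs on a 100 m climb, and how much comes back.
    g = 9.81            # m/s^2
    extra_mass = 300.0  # kg of extra vehicle/battery weight (assumed)
    climb = 100.0       # m of elevation gain (assumed)

    extra_pe_kwh = extra_mass * g * climb / 3.6e6  # extra energy to lift the mass
    print(f"Extra energy to climb: {extra_pe_kwh:.3f} kWh")  # ~0.082 kWh

    # Coasting back down, that potential energy directly offsets what the motor
    # would otherwise draw against air and rolling resistance. Only the part that
    # has to go through the brakes is at risk; if it goes through regen instead,
    # an assumed ~65% round-trip efficiency still recovers most of it.
    regen_efficiency = 0.65  # assumption
    print(f"Recovered even if it all went through regen: {extra_pe_kwh * regen_efficiency:.3f} kWh")

So the extra climb energy for a few hundred kilograms is on the order of a tenth of a kWh per 100 m of elevation, and most of it comes back on the descent one way or another; the longer-term cost of the weight is the extra rolling resistance and the beefier components mentioned above.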


Regen braking should in theory help mitigate that. A lot of that energy is recaptured when you slow down. Heavier vehicles have more energy to recapture, assuming the drivetrain is up to it.

