Hacker News | prasadjoglekar's comments

And according to TFA, those poles and wires for transmission are a large part of the forecast increase in costs.

Ideally, the folks who request the new plants and transmission lines pay for them, but it appears tech cos are attempting to pass the transmission cost burden onto residential consumers.


Poles and wires for a datacenter should be much cheaper than for a subdivision.

Privatize the gains, publicize the losses

A bundle of streaming services. That you can surf and choose one from and just watch. And a TV guide that tells you what's running where.

Gee...sounds a lot like Cable TV.

Sarcasm aside, the one problem folks had with Cable was the inability to upgrade without getting locked into another 2 year contract. Streaming solves that one problem while enshittifying all the other good things.


I thought the main complaint was "I'm paying for channels I don't watch!" while not realizing the channels they were watching were actually what they were paying for, and the rest of the stuff was just lumped in for nearly free to make the lineups look bigger and more appealing.

For some reason I always saw it in reverse, that I had to pay to subsidise a set of channels I'm _not_ interested in for the one I am.

Chances are that's not what was happening unless you were watching the channels nobody else watches.

I haven't looked into cable pricing for a while, but I remember a few of the contract disputes that caused some big channels to drop off big cable providers in the 2010s. The per-customer price those channels were asking from the cable companies was a significant chunk of what a package would cost the customer (e.g. upwards of $1).

Meanwhile some of the less common ones were a few cents per customer.

That means that unless you weren't watching any of the $1+ ones, you were mostly actually "paying for what you're watching".
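To make that concrete, here's a rough sketch with made-up numbers (not real carriage fees) of how per-subscriber channel costs could add up inside a package:

```python
# Illustrative numbers only: a sketch of how per-subscriber carriage
# fees might break down inside a hypothetical cable package.
carriage_fees = {
    "big_sports_channel": 9.00,    # marquee channels historically cost dollars per sub
    "big_news_channel": 1.50,
    "popular_drama_channel": 1.00,
    "niche_channel_a": 0.05,       # filler channels cost pennies per sub
    "niche_channel_b": 0.03,
    "niche_channel_c": 0.02,
}

total = sum(carriage_fees.values())
big_channels = sum(v for k, v in carriage_fees.items() if v >= 1.00)

print(f"total per-subscriber content cost: ${total:.2f}")
print(f"share going to the big channels: {big_channels / total:.0%}")
```

With numbers shaped like these, the handful of expensive channels account for nearly all of the content cost, while the dozens of filler channels are rounding error.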


I assure you that there are many people who do not need nor want ESPN and knew damn well they were directly paying it.

And those people were having part of their package subsidized by the people who were watching ESPN but not the other channels.

> Gee...sounds a lot like Cable TV.

Honestly, Cable companies could make a comeback by using their relationships with producers to actually be a "one stop shop" streaming service. There's definitely a pain point to having to be subscribed to so many different services just to cover the gamut of shows and movies.


> .. the one problem folks had with Cable was the i...

and hardware rental fees

ads on top of your service

bundling a bunch of channels you didn't ask for and increasing the price

outages

the list goes on


Flo shouldn't have sent those data to FB. That's true. Which is why they settled.

But FB, having received this info, proceeded to use it and mix it with the other signals it gets. Which is what the complaint against FB alleged.


I wish there was information about who at Facebook received this information and “used” it. I suspect it was mixed in with 9 million other sources of information and no human at Facebook was even aware it was there.

Is your argument that it's fine to just collect so much information that you can't possibly responsibly handle it all?

In my opinion, that isn't something that should be allowed or encouraged.


I'm not the OP, but no. I think their point is that if you tell people this data will be used for X, and tell them not to send sensitive data that way, and they do it anyway, you can't really be held responsible for it; the entity who sent you the data and ignored your terms should be.

Not at Facebook, but I used to work on an ML system that took well-defined and free-form JSON data and ran ML on it. Both were used in training and classification. Unless a human looked, we had no idea what those custom fields were. We also had customers lie about what the fields represent for valid and less valid reasons.

Without knowing how it works at Facebook, it's quite possible the data points got slurped in, the models found meaning in the data and acted on it, and no human knew anything about it.
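For anyone who hasn't worked on such a pipeline: a common pattern (and this is a generic sketch, not Facebook's actual system) is to hash arbitrary field names into anonymous feature buckets, so no human ever sees what the customer-supplied fields mean:

```python
import hashlib

def hash_features(record: dict, n_buckets: int = 1024) -> dict:
    """Flatten arbitrary JSON-style fields into hashed feature buckets.

    No human inspects the field names: whatever keys the sender chose
    ("fav_color", "cycle_phase", ...) are hashed into anonymous bucket
    indices and fed to the model downstream.
    """
    features = {}
    for key, value in record.items():
        token = f"{key}={value}"
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % n_buckets
        features[bucket] = features.get(bucket, 0) + 1
    return features

# Two very different payloads go through the exact same code path;
# nothing in the pipeline flags one field as more sensitive than another.
a = hash_features({"fav_color": "blue", "shoe_size": 9})
b = hash_features({"cycle_phase": "luteal", "symptom": "cramps"})
print(len(a), len(b))
```

Once the names are hashed away, "the model found meaning in the data" is exactly what happens: sensitive and innocuous fields look identical to everything downstream.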


How it happened internally is irrelevant to whether Facebook is responsible. Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!

There is a trail of people who signed off on this implementation. It is the fault of one or more people, not machines.


>Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!

We can argue the "moral" aspect until we're both blue in the face, but did facebook have any legal responsibilities to ensure its systems didn't contain sensitive data?


So they shouldn’t be punished because they were negligent? Is that your argument?

I think their argument is that FB has a pipeline that processes whatever data you give it and the idea that a human being made the conscious decision to use this data is almost certainly not what happened.

"This data processing pipeline processed the data we put in the pipeline" is not necessarily negligence unless you just hate Facebook and couldn't possibly imagine any scenario where they're not all mustache-twirling villains.


Yeah, sorry, no, I have to disagree.

We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"

LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.

Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.

If Facebook did indeed build a data pipeline and ad-targeting system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.


What does the system look like where a human being individually verifies every piece of data fed into an advertising system? Even taking the human out of the loop, how do you verify the "legality" of one piece of data vs. another coming from the same publisher?

None of your examples have anything to do with the thing we're talking about; they're just meant to inflame emotional opinions rather than engender rational discussion about this issue.


That's not my problem to solve?

If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.

You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.

Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.

> None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.

Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of their actions, and b) escaping accountability.

That context matters.


I often think about what having accountability in tech would entail. These big tech companies only work because they can neglect support and any kind of oversight.

In my ideal world, platforms and their moderation would be more localized, so that individuals would have more power to influence it and also hold it accountable.


It's difficult for me to parse what exactly your argument is. Facebook built a system to ingest third party data. Whether you feel that such technology should exist to ingest data and serve ads is, respectfully, completely irrelevant. Facebook requires any entity (e.g. the Flo app) to gather consent from their users to send user data into the ingestion pipeline per the terms of their SDK. The Flo app, in a phenomenally incompetent and negligent manner, not only sent unconsented data to Facebook, but sent -sensitive health data-. Facebook then did what Facebook does best, which is ingest this data _that Flo attested was not sensitive and collected with consent_ into their ads systems.

So let's consider the possibilities:

#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.

#2. Facebook had inadequate mechanisms for evaluating their partners, and that while they could have caught this problem they failed to do so, and therefore Facebook was negligent.

#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.

Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.

pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.

If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.

Does that clarify my position?


No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, combined with the billion+ data points that Facebook receives every single second, was sent to Facebook with the label that it was "consented and non-sensitive health data" when it most certainly was not consented and very sensitive health data. But this is the fault of Flo. Not Facebook.

You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster their signal detection when they receive data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which, per Flo's attestation, it did). This case is very squarely #1 in your example and maybe a bit of #2.


If FB is going to use the data, then it should have the responsibility to check whether they can legally use it. Having their supplier say "It's not sensitive health data, bro, and if it is, it's consented. Trust us" should not be enough.

To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.


>To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.

AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There are no such obligations for other sensitive data.


Mens rea vs actus reus.

In some crimes actus reus is what matters. For example if you're handling stolen goods (in the US) the law can repossess these goods and any gains from them, even if you had no idea they were stolen.

Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or other documented process that could otherwise be used in discovery. "If you don't admit your product could be used for wrong doing, then it can't!"


>Facebook did not make a conscious decision to process this data.

Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.

> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive is has the appropriate consents

This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.

If those systems exist, they clearly failed to actually work. However, the court documents indicate that Facebook didn't build out systems to check if stuff is health data until afterwards.


> Facebook did not make a conscious decision to process this data. It just did.

What everyone else is saying is what they did is illegal, and they did it automatically, which is worse. What you're describing was, in fact, built to do that. They are advertising to people based on the honor system of whoever submits the data pinky promising it was consensual. That's absurd.


"doing everything they could" is quite the high standard. Personally, I would only hold them to the standard of making a reasonable effort.

Yup, fair. I tried to acknowledge that in my paragraph about KYC in a follow-up edit to one of my earlier comments, but I agree, the language I've been using has been intentionally quite strong, and sometimes misleadingly so (I tend to communicate using strong contrasts between opposites as a way to ensure clarity in my arguments, but reality inevitably lands somewhere in the middle).

It is necessarily negligence if they are ingesting a lot of illegal data, right? I mean, it could be the case that this isn’t a business model that works given typical human levels of competence.

But working beyond your competence when it results in people getting hurt is… negligent.


You're absolutely right, a human being didn't make the conscious decision to use this data. They made a conscious decision to build an automated pipeline that uses this data and another conscious decision not to build in any checks on the legitimacy of said data. Do we want the law to encourage responsibility or intentional ignorance and plausible deniability?

I would expect an app with 150 million active users to trigger some kind of compliance review in Meta

This is the argument companies use for having shitty customer support. "Our business is too big for our small support team."

Why are you scaling up a business that can't refrain from fucking over customers?


Two different issues IMO. Piracy is depriving someone of payment for an item for which payment was expected. Neither you nor Perplexity may pirate a DVD that you didn't buy.

Copyright usually doesn't prevent copying per se, it's the redistribution that is violative. You, as well as Perplexity are free to scrape public sites. You'll both be sued if you distribute it.


If you have documented proof, please do what the press release says and ask that excess rents be refunded. Someone has to submit comments, or else it will go through without a hitch.

As required by the Tunney Act, the proposed settlement, along with a competitive impact statement, will be published in the Federal Register. Any interested person should submit written comments concerning the proposed settlement within 60 days following the publication to Danielle Hauck, Acting Chief, Technology and Digital Platforms Section, Antitrust Division, U.S. Department of Justice, 450 Fifth Street NW, Suite 7050, Washington, DC 20530. At the conclusion of the public comment period, the U.S. District Court for the Middle District of North Carolina may enter the final judgment upon finding it is in the public interest.


Aluminium

Au contraire, that is much closer to the definition of insurance. We don't want people to go bankrupt over an unforeseen medical issue like cancer. Insurance should cover that.

Routine care, including shit that's a little unlucky should be paid for out of pocket.

In the US, health care and health insurance have become synonymous such that all the good bits of insurance are out the door and all the bad ones have stayed. And polluted the true cost of getting simple, routine care.


In that vein, health insurance is never insurance.

Having a kid is not an accident for most modern people, and should never be seen as an event that could bankrupt anybody.

Hence, deferring to the definition of insurance is futile.

Then again - the US is the only country I know of that insists that health is something you insure.


You need to broaden your horizons then - look at the medical plans that are on offer in India. Or the travel medical insurance that every EU country requires you to carry. They are classic insurance.

Having a kid without complications should not cost $50K. It should be a few grand at the most. If the kid now needs NICU, then yes, that's what insurance is for.


>>- look at the medical plans that are on offer in India.

The ones that companies offer are quite good, actually.

I'd have depleted my life savings and gone bankrupt several times around the COVID years with my parents' health, if I didn't have company health insurance.

Good for me, because I knew people at the hospital sitting in the waiting lobby literally crying because they were done financially. Like finished for life.

Having said this, in India the market for these things is still building up, and given how big India is, it will take years before it reaches the US stage of profit seeking.


So, I expanded my horizons. Besides the States, there is South Africa, which treats having a kid as some catastrophic event that might bankrupt you.

In EU countries the spirit of health insurance is socializing the cost / solidarity, which we explicitly do not consider in this thread - please read the parent comments if that was not clear.


3k out-of-pocket for a random mishap could easily bankrupt many people.

I scratched an eyeball recently. The cost turned out to be €50 for drops, which isn't covered by single-payer insurance. 3k out of pocket would be pretty bad even on a nice salary, and could cause big issues for a massive portion of people.


This is the definition of an asset bubble in my books. In 2007 +/-, this is exactly what happened with housing prices. People would buy an asset and watch it appreciate doing nothing.

Why go to your forklift operator job at Home Depot if you can make $50K per year in asset appreciation by sitting and doing nothing?


Unredeemed gift cards are a liability for the issuer in perpetuity in many US states, including CA. In many others the expiry is at least 5 years from the date of issuance.

Having looked at Solar for my house in NY, I can only summarize as this:

Solar tax credits are subsidies for the well off. The poorest folks can't afford the cash cost of panels and don't have enough taxable income to use the offsets.

The workaround to the high cost has been to lock less diligent people into 30 year power purchase agreements in return for no upfront costs. The lessor takes the subsidies and credits and it just creates another lien on the house.
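The taxable-income point is worth spelling out. A nonrefundable credit can only offset taxes you actually owe, so with made-up numbers (and ignoring real-world carryover rules), the same panels yield very different benefits:

```python
# Hypothetical numbers: a sketch of why a nonrefundable credit (like the
# US federal solar credit) is worth less to households with low tax liability.
def usable_credit(system_cost: float, credit_rate: float, tax_liability: float) -> float:
    """A nonrefundable credit can only offset taxes actually owed
    (simplified: real rules allow carrying unused credit forward)."""
    return min(system_cost * credit_rate, tax_liability)

# A 30% credit on a $25,000 install:
print(usable_credit(25_000, 0.30, 12_000))  # higher earner: the full $7,500
print(usable_credit(25_000, 0.30, 2_000))   # low earner: only $2,000 usable this year
```

Same system, same sticker price, but the household with little tax liability captures a fraction of the subsidy, which is exactly the gap the lease/PPA companies step into.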


> Solar tax credits are subsidies for the well off.

Sounds like a good thing to me! Subsidizing things that benefit the whole of society is 100% a good thing, even when rich people take advantage of them.


A valid point, but to me this sounds like you would want such a program to be a wealth redistribution program at the same time, and I believe trying to do that would just diminish the effectiveness (=> less panels built per spent tax dollar), and is better tackled separately.

Greater adoption pushes prices down so that the less well off can afford it. But it's not so much that the poor need to switch; rather, there are more households, regardless of income, who do switch to reduce fossil fuel reliance.
