Seems like this might make using passwords compliant with FIPS 140-2. (Not sure, so maybe someone else can share their opinion.) Previously I heard in a few places that people would use LDAP to delegate the auth to something else, e.g. here: https://news.ycombinator.com/item?id=12129906
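(For reference, the LDAP delegation mentioned there is a one-liner in pg_hba.conf. A minimal sketch, with the server address and DN layout made up:

    # pg_hba.conf: hand password checks off to an LDAP server (simple bind mode)
    hostssl all all 10.0.0.0/8 ldap ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=people,dc=example,dc=com"

Postgres binds to the LDAP server as uid=<username>,ou=people,... using the supplied password, so the directory is what actually enforces the password policy.)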
It's an improvement for sure, but I am curious -- does anyone actually run a Postgres instance that's publicly accessible? Who was asking for this feature?
SSL is (was?) required. I left Heroku about a year ago, and having been the case for many years, it's nearly inconceivable that this would have changed.
I don't think they've implemented certificate validation since I left, though.
My naive hope, going on many years now, was that SCRAM with channel binding would have landed years ago (the first versions of the patch began showing up back then), making client-side certificate checking obsolete. And let's get real: cert validation is hard enough to use that many people will simply not validate when developing from their laptops, backspacing out the optional cert-validation connection option, an elision that is invisible to the server. It should be possible to extend the definitions in pg_hba.conf to require a channel-bound SCRAM connection, which would mean the client is certain to have checked for an untampered certificate.
This implementation of SCRAM doesn't have that yet, but it's been an ambition of the author for some time to do so.
A patch implementing channel binding has been submitted for integration into Postgres 11: https://commitfest.postgresql.org/14/1153/. Two channel binding types are proposed: tls-unique and tls-server-end-point. Per RFC 5802, SSL is mandatory if you want channel binding, as the data needed for binding validation is either the TLS finished message, which is available after the SSL handshake between the server and the client (which happens before the password-based authentication), or a hash of the server certificate. All of those are supported by a set of APIs in OpenSSL.
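As a sketch of what requiring this looks like from the client side once it exists: libpq eventually grew a channel_binding connection option (in much later releases), letting a client refuse any connection where the server didn't prove possession of its certificate via a channel-bound SCRAM exchange. Host and credentials here are made up:

    # fail unless the server completes a channel-bound SCRAM exchange
    psql "host=db.example.com user=alice dbname=app sslmode=require channel_binding=require"

Note that this is the inverse of the pg_hba.conf idea above: the client enforces the binding rather than the server.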
True. I am not saying it's the best idea around, only that it's low friction. I'd probably approach it differently but I can see why they did it like they did.
I think by far the biggest benefit is being able to check the "no insecure crypto algorithms used" box. Even though the way md5 was used wasn't really that concerning security-wise, it constantly comes up.
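For anyone who wants to tick that box, the migration is roughly this (a sketch; the role name is made up, and note that existing md5 hashes can't be converted in place, so each password has to be set again):

    -- hash newly set passwords as SCRAM verifiers instead of md5
    SET password_encryption = 'scram-sha-256';
    ALTER ROLE alice PASSWORD 'correct-horse-battery-staple';

    # pg_hba.conf: then switch the auth method over
    hostssl all all 0.0.0.0/0 scram-sha-256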
As someone working with JSON output from the Gmail API, I'm curious to see how people smarter than me take advantage of this new functionality, so I can adopt it as well.
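If it helps as a starting point, here's a minimal jsonb sketch, assuming that's the functionality in question (the table and JSON field names are made up to resemble Gmail API message payloads):

    -- keep the raw API responses and index them for containment queries
    CREATE TABLE messages (id bigserial PRIMARY KEY, payload jsonb);
    CREATE INDEX ON messages USING gin (payload jsonb_path_ops);

    -- extract a top-level field, filtering on a nested array value
    SELECT payload->>'snippet'
    FROM messages
    WHERE payload @> '{"labelIds": ["INBOX"]}';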
The arguments from the parent posts rely on the ACA already being repealed. If the ACA does get repealed, then the above arguments hold, but that hasn't happened yet.
Same as what Nokia did with smartphones. They sold the remaining phone manufacturing and marketing business to Microsoft but kept all the patents, Nokia Research, and even the Nokia brand.
The smartphone market has reached a point where manufacturing, R&D, and brand marketing can be separated and mixed freely, especially in the Android ecosystem.
I think it's more of the opposite. Technology companies create spheres of influence in the countries where they operate. Traditional governments attempt to rein them in (this is usually effective if the company has ad revenue or wants to follow local laws), but when you cut down one sphere, another grows to replace it.
It seems like right now "superhuman AI" is a buzzword that people like to use when they want to be covered by the press.
I'm surprised OpenAI didn't chime in. Physicists seem to use aliens or multiple dimensions for this purpose (but some also use AI for the same effect).
It sort of distracts people from asking the real questions, like how to use AI/ML responsibly, because speculating about "superhuman AI" doesn't require much.
Journalists don't feel like they are qualified to report on the actual technology (which is a good thing), don't bother learning anything in order to become qualified (which isn't), and don't bother speaking to qualified people on the front line of this technology (which is horrible).
So what they have resorted to is reporting on these "philosophical" topics, because all you need for that is a fucking opinion, right? It's a great Faustian bargain, because you then get all those companies and people, who similarly have no clue but are fishing for PR, to pile on.
See "should the autonomous car hit the pedestrian or save its passengers" or "this artist drew a lane marker around his beater car".
Until it's conscious, AI is just a tool, and the same ethics apply when using it as when using any other tool. If you use it to hurt people, to deceive people, to steal from people, etc., you go to jail. Well, ideally. Take insurance companies: they're forbidden from charging different rates to people based upon their being members of a protected class, or upon any proxy that becomes essentially equivalent to one. So if their ML system starts jacking up premiums on one group of people because it's found an indicator it likes, they're still breaking the law, even if they can't explain why it keyed on that indicator beyond "look... here's a list of numbers. Those are weights in the neural net. We don't know what they mean."