irisgrunn's comments | Hacker News


According to this much more recent study, they are totally reversible: https://www.sciencedirect.com/science/article/pii/S0929693X2...

And this one says the same: https://academic.oup.com/jsm/article/20/3/398/7005631

And then there's an article from Yale that actually disproves the Cass report that the NHS guidelines are based on: https://law.yale.edu/sites/default/files/documents/integrity...

> I have nothing against trans people, but many people draw the line when it comes to kids.

Except when those children happen to be trans; in that case they're not allowed to exist, or are mutilated for life, even though it's easily preventable.


I appreciate the study links, but it makes it really hard to take you seriously when you claim trans kids are not allowed to “exist”. That’s extreme hyperbole, since if they’re still alive they obviously exist.


If you don't allow for proper treatment like social transitioning and puberty blockers, they can't be themselves and therefore they can't exist.

On top of that, there's also the risk of those kids committing suicide because they can't get proper treatment, which is only getting worse with all the anti-trans laws. See https://www.nature.com/articles/s41562-024-01979-5.epdf


[dead]


So is that "critique"


Do you have any substantive criticism you could share?


For example, how it cites the Cass report, which has been debunked quite a few times already.


The Cass Review covers a lot of ground. Which parts of relevance to that article are you claiming have been "debunked", and on what basis?


I posted one of the better critiques (by Yale) already in the parent comment you're reacting to


>According to this much more recent study, they are totally reversible: And this one says the same:

I see nothing in your links that supports those conclusions. The second one at least asserts that recipients overwhelmingly don't want to reverse the effects, but this too is a complex topic (see e.g. https://slatestarcodex.com/2018/09/08/acc-entry-should-trans... ).

Also, the link you're responding to isn't a "study", but rather a position document from the NHS (UK national healthcare).


> I see nothing in your links that supports those conclusions.

I'd start with chapter 5.2.1.7 and go from there.

> but this too is a complex topic (see e.g. https://slatestarcodex.com/2018/09/08/acc-entry-should-trans... ).

You can either force a trans kid to develop the wrong kind of secondary sex characteristics, with all the trauma and painful corrective procedures that will follow later in life, or you can let them take a pill a day which will halt it until they're old enough to make that decision. That really doesn't seem like a difficult choice to me.

> Also, the link you're responding to isn't a "study", but rather a position document from the NHS

I know, but it's still based on the Cass report, which claims to be a study.


>I'd start with chapter 5.2.1.7 and go from there.

As far as I can tell, you linked to abstracts for paywalled academic papers.

>You can either

The point is about the objective fact of what the kids want. Your moral judgement of what should be done as a result is irrelevant to that.


> As far as I can tell, you linked to abstracts for paywalled academic papers.

Just scroll down, no paywall.

> The point is about the objective fact of what the kids want. Your moral judgement of what should be done as a result is irrelevant to that.

This has nothing to do with my moral judgment. If a kid gets diagnosed with gender dysphoria, they should get proper treatment. Social transition in combination with puberty blockers is the known effective treatment.

Not sure about the US, but here gender dysphoria in children has to be diagnosed by a team of professionals who aren't allowed to steer them in any way.


And this is the major problem. People will blindly trust the output of AI because it appears to be amazing, and this is how mistakes slip in. It might not be a big deal with the app you're working on, but in a banking app or medical equipment this can have a huge impact.


I feel like I’m being gaslit about these AI code tools. I’ve got the paid copilot through work and I’ve just about never had it do anything useful ever.

I’m working on a reasonably large Rails app and it can’t seem to answer any questions about anything, or even autofill the names of methods defined in the app. Instead it just makes up names that seem plausible. It’s literally worse than the built-in autosuggestions of VS Code, because at least those are confirmed to be real names from the code.

Maybe these tools work well on a blank project where you are building basic login forms or something. But certainly not on an established code base.


I'm in the same boat. I've tried a few of these tools and the output's generally been terrible to useless, on tasks big and small. It's made up plausible-sounding but non-existent methods on the popular framework we use, something it should have plenty of context and examples for.

Dealing with the output is about the same as dealing with a code review for an extremely junior employee... who didn't even run and verify their code was functional before sending it for a code review.

Except here's the problem. Even for intermediate developers, I'm essentially always in a situation where the process of explaining the problem, providing feedback on a potential solution, answering questions, reviewing code and providing feedback, etc takes more time out of my day than it would for me to just _write the damn code myself_.

And it's much more difficult for me to explain the solution in English than in code--I basically already have the code in my head, now I'm going through a translation step to turn it into English.

All adding AI has done is take the part of my job that is "think about problem, come up with solution, type code in" and make it into something with way more steps, all of which are lossy as far as translating my original intent into working code.

I get we all have different experiences and all that, but as I said... same boat. From _my_ experiences this is so far from useful that hearing people rant and rave about the productivity gains makes me feel like an insane person. I can't even _fathom_ how this would be helpful. How can I not be seeing it?


The biggest lie in all of LLMs is that they’ll work out of the box and you don’t need to take time to learn them.

I find Copilot autocomplete invaluable as a productivity boost, but that’s because I’ve now spent over two years learning how to best use it!

“And it's much more difficult for me to explain the solution in English than in code--I basically already have the code in my head, now I'm going through a translation step to turn it into English.”

If that’s the case, don’t prompt them in English. Prompt them in code (or pseudo-code) and get them to turn that into code that’s more likely to be finished and working.

I do that all the time: many of my LLM prompts are the signature of a function or a half-written piece of code where I add “finish this” at the end.

Here’s an example, where I had started manually writing a bunch of code and suddenly realized that it was probably enough context for the LLM to finish the job… which it did: https://simonwillison.net/2024/Apr/8/files-to-prompt/#buildi...
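
To make that concrete, here's roughly what such a prompt can look like. The function name, signature and docstring below are invented for illustration (they're not from that post); the point is just that half-written code plus "finish this" often communicates intent better than a paragraph of English.

    # Hypothetical sketch of the "prompt with code, not English" pattern.
    # The function below doesn't exist anywhere; it's just example scaffolding.
    prompt = '''
    def dedupe_events(events: list[dict]) -> list[dict]:
        """Return events with duplicates removed, keeping the earliest
        occurrence of each (user_id, event_type) pair, sorted by timestamp."""
        # finish this
    '''

    # Paste `prompt` into a chat UI, or send it through whatever LLM client you use;
    # the model fills in the body instead of you describing it in prose.
    print(prompt)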


You bring up a good point! These tools are useless if you can't prompt them effectively.

I am decent at explaining what I want in English. I have coded and managed developers for long enough to include tips on how I want something implemented. So far, I am nothing short of amazed. The tools are nowhere near perfect, but they do provide a non-trivial boost in my productivity. I feel like I did when I first used an IDE.


> Except here's the problem. Even for intermediate developers, I'm essentially always in a situation where the process of explaining the problem, providing feedback on a potential solution, answering questions, reviewing code and providing feedback, etc takes more time out of my day than it would for me to just _write the damn code myself_.

Exactly. And I've been telling myself "keep doing that, it lets them learn, otherwise they will never level up and be able to comfortably and reliably work on this codebase without much hand-holding. This will pay off". Which I still think is true to a degree, although less so with every year.


At least with the humans I work with it’s _possible_ and I can occasionally find some evidence that it _could_ be true to hang on to. I’m expending extra effort, but I’m helping another human being and _maybe_ eventually making my own life easier.

What’s the payoff for doing this with an LLM? Even if it can learn, why not let someone else do it and try again next year and see if it’s leveled up yet?


For me, AI is super helpful with one-off scripts, which I happen to write quite often when doing research. Just yesterday, I had to check whether my assumptions about a certain aspect of our live system were true, and all I had was a large file that had to be parsed. I asked ChatGPT to write a script that parses the data and presents it in a certain way. I don't trust ChatGPT 100%, so I reviewed the script and checked that it returned correct outputs on a subset of the data. That's something I'd do to the script anyway if I wrote it myself, but it saved me something like 20 minutes of typing and debugging. I was in a hurry because we had an incident that had to be resolved as soon as possible. I haven't tried it on proper codebases (and I think that's just not possible at the moment), but for quick scripts that automate research in an ad hoc manner, it's been super useful for me.
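
Something in the spirit of this sketch (the file name and field names below are made up; the real script was specific to our system). The important part is that it's disposable code you still review and spot-check against a subset of the data before trusting its output:

    # Rough, hypothetical example of the kind of one-off script I mean:
    # parse a large dump, aggregate a couple of fields, print a summary.
    import csv
    from collections import Counter

    counts = Counter()
    with open("live_system_dump.csv", newline="") as f:  # made-up file name
        for row in csv.DictReader(f):
            counts[row["status"]] += 1                   # made-up column name

    for status, n in counts.most_common():
        print(f"{status}: {n}")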

Another case is prototyping. A few weeks ago I made a prototype to show to the stakeholders, and it was generally way faster than if I had written it myself.


It’s writing most of my code now. Even if it’s existing code you can feed in the 1-2 files in question and iterate on them. Works quite well as long as you break it down a bit.

It’s not gaslighting; the latest versions of GPT, Claude, and Llama have gotten quite good.


These tools must be absolutely massively better than whatever Microsoft has then, because I’ve found that GitHub Copilot provides negative value. I’d be more productive just turning it off rather than auditing its incorrect answers, hoping one day it’s as good as people market it to be.


> These tools must be absolutely massively better than whatever Microsoft has then

I haven't used anything from Microsoft (including Copilot) so I'm not sure how it compares, but compared to any local model I've been able to load, and various other remote 3rd-party ones (like Claude), nothing comes near GPT-4 from OpenAI, especially for coding. Maybe give that a try if you can.

It still produces overly verbose code and doesn't really think about structure well (kind of like a junior programmer), but with good prompting you can kind of address that somewhat.


My experience was the opposite.

GPT-4 and variants would only respond in vague generalities, and had to be endlessly prompted forward.

Claude was the opposite: it wrote actual code, answered in detail with zero vagueness, and could appropriately rewrite and hoist bits of code.


Probably these services are so tuned (not as in "fine-tuned" ML style) to each individual user that it's hard to get any sort of collective sense of what works and what doesn't. Not having any transparency whatsoever into how they tune the model for individual users doesn't help either.


My employer blocks ChatGPT at work and we are forced to use Copilot. It's trash. I use Google docs to communicate with GPT on my personal device. GPT is so much better. Copilot reminds me of GPT3. Plausible, but wrong all the time. GPT 4o and o1 are pretty much bang on most of the time.


Which languages do you use?


My experience is anecdotal, based on a sample size of one. I'm not writing to convince, but to share. Please take a look at my resume to see my background, so you can weight what I write.

I tried Cursor because a technically-minded product manager colleague of mine managed to build a damned solid MVP of an AI chat agent with it. He is not a programmer, but knows enough to kick the can until things work. I figured if it worked for him, I might invest an hour of my time to check it out.

I went in with a time-boxed hour to install Cursor and implement a single trivial feature. My app is not very sophisticated - mostly a bunch of setup flows and CRUD. However, there are some non-trivial things which I would expect to have documented in a wiki if I was building this with a team.

Cursor did really well. It generated code that was close to working. It figured out those not-obvious bits as well and the changes it made kept them in mind. This is something I would not expect from a junior dev, had I not explained those cross-dependencies to them (mostly keeping state synchronized according to business rule across different entities).

It did a poor job of applying those changes to my files. It would not add the code it generated in the right places and would mess things up along the way. I felt I was wrestling with it a bit too much for my liking. But once I figured this out, I started hand-applying its changes and reviewing them as I incorporated them into my code. This workflow was beautiful.

It was as if I sent a one paragraph description of the change I want, and received a text file with code snippets and instructions where to apply them.

I ended up spending four hours with Cursor, giving it more and more sophisticated changes and larger features to implement. This is the first AI tool I tried where I gave it access to my codebase. I picked Cursor because I've heard mixed reviews about others, and my time is valuable. It did not disappoint.

I can imagine it will trip up on a larger codebase. These tools are really young still. I don't know about other AI tools, and am planning on giving them a whirl in the near future.


Copilot is terrible. You need to use Cursor or at the very least Continue.dev w/ Claude Sonnet 3.5.

It's a massive gulf of difference.


That sounds almost like the complete opposite of my experience and I'm also working in a big Rails app. I wonder how our experiences can be so diametrically different.


What kind of things are you using it for? I’ve tried asking it things about the app and it only gives me generic answers that could apply to any app. I’ve tried asking it why certain things changed after a rails update and it gives me generic troubleshooting advice that could apply to anything. I’ve tried getting it to generate tests and it makes up names for things or generally gets it wrong.


OP here. I am explicitly NOT blindly trusting the output of the AI. I am treating it as a suspicious set of code written by an inexperienced developer. Doing full code review on it.


I don't think this criticism is valid at all.

What you are saying will occasionally happen, but mistakes already happen today.

Standards for quality, client expectations, competition for market share, all those are not going to go down just because there's a new tool that helps in creating software.

New tools bring with them new ways to make errors, it's always been that way and the world hasn't ended yet...


Probably.

However impressive LLMs are, they're not AGI.

For example, just this morning I was working together with a coworker and he used ChatGPT to ask for some examples. Unfortunately they didn't work; after reading the actual documentation, it turned out that the structure had to be slightly different.

This is the impression I get from everything that has been generated with "AI": on the face of it, it looks great, but as soon as you start going into detail it's not right or just plain wrong. The generated code had six fingers. They might be able to improve this eventually, but I don't think it will ever be fixed because of the nature of LLMs.


Have you pasted the documentation or asked it to search for recent docs on the task at hand? If not, you're using it wrong.


“You’re holding it wrong” strikes again.


This is usually not the problem


Because events are related to users, both are related to timezones, and events can be related to each other. MongoDB is really good for storing big blobs of data you want to retrieve quickly, with some basic search and indexing, but it's awful at relations between data.
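
To illustrate (with made-up collection and field names, and pymongo only as an example client): even a simple event -> user -> timezone chain, plus event-to-event links, has to be stitched together from explicit $lookup stages, where a relational database gives you joins and foreign keys out of the box.

    # Hypothetical sketch of doing "joins" in MongoDB via $lookup; all names invented.
    from pymongo import MongoClient

    db = MongoClient()["demo"]  # assumes a local mongod for the example

    pipeline = [
        {"$lookup": {                 # attach the owning user to each event
            "from": "users",
            "localField": "user_id",
            "foreignField": "_id",
            "as": "user",
        }},
        {"$unwind": "$user"},
        {"$lookup": {                 # attach related events to each event
            "from": "events",
            "localField": "related_event_ids",
            "foreignField": "_id",
            "as": "related_events",
        }},
    ]

    for event in db.events.aggregate(pipeline):
        print(event["_id"], event["user"].get("timezone"), len(event["related_events"]))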


Ah I see what you mean. That makes sense!


That only makes sense if you wanted to store relational data in a NoSQL database (and that's not what Mongo is meant to do)


I'm genuinely curious what you mean when you say "relational data"? I've seen this phrase thrown around and I think it's something of a misconception.

The way you use the term implies that you're referring to the type of data, but the term generally refers to the method used for storing the data.

This distinction is important because it leads to a circular reasoning dynamic: many of us are accustomed to storing the data in tabular form using a relational data model. But choosing to use that particular model to represent objects or entities or ideas does not make those objects or entities or ideas fundamentally relational data.


And if your business is okay with all of your data living in a single instance.

Because PostgreSQL is unacceptably poor at HA/replication compared to MongoDB.


Is that really true these days? Setting up Postgres read replicas with automatic failover across multiple machines is pretty trivial in the cloud with services like RDS, Spanner, etc. And although doing it in your own datacenter is still a big job, it's far from impossible.


Huh? Replicas are easy, and hot standby nodes aren’t that hard either. There are also various active-active solutions if you need that.


Similar to how you can use a socket 771 Xeon in a socket 775 mainboard by changing a few pins and modding the BIOS.


I still have a system running a modded xeon in a 775 board lol


Developing what? I skimmed the article twice but can't find what they're actually developing.


Maybe because with AI the aim is not even that important anymore; just follow the hype, damn the results! Throw everything at the wall and see what sticks. </s>


Yup, and if all else fails, just leave (wish I had realized that before my burnout).


How would something like this go with an open source project? Other people could have forked it already, so taking everything down is impossible.


Someone tried this with an open source project I ran called bitmatch (now bitstring: https://ocaml.org/p/bitstring/latest/doc/Bitstring/index.htm...). They were running some scanning software that matched bits inside binaries, and felt they could threaten anyone who dared to use the word "bitmatch". Trademarks don't work like that since they only protect a narrow field of endeavour, not "no one can ever use this word".

They sent the C&D to my employer which made everything much more complex. I usually would have ignored it, but my employer's legal department was on my back about it, so I renamed the project to bitstring. For years my project was still top of Google search for "bitmatch". (I tried it now and I notice it's a different, Rust project, so the guy still didn't win in the end.)


I am completely unclear on whether publishing your project open source gives it (at least the code, if not any deployed version) any kind of protection if Facebook etc. target it.

I would love to get a lawyer's take on that, although I guess it would differ by jurisdiction - California and Ireland are probably the two key ones for most big tech


You are not responsible for what other people do, only what you do. If you feel the need to comply with a C&D by, for instance, taking down your work, you take down the repositories you are in control of and don't worry about others who forked the work. Worrying about them is the complaining company's job.

