
The problem is that the value software developers bring is often hidden within exactly that complexity/detail, e.g. security, regulatory compliance, diagnosability/monitoring, privacy, scalability, resilience, etc.

There will be bugs the AI cannot fix, especially in the short term, which means the code needs to be readable and understandable by a human. Without human review, that will likely not be the case.

I'm also intrigued by "see if it works". How is that being evaluated? Are you writing a test suite, or testing manually?
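For concreteness, "it works" can at least be pinned down as executable assertions. A minimal pytest sketch (apply_discount is an invented example function, not anything from your project):

    import pytest

    def apply_discount(price: float, percent: float) -> float:
        # Illustrative function under test; stands in for
        # whatever code the AI actually generated.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    def test_applies_percentage():
        assert apply_discount(100.0, percent=10) == pytest.approx(90.0)

    def test_rejects_negative_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, percent=-5)

Even a suite like this only checks the behaviour you thought to assert; it won't surface the security or compliance issues above, which is exactly where review earns its keep.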

Don't get me wrong, this approach will likely work for lower-risk software, but I think you'd be brave to skip human review entirely in any non-trivial domain.


