I'm an illustrator. I'm fully capable of drawing/painting political figures in compromising or worse situations. Should the government blind me because my skills are dangerous?
Artists, intellectuals, journalists, and critics of all sorts have been jailed, beaten, or simply murdered in the past for expressing themselves in ways the government of the time did not like.
I think your situation is different, in that a) people who spend many years learning to be good illustrators tend to have standards for what they create in ways that, say, virulent racists using ML tools don't; and b) people rarely take illustrations as evidence of things that happened in real life, whereas they will do that with ML-generated fake photographs.
I think, like Paul Graham would say, we'll develop societal antibodies against this.
And, like Sam Altman would say, it is going to be net good for society, but that doesn’t mean there aren’t bumps along the way. We will need to learn to navigate them well.
In the general sense yes, but I wonder if there will be unexpected things that we’ll need to take into account with this new generation of tools.
Part of me thinks that this is another revolution in graphics the same way Photoshop was, where you can work 10x faster. But another part of me wonders what happens when we're dealing with intelligence.
> But another part of me wonders what happens when we're dealing with intelligence.
This is a good point. Diffusion models get lumped in as "intelligence" because they became ubiquitous in the same year as large language models, but sharing a breakout year doesn't make them the same kind of system.
> The most likely way people will get killed by robots is by taking their job, forcing them into poverty and eventually letting them die on the street.
Serves 'em right for not being able to provide value to capital. /s
Forgot our pic. Without seeing our application, there's no context, which makes it all the more fun to see if this goes anywhere. Thanks for the moonshot; hope you have fun and bring the world some hope.
I'd laugh, but LLMs actually made me begin to reconsider my dismissal of JavaScript and Python. They're still annoying, and the wider ecosystem is a disaster, but for the first time ever, I see personal value in working with the most popular tools: their popularity means that they're what LLMs work best with. So if I'm going to involve GPT-4 in some coding, I'm better off using JS or Python than asking for e.g. Lisp.
The literal example shown in the README has targetlang set to nodejs. Maybe it's a bit odd to specify the runtime instead of the language, but in practice that's arguably more useful.
Seems like this actually works by generating tests and continuing to try different things until the tests run successfully on both the source and the target codebase.
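For concreteness, here's a rough Python sketch of that loop. I haven't read the tool's source, so the helper callables (generate_tests, run_tests, translate, repair) are hypothetical stand-ins for whatever it actually does, not its real API:

```python
from typing import Callable, List, Tuple

def migrate(
    source_dir: str,
    target_dir: str,
    targetlang: str,
    generate_tests: Callable[[str], List[str]],              # ask the model for a test suite
    run_tests: Callable[[List[str], str], Tuple[bool, List[str]]],  # run tests against a directory
    translate: Callable[[str, str, str], None],              # first-pass port to the target
    repair: Callable[[str, str, List[str]], None],           # feed failures back to the model
    max_attempts: int = 10,
) -> bool:
    # The generated tests only mean something if they pass on the source first.
    tests = generate_tests(source_dir)
    ok, _ = run_tests(tests, source_dir)
    if not ok:
        raise RuntimeError("generated tests do not pass on the source codebase")

    # First-pass translation into the target runtime (e.g. targetlang="nodejs").
    translate(source_dir, target_dir, targetlang)

    # Keep re-running the same tests on the target, feeding failures back,
    # until everything passes or we give up.
    for _ in range(max_attempts):
        ok, failures = run_tests(tests, target_dir)
        if ok:
            return True
        repair(target_dir, targetlang, failures)
    return False
```

The interesting design choice is that the same model-written tests serve as the contract on both sides: they have to pass on the source before the port starts, and the port isn't done until they pass on the target too.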