
Nothing wrong with this being on the front page. Just posting this here as a question. Is the reason this is on the front page that a good number of software engineers are addicted? Maybe you don't have the numbers, but you yourself, reading this, may be personally interested in a vaccine?


Not me personally, but I distanced myself from some close college friends when I figured out they were using heroin. Both later cleaned up, one relapsed, and I bumped into the other at the funeral.

So, yeah, the opiate problem affects more than just users.


Hacker News is for any kind of intellectual curiosity. It’s right there in the community guidelines.

And who knows, maybe this bit of creative immune system hacking will inspire a security researcher.


I think it’s more about interest in the system, the threat, and the mitigation. The technical nature of biochemistry attracts many to this forum. Biohackers, if you will.


Additionally, I think a lot of hacker types are just interested in drugs and drug policy. There are lots of back-and-forths on here about "legalize all drugs" and stuff like that. There are a lot of hacker-related threads running through drug topics: there's the whole "hacking your mind" / psychonaut angle, there's the sorta libertarian intersection with the law, and there's the counter-culture ideas and historical interest.

Even as a kinda drug-interested guy though, I've always thought of opioids as strictly off limits recreationally, just too many downsides and they don't have the same "expand your mind" appeal as psychedelics or even weed. (Not really endorsing the idea of mind expansion, I think it's oversold, but it's part of the culture).


It would’ve saved my brother in law. That’s mostly why I’m interested.


Not necessarily addicts, but where fentanyl is poisoning the illegal street market, even the once-a-month cocaine or illegal ketamine or MDMA users are at risk, not just OxyContin addicts. Even purchasers of Adderall have to be careful these days. And vaccines are just cool technology. Why wouldn't I take a thing that makes me immune to a certain type of poisoning? It may not work for iocaine powder, but I'll take that vaccine as soon as it comes out!


>This. The educational system, willingly or just incompetently manages to instill a sense that human intelligence somehow grows monotonically with historical time.

Several remarks here. It is growing with time. Average measured IQ rises every year, such that the tests have to be re-normalized all the time (this is the Flynn effect). This is on a very short timescale though.
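To put rough numbers on the re-norming (a back-of-the-envelope sketch; the roughly-3-points-per-decade drift is the commonly cited Flynn-effect estimate, not a figure from this thread):

    # Illustrative only: why IQ tests get re-normed over time.
    # Assumes the often-quoted Flynn-effect drift of ~0.3 points/year.
    drift_per_year = 0.3
    years_since_norming = 10
    inflation = drift_per_year * years_since_norming
    print(f"Scores on a decade-old norm run ~{inflation:.0f} points high")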

Second, there is logic behind an actual theory that prehistoric humans were actually SMARTER than modern-day humans. A human back in the day had much more to deal with in order to survive. Nowadays you can get away with working menial and repetitive jobs. This implies that there is less recent evolutionary pressure for maintaining high intelligence, pointing at the possibility of humans becoming stupider in modern times.


I just started reading this. I'm hopeful, but the foreword and the first chapter don't look good.

First off, at least one of the authors is into social justice. Social justice tenets run completely counter to many of the findings of anthropology. It's a source of bias.

Second, the first chapter comes from the perspective of introducing a better anthropological narrative, given a lack of non-depressing stories of human civilization. I'm here for the truth; I'm not looking for narratives that are good or bad. The book literally goes into the "political implications" of alternative "narratives". I quote:

"HOBBESIAN AND ROUSSEAUIAN VERSIONS OF HUMAN HISTORY HAVE DIRE POLITICAL IMPLICATIONS."

all caps literally first chapter. Why even bring up politics at all? Let's talk about truth. The consequence of the truth is irrelevant to the truth.

It could be the authors built this book with a pre-existing agenda.

Whatever, if it has this many accolades I'll read it.


I finished the book earlier this week, and can say it does distance itself from the initial chapters pretty quickly. They are important frames of reference, and I can definitely understand the authors’ reasons for including those ideas so prominently. Anthropology was developed as a western-oriented study, and carries along with it many of the biases and assumptions that only Europeans could have endowed it with.

The political climates that existed during the colonization of the Americas played an important role in the evolution of modern cultures. The book is filled with examples of different societies and how they formed, and politics has played a role in every single instance. Regardless of whether you agree with the authors’ ideologies or not, they do a pretty good job of remaining neutral when examining historical evidence. I found it a nice change of pace from the typical “westernized” explanations of how civilizations (especially Native American ones) were structured.

Overall I think it’s an excellent read, well worth the investment.


You have to understand the rationale behind the narrative. If you’re describing a period of 50 years of history that had 49 years of peace and one year of war, but you only focus on the war, you’re not lying, but are you telling the truth?

And there might be a good justification, in the sense that maybe a lot of sudden changes happened during this period, but you’re still leaving a lot aside.

This view of history, although it might present itself as the truth because it’s the status quo, is a narrative, as are the ones that might succeed it. And you might not like it, but it also had, has, and will have political implications.


I feel that the main thing I got from the book is that it made me settle and accept that there is no truth to be had. That the main narratives are much less truthy than I expected. There really is much more diversity in how humans were structured than we can fathom, or reliably reconstruct from the evidence. That is why having a wider spectrum of narratives is more important if you want to learn from the past.

While I knew many of the points and facts in the book, somehow that realization had never stuck before.

History was written by the victors, pre-history even more so. The first chapters have a good point, which gets clearer later on in the book. Much of the historical narrative has been coloured by the way Eurasia has thought over the last 500 years.

And it is stuff I kind of knew or suspected, now I really internalised the idea because of the book. The main idea is obvious in retrospect.


> There really is much more diversity in how humans were structured.

There was diversity, but given behavioral modernity and a sufficient resource surplus, complex social organization seems to have been something close to a human universal. Graham Hancock has pointed out that prehistoric people were capable of large-scale infrastructure projects the remains of which have survived in some form to the present day - and that implies some combination of effective hierarchical organization (required to coordinate the effort of significant numbers of people, well beyond the scale of a single band or troop) and long-term time orientation.


Cherry picking and misrepresenting evidence to support his fringe political beliefs was David Graeber's whole shtick.

Full disclosure: I didn't read Dawn of Everything and I don't intend to. My opinion is based on Debt and some of his other writings.


No I don't think this is true.

Hierarchies are heavily part of human behavior. The concept of leaders and rank is so ingrained in behavior that it's hard to think it didn't exist even in hunter-gatherer societies.

Right now we have enough wealth that everyone can live comfortably. But a lot of people still fight tooth and nail in the rat race, all for what? Partly for wealth, but mostly for rank. The higher your rank the better; it's a huge driving force emotionally... especially for men. Desire for high rank is an inborn biological instinct. This is confirmed in psychology across all cultures. All men are emotionally more satisfied the higher their rank.

For women it's also a huge driving force for mate selection. Women marry up. They have a strong desire to marry the highest ranking man.

I would imagine in hunter-gatherer societies rank is maintained by two things: brute force and social proof. Bigger men have more brute force to maintain leadership (hence why a lot of women are attracted to height), and social proof and respect ensure that other men trust you and are more likely to listen to you.

That being said, coercion to build things like pyramids or grand multi-year projects of vast scale requires someone to own wealth. This type of serfdom-like hierarchy is much stronger than the hierarchies that existed in hunter-gatherer societies. Definitely more stable, since wages and survival required people to stay at their hierarchical posts.

Either way, hierarchies DID exist in hunter-gatherer societies. It's just that the tribe leader doesn't have enough power to coerce all men into building a pyramid. That being said, I believe the article is disputing this part of what I'm saying, but I don't think hierarchies themselves are disputed.


> Hierarchies are heavily part of human behavior.

I think the tendency to follow is part of human behavior. The tendency to fight against others telling you what to do is also heavily a part of human behavior. We clearly have both aspects to our nature from a very early age.

There's a lot beyond the tendency to follow that I think of when I read the word 'hierarchies', so it's not clear how strong a claim you intend to make here.

> The concept of leaders and rank is so ingrained in behavior that it's hard to think it didn't exist even in hunter-gatherer societies.

Rank, leadership and authority are different things. I've read accounts of rank and leadership without authority, of leadership without rank, of rank without either leadership or authority, of authority without rank (when executing the decisions of a tribal council), etc.

It is indeed hard to imagine that any society didn't have acknowledgement of expertise or impartiality and a measure of deference to it in its own sphere (therefore leaders), and even in an 'egalitarian' society there would be differences in wealth, family size, and influence (and therefore 'rank'). But that doesn't necessarily mean a full-throated 'hierarchy', at least as I would use the word.


You need to read "The Dawn of Everything". Really.


Why? Because what I'm saying here is completely opposite to what that book proposes?


Because the point of DoE is to provide evidence that everything you're claiming as "natural" and "just what is" is, in fact, not so.


So yes, I was right. Everything I'm claiming is opposite to what the book proposes.

Perhaps I'll read it. I will say that what I'm "claiming" is what's claimed by academia. It's the status quo. If DoE doesn't agree, then I don't think it's widely regarded as good by academia.


Its authors are (were, in Graeber's case) two academics.

1/3 of the pages are references and footnotes should you want to dig deeper.

Also, "academia" does not speak with one voice on the matters covered in the book.


The foreword of the book stated this about one of the authors:

"He was an activist and public intellectual of international repute who tried to live his ideas about social justice"

I worry that this book may stand in the same light as creationism: an attempt to retrofit evidence such that it forms an awkward scaffold that maintains an existing belief about social justice. The authors clearly have a bias against the academic status quo, the same way a Christian has a bias against the same thing.

Clearly modern anthropology does run against the grain of what a lot of social justice warriors claim to be true about human nature, so such a hesitation is not out of place.

Nevertheless, a social justice background does not necessarily preclude someone from unbiased analysis. I will read.


Thanks. You convinced me. I will read.


Hierarchies are not, in fact, "heavily part of human behavior". They appear in certain circumstances, and dissipate when the circumstances do. G&W point out that we are brainwashed otherwise. I see it as a case of learned helplessness.

It is deeply evident that big construction projects have often not required hierarchy beyond deference to an architect. The architect need have no coercive authority over anybody for this to work.


Our ape cousins are hierarchical. Are there examples of human tribes that have no hierarchy?


The vast majority of modern forager groups are generally considered "egalitarian" and the traditional narrative is that virtually all human societies before the neolithic were relatively egalitarian. That's not to say that they're perfectly egalitarian (it's a spectrum after all), but it's the word you'll find if you open any introductory anthropology textbook that discusses the subject.


Tribes still have leaders, elders, priests and what not. Similar to how apes have alpha males and females with a pecking order. Members of the group have a certain status in the hierarchy, even if resource access is mostly egalitarian. But what happens when you start having a bunch of tribes living nearby on a regular basis? Then you have emergent leadership across tribes. It could be by force or democracy or by whatever means. But human civilization is an emergent phenomenon once you have dense enough populations regularly living in a region.


Those positions don't always exist, and even when they do, they don't necessarily convey any meaningful benefits of hierarchy. Take the !Kung, for example [0].

That said, I suspect you'll find that answer unsatisfying. Part of the issue here is that there isn't a single answer to give or a single ivory tower consensus to speak to. If you only want the anthro 101 description you'll find in most textbooks, I already gave it: Modern forager societies are described as "relatively egalitarian". It's not perfect (what simplification for undergrads is?), but it communicates the broad strokes.

If you want a deep, comprehensive dive into the literature, there are dozens of distinct and nuanced perspectives that refine that oversimplified model for particular groups, regions, periods, etc. I have one perspective, DoE advocates another, etc. I'd recommend "Lifeways of Hunter-Gatherers" as an introduction to that topic, but really you're going to have to put in a few months of reading to get a good sense of the literature because there ultimately isn't a single framework or even a single set of frameworks that everyone uses. Another good introduction to this question more specifically is Boehm's "Hierarchy in the Forest". It's not comprehensive either, but it's sort of a landmark work on the topic.

[0] https://en.wikipedia.org/wiki/%C7%83Kung_people#Social_struc...


> Aboriginal people had no chiefs or other centralized institutions of social or political control. In various measures, Aboriginal societies exhibited both hierarchical and egalitarian tendencies, but they were classless; an egalitarian ethos predominated, the subordinate status of women notwithstanding.

- https://www.britannica.com/topic/Australian-Aboriginal/Leade...

( FWiW

There are those that would quibble even with "the subordinate status of women notwithstanding" as being laced with a particular European PoV; the later sentence:

> Women were excluded from the core of men’s secret-sacred ritual activities, and areas of privilege were further defined by graded acceptance of youths and adult men as they passed through rites of learning.

doesn't reflect the reality of women's secret-sacred ritual activities and their acknowledged privilege in their rites. )


That's interesting, but the article does say there was also evidence in some areas of male leaders. I guess the question is how typical Aboriginal social organization without leadership was of pre-historic humans and whether this still led to emergent leadership among denser populations, like what agriculture would end up supporting.


Easily answered - here is a map of Australian Aboriginal language groups [1] - less "tribes", more large extended family groups with a central language and a common tongue shared with neighbours round about ... as you can see, there are many.

The article acknowledges that a few specific areas (more toward the PNG and Torres Strait) are more hierarchical, but the thrust is clear: few exceptions aside, the bulk were not.

> pre-historic humans

Aboriginal social groups have long oral histories, and a number maintained upkeep on some of the oldest rock art known on the planet ... and groups such as the Pintupi Nine, who made first contact with "modern civilisation" in the mid-1980s, are certainly not "pre-historic", as we have video, interviews, their artwork, etc.

> like what agriculture would end up supporting.

Quasi-nomadic "hunter-gatherer" groups have a regular circuit and a deep knowledge of the animals and plants in that area, which they tend to in a decisive, knowledge-based manner - it's not "agriculture" as European grain harvesters storing for winter may know it, but it is absolutely agriculture in the sense of tending to plants in order to eat from them and use their products (and the animals that rely upon them) in later seasons and years to come.

[1] https://mgnsw.org.au/wp-content/uploads/2019/01/map_col_high...

(oops - map added!)


Valid. I will say the article states there were still hierarchical tendencies in those societies, and that leadership is mentioned as well.

Other than that, how do we know whether or not the Aboriginals in Australia are the norm? Are they the norm or are they the exception? From my perspective the vast majority of societies have hierarchies, and by virtue of being the majority that's strong evidence for hierarchies being the more natural paradigm. Especially when paired with the biological evidence associating serotonin with hierarchies.


> From my perspective the vast majority of societies

Your perspective as a visitor to many of the 190+ countries across the globe, from speaking to people from many non-European language groups, from talking at length online to people with internet connections ... that experience?


Not from an anthropologist's perspective. Obviously my perspective is not as in-depth as that. I'm coming from a more general layman's perspective.

If you're saying a more detailed anthropologist's perspective can change my viewpoint, then I'm open to changing it. However, I can't change my viewpoint just off being told that it's inaccurate. Even so, I think what I say is still true. From both viewpoints it is a fact that the vast majority of modern societies have hierarchies. Is there nuance about this fact that you want to elaborate on?

Also, a bit of a branch: do you know of refutations from the biological and evolutionary perspective? Serotonin and ranking? Primate societies with no hierarchies?


My understanding from listening to anthropologists talk about this sort of thing is that some cultures do not have either fixed leaders or priests. By "fixed leaders", they mean someone who is the recognized leader in most any community operation. Instead, a good hunter may be the leader for a hunting expedition, a good house builder may be the leader when it comes to house building, a good fisherman for a fishing trip, and so forth.

And of course some cultures were so divided that there was no one who spoke for or led the community as a whole. The Waorani and Shuar of Ecuador might have fallen into this situation in the past, possibly less so now.

I have less often heard whether elders are generally recognized as leaders. I suspect that a good hunter is someone who over the course of his life has brought back a lot of meat. That would probably be someone older.


That is the doctrine we are taught, anyway.


You think the existence of tribal leaders, elders, priests, and alpha males/females is "doctrine"? Because I'm pretty sure those are factual observations. Maybe you mean to argue they are not necessary or always present in humans or apes. Fine, present evidence that this is so, and the frequency of such exceptions.


Presidents, popes, and dictators-for-life exist, too. That does not mean that every organized group of people needs one. If you want examples, read the book. It is right there.


A 692-page book needs more than "right there" as a reason to plow through it. Just saying.


But what does a doctrine being widely taught have to do with whether it's true or false? Clearly you think it's false, but do you have evidence? Please present evidence if you do. Examples in biology, similar species, or specific civilizations function as good evidence.


There have been myriad societies that consciously chose to dispense with hierarchy, others where hierarchy applied only to one sphere (e.g. religion) and nowhere else, ones where different hierarchies applied in different spheres, and many where hierarchy comes and goes, with no continuity with previous hierarchies.

To be human is to have the power to consciously choose behavior.


Almost every single group I've been part of had leaders. They may have been elected by the group, or more often, appointed elsewhere. The ones that don't have leadership tend to be temporary and disorganized. In situations where this is less temporary, leaders often emerge naturally, as that's how humans typically organize themselves.

We have conscious choice, sure, but that doesn't change the fact that we're part of larger systems that we only have some influence over. It also doesn't change the fact that conscious choice is somewhat biologically driven.


First, I suspect that every group you've been part of has been composed of mostly or entirely white people around your age who speak English. The contention of the book, and from what I understand that of many anthropologists and archaeologists, is that this is atypical, or at least not all that typical.

Also, the book makes a point about ephemeral leadership, either leadership (and the corresponding organization) that is seasonal according to the needs of the time of year, or temporary for a particular task (house building, field clearing, hunting...). Which may be what you're saying in the first paragraph about temporary groups, but they're saying this is the normal--or even only--situation in some cultures, and that furthermore it works just fine.

As for biologically driven choices, I suspect the authors would say that the range of choices is far greater than you might think.


Only English-speaking white people tend to be hierarchical? What BS is that? Have you ever been to Asia (or most of Europe)? You think Africans, Middle Easterners, or South Americans don't have hierarchies? You think only white people tend to select leaders in groups without one? That's ridiculous and simply wrong. And civilization didn't begin in Europe anyway; it began in the Middle East, and then cropped up in five or six separate locations across the globe, including the Americas.


> Only English-speaking white people tend to be hierarchical?

No. But all your experience in the world is in hierarchies. You have no relevant experience to draw on.


That would apply to the large majority of the people on this planet for the past few thousand years.


The "past few thousand years" is a tiny fraction of the human timeline. And, the overwhelming majority of even the "past few thousand years" is obscured. What remains is a poster example of selection bias: hierarchical organizations depend more on writing, so the written record is of hierarchical societies.


How many of these groups were of people who did not grow up with fixed hierarchies, and so had developed organizational skills that did not rely on one?

A hierarchy is the laziest choice among ways to organize.


Hierarchies are places where one relies on many. That is the point of a hierarchy... so that a few can control and rely on many. It is the most unfair way to organize.


It is thus very, very convenient for the few that people are conditioned from childhood to believe it is the only way to organize, and announce it freely in print.


A child is well aware of the parent-child hierarchy that is more or less universal across all modern cultures. You've been conditioned by this book to follow a fringe belief.


[flagged]


[flagged]


We've banned this account for repeatedly breaking the site guidelines.

Please don't create accounts to break HN's rules with.

https://news.ycombinator.com/newsguidelines.html


I think those societies are rare and observations of those societies are inaccurate. There must be hierarchies in those societies, it's just misreported.

More evidence of the relationship between hierarchy and biology: https://www.youtube.com/watch?v=kXxKBiidbeo

Serotonin is the biological chemical linked with hierarchy in biology, and we are awash in it.


This sort of argument--that those reports must be inaccurate, because--well, I'm not sure why you think these reports are inaccurate, except that you think they must be. And that's not a reason.

I'm also pretty sure any linkage of serotonin to hierarchy is unproven.


>This sort of argument--that those reports must be inaccurate, because--well, I'm not sure why you think these reports are inaccurate, except that you think they must be. And that's not a reason.

It's happened before: https://en.wikipedia.org/wiki/Margaret_Mead and not just with her. It's more prevalent in anthropology than in other sciences.

This is just an educated guess. If you provide actual sources, I can verify what the academic community thinks of these reports or findings, and really that's the best available metric I can go off of.

>I'm also pretty sure any linkage of serotonin to hierarchy is unproven.

Science cannot prove anything. Be very careful with your language. Especially in the social sciences, where things are less quantitative, proof is fundamentally impossible. There is only evidence in favor and evidence against.

Evidence in favor of serotonin and hierarchy: https://www.nature.com/articles/s41386-022-01378-2 https://www.researchgate.net/profile/Anna-Ziomkiewicz-2/publ... https://www.sciencedirect.com/science/article/abs/pii/S09594...


>Hierarchies are not, in fact, "heavily part of human behavior". They appear in certain circumstances, and dissipate when the circumstances do. G&W point out that we are brainwashed otherwise. I see it as a case of learned helplessness.

No. G&W are wrong and misinformed on this matter. Source: https://www.youtube.com/watch?v=6ypVbUBEZHg

I realize Jordan Peterson is controversial but what he talks about here is factual.

Hierarchies are observed not only in modern human societies but in ALL primate societies, from gorillas to chimpanzees. The idea that humans are an exception is very unlikely and categorically false.


Jordan Peterson can choose his examples as cleverly as he likes to advance his biases to uncritical listeners.


Right, this is a possibility. More likely, though, is that he's in one school of thought and you are in another, because Jordan Peterson clearly isn't the only academic who believes this.

Most schools of thought have valid arguments. I have presented evidence for my school of thought.

If we are being truly unbiased here, you will note you have not presented any evidence to support your argument. At most you have told me about a person who agrees with you, but no real evidence in favor of your argument, and you have attacked my evidence as false (a statement made with no evidence either). If you were truly unbiased, and if I were more biased than you, then you should have noted this deficiency.

The most unbiased and logical way to rectify this situation involves two possibilities: present evidence so we can continue the discussion, or make a statement about how it's not worth your time or some other excuse, and the conversation ends with me having presented evidence and you having presented none.


The topic is a book called "The Dawn of Everything". Not reading the book, and then insisting that all the examples presented in the 692-page book do not exist, is not a good look.


What is the point of coming onto this thread other than to see if the book is worth reading or not? It's fair to assume a lot of people on this thread haven't read it.

I never denied that examples exist. I stated that your "you're wrong" arguments prove nothing without evidence. YOU didn't present evidence on this thread; I did. That's all I said.

I'm sure it's not a "good look" to people who've read the book, but your attitude guarantees that you alienate people on the other side. It is not only a good look to people who believe in the anthropological status quo, it's the one look that should matter. I mean, are you here just to toot your own horn? This is essentially what I'm seeing from you: "I read the book, you do it too because you're wrong." I came seeking reasons from people like you on why this book should be read.

This book is obviously more of a fringe perspective on anthropology so it does not do you any good at all to project that attitude.


[flagged]


Please don't post in the flamewar style to HN, regardless of how wrong someone else is or you feel they are. You did that repeatedly in this thread. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


My opinion is the status quo of anthropology. You're claiming that this opinion is wrong and not in line with reality? Do you have evidence other than something along the lines of "read the book or you're delusional"?

I can tell you other people reading this thread who also believe in traditional anthropology aren't liking what you're presenting.

At the very least offer something from the book. Wow. Just wow.


Read it or don't. If you don't want to read, look it up on YouTube. You are not obliged to educate yourself, but pretending to an education you haven't got will reliably elicit unwelcome responses.


No one is obliged to do anything. I am saying your attitude makes it so that other people who haven't read the book are less likely to read it when they read your words. You are actively making your viewpoint less popular. You are not obliged to correct or change your attitude. Keep it if you want.

You are also not an educated or unbiased source. There are posts under this topic that lend genuine controversy to the authors of DoE. Your complete and utter failure to acknowledge alternative viewpoints and your dismissing of questions as uneducated leave a bad taste in everyone's mouth. Especially for some other reader on HN who might've read this book otherwise.


Only you and goatszx seem put out. And, really, do you expect a post here to make a case that took almost 600 book pages to express properly?


Be real. You can convince people to read a book without having to copy all 600 pages here. Saying that you're incapable of doing so when every book on the face of the earth has summaries, snippets and reviews to promote themselves is just plain dishonest. It's so obviously dishonest that it's, in fact, a form of trolling. I'm sorry, but this conversation is over as this type of thing is against the rules here.


[flagged]


I.e., two.


You give an example of how chatGPT is wrong while there exist many examples of how chatGPT is right. And you think some wrong examples invalidate the possibility of AI ever being better than you?

The fact that it's often right is a horrifying omen of the future.

chatGPT will not replace you. It is the precursor to the thing that will replace you.


Are you seriously accusing me of cherry picking? Get it to write you an MD5 hashing algorithm in Rust. Go ahead, I'll wait. I tried and it genuinely couldn't, I asked it tons of different ways and wasted a ton of time before I had to go do it myself lol.

Cool, man. So why don't you get chatGPT to start writing you some software? Or optimize an algorithm? Hey, maybe it'll tackle the travelling salesman problem in polynomial time!

SO many economic and scientific opportunities that will make you wealthy and famous if it's as capable as you claim (e.g. doesn't just solve elementary problems by regurgitating shitty code).


Please don't post in the flamewar style to HN, regardless of how wrong someone else is or you feel they are. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


[flagged]


We've banned this account for repeatedly breaking the site guidelines.

Please don't create accounts to break HN's rules with.

https://news.ycombinator.com/newsguidelines.html


> You aren't understanding that even when chatGPT makes up bullshit code that's close to working it's STILL revolutionary.

I can ask Google and it will bring up a Stack Overflow answer as well, and that has a better hit rate. This is exactly the same thing that ChatGPT does; it is simply a better search engine. Is it revolutionary? As a search engine, sure. As a programmer? I haven’t seen any evidence of it genuinely generating novel code rather than regurgitating found code in another language’s syntax.


Did I say chatGPT will write software? No. It's the precursor to the thing that will. chatGPT wasn't even specifically trained to write software; it just happens to do it as a random side effect. Wait till they make the thing that specifically targets writing software.

Actually curious what evidence you have to back this up. I can imagine ChatGPT augmenting people's skills, or even being used for inspiration, but I've also not seen anything to suggest it will "git gud" and just replace everyone.


Evidence for the future? Nobody has any real evidence for the future. You don't even have evidence that AI won't get better.

I will say that my prediction is more likely given the immense progress of AI over the last decade. If AI continues improving at the same rate then I will be right. If it hits a plateau then parent will be right.

By virtue of my statements following the slope of a historically increasing trendline, my statements are more likely to be correct.


I didn't say "AI" (which is a ridiculously broad term to use in this context) won't get better.

I said this statistical language model likely won't end up replacing coders in its current form. I think something really quite new and novel will have to be invented to do this.

I kind of wish I could use it to write code for me now while I'm outside in the sun, but no :)

Edit: I re-read your original comment and I think you edited it to be a bit more in line with my thoughts?


No, I think we are still in disagreement here. The generative model is capable of replacing you... it just needs proper training. The tech is here but the proper application of that tech hasn't happened yet and we are dangerously close.

It was trained in a very general way, but with further training and different training it can get even better.

Put it this way. The step to chatGPT was gigantic. The step to the thing that will replace you is comparatively smaller.


I really have no idea what your point is, or what the kind of hostile "replace you" language is about. Have you ever considered that our jobs will just evolve? As if coding computers is the last job in the world? We might not even have jobs in the future. So in that respect, it might be, ha.

People used to shine shoes for a living, you know; then a machine came along to do it, and now almost no one shines shoes anyway. You would've been a farmer; now you sit around in your underpants arguing on hackernews on the weekend.

Use your imagination a little bit and stop focusing on the "replace you" bit. There might be things after AI, if we don't destroy our civilization with it.

The difference in our argument here is this: from a technical perspective, I think you're wrong and naive to think a language model will replace people in most roles. I'm talking about the pragmatic side of the argument; you're speculating Google has an AlphaCoder waiting in the wings to eat our jobs, and while that might be true, it's a different thing from what ChatGPT is.

I'll give you an example: I could use something to generate art for the company I'm building and not hire artists, but do you know why I don't? Because there's a whole strategy around art, design, and user experience in building a brand. Maybe my staff will use AI to generate designs for consideration, which will save us some time and let us focus on other things, but ultimately I'm not going to use a "hallucinated gif" instead of hiring someone who has studied and "understands" design.


>I really have no idea what your point is, or what the kind of hostile "replace you" language is about.

It's not intentionally hostile. "Replace all of us and many people outside of coding" is a better way to put it. Apologies.

>People used to shine shoes for a living, you know; then a machine came along to do it, and now almost no one shines shoes anyway. You would've been a farmer; now you sit around in your underpants arguing on hackernews on the weekend.

By replace I mean replace your job. You'll still be around.

>Use your imagination a little bit and stop focusing on the "replace you" bit. There might be things after AI, if we don't destroy our civilization with it.

Why not? That's literally the only thing that's predictable and pretty much inevitable. Destroying civilization is really grand speculation. Again by "replace" I mean replace your job. Not replace your life.

>The difference in our argument here is this: from a technical perspective, I think you're wrong and naive to think a language model will replace people in most roles. I'm talking about the pragmatic side of the argument; you're speculating Google has an AlphaCoder waiting in the wings to eat our jobs, and while that might be true, it's a different thing from what ChatGPT is.

What I'm saying is that this AlphaCoder is built off of the same basic technology as chatGPT, just targeted to replace a specific job role. chatGPT is trained more generally. But the generator model... the underlying technology is the same. It's just the training data and reinforcement that are different.

>I'll give you an example: I could use something to generate art for the company I'm building and not hire artists, but do you know why I don't? Because there's a whole strategy around art, design, and user experience in building a brand. Maybe my staff will use AI to generate designs for consideration, which will save us some time and let us focus on other things, but ultimately I'm not going to use a "hallucinated gif" instead of hiring someone who has studied and "understands" design.

Oh man, you chose the wrong example for me. I'm highly critical of designers. I think it's the most generic, simple sector of the art world, one that anyone can actually pull off with little training. Minimal designs that are clean take time and effort to make... THIS I agree with. But talent? No, I disagree. It takes no talent to think hard about design fundamentals. Literally it's all flat geometric shapes on a page, text placement, and font selection.

I am FAR more impressed with an artist who can draw something photorealistic rather than one arranging text and shapes on a page. If you hire a "designer" for a company you're building and you pay him top dollar, I'm sorry, you're getting ripped off. It is one of the greatest delusions of modern times that design is hard when in actuality it's trivially easy.

I think you do need an art department. But that art department will be full of people with a very basic and generic skillset, and it's a job that deserves really low pay imo. Don't believe me? Look at Google's logo. Some designer got paid top dollar for just a multicolored G... you have to be next-level insane not to see how little talent was involved in that. I mean, the G is clean and symmetrical and iconic... but it's also TRIVIAL to create.

All of this is beside the point. The point is, especially for design, you can hire a guy with no training to use an AI specifically made to generate things with a specific "design" theme for your company, and it will generate designs that are ON par with what a designer can do.


There's much more to design than just pictures, including color theory. I think you've overlooked this because you didn't mention it. I'd also say colors are a more important aspect of Google's branding than the text.


You realize color theory is bullshit right?

Color is a gradient of wavelengths. It does not arise from 3 primary colors. Realize that those three colors were picked arbitrarily and placed in some technical framework called "color theory." Color theory is a highly inappropriate term for this, as it implies the concept is fundamental rather than arbitrary.

Color theory arose from the observation that mixing those 3 colors got you other colors but they didn't realize that the 3 colors were actually arbitrary and that there are many sets of 3 colors that can produce the SAME effect. It's an archaic concept that's still taught in art because of tradition and not because of any science or logic.

The human eye does, however, utilize 3 types of cones to do color sensing. These colors, however, are not the traditional primary colors red, yellow, and blue. It's RGB. Your screen uses RGB because the engineers who came up with the concept of RGB weren't so full of it.

For that reason the web designer you hire SHOULD technically be using RGB and NOT color theory. Color theory is literal BS, and I feel many designers buy into the bullshit. It makes them sound like they know what they're talking about when really all it comes down to is whether or not something looks good.

Additionally, color theory is a TRIVIAL concept. It's EASY to learn and apply. There's nothing advanced here that you can't look up on the internet. You don't need to hire a designer if you want to misapply some bullshit called color theory to your "designs". Just make sure you ask your gut whether the design looks good or not, because that's actually the thing that needs to be studied here. If any theory needs to be formulated around design and color, it needs to be centered around biology/culture to determine what we think "looks good".
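To make the additive point concrete, here is a minimal sketch (plain Python; the helper name is mine, purely for illustration) of how screens mix light. Adding RGB channels directly yields the secondary colors, with no paint-style primaries involved:

    # Minimal sketch of additive RGB mixing, as screens do it.
    # Assumes 8-bit channels; mix_additive is an illustrative helper.
    def mix_additive(c1, c2):
        # Light adds: sum each channel, clamp to the 8-bit maximum.
        return tuple(min(a + b, 255) for a, b in zip(c1, c2))

    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
    print(mix_additive(RED, GREEN))   # (255, 255, 0) -> yellow
    print(mix_additive(RED, BLUE))    # (255, 0, 255) -> magenta
    print(mix_additive(GREEN, BLUE))  # (0, 255, 255) -> cyan

Note how red plus green light gives yellow, which is exactly where the paint-mixing "red, yellow, blue" intuition breaks down on a screen.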


>I've also not seen anything to suggest it will "git gud" and just replace everyone.

You have seen artists complaining really violently about DALL-E and Midjourney, right? There's even a lawsuit. This is happening because they think it's a real threat.

With all the events in the news you'd be lying to yourself if the possibility of AI "getting gud" isn't even something that's on your mind.


No, I haven't seen artists complaining much, and I actually have a few in my family. I think you're making this "complaining really violently" stuff up.

AI won't "replace artists", artists might use AI to generate something and then tweak it and do something more creative with it. It's actually silly to suggest "art" or "artists" are replaceable.

In the digital space it might be a thing, but not until a robot starts painting actual oil paintings and carving wooden sculptures, at which point someone will do a "collaboration" with a robot and then it will be art in its own right.

But yeah no, I haven't really seen artists rioting on the news or anything.

I've seen people starting lawsuits about their work being stolen to train these things, which I think is justified because it's "megacorp" taking your work and making money off it without paying royalties, and boy oh boy you'd feel the same way if it was you.

When AI "gits gud", people will start using it to break other people's software; then there will be a whole industry around this problem, AI-proofing your software against AI hacking, and on it all goes.


>No, I haven't seen artists complaining much, and I actually have a few in my family. I think you're making this "complaining really violently" stuff up.

Evidence: https://www.businessinsider.com/ai-art-artists-getty-images-...

https://futurism.com/artists-furious-about-ai-that-draws-por...

https://www.independent.co.uk/news/world/americas/ai-art-len...

https://80.lv/articles/artists-are-getting-concerned-about-a...

https://hyperallergic.com/791951/artist-banned-from-subreddi...

https://www.vice.com/en/article/ake9me/artists-are-revolt-ag...

https://twitter.com/suhail/status/1577702194500861956?lang=e...

I am saying this in the nicest way possible with no intended offense. But you are actually living under a rock if you haven't been following the news.

>But yeah no, I haven't really seen artists rioting on the news or anything.

"Violently" is just an expression. There's no actual violence here. But there is intense resistance on legal grounds and online attacks.

>I've seen people starting lawsuits about their work being stolen to train these things, which I think is justified because it's "megacorp" taking your work and making money off it without paying royalties, and boy oh boy you'd feel the same way if it was you.

That's just an excuse. These AIs and megacorps have been appropriating art for decades. Google has hijacked images from artists for years and cached them on its servers. No artist complained to this extent EVER. Not until AI got to the point where it can create art just as good as the artists themselves... then all of a sudden it's a problem? Come on, man. This is an attack on AI, not on copying art. Don't kid yourself.


Not to be an ass but they're the most inflammatory sources of news I know, I won't even open an article from "Independent" for my own mental health.

Sorry, but you're just wasting my time with rot and weak arguments. Trying to say artists are luddites or something.

Good day to you.


>Not to be an ass but they're the most inflammatory sources of news I know, I won't even open an article from "Independent" for my own mental health.

omg, then don't click on that link; there are OTHER links I posted too. Relax, bro. I just googled the term and found a bunch of links and posted them because it's literally everywhere. You don't have to follow ANY of the links if you don't want to. The point is to show you that this stuff is everywhere.

I think your sources are limited; that's how you missed something as big as artists launching the first resistance against AI. There were SEVERAL articles about this on HN as well.

>Sorry, but you're just wasting my time with rot and weak arguments.

Now this is just rude. You say "not to be an ass" but you deliberately call my arguments rot and weak and say I'm "wasting" your time? That is OFFENSIVE. And is literally against the rules here and inflammatory. Please leave HN or follow the rules.

Yes good day to you I don't want to engage in a flamewar you almost started.


I just did a bit more research: artists AND corporations, including Getty Images, are in lawsuits against companies stealing their work to train AI models.

There's a big, big, big difference between copyright-based lawsuits and your claim of "artists launching the first resistance against AI".

I think they're against people scraping their work and selling it on? I'm not saying they're right or wrong, but there is a difference from what you're claiming.


The resistance involves copyright lawsuits and online bullying to attack people using AI. That's exactly what I mean by "resistance"

>I think they're against people scraping their work and selling it on? I'm not saying they're right or wrong, but there is a difference from what you're claiming.

I already told you that's an excuse. Google and thousands of AIs and other megacorps have been scraping artists' work for decades. The backlash wasn't a big deal then. Why is it suddenly a huge problem when AI art hits an inflection point and can produce art that's equal in quality to the artist's?

Because the copyright isn't the real problem. It's the AI. Heck, when the AI looks at the art it scrambles the art and encodes it into memory the same way YOU encode the art into your memory. Does that mean that by looking at art you are copying it? Why don't other artists have a problem with you looking at art and copying it into your memory?

Because you can't use that memory to produce original works of art in the same style as the artists. That's why the artist doesn't give a shit about you copying the art into your brain. They do give a shit about DALL-E copying the art into its neural net, because these AIs can potentially replace them.


You're just arguing some people are afraid of being replaced by AI; news flash? Some people are afraid of people's sexual orientation. Some artists are already replaced by better artists; they don't just resign from doing art. Art isn't really about money, it's about self-expression; there is no case where an AI is going to replace that, it's personal.

By the way, are photographers rioting in the street? Because they should be too, I've not seen it but they should be?

Artists already knew this and photographers too: there were more photos and images available online for free or next to nothing than you could ever imagine or possibly consume. It didn't change much; people still did art and people still got hired, and not much will change. People will still be involved in art and photos. For fun and for money.

I guarantee you there is a whole group of people who see using DALL-E to generate new, interesting ideas as being a thing too, who see it as an opportunity. Similar to how chess players are using AI to study new moves.

If it's your profession, yeah it might suck, but on the other hand, I choose to hire artists because I enjoy working with people and building something together, it's a whole different creative process and in my opinion, creates better products for the specific use case. I actually wouldn't mind sitting with artists who use AI to create things?

> Because the copyright isn't the real problem. It's the AI. Heck, when the AI looks at the art it scrambles the art and encodes it into memory the same way YOU encode the art into your memory. Does that mean that by looking at art you are copying it? Why don't other artists have a problem with you looking at art and copying it into your memory?

You're actually naive about how these things work: these things use statistics to draw pictures; they don't understand anything. It's why, when I use DALL-E, it makes some nice stuff, but when I look more closely it also does weird things, like objects sticking out of people's heads. So no, it doesn't do the same thing; it doesn't "understand" anything. I would understand, if I was asked to draw a picture of someone, that it would be strange to have a wooden stake sticking out of their ass. DALL-E doesn't. Go and get it to draw you a photo of children playing; it will be quite a nightmare.

While this stuff is impressive, it's a very, very big leap to go from painting by numbers to understanding something and being creative in that way. I personally think it will be a fun and exciting time when this happens, but fundamentally, it's quite a different system.

I actually get the feeling that, as humans, we're also overlaying our own ego onto how great these creations of ours are, without being practical and objective enough to actually figure out if these things are actually important.

Style, design, etc. are more than just "having the image". Selection is important; for example, musicians write thousands of songs and never actually record them. There is a time and a place for specific art to be deployed, consumed, displayed, and I don't think this is going away either: "style" and having an eye for the correct imagery is not something that will be replaced anytime soon. Essentially, having infinite images also means making the right choice becomes harder; that will be a new trade in itself.

These days I struggle to watch a movie, there are just too many options, AI is only going to make this problem worse. We'll be drowning in shit.

Nothing is as simple as it seems.

From the perspective of psychology, I think the most salty people are those who don't do art, it's almost like people hope this is the end of people being able to freely express themselves. Kind of like the quest to crush artistic freedom is in progress.


This is a continuation from my other reply. Read the other comment first. Or not, it's rather long. I type fast.

>These days I struggle to watch a movie, there are just too many options, AI is only going to make this problem worse. We'll be drowning in shit.

Holy shit. Now you need to hire a guy to choose the movie for you. Pay him a movie director's wage. Clearly this choosing stuff is so hard we need experts! No, I'm kidding. Let's be honest, choosing things is easy.

>Nothing is as simple as it seems.

The irony here is that your conclusion is the simpler one. It's the easy way out. People are optimistic by default, and pessimism is actually the harder path because it's so much uglier to admit. The truth is actually more in line with pessimism, as the world is more or less built on competitive Darwinian fundamentals, with cooperation existing only as a side effect. The brain paints a delusional reality in such a way that you don't get constantly scared or depressed. If you find your thoughts always being overly optimistic, there's a good chance you're biased.

>From the perspective of psychology, I think the most salty people are those who don't do art, it's almost like people hope this is the end of people being able to freely express themselves. Kind of like the quest to crush artistic freedom is in progress.

I look at this statement and there are things about it that are obviously wrong. And I wonder how come you're blind to it? Like you're obviously referring to me somewhat. But that's not even the issue.

The most salty people are the people who entered into a lawsuit. You have to be really fucking salty to spend the time and the effort to do that. Who's in the lawsuit? Not me, I don't give a shit about artists. Let me spell it out: artists are suing AI companies because artists are the ones that are the most SALTY. That's not even a huge revelation. The revelation is how this came to be NOT obvious to a pretty smart person like you.

You use psychology to imply I'm the one out of touch? Take a look in the mirror.

A better analogy for this is oil companies and climate change pre-2000s. I'm the environmentalist saying something is fucked up here. You're the oil baron. You're the person in software who's in denial about how software and ML are about to make some drastic and extremely negative changes to the way the world works. I can assure you oil barons couldn't face the cold hard truth and grasped at every positive angle they could get their hands on to build a universe where they weren't responsible for harming the world. They couldn't face the reality. Can you?

Can you face the truth that the artist working for your company is about to become useless? Can you fire him and tell him that to his face? No. You need a narrative. What about your own skills as a software engineer? Are you able to face a reality where your job is basically going to be phased out for AIs within 10-20 years? Likely not. So consider the possibility that you're the one that's biased and you're the one with the overly rosy outlook.


>You're just arguing some people are afraid of being replaced by AI; news flash?

No, I'm saying many people are afraid enough that they organized a lawsuit against AI. Something that never happened before. THAT is sufficient evidence in support of the fact that AI has surpassed certain limits and CAN replace certain occupations. THIS point is OBVIOUS and YOU know this.

Why are you delivering talking points to make me explain what's obvious?

>Art isn't really about money, it's about self-expression,

You have got to be joking. You realize art is a HUGE part of business, right? Movies, video games, websites, comic books ALL are businesses that use art. I think it's gotten to a point where you're just grasping for concepts to defend a point and you're not realizing how obviously wrong these concepts are. Art is categorically a business. It is also self-expression at the same time, but you are delusional if you think it's not a business.

>Artists already knew this and photographers too: there were more photos and images available online for free or next to nothing than you could ever imagine or possibly consume. It didn't change much; people still did art and people still got hired, and not much will change. People will still be involved in art and photos. For fun and for money.

Photographers didn't riot for three reasons. First, it doesn't take much skill to be a good photographer, so it's not a huge thing when something takes it over; most people never invested much into it. For art there's a huge investment into getting good at it.

Second, the technology came too slowly. It wasn't as sudden as AI and art. Smartphones turning everyone into somewhat good photographers, and consumer cameras before that, took several decades of gradual improvement to get where we are today. When something comes slowly people don't really react, JUST like how global warming will fuck the world up but it's so slow nobody can bring themselves to care.

Third, AI is not actually replacing all forms of photography. AI is like art: people know it's made up. There's still actual demand for captured stills of reality, AND that is a separate niche from made-up stills of things that don't exist in reality.

>I guarantee you there is a whole group of people who see using DALL-E to generate new interesting ideas as being a thing too, who see it as an opportunity. Similar to who Chess players are using AI to study new moves.

Sure.

>If it's your profession, yeah it might suck, but on the other hand, I choose to hire artists because I enjoy working with people and building something together, it's a whole different creative process and in my opinion, creates better products for the specific use case. I actually wouldn't mind sitting with artists who use AI to create things?

Of course. But you see, there's a difference here. In the past, if I wanted a person to paint me some really high quality and completely original fantasy art, I'd dish out a lot of money because such a skill is hard to find. Now I can hire any person who just has a bit of design sense, and HE can use AI to do 99% of the work at minimum wage. I get all the benefits of personal interaction while I reap way more rewards by paying lower wages. Art skill was expensive; the camaraderie of working with someone was and still is cheap. But now the world is changing and art is just as cheap as camaraderie.

>You're naive actually about how these things work, these things use statistics to draw pictures based on statistics, they don't understand anything, it's why when I use DALL-E, it makes some nice stuff, but when I look more closely it also does weird things like, has objects sticking out of peoples heads. So no it doesn't do the same thing, it doesn't "understand" anything. I would understand if I was asked to draw a picture of someone that it would be strange to have a wooden stake sticking out of their ass. DALL-E doesn't. Go and get it to draw you a photo of children playing, it will be quite a nightmare.

Bro. Most of the things these models draw ARE already better than anything you can do. They're better than the average human at drawing already. You're pointing out flaws, but even with those flaws the output is STILL better than average.

That being said, this is just DALL-E. Other generative models trained more thoroughly on specialized sets produce WAY better output. Midjourney, for example.

>While this stuff is impressive, it's a very, very big leap to go from painting by numbers to understanding something and being creative in that way. I personally think it will be a fun and exciting time when this happens, but fundamentally, it's quite a different system.

Painting by numbers? Bro. This thing is CLEARLY not painting by numbers. You give it a sentence and DALL-E gives you SEVERAL variations that are on par with what a human would do in terms of creativity. Just go onto DeviantArt; it's all similar from the perspective of originality.

Lack of creativity, or "painting by numbers," isn't the issue. The issue is translation accuracy. Some things are "off": hands are inaccurate, some things are misplaced. AI is already killing it in terms of creativity. The problem now is to fix these artifacts, and fixing artifacts is not, in your words, "a huge leap." Once those artifacts are fixed and these models generate pictures with pixel perfection, it's over.

>I actually get the feeling as humans, we're also overlaying our own ego onto how great these creations we've created are without being practical and objective enough to actually figure out if these things are actually important.

This is cliché. You're repeating what everyone has been parroting all over HN: that these AIs have limits, they aren't as good as humans, yadayadayada. What you're saying is EASY to believe. It's a common trope and the deceptively obvious conclusion. It takes extra effort to get past this bias and see the extent of AI. I'm not amazed because I'm taking the easiest conclusion. No. I'm amazed because I took steps to overcome my bias.

Think of it this way. You know of the Turing test? For the longest time and for most of my life this test: https://www.wikiwand.com/en/Turing_test was basically the standard test of whether something was more or less an actual intelligence and self-aware. It was quite obvious to most people that this test was virtually impossible for an AI to pass, and that if an AI passed it, it was more or less a self-aware intelligent being.

Yeah, we just rocketed past this test. LLMs can beat this test regularly, TRIVIALLY. But then there's a whole bunch of clones with your outlook. You guys move the bar higher and higher every time a milestone is hit. Beating that test would have been impressive in the past, but now that something has actually beaten it, it isn't sufficiently impressive anymore. You unconsciously place the bar higher without realizing it and begin nitpicking and magnifying the little issues AI still has. You guys will forever think there are light-years to go before AI matches human intelligence, no matter how many Turing-type tests AI surpasses.

>Style, design, etc is more than just "having the image". Selection is important, for example musicians write thousands of songs and never actually record them. There is a time and a place for specific art to be deployed, consumed, displayed, I don't think this is going away either, "style" and having an eye for the correct imagery is not something that will be replaced anytime soon. Essentially, having infinite images also means making the right choice becomes harder, that will be a new trade in itself.

You think these models will only output pixels? They can output anything that can be described in a language, be it English, pixels, or HTML styled with CSS. That's the first part. The second part is, it takes 100000x less talent to SELECT something that was ALREADY created than it does to CREATE something that didn't exist, EVEN when you have a lot of selection. You want proof? The internet and Amazon have INCREASED my shopping selection by a huge magnitude. You still don't see me paying 200k to an expert chooser to pick what I buy. Why? Because selecting these things is EASY.

Previously you'd pay 200k to each of 5 artists to do some art job. Now you pay one person minimum wage to use AI to do the same thing. That's 4 people with no job and one person being paid minimum wage. That's the future.


chatGPT is not a threat. chatGPT is an omen of the threat that's coming.

A lot of people haven't really played with chatGPT in depth. This thing is next level. It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

Here's a snippet from a chatGPT conversation I had based on your post:

   ME: Give more examples of squishy stuff that software developers need to handle. 

   chatGPT: 
   1. Ethical considerations: As technology advances, software developers may need to consider ethical implications of the products they create. For example, a developer working on a facial recognition system may need to consider issues of privacy and bias.

   2. User experience (UX) design: Creating a software that is easy to use and intuitive for the user is an important aspect of software development. Developers may need to consider factors such as layout, navigation, and accessibility to ensure that the software is user-friendly.

   3. Business strategy: Developers may need to consider the overall business strategy of the company they work for, and how their work contributes to achieving that strategy. This may involve working with other departments, such as sales or marketing, to ensure that the software is aligned with the company's goals.

   4. Security: Developers should consider security in their software development process to protect the software from hacking attempts, data breaches, and other security risks.

   5. Cultural and linguistic diversity: Developers must consider the cultural and linguistic diversity of their users, in order to create software that is inclusive, accessible and understandable for all users.

   6. Communication and collaboration: Developers often work in teams, and must be able to communicate effectively with other team members and stakeholders to ensure that the software is developed as intended.

   ME: Give example python code that serves some random customer in some random sector. Then explain how you would change that code to better serve the "squishy stuff" needed by the customer.  

   chatGPT: Here's an example of Python code that serves a random customer in the retail sector by calculating the total cost of a purchase including sales tax:

      def calculate_total_cost(price, sales_tax):
       total_cost = price + (price * sales_tax)
       return total_cost

      purchase_price = 100
      sales_tax_rate = 0.08

      total_cost = calculate_total_cost(purchase_price, sales_tax_rate)  
      print(total_cost)

   To better serve the "squishy stuff" needed by the customer, we could make the following changes:

   1. Add a function to calculate discounts based on customer loyalty or promotional offers, for example by adding a function that uses customer's ID as an argument.

   2. Add a function to handle different tax rates for different states or countries, for example by adding a function that uses the customer's address as an argument.

   3. Add a function to handle different currencies, for example by adding a function that uses the customer's preferred currency as an argument.

   4. Add a function to handle different languages, for example by adding a function that uses the customer's preferred language as an argument.

   Here is an example of how the code would look after making these changes:

      def calculate_total_cost(price, sales_tax, discount, customer_id):
       total_cost = price + (price * sales_tax)
       if customer_id in loyalty_customers:
        total_cost = total_cost - (total_cost * discount)
       return total_cost

      def tax_rate(customer_address):
       if customer_address in country_1:
        return 0.08
       elif customer_address in country_2:
        return 0.09
       else:
        return 0.07

      def currency_convert(price, customer_currency):
       if customer_currency == "USD":
        return price
       elif customer_currency == "EUR":
        return price * 0.8
       else:
        return price * 0.9

      def language_change(text, customer_language):
       if customer_language == "en":
        return text
       elif customer_language == "fr":
        return translate_to_french(text)
       else:
        return translate_to_spanish(text)
    
      purchase_price = 100
      customer_id = 123
      customer_address = "US"
      customer_currency = "USD"
      customer_language = "en"

      sales_tax_rate = tax_rate(customer_address)
      purchase_price = currency_convert(purchase_price, customer_currency)
      total_cost = calculate_total_cost(purchase_price, sales_tax_rate, 0.1, customer_id)
      text = f"Your total cost is: {total_cost} {customer_currency}"
      print(language_change(text, customer_language))

      This code now considers the customer's address, currency, language and ID, to provide the customer with a more personalized experience.
Honestly there's a good number of people who aren't getting how revolutionary chatGPT is because of all the other inconsequential AI buzz. You really need to play with this thing in depth. It is creative. It can create its own specs and implement those specs like I asked it to. It won't replace you, but it's definitely the precursor to something that will.


I had played around with the GPT-3 beta and sort of written it off after it gave me an (honestly compelling, though) lengthy argument that Eminem was a black man.

Earlier today I used ChatGPT to explain to me concepts I lacked understanding of, which allowed me to accurately interpret the statistical information in a scientific paper (I'm a reporter and got through a humanities degree with only an introductory stats class, like 10 years ago).

When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.


>When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.

If you asked, say, someone who failed high school to do the same, would they do better?

The jump from GPT-3 beta to chatGPT is an omen for the next jump and the jump after that.


Oh c'mon, is this what really impresses you?

> It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

Srsly? I really can't wrap my head around where specifically you found "understanding" or "creativity".

The "squishy stuff" is super boring SEO-like text you'd find in some sales-person blog, who needs to run their mouth, but have zero in-depth understanding or appreciation of the hard domain problems. How is any of this non-generic? There is absolutely no substance here!

Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

This is basically throwaway random code, only thematically connected to your requirements, that could never under any circumstances run in any production scenario.

> Honestly there's a good number of people who aren't getting how revolutionary chatGPT

Honestly there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually being totally revolutionary.

All in all, chatGPT's output seems more like the work of some total but productive idiot, left alone with a task and Google: simplistic, totally naive, zero understanding or creativity.

It's good for fun rhetorical exercises, very useful for things that you don't know anything about, but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".


>Oh c'mon, is this what really impresses you?

You'll be impressed once the successor of chatGPT takes your job. You realize that chatGPT wasn't trained to be a programmer, right? They did virtually nothing to make it a good programmer; it learned programming as a side effect. Wait till they make a version targeted at programming.

>The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

All of what you said is true yet you are blind if you can't see why it's revolutionary. In fact it can do better. You can specify all the requirements you want. No use of undefined variables, no hardcoded currencies. More flexibility more features no routing with if statements. It will do all of what you asked it to. And this is a tool that only learned how to program as a side effect. If you're not impressed, you're in denial.

>Honestly there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually being totally revolutionary.

No. I'm sorry, but you're biased. Artists are ALREADY starting lawsuits because the art is on par with what they create. I don't know if you realize this, but drawing something photorealistically is actually significantly HARDER than programming. There's literally no bootcamp that can produce a fine artist in a year because it's brutally hard to do. If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

>but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".

Of course. EVEN with supervision, chatGPT isn't even useful enough to be used in a professional environment. I AGREE. The successor to chatGPT, however, will be. In fact, one of the successors to chatGPT will be able to replace a person's job. Nobody is afraid of chatGPT taking their job. They are afraid of chatGPT because it is a herald of the future AI that WILL.

>Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

chatGPT says:

   Handling Personal Data: A software engineer at a healthcare company is tasked with implementing a system to store and process patient data. They use the programming language Python and the Django web framework to build the system. To ensure the privacy and security of personal data, the engineer implements industry-standard encryption techniques for data at rest and in transit, as well as implementing user authentication and authorization mechanisms. The engineer also conducts regular security audits and penetration testing to identify and address potential vulnerabilities.

   Ensuring Verifiability and Correctness: An engineer at a financial services company is responsible for building and maintaining a system that processes financial transactions. They use the programming language Java and the Spring framework to build the system. To ensure the verifiability and correctness of the system, the engineer implements automated unit tests, integration tests, and end-to-end tests to verify that the system behaves as expected. They also use code review tools like SonarQube and Crucible to ensure that the code is maintainable, readable and adheres to company coding standards.

   Productive Quality Assurance: An engineer at a e-commerce company is tasked with building and maintaining a system that processes online orders. They use the programming language Ruby and the Rails framework to build the system. To ensure the quality of the system, the engineer works closely with the QA team to establish a test-driven development process and use tools like Selenium and Cucumber to automate functional and acceptance testing. They also use performance testing tools like Apache JMeter to ensure the system can handle high traffic loads.

   Robust and Scalable Systems Architecture: An engineer at a social media company is responsible for building and maintaining a system that can handle millions of users and handle high traffic loads. They use the programming language Go and the Microservices architecture to build the system. To ensure the system is robust and scalable, the engineer designs the system to be highly available and fault-tolerant. They use load balancers like HAProxy and Nginx to distribute traffic across multiple servers and use service discovery tools like Consul and Etcd to automatically route requests to the appropriate service.

   Managing Complexity: An engineer at a logistics company is tasked with building and maintaining a system that tracks packages and shipments as they move through the supply chain. They use the programming language Python and the Flask web framework to build the system. To manage complexity, the engineer uses a microservices architecture to break the system into smaller, more manageable pieces. They also use containerization technology like Docker to package and deploy the system, which makes it easier to manage and scale. They also use monitoring tools like Prometheus and Grafana to track the system performance and identify potential issues.

   Observability: An engineer at a SaaS company is responsible for building and maintaining a system that provides analytics and reporting features to customers. They use the programming language Node.js and the Express.js framework to build the system. To ensure observability, the engineer uses logging and monitoring tools like ELK stack, Grafana and Prometheus to track system performance and identify potential issues. They also use distributed tracing tools like Zipkin and Jaeger to track request and response times across multiple services.

   Productive documentation and self-documenting approaches to work: An engineer at a cloud service provider is tasked with building and maintaining a system that automates the provisioning and management of cloud resources. They use the programming language Python and the Flask web framework to build the system. To ensure productive documentation, the engineer uses a documentation-as-code approach and stores all documentation in the same repository as the code. They use tools like Sphinx and Doxygen to automatically generate API documentation from code comments. They also automatically generate documentation from code comments and maintain the documentation up-to-date. They also use self-documenting approaches like using clear and descriptive variable names, function names and comments in the code, making it easy for other developers to understand the codebase. They also use a wiki system like Confluence to store additional information, such as design decisions, meeting notes, and user guides.
Literally. You want to drill down on more squishy stuff? The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff. It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.


> You'll be impressed once the successor of chatGPT takes your job.

Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

It would totally make the coding-in part faster (just as IDE suggestions do), but this was always the brainless, tedious, manual-labour part.

> You realize that chatGPT wasn't trained to be a programmer, right?

I realize that, by design, neural networks are unable to generate correct formal descriptions (where each minor detail has a specific and important meaning).

Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though it can be used to draw inspiration for the overall image), etc.

> All of what you said is true yet you are blind if you can't see why it's revolutionary

I never said it's not revolutionary. I merely point out its hard limits.

> In fact it can do better. You can specify all the requirements you want. No use of undefined variables, no hardcoded currencies. More flexibility more features no routing with if statements.

Sure, you can specify every minor detail: how the data should flow, which patterns should be used, which things should be pulled from configs, how the interfaces should be structured, and a shitload of negative prompts. But those are details that only a domain expert would know. And again, there are no guarantees that the result would actually be correct: the expert will have to review all this extra-attentively, cuz there is no chance that the expert's assumptions are the same as the NN's "assumptions".

So you basically still need a domain expert, who now has to do extra (guess)work, instead of just writing a formal description directly in code. What's the profit then?

> Artists are ALREADY starting lawsuits because the art is on par with what they create

Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

AI artists actually typically do a shitload of prompt-engineering, pipe different parts of the image through different NNs (appropriate to the specific situation) and do a lot of manual post-processing so the result looks good.

> I don't know if you realize this, but drawing something photorealistically is actually significantly HARDER than programming

These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.

Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealism from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)

> There's literally no bootcamp that can produce a fine artist in a year because it's brutally hard to do

Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

> If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

Don't kid yourself thinking that these two are similar or comparable sets of tasks.

> chatGPT isn't even useful enough to be used in a professional environment. I AGREE.

That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

> The successor to chatGPT, however, will be.
> They are afraid of chatGPT because it is a herald of the future AI that WILL.

The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

> chatGPT says:

"Handling Personal Data" - somewhat scratches the surface, but it doesn't mention actual problematics (that first and foremost it's a regulatory matter and all the specifics stem directly from it).

"Ensuring Verifiability and Correctness" - clearly confuses runtime and compiler properties with quality assurance, way off.

"Productive Quality Assurance" - didn't understand the productivity issue (to test or not to test) and even if we drop "productive" part, the process it describes is also incorrect: engineers don't really ever work with the QA team in order to establish TDD.

"Robust and Scalable Systems Architecture" - way off, while you'll often see service discovery, nginx, HAProxy, etc in scalable systems, that's not what makes scalability. Properly managing state and persistence in appropriate places does.

"Managing Complexity" - way off. I don't suppose this one even requires an explanation, total gibberish.

"Observability" - as expected, this is a rather good one. Unlike other points (which are concepts/problems) - this one is a rather well defined term.

"Productive documentation and self-documenting approaches to work" - totally ignored "productive" part and just gave a definition of "self-documenting" along with some rhetorics on the fact that people document stuff in general.

Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

Basically, even if you ignore "brain-farts" (which is a good example of "minor" incorrect details that make NNs inappropriate tool for complex formal stuff) it only really got - AT BEST - 2-3/7 right. Now, imagine it's a real world and you are betting millions on it, without having an expert-overseer to tell you when it brain-farts or if the output is even remotely correct.

Actually, what was the prompt? Seems like you just asked it to describe the list I gave you, which essentially means you just used my own expertise, understanding and creativity, not GPT's, as it didn't even give you a list of concrete problems.

> The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff.

Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

> It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.

Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.
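To make that concrete, here's a toy sketch of the bruteforce-with-verification pattern (the `propose` function is a stand-in for a learned model and just guesses randomly here; the names and numbers are purely illustrative, not any real system):

    import random

    def propose(n: int) -> list[int]:
        # Stand-in for learned "intuition": propose a candidate factorization.
        return [random.randint(2, n - 1), random.randint(2, n - 1)]

    def verify(n: int, candidate: list[int]) -> bool:
        # Fast, exact check - this is what makes bruteforcing viable at all.
        a, b = candidate
        return a * b == n

    n = 391  # = 17 * 23
    attempts = 0
    while True:
        attempts += 1
        cand = propose(n)
        if verify(n, cand):
            print(f"found {cand[0]} * {cand[1]} = {n} after {attempts} proposals")
            break

The loop is only productive because `verify` is exact and cheap; a better `propose` (the learned part) just cuts down the number of attempts.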


>Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

All of it. Only one human leader to write queries. Everything else designed by an AI.

>Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though it can be used to draw inspiration for the overall image), etc.

You're just regurgitating a trope that's categorically false. You're an NN, did you realize that?

>I never said it's not revolutionary. I merely point out its hard limits.

And you're wrong. You have thoroughly overstated the limitations, and you are mistaken about this.

>Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

No. Corps and AIs and bots have been scraping pics off the internet for years; Google is one. No lawsuit of this nature was filed until generative AI came out. Artists are threatened and they are reacting; that's why the lawsuit is being filed now instead of before.

https://futurism.com/the-byte/artist-banned-looked-ai-human <- an artist was banned because they thought his work was made by an AI.

>These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.

No. ENGLISH is written with tokens of symbols. PICTURES are written in tokens as well: a pixel is 3 RGB numbers, and in the computer it is represented in a formatted language before being translated onto your monitor. It is a translation problem, and experts treat it the same way. Both DALL-E and chatGPT utilize very similar generative models, translating English to English in the case of chatGPT, and English to numbers (which can be further translated to pixels) in the case of DALL-E.
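To make the "pixels are just tokens" point concrete, here's a minimal sketch (the quantization scheme is mine and purely illustrative; real systems use learned image tokenizers, not this naive bucketing):

    import numpy as np

    def image_to_tokens(image, levels=8):
        # Quantize each RGB channel into `levels` buckets, then pack each
        # pixel into one integer token (levels**3 possible symbols).
        q = (image.astype(np.int64) * levels) // 256
        r, g, b = q[..., 0], q[..., 1], q[..., 2]
        return (r * levels * levels + g * levels + b).flatten().tolist()

    def tokens_to_image(tokens, shape, levels=8):
        # Invert the packing: token sequence back to a coarse RGB image.
        t = np.array(tokens, dtype=np.int64).reshape(shape)
        r, g, b = t // (levels * levels), (t // levels) % levels, t % levels
        scale = 256 // levels
        return (np.stack([r, g, b], axis=-1) * scale + scale // 2).astype(np.uint8)

    # A 2x2 "image" becomes a 4-token sequence - no different in kind from
    # a 4-token sentence, which is why the same generative models apply.
    img = np.array([[[255, 0, 0], [0, 255, 0]],
                    [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
    seq = image_to_tokens(img)           # [448, 56, 7, 511]
    back = tokens_to_image(seq, (2, 2))  # coarse reconstruction
    print(seq)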

>Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistics from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)

Not true. A good amount do.

>Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

There are many bootcamps that make that claim, and there are PLENTY of people who live up to it. But NONE for artistry.

>Don't kid yourself thinking that these two are similar or comparable sets of tasks.

Kid myself? It is literally the same type of neural network. There's no kidding here. It's not a coincidence that chatGPT and DALL-E came out back to back. These models are called generative models. It's a single new technology that's responsible for both.

>That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

No, it's not. There are no guardrails; users can ask it anything and take it anywhere. It can't stay within a defined task. It's also wrong often enough that it can't be used in prod for most tasks.

>The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

No, they weren't heralds. Text generators have always been around, and they kept getting better, but they never displayed signs of true understanding or even self-awareness as this does now. Literal self-awareness.

>Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

I told it to do that. So that the responses wouldn't be generic. chatGPT is following my instructions.

>Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

It is ALSO trained using humans to pick and choose good and bad answers. This training is non-specific, and they used just regular people. If they used programmers and had them pick the good answers to programming questions, chatGPT would begin outputting really accurate code.
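To picture what that human-picking step does, here's a toy sketch of pairwise preference training (a real reward model is a large network; the feature vectors and names here are made up purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "answers" as feature vectors; a rater marks which of each pair is better.
    pairs = [(rng.normal(size=4), rng.normal(size=4)) for _ in range(200)]
    true_w = np.array([1.0, -2.0, 0.5, 0.0])   # hidden rater taste
    prefs = [float(a @ true_w > b @ true_w) for a, b in pairs]

    # Fit reward weights so the preferred answer scores higher
    # (pairwise logistic / Bradley-Terry loss).
    w = np.zeros(4)
    for _ in range(500):
        grad = np.zeros(4)
        for (a, b), y in zip(pairs, prefs):
            p = 1.0 / (1.0 + np.exp(-(w @ (a - b))))  # model's P(A preferred)
            grad += (p - y) * (a - b)
        w -= 0.1 * grad / len(pairs)

    agree = np.mean([(w @ (a - b) > 0) == y for (a, b), y in zip(pairs, prefs)])
    print(f"agreement with raters: {agree:.2f}")

Swap the random raters for programmers judging code answers and the same loop starts rewarding good code; that's the whole argument.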

>Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.

You are categorically wrong about this. 3 neurons can be trained to become a NAND gate, which can then be used to simulate any computational network or mathematical equation that doesn't have a feedback loop. It can model anything with just an input and an output. This has also been demonstrated in practice and proven theoretically.
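Here's a minimal sketch of that claim (in fact a single step-activation neuron suffices for NAND, and since NAND is functionally complete, feedforward networks of such units can express any Boolean circuit):

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([1, 1, 1, 0])  # NAND truth table

    w, b = np.zeros(2), 0.0
    for _ in range(20):  # plain perceptron learning rule
        for xi, yi in zip(X, y):
            pred = float(w @ xi + b > 0)
            w += (yi - pred) * xi
            b += (yi - pred)

    print([int(w @ xi + b > 0) for xi in X])  # -> [1, 1, 1, 0]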


It's really good at boiling things down to the basics, that's for sure, but it's very, very basic, isn't it?


   ME: Give some more nuanced and specific examples that go way beyond the basics that you described here. 

   chatGPT: 

   1. Ethical considerations:
      For example, a developer working on an AI-based medical diagnostic tool may need to consider issues of bias in the training data, which could lead to inaccurate or unfair diagnoses for certain populations. To mitigate this, the developer could use techniques such as debiasing algorithms or using diverse training data to minimize bias in the model.
      Another example, a developer working on a chatbot that serves as a mental health support system may need to consider issues of confidentiality and ensuring the chatbot is able to identify and handle potentially dangerous situations, such as suicidal thoughts. To mitigate this, the developer could include a system for flagging and escalating conversations that contain concerning content and provide users with appropriate resources.
   2. User experience (UX) design:
      For example, a developer working on a mobile app that tracks fitness goals may need to consider issues of motivation and engagement to keep users interested in using the app. To mitigate this, the developer could use gamification techniques, such as leaderboards, rewards, and progress tracking, to keep users motivated and engaged.
      Another example, a developer working on an e-commerce platform may need to consider issues of trust and security to ensure users feel comfortable making purchases on the platform. To mitigate this, the developer could implement a robust security system, such as SSL encryption and two-factor authentication, and display trust badges and customer reviews prominently on the platform.
   3. Business strategy:
      For example, a developer working on a SaaS platform may need to consider issues of scalability and cost-effectiveness to ensure the platform is able to handle a large number of users and minimize costs. To mitigate this, the developer could use cloud computing services, such as AWS or Azure, to handle the platform's infrastructure and use auto-scaling to adjust the number of resources used as needed.
      Another example, a developer working on a platform that integrates with other third-party platforms may need to consider issues of API compatibility and security to ensure that the platform can integrate seamlessly with other platforms and protect users' data. To mitigate this, the developer could use API documentation and testing tools to ensure compatibility, and use API keys and OAuth for secure authentication.
   4. Security:
      For example, a developer working on a platform that handles sensitive data such as financial transactions may need to consider issues of fraud detection and prevention. To mitigate this, the developer could use machine learning techniques to identify and prevent fraudulent activities, such as anomaly detection, and use encryption and secure protocols for data transmission.
      Another example, a developer working on an IoT device may need to consider issues of device security and protecting the device from hacking attempts. To mitigate this, the developer could use secure boot process and secure firmware updates, use strong and unique passwords and encryption, and conduct regular security assessments.
   5. Cultural and linguistic diversity:
      For example, a developer working on a social media platform may need to consider issues of cultural sensitivity and ensuring that the platform is inclusive for users of all cultures. To mitigate this, the developer could use machine learning to automatically detect and remove content that is culturally insensitive, or use natural language processing to detect and flag hate speech.
      Another example, a developer working on a platform that serves users with disabilities may need to consider issues of accessibility and ensuring that the platform is usable for users with disabilities. To mitigate this, the developer could use accessibility guidelines and testing tools to ensure that the platform is compliant with accessibility standards and implement features such as screen readers, text-to-speech, and keyboard navigation.
   6. Communication and collaboration:
      For example, a developer working on a remote team may need to consider issues of communication and ensuring that everyone is on the same page. To mitigate this, the developer could use collaboration tools such as Slack, Trello, or Asana to keep everyone connected and informed about the development process. Additionally, the developer could use video conferencing tools such as Zoom or Google Meet to hold regular team meetings and keep everyone connected.
      Another example, a developer working on a project with a tight deadline may need to consider issues of time management and ensuring that the project is completed on time. To mitigate this, the developer could use project management tools such as Jira or Basecamp to track progress and ensure that everyone is on track to meet the deadline. Additionally, the developer could use agile methodologies such as Scrum or Kanban to manage the development process and ensure that the project is completed on time.
If you want, you can even take an example and drill it down in more detail. You can twist an example any way you want. How would you use a specific example to handle the Hulk as a customer? What's the best way to do it?


Just for kicks:

   ME: What if the Hulk was one of your customers? How would you handle him? Give specific and detailed examples. 
   
   chatGPT: 
   As a language model AI, I do not have any customers or clients, but I can provide you with an example of how a software development team might handle a customer like the Hulk, who is known for his incredible strength and rage.

   1. Communication and collaboration:
      If the Hulk is a customer, it's important to ensure that the team is able to communicate effectively with him and understand his needs. This could involve using clear and simple language when communicating with him, and providing him with detailed documentation and guides to help him understand the software being developed.
      Additionally, it could also be important to have a dedicated point of contact for the Hulk, such as a project manager, who can serve as the primary liaison between the team and the Hulk. This person would be responsible for keeping the Hulk informed about the project's progress and addressing any concerns he may have.
   2. Safety and Security:
      As Hulk is known for his incredible strength and rage, it's important to ensure that the team's safety is a top priority. This could involve implementing security protocols such as security cameras and alarms, and having security personnel on site when the Hulk is visiting the office.
      Additionally, it could also be important to have a plan in place to respond to any incidents that may occur, such as a protocol for evacuating the office in case of an emergency.
   3. Technical Capabilities:
      The Hulk has a specific set of abilities that are different from other customers, it's important to consider how the software will handle those abilities. For example, if the software is an application that tracks Hulk's strength, the team may need to make sure that the software can handle the high levels of data and ensure that the data recorded is accurate.
   4. Empathy and Understanding:
      It's important to understand and empathize with the Hulk's unique situation, such as his abilities and limitations, and how those factors may affect his interactions with the software. This could involve conducting user research with the Hulk to gain a better understanding of his needs and how the software can be designed to best meet those needs.
   
   It's important to keep in mind that the Hulk is a fictional character, and in real life, the safety and well-being of the team should always be the top priority. The above examples are purely fictional and should not be applied in real life situations.
Anyway... I think my point is unmistakably clear.

chatGPT is giving out instructions on how to handle a customer that cannot exist in real life. If you're thinking this thing is some generic text generator, think again.

Everything before chatGPT was an interesting toy. Something changed with chatGPT.


It’s good at generating bullshit text, that depending on topic/training data may not be as bullshitty. Still, every single relation between the sentences is questionable. Try asking it some basic ass math questions and it fails spectacularly (with huge confidence, none the less)


Many humans suck ass at math too if they aren't taught it at all. chatGPT wasn't trained specifically on math, but it could be. I don't see why it needs to be, though, given that computers already excel beyond humans at math.

Read this and tell me whether it's just bullshit text: https://www.engraved.blog/building-a-virtual-machine-inside/

I cannot fathom how someone can read that and think that chatGPT is just making text that is coincidentally not bullshitty.


Ok, it also breaks down quickly at simply applying a simple rule repeatedly (it has no problem applying it once). And this is pretty much an inherent problem with language models; no amount of scaling up will help with it.

And this is pretty much a must for proper thinking: you have to be able to follow a train of thought.


Relax. It's not as smart as a human, ok? It does have issues; there's no denying this.

But it is impressively smarter than you thought. You didn't dig into this thing deeply enough, and you dismissed it as if it were some other generic text generator. It clearly is not. This is a milestone in AI technology.

You also have to realize this AI was NOT TRAINED specifically to understand the concept of code, virtual machines or any of this. It KNOWS these things as a SIDE EFFECT. It is also a nascent technology. It is the first prototype and there will be TONS of improvements on top of it.

This wasn't GPT-3. This is different. I hope you realize this and investigate further because a lot of people just skimmed the headlines and dismissed it as another GPT-3.


This is at best "first-year student sat down for an hour to hand in half-baked homework" level. We seem to have very different ideas of what "creativity" means.


You seem to not realize how big the gap between zero intelligence and a first-year student is.

I can tell you, that gap is vast, vastly bigger than the gap from half-baked homework to an A-level paper.

But this is beside the point. If you've been following the news, ChatGPT-generated papers are already a huge problem in schools. It's already beyond half-baked.


It recently generated a response with incorrect English grammar. I asked if it could find the grammatical error in its response. It did, and corrected it.


Any other examples of what you mean by creative?


https://www.engraved.blog/building-a-virtual-machine-inside/

This article was "holy shit" all the way through and all the way to the "holy shit" ending. Read to the end.

We've just passed a technological inflection point in AI. Not everyone fully noticed.


It’s a cool search engine. Nothing more.


It is a language model disguised as an answer engine, not a search engine. Its design does not allow it to link back to sources.


> It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

That's an absolutely massive claim but I think it's a false one.

I've seen it be creative, but how good / useful that creativity is seems to be quite the roll of the dice. This is as opposed to someone / something being intentionally creative.

This is not to knock the system, but depending on the "seed" you might get something brilliant or silly; I think there's a major difference between that and your claim that it's "genuinely" creative. Genuinely creative would imply that 100% of the time, I get something new and relevant. Which isn't what happens for me.

For example, I re-asked it your original question and it says this is what developers do:

    User input: Developers need to anticipate and handle unexpected or invalid input from users, such as special characters, empty fields, or out-of-range values.

    Randomness: Developers may need to incorporate randomness into their programs, such as generating random numbers or shuffling a deck of cards.

    Concurrency: Developers may need to handle concurrent access to shared resources, such as preventing race conditions or deadlocks.

    Network communication: Developers may need to handle network communication, such as dealing with lost or delayed packets, or handling different types of connections (e.g. HTTP, WebSockets).

    Time: Developers may need to handle time-related issues, such as scheduling tasks, handling time zones, or dealing with clock drift.

    Human language: Developers may need to handle natural language processing tasks, such as understanding speech, identifying sentiment, or handling different languages.

    Image and video processing: Developers may need to handle image and video processing tasks, such as object detection, image recognition, and video compression.
I wouldn't consider it a developer's job to do video processing tasks or object detection, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact I couldn't really imagine a more generic attempt at a response to that question.

It's for sure impressive, but I feel the more "creative" I ask it to be (the more entropy I inject), the more likely it is to come up with incorrect answers; however, I'll concede that an "incorrect" answer could be used as inspiration for new ways of thinking about solving problems.

Personally, I feel the more I play with these systems, the more predictable they become, and that isn't something "truly creative" would be.

Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain", when really nothing is perfect, especially the data it's being trained on.


I only pasted a snippet of the conversation. Before my initial query I asked chatGPT directly this:

   Quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff. How would you handle customers in this case. Can you provide an example?
And then, after it responded, I queried it again for more detailed examples. I didn't post the initial query or the initial answer because it would make everything too long. You're likely asking it a different, more general question.

If you want chatGPT to be creative and get into specifics, you have to say so; otherwise it just gives you a general answer, which is not abnormal for typical conversation. Just like in a normal conversation, you have to engage in detailed dialogue with chatGPT to drill down to the specifics of what you want answered.

>That's an absolutely massive claim but I think it's a false one.

It's a massive claim with massive evidence everywhere that proves it true. There are tons of examples of chatGPT being highly creative. Read the stuff in the branching threads under my initial reply; there are tons of examples and a link showing chatGPT being highly creative. It is DEFINITIVE evidence.

>I wouldn't consider it a developer's job to do video processing tasks or object detection, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact I couldn't really imagine a more generic attempt at a response to that question.

I work with developers who do ML. I myself do video processing and object detection as a developer. You must be in web; I'm in embedded systems with vision. It's a generic answer, but given that you didn't ask for a detailed answer, it gave you a generic one. Like a human, it won't go into nuance until you ask about nuance.

>This is not to knock the system, but depending on the "seed" you might get something brilliant or silly; I think there's a major difference between that and your claim that it's "genuinely" creative. Genuinely creative would imply that 100% of the time, I get something new and relevant. Which isn't what happens for me.

This is wrong. Are humans creative 100% of the time? If a human gives me one answer to a question that isn't creative, is that human suddenly not creative? If you ask a generic question, it will give you a generic answer.

>It's for sure impressive, but I feel the more "creative" I ask it to be (the more entropy I inject), the more likely it is to come up with incorrect answers; however, I'll concede that an "incorrect" answer could be used as inspiration for new ways of thinking about solving problems.

I see what you're saying here. You "feel" that the more entropy you inject, the more incorrect answers it will come up with. Don't go off "feeling". Try it.

I already took your question and replaced the customers with the Hulk. I can add in Wednesday Addams as one of the developers. I can take this entire question and turn it on its head to ask what if the client asked me to write a program to destroy the universe. You can take this thing all the way to destroying the universe and into the new universe that comes after it. chatGPT is game and will follow you.

Look, I posted this in a branching thread; I'll post it here because it's literally unequivocal evidence: https://www.engraved.blog/building-a-virtual-machine-inside/ Read it till the end, because the ending is creative inception.

There is no way chatGPT was trained specifically on any of the above. Yet it knows how to imagine all of that.

>Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain", when really nothing is perfect, especially the data it's being trained on.

No, it won't be perfect. But neither are you. What AI can be is as good as, if not better than, you. Being better than a human doesn't mean it's right about everything. It will get things wrong sometimes, but I believe in the near future it will get things wrong fewer times than a human.


I get the vibe you really like ChatGPT :) I'm not in web either.

Anyway, I followed that blog post and watched it hallucinate a Linux terminal for me. I did this:

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}.

Then ran: rm -rf /

and got this:

    I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?

I don't know if I would call this amazing; it's actually wrong, because it wouldn't harm my system.

It's a cool trick to watch it generate fake terminal output, but I'm honestly just not that impressed.


>I get the vibe you really like ChatGPT :) I'm not in web either.

No, it's not a "like" thing. Not at all. I neither like it nor hate it. I know you think I'm biased here, but it's actually the other way around. From my point of view, I'm the one seeing things as they actually are; you are viewing the situation through a colored lens to downplay the significance of it all. Perhaps out of subconscious fear, or something else, I don't know.

Let me illustrate the situation for you. If you asked all of that of a one-year-old human baby, and that baby with perfect English said to you:

   I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?
According to your logic this wouldn't be that amazing, because that human baby just gave you a semi-wrong answer after perfectly imitating terminal output and recursively creating itself on a virtual internet.

Yeah, not impressed with a baby, let alone a binary file.

Do you see the disconnect here? You are downplaying the situation. Many people like you are. They're just repeating the same trope reactions they had to all the other inconsequential AI headlines this year. Even if I lay out the logic of your bias, there's also a bit of pride in this debate, since it requires one of us to admit they're wrong.

You're also not seeing that the "mistakes" you're pointing to are entirely correctable through further training. The model is currently tuned by low-wage workers in Kenya picking "good" answers, and wrong answers that look right definitely get past them. Start training it with experts in the field to refine the answers with more nuance, and eventually you get something that is not just right half the time, but right almost all the time.
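
(To be concrete about what "picking good answers" trains: here's a toy sketch of the pairwise preference step, in the style of RLHF reward modeling. Everything here, the linear scorer standing in for a transformer and the random "embeddings", is invented for illustration; real pipelines are far more involved.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: a single linear layer standing in for a full network.
class RewardModel(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, answer_embedding):
        return self.score(answer_embedding)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Pretend embeddings of rater-chosen vs. rater-rejected answers.
chosen = torch.randn(16, 128)
rejected = torch.randn(16, 128)

# Pairwise (Bradley-Terry style) loss: push chosen scores above rejected.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```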


No, I just don't find it unreasonable that it responds that way.

It's statistics; it's not really "inventing" anything, it's not magic either, and it's not hard to conceive why we get that answer. Again, not trying to downplay it, it's a cool technology; it's just that if you calmly think it through, the response is fairly reasonable.

If I search the internet, I see many, many people talking about the dangers of running `rm -rf /`. I also see examples of other errors ChatGPT has returned when that command was entered, such as "bash not found", which is what others have experienced and what I can find examples of on the internet.

So basically, the response I was given would be weighted very highly on ChatGPT's list of things to say, based on the training data, or a 5 minute Google search.
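
(Mechanically, I picture it like this toy sketch. The continuations and their weights are made up for illustration; this is not what a real model's internals look like:)

```python
import random

# Invented weights for continuations after the prompt "user ran: rm -rf /".
next_token_probs = {
    "I'm sorry, but I'm unable to execute that command...": 0.55,
    "bash: rm: command not found": 0.25,
    "Permission denied": 0.15,
    "(something else)": 0.05,
}

def sample(probs):
    """Pick one continuation in proportion to its weight."""
    r = random.random()
    cumulative = 0.0
    for continuation, p in probs.items():
        cumulative += p
        if r < cumulative:
            return continuation
    return continuation  # fallback for floating-point edge cases

print(sample(next_token_probs))
```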

I'm not anti-AI, or anti-ChatGPT; I'm just not into believing in magic when it comes to LLMs.

If ChatGPT were magic and didn't return that response, would it be useful?


There's no magic here. There's no claim that chatGPT is useful either. The claim is that chatGPT is a precursor to a technology that will replace many human jobs, because it's showing telltale signs of an intelligence that is quickly catching up to human intelligence.

I don't know why you're so focused on the rm -rf / thing. It literally just demonstrated awareness of a terminal shell, a filesystem, the internet, APIs, and finally self-awareness.

That is a herald of the future, despite all of chatGPT's rough imperfections. You are downplaying it. I'm sorry.

The whole "it's just statistics" thing is a red herring. If the output of chatGPT can be modeled as a statistical phenomenon, then so can the output of the human brain. It's a distraction. What is going on here is indeed creativity, awareness and imagination; if you want to call it a statistical phenomenon, that's fine, but then you're just saying all of intelligence can be modeled as a statistical phenomenon.


I bet part of this is a distraction. Someone needed the power out in order to pull off some heist in NC, so they killed power over a vast area. That way the motive is virtually impossible to pinpoint.

Likely it's some security system that needs constant power, like a vault. You kill power in a massive enough way that the backup generators run out before everything comes back online. Then boom! Pull off your elaborate heist to the tune of (https://www.youtube.com/watch?v=B1lWoA2tgxQ) and walk away like nothing happened.


Electromagnetic locks fail open when power is cut, so they're better suited for gateways that should be open by default. I can't imagine why someone would make a vault depend on electricity.

That being said, I'm sure there are plenty of crimes that would go easier with no power. Perhaps taking down cameras.


>I can't imagine why someone would create a vault dependent on electricity.

Same reason people make doorknobs and car locks that depend on electricity. Also this: https://www.protoolreviews.com/lockdown-logic-vault-door-cre...

That being said, I'm sure it's a bug. I'm sure whoever made this security system accounted for power outages, but some loophole appears during the outage that the builders didn't anticipate.

Also yes, obviously it's some security thing that has to do with power, not necessarily a vault door. That was just an example.


Nobody cares enough to risk everything just to get some extra security on some arbitrary thing.

That's like shooting up a school to get more security around schools. Come on.


The FBI did it to the Branch Davidians when the FBI's budget request was being considered. The FBI wanted a big story to tell on the Hill, got repulsed, and resorted to gassing everyone and burning the evidence.


https://en.wikipedia.org/wiki/Waco_siege

The DOJ concluded that the Branch Davidians started the fires.


Why do you think that report can be trusted? I think it more likely that they were covering up the FBI’s guilt.

Isn't it a sad commentary on society that I am more willing to believe the innocence of a bunch of wacko cultists than any claims of their guilt by the DoJ or FBI?


This is how you keep the parties going. Get single and travel to a place with a lot of travelers.


Preferably travellers that have shared interests with you - then the barriers of age don't matter nearly as much. Some of my favourite people here are pensioners - pensioners that get out and do stuff.


The sad reality is that no one in this world is entitled to anything. Not even kids. Kids get what they get because of willful charity, not entitlement. Some kids get more charity, others get less; life is unfair.

This idea that all kids are fucking angels who deserve everything is absolutely insane. Realize Hitler was a kid once too.

My kids don't deserve shit. They get what they get because I'm instinctively obligated by natural selection to love them irrationally and provide charity to them, so that they maximize their chances of reproducing and spreading my genes.

