
Then, if you are successful one day, you can cry on social media about your large surprise bills.


That is a big “IF”. 99.99% of side projects are not successful; you should not be planning architecture expecting to win the lottery.

Even if you did, on-premise self-hosting does not save you from scaling costs or problems.

If you are successful with Firebase, all you get is the bill; Firebase will definitely scale without sweat or intervention, and for most businesses (side projects or small ones) that is a better tradeoff.

If you are successful with a self-hosted solution, you are likely to be down when the peak traffic or hug of death hits; that is exactly when you most need it to work, and the app will fail.

And that is assuming intimate expertise in how to scale quickly, having done it many times before, with no missteps and no time spent researching what to do.

A professionally managed service at Firebase's level, which is shared infrastructure, already has both the capacity and the code tuned to scale for any one tenant without effort or lag.

Self-hosted infrastructure is optimized for the cost and usage pattern of one small tenant. No matter how good your scale-out skills and autoscaling code are, there is going to be a lag between you and something like Firebase, which will never even notice your spike or the issues around it.

This is the crux of the success of multi-tenant IaaS and PaaS, and the reason Amazon originally opened up its infrastructure to the world.

Even at Amazon.com's scale of SRE skills and budgets, hosting on AWS will always be better and cheaper for them than using infrastructure isolated just for them, given that their load pattern fluctuates pretty heavily.

Amazon.com would still run on AWS even if AWS were losing a bit of money, because it makes them more resilient and their infrastructure costs cheaper.

The best analogy I have heard is that it is similar to insurance: it works better with more and more users with diverse usage and risk patterns.


For me the answer is much simpler.

I find coding backends to be very boring. It's just not something I want to do during my spare time.

Managed infra is almost always easier to get going.


Coding CRUD and repeating the same patterns tends to be boring; writing the interesting logic in the backend, however, can be genuinely engaging.

Generated CRUD interfaces with basics like authentication are pretty easy to get these days, whether through self-hostable stacks in the Supabase, Hasura, or PostgREST style or just Firebase. Even the basics of collaborative data structures like CRDTs don't need to be built each time and are available out of the box.
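
For example, here is a minimal sketch of what that looks like with the supabase-js client; the project URL, anon key, credentials, and the 'todos' table are all placeholders, not from any real project:

    import { createClient } from '@supabase/supabase-js'

    // Placeholder project URL and anon key; substitute your own.
    const supabase = createClient('https://your-project.supabase.co', 'public-anon-key')

    async function demo() {
      // Auth ships with the stack; no hand-rolled session handling.
      const { error } = await supabase.auth.signInWithPassword({
        email: 'user@example.com',        // placeholder credentials
        password: 'example-password',
      })
      if (error) throw error

      // Generated CRUD: insert and read rows without writing any backend code.
      await supabase.from('todos').insert({ title: 'write the interesting logic instead' })
      const { data: todos } = await supabase.from('todos').select('*')
      console.log(todos)
    }

    demo()

All the boring plumbing (auth, row storage, the REST layer) comes generated; the only backend code left to write is whatever logic is actually interesting.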

Frontend can get repetitive too; a lot of components keep getting built and redone differently. The genuinely interesting frontend work is where the tech becomes a core part of the product differentiator, such as how WASM is used at Figma.


Man if only there was another way to turn on lights.


Hmmm, NFC tag paired to the light that you can tap with your phone to turn it on/off?


I don't understand this view. I used VS Code, and the amount of per-project customization I had to do was equal to or greater than what I have to do with Neovim. Do you never change anything or have to fiddle with extensions to get the desired behavior? Or create run configurations for projects?


The Ukraine illegally used cluster bombs on ethnic Russians for years. Here is actual documentation: https://www.hrw.org/news/2014/10/20/ukraine-widespread-use-c...


This makes no sense. The conflict started in 2014 and you point to an article from 2014; how does that show years of bombs? In addition, Russia no longer keeps up the lie that the 'little green men' who invaded Ukraine were just ethnic Russian Ukrainians already in Ukraine. They admit that they invaded with Russian troops. Soldiers will often have their service in 2014 hyped in online obituaries when they are eliminated now. Ukraine was responding to an unannounced invasion by its much larger neighbor with the weapons it had at hand (weapons that Russia also has and uses). What a non-point made with non-evidence, trying to place blame on a country that was invaded. Do you also use 'how she was dressed was asking for it' as an argument?


They didn’t use them illegally; neither Russia nor Ukraine is a signatory to the CCM.


>The Ukraine

It's a country, not simply a geographic region.


Doctors, lawyers, historians, and anyone else shouldn't use chatgpt for their work.


Why not? They should use it, with sufficient understanding of what it is. Doctors should not use it to diagnose a patient, but could use it to get some additional ideas for a list of symptoms. Lawyers should obviously not write court documents with it or cite it in court, but they could use it to get some ideas for case law. It's a hallucinating idea generator.

I write very technical articles and use GPT-4 for "fact-checking". It's not perfect, but as a domain expert in what I write, I can sift out what it gets wrong and still benefit from what it gets right. It has both suggested some ridiculous edits to my articles and found some very difficult-to-spot mistakes, like places where a reader might misinterpret something from my language. And that is tremendously valuable.

Doctors, historians, lawyers, and everyone else should be open to using LLMs correctly, and using them correctly isn't some arcane, esoteric practice. The first time we visit ChatGPT, it gives a list of limitations and what it shouldn't be used for. Just don't use it for those things, understand its limitations, and then I think it's fine to use it in professional contexts.

Also, GPT-4 and 3.5 now are very different from the original ChatGPT, which wasn't a significant departure from GPT-3. GPT-3 hallucinated anything that resembled a fact more than an abstract idea. What we have now with GPT-4 is much more aligned; it probably wouldn't produce what vanilla ChatGPT produced for this lawyer. But the same principles of reasonable use apply: the user must be the final discriminator who decides whether the output is good or not.


Yeah, fighting back by walking out during lunch, and only if a thousand people sign the petition. So brave.


My good-faith two cents: one step is better than zero here, even if this isn’t, as you plainly state, much more than a hollow gesture. I doubt Amazon senior leadership are thrilled that this has been in the headlines since this past weekend.


Industrial action builds up from even smaller actions. Strikes don’t come out of nowhere.


We should really ban adults from social media as well.


How exactly? Explain how an AI would cause the end of the world? Are you suggesting we would turn over all of the world's nuclear arsenals to AI to deal with? Maybe it's just me but lately it seems like everything is being labeled as "dangerous" to the point of absurdity. It seems to be following the same line as US political rhetoric where everyone is either a Nazi or communist bent on destroying the country depending on what side you generally align with.


And the monkeys thought, “how could a human be dangerous?” “Would they clobber us with stones? Surely we are stronger!”

The problem is that the toolset available to someone who is much more intellectually capable is beyond what we can think of.

People are not afraid of AGI because it will behave like a very smart human, people are afraid of AGI because the capability gap will be more like the one we experience between humans and other animals.

The toolset available to humans when dealing with monkeys is literally incomprehensible to the monkeys.

The toolset available to AGI is similarly incomprehensible to humans.


Part of the issue is that taking AI alignment seriously does require some level of intellectual humility — a quality that the HN comment section famously lacks.


This has been written about in numerous places. There are multiple possible ways an AI might go about this if it saw that as its task; the probability of any one specific method being used is of course lower than the total probability of the whole set, so any one method would be an unlikely and speculative scenario. The method in question could range from nuclear, chemical, or biological attack, to sabotaging agriculture, mass-producing CFCs or other pollutants, triggering wars, or other unforeseen approaches. Most scenarios allow that (a) the AGI is very smart, deceptive, creative, and resourceful, and can pose as a human or corporation to execute transactions; (b) the AGI is able to gain control over some means of funding, either legitimately or illegitimately, and thereby pay unsuspecting humans to perform seemingly innocuous tasks like protein synthesis or package delivery; and (c) you wouldn't see it coming, any more than you see the checkmate approaching several moves ahead, because the AGI would appear friendly and helpful along the way, perhaps earning you lots of money, while secretly outsmarting you for its own ends.

For a nuclear approach, the AI would only have to hijack the least-hackproof of US, Russian, or Chinese arsenals in order to trigger an exchange from all sides. But it would probably opt for a different method that would do less collateral damage to its own resources.

This has been an issue raised since at least the early 2010s if not before, and so (arguably) predates the most recent round of US political polarization. The core arguments are unchanged, but became more urgent as AIs broke through several milestones thought to be decades out, such as defeating top human Go players, cracking the protein folding problem, and passing the Turing test with flying colors.


Forget the idea of an "AI" then, because the idea of "intelligence" makes the argument harder. Just think of a "new technology."

Is it possible that a new technology could destroy the world? Of course. It could've turned out that nuclear weapons would incinerate the atmosphere upon detonation, as some were worried they would. It could be that the next technological innovation will kill us; there's nothing preventing it in the laws of physics.

AGI is a specific technology we are worried about, because the whole premise is "once we build something that is extremely capable at a variety of things, one thing it will be capable of is destroying the world. Even by accident."

We're already using AI techniques to help with problems in biology like protein folding. Take it a few dozen iterations forward, and these systems will be helping design medicines and vaccines that no human could design by themselves. At that point, what's to stop the system from creating a super-flu that kills everyone? Forget about intent here; how about a bug?

ChatGPT often misunderstands queries. Take something like ChatGPT but 100x more capable: do you really think people won't be using it to do things? And given that they will, it could easily have a bug that, oops, incinerates the atmosphere as a side effect.


It's called the nondelegation doctrine. The rulings pertaining to it have not been consistent, but it generally holds true.

https://www.law.cornell.edu/wex/nondelegation_doctrine


I’ve seen some discussion about that topic, but I’ve found it hard to parse how explicit people expect Congress to be. At some point it seems unworkable, a recipe for very inflexible government policy.


The problem isn't painkillers; it is the massive quantities of fentanyl being brought into the country. It's made worse by fentanyl being so cheap that it is basically pressed into pill shape and sold as other, less lethal substances like Xanax, Percocet, MDMA, etc. People are also just selling it as cocaine. There have been several stories of people thinking they bought cocaine and overdosing because it was, or contained, fentanyl.

