j_shi's comments

Funcom’s Dreamfall, one of my favorite games of all time, has something to say about where this could go!


Actually, it seems Databricks got a great deal for Mosaic. The real question is why Mosaic took it vs. holding out or doing another round.

Rough math plugging in public #s and comments here (same arithmetic sketched as a quick script after the list):

- All stock deal at Aug 2021 val of 38B (1B ARR)

- Assume rev doubled to 2B (which may even be aggressive)

- SaaS multiples are down ~6x since Aug 2021

- 38B x 2 / 6 = $12.7B

- 12.7B / 38B * 1.3B = 434M = effective price

- Assume 100M to pref stock

--> Comes out to 334M, with a chunk of that (1/3? 1/4?) potentially subject to earn out
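For legibility, here is the same back-of-envelope math as a quick sketch; every input is an assumption from the list above, not a reported figure.

    # Back-of-envelope sketch of the arithmetic above; all inputs are assumptions.
    aug_2021_valuation = 38e9   # Databricks valuation at the Aug 2021 round
    revenue_growth = 2          # assume revenue roughly doubled since then
    multiple_compression = 6    # SaaS multiples down ~6x since Aug 2021
    deal_price = 1.3e9          # headline all-stock price for Mosaic
    pref_payout = 100e6         # assumed amount going to preferred stock

    # Implied Databricks value today
    implied_value = aug_2021_valuation * revenue_growth / multiple_compression

    # Mark the stock consideration down by the same ratio
    effective_price = implied_value / aug_2021_valuation * deal_price

    to_common = effective_price - pref_payout

    print(f"implied Databricks value: ~{implied_value / 1e9:.1f}B")   # ~12.7B
    print(f"effective Mosaic price:   ~{effective_price / 1e6:.0f}M") # ~433M (the 434M above rounds to 12.7B first)
    print(f"left for common:          ~{to_common / 1e6:.0f}M")       # ~333M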


Realized there's probably pref on Databricks too, which would further lower the value of its common. On the other hand, there could have been a markdown from the 38B since August '21.


Disagree that there are sacred, timeless skills we ought to protect; tech has reduced, and will continue to reduce, our need to spend mental bandwidth on skills.

The same offline risk applies to all tech: navigation, generating energy, finding food & water.

And as others have noted, like other personal tools, AI will become more portable and efficient (see the progress on self-hosted, minimal, efficiently trained models like Vicuna, which claim ~92% parity with OpenAI's fancy model).
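As a rough illustration of how low that barrier already is, here is a minimal sketch of running a self-hosted model with Hugging Face transformers; the checkpoint name and generation settings are placeholders, not a recommendation.

    # Minimal sketch: running a self-hosted chat model locally with transformers.
    # The model id is an assumption -- substitute whatever checkpoint you host.
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="lmsys/vicuna-7b-v1.5",  # hypothetical self-hosted checkpoint
        device_map="auto",             # use a GPU if available, otherwise CPU
    )

    out = generate(
        "Explain in one sentence why local models help offline use.",
        max_new_tokens=80,
    )
    print(out[0]["generated_text"])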


Even if we don't "need" to protect them, they'll be practiced somewhere.

I can watch endless hours of people doing technically obsolete activities on YouTube right now.


Self-hosted + self-trained LLMs are probably the future for enterprise.

While consumers are happy to get their data mined to avoid paying, businesses are the opposite: willing to pay a lot to avoid feeding data to MSFT/GOOG/META.

They may give assurances on data protection (even here, GitHub Copilot's TOS has sketchy language around saving derived data), but they can't get around the fundamental problem that their products need user interactions to work well.

So it seems with BigTechLLM there's an inherent tension between product competitiveness and data privacy, which makes them incompatible with enterprise.

Biz ideas along these lines:

- Help enterprises set up, train, maintain own customized LLMs

- Security, compliance, monitoring tools

- Help AI startups get compliant with enterprise security

- Fine-tuning service


In the book "To Sleep in a Sea of Stars" there's a concept of a "ship mind" that is local to each spacecraft. It's smarter than a "pseudo AI" and can have real conversations, answer complex questions, and even tell jokes.

I can see a self-hosted LLM being akin to a company's ship mind. Anyone can ask questions, order analyses, etc., so long as you are a member of the company. No two LLMs will be exactly the same - and that's ok.

https://fractalverse.net/explore-to-sleep-in-a-sea-of-stars/...


I suspect the major cloud providers will also each offer their own “enterprise friendly” LLM services (Azure already offers a version of OpenAI’s API). If they have the right data guarantees, that’ll probably be sufficient for companies that are already using their IaaS offerings.


Enterprises should work on an open-source LLM and run it on their own. This would also help people like you and me run an LLM at home.

It has worked before, as in the case of Linux, and it can work again.


Powerful LLMs are so large that they can only be trained by the major AI companies. Even LLaMA 65B (whose release ended up more open than intended) can't compete with GPT-3.5, let alone GPT-4. And the price for the most powerful models will only increase now, as we effectively have an arms race between OpenAI/Microsoft and Google. Few, if any, will be able to keep up.

Linux is different. It doesn't require huge investments in server farms.


I think you would be interested in Google's internal memo[0] that did the rounds here a couple of weeks ago. The claim is that OpenAI and all of its competition are destined to fall behind open source. All you need is for a big model to be released, and all fine-tuning can be done by a smart, budget, distributed workforce.

[0]: https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...


But why would a big model be released? LLaMA can't even begin to compete with GPT-4. Fine-tuning won't make it more intelligent. The only entity currently able to compete with OpenAI/Microsoft is Google with their planned Gemini model.


…today. But with the amount of (justifiable, IMO) attention LLMs are now getting, I don't see how this won't change soon. And there's quite a bit of incentive for second- or third-tier companies to contribute to something that could kneecap the bigger players.


How do the data rights broadly differ between OpenAI API directly and through Azure's endpoint?


I don’t think they do. From what I can see, Azure OpenAI is just a forwarder to the OpenAI instance.

The big benefits are AAD auth and the ability to put a proxy (APIM, etc.) on the OpenAI endpoint to do quality control, metering, logging, moderation, etc. all within Azure.
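For what it's worth, the call shape is nearly identical; roughly, with the 0.x-era openai Python SDK, the differences are auth and routing, plus addressing a deployment rather than a model name. The resource and deployment names below are placeholders, not anything from the comment.

    import openai

    # Direct OpenAI endpoint
    openai.api_key = "sk-..."  # OpenAI API key
    openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hello"}],
    )

    # Azure OpenAI endpoint: same request body, different auth/routing,
    # and you address a deployment you created rather than a model name.
    openai.api_type = "azure"
    openai.api_base = "https://my-resource.openai.azure.com"  # hypothetical resource
    openai.api_version = "2023-05-15"
    openai.api_key = "<azure-api-key>"  # or AAD: api_type="azure_ad" with a bearer token
    openai.ChatCompletion.create(
        engine="my-gpt35-deployment",  # hypothetical deployment name
        messages=[{"role": "user", "content": "hello"}],
    )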


> willing to pay a lot to avoid feeding data to MSFT/GOOG/META.

Right now, you can't pay a lot and get a local LLM with similar performance to GPT-4.

Anything you can run on-site isn't really even close in terms of performance.

The ability to fine-tune to your workplace's terminology and document set is certainly a benefit, but for many use cases that doesn't outweigh the performance difference.



“* According to a fun and non-scientific evaluation with GPT-4. Further rigorous evaluation is needed.”


Bingo, and a similar deal applies with Facebook. The best way to get leverage and power with FB/GOOG is for advertisers to cooperate instead of compete. We are trying to do this with ecomm advertisers right now (by getting advertisers to coordinate instead of bidding against each other), but it goes beyond any particular ad vertical.


You mean his American wife whose parents were refugees from Vietnam? Source: light googling


The Hoa people (a Han Chinese ethnic minority in Vietnam) suffered persecution under the socialist government of Vietnam in the '70s. That's why many of them escaped as refugees.

https://en.wikipedia.org/wiki/Hoa_people


Wait for enough confirmations that the payment becomes unlikely to be reversed, which of course takes time; that's the more practical blocker for regular shops accepting Bitcoin.
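A minimal sketch of what "wait for enough confirmations" looks like in practice, assuming a local Bitcoin Core node and the python-bitcoinrpc client; the credentials, txid, and the 6-confirmation threshold are illustrative, not from the comment.

    import time
    from bitcoinrpc.authproxy import AuthServiceProxy  # pip install python-bitcoinrpc

    rpc = AuthServiceProxy("http://rpcuser:rpcpass@127.0.0.1:8332")  # placeholder credentials
    txid = "<customer payment txid>"

    while True:
        confirmations = rpc.gettransaction(txid)["confirmations"]
        if confirmations >= 6:  # a common "practically irreversible" threshold
            print("payment settled")
            break
        time.sleep(60)  # blocks arrive every ~10 minutes on average, so this takes a while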


They aren't doing that; in the places I've seen, they just take your payment and let you walk away with it. (I've never paid that way myself, but I've seen other customers do so.)

If it's raw Bitcoin, they couldn't even be sure that the transaction is a valid one (that the wallet has the funds in the first place, not even talking about double spending). I suspect they use some kind of third party like Coinbase, and that they aren't really using Bitcoin at all (and just use Coinbase as a bank), but I'm not sure.



Thanks. I'm not sure I understood the answer given in the specific post you're linking to (or at least, I don't see how it answers my question). A following response linked to this story: https://www.ccn.com/bitcoin-atm-double-spenders-police-need-... which gives a pretty good answer.


There are two approaches to Bitcoin network usage in play. The Bitcoin Core team went with RBF (Replace-By-Fee), which gives rise to the double-spending problem, as the funds can be redirected before a transaction gets into a block. The Bitcoin Cash protocol implementation removed RBF to support trust in 0-confirmation transactions. There are even more optimizations in the BCH implementation of Bitcoin; you can review them here: https://cash.coin.dance/development


From my quite limited understanding, RBF basically allows you to cancel a transaction (since you can set the fee to something that would never be accepted in a block), which is even worse than a regular double spend, because in the end nothing at all is spent.

But even without RBF, there's nothing stopping you from spending the same coins online and in a restaurant at the same time. I'm not even sure a restaurant would know that you and your friend aren't actually spending the same money twice for your respective meals.


0-conf transactions have timestamps, and the receiving wallet/node checks the tree of transactions before allowing the spending of unconfirmed transactions, up to a limit of 25 for now. There is ongoing research and testing, sponsored by SatoshiDice, to raise this limit to 500: https://twitter.com/PeterRizun/status/1181980303033692162

For a list of transactions trying to double-spend BCH and failing at it, see https://doublespend.cash/


Could also go the strategic-exit or PE route, though the former is tricky to put together and the latter is tough on valuation.


This is the bull case for We that justifies a stratospheric valuation: it is the extranational life infra and connection market maker for the otherwise hyperalienated worker of tomorrow -- where corporations have surpassed "legacy" social and political units of organization and identity.

Where the valuation came down to earth was when public investors evaluated We as an incrementally better office rental company, and didn't buy the idea of We as creating and monetizing a religion.


“Tribe as product” already exists, most successfully in professional sports teams: “I am a die hard Eagles fan.”


PS: can't click the last button on an iPhone SE via the iOS HN app.

