Hacker News

I can’t believe people are cheerleading this move

* Tigris is DOA - partly because it would piss off the MSFT board, but mostly because the SEC would arrest Sam, assuming he's an officer at MSFT. He could maybe be a passive investor, but that's it

* People really think many OpenAI employees will give up their equity for whatever mediocre stock grant their level at Microsoft gets? And 1% raises, no bonus some years, and board-forced headcount reductions?

* Sam has even less power with the board, and the board in a 3 trillion dollar corporation would be even more risk averse than the OpenAI one

* There was a ton of fan fiction online yesterday about Satya forcing a move on the board. This was never really a thing. In terms of keeping power to make sure your money is used correctly, he made one of the worst investments in the history of SV: $10B for zero votes and no power to change votes




We witnessed insane value destruction over the weekend. Every OpenAI employee is aware that everything they have and everything promised to them is wiped out. Their best chance is that Sam brings them to that new company within MS, where they will get the same or better deals than they had. And they will probably deliver within 6m what OAI has now, and what spooked Ilya to launch this coup.

This was a brilliant power move by Satya. I don't see any hope for OpenAI after this brain drain.


Yeah, just like Reuters mentioned:

"The OpenAI for-profit subsidiary was about to conduct a secondary at a $80 billion+ valuation. These 'Profit Participation Units' were going to be worth $10 million+ for key employees. Suffice it to say this is not going to happen now," chip industry newsletter SemiAnalysis said.

Insane self own by OpenAI


That sounds like exactly the kind of thing the board of a non-profit should be preventing.


As an employee of a company, I trade my time and effort for some amount of rewards. I enter deals with the expectation of stability from the other party.

My unfounded Internet opinion: OpenAI just removed or reduced a large source of reward and has shown fundamental instability. OpenAI's success is very much tied to employees and compute.


If your goal is to work for a profit-sharing company, then don't work for a non-profit.


Plenty of non-profits pay their employees a lot of money. There is nothing stopping non-profits from paying exorbitant sums, and executives often do get paid exorbitantly. Non-profit means they don't pay out to investors, but the structure is often used as a grift: get people to work for less so the top people can make more money and fundraise for their pet projects.


The employees work for the for-profit part of OpenAI.


That for-profit is owned by a non-profit organization. It seems like a lot of the employees are chasing money and forgetting that it's fundamentally not trying to maximize profit. Of course, Sam seems to have perverted its mission into exactly that (serving as the latest high priest of mammon, like Elias served Lillith).


Yeah I mean, who cares if ASI kills us all as long as a couple hundred of the most well-paid people on the planet get even more rich.

It's insane to see all these takes when we don't even know what caused the loss of trust in the first place.


No one sincerely believes they have, or will soon achieve, AGI. Neither can they believe that the CEO can push them to achieve it and forcefully release it, whereas they would responsibly develop it (whatever that may mean) without him around.


Great summary.

We are very complicated creatures and things get out of control, both internally and externally. My armchair opinion is that they started to believe that all of it is so advanced and important that they lost a bit of their grip on reality. Sutskever imagining a planet covered with data centers and solar panels shows me that [0]. Every single person is limited in their view; I get a strange feeling listening to him in this video. Also, they are not the only people left on this planet. Fortunately, the task of creating AI/AGI is not a task for a pack of ten trying to save us from harm. Still, it may and probably will get rough. /rant

[0] https://www.youtube.com/watch?v=9iqn1HhFJ6c


Your second paragraph is pretty ironic given your first.


> Yeah I mean, who cares if ASI kills us all as long as a couple hundred of the most well-paid people on the planet get even more rich.

creating ASI for money seems particularly asinine as the machine overlords won't care terribly much about dollars


How do you know what ASI will value?


As an employee of a Bay Area tech company, presumably, where a mid-level IC can make as much money as a C-suite executive in some less prominent industry


Well, they're almost certainly 'not profiting' right now.


I agree, Satya is an operator. He turned a mess into a historic opportunity for Microsoft. They'll get a significant chunk of some of the best AI talent on the planet. All the heat-seekers will go there. That, plus the IP they already have, will turbocharge Microsoft.

OpenAI, in contrast, will become more like an academic research unit at some university. Those who prefer life slow and steady will choose to stay, making tech for effective altruists.


They make nothing open source, so I'm not sure why effective altruists would join it.

If they can't predict and contain the consequences of their own actions, how can they predict and contain their claimed future "AGI"?


Is there any reason to assume open source is a prerequisite for effective altruism?

Open source doesn’t necessarily imply good for humanity, for example distributing open source designs for nukes would probably be a net negative.

And even if it did, effective altruists wouldn’t need to prioritize the benefit provided by openness over all other possibilities.


I don't think relying on a proprietary license to make sure enemies can't get AI for nukes is a sane security model. Something else needs to give.


Operator?


Operator in this context refers to someone who successfully runs a business that someone else founded. Often the visionary founder is not good at the nuts and bolts of running and growing a business so they hand off the reins to someone who is. Think Steve Jobs vs Tim Cook.


It doesn’t mean that at all, it’s slang

https://www.urbandictionary.com/define.php?term=operator


For a decade, "operator" in Silicon Valley has been used exactly as the commenter above describes it.

Which creates separation from "investor" or "engineer" or "founder" or "PM" or "sales" or "finance". Somebody has to make stuff happen in organizations. And the people who are good at it (Satya is excellent) set their organizations up for unique success.

And yes, ex-special forces people roll their eyes at it. Which is appropriate! But the usage is now totally distinct.


I learned the business context before the spec ops context and honestly the former makes way more sense to me than the latter.

A business operator is like a machine operator. You're pulling levers on a machine that someone else built while optimizing and tweaking to get the best performance out of that machine as possible.


I was quite wrong about this, but I still don't think it's especially relevant that someone else built it. Would you never call Zuckerberg an operator? And if someone else built it, does that mean you had no role in building it? That would be the exception, but it would be analogous to owner/operator in general business parlance.

I think now, having tried to fill in my missing knowledge, that it comes from the same root as DevOps, which I erroneously thought was related to SpecOps. DevOps comes from IT Operations which comes from Operations Management, which yes, is like a machine operator. https://en.wikipedia.org/wiki/IT_operations https://en.wikipedia.org/wiki/Operations_management

Edit: here's a post about "Founder Operators", which suggests that if you just heard "operator" you might assume they're not a founder, but also that the term can be applied to those running businesses they founded: https://startupceo.com/2023/01/5-things-successful-founder-o...


It seems like underselling the successful part and overselling the part about not being the founder, but I can see it's a slang term. Thanks.

And yeah, I was wrong about it being the same term. I did imagine a different use; I was also thinking of "smooth operator". Apparently I was unfamiliar with the term in tech.


It has a meaning in a business context apart from a slang term


> And they will probably deliver within 6m what OAI has now, and what spooked Ilya to launch this coup.

Do you realize how hard it is to make something like GPT4? I think all the non-AI people posting about this all over X/HN have this idea that if you move everyone over you just "remake the AI" as if this were a traditional API.

There is no way MS catches up to OAI in a short period of time. During that time the rest of the market will be pressing the accelerator as hard as possible.

I think MS is in a really, really shit spot right now.


They have access to the weights per their agreement with OpenAI; I don't know if that allows them to use it as a starting point. They also will have access to several of the people who did it. It's insanely hard to do the first time, but probably just hard to do the second time, after you already know what worked before.


Sure, but what does "hard to do" entail in terms of timeline? In my experience nothing complex can launch in 3 months at a big corp. 6 months would be aggressive; a year seems the most likely. But where will competitors be in a year?


I wonder if that agreement also has an insurrection clause saying that if you benefit from this, you must wipe your memories clean of any of that shared IP.


I mean, if MS literally gets:

- all the code that created GPT-4

- all the weights for GPT-4

- all the people who created both of those things

then, y'know, I like their odds.

They have access to the first two already, per their licensing agreement with OAI; by the end of the week, they may very well have the third.


Value as in money, or value as in values? Some people in the deal value more than just the former, like the people trying to keep OpenAI at least somewhat non-profit.


Granted, now MSFT basically has another research arm like Google Brain/FAIR, but whether their "brain trust" can equal Yann LeCun's or whatnot, who knows. Altman and Brockman are on the MBA side of things; the ML magic was Ilya's. If they can recruit a bunch of mini-Ilyas over from OpenAI, maybe they have a chance at regaining the momentum.


Ilya has backtracked and signed the letter saying he would also leave for Microsoft if the board doesn't resign.


In one fell swoop he demonstrated he has neither conviction nor any foresight, ouch. I'm starting to believe this was just an ill-thought-out, emotional ego reaction by Ilya. Dude is like Walter White, he just "had to be the man".



Wtf? Isn't he on the board?


Ilya signed a letter asking 4 members of the board to resign, including Ilya himself. He even posted a public apology for his actions on X https://twitter.com/ilyasut/status/1726590052392956028

Yes, this is probably the biggest self-own in corporate history.


From Satya's tweet, Sam's new division/subsidiary is going to run more like LinkedIn or GitHub, and OpenAI has pretty explicitly just declared that it doesn't like making money, so I don't think the comp is gonna be an issue. For now, if Sam wants to make product and money and Microsoft wants the same thing, then having power over the board doesn't really matter. And Microsoft has all the IP they need. That's a better deal than equity, given who is in control of OpenAI now; they're actively not focused on profit. Whether or not you think this is a good outcome for AI or mankind, Microsoft absolutely won. Plus, the more people they pull from OpenAI, the less they have to deal with OpenAI; everything is in house.

Edit: god damn - even the guy that pushed Sam out announced he wants to resign if Sam isn't brought back what the hell


> Edit: god damn - even the guy that pushed Sam out announced he wants to resign if Sam isn't brought back what the hell

It reads like an orchestrated coup against the other three members of the board. Convince the board to do something you imagine will get this kind of blowback. Board is forced to resign due to their fiduciary obligations to the non-profit. And now you control the entire board.


> fiduciary obligations to the non-profit

What fiduciary obligations does a non-profit have? Isn't the board pretty successful already at not making money?


Fiduciary isn't about money, it's about the best interests of the non-profit they are governing. If staying on the board means a harm to the goals of the non-profit charter, then they have a duty to resign.


Fiduciary obligations need not be profit-seeking. They often - perhaps especially - involve keeping the lights on for the chartered company.


OAI employees have no equity. They have a profit-sharing right, and the board is clearly not interested in profit.

MS is risk-averse in every way except for one, which is to blow up Google. They will set the world on fire to do that.


> MS is risk-averse in every way except for one, which is to blow up Google.

To me this is exactly why I'm skeptical of Microsoft's strategy here. They seem convinced that their success at unseating Google is assured; meanwhile, Google's share price has barely flinched. Also, the way this has played out feels desperate, an attempt to keep the whole thing going. Wouldn't Microsoft want at least some clarity about why the board up and fired the CEO on the spot before doubling down and bringing him into a position of power within the company?


OpenAI is doomed; in fact, it has ceased to exist. It's an empty shell, and its resources provider is now its biggest competitor.

But I doubt MSFT will win this round.

1/ We still don't know why Sam Altman was fired; does MS know, or think they know?

2/ It will take the new team at MS a long time to recreate what they had at OpenAI (what does "MS has a perpetual license to all OpenAI IP" actually mean and entail, legally speaking?); during that time anything can happen.


Exactly. I'm very surprised Nadella would take this kind of risk. It seems extremely cavalier not to investigate what happened before quickly going all in on hiring the entire team. You risk having to do a very embarrassing about-face if something serious comes out; it could even lead to Nadella himself having to resign.


Nadella isn’t getting his info from HN threads.


Yeah, but OpenAI was Microsoft's ace in the hole. Imagine if Nadella sat on his hands and waited while OpenAI burned down. In two weeks the narrative shifts from "Microsoft locked down the best AI people in the business" to "Nadella burned $10B for nothing".

You don’t have to do a ton of diligence to know you want to keep the people you bet the farm on. If it turns out that Sam is actually a massive liability, Nadella will deal with that after this crisis passes.


When you’re in a hole stop digging


> OAI employees have no equity.

OpenAI employees have no equity? Well, then where exactly is that $86B of "existing employees' shares" coming from?

> ChatGPT creator OpenAI is in talks to sell existing employees' shares at an $86 billion valuation, Bloomberg News reported on Wednesday, citing people with knowledge of the matter.

https://www.reuters.com/technology/openai-talks-sell-shares-...

https://www.reuters.com/technology/openais-86-bln-share-sale...

A random OpenAI eng job page clearly states: "Total compensation also includes generous equity and benefits."

https://openai.com/careers/software-engineer-leverage-engine...


I believe OAI Inc employees and board members have no equity, but OAI LLC employees can have equity.


I will admit I haven't seen an OAI contract, but I have seen articles and multiple Levels.fyi posts citing about $600k equity/year (worth $0 right now, obviously).

So any idea how that translates into the profit sharing? They have no profit right now. Curious how employees get to that number.


I have not seen any of the employee contracts, so this is purely an educated guess that might be entirely wrong. There is a chart from Fortune [1] that spells out how the profit caps work; I am interpreting only what I have read, not the underlying documents. My guess is that the employee contracts spell out the cap structure, so the quoted equity value is based off those caps. If the profit cap for employees is correct, you could not value any "equity" off the last raise value; at best you could value the maximum cap available for those shares.

[1] https://fortune.com/2023/01/11/structure-openai-investment-m...
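To make that guess concrete, here's a toy model of a capped profit-participation unit. All numbers and the function itself are invented for illustration; OpenAI's real cap structure and figures are not public in this detail.

```python
# Toy model of a capped "profit participation unit" (PPU).
# Everything here is hypothetical: the share, the cap, and the profit
# figures are made up to show why a capped unit can't be valued
# straight off the headline valuation.

def ppu_value(profit_share: float, distributed_profit: float, cap: float) -> float:
    """A capped PPU pays the holder's pro-rata share of distributed
    profits, but never more than the cap."""
    return min(profit_share * distributed_profit, cap)

# With a hypothetical 0.01% share and a $2M cap:
print(ppu_value(0.0001, 50e9, 2e6))  # huge profits: payout capped at 2000000.0
print(ppu_value(0.0001, 10e9, 2e6))  # modest profits: pro-rata 1000000.0
```

However large the implied stake at an $86B valuation, the realizable value is bounded by the cap, which is why valuing the units off the last raise would overstate them.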


What equity at OAI? You mean the stake in profit sharing? Seems to me anyone who cared about their stake in the profit sharing would be fairly pissed off by the move the board made.

Investors loved Satya's investment into OAI; I'm not sure how we can qualify it as one of the worst investments in the history of SV.

And how can we even compare the risk concerns of MSFT with OpenAI's? The impression we have of OpenAI is that its risk concerns are specifically about the profit drive. From a business standpoint, LLMs have huge potential to reduce costs and increase profit in multiples.


We went from OAI employees flaunting "$1,000,000 per year salaries" to the New York Times to "what equity?" really fast.

This isn't personally against you, but they never had the $1M/year they flaunted when Sam the savior was their CEO.


I realize you have a bias against Sam Altman, but let's dig into your current statement. The NYT article you are quoting is, I believe, this one [1], which describes Ilya Sutskever making $1.8 million in salary. I am not sure exactly what you are trying to say, but from the beginning equity has not been part of the comp at OpenAI. Salary, as far as I know, is just that: the cash compensation, excluding bonus and stock.

I don't know exactly how employee contracts are drawn up there, but OpenAI has been pretty vocal that all the for-profit sides have caps, after which 100% goes back to the non-profit once the cap or the initial investment amount is hit. So I am not quite clear on what you are saying. Salary is cash, equity is stock, and there have always been profit caps on the stock.

My only point was that you made an argument about employees giving up their equity for "mediocre" MSFT stock. That is a misinformed statement I was clearing up for you: 1) MSFT has been doing amazingly as a stock, 2) employee equity has profit caps, and 3) employees who actually care about equity profit would most likely be more interested in joining MSFT.

[1] https://web.archive.org/web/20230329233149/https://www.nytim...


I'm referencing large PPU grants in OpenAI offers, with 4-year vests; they sure made it feel like regular employees were being given a chance to join now and become millionaires via their PPUs.

If this was never true, that’s on the OpenAI team that misled their engineers

https://www.businessinsider.com/openai-recruiters-luring-goo...

Their job postings even specifically mention "generous equity" - again, if there is no equity in the contract, that's OpenAI misleading its recruits.

https://openai.com/careers/research-engineer-superalignment


> Sam has even less power with the board, and the board in a 3 trillion dollar corporation would be even more risk averse than the OpenAI one

This is where I see the win. Newcomer’s concerns about Altman are valid [1]. It is difficult to square the reputation he nurtured as OpenAI’s CEO with the reckless lawlessness of his crypto start-up.

Microsoft knows how to play ball with the big boys. It also knows how to constrain big personalities.

[1] https://www.newcomer.co/p/give-openais-board-some-time-the


If OpenAI is beholden to Microsoft for investment, and OpenAI's license is exclusive to Microsoft, OpenAI has nothing to offer those who remain except mission and ramen. If OpenAI slows their own research, that impairs future profit allocation potential to remaining talent. Enterprise customers will run to Azure GPT, and Microsoft will carry the research forward with their resources.

This morning at OpenAI offices will be talent asking current leadership, "What have you done for me lately?"

Edit: https://news.ycombinator.com/item?id=38348010 | https://twitter.com/karaswisher/status/1726598360277356775 (505 of 700 OpenAI employees tell the board to resign)


OpenAI employees are already quitting en masse, in public, on Twitter.

Their pay is majority equity, so it's worthless now with a board that says it hates profits and money. OpenAI is probably worth 80% less than it was a weekend ago, so the equity pay is worth 80% less too.

Microsoft is perfectly willing to pay those typical AI salaries; Nvidia and Google are literally running recruitment drives on Twitter right now, and Apple and Amazon are probably looking to scoop up the leftovers. So Microsoft has to pay properly, and Sam will demand it, to get the OpenAI team moved over intact. There aren't that many core engineers at OpenAI, maybe 200-300, so it is trivial for Microsoft to afford.


Driving the narrative doesn't mean driving reality. It is clear that Sam and friends are great at manipulating the media. But this is a disaster for Microsoft, and the CEO damn well knows it. It is also a personal disaster for Altman, and probably not a great move for those who choose to join him.

Time will tell if the OpenAI non-profit vision of creating safe AGI for the benefit of humanity can be revitalized, but it really does look like all involved are basically acknowledging that at best they were fooling themselves into believing they were doing something on behalf of humanity. Egos and profit seeking took over and right now the ethos which they championed looks to be dying.


Why is this a disaster? They managed to acquihire the founders of a $90B company, probably the most important company in the world until last Friday.

Seems like a huge win to me. They can write off their entire investment in OAI without blinking; MS farts out $10B of profit in about a month.


They acquired two of the founders least responsible for the actual tech. They made a huge bet on OpenAI to produce the tech, and that relationship is going down the drain. Watch the market today, the next week, the next month, the next six months, and it will tell you what I'm saying: this is a disaster for MS, and they damn well know it.


> They acquired two of the founders least responsible for the actual tech

Microsoft also “has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights.”

If you’re a scientist, OpenAI is a fine place to be, though less differentiated than before. If you’re an engineer more interested in money and don’t want the risk of a start-up, Microsoft seems the obvious path forward.


Based on credits in the GPT-3 and GPT-4 papers, I think the team that follows Sam and Greg are the main drivers of the tech. Ilya is an advisor, more or less.


https://twitter.com/ilyasut/status/1726590052392956028

Ilya just said he will do everything he can to reunite the company. If that's the case, the easiest way to do it is to resign and join MS.


You're making the assumption that the technical folks won't follow him, and that's a pretty ridiculous bet at this point unless you've got some more data you're just not sharing.

Out of the gate the technical folks at OA had to be perfectly fine with Microsoft as a company given they knew all of the tech they were building was going to be utilized by MS carte blanche.

So now that their OA equity is staring down the barrel of being worthless, what's stopping them from getting a potentially slightly lower but still significant payday from MS directly?


The only technical person who matters here, the one who came from DeepMind and who is the world's top AI researcher, is sure as hell not going to follow him, since he's the reason Sam is gone.


You're right, I have no idea what I'm talking about, clearly people aren't going to leave and follow Sam instead of Ilya. Nobody at all... just 550 of 700 employees, nothing to see here.

https://twitter.com/karaswisher/status/1726598360277356775


> 550 of 700 employees

Including Ilya Sutskever who is (according to the posted document) among the 550 undersigned to that document.

It's pretty clear this is a fast-moving situation, and we've only been able to speculate about motivations, allegiances, and what's really going on behind the scenes.


You've nailed it. The excitement is going to be short-lived, imo.


Given that 500 employees are saying "either give us sama and gdb back or we are going to MSFT", I say Nadella won hard.


That's how it appears currently, but experience has taught me to be very careful about making snap judgments in these fast-moving situations. Nobody seems to know yet why he was actually fired. The popular theory is that it was a disagreement about mission, but something about that narrative feels off. Also, Nadella and Altman are both enjoying god-like reputations while the OpenAI board is being dismissed as clueless and as having made a stupid, impulsive decision, even though basic logic would tell you that a rational acting person would not do that. There's a lot of room for the pendulum of public opinion to swing back the other way, and it's clear that most of the most fervent supporters of Altman and Microsoft are motivated by money rather than truth.


Most human beings are motivated by money.


> a rational acting person would not do that.

Non-profit boards have no incentive to be rational.


Did you even research the basic facts?

Microsoft stock is up in the pre-market, because they basically got half of the OpenAI team for free.

The majority of top researchers at OpenAI are expressing solidarity with Sam and basically signalling they want to move too; just check out Twitter. That also includes the majority of the execs there.


Yes, low-volume pre-market moves on the back of a nonstop news flow always predict how things end up.


> the SEC would arrest Sam

SEC does not have the power to arrest anyone. Their investigations are civil.


Criminal charges can be filed due to SEC investigations. For example:

https://www.sec.gov/files/litigation/admin/2021/ia-5764.pdf


The cheerleaders are the LLM-AI-future true believers. I imagine they are the same people who were telling us last year about how NFTs would change the world.


I really don't get the comparison of NFTs to LLMs. Yeah, some hype-cycle idiots have redirected both of their brain cells to shilling useless startups that'll get obsoleted by minor feature additions to Bard or whatever in a year, but who cares about them? NFTs didn't do anything but enable fraud.

LLMs do stuff that has value. I can use RFdiffusion with motif scaffolding to create a fusion protein with units from enzymes that have no crystal or cryo-EM structures, with as high as a 5-10% success rate! That's absolutely insane! I only need to print 20 genes to have a chance of getting a variant that works. A literal orders-of-magnitude improvement. And if you don't have a good multi-sequence alignment for getting a good fold from AlphaFold? The pure-LLM-based ESMFold can fill in the gaps. EvoDiff is out here generating functional proteins with disordered regions.

Also, to bring it back to OpenAI: if I ask ChatGPT to write some code, it writes some pretty decent code. If I give it a bunch of PDFs and ask it to give me a summary, it gives me a summary. I don't buy the AGI end-of-the-world hype, so a shift that means more focus on developing useful tools that make my life easier, tools I'm totally willing to pay 20 a month for? Yeah, I'm down. Get this cool tech and product into a form that's easy to use and useful, and keep improving it!


To me, this sounds very similar to the over-hyped, exaggerated responses you'd get when someone criticized cryptocurrencies by saying they don't do anything:

- I'm literally drinking a coffee I bought with Bitcoin right now.

- I was able to send large sums of money to my grandma in another country while paying a fraction of the fees charged by banks.

- It's a stable store of value for people in volatile countries with unstable currency.

- It's an investment available to the small-timers; normally only the wealthy have these opportunities.

- It lets me pay artists for their art directly and bypass the corrupt middlemen.

This is a coding forum, so I have no idea what any of that biology mumbo jumbo means, but everything you mentioned about ChatGPT is conveniently missing a lot of details.

> write some code, it writes some pretty decent code

Is it trivial code? Is it code that shows up on the first page of any search engine with the same terms?

> it gives me a summary

Is it an accurate summary? Is it any better than just reading the first and last section of the report directly?


Dude, I'm talking about it being worth 20 bucks a month (which NFTs are not), not the hype-cycle nonsense. Just because you don't understand the scientific applications of protein folding, one of the most important problems in biology, doesn't mean it's mumbo jumbo. Ever heard of Folding@home? Man, Silicon Valley is ridiculous sometimes. But since apparently the accomplishments of coders don't count on this coding forum if they're in fields that web developers don't understand, let's focus on consumer applications.

In terms of writing code, yeah, it's pretty simple code. I'm paying 20 bucks a month, not 200k a year. I've found it really useful for diving into the open source codebases for papers (just upload the repo and the associated paper); academics write pretty garbage code and even worse documentation. It's able to easily and correctly extend modules and explain weird uncommented, untyped code (what exactly is xyz data structure? Oh, it's a tensor with shape blah where each dimension represents abc value. Great, saved me 2 hours of work).

For the summaries: uhh, yeah, obviously the summaries are accurate and better than reading the first and last sections. Spend the 20 bucks and try it yourself, or borrow someone else's account or something. It's especially useful if you're dealing with Nature papers and the like, from journals that refuse to give proper respect to the methods section and shove all the information haphazardly into the supplementary info. Make a knowledge base on both and ask it to connect the dots; it saves plenty of time. I don't give a damn about the flowery abstract in the first part of the report or the tryhard conclusion in the last part; I want the details.

It's comical that these useless hype bros can convince folks that a genuine computational breakthrough and a pretty solid 20-dollar-a-month consumer product with actual users must be bunk because the bros are shilling it, but luckily the Baker lab doesn't seem to care. Can't wait to play around with the all-atom version so I don't have to model a zinc atom with a guide potential and can just model the heteroatom directly in the functional motif instead! Not sure it'll work for the use case I have in mind until I try it out and print a gene or two, of course, but I'm glad folks are building these tools to make life easier and let me engineer proteins that were out of budget 3 years ago.


You see no use case for LLMs? I've successfully used GPT-4 to transcribe hundreds of pages of PDF documents with actual accuracy. That alone is worth something. Not to mention I can now literally ask questions of these pages and get cited, reasonable answers in a few seconds. This is amazing technology. How can you not see the use case?


Wow OCR. How innovative.


Accurate OCR that answers questions from source documents? Yes... very innovative. As an example, I have a real estate data company that provides zoning code analysis. Whereas before I would have to manually transcribe tables (they come in many different formats, with columns and rows that have no standard structure), I can now tell GPT: examine these images and output in my custom XML format, after giving it some examples. And... it does. I've fed it incredibly obtuse codes that took me ages to parse through, and it... does it. I'm talking about non-standard notation, handwritten codes, anything. It'll crunch it.
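A rough sketch of that few-shot workflow, assuming the official OpenAI Python SDK and a vision-capable model. The function names, the prompt wording, and the XML format are all made up for illustration - the actual schema is whatever your pipeline expects.

```python
# Hypothetical sketch: few-shot "table image -> custom XML" prompting.
# Inputs are assumed to be base64-encoded PNGs of zoning tables.

def image_part(b64_png):
    """Wrap a base64-encoded PNG as a chat-vision content part."""
    return {"type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64_png}"}}

def build_messages(examples, target_b64):
    """examples: list of (image_b64, xml_string) few-shot pairs.

    Returns a messages list interleaving each example image with the
    XML you want back, followed by the target image to transcribe.
    """
    content = [{"type": "text",
                "text": "Transcribe each zoning table into my XML format. "
                        "Follow the examples exactly."}]
    for img_b64, xml in examples:
        content.append(image_part(img_b64))
        content.append({"type": "text", "text": xml})
    content.append(image_part(target_b64))
    return [{"role": "user", "content": content}]

# Usage (needs an API key and the `openai` package installed):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o",
#       messages=build_messages(examples, target_b64))
#   xml_out = resp.choices[0].message.content
```

The point of the few-shot pairs is that the model infers your non-standard column/row structure from the examples rather than from a rigid template - which is exactly what classic OCR pipelines can't do.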

Tell me... how much would it cost to develop a system that did this with pre-GPT OCR technologies? I know the answer. Do you?


Did you make anything on those NFTs?


Nope. Crypto has no value and I've consistently avoided it


Microsoft can offer more if it wishes, no?


but they can't offer the whole "we are doing this for the benefit of humanity" lark

will researchers who were lured into OpenAI under this pretense jump ship to explicitly work on extending Microsoft's tendrils into more of people's lives?

(essentially the opposite of "benefit humanity")

no doubt some will


I don't think Microsoft cares about that crowd, since now without capital they can't really do anything anyway. The rest of the crowd that wants to make bank? Might be more appealing


> without capital they can't really do anything

Not a bad moment for a rich patron to swoop in and capitalise the non-profit. If only there were a billionaire with a grudge against Altman and a historic link to the organisation…


Why don’t they have capital?


I mean, if someone else wants to give them billions of dollars to make an AGI that they think will drive humanity extinct, while not commercializing or open-sourcing the tech they do have because they're scared of extinction, then be my guest. Usually I'd say I'm happy to be proven wrong, but in this case I'd just be confused.


> People really think many Open Ai employees will give up their equity to get whatever mediocre stock grant their level at Microsoft has? And 1% raises, no bonus some years, and the board forced headcount reductions?

What long term prospects do those employees have of raises, profit-sharing, equity, etc. at OAI if the board is willing to destroy value to maintain their non-profit goals?

I think the whole point of this piece is that OAI's entire organizational structure is built against generating a large startup valuation that would provide a large payout to employees and investors.

OAI has cash from ChatGPT revenue that it could use to offer competitive pay, but this entire situation is based around the board being uncomfortable with the decisions that led to this revenue, or with attempts to expand it.


Regardless of what anyone thinks about it - M$ was going to pay an entity they did not control $18 billion to be a player. Now they don't have to - they get it almost for nothing. Hats off to M$ - this is certainly one of the largest corporate missteps by a board in charge of such hot technology that I have ever witnessed.

The OpenAI board has taken the keys of Paradise and willingly handed them directly to the devil ;)


Nobody cares what you think about it either.


>more risk averse than the OpenAI one

At least it's not sci-fi-risk averse ;)


HN is filled with temporarily-embarrassed billionaires (and actual billionaires) who would very much like to preserve the notion that big corporations can move with impunity and quash any threat to investment returns. Reality is not aligning with that, so they've entered their mental safe pods (with the rose-tinted windshields).


OMG this ^


> no bonus some years,

What do you mean? MS employees are getting bonuses on a yearly basis, this year included.


I’m referring to Satya’s email from May saying there would be no raises and the bonus pool would be significantly reduced.

That’s fine for corporate employees, but OAI employees were promised mountains of money to leave Google/Meta; they might not be as happy.


They don't have to leave OAI.

OAI is a startup. All these OAI employees who were playing up their million dollar salaries should know that startups come with risk. How many times has it been said that equity is worth nothing until (and unless) it is?

In the grand scheme of the current IT economy, top of the queue for sympathy to me is not "people making seven digit salaries at startup who may have to put up with only making $500K+ at MSFT".



