
What is the best local model for Cursor-style autocomplete/code suggestions? And is there an extension for VS Code that can integrate a local model for such use?

I have been playing with the continue.dev extension for VSCodium. I got it to work with Ollama and the Mistral models (codestral, devstral and mistral-small). I have not gone much further than experimenting yet, but it looks promising: entirely local and mostly open source. Even so, that's further than I got with most other tools I tried.
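
If it helps anyone, here is roughly what my Continue config looked like for pointing both chat and tab autocomplete at Ollama. The file lived at ~/.continue/config.json for me; field names and file location may differ between Continue versions, so treat this as a sketch rather than gospel:

    {
      "models": [
        {
          "title": "Mistral Small (local)",
          "provider": "ollama",
          "model": "mistral-small"
        }
      ],
      "tabAutocompleteModel": {
        "title": "Codestral (local)",
        "provider": "ollama",
        "model": "codestral"
      }
    }

The models have to be pulled into Ollama first (e.g. "ollama pull codestral"), and after that everything stays on the local machine.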

> Durov travels freely to and from Russia

This is incorrect. Check your facts. They're made up.


It's funny that nobody mentions one such institution and how it was fixed: Elon Musk fired 80% of Twitter's employees, and it's better than ever.


> it's better than ever

As a product, I think you're in the minority if you think that. As a business, you are delusional if you think that.


Honestly, this is more of a PR stunt to advertise the Google Dev ecosystem than a contribution to open source. I'm not complaining, just calling it what it is.

Barely an improvement over the five-month-old Mistral model, with the same 8k context length. And this release comes after their announcement of Gemini 1.5 Pro, which had an enormous increase in context length.


Who cares if it's a PR stunt to improve developer good will? It's still a good thing, and it's now the most open model out there.


How is it more open than Mistral with Apache 2.0? Google wants people to sign a waiver to even download it.


Fair enough; that was more directed at LLaMA and derivatives, which have commercial restrictions.


How exactly is it the "most open model" ?

It's more like a masterclass in corporate doublespeak. Google’s "transparency" is as clear as mud, with pretraining details thinner than their privacy protections. Diving into Google’s tech means auctioning off your privacy (and your users' privacy) to the highest bidder.

Their "open source" embrace is more of a chokehold, with their tech biases and monopolistic strategies baked into every line of code. Think of it as Google's way of marking territory - every developer is a fire hydrant.

These megacorps aren’t benevolent patrons of open source; they're self-serving giants cloaking power grabs under the guise of "progress".

Use these products at your own risk. If these companies wanted to engage in good faith, they'd use Apache or MIT licensing and grant people the agency and responsibility for their own use and development of software. Their licenses are designed to mitigate liability, handcuff potential competitors, and eke every last drop of value from users, with informed consent frequently being an optional afterthought.

That doesn't even get into the Goodharting of metrics and actual performance of the models; I highly doubt they're anywhere near as good as Mistral.

The UAE is a notoriously illiberal authoritarian state, yet even they have released AI models far more free and open than Google or Meta. https://huggingface.co/tiiuae/falcon-40b/blob/main/README.md

If it's not Apache or MIT (or even some flavor of GPL), it's not open source; it's a Trojan horse. These "free" models come at the cost of your privacy and freedoms.

These models aren't Open or Open Access or Free unless you perform the requisite mental gymnastics cooked up by their marketing and legal teams. Oceania has always been at war with Eastasia. Gemma is doubleplusgood.


You said a lot of nothing without actually saying specifically what the problem is with the recent license.

Maybe the license is fine for almost all use cases and the limitations are small?

For example, you complained about Meta's license, but basically everyone uses those models and completely ignores it. The weights are out there, and nobody cares what the fine print says.

Maybe if you are a FAANG company, Meta might sue. But everyone else is getting away with it completely.


I specifically called out the claims of openness and doublespeak being used.

Google is making claims that are untrue. Meta makes similar false claims. The fact that unspecified "other" people are ignoring the licenses isn't relevant. Good for them. Good luck making anything real or investing any significant amount of time or money under those misconceptions.

"They haven't sued yet" isn't some sort of validation. Anyone building an actual product that makes actual money that comes to the attention of Meta or Google will be sued into oblivion, their IP taken, and repurposed or buried. These tech companies have never behaved otherwise, and to think that they will is willfully oblivious.

They don't deserve the benefit of the doubt, and should be called out for using deceitful language, making comparisons between their performative "openness" and actual, real, open source software. Mistral and other players have released actually open models and software. They're good faith actors, and if you're going to build a product requiring a custom model, the smart money is on Mistral.

FAANG are utilizing gotcha licenses and muddying the waters to their own benefit, not as a contribution to the public good. Building anything on the assumption that Meta or Google won't sue is beyond foolish. They're just as open as "Open"AI, which is to say not open at all.


> Anyone building an actual product that makes actual money that comes to the attention of Meta or Google will be sued into oblivion

No they won't and they haven't.

Almost the entire startup scene is completely ignoring all these licenses right now.

This is basically the entire industry. We are all getting away with it.

Here's an example: take Llama.

Llama originally disallowed commercial use, but the license was changed much later.

So, if you were a stupid person, then you followed the license and fell behind. And if you were smart, you ignored it and got ahead of everyone else.

Which, in retrospect, was correct: the license now allows commercial activity, so everyone who ignored it in the first place got away with it and is now ahead of everyone else.

> won't sue is beyond foolish

But we already got away with it with Llama! That's already over! It's commercial now, and nobody got sued! In that example, the people who ignored the license won.


The nice thing about this is that the calculus is in favor of startups, who can roll the dice.


That’s about the point of having a developer ecosystem, isn’t it?


Mistral 7B v0.2 supports 32k context.


This is a good point actually, and an underappreciated fact.

I think so many people (including me) effectively ignored Mistral 0.1's sliding window that few realized 0.2 instruct is native 32K.
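
One way to check, rather than arguing from memory, is to read the config that ships with the weights. A minimal sketch using the transformers library (the model ID and field names are what I believe Hugging Face publishes for this model, so verify against the actual repo):

    from transformers import AutoConfig

    # Fetch only the model config (no weights) from the Hugging Face hub
    cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

    # max_position_embeddings is the context length the model is configured for;
    # sliding_window shows whether sliding-window attention is enabled (None if not)
    print(cfg.max_position_embeddings, cfg.sliding_window)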


Mixtral 8x7B has 32k context.

Mistral 7B Instruct v0.2 is just an instruct fine-tune of Mistral 7B and stays with an 8k context.


'GPT assistant' is now associated with OpenAI's GPTs for me. Very confusing.


OpenAI was just denied its trademark application for "GPT".

There are many other GPTs from outside OpenAI, such that "GPT" is the contemporary "escalator": a once-proprietary name that became generic.


Edit: changed to say AI now


That's fair. We've considered a few other terms such as "personal AI," "browsing copilot," "personal knowledge assistant." Do any of these feel more intuitive?


Yep, just use GPT-4


Apple has a fruit logo


Hell yes, brother


And software to intercept air targets via subscription payments, I guess?


While the unfolding situation at OpenAI is certainly complex, one can't help but question the role of D'Angelo in this scenario. Given his entanglement in AI development and his recent ventures that place him in direct competition with OpenAI, his continued presence on the board raises legitimate concerns. The essence of board membership, especially in a field as intricate and rapidly evolving as AI, should be rooted in unconflicted support and clear alignment with the organization's goals.

If the circulating rumors hold any weight—those hinting at his involvement in the CEO's abrupt dismissal—then it only compounds the argument for a reconsideration of his board role. Entrepreneurship, while often a game of strategic moves, must also adhere to a code of ethics. If these stories of past betrayals among peers are more than just whispers, it does raise questions about the integrity of leadership and decision-making within such influential tech circles.

In the interest of transparency and maintaining trust within the tech community, perhaps it's time for D'Angelo to reevaluate his position and possibly step down, ensuring OpenAI can navigate its path without potential conflicts.


I have trouble finding respect for Adam, knowing he let Quora gradually transform from something promising into a financial sellout that trashed its quality to the point that it can no longer be taken seriously. I don't understand how someone can let that happen.


Yeah, I was thinking last night, after hearing about his involvement in this: does anyone else realize that Quora is basically unusable now? I mean, it went from such a noble cause, answering all of humanity's questions, to literally tricking end users into clicking ads...


It's not just that: you are shown answers to related questions while the answer to your actual question is further down the page. Possibly to increase stickiness? It doesn't make sense.


This is the most confusing thing any time I've clicked a Quora link. I have no idea what I'm looking at. Who designed that? It's crazy how bad it is. They probably got a promo for it lol


It used to be good, even great, but nowadays I avoid Quora like the plague due to its massively confusing interface.


Quora is the text version of the Chumbox.

A chumbox is slang for the part of news-site web design you... try to ignore. These sites managed to optimize their design for the <10% of users who will click just about anything.


> I mean it started from such a noble cause to answer all of humanity's questions to literally tricking end-users into clicking ads...

I'm pretty sure you've just described the web.


Cough cough Reddit.


I'm honestly curious how Quora is still in business. I wouldn't say I was a heavy user, but I used to use it fairly frequently, and then it just became a minefield of walls and dark patterns such that whenever I get a Quora result in search I just go somewhere else. I used to hear people talk about Quora but now I never hear any sort of discussion about it from tech types, besides jests and disdain.

I completely realize that my experience may not track that of your average Joe, but I read an article from this summer that said Quora was planning an IPO, and I'm just trying to wrap my head around how there would be any decent valuation.


Their SEO is very good. "I got it from Quora" is still an answer that carries some weight. And a lot of those answers have a long shelf life: easily 4+ years, even 20 in the case of, say, an answer about Steve Jobs.

So you have a site, with a lot of valuable links, on a platform where anyone can & will create more, with widespread name recognition. That's valuable.


Their SEO was/is technically extremely poor (per Google's rules) and should have gotten their content largely banished from Google. They were intentionally violating one of Google search's primary SEO rules: do not show the Google bot and users a meaningfully different site. Quora was doing exactly that, providing a very different experience to the non-signed in search user (arriving to the site via search results) vs Google bot. Google let Quora get away with SEO murder (speculation on HN has always been that it was due to the close relationships in SV), which is the sole thing that has kept Quora propped up (otherwise its traffic would have been properly obliterated for its blackhat SEO practices).


I always assumed Quora was a classic example of a company that raised too much money.

They had a fantastic, high-quality service up to around 2015, but by 2017 they had raised at least $226m at a valuation that at one point hit $1.8bn: https://techcrunch.com/2017/04/21/uniquorn/

Justifying that valuation requires a metric ton of growth hacking that appears to be incompatible with maintaining a high level of quality on a Q&A site.


Quora is my biggest internet disappointment by far. I used to spend a lot of time there between 2013 and 2017. It wasn't perfect, but there were great questions, great authors to follow, and the feed consistently served me quality content.

Then they flipped the switch on the new algorithms / monetization strategies and it all turned to shit. All of the thoughtful writers were replaced by whoever could churn out the most sensational (and usually blatantly false) answers. Whenever Google leads me there now I am physically pained by the awful user experience and absolute drivel in the top answers.

So yeah, not impressed at all with D'Angelo


One of my favorite use-cases for Kagi is to block Quora from all of my search results.


A classic case of enshittification. Quora is the new Yahoo Answers.

The funny thing is that there will probably be a new one soon.

Or is it the AI app he’s already building?


Wasn’t this the game?

Trick people into generating top-quality content for you. Once you have enough, put all of it behind a paywall. And use all that content as your chips in the new AI game.


Also interesting that D’Angelo was the one who was negotiating with OpenAI leadership as representative of the board,[1] and the employee protest letter states that the board "informed the leadership team that allowing the company to be destroyed would be consistent with the mission" during these negotiations.[2]

[1] https://www.bloomberg.com/news/articles/2023-11-20/openai-s-...

[2] https://www.wired.com/story/openai-staff-walk-protest-sam-al...


Note that "allowing the company to be destroyed" is outside of the literal quote (which is just "consistent with the mission"), so "destroyed" may be an appraisal by the leadership that the board doesn't share.


Some speculate that Helen and Adam were on the verge of being forced out by Sam: the former because new investors would need a board seat for their own person, and the latter due to a conflict of interest (via Poe, after the GPT Store was launched on DevDay). Once Ilya bought into their concerns, Adam and Helen, without informing any stakeholder (including Microsoft), moved swiftly and decisively before any director changed their mind (as Ilya eventually did).

https://twitter.com/alexandrosM/status/1727026942560330172 / https://archive.is/l89JO


If you're an investor in OpenAI's for-profit unit, the very clear lawsuit target is D'Angelo. His extreme conflict-of-interest problem is a personal bankruptcy waiting to happen, given the value destruction OpenAI has just (probably) suffered. Investors should be promptly slapping a multi-billion-dollar lawsuit down on the table: resign or else.

Then stop messing around and fully split off the for-profit unit, run by Altman. They're in perpetual conflict. The non-profit can liquidate its ownership stake gradually, which should provide tens of billions of dollars of funding for a very long time, and it can still pursue its mission of safe AGI. The for-profit can then be unleashed to fully pursue commercialization related to GPT without hold-backs.

The absurd fantasy of the dual OpenAI missions co-existing in peace needs to die. They can't co-exist peacefully within one body, everything about their requirements to thrive going forward puts them at odds with each other (from speed, to compensation, to funding requirements, to management approach).


Nothing is absurd about expecting a company, run by a group of people, to uphold the values and mission its existence was predicated upon, down to the name of the company itself. Especially in such a short time frame.


Of course it's absurd: it was all a game of playing pretend. That's the fantasy part.

It hadn't been open AI for a long time.

The entity that you're referring to no longer exists. They can revive it by splitting these inherently conflicting entities apart.

With tens of billions in funding via stock liquidation they can go back to pursuing actual open AI and have a lot of money to throw at doing so, without concerns for conflict with a for-profit mission in relation to a funding source.

Today, the mission of being open with their AI tech is in conflict with the funding base: GPT commercialization. At least with how they have been operating for years now. There's no fixing that in the current structure.


All great points. It's wild that this small non-profit board had so much power with so little at stake for themselves. That's a typical feature of a non-profit board, but in this case the entity wasn't a typical non-profit.

To your points, such a split makes too much sense, and the ship probably sailed when the employees showed they have no loyalty or responsibility to the organization itself.


This seems the simplest explanation and, relevant to the post, cause for a lawsuit. Helen could have been principled about the mission, but it's difficult to say Adam was, given what he stood to gain.


Given that this debacle turned out NOT to be about AI safety (as Emmett Shear confirmed), the whole "slowing down" angle becomes about interests.

Who else had the commercial/totally-for-profit interest in slowing OpenAI down, and was in a position to do something about it?

It is hard to see what unfolded as anything but active sabotage.


Indeed, this is most puzzling, and there is now a 33% chance that he's the one who started all this. If that turns out to be true, the chances of a successful suit go up considerably.


Who cares who started this? All four are equally responsible. You can't vote and then say it wasn't your fault (unless you were literally coerced).


That might cause the board to split up further; there is already one defector from the 'gang of four'. If D'Angelo started it, the other two might say they were pressured by him and come out publicly against him (like Ilya Sutskever did).


I'm starting to wonder if it was a gang of four, or if it was a gang of three who used one of the cofounders (Ilya), turning him against his cofounders to get control of the company.


Likely the second, or it was a gang of one who used the inexperience of the other board members to get them to act against their own long-term interests. That does not absolve them: they are still board members and they should own their decisions.


That's a big reason why VC's prefer co-founders rather than solo founders.


I know it's nuts, but is there any chance this was orchestrated to some degree? If someone wanted to throw off the non-profit structure, they couldn't have done a better job than this. I mean, it really works out well for Microsoft.


You forgot the "It's important to keep in mind" part.


Perhaps ironic? Your wording sounds exactly like it was written by ChatGPT (especially the last sentence)!


First thing I thought of, but I didn't want to be the first one to throw out accusations. That comment has the same unnatural writing patterns that ChatGPT uses. Something about the style makes me feel uneasy when reading it.


What's funny is that, as content written by AI becomes normalized across media, which it inevitably will, people will necessarily imitate the style of AI if they want to appear serious.


:)


And a smiley face exactly like ChatGPT would use, too! ;)


Did Sam Altman have any conflicts of interest, between OpenAI's charter and his other investments?


The answer to that one is indeed YES.

It would appear that there are different rules when profit is involved, which ironically is exactly what OpenAI's nonprofit parent was intended to prevent.


Yes, it would be funny if it weren't tragic: the exact scenario they wanted to prevent materialized, but from the one angle where they weren't covered. I predict this is the last time that a non-profit is put in charge of a for-profit that has investors. That mistake is not going to be made again. And it also shows how risky it is to put together new governance structures, even if they seem like a good idea at the time. Because the fig leaf suddenly turned into a hammer, the size of which would be hard to replicate in a normal governance setting. Unparalleled destruction with zero regard for the consequences.


Purely hypothetical, but this presents a tiny angle from which Adam could come out of this as a reasonable person... if all of this was basically him saying "I have to go. You have to go. We are both compromised and our positions are now against the charter," then maybe Sam was fighting against that because of the team he built and loves leading. This is one reason I would accept Adam's role: if it is just him hanging on to make sure Sam is out before leaving himself.

But if that's the case then just say it.


It's a stretch, but it could be true. Unlikely, though. And if it were true, I would have expected Sutskever to spill the beans by now.


> the one angle where they weren't covered

Not sure what this is from the thread context.

> Unparalleled destruction with zero regard for the consequences.

Isn't this the size of the hammer needed, in case there's a danger of misaligned runaway superintelligence?


> Isn't this the size of the hammer needed, in case there's a danger of misaligned runaway superintelligence?

Possibly, but then you are better off not developing the thing in the first place.

And you should only break the glass in a break-the-glass moment.


If a single "donor"/investor owns you, you cannot enforce anything, nonprofit or not.


Which investment?


Absolutely, but that seems to have been a prerequisite for joining the board of OpenAI.


Yes, but that's true of most boards these days, at least by the standards of a layperson.


D'Angelo is mad that he weaponized web design against Quora users for years because he would have gone bankrupt otherwise, and now he is going bankrupt anyway. Adam is at war with god, not OpenAI.


> Adam is at war with god, not OpenAI.

Where does the phrase "at war with God," denoting opposition to the fundamental laws of reality, come from?


Stab in the dark here, but I would not be surprised if such a phrase is untraceable in origin due to long usage. There are many ancient stories where men battle with gods, and those battles are analogous to fighting nature, especially since many gods are thought to control nature and to have their moods characterized by it. We can even find references in the Bible, but I don't read Hebrew, so I don't know if those are direct translations, and I haven't bothered to check changes across versions. But I suspect that this phrase, in some form or another (battling with god, fighting god, etc.), is rather old and not even unique to Westerners. I wouldn't be surprised if it is older than written language. But I'm not a linguist or historian, and this is pure conjecture; just someone who enjoys language and mythos as hobbies.


My guess would be that it's from Kate Bush's Running Up That Hill (A Deal with God), which leans on the story of Sisyphus. She never actually uses the word "war", but does use various war metaphors ("see how deep the bullet lies", etc.).


To me, it's a "Deal with God" not being at war with God. Deals with gods and deals with devils seem to have something in common, in a way which is topical to discussing OpenAI.


That's the perennial debate about the song. Does Bush want to swap places with her ex-lover, or does she want to swap places with God so that God can feel the pain of continually running up that hill? My take has always been the latter, given the Sisyphus metaphor (Sisyphus was condemned by a god to keep rolling a boulder up a hill). It's not war, no, but there is clearly some major hostility.


For me personally, likely Memnoch the Devil by Anne Rice.

One of those random books you purloin in youth, and a total departure from her standard narrative fiction fare. It's a retelling of the original cosmic myth from the... rebellious perspective.


So OpenAI has no conflict of interest clause?


For those keen on following this situation as it evolves, consider subscribing to keywords 'OpenAI' and 'Sam Altman' via my Telegram HackerNews alerts bot to receive related stories. The bot is completely free and open-source. My aim here isn't financial gain or power, but rather to offer a useful tool for the community (https://github.com/lawxls/HackerNews-Alerts-Bot).

