
What does the full stack monorepo do?

It’s nothing special, not in the realm of anything technically outstanding. I only mentioned it to emphasize that it’s a slightly bigger project than the typical single-dev SaaS that is just a thin wrapper. We have workers, multiple white-labeled applications sharing common infrastructure, data-scraping modules, AI-powered services, and email-processing pipelines.

I’ve been on a steep learning curve over the last year, and even though you’d expect me to be biased toward vibe coding, I actually use less AI now to keep the codebase more consistent.

I think the two camps honestly differ in skill, but also in needs. Of course you’re faster vibe-coding a front-end than writing the code manually, but building a robust backend/processing system is a different tier entirely.

So instead of picking a side, it’s usually best to stay as unbiased as possible and choose the right tool for the task.


This is impossible in current glasses form factors. The display is additive.

If you are staring at a display that shows a video feed (VR/MR form factor), you are correct.


It is not impossible in the current glasses form factor. You add a film onto the glasses that darkens specific areas on demand.


How does this compare to a conventional CPU or GPU in terms of flops per second? Or does this not map onto a traditional architecture?


> “However, the real gift of this technology is not to computer science. Rather, it’s an enabling technology that allows scientists to perform experiments on a little synthetic brain.”

It’s probably not directly mappable in any reasonable way. At least not until a lot more people get their hands on it and explore the possibilities.


How about: it's flopPY, every second?

Seriously: brains are neither analog nor digital; they use spike trains, which are quite comparable to "clockless" digital circuits. To what we use in chips, synchronized tick-based calculation, they’re not really comparable. Judging by human, and especially animal, reaction times, one way to quantify it is to say each cell does about 10,000 flops per second. Technically the human head has two speeds: one type of cell with ~1,000 synapses that can calculate about 10 times per second (the "animal brain" or reptile brain, the cortex), and cells with ~10,000 synapses that fire on average once per second (the "human brain", the neocortex), which works out to roughly the same capacity. For this type of network it is known that more synapses mean more accuracy and more long-term planning, while faster firing means faster responses. Reptiles are stupid, but despite reptiles being cold-blooded, we mammals have zero chance of responding in time to an attack. It’s not happening. And yes, cats have a built-in trick that gives them a fighting chance, but it is only ever going to work in small animals: you need muscles powerful enough to throw yourself several body lengths into the air, and you need to be small and light enough to survive being thrown several body lengths without coordination and land without injury. Both are properties that humans, or any animal a meter or bigger, will never have. (And of course, that reflex is an incredible source of YouTube videos.)

The problem with spike trains is that it's tough to say whether a zero signal means anything. On the one hand, all zeros means the cell isn't using energy, and that is incredibly efficient (a nanowatt, not even multiple nanowatts). Everything about your mind is designed to almost always be all zeros. A spike means milliwatt-level power usage for 0.15-0.2 seconds after the spike. Given the number of neurons, our brain would rapidly cook itself if the average firing rate even just doubled; in fact, that is exactly what happens in epilepsy patients.

The above calculations only apply if all zeros means the network isn't doing anything. If that assumption is wrong, you should probably multiply those figures by the temporal accuracy of the spikes, which is incredible: 3-4 nanoseconds. So you'd have to multiply the figures by roughly 300 million, at which point the human mind is still 1,000 times stronger than even a full Stargate deployment. That sounds incredible, but it really isn't.
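
To put the same estimate in code (the neuron count is my own assumed figure, not something stated above; the per-cell numbers and the 3 ns factor are the ones from this comment):

    # Back-of-envelope only; 8.6e10 neurons is an assumption I'm adding.
    neurons = 8.6e10                    # assumed human neuron count
    per_cell = 1_000 * 10               # ~1000 synapses firing ~10x/s -> ~1e4 "flops" per cell
    rate_coding = neurons * per_cell    # ~1e15, roughly a petaflop-equivalent
    timing_bonus = 1 / 3e-9             # ~3e8 if 3 ns spike timing really carries information
    timing_coding = rate_coding * timing_bonus  # ~3e23 under that assumption
    print(f"rate coding: {rate_coding:.1e}, timing coding: {timing_coding:.1e}")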

If you want to see incredible figures, work out how many calculations natural selection does for a simple ocean-based bacterial species (assuming 1 cell division = 1 calculation; if you assume a more reasonable 1 allele combination = 1 flop, you're another 3-5 orders of magnitude higher). Bacteria do many orders of magnitude more "thinking" than all humans combined.


That was very interesting. Do you have relevant reading material to recommend, good sources?


This is the standard work on the subject: https://direct.mit.edu/books/edited-volume/2001/chapter-abst...


Well, it's a flop, so it compares. /s


Apple has had such a terrible off-and-on relationship with gaming. It never really made sense to me, as in some ways gaming is the future of media and the synthesis of so many art forms. Apple is so strong as a creative platform.

They will have to work really hard to incentivize developers and studios to invest in iOS and macOS and its bespoke low-level libraries.


Working hard won't be enough. Valve "worked hard" to get developers to support the OG Steam Machine, and the result was basically the same as on macOS: a bunch of lousy OpenGL ports that break on every system update. Even if you write a native Mac or iOS app, you're not guaranteed that the runtime will respect your work in the future. So most devs just don't bother. Windows wears the crown for not cutting off its nose to spite its face.

Valve is able to succeed where Apple doesn't because they aren't obsessed with being king of the ring. They're a software retailer: Valve makes more money as PC hardware proliferates to new audiences. Apple could do this too, but they would have to swallow their pride and work with Khronos again. Hence why iOS games are often the most demanding native titles on the Mac.


Glancing at the domain name, I got a burst of nostalgia for whytheluckystiff.net.


Folder UI components are a common case.


Seattle's is very disappointing as it doesn't seem to include Bezos' over the top laughter.

https://www.seattletimes.com/seattle-news/seattle-crosswalk-...


I want different capabilities at $200.

I paid for that to get access to Deep Research from OpenAI and I feel I got more than $200 of value back out.

These companies have a hard time communicating value. Capabilities make that easier for me to understand. Rate-limiting and outages don't.


My current dream is a model that's good at coding with a ~10M token context window. I understand Llama 4 has a window approximately that size, but I'm hearing mixed results about its coding ability.

If it had deep research and this, with a large number of API requests, I'd consider $200/month.


Has anyone found the output at these large context windows usable at all?

IME the quality of all models goes down considerably after just a few thousand tokens. Hallucinations, mixed-up prompts, forgotten earlier prompts, etc. all become much more likely as the context grows. I couldn't imagine a context of 1M tokens, let alone 10M, being usable at all. Not to mention that every query slows to a crawl just from moving that amount of data around (which still, annoyingly, happens on every query...).

So usually at around 10K tokens I ask it to summarize what was discussed, or I manually trim down the current state, and start a fresh chat from there. I've found this to work much better than wasting my time fighting bad output. It's also cheaper if you're on a metered plan (OpenRouter, etc.).
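
For what it's worth, the summarize-and-restart step is easy to script against any OpenAI-compatible endpoint. A rough sketch (OpenRouter base URL; the model slug is just an example):

    from openai import OpenAI

    # Sketch of the summarize-then-restart flow, not a polished tool.
    client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")
    MODEL = "anthropic/claude-3.7-sonnet"  # example slug; use whatever you normally use

    history = [
        {"role": "user", "content": "...the long conversation that's starting to drift..."},
    ]

    summary = client.chat.completions.create(
        model=MODEL,
        messages=history + [{"role": "user",
            "content": "Summarize the decisions made and open questions so far, tersely."}],
    ).choices[0].message.content

    # Fresh chat seeded only with the compressed state.
    history = [{"role": "system", "content": "Context from the previous session:\n" + summary}]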


The results are not mixed, Llama 4 is terrible at coding. I agree on longer context window being the dream.


I mean, with Gemini 2.5 Pro you get a 2 million token context window and by far my favorite coding model.


I just subscribed to the free trial yesterday, and I've been hooked tbh. I haven't subscribed to any of the other LLM companies so far. I hope something else comes out within a month because I really don't want to spend 22 Euro per month for it.

The 1M context window (2M?) really sets it apart.


I believe you can still use Gemini 2.5 Pro for free via https://aistudio.google.com and their gemini-2.5-pro-exp-03-25 model ID through their API.

The free tier is "used to improve our products", the paid tier is not.
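
If anyone wants to try it from code, here's a minimal sketch with the google-generativeai package (model ID as in the comment above, key from AI Studio):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # free-tier key from aistudio.google.com
    model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")
    resp = model.generate_content("In two sentences: what does a 1M-token context window buy you?")
    print(resp.text)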


22 euro per month is less than 1 per day. Less than one espresso.

I get the subscription fatigue, but there are splurges and there are truly valuable things.


Has anyone tried the 2M context window on a code base and can report how it compares to Claude or o1?


Made a video comparing Gemini 2.5 Pro to Claude Sonnet 3.7 recently: https://www.youtube.com/watch?v=AVdVJ_hD_vo


I mean, I've tried Gemini 2.5 Pro + Roo and then Claude 3.7 + Roo on the same task, and Gemini blew Claude away. I haven't spent any more OpenRouter credits because Gemini was so much better.


Does Gemini have a web interface similar to claude.ai? I am lazy[1], but I am also poor. I would not be able to afford 100 USD per month.

[1] But if it is cheap enough, has large context window, then I might consider setting up something akin to claude.ai with Gemini's API.


Yeah AI Studio is free with decent rate limits, though obviously more developer focused: https://aistudio.google.com/

The official Gemini app works well for me too; there's a nice free tier, and the Advanced tier is included if you have a newer Pixel phone. Otherwise it's $20/month for Advanced. There's no $200/month option.

https://gemini.google.com/app


There's also Google's https://idx.dev - a web IDE / VS Code sort of deal where you can use Gemini in agentic mode (a mix of 2.0/2.5, but if you use your own Gemini key you can guarantee 2.5 Pro, I think).

Edit: well, it now appears to be https://firebase.studio/ - that's a recent change; I haven't used it since it changed its name.


I mostly use LLMs on PC, as I use LLMs mainly for coding.

Does AI Studio allow you to have projects with project files and whatnot?

How about its context window length, more or less than Claude's?

I am also interested in open-source alternatives to the web interface that claude.ai has; I know there are some but I have forgotten their names. It would be cool to have a list here.


The best open source UI I know of is https://openwebui.com/ - you can point it at any OpenAI API compatible endpoint and both Gemini and Anthropic offer those now.

You can use the Gemini API for free with quite generous allowances, including for 2.5 Pro.
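
As a quick sanity check before wiring it into Open WebUI, you can hit Gemini's OpenAI-compatible endpoint directly; the base URL below is the documented one at the time of writing, but double-check the docs:

    from openai import OpenAI

    client = OpenAI(
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
        api_key="YOUR_GEMINI_API_KEY",
    )
    resp = client.chat.completions.create(
        model="gemini-2.5-pro-exp-03-25",  # I believe the same model ID works via the compat layer
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(resp.choices[0].message.content)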


Thanks Simon, will take a look.

Extremely off-topic: are you still around DS?


DS?


DarkScience's IRC server.


Wow that takes me back! I've not been active on IRC in about a decade I'm afraid.


So we have talked a decade ago?! Damn! I remember you from DS. :D


AI Studio is only developer-focused if you’re not working on AI, which is a prohibited use case according to the Gemini API / AI Studio “Additional Terms”.


I waited until Deep Research came to the normal paid plans, but it's been very useful the times I have thought to use it.


Probably a no go for Spanish speakers.



I think it was a pun. "No va" in Spanish means "it doesn't go."


Spanish, being a continuation of Latin, recognizes "nova" as its own word; it doesn’t parse as no va. Your pun is an urban legend.


See “Chevy Nova”


https://www.snopes.com/fact-check/chevrolet-nova-name-spanis...

Often repeated, but it doesn't make any sense to Spanish speakers. Nova is a word on its own.

Would an English speaker assume a hammer is made of ham?


English speakers seem to believe that it makes sense to call a hamburger with cheese a “cheeseburger”, so who knows.


A sample MCP example with data would be helpful.


Author here - yeah that makes sense. A real example would def make it clearer - I'll add one. Thanks for the suggestion!
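
In the meantime, here's roughly the shape I have in mind, using the official MCP Python SDK's FastMCP helper (the tool and its fake data are placeholders, not the example from the post):

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo")

    @mcp.tool()
    def lookup_order(order_id: str) -> dict:
        """Return a (fake) order record so a client has some data to play with."""
        return {"order_id": order_id, "status": "shipped", "total": 42.50}

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default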

