Hacker News new | past | comments | ask | show | jobs | submit | ncomments login

You can do a lot with a little enthusiasm, way to go Swiss Miss :)

This sort of situation doesn't occur often, at least in my experience, but is so good when it does.

^Solution space

Lovelace would be a great language name someone should use it.

This is so dumb that it makes my head hurt.

The metaphor is apt, but the conclusion is, while imaginative, ridiculous.

What we currently refer to as “AI,” as the author correctly notes, is nothing more than a next-word-predictor, or, if you’re wild, a projection of an infinite-dimensional sliding space onto a totally arbitrary, nonlinear approximation. It could be exactly correct and perfect in every way, but it’s not.

This tool will never be an accountant. This tool should never write production code. This tool is actually quite useful for exploring purely-understood problem spaces in materials science.

It’s also good for generating plausible-sounding nonsense that is only sometimes reliable enough to avoid writing emails to your wife.

No thank you from me. I think I’ll continue participating in my own life, rather than automating away the trivially simple parts that make life worth living


Doesn't it have to listen to everything to capture the wake word "hey siri"? How else is it done?

Americans: take note that this miniseries, which perpetuated anti-American propaganda myths (repeated by participants in this very thread), was developed by the CBC, a state-run, taxpayer-funded media outlet.

For decades, Ottawa has freely used anti-Americanism as a caulk to bind together a confederation otherwise prone to fracture due to linguistic (French-English) and geographic (West-East) division. Politicians in Canada regularly use it to insult their opponents and to distinguish themselves. The end result? readthemaple.com/canadian-views-of-the-u-s-have-trended-negatively-for-decades/

Then they act shocked when the relationship has frayed, when it would have done so much sooner if Americans hadn't been so apathetic that they actually knew what their "friends" really thought of them.


Hey HN,

We just open-sourced Octagon VC Agents.

It’s an MCP server that lets you run AI-driven simulations of VCs like Fred Wilson, Peter Thiel, Marc Andreessen, and Reid Hoffman — designed for real research tasks like pitch feedback, diligence drills, and term sheet practice.

It’s powered by OpenAI’s new Agents SDK, so you get true agentic workflows out of the box:

Multi-agent reasoning

Context handoff

Tool use orchestration

Easy persona customization via markdown

Each agent is continuously enriched with real private markets data from Octagon (startup deals, valuations, founder profiles), so the answers stay grounded — not just hallucinated LLM output.

Use cases:

Side-by-side deal critiques from different VC perspectives

Reality-check startup metrics before fundraising

Simulate partner meeting debates or warm intro strategies

Create new investor personas tuned to your fund’s style

Agents can be extended easily — just modify their investment philosophy, cognitive biases, or communication styles in the persona files. You can also plug in your own orchestration logic if you want more complex decision chains.

You can also checkout a hosted custom gpt version here:

https://chatgpt.com/g/g-680c1eddd1448191bb4ed7e09485270f-vc-...

Would love feedback, ideas, or if anyone wants to collaborate on expanding multi-agent debate flows or new use cases!

Repo: https://github.com/OctagonAI/octagon-vc-agents


It depends on how much platform specific stuff you are trying to use. Also in 2025 most packages are tailored for the operating system by packagers - not the original authors.

Autotools is going to check every config from the past 50 years.


Put sparkles on the close account button!

Nobody owns their data. They just scrape the internet, or pirate massive troves of books. Just forcing companies to get a license to all the data they use, let alone an open license, would be a massive impediment to the development of open models.

What presidential elections are you comparing it to?

Yes. Just use the “better C” mode

You can use make without configure. If needed, you can also write your own configure instead of using auto tools.

Creating a make file is about 10 lines and is the lowest friction for me to get programming of any environment. Familiarity is part of that.


The underlying memory is still binary, or were you proposing an entirely new computer architecture with ternary gates?

Right, so a filter that sits behind the model and blocks certain undesirable responses. Which you have to assume is something the creators already have, but products built on top of it would want the knobs turned differently. Fair enough.

I'm personally somewhat surprised that things like system prompts get through, as that's literally a known string, not a vague "such and such are taboo concepts". I also don't see much harm in it, but given _that_ you want to block it, do you really need a whole other network for that?

FWIW by "input" I was referring to what the other commenter mentioned: it's almost certainly explicitly present in the training set. Maybe that's why "leetspeak" works -- because that's how the original authors got it past the filters of reddit, forums, etc?

If the model can really work out how to make a bomb from first principles, then they're way more capable than I thought. And, come to think of it, probably also clever enough to encode the message so that it gets through...


Thanks! I'll ask NotebookLM to make a podcast out of it!

If a company discloses vulnerabilities, they can't also then write that their product can actually help mitigate those vulnerabilities? So, you want them to offer problems without solutions?

I get that ideally the company would offer a slew of solutions across many companies, but this is still good, no?

I mean it looks like finding vulnerabilities is central to this company's goal, which is why they employ many researchers. I'd imagine they also incorporate the mitigations for the vulns into their product. So it's sort of weird to be "against" this. Like, do you just not want companies who deal in selling cybersecurity solutions simultaneously involved in finding vulnerabilities?


Exactly, he sees the problem clearly. And this article was five years ago. It's become even more entrenched now. There's basically no way of fixing this.

We can see similar problems with other sites that rely on volunteer labor, like Reddit.


I don't know if it's an EU rule, but in my (European) country cars are required to have their lights on at all times, even during the day. The lights switch on automatically when you start the car

It's an account created to avoid doxxing myself. My Wikipedia username is easily linked to my main HN account. I still rarely make minor Wikipedia edits now and then, and don't want my account banned.

Anyone who's edited Wikipedia long enough will recognize the pattern of what I'm describing. It's not a misrepresentation.


It isn’t, actually. Kiwix, IIAB, Rachel, and a custom web interface and search implementation- along with licensed and commissioned content. Kiwix is cool (and a partner of ours) but a Prepper Disk is a lot more than Kiwix.

Yes, UVU. And also yes. If I find myself needing something low-level and performant, I have a hard time justifying the ramp-up time required to use D since there is a near zero chance I would use it in my current or future employment. While that isn't always how I decide what technologies to use in my personal time, it definitely is a factor that tips the scales towards a more mainstream language

I'm not too sure what you mean. I kinda just avoided looking at existed implementations because it's a bit more interesting to do it myself.

Mostly just because C is a lot simpler, and in kernel dev, simplicity is everything. I've used rust for other projects but I feel like in kernel dev I would much rather use a simple and readable language than a safe language.

Domain knowledge is just useful in that particular domain unless you distill down general things into something you can carry forward. You cannot carry over skills from Call of Duty over to Starcraft, other than maybe, teamwork makes the dreamwork in multiplayer (this you can distill out). Once you leave a domain (the front lines), you will never be as in-tune as the people in it right now. I'm at peace with leaving domain-specific knowledge wherever I left it, because trying to bring over non-distilled concepts to new domains is a forced attempt at feigning experience. It's one of the problems with much of leadership, they ride on carrying over domain-knowledge from another domain when in reality they are total newbies in the new domain (ego does not allow you leave something behind, it wants to aggrandize you at every step).

However, if you are simultaneously involved in multiple domains, this is when the venn-diagram works its magic. The person with five different interests and professions is in a magical envious place of being able to touch the "overlap". I can see someone arguing that this does not have to happen in real time, that if you can simply remember everything, then you are always in this magical place.


It's tough to say because we definitely have created an exoskeleton with AI. We would need to tell a society with ubiquitous exoskeletons to mindfully exercise their body so it does not atrophy. The exercise you would get hiking will never translate to core body strength with the exoskeleton on. Likewise, core body strength would only have ambiguous value since everything would be done with an exoskeleton regardless since it's simply better. For programming specifically, you can already see it with Leetcode. Which of the Leetcode problems really matter and why for most work? Few for most us, but whoever is deciding what is important will decide what it is important. Google for example decides which Leetcode problems are important and they will decide in the future in the same way. It will have nothing to do with the reality of the day to day.

This is a very long-winded way of saying gatekeepers gonna gatekeep.


Oh god that's gnarly. I'm pretty sure you're right about it being a strategy to fit more into the context window. Prior to Windsurf changing their credit system I'd thought about purposefully limiting my file lengths to fit under multiples of 100 to use fewer of the defunct flow credits.

Hey everyone!

As a fellow k8s enthusiast who's spent too many hours in debugging hell, my co-founder and I built something we wish we'd had: Sentinal.

The problem: Debugging distributed systems is time-consuming. You have to manually inspect countless Kubernetes resources and piece everything together.

Our solution, an AI agent that:

- Deploys with a simple helm chart

- Explores your cluster only when you ask questions

- Agents do not emit data if you are not using it

- Runs relevant kubectl commands for you (with your approval)

With default read only RBAC policies applied

- Actually understands your system's context

- Using Github repositories that only you allow it to view

- Attempts to run read-only commands to interrogate assumptions - HTTP requests to services

Points you in the right direction with specific insights

- Exposes a simple web interface that shows you what the Agent is thinking and what it thinks it should do

Shows the reasoning as to why the agent wants to do what it thinks is best

All commands are human approved, we don't think agents should do whatever it 'thinks' is applicable

Does not save any information about your cluster, just queries what it thinks is relevant

It's non-intrusive - it doesn't deploy anything new to your cluster beyond its own pod. It just helps you make sense of what's already there and how things talk to each other. It does not emit any unwanted data to our servers and simply focuses the entire workflow on how a Software engineer would debug issues, by querying relevant entities and validating assumptions

We are both technical and engineers that have worked in the software industry for nearly a decade and have not found a tool that genuinely helps discover issues when systems interact with each other (eg. My service is running and shows a healthy status but my database is not being populated with orders). This isn't about showing you what you already know - it's about saving you time and frustration and trying different approaches in order to debug hard to solve errors.

If you're interested, drop a comment below. I'd love to start a discussion about debugging pain points and how we can make this more useful for all of us.

We hope that we open up an honest discussion about how LLM's should enrich developer experience and not be a nuisance for developers. We welcome comments, good or bad about this application of LLMs and Kubernetes.

We know that AI is a hot topic right now and is not always what we hope for it to be but we strongly believe there is a lot of value in it when applied correctly. It may not solve everything but if it helps you deliver projects faster and maintain what you have built with less sleepless nights and frustration, then we consider that a win.


Because it would hurt our little elitist exceptionalist hearts if we gave an H1B to a construction worker. There are low wage industries that could use such a program, but our little hearts can't take it because "its not the best and brightest".

Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: