
This is long overdue. PowerShell has long supported passing structured output (objects) via pipes, and this is the closest attempt to approximate that without breaking the world.
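For a taste of what that looks like, a minimal sketch (assuming pwsh is on the PATH): the pipeline passes process objects rather than text, so sorting and column selection operate on properties.

  # objects flow through the pipe; Sort-Object and Select-Object work on properties
  pwsh -Command "Get-Process | Sort-Object CPU -Descending | Select-Object -First 3 Name, CPU"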


I don't know, Nushell does a pretty good job.

https://www.nushell.sh/
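For example (a rough sketch, assuming Nushell is installed as nu), pipelines pass structured tables rather than text:

  # ls emits a table of records; where/sort-by operate on typed columns
  nu -c "ls | where size > 10kb | sort-by modified"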


This looks like the worst way to do it, though.

Why not

A: have a flag to the command telling it you want JSON output?

B: put the actual file descriptor in the environment variable, rather than using it as a flag telling the tool to look for a hardcoded one?


They actually did B:

“[…] if the environment variable STDDATA_FD is present or set to the descriptor to produce the JSON output on.”
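If that is the mechanism, an invocation might look roughly like this (some-tool is hypothetical; the exact semantics depend on the final implementation):

  # human-readable output stays on stdout; JSON goes to fd 3
  STDDATA_FD=3 some-tool 3>data.json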


Isn't that just an old fork of Elasticsearch?


It is a fork, but not old; they have ongoing commits: https://github.com/opensearch-project/OpenSearch/commits/mai...

Plus, given that AWS is currently hosting OpenSearch, they have every incentive not to rest on their laurels when it comes to modern features or stability.


Went from ES and sharding hell to less of a sharding hell with OS on AWS. I've been looking for a replacement ever since the first Friday-evening sharding party with the infrastructure team.


"sharding party"

Man, that made me laugh. I'm using that.


Yeah, you haven't lived until you've curled _cluster/allocation/explain and _cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED in blind rage every few seconds; in production, with tarantulas on your back asking for status, of course.
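For anyone who hasn't had the pleasure, the ritual looks something like this (host and port are assumptions; works against both Elasticsearch and OpenSearch):

  # list shards stuck in UNASSIGNED, then ask the cluster to explain itself
  curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED
  curl -s 'localhost:9200/_cluster/allocation/explain' | jq .   # jq optional, for readability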


I think GitHub already offers BYOK (bring your own key). I think the problem is that you are still expected to pay for all the premium subs; the key only lets you go above the rate/monthly limits.

I think JetBrains does it better with full BYOM for models, including Ollama. And I think if you go with Ollama, you only need to pay for the IDE license, not for the AI add-on, but don't quote me on that.
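If you want to check that a local Ollama instance is reachable before wiring it into the IDE, a quick smoke test (the model name is just an example; 11434 is Ollama's default port):

  # one-shot generation against a local Ollama server
  curl -s http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hello", "stream": false}'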


I had moderate success using https://www.iso.org/ISO-house-style.html converted to Markdown and narrowed to the guidelines starting with "Plain English" and ending before "Conformity and conformity-related terms" (plus a few other rules up to and including "Dates"). A quick estimate puts the whole Markdown document at 9869 tokens - quite manageable. I generally prefer the style of the Microsoft Writing Style Guide, but the ISO house style is the only one that fits nicely into a prompt.
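The conversion itself is the easy part; a rough sketch (assuming pandoc is installed and the page is fetchable as plain HTML):

  # fetch the style guide and convert it to Markdown for use in a prompt
  curl -s https://www.iso.org/ISO-house-style.html | pandoc -f html -t gfm -o iso-house-style.md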

Looking forward to your model/product!

P.S. https://www.gov.uk/guidance/style-guide/technical-content-a-... also looks useful


I am afraid the choice in many OSS projects is not slop vs. expert-written content but LLM-assisted content vs. nothing.

I recently produced a bunch of migration guides for our project by pointing Claude 4 Sonnet at my poorly structured Obsidian notes (some more than 5 years old), a few commits where I migrated the reference implementation, and a reasonably well-maintained yet not immediately actionable CHANGELOG. I think the result is far from top-notch but, at the same time, it is way better IMO than nothing (nothing being the only viable alternative given my priorities): https://oslc.github.io/developing-oslc-applications/eclipse_...


Sure, there is nothing wrong with that if you either:

1. Disconnect that computer from the internet.

2. Are happy to have your computer infected and join a botnet.


Can someone who has knowledge about this explain how a PC with an "unsupported OS" will actually get attacked just by web browsing and being connected to the internet? Your PC will always be behind NAT and will never have a public IP, so someone port-scanning it can be ruled out, unless it's maybe some infected device on the local network. Is it normal in modern web browsers that you can just break out of the JavaScript sandbox and get OS-level access by running an OS that hasn't been updated for a few years? And if you're running an exe that exploits some known userspace security issue of older OS versions, how likely is it that this exe doesn't have any other malicious code that'd cause issues even on an up-to-date OS?


If you open a browser, you expose yourself to other servers. Same with opening files you download. Plus, with exploits like NAT slipstreaming, your computer can be exposed to arbitrary packets from anywhere on the internet as soon as any device you own loads an ad.

Microsoft at some point had a bug where a single packet could take over the entire kernel. I think it was a bug somewhere in the IP stack (something related to fragmentation in IPv6, I think?). Linux has had similar issues.

If the built-in JPEG viewer or h.264 decoder or whatever component you use contains a bug, your computer can get infected. That also goes for things like preview generators and file indexers that run even if you don't open the file.

As much as the web seems to have consumed everything, there are still plenty of files people open.

In practice, you'll probably be fine as long as you keep your browser up to date and use up-to-date third-party software to open most files. At some point Chrome and Firefox stop supporting your system, though, and that's when infection suddenly becomes real easy.


A lot of these are non-exe files, like images/video, crafted to execute arbitrary code through some bug in outdated software that opens them. Could be a web browser or something else. It does take a while for an OS to be so old that browsers don't support it anymore, but sufficiently old ones are vulnerable to known Spectre exploits breaking out of the JS sandbox, for example. Or random other browser features can be exploited.

Also, WannaCry is a good example of a LAN attack reaching further than you might expect. And there are various conditional ways to breach the NAT, one of them simply being NAT-less IPv6 with a misconfigured firewall.

Microsoft might bluff a bit and actually backport fixes for very serious issues, like how WannaCry was patched all the way back to XP. Maybe Win10 is fine for several years, but the real problem is that you don't know how vulnerable you are with each passing year.


With outdated browsers it does make sense. A bit more surprising is the image or video decoding exploit, considering that I'd assume those would usually be done in hardware rather than by some userspace or OS-level code.


Hardware transcoding still involves software, plus the hardware itself can be vulnerable; it's not meant to act as a security boundary. But anyway, it's also very hit-or-miss: the drivers need to support it, and even then the software might not use it.

One random thing that ticks me off: Google Meet insists on using VP8/VP9 because they invented it, and those have way less hardware-transcoding support overall. That's why it uses so much more CPU on many devices than Zoom etc., which use the more common H.264.


From everything I've seen, Defender won't stop getting updates for devices running Windows 10.


This was a thing back in the Windows 95 days... but it is no longer an issue.


> Attempts to do this and survived either restricted themselves to being very abstract or limited their scope to specific use cases.

Wikidata? 1.65 billion graph nodes and counting under a common vocabulary.
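Anyone can poke at it through the public SPARQL endpoint. A minimal query, just as illustration (wdt:P31 is "instance of", wd:Q5 is "human"):

  # three arbitrary humans from Wikidata, as SPARQL JSON results
  curl -s -H 'Accept: application/sparql-results+json' https://query.wikidata.org/sparql \
    --data-urlencode 'query=SELECT ?item ?itemLabel WHERE { ?item wdt:P31 wd:Q5 . SERVICE wikibase:label { bd:serviceParam wikibase:language "en" } } LIMIT 3'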


Below are some links for extra reading from my favorites.

High-level overview:

- https://www.w3.org/DesignIssues/LinkedData.html from TimBL

- https://www.w3.org/DesignIssues/ReadWriteLinkedData.html from TimBL

- https://www.w3.org/DesignIssues/Footprints.html from TimBL

Similar recent attempts:

- https://www.uber.com/en-SE/blog/dragon-schema-integration-at... an attempt in a similar direction at Uber

- https://www.slideshare.net/joshsh/transpilers-gone-wild-intr... continuation of the Uber Dragon effort at LinkedIn

- https://www.palantir.com/docs/foundry/ontology/overview/

Standards and specs in support of such architectures:

- http://www.lotico.com/index.php/Next_Generation_RDF_and_SPAR... (RDF is the only widely used standard in the world for graph data; combining graph API responses from N endpoints is a straightforward graph union, vs. an N-way graph merge for JSON/XML/other tree-based formats; see the sketch after this list). Also see https://w3id.org/jelly/jelly-jvm/ if you are looking for a binary RDF serialization.

- https://www.w3.org/TR/shacl/ (needs tooling, see above)

- https://www.odata.org/ (in theory has means to reuse definitions, does not seem to work in practice)

- https://www.w3.org/TR/ldp/ (great foundation, too few features - some specs like paging never reached Recommendation status)

- https://open-services.net/ (builds atop W3C LDP; full disclosure: I'm involved in this one)

- https://www.w3.org/ns/hydra/ (focus on describing arbitrary affordances; not related to LinkedIn Hydra in any way)
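To make the graph-union point above concrete, a minimal sketch (assuming Apache Jena's riot CLI; the file names stand for hypothetical responses from two endpoints). The union is plain concatenation at the triple level, with no schema-aware merge logic:

  # parse two RDF responses and stream out their union as N-Triples
  riot --output=ntriples response-a.ttl response-b.ttl > union.nt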

Upper models:

- https://basic-formal-ontology.org/ - the gold standard. See https://www.youtube.com/watch?v=GWkk5AfRCpM for the tutorial

- https://www.iso.org/standard/87560.html - Industrial Data Ontology. There is a lot of activity around this one, but I lean towards BFO. See https://rds.posccaesar.org/WD_IDO.pdf for the unpaywalled draft and https://www.youtube.com/watch?v=uyjnJLGa4zI&list=PLr0AcmG4Ol... for the videos


> This feature is probably a big thing for .NET developer productivity. It's quite a shame, that it only came now.

I am using https://github.com/dotnet-script/dotnet-script without any issues. Skipping an extra step would be cool though.


Is it possible to make those scripts project-scoped, like with npm run? They should work for everyone who checks out the repo.


I think everyone needs to run `dotnet tool install -g dotnet-script` before running them. This is the most annoying part, and it is where the .NET 10 announcement will be really appreciated.

But then each script has an individual list of dependencies, so there should be no need for further scoping like in npm (as in, the compilation of the script is always scoped behind the scenes). In this regard, both should be similar to https://docs.astral.sh/uv/guides/scripts/#declaring-script-d... which I absolutely love.
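For reference, the per-script dependency declaration in dotnet-script lives at the top of the script itself, roughly like this (hello.csx and the package are just examples):

  # hello.csx declares its own NuGet dependency inline:
  #   #r "nuget: Newtonsoft.Json, 13.0.3"
  #   using Newtonsoft.Json;
  #   Console.WriteLine(JsonConvert.SerializeObject(new { hello = "world" }));
  dotnet script hello.csx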


You can use project-scoped tool manifests. Then you can call dotnet tool restore to load all tools specified in the manifest.

https://learn.microsoft.com/en-us/dotnet/core/tools/global-t...
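The flow is roughly (run from the repo root):

  # one-time setup: creates .config/dotnet-tools.json and pins the tool there
  dotnet new tool-manifest
  dotnet tool install dotnet-script
  # everyone else, after checkout:
  dotnet tool restore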


Sure, but I think you still need to provide the full path of the script. If you're deep inside a source folder, it will lead to something like:

  dotnet script ../../../../../scripts/scaffold-something.cs
With npm run it works from any subdirectory:

  npm run scaffold-something


TIL, thank you!


I think, to be fair, we need to evaluate whether this will still be true in 20 years - the same standard to which Obsidian was held.

