smarx007's comments | Hacker News

I am afraid this is a very narrow reading of the CRA. Did you read the act yourself, or some qualified opinion by a European lawyer? Security updates are the default requirement of the CRA, and not having them is an exception that requires an assessment of risk (which I would assume means it's only viable for devices not directly connected to the Internet).

An (equally narrow ;)) quote:

"ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic security updates that are installed within an appropriate timeframe enabled as a default setting, with a clear and easy-to-use opt-out mechanism, through the notification of available updates to users, and the option to temporarily postpone them;"

Thus, I expect the RED to stipulate only that the radio firmware be locked down to prevent you from unlocking restricted frequencies, but the CRA to require all other software to be updatable to patch vulnerabilities.


I have not read the RED or the CRA, nor discussed what they specifically say with a lawyer who has read them. However, I have gone through a recent product R&D process in Europe where the product has WiFi and LTE connectivity, so it falls under the RED (even though WiFi and 4G are handled by off-the-shelf modules). I have read parts of the EN-18031 standards (mostly using their decision trees and descriptions of decision nodes as a reference), I've been to a seminar with a Notified Body about the practical implications of the RED, and I've filled out a huge document going through all the decision trees in 18031 and justifying why the specific path through each decision tree applies to our product. I've also discussed the implications of the RED and 18031 with consultants.

I don't doubt you with regard to what the RED and the CRA actually say. However, I'm afraid that my understanding better reflects the practical real-world implications for companies who just need to get through the certification process.

18031 requires an update mechanism for most products, yes, but it has some very stringent requirements for it to be considered a Secure Update Mechanism. I sadly don't have the 18031 standard anymore, so I can't look up the specific decision nodes, but I know for sure that allowing anyone with physical access to just flash the product with new unsigned firmware would not count as a Secure Update Mechanism (unless, I think, you can justify that the operational environment of the product ensures that no unauthorized person has physical access to the device, or something like that).

EDIT: I also wanted to add that in one common use case for microcontrollers, namely as part of a larger product where an SoC running Linux is the main application processor and MCUs handle specific tasks, you can easily get a PASS in all the EN-18031 decision trees without an update mechanism for the MCUs themselves. In such products, I can imagine a company deciding that it's easier to just permanently lock down the MCU with a write protect than to justify leaving it writeable.


Thank you, an interesting (and somewhat sad) perspective. It would be unfortunate if these two regulations combined resulted in fewer firmware update capabilities, not more.


Yeah, it's sad. I can say with certainty that there are products whose developers would have decided to leave MCUs and/or SoMs writeable based on analysing the threat model, but where the rigid decision trees in EN-18031 around secure storage mechanisms and secure update mechanisms make that too difficult to justify.


If you want a production-grade graph DBMS, you don't have that many OSS options that are reliable and well-supported.

In the relational space, it took OSS options like Postgres many decades (and a great many paid-for person-years) to get to a place where enterprises seriously consider migrating off Oracle.


Are there any? My experience so far with graph databases is a resounding failure.


I'm using Neo4j to build a CMDB and it is awesome.


That's good to hear, how large is the graph you're building (nodes, edges) and how do queries perform?


Not very big, since it is only used internally at my company: a 5-digit node count and a high 6-digit relationship count. Queries are usually very fast unless you try to do something stupid that ends up having to search the entire graph. Indexing critical high-cardinality properties and thinking of relationships as a kind of index help a lot with query performance. I have been meaning to test how fast Memgraph is.


The big issue we have had with Neo4j was with replication when we do MASSIVE updates. Otherwise, it handles the load reasonably well.


What constitutes a massive update?


What is a "CMDB"?


A Configuration Management Database (CMDB) is a centralized repository that stores information about Configuration Items (CIs), including their attributes and relationships. It's a key component of IT service management (ITSM), providing visibility into the components that make up IT services, like hardware, software, and documentation.


Interesting!


It can be incredibly useful. One example is to have every process linked to the VM it is running on, the host the VM is running on, and the TCP port the process is listening on. If you have all the correct relationships defined, then you can write a query like the one below to find every process on every VM on a given host that is listening on port 80.

MATCH p = (host:VMHOST {name: 'your_host_name'})-[:RUNS]->(vm)-[:HAS_SERVICE]->(service)-[:EXPOSES_PORT]->(port:TCPPORT {port: 80}) RETURN p

This can save an absurd amount of time when analyzing the impact of failures and checking security isolation compliance.


In OSS or generally?


Either, tbh?


I think on this site anything that's more expensive than free is considered expensive. Countless arguments have been had on Oracle vs Postgres, including lock-in. I think lock-in is more important to consider than license cost.

To be fair, it is quite nice for the pricing to be transparent. And I think it's somewhat competitive with Stardog, for example, while the community version is less restricted than Ontotext's.


Not really competitive with Stardog given our leading LLM integration with Voicebox. 85% pass@1 to exit POV with new customer.


> "To make sure everyone understands that, I prefer label property graphs over RDF."

I have two major issues with virtually all graph DBMSs that are not RDF/SPARQL-based:

1) They do not allow structure-preserving querying. That is, I query a graph and want the results to be a smaller graph. This is trivial in SQL: you just 'SELECT * FROM x WHERE ...' and the result set you get is tabular, just like the table x. In SPARQL, there are CONSTRUCT/DESCRIBE queries that do just that - give you the results as a graph (see the sketch after this list).

2) They don't use any (internationally recognized) standard to represent graph data. RDF is the only such format known to me (ignore all the semantic web stuff associated with it and just consider the format).
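
To illustrate point 1, here is a minimal SPARQL CONSTRUCT sketch (the FOAF vocabulary is just an example vocabulary, nothing mandated); the result it returns is itself an RDF graph rather than a table, analogous to how SELECT * keeps the tabular shape in SQL:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
# return the sub-graph of 'knows' edges whose targets are people
CONSTRUCT { ?person foaf:knows ?friend . }
WHERE {
  ?person foaf:knows ?friend .
  ?friend a foaf:Person .
}

The constructed triples can be serialized as Turtle or loaded straight back into the store.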

230k edges is peanuts for a graph db. It's like when the number of rows times columns in your SQL DB is 230k. NASA could (should?) have just used Oxigraph, RDF4J, or Jena. Stardog and Ontotext are the paid options. However, it is quite nice to see more interest in graph-based DBMSs in general!

> “Which employees have cross-disciplinary expertise in AI/ML?”

Regarding the study itself, I did not understand who the target user of this is. I would be more interested in the Lessons Learned 2.0 study (I understand it was attempted once before [1]). I don't think the study at hand would be able to correctly answer questions about expertise.

On the technical side, as far as I understand, the cosine similarity was computed per triplet? In that case, I could see how pgvector could be used for this. Relevance expansion is the only thing in the article that made me think it would be cool if it works well. But I could see how, in a combo of a regular RDF DBMS + pgvector, one could first do a cosine similarity query via pgvector and then compute an (S)CBD [2] of the subject (the from node) of the triplet (rough sketch below the links).

[1]: https://youtu.be/QEBVoultYJg?t=1653

[2]: https://patterns.dataincubator.org/book/bounded-description....
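
To make the pgvector half of that combo concrete, a rough sketch (with hypothetical table and column names) of the nearest-triple query might look like this; the subjects it returns would then be fed into CBD/DESCRIBE queries against the RDF store:

-- hypothetical table holding one embedding per triple
SELECT subject, predicate, object
FROM triple_embeddings
ORDER BY embedding <=> $1  -- pgvector's cosine distance operator
LIMIT 20;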


"They do not allow structure-preserving querying. That is, I query a graph and want the results to be a smaller graph."

I'm not sure what you mean by this. The result of a query in neo4j is a set of nodes with specified relations linking them. It is much more flexible than the way SQL can only return a single table.


A query result in openCypher is a rectangular result set, like in SQL. See the openCypher spec, p. 74.


"In the RETURN part of your query, you define which parts of the pattern you are interested in. It can be nodes, relationships, or properties on these"

You can return all nodes, relationships, and paths that match a query by using this syntax:

MATCH p = (a {name: 'A'})-[r]->(b) RETURN *

This is the exact opposite of a rectangular result set.


There are also many flags that should be enabled by default for non-debug builds, like UBSan and stack protection; see https://news.ycombinator.com/item?id=35758898


UBSan is usually a debug-build-only thing. You can run it in production for some added safety, but it comes at a performance cost, and theoretically, if you test all execution paths on a debug build and fix all complaints, there should be no benefit to running it in production.


I think it's time for the C/C++ communities to consider a mindset shift and pivot to having almost all protectors, canaries, sanitizers, and assertions (e.g. via _GLIBCXX_ASSERTIONS) on by default and recommended for use in release builds in production. The opposite (i.e. the current state of affairs) should be discouraged and begrudgingly accepted in a select few cases.
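
As a rough illustration (this flag set is just an example, not an exhaustive or authoritative hardening recipe), a release build along those lines might look like:

g++ -O2 -D_GLIBCXX_ASSERTIONS -D_FORTIFY_SOURCE=2 \
    -fstack-protector-strong -fstack-clash-protection \
    -fsanitize=undefined -fno-sanitize-recover=undefined \
    -o app app.cpp

The UBSan instrumentation is the expensive part; the libstdc++ assertions and stack protections are typically much cheaper.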

https://www.youtube.com/watch?v=gG4BJ23BFBE is a presentation that best represents my view on the kind of mindset that's long overdue to become the new norm in our industry.


I do not think things like the time command need to be compiled with such things. It is pointless, but your suggestion here is to do it anyway. Why bother?

Assertions in release builds are a bad idea since they can be fairly expensive. It is a better idea to have a different variety of assertion, like the VERIFY statements that OpenZFS uses, which are assertions that run even in release builds. They are used in situations where it is extremely important for a check to be done at runtime, without the performance overhead of the less important assertions that sit in performance-critical paths.
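
For reference, the pattern being described looks roughly like this (a simplified sketch, not the actual OpenZFS macro definitions):

#include <stdlib.h>

/* Cheap, numerous checks: compiled out entirely in release builds. */
#ifdef DEBUG
#define ASSERT(x) ((x) ? (void)0 : abort())
#else
#define ASSERT(x) ((void)0)
#endif

/* Critical invariants: always evaluated, even in release builds. */
#define VERIFY(x) ((x) ? (void)0 : abort())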


Why would I want potentially undefined behaviour in 'time'? I expect it to crash anytime it's about to enter UB. Sure, you may want to minimize such statements between the start/stop of the timer, but I expect any processing of stdout/stderr of the child process to be UB-proofed as much as possible.

I think it's a philosophical difference of opinion, and it's one of the things that drives Rust, Go, C#, etc. ahead - not merely language ergonomics (I hope Zig ends up as the language that replaces C). Society at large is mostly happy to take a 1-3% perf hit to get rid of buffer overflows and other UB-inducing errors.

But I agree with you on not having "expensive" asserts in releases.


I think I had a similar experience with HTTrack. However, wget also needs some tweaking to do relatively robust crawls, e.g. https://stackoverflow.com/a/65442746/464590
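
For instance, the kind of tweaking involved tends to look something like this (an illustrative flag set, not the exact recipe from the linked answer):

wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent --wait=1 --random-wait \
     https://example.com/docs/

--mirror handles recursion and timestamping, --convert-links/--adjust-extension make the copy browsable offline, and the wait options keep the crawl polite.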


Impressive! Silly question, perhaps, but will such packages be available for PCB assembly by "retail" companies like PCBWay, JLCPCB, etc.?

Also, is it even safe/practical to use WCSP on a PCB if the bare die is exposed to the environment? Or do they require conformal coating (or even epoxy potting?) after assembly to avoid premature faults?


I'm wondering about this too. Wouldn't they end up sensitive to light?


I've used a WLCSP MCU before and sunlight would make it reset. That was a fun one to debug.


I think Deno and Bun are the two successful attempts at a faster tsc :)


Both Deno and Bun still use current tsc for type checking


They just strip types and don’t do any type checking


Those are runtimes primarily, not compilers/type checkers. Likewise, TSC is not a TS runtime.


Well, of course. But TSC output (transpiled JS source code) is then run by a JS runtime like Node, which has a VM like V8 that builds an internal representation of the JS code. Using Bun or Deno lets you go from TypeScript directly to the VM's internal representation without needing TSC transpilation into JS first.

But as @keturakis pointed out (thanks!), Deno/Bun still rely on TSC, which I was not aware of.
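
In practice the split looks something like this (commands as I understand them; Deno bundles the TypeScript compiler for checking, while Bun defers to tsc entirely):

deno check main.ts   # type-checks using the bundled TypeScript compiler
deno run main.ts     # runs TS directly; types are stripped, not checked by default
bun run main.ts      # runs TS directly, no type checking
bunx tsc --noEmit    # type-check a Bun project with the official tsc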


Bun doesn't even support a way to check types, just remove them.

> Note — Similar to other build tools, Bun does not typecheck the files. Use tsc (the official TypeScript CLI) if you're looking to catch static type errors.


For being production-ready?


Hi, thanks for building a great tool and a great write-up! I was trying to add a number of repos under oslc/, oslc-op/, and eclipse-lyo/* orgs but no joy - internal server error. Hopefully, you will reconsider shutting down the project (just heard about it and am quite excited)!

I think a project like yours is going to be helpful to OSS library maintainers to see which features are used in downstream projects and which have issues. Especially when, as in my case, the project attempts to advance an open standard and just checking issues in the main repo will not give you the full picture. For this use case, I deployed my own instance to index all OSS repos implementing OSLC REST or using our Lyo SDK - https://oslc-sourcebot.berezovskyi.me/ . I think your tool is great in complementing the code search.


Ohh apologies, I think there was a bug that led to the Internal Server Error, please try again, I _think_ it should be working now!

> I think a project like yours is going to be helpful to OSS library maintainers to see which features are used in downstream projects and which have issues.

That was indeed the original motivation! Will see if I can convince Ammar to reconsider shutting down the project, but no promises

> For this use case, I deployed my own instance to index all OSS repos implementing OSLC REST or using our Lyo SDK

Ohh, in case it's not clear from the UI, you could create an account and index your own "collection" of repos and search from within that interface. I had originally wanted to build out this "collection" concept a lot more (e.g. mixing private and public repos), but I thought it was more important to see if there's traction for the public search idea at all

