Hacker News

what decision / downstream process is going to consume the 1B node graph render? is producing a render really necessary for that decision, or is rendering the graph waste?

is there a way you can subsample or simplify or approximate the graph that'd be good enough?
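one cheap form of subsampling, as a sketch: keep a random subset of nodes and only the edges among them (an induced subgraph). the function name, the adjacency-dict layout, and the fixed seed are all my own illustrative assumptions, not anything from the thread:

```python
import random

def induced_subsample(adj, k, seed=0):
    """Keep a random sample of k nodes and the edges among them.

    adj: dict mapping node -> set of neighbour nodes.
    Returns the induced subgraph in the same format.
    """
    rng = random.Random(seed)  # fixed seed so the render is reproducible
    keep = set(rng.sample(sorted(adj), min(k, len(adj))))
    # restrict each kept node's neighbour set to other kept nodes
    return {u: adj[u] & keep for u in keep}
```

whether this is "good enough" depends entirely on the downstream decision: uniform node sampling preserves rough density but can miss rare hubs, so a degree-weighted sample may suit some domains better.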

in some domains, problems defined on graphs can be simplified by pre-processing the graph to reduce it to a smaller, equivalent problem: e.g. trees can be contracted to points, chains can be replaced with a single edge, and so on. these tricks are sometimes necessary to get scalable solution approaches in industrial applications of optimisation / OR methods to problems defined on graphs. a solution recovered on the simplified graph can be "trivially" extended back to the full original graph, given enough post-processing logic. if such graph simplifications make sense for your domain, can you preprocess and simplify your input graph until you hit a fixed point, then visualise the simplified result? (maybe it contracts to 1 node!)
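a minimal sketch of the fixed-point idea, under assumptions of my own (adjacency-dict representation, undirected graph, names invented): prune leaves until trees have contracted to points, splice out degree-2 nodes so chains become single edges, and repeat until nothing changes. note it discards the self-loop a fully contracted cycle would leave, so pure cycles collapse to a point:

```python
def simplify(adj):
    """Contract trees to points and chains to single edges, to a fixed point.

    adj: dict mapping node -> set of neighbour nodes (undirected).
    Returns a new simplified adjacency dict; the input is not mutated.
    """
    adj = {u: set(vs) for u, vs in adj.items()}  # defensive copy
    changed = True
    while changed:
        changed = False
        # prune leaves: degree-1 nodes fold into their neighbour
        for u in [n for n, vs in adj.items() if len(vs) == 1]:
            if u not in adj or len(adj[u]) != 1:
                continue  # degree changed while processing earlier nodes
            (v,) = adj[u]
            adj[v].discard(u)
            del adj[u]
            changed = True
        # splice chains: a degree-2 node is replaced by one edge a-b
        for u in [n for n, vs in adj.items() if len(vs) == 2]:
            if u not in adj or len(adj[u]) != 2:
                continue
            a, b = adj[u]
            adj[a].discard(u)
            adj[b].discard(u)
            if a != b:  # skip the self-loop a contracted cycle would form
                adj[a].add(b)
                adj[b].add(a)
            del adj[u]
            changed = True
    return adj
```

on a path of 5 nodes this reaches a single isolated node, while a 4-clique with a pendant leaf keeps its 4-node core; real pre-processing for an optimisation problem would also record each contraction so the solution can be expanded back onto the original graph.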



> is producing a render really necessary for that decision, or is rendering the graph waste?

Just to be clear, the OP already has a graph. There are nodes and relationships. The graph can be queried for understanding.

Rendering the graph is tractable for a small graph or a portion of the graph.

Trying to render all the nodes in an enormous graph is almost always an expensive quixotic adventure.


> Expensive quixotic adventure.

Perhaps that is the experience he was after for his billion node graph.



