dejongh's comments | Hacker News

Old software is not replaced because nobody fully understands it, which makes reverse engineering it a gigantic effort and risk.


Having Skia power in Node.js is a cool idea. Thanks!


Cool!


Wow


Crashed Chrome :)


On September 18th Atlassian announced a new license model for Jira automation that, starting November 1st, severely limits automation for all but enterprise customers. Many customers will be forced to give up their automation or upgrade to expensive enterprise tiers. Since there is no enterprise tier for Jira Product Discovery, customers using automation in that product don't even have the option to pay more.


> Since there is no enterprise tier for Jira Product Discovery, customers using automation in that product don't even have the option to pay more.

This one is fascinating. I get paywalling features. I get removing features. But just turning them off is unusual.


Atlassian has been doing this for years, e.g. killing the server versions to force people into the cloud and drive revenue.

If you're on their products I feel for you, but if you're starting out I'd avoid anything Atlassian-infected.


I swing by Hacker News almost daily. Thanks for all the links and comments.


We could train an AI to compile and run itself. The next step would be to train it to optimize its own code and training. Then we'd have something close to evolution.


Wow. Wild story. Thanks for sharing. Cool twist that a bug ended up identifying the bad guys.


Interesting idea to reverse engineer the network. Are there other sources that have done this?


Maybe not exactly what you had in mind, but there is a lot of literature in general on trying to extract interpretations from neural network models, and to a lesser extent from other complicated nonlinear models like gradient boosted trees.

Somewhat famously, you can plot the learned weights of an image-processing CNN's convolutional layers as heatmaps and obtain a visual representation of the "filter" the model has learned, which the model (conceptually) slides across the image until it matches something. For example: https://towardsdatascience.com/convolutional-neural-network-...
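A minimal sketch of that filter-visualization idea, assuming a PyTorch/torchvision setup (ResNet-18 is my illustrative choice, not from the article): pull the learned weights out of the first convolutional layer and draw each filter as a heatmap.

    # Sketch: visualize a CNN's learned first-layer filters as heatmaps.
    # ResNet-18 is illustrative; any CNN with a conv layer works the same way.
    import matplotlib.pyplot as plt
    import torchvision.models as models

    model = models.resnet18(weights="IMAGENET1K_V1")
    filters = model.conv1.weight.detach()  # shape: (64, 3, 7, 7)

    fig, axes = plt.subplots(4, 8, figsize=(12, 6))
    for i, ax in enumerate(axes.flat):
        # Average over the RGB input channels to get one 7x7 heatmap per filter
        ax.imshow(filters[i].mean(dim=0), cmap="viridis")
        ax.axis("off")
    plt.show()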

Many techniques don't look directly at the numbers in the model. Instead, they construct inputs to the model that attempt to trace out its behavior under various constraints. Examples include Partial Dependence, LIME, and SHAP.
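For a concrete taste of that input-probing style, here's a minimal partial dependence sketch with scikit-learn (the gradient boosting model and the diabetes dataset are illustrative choices): sweep one feature over a grid while averaging the model's predictions over the rest of the data.

    # Sketch: partial dependence, one of the input-probing techniques above.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor().fit(X, y)

    # Vary "bmi" and "s5" one at a time, averaging predictions over the
    # remaining features, to trace out the model's learned response.
    PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
    plt.show()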

Also, those "deep dream" images that were popular a couple of years ago are generated by running parts of a deep NN model without running the whole thing.
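A minimal sketch of that partial-execution idea, assuming a PyTorch setup (VGG-16 and the layer index are my illustrative choices): stop the forward pass at an intermediate layer, then do gradient ascent on the input image so that layer's activations grow.

    # Sketch: "deep dream"-style gradient ascent on the input image,
    # running only part of the network as described above.
    import torch
    import torchvision.models as models

    model = models.vgg16(weights="IMAGENET1K_V1").features.eval()
    LAYER = 10  # illustrative: stop the forward pass at this layer

    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.05)

    for _ in range(50):
        optimizer.zero_grad()
        x = image
        for i, module in enumerate(model):
            x = module(x)
            if i == LAYER:
                break  # don't run the whole model
        loss = -x.norm()  # negative, so stepping downhill grows activations
        loss.backward()
        optimizer.step()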


Yes, more generally this entails looking at the numbers learned by a model not as singular "weights" (which doesn't scale as the model gets larger) but as a way of approximating a non-parametric representation (i.e., something function-like or perhaps even more general). There's a well-established field of variously "explainable" semi-parametric and non-parametric modeling and statistics, which aims to scale seamlessly to large volumes of data and modeling complexity much like NNs do.


The field of NN explainability tries, but usually there are only handwavy things to be done (because there are too many weights). This project involved intentionally building very, very small networks that can be understood completely.

