Maybe not exactly what you had in mind, but there is a lot of literature in general on trying to extract interpretations from neural network models, and to a lesser extent from other complicated nonlinear models like gradient-boosted trees.
Somewhat famously, you can plot a CNN's learned convolution weights as heatmaps and obtain a visual representation of the "filters" the model has learned, which it (conceptually) slides across the image until they match something. For example: https://towardsdatascience.com/convolutional-neural-network-...
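A minimal sketch of that kind of visualization (assuming PyTorch and matplotlib; the tiny untrained model here is purely hypothetical, in practice you'd load a trained one):

```python
import torch.nn as nn
import matplotlib.pyplot as plt

# Hypothetical tiny CNN for illustration; in practice, load a trained model.
model = nn.Sequential(nn.Conv2d(1, 8, kernel_size=5), nn.ReLU())

# First conv layer's weights: shape (out_channels, in_channels, kH, kW).
kernels = model[0].weight.detach().numpy()

fig, axes = plt.subplots(1, kernels.shape[0], figsize=(12, 2))
for i, ax in enumerate(axes):
    # Each kernel is the "filter" the model slides across the image.
    ax.imshow(kernels[i, 0], cmap="viridis")
    ax.set_title(f"filter {i}")
    ax.axis("off")
plt.show()
```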
Many techniques don't look directly at the numbers in the model. Instead, they construct inputs to the model that attempt to trace out its behavior under various constraints. Examples include Partial Dependence, LIME, and SHAP.
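A minimal sketch of that idea using Partial Dependence (assuming a recent scikit-learn; the dataset and model are just illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sweep feature 0 over a grid of values and average the model's
# predictions over the data at each grid point: this traces out the
# model's marginal behavior along that feature.
pd_result = partial_dependence(model, X, features=[0])
print(pd_result["average"][0])  # averaged model output along the grid
```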
Also, those "deep dream" images that were popular a couple of years ago are generated by optimizing an input image to amplify the activations of particular layers inside a deep NN, i.e. by running parts of the model rather than the whole thing.
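A minimal sketch of that mechanism (assuming PyTorch; the tiny untrained network is hypothetical, real deep-dream images use a large pretrained model):

```python
import torch
import torch.nn as nn

# Hypothetical small network for illustration.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
target_layer = net[:2]  # run only part of the model

# Gradient ascent on the *input*, not the weights.
img = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(50):
    opt.zero_grad()
    act = target_layer(img)
    loss = -act.norm()  # maximize the layer's activation magnitude
    loss.backward()
    opt.step()
```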
Yes, more generally this entails treating the numbers learned by a model not as individually meaningful "weights" (an interpretation that doesn't scale as the model gets larger) but as an approximation of a non-parametric representation, i.e. something function-like or perhaps even more general. There's a well-established body of "explainable" semi-parametric and non-parametric modeling in statistics, which aims to scale smoothly to large volumes of data and modeling complexity much like NNs do.
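For a concrete picture of what "non-parametric" means here, a minimal sketch (plain NumPy, illustrative data) of Nadaraya-Watson kernel regression, where the fitted "model" is a function defined directly by the data rather than by a small set of interpretable weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.2, 200)

def kernel_regress(x_query, x, y, bandwidth=0.5):
    # Prediction is a Gaussian-weighted average of nearby training targets.
    w = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

x_grid = np.linspace(0, 10, 50)
print(kernel_regress(x_grid, x_train, y_train)[:5])
```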
The field of NN explainability tries, but usually only hand-wavy analysis is possible (there are just too many weights). This project instead involved intentionally building very, very small networks that can be understood completely.