There certainly are many interesting parallels here. I often think about this from the perspective of systems biology, in Uri Alon's tradition. There are a range of graphs in biology with excitation and inhibitory edges -- transcription networks, protein networks, networks of biological neurons -- and one can study recurring motifs that turn up in these networks and try to learn from them.
It wouldn't be surprising if some of the lessons from that work also transferred to artificial neural networks, although there are some technical things to consider.
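To make the motif idea concrete, here is a minimal sketch (assuming Python with networkx; the toy graph, node names, and `sign` attribute are invented for illustration) of counting feed-forward loops in a small signed directed graph and classifying them as coherent or incoherent in the way Alon's work does for transcription networks:

```python
# Minimal sketch: count signed three-node feed-forward loops (FFLs) in a toy
# directed graph whose edges carry a 'sign' of +1 (excitatory/activating) or
# -1 (inhibitory/repressing). All names here are hypothetical.
from itertools import permutations
from collections import Counter

import networkx as nx

# Toy signed network.
G = nx.DiGraph()
G.add_edge("X", "Y", sign=+1)
G.add_edge("X", "Z", sign=+1)
G.add_edge("Y", "Z", sign=-1)   # X->Y->Z plus X->Z forms an incoherent FFL
G.add_edge("Z", "W", sign=+1)

def feed_forward_loops(g):
    """Yield (x, y, z, sign_product) for every FFL x->y, y->z, x->z."""
    for x, y, z in permutations(g.nodes, 3):
        if g.has_edge(x, y) and g.has_edge(y, z) and g.has_edge(x, z):
            # Coherent iff the indirect path's sign matches the direct edge's,
            # i.e. the product of all three signs is positive.
            product = g[x][y]["sign"] * g[y][z]["sign"] * g[x][z]["sign"]
            yield x, y, z, product

counts = Counter("coherent" if s > 0 else "incoherent"
                 for *_, s in feed_forward_loops(G))
print(counts)  # Counter({'incoherent': 1})
```

The same enumeration could in principle be run over any signed graph extracted from a model, though whether the biological motif vocabulary carries over is exactly the open question being hedged above.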
Agreed! So many emergent systems in nature achieve complex outcomes without central coordination - from the cellular level to ant colonies & beehives. There are bound to be implications for designed systems.
Closely following what you guys are uncovering through interpretability research - not just accepting LLMs as black boxes. Thanks to you & the team for sharing the work with humanity.
Interpretability is the most exciting part of AI research for its potential to help us understand what’s in the box. By way of analogy, centuries ago farmers’ best hope for good weather was to pray to the gods! The sooner we escape the “praying to the gods” stage with LLMs, the more useful they become.