My impression is that we're already working with implicit symbols in some fields.
Unsupervised ASR pre-training uses a random projection and then trains the model to classify frames into discrete acoustic classes. Later, those classes are in turn classified into discrete characters.
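Here is a minimal sketch of what I mean by random-projection quantization (the dimensions, names, and codebook are made up for illustration, not any specific library's API): each frame vector is pushed through a frozen random matrix and snapped to the nearest entry of a frozen random codebook, which yields a discrete class id per frame.

```python
# Sketch of random-projection quantization into discrete acoustic classes.
# All sizes and names are illustrative assumptions, not a real pipeline.
import numpy as np

rng = np.random.default_rng(0)
FRAME_DIM, PROJ_DIM, NUM_CLASSES = 80, 16, 512   # hypothetical sizes

projection = rng.standard_normal((FRAME_DIM, PROJ_DIM))  # fixed, never trained
codebook = rng.standard_normal((NUM_CLASSES, PROJ_DIM))  # fixed, never trained

def acoustic_class_ids(frames: np.ndarray) -> np.ndarray:
    """Map (T, FRAME_DIM) frame features to (T,) discrete class ids."""
    projected = frames @ projection                       # (T, PROJ_DIM)
    # nearest codebook entry by squared Euclidean distance
    dists = ((projected[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

frames = rng.standard_normal((100, FRAME_DIM))            # stand-in for log-mel frames
ids = acoustic_class_ids(frames)                          # e.g. [417, 23, 23, 88, ...]
```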
It's implemented using differentiable convolutions, but since we go from one sequence of discrete classes to another sequence of discrete classes, I would be very surprised if this could NOT be represented as pattern matching and lookup tables.
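To make the "pattern matching and lookup tables" view concrete, here is a toy sketch: once both sides are discrete, the mapping from acoustic-class sequences to characters can in principle be written as a table over short class-id patterns. The table contents and class ids below are invented for illustration only.

```python
# Toy lookup-table view of class-sequence -> character decoding.
# Pattern keys and class ids are made up; this is a sketch, not a real decoder.
PATTERN_TABLE = {
    (23, 23, 88): "a",
    (417,): "k",
    (88, 91): "t",
}
MAX_PATTERN = max(len(p) for p in PATTERN_TABLE)

def classes_to_chars(class_ids):
    """Greedy longest-match decode of a class-id sequence into characters."""
    out, i = [], 0
    while i < len(class_ids):
        for span in range(min(MAX_PATTERN, len(class_ids) - i), 0, -1):
            pattern = tuple(class_ids[i:i + span])        # prefer longer patterns
            if pattern in PATTERN_TABLE:
                out.append(PATTERN_TABLE[pattern])
                i += span
                break
        else:
            i += 1                                        # no match: skip this class
    return "".join(out)

print(classes_to_chars([417, 23, 23, 88, 88, 91]))        # -> "kat"
```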
And thanks to dialects, the same characters can stand for many different phoneme sequences, so they are akin to a symbolic abstraction.