There’s nothing stopping any LLM-backed chatbot from using plugins; the ReAct pattern discussed recently on HN is a general pattern for incorporating them.
The main limit is context space: unless the plugins are integral and trained in (which is less flexible), each plugin's description takes space in the prompt, and the tool-call interactions themselves also consume tokens — all of which shrinks the token budget left for the main conversation.
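To make that concrete, here's a minimal sketch of a ReAct-style tool loop. Everything here is hypothetical scaffolding — the `stub_llm` function stands in for a real model API, and `TOOLS` is a toy plugin registry — but it shows where the cost comes from: both the tool descriptions and every Thought/Action/Observation round-trip get appended to the prompt.

```python
import re

# Hypothetical plugin registry: each tool's description is injected into
# the prompt, so every plugin added shrinks the context left for chat.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

TOOL_DESCRIPTIONS = "\n".join(
    f"- {name}: call with Action: {name}[input]" for name in TOOLS
)

def stub_llm(prompt: str) -> str:
    """Stand-in for a real model; a deployment would call an LLM API here."""
    if "Observation:" not in prompt:
        return "Thought: I need math.\nAction: calculator[2*21]"
    return "Thought: I have the answer.\nFinal Answer: 42"

def react_loop(question: str, llm=stub_llm, max_steps: int = 5) -> str:
    # Tool descriptions and the conversation so far all share one context.
    prompt = f"Tools available:\n{TOOL_DESCRIPTIONS}\nQuestion: {question}\n"
    for _ in range(max_steps):
        reply = llm(prompt)
        prompt += reply + "\n"  # the interaction itself costs tokens too
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.*?)\]", reply)
        if match:
            name, arg = match.groups()
            observation = TOOLS[name](arg)  # run the plugin
            prompt += f"Observation: {observation}\n"
    return "(no answer within step budget)"

print(react_loop("What is 2 * 21?"))  # → 42
```

Nothing model-specific here — which is the point: any chatbot that can follow the Thought/Action/Observation format can drive plugins this way, provided it's capable enough to emit well-formed actions.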
My experience with Bard is that it probably isn't smart enough to figure out on its own how to use these. Google would probably have to do special fine-tuning or hardcoding for the plugins they want to support.
Would be nice to keep the ecosystem open.