If the economic case justifies it, you can use a cheap, lower-end model to generate the meta information. Considering how cheap gpt-4o-mini is, that seems pretty plausible.
At my startup we also got pretty good results using 7B/8B models to generate meta information about chunks/parts of text.
Asking an LLM is low effort, but it's neither efficient nor guaranteed to be correct.
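For what it's worth, a minimal sketch of what that per-chunk call could look like, hitting the OpenAI chat completions HTTP endpoint with stdlib only. The prompt wording and the metadata fields (title/summary/keywords) are just examples I made up, not any standard schema:

```python
# Sketch: generate metadata for a text chunk with a cheap model
# (e.g. gpt-4o-mini). Stdlib only; needs OPENAI_API_KEY set to run.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(chunk: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build one chat-completions request asking for JSON metadata."""
    payload = {
        "model": model,
        "messages": [{
            "role": "user",
            "content": (
                "Return JSON with keys 'title', 'summary' and 'keywords' "
                "describing this text:\n\n" + chunk
            ),
        }],
        # Constrain the model to emit a JSON object
        "response_format": {"type": "json_object"},
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
        },
    )

def annotate_chunk(chunk: str) -> dict:
    """One network call per chunk; parse the model's JSON answer."""
    with urllib.request.urlopen(build_request(chunk)) as resp:
        body = json.load(resp)
    return json.loads(body["choices"][0]["message"]["content"])
```

You'd still want to validate the returned fields before indexing them, per the caveat above about correctness not being guaranteed.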