As a non-professional but mathematically inclined programmer, I can say I found modelling in MiniZinc crazily hard. I tried the Coursera class, put in the hours, and failed. It took me back to classes where everything I tried failed, yet the proper result felt so close to what I was attempting that it was hard to absorb a better strategy for the next time. It did give me a new feel for complexity and model-space reduction. MiniZinc and the other engines are extremely efficient somewhere under the hood; you just need to put the problem in the right form (and therein lies the catch).
I took a class in grad school where MiniZinc was our go-to tool for many different problems, and I had the same issue. I found the handbook[1] pretty helpful for modeling tricks and for understanding the logic, since I was used to the CPLEX modeling style and Python OR-Tools.
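To make "putting the problem in the right form" concrete, here is a minimal sketch using the classic n-queens problem. The same disequality constraints can be written pairwise, or collapsed into the `alldifferent` global constraint, which solvers propagate far more efficiently; this is exactly the kind of reformulation trick the handbook covers.

```minizinc
int: n = 8;
% q[i] is the column of the queen placed in row i
array[1..n] of var 1..n: q;

% Naive form: O(n^2) pairwise disequalities
% constraint forall(i, j in 1..n where i < j) (q[i] != q[j]);

% Better form: one global constraint per condition,
% giving the solver much stronger propagation
include "alldifferent.mzn";
constraint alldifferent(q);                          % distinct columns
constraint alldifferent([q[i] + i | i in 1..n]);     % distinct diagonals
constraint alldifferent([q[i] - i | i in 1..n]);     % distinct anti-diagonals

solve satisfy;
```

Both forms are logically equivalent, but the global-constraint version is the "right form" for the engine, which is why model shape matters so much.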
Seems like no? I mean, they are great at producing text that looks correct but is a hallucination. LLMs can help you get something roughly OK that a human can then fix. That doesn't seem to be the case here.
Translating is the thing GPT is best at. Hallucination is much less of a problem here because you're dealing with a whole language, not a ton of libraries with different APIs.