For the world's most widely spoken languages (English, Mandarin, Portuguese, etc.), there's every reason to think it's simply a question of how much training data is available for the LLM.
In particular, note that DeepSeek, a Chinese LLM, performs well in both Mandarin and English, which is fairly illustrative all by itself.
If "exotic" grammars turned out to pose a major problem for LLMs, that would possibly challenge some of the most mainstream theories about linguistics, so I regard that as unlikely.