Neither is a primary source, but the advantage of linking to an encyclopedia is that it provides a plain-English summary of the primary sources, as well as pertinent references to them.
Large language models have historically been incapable of providing sources, but newer models are gaining the ability to do so, which is making them as useful as encyclopedias. Until everyone starts using sourced output from LLMs, we get to see who is blindly trusting hallucinations. (https://www.infodocket.com/2024/12/06/report-media-personali...)
It is "good enough" for the general public, but that is not the same thing.
"Source of reliable information" is one of them.
"Source of how a topic has changed over time" is another.
"Source of what disputes are more common regarding specific parts of each article" is another.
And so on...
Even if most people don't actually do that and trust every small bit of information (like how much hair some classical composer had), some other people will in fact track the path of that information and try to understand whether it is truthful or not, relevant or not.
Maybe one day LLMs will allow that kind of thing as well. I don't know. Currently they don't offer that choice.
Does that answer your question?
Wikipedia has a very dedicated fact-checking team that tries to enforce accuracy, and (at least for non-political articles) most agree it does so very well. Perhaps someone will develop a reliable automated fact-checker; it could then be applied to LLM output to “bless” it or point out the mistakes.