You could do the same for any number of more recent inventions, and see if the LLM is able to "innovate" the solution solely from what had been published prior to the actual human invention.
Has anyone done anything like this? It seems like an easy way to measure AI "innovative-ness". That, or to dispel the idea that it exists, if it doesn't.
So, train it on less than 0.01% of the material other LLMs are trained on? It won't prove much if it fails.
For example: "Has Science Found the Fountain of Youth?" -> No
So, answering your question, it's a resounding no
If you did have access to a high-quality pretraining dataset, you could explore training on everything up to 1600, then up to 1610, 1620, ... 1700, and look at how knowledge of calculus emerged over that period, running some tests with the intermediate models to capture the effect.
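To make that concrete, here is a minimal sketch of the cutoff-by-decade loop under the assumptions above. Everything in it is hypothetical: `load_corpus`, `train_model`, and `probe_calculus` are placeholder names standing in for a real dated corpus, a real pretraining run, and a real evaluation on tangent/area problems.

```python
# Hypothetical sketch: train on cumulative slices of a dated corpus and
# probe each intermediate model for calculus-like reasoning.

from dataclasses import dataclass


@dataclass
class Document:
    year: int   # publication year of the source text
    text: str


def load_corpus() -> list[Document]:
    """Placeholder: return pre-1700 documents with known publication years."""
    return [
        Document(1545, "Cardano, Ars Magna ..."),
        Document(1637, "Descartes, La Geometrie ..."),
        Document(1687, "Newton, Principia ..."),
    ]


def train_model(docs: list[Document]):
    """Placeholder: pretrain (or continue pretraining) on the given slice."""
    return {"trained_on": len(docs)}  # stand-in for a real model object


def probe_calculus(model) -> float:
    """Placeholder: score the model on problems that require
    calculus-like reasoning; return a score in [0, 1]."""
    return 0.0


corpus = load_corpus()

# Cutoffs 1600, 1610, ..., 1700: record how the probe score changes
# as later material is included in the training slice.
for cutoff in range(1600, 1701, 10):
    slice_docs = [d for d in corpus if d.year <= cutoff]
    model = train_model(slice_docs)
    score = probe_calculus(model)
    print(f"cutoff={cutoff}  docs={len(slice_docs)}  calculus_probe={score:.2f}")
```

The interesting signal would be whether the probe score jumps only once post-Newton/Leibniz material enters the slice, or whether it rises earlier from precursor work alone.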