Isn't this simply because the dataset used (Putnam-AXIOM Original) is in the training data of the various models?
Given that these are simple variations (variable-name and constant-value changes in math problems), why wouldn't the companies creating these models (OpenAI etc.) generate such variations themselves, to ensure the model is learning how to solve the problem rather than memorizing a solution? Seems like a very obvious thing to do ... (a toy sketch of what I mean is below).
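For concreteness, here's a minimal Python sketch of the kind of augmentation I'm describing: template a problem, rename the variable, change the constants, and recompute the ground-truth answer so the variant stays checkable. The template and names are made up for illustration; this isn't the benchmark's actual pipeline or anything a lab has confirmed doing.

```python
import random

# Hypothetical augmentation sketch: a templated problem whose variable name
# and constants are randomized, with the answer recomputed per variant.
TEMPLATE = "Solve for {var}: {a}{var} + {b} = {c}."

def make_variant(rng: random.Random) -> tuple[str, int]:
    var = rng.choice(["x", "y", "t", "n", "m"])   # variable-name change
    a = rng.randint(2, 9)                         # constant-value changes
    b = rng.randint(1, 20)
    solution = rng.randint(-10, 10)               # pick the answer first...
    c = a * solution + b                          # ...so the constants stay consistent
    return TEMPLATE.format(var=var, a=a, b=b, c=c), solution

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```

The point of picking the solution first is that every generated variant comes with a verifiable answer for free, which is exactly what you'd want if you were augmenting training data to discourage memorization.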