Yes, but in practice, no. These games are usually coded as a loop that runs as fast as it can (unless capped, which the old one wasn’t), using as much CPU as is available. In that case, the FPS is a side effect of how long each loop pass takes (which is what happened here) - i.e. you don’t determine the FPS, the FPS is a result of how complicated or (in)efficient your code is. So going from 30 fps, because the code was so inefficient with the CPU, to 6000 fps, because each loop pass now completes that much faster, the CPU usage is actually the same.
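To make that concrete, an uncapped loop looks roughly like this (a minimal C++ sketch of my own, not the game’s actual code; update_game_state and render_frame are just stand-ins for the real work):

```
#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for the game's real per-frame work.
void update_game_state(double dt) { (void)dt; /* physics, AI, input ... */ }
void render_frame()               { /* draw calls ... */ }

int main() {
    using clock = std::chrono::steady_clock;
    auto prev = clock::now();

    while (true) {  // no cap: one pass per iteration, 100% of a core either way
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - prev).count();
        prev = now;

        update_game_state(dt);
        render_frame();

        // The frame rate is just whatever falls out of the loop time:
        // a 33 ms pass -> ~30 fps, a 0.16 ms pass -> ~6000 fps, same CPU usage.
        std::printf("fps: %.0f\n", dt > 0.0 ? 1.0 / dt : 0.0);
    }
}
```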
Now if your code is so optimized that it can run at 6000 fps, at that point you can say “gee, I don’t need this many updates a second, let me cap it to x frames per second.” But how do you do that? The GPU is grabbing finished frames out of the buffer at its own pace, whether you are generating them at 6k/sec or just 5/sec. To cap your CPU consumption you would usually say “we need a new frame every 0.015s to always have a new frame ready for the GPU so that the screen updates sixty times a second, so if we finish a frame in 0.001s instead, sleep (effectively yielding the CPU to other processes) for 0.01 seconds after we run through the loop.” That may work for some things, but there are other things that need to happen “in real time”, such as refilling the audio buffer (to avoid pauses or corrupted/garbled audio), and you also can’t rely on the system to actually wake you before 0.015s even though you asked it to wake you after just 0.01s to be extra safe.
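A rough sketch of that sleep-based cap (again my own illustration, assuming a 60 Hz target; the spin-wait at the end exists precisely because the OS only guarantees it won’t wake you *before* the requested time, not that it will wake you on time):

```
#include <chrono>
#include <thread>

void update_game_state(double dt) { (void)dt; /* ... */ }
void render_frame()               { /* ... */ }

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::duration<double>(1.0 / 60.0);  // ~0.0167 s

    auto prev = clock::now();
    while (true) {
        auto frame_start = clock::now();
        double dt = std::chrono::duration<double>(frame_start - prev).count();
        prev = frame_start;

        update_game_state(dt);
        render_frame();

        auto deadline = frame_start +
            std::chrono::duration_cast<clock::duration>(frame_budget);
        auto margin = std::chrono::milliseconds(2);

        // Yield the CPU for most of the leftover budget, keeping a small margin
        // because sleep_until may overshoot.
        if (deadline - clock::now() > margin)
            std::this_thread::sleep_until(deadline - margin);

        // Burn the last couple of milliseconds spinning so the next pass starts on time.
        while (clock::now() < deadline) { /* spin */ }
    }
}
```

That spin at the end is the trade-off: you get back most of the CPU you were wasting, but you still burn a little of it to keep the deadline, and anything truly time-critical (like audio) usually ends up on its own thread anyway.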
Tl;dr: yes, once your code is running at 6k fps, capping it to reduce consumption is an option, but running at 6k fps doesn’t actually use more CPU than inefficiently running at 30 fps.
Going far above "6000 fps" might be necessary someday for holographic/3D displays that need to render the scene from hundreds or thousands of different viewpoints for a single frame.
Say you need to render a scene from 1,000 different angles for a 3D display: just to hit a 60 Hz refresh rate you would need to render the scene 60,000 times per second.
This is the game update loop, which excludes rendering. (For some reason people still call it FPS, which is confusing.)
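In other words, the simulation tick rate and the render rate are two separate numbers. A typical way they get decoupled is the fixed-timestep pattern, sketched here (illustrative names, not this game’s actual code):

```
#include <chrono>

void update_game_state(double dt) { (void)dt; /* ... */ }
void render_frame()               { /* ... */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double tick = 1.0 / 120.0;   // fixed simulation step, independent of render rate
    double accumulator = 0.0;

    auto prev = clock::now();
    while (true) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - prev).count();
        prev = now;

        // Run as many fixed-size updates as real time has accumulated...
        while (accumulator >= tick) {
            update_game_state(tick);
            accumulator -= tick;
        }

        // ...then render once per loop pass, so "updates per second" and
        // "frames per second" are two different numbers.
        render_frame();
    }
}
```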
I'm not aware of any displays like that, but if there were, you could optimize by eye tracking each viewer and only rendering the direction they're seeing it from. The "New 3DS" (note: different from the regular 3DS) did this.
That is so absolutely false. Any game you run, if you don't cap the fps, uses 100% of your GPU and potentially your CPU. As soon as you cap the framerate to 60 fps it starts behaving normally.