Think of it like a race where two runners, CPU and GPU, are chained together and want to reach the finish line. It doesn’t matter if one is faster than the other: because they’re chained together, they have to finish the race at the same time.
Suppose GPU is faster than CPU. The CPU huffs and puffs throughout the race, barely able to catch his breath, while the GPU steamrolls ahead, angry that CPU is holding him back. This situation is called a performance bottleneck, and in this instance, poor CPU is the bottleneck.
Now suppose CPU and GPU can carry different kinds of “bags” that adjust their speed. GPU can carry stuff like “Resolution” and “Anti-aliasing” and the CPU can carry stuff like “Simulation Quality” and “Physics Quality”.
Now suppose you give GPU the bag called “High Resolution”. Because it’s high resolution, the bag is heavy. The extra weight means GPU will run slower, and CPU will finally be able to catch his breath.
Did you technically reduce the burden on the CPU? Yes, because now the GPU is slow enough for the CPU to keep up. Does this mean they’ll finish the race faster? No, because now both of them are slow, instead of just one. The only difference is that now both of them are working within their limits.
So if you have an ancient CPU but a kick-ass graphics card, it makes sense for you to use higher resolutions. This is not because it’ll make your game run faster. It’s because there’s nothing you can do to make the game run faster (apart from getting a better CPU or lightening the bags the CPU is carrying), so you might as well spend the graphics card’s wasted power on making the game look prettier.
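The race above can be sketched as a toy model: each frame, the CPU and GPU both do some work, and the frame can’t be shown until the slower of the two is done. The millisecond figures below are made up purely for illustration, not measurements from any real hardware.

```python
def frame_time_ms(cpu_ms: float, gpu_ms: float) -> float:
    # The runners are chained: the frame finishes when the slower one does.
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / frame_time_ms(cpu_ms, gpu_ms)

# An "ancient CPU" that needs 20 ms of work per frame, paired with a
# fast GPU whose per-frame cost depends on the resolution "bag" it carries.
cpu_ms = 20.0

for label, gpu_ms in [("Low resolution", 5.0),
                      ("High resolution", 18.0),
                      ("Ultra resolution", 30.0)]:
    print(f"{label}: GPU {gpu_ms} ms per frame -> {fps(cpu_ms, gpu_ms):.0f} FPS")
```

Note what happens between low and high resolution: the GPU’s load goes from 5 ms to 18 ms, yet the frame rate stays pinned at 50 FPS, because the 20 ms CPU is still the bottleneck. The prettier picture is effectively free. Only when the GPU’s work (30 ms at “ultra”) exceeds the CPU’s does the frame rate actually drop.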