Kind of - you can also pin runners ("This workflow always runs on this runner"). And caching just means not deleting the previous runs' artifacts from the file system.
Imagine building Android - even "cloning the sources" is 200GB of data transfer, build times are in hours. Not having to delete the previous sources and doing an incremental build saves a lot of everything.
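As a rough sketch of what that looks like in GitLab CI (the tag name and build command are just placeholders, and this assumes a persistent, pinned runner rather than an ephemeral one):

    build:
      tags:
        - android-builder        # only runners registered with this tag pick up the job
      variables:
        GIT_STRATEGY: fetch      # reuse the existing checkout, only fetch new commits
        GIT_CLEAN_FLAGS: none    # don't wipe the working dir, so incremental build state survives
      script:
        - ./gradlew assembleDebug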
GitLab also has some tips here: https://docs.gitlab.com/ci/caching/ on using shared caches, which can help in some scenarios, especially with runners in Kubernetes that are ephemeral, i.e. created just before a job starts and destroyed afterward.
tldr; "A cache is one or more files a job downloads and saves. Subsequent jobs that use the same cache don’t have to download the files again, so they execute more quickly."
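In .gitlab-ci.yml that kind of cache is just a key plus a list of paths to save and restore; the key and paths below are illustrative, and the shared backend (e.g. S3) is configured on the runner side, not in the job:

    build:
      cache:
        key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
        paths:
          - .gradle/caches/          # dependency caches worth reusing between jobs
      script:
        - ./gradlew assembleDebug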
It will probably still be slower than a dedicated runner, but possibly require less maintenance ("pet" runner vs "cattle" runner).