~20 GB of VRAM for the 7B model and ~48 GB for the 13B model.
It also depends on the context size. I'd recommend renting a 4090 from a cloud provider like RunPod or Vast.ai to get started, following a PEFT tutorial.
A 4090 only has 24 GB, so it can only fine-tune (and merge, which is more memory-intensive) the 7B model. An RTX 6000 with 48 GB can handle the 13B model. The 70B model presumably needs multiple GPUs, e.g. four RTX 6000s.
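As a rough sanity check on those numbers, here's a back-of-the-envelope VRAM estimator. It's a sketch using common rules of thumb (fp16 weights at 2 bytes/param; full fine-tuning with Adam at roughly 16 bytes/param; LoRA freezing the base weights so adapter gradients/optimizer states are negligible) and it deliberately ignores activations and KV cache, which grow with context length — that overhead is why a 7B LoRA run lands near ~20 GB rather than the bare 14 GB of weights.

```python
def finetune_vram_gb(params_billions: float, method: str = "lora") -> float:
    """Very rough lower-bound VRAM estimate in GB, ignoring activations."""
    weight_bytes = 2.0  # fp16/bf16 base weights
    if method == "full":
        # fp16 gradients (2) + Adam fp32 master weights and two moments (~12)
        bytes_per_param = weight_bytes + 2.0 + 12.0
    elif method == "lora":
        # base weights frozen; adapter params are well under 1% of the total,
        # so their gradients/optimizer states barely register
        bytes_per_param = weight_bytes
    else:
        raise ValueError(f"unknown method: {method}")
    # 1e9 params * bytes/param / 1e9 bytes-per-GB cancels out
    return params_billions * bytes_per_param

print(finetune_vram_gb(7))           # 14.0 -> ~20 GB once activations are added
print(finetune_vram_gb(13))          # 26.0 -> fits a 48 GB card with headroom
print(finetune_vram_gb(70, "full"))  # 1120.0 -> clearly multi-GPU territory
```

The takeaway matches the card sizing above: LoRA on 7B fits a 24 GB 4090 (with room left for activations), 13B wants 48 GB, and anything involving full fine-tuning or 70B pushes you to multiple GPUs.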
If you're just starting out, you can also use a free GPU from Google Colab to fine-tune a 7B model. Fine-tuning 70B gets expensive, so I'd suggest trying smaller models first with a high-quality dataset.