I think it would be helpful to add fine-tuning cost estimates for open-source models (think LLaMA to Alpaca).
From the phrasing around fine-tuning right now, it seems like the cost is derived from OpenAI's fine-tuning API, but it's not very clear.
Also, this would be helpful for other foundation models if it doesn't already exist: how much VRAM it takes to run Stable Diffusion v2.1 at different resolutions, what's needed to run Whisper or Bark for audio, etc.
They mention that they could fine-tune a 6B model for about $7. Obviously the number depends on the amount of data and the model size, but it's probably not going to be a significant expense in practice.
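For intuition on why the cost stays small, here's a rough back-of-the-envelope sketch. The GPU rate, run length, and GPU count are all illustrative assumptions on my part, not measured figures from the project:

```python
# Rough fine-tuning cost sketch. All numbers below are illustrative
# assumptions (cloud spot prices vary), not measured figures.

def finetune_cost_usd(gpu_hourly_rate: float, gpu_hours: float, num_gpus: int = 1) -> float:
    """Estimate the cloud cost of a fine-tuning run in USD."""
    return gpu_hourly_rate * gpu_hours * num_gpus

# Example: an 8-hour run on a single GPU at an assumed ~$1.10/hr
# spot rate lands in the single-digit-dollar range, which is the
# same ballpark as the ~$7 figure quoted for a 6B model.
cost = finetune_cost_usd(gpu_hourly_rate=1.10, gpu_hours=8)
print(f"~${cost:.2f}")
```

Even doubling the data (and hence GPU-hours) or moving to a pricier on-demand rate keeps this in the tens of dollars, which supports the point that fine-tuning an open-source model at this scale is not a significant expense.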