Fine-tuning improves the model by training on many more examples than can fit in a prompt, letting you achieve better results on a wide range of tasks. This notebook provides a step-by-step guide to fine-tuning our new GPT-4o mini model. We'll perform entity extraction using the RecipeNLG dataset, which provides various recipes and a list of extracted generic ingredients for each. This is a common dataset for named entity recognition (NER) tasks.
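To make the task concrete, here is a minimal sketch of what a single training example might look like in the chat fine-tuning format. The prompt wording and recipe content below are illustrative placeholders, not the exact prompts built later in this notebook:

```python
import json

# A hypothetical fine-tuning record in the chat format: the assistant's
# reply is the list of generic ingredients we want the model to extract.
example = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful recipe assistant. Extract the generic ingredients from each recipe provided.",
        },
        {
            "role": "user",
            "content": 'Title: Pancakes\n\nIngredients: ["1 c flour", "2 eggs", "1 c milk"]',
        },
        {"role": "assistant", "content": '["flour", "eggs", "milk"]'},
    ]
}

# Training files are JSON Lines: one JSON-encoded example per line.
with open("recipe_finetune_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```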
Note: GPT-4o mini fine-tuning is available to developers in our Tier 4 and 5 usage tiers. You can start fine-tuning GPT-4o mini by visiting your fine-tuning dashboard, clicking "create", and selecting "gpt-4o-mini-2024-07-18" from the base model drop-down.
We will go through the following steps:
- Setup: Loading our dataset and filtering down to one domain to fine-tune on.
- Data preparation: Preparing your data for fine-tuning by creating training and validation examples, and uploading them to the Files endpoint.
- Fine-tuning: Creating your fine-tuned model (a minimal API sketch follows this list).
- Inference: Using your fine-tuned model for inference on new inputs.
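As a preview of the middle two steps, here is a short sketch of the upload and fine-tuning calls using the OpenAI Python SDK. The file names are placeholders for the training and validation files prepared later in the notebook:

```python
from openai import OpenAI

client = OpenAI()

# Upload the prepared JSONL files to the Files endpoint
# (file names here are hypothetical placeholders).
train_file = client.files.create(
    file=open("recipe_finetune_train.jsonl", "rb"), purpose="fine-tune"
)
valid_file = client.files.create(
    file=open("recipe_finetune_validation.jsonl", "rb"), purpose="fine-tune"
)

# Kick off a fine-tuning job against the GPT-4o mini base model.
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    validation_file=valid_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# Retrieve the job to check progress; fine_tuned_model is populated
# once the job succeeds.
status = client.fine_tuning.jobs.retrieve(job.id)
print(status.status, status.fine_tuned_model)
```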
By the end of this guide you should be able to train, evaluate, and deploy a fine-tuned gpt-4o-mini-2024-07-18 model.
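For the inference step, calling the fine-tuned model works like any other chat completion; the model name below is a placeholder for the fine_tuned_model value returned by your completed job:

```python
from openai import OpenAI

client = OpenAI()

# "ft:gpt-4o-mini-2024-07-18:your-org::abc123" is a hypothetical model
# name; substitute the one from your own fine-tuning job.
completion = client.chat.completions.create(
    model="ft:gpt-4o-mini-2024-07-18:your-org::abc123",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful recipe assistant. Extract the generic ingredients from each recipe provided.",
        },
        {
            "role": "user",
            "content": 'Title: Pancakes\n\nIngredients: ["1 c flour", "2 eggs", "1 c milk"]',
        },
    ],
)
print(completion.choices[0].message.content)
```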
For more information on fine-tuning, you can refer to our documentation guide or API reference.