Part 6/13:
Challenges
While the synopsis generator works well for creating story ideas, Shapiro notes that it is far from perfect. Outputs are inconsistent, sometimes producing generic or off-theme summaries, especially when parameters conflict (e.g., fusing incompatible genres or tones). To improve quality, he explores grading each synopsis on a scale of 1 to 5 so that subpar results can be filtered out efficiently.
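The grade-and-filter idea can be sketched as follows. This is a hypothetical illustration, not Shapiro's actual code: it assumes each synopsis has already been assigned a 1-to-5 grade (e.g., by a separate grading prompt), and simply keeps those at or above a chosen threshold.

```python
def filter_synopses(graded, threshold=4):
    """Keep synopses whose grade meets or exceeds the threshold.

    graded: list of (synopsis_text, grade) pairs, grade in 1..5.
    """
    return [text for text, grade in graded if grade >= threshold]


# Example grades (illustrative values only)
graded = [
    ("A rogue AI befriends a lonely lighthouse keeper.", 5),
    ("A story in which some things happen to people.", 2),  # generic, off-theme
    ("Rival chefs compete to cater a ghost's farewell banquet.", 4),
]

keepers = filter_synopses(graded)
print(keepers)  # only the synopses graded 4 or higher
```

Because grading is cheap relative to regenerating, this kind of post-hoc filter lets many synopses be produced at scale and only the strongest survive.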
Avoiding Fine-Tuning: Cost and Efficiency Considerations
A key insight from Shapiro's experiments is that fine-tuning GPT-3 remains prohibitively expensive relative to prompt-based generation. Fine-tuning a model costs around thirty dollars or more per attempt, and the quality gains are marginal compared to simply generating many synopses or plot outlines at scale via prompting.