Scaling laws of synthetic images for model training... for now
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Abstract
Recent significant advances in text-to-image models unlock the possibility of training vision systems on synthetic images, potentially overcoming the difficulty of collecting curated data at scale. It is unclear, however, how these models behave at scale as more synthetic data is added to the training set. In this paper we study the scaling laws of synthetic images generated by state-of-the-art text-to-image models for the training of supervised models: image classifiers with label supervision and CLIP with language supervision. We identify several factors, including text prompts, classifier-free guidance scale, and the type of text-to-image model, that significantly affect scaling behavior. After tuning these factors, we observe that synthetic images demonstrate a scaling trend similar to, but slightly less effective than, real images in CLIP training, while they significantly underperform in scaling when training supervised image classifiers. Our analysis indicates that the main reason for this underperformance is the inability of off-the-shelf text-to-image models to generate certain concepts, a limitation that significantly impairs the training of image classifiers. Our findings also suggest that scaling synthetic data can be particularly effective in scenarios such as: (1) when there is a limited supply of real images for a supervised problem (e.g., fewer than 0.5 million images in ImageNet); (2) when the evaluation dataset diverges significantly from the training data, indicating an out-of-distribution scenario; or (3) when synthetic data is used in conjunction with real images, as demonstrated in the training of CLIP models.
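Scaling trends of the kind studied in the abstract are commonly summarized by fitting a power law of the form err(N) = a · N^(−b), where N is the training-set size and b is the scaling exponent. The sketch below illustrates the standard fitting procedure in log-log space; the data points and the coefficients a and b are made up for illustration and are not results from the paper.

```python
import numpy as np

# Hypothetical (size, validation-error) pairs generated from a known
# power law err(N) = a * N**(-b) with a = 0.9, b = 0.15. In practice
# these would come from training runs at increasing dataset sizes.
sizes = np.array([1e4, 1e5, 1e6, 1e7])
errors = 0.9 * sizes ** (-0.15)

# A power law is linear in log-log space: log err = log a - b * log N,
# so an ordinary least-squares line fit recovers the exponent.
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
b_hat = -slope          # estimated scaling exponent
a_hat = np.exp(intercept)  # estimated coefficient

print(f"a ≈ {a_hat:.3f}, b ≈ {b_hat:.3f}")  # recovers a ≈ 0.900, b ≈ 0.150
```

Comparing the fitted exponent b between models trained on real versus synthetic data is one way to quantify the "similar but slightly less effective" scaling behavior the abstract describes: a smaller b means error decreases more slowly as data is added.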