Scone: Bridging Composition and Distinction in Subject-Driven Image Generation
via Unified Understanding-Generation Modeling
Yuran Wang1,2* Bohan Zeng1,2* Chengzhuo Tong1,2 Wenxuan Liu1 Yang Shi1,2
Xiaochen Ma1 Hao Liang1 Yuanxing Zhang2 Wentao Zhang1†
1Peking University 2Kling Team, Kuaishou Technology
* Equal contribution, † Corresponding author
- 2025.12.16: The paper, training code, inference and evaluation code, model weights, training data, and the SconeEval benchmark are now released.
Subject-driven image generation has recently gained significant attention, with the focus evolving from single-subject to multi-subject generation that incorporates more input images. Existing methods can process two or more input images and combine their subjects according to instructions, showing potential for more complex composition tasks.
However, existing works primarily focus on expanding subject combinations while neglecting the ability to distinguish target subjects in complex contexts. As shown in Figure 1(a), although current models can combine multiple subjects, they may fail to identify and generate the correct target subject when a reference image contains multiple candidates, leading to problems such as subject omissions (none of the candidate subjects appear) or errors (misidentification of the target subject). Real-world images often involve interference and intricate details, which further limit practical performance. Thus, we emphasize examining the input subjects themselves, focusing on the model's ability to distinguish the target subject within complex contexts and to leverage this information for generation.
Figure 1. The distinction problem and challenges.

- We propose the Scone (Subject-driven composition and distinction enhancement) model, which supports multi-subject composition and excels at subject distinction in complex contexts. Experiments show that Scone ranks first among open-source models on the OmniContext benchmark.
- We introduce the understanding bridge strategy, which turns the understanding expert into a semantic bridge, enabling early multimodal alignment and attention-based semantic filtering to guide the generation expert. This enhances subject distinction and semantic fidelity without adding extra parameters (a conceptual sketch follows this list).
- We develop SconeEval, a challenging benchmark with three difficulty levels, to evaluate performance on subject-driven image generation tasks from both composition and distinction perspectives.
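To give an intuition for the attention-based semantic filtering in the understanding bridge: the understanding expert's instruction-to-reference attention can be used to select which reference-image tokens the generation expert is allowed to attend to. The sketch below is purely conceptual; the tensor shapes and the top-k selection rule are our simplification, not the model's exact mechanism.

```python
import torch
import torch.nn.functional as F

def semantic_filter_mask(und_attn: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Turn the understanding expert's instruction-to-reference attention into a
    boolean mask over reference-image tokens. `und_attn` has shape
    [num_instruction_tokens, num_ref_tokens]; the reference tokens that receive
    the most attention from the instruction are kept, the rest are filtered out."""
    scores = und_attn.mean(dim=0)                      # [num_ref_tokens]
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.zeros_like(scores, dtype=torch.bool)
    keep[scores.topk(k).indices] = True
    return keep                                        # True = visible to the generation expert

def filtered_cross_attention(q, k, v, keep_mask):
    """Generation-expert attention over reference tokens; filtered-out tokens are
    masked to -inf before the softmax so they contribute nothing."""
    attn = (q @ k.transpose(-2, -1)) / k.shape[-1] ** 0.5   # [q_len, num_ref_tokens]
    attn = attn.masked_fill(~keep_mask, float("-inf"))
    return F.softmax(attn, dim=-1) @ v
```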
git clone https://github.com/Ryann-Ran/Scone.git
cd Scone
conda create -n scone python=3.10 -y
conda activate scone
pip install -r requirements.txt
pip install flash_attn==2.5.8 --no-build-isolation
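Optionally, you can verify that the key dependencies are importable (assuming torch is pulled in by requirements.txt):

```python
# Quick environment sanity check (optional).
import torch
import flash_attn  # noqa: F401

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```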
Download our 22K refined single-candidate data and 35K multi-candidate data from Scone-S2I-57K. The 70K base single-candidate data are sampled from open-source datasets such as X2I, MUSAR-Gen, UNO-1M, and Echo-4o-Image. Please refer to the dataset links for more details.
cd Scone
# pip install -U huggingface_hub
hf download Ryann829/Scone-S2I-57K --repo-type=dataset --local-dir ./datasets/Scone-S2I-57K
Organize the data hierarchy as follows:
Scone-S2I-57K
├── parquet_data
│   ├── scone_single_candidate_base/
│   ├── scone_single_candidate_refined/
│   └── scone_multi_candidate/
└── parquet_info
    ├── scone_single_candidate_base.json
    ├── scone_single_candidate_refined.json
    └── scone_multi_candidate.json
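To sanity-check the download, you can open one parquet shard and inspect it (a small optional snippet; it assumes pandas and pyarrow are installed and makes no assumption about the schema):

```python
from pathlib import Path

import pandas as pd  # pip install pandas pyarrow (if not already present)

# Open one shard and print its shape and column names.
data_dir = Path("./datasets/Scone-S2I-57K/parquet_data/scone_multi_candidate")
shard = next(data_dir.rglob("*.parquet"))
df = pd.read_parquet(shard)
print(shard.name, df.shape)
print(df.columns.tolist())
```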
Replace each `your_data_path` placeholder with your actual absolute path in the following files (or use the helper sketch after this list):
- Parquet information files: `./datasets/Scone-S2I-57K/parquet_info/*.json`
- Dataset information file: `./data/dataset_info.py`
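If you prefer not to edit the JSON files by hand, a small helper along these lines can do the substitution. This is only a sketch: it assumes `your_data_path` appears as a literal string in the files and that the replacement should point at the dataset root, and `./data/dataset_info.py` still needs to be edited manually.

```python
from pathlib import Path

# Replace the `your_data_path` placeholder in the parquet info files with an
# absolute path. DATA_ROOT is an example value; point it at wherever the
# downloaded parquet data actually lives on your machine.
DATA_ROOT = Path("./datasets/Scone-S2I-57K").resolve()

for json_file in (DATA_ROOT / "parquet_info").glob("*.json"):
    text = json_file.read_text()
    json_file.write_text(text.replace("your_data_path", str(DATA_ROOT)))
    print("updated", json_file.name)
```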
Download the checkpoint of our base model BAGEL from HuggingFace:
cd Scone
# pip install -U huggingface_hub
hf download ByteDance-Seed/BAGEL-7B-MoT --local-dir ./ckpts/BAGEL-7B-MoT
- Note: To avoid out-of-memory (OOM) issues, we disable the EMA update strategy originally used in BAGEL. All our training processes are conducted on 8 Nvidia A800 GPUs.
- The use of the semantic mask in the understanding bridge strategy is controlled by the training argument `--use_semantic_mask`.
For Stage 1 Step 1, please use the base single-candidate data for 1 epoch (~30 hours):
bash scripts/train_stage1_step1.sh  # 🔥 Und., Gen.
For Stage 1 Step 2, please use the refined single-candidate data for 1 epoch (~15 hours) and replace `model_path` in the script with your Stage 1 Step 1 checkpoint:
bash scripts/train_stage1_step2.sh  # 🔥 Und., Gen.
For Stage 2 Step 1, please use the refined single-candidate data and the multi-candidate data for 1k steps (~5 hours) and replace `model_path` in the script with your Stage 1 Step 2 checkpoint:
bash scripts/train_stage2_step1.sh  # 🔥 Und. ❄️ Gen.
For Stage 2 Step 2, please use the refined single-candidate data and the multi-candidate data for 1k steps (~5 hours) and replace `model_path` in the script with your Stage 2 Step 1 checkpoint:
bash scripts/train_stage2_step2.sh  # 🔥 Und., Gen.
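In the comments above, 🔥 and ❄️ presumably mark which expert is trained and which is frozen at each step (Und. = understanding expert, Gen. = generation expert). Conceptually, this is just a per-expert `requires_grad` toggle; the sketch below uses hypothetical submodule names and is not the repository's actual training code.

```python
import torch.nn as nn

def set_expert_trainable(model: nn.Module, train_und: bool, train_gen: bool) -> None:
    """Generic sketch: freeze or unfreeze the two experts of a unified model.
    `und_expert` and `gen_expert` are hypothetical attribute names, a
    simplification of BAGEL's Mixture-of-Transformers layout."""
    for p in model.und_expert.parameters():
        p.requires_grad = train_und
    for p in model.gen_expert.parameters():
        p.requires_grad = train_gen

# Stage 2 Step 1 (🔥 Und. ❄️ Gen.): only the understanding expert is updated.
# set_expert_trainable(model, train_und=True, train_gen=False)
```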
Download the Scone model checkpoint from HuggingFace:
# pip install -U huggingface_hub
hf download Ryann829/Scone --local-dir ./ckpts/Scone
Run the inference script:
bash scripts/inference_single_case.sh
Example output: images are sampled at 1024x1024 resolution with seed 1234, except for the GPT-4o and Gemini-2.5-Flash-Image APIs.
We support inference and evaluation on both the OmniContext and our SconeEval benchmarks, building upon the OmniContext repository.
We provide the jsonl version of the OmniContext data in the Ryann829/OmniContext-jsonl dataset on Hugging Face.
Download the data:
# pip install -U huggingface_hub
hf download Ryann829/OmniContext-jsonl --repo-type=dataset --local-dir ../OmniContext-jsonl
Run the inference script:
bash scripts/inference_omnicontext.sh
Use GPT-4.1 to evaluate the quality of the generated images and calculate the final score. Please ensure your API key is configured before running the script.
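For reference, a minimal sketch of querying a GPT-4.1 judge with the OpenAI Python SDK is shown below, before the actual evaluation command. The prompt and 0-10 scoring format are placeholders rather than the repository's exact rubric, and the client assumes the standard OPENAI_API_KEY environment variable.

```python
import base64

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

def judge_image(image_path: str, instruction: str) -> str:
    """Hypothetical single-image judging call. The repository's eval scripts define
    their own prompts and scoring rubric; this only illustrates how a GPT-4.1
    judge can be queried with an image plus an instruction."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Rate from 0 to 10 how well this image follows the instruction: {instruction}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```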
bash eval/s2i/omnicontext/eval.sh

To evaluate a model's ability to distinguish and generate the referred subject in complex visual contexts, we introduce a new benchmark, SconeEval. It contains 409 test cases across character, object, and scene combinations as well as subject distinction, with 19 case types (Figure 2(a)) and 6 subtasks (Figure 2(b)), providing a comprehensive evaluation of a model's ability to distinguish and utilize subject features.
Unlike traditional benchmarks that emphasize visual fidelity or text alignment, SconeEval focuses on cross-modal reasoning over complex contexts involving reference images and instructions, which requires deciding which subject to generate when multiple candidates appear within or across images.
SconeEval includes three progressively challenging tasks, as shown in Figure 2(c): composition, distinction, and distinction & composition. In the composition task, each reference image contains a single subject, and one or more reference images are used for single- or multi-subject generation. In the distinction task, each reference image contains multiple subjects, and the model must generate one target subject. The distinction & composition task integrates both settings: each reference image contains multiple subjects, and multiple reference images are used for multi-subject generation. Tasks involving distinction include cross-category and intra-category cases, depending on whether the candidate subjects in a reference image belong to the same category.
Figure 2. Overview of our SconeEval benchmark.

In the table below, Comp. = Composition, Dist. = Distinction, and D&C = Distinction & Composition; COM and DIS denote the composition and distinction scores, and all metrics are higher-is-better (↑).

| Method | Comp. Single (COM) | Comp. Multi (COM) | Dist. Cross (COM) | Dist. Cross (DIS) | Dist. Intra (COM) | Dist. Intra (DIS) | D&C Cross (COM) | D&C Cross (DIS) | D&C Intra (COM) | D&C Intra (DIS) | Avg. COM | Avg. DIS | Avg. Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Closed-Source Model | |||||||||||||
| Gemini-2.5-Flash-Image | 8.87 | 7.94 | 9.12 | 9.15 | 9.00 | 8.50 | 8.27 | 8.87 | 8.17 | 8.85 | 8.56 | 8.84 | 8.70 |
| GPT-4o* | 8.92 | 8.51 | 9.18 | 8.55 | 9.45 | 9.01 | 8.83 | 8.49 | 8.99 | 9.56 | 8.98 | 8.90 | 8.94 |
| Generation Model | |||||||||||||
| FLUX.1 Kontext [dev] | 7.92 | - | 7.93 | 8.45 | 6.20 | 6.11 | - | - | - | - | - | - | - |
| USO | 8.03 | 5.19 | 7.96 | 8.50 | 7.14 | 6.51 | 5.10 | 6.25 | 5.07 | 5.57 | 6.41 | 6.71 | 6.56 |
| UNO | 7.53 | 5.38 | 7.27 | 7.90 | 6.76 | 6.53 | 5.27 | 7.02 | 5.61 | 6.27 | 6.31 | 6.93 | 6.62 |
| UniWorld-V2 (Edit-R1-Qwen-Image-Edit-2509) | 8.41 | 7.16 | 8.63 | 8.24 | 7.44 | 6.77 | 7.52 | 8.03 | 7.70 | 7.24 | 7.81 | 7.57 | 7.69 |
| Qwen-Image-Edit-2509 | 8.54 | 6.85 | 8.85 | 8.57 | 7.32 | 6.86 | 7.53 | 8.13 | 7.49 | 7.02 | 7.76 | 7.65 | 7.70 |
| Unified Model | |||||||||||||
| BAGEL | 7.14 | 5.55 | 7.49 | 7.95 | 6.93 | 6.21 | 6.44 | 7.38 | 6.87 | 7.27 | 6.74 | 7.20 | 6.97 |
| OmniGen2 | 8.00 | 6.59 | 8.31 | 8.99 | 6.99 | 6.80 | 7.28 | 8.30 | 7.14 | 7.13 | 7.39 | 7.81 | 7.60 |
| Echo-4o | 8.58 | 7.73 | 8.36 | 8.33 | 7.74 | 7.18 | 7.87 | 8.72 | 8.01 | 8.33 | 8.05 | 8.14 | 8.09 |
| Scone (Ours) | 8.52 | 7.40 | 8.98 | 9.73 | 7.97 | 7.74 | 8.20 | 9.25 | 8.21 | 8.44 | 8.21 | 8.79 | 8.50 |
- *: GPT-4o responded to 365~370 test cases out of the total 409 cases due to OpenAI safety restrictions.
- To mitigate randomness, we perform 3 rounds of sampling at 1024x1024 resolution, scoring 3 times per round, yielding 9 group results. The final score is the average of these results.
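Concretely, the nine scores per metric (3 rounds × 3 scorings) are simply averaged; with illustrative numbers only:

```python
import numpy as np

# 3 sampling rounds x 3 scorings per round = 9 scores per metric;
# the reported number is their mean (values here are illustrative).
scores = np.array([[8.4, 8.6, 8.5],
                   [8.3, 8.7, 8.4],
                   [8.6, 8.5, 8.5]])
print(round(float(scores.mean()), 2))
```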
Download the data:
# pip install -U huggingface_hub
hf download Ryann829/SconeEval --repo-type=dataset --local-dir ../SconeEval
Run the script:
bash scripts/inference_sconeeval.sh
Use GPT-4.1 to evaluate the quality of the generated images and calculate the final score. Please ensure your API key is configured before running the script.
bash eval/s2i/sconeeval/eval.sh

- Release paper
- Release training code
- Release inference and evaluation code
- Release model weight
- Release training data
- Release SconeEval benchmark
If you find Scone helpful, please consider giving the repo a star ⭐.
If you find this project useful for your research, please consider citing our paper:
@misc{wang2025sconebridgingcompositiondistinction,
title={Scone: Bridging Composition and Distinction in Subject-Driven Image Generation via Unified Understanding-Generation Modeling},
author={Yuran Wang and Bohan Zeng and Chengzhuo Tong and Wenxuan Liu and Yang Shi and Xiaochen Ma and Hao Liang and Yuanxing Zhang and Wentao Zhang},
year={2025},
eprint={2512.12675},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.12675},
}

This project builds upon the following repositories:
Special thanks to these original projects and open-source datasets for their valuable contributions.