MAGIC: Mask-Guided Diffusion Inpainting with Multi-Level Perturbations and Context-Aware Alignment for Few-Shot Anomaly Generation
Few-shot anomaly generation is emerging as a practical solution for augmenting the scarce anomaly data in industrial quality-control settings. An ideal generator would meet three demands at once: (i) keep the normal background intact, (ii) inpaint anomalous regions so they tightly overlap the corresponding anomaly masks, and (iii) place anomalous regions at semantically valid locations, while still producing realistic, diverse appearances from only a handful of real examples. Existing diffusion-based methods usually satisfy at most two of these requirements: global anomaly generators corrupt the background, whereas mask-guided ones often falter when the mask is imprecise or misplaced. We propose MAGIC (Mask-guided inpainting with multi-level perturbations and Context-aware alignment) to resolve all three issues. At its core, MAGIC fine-tunes a Stable Diffusion inpainting backbone that preserves normal regions and ensures strict adherence of the synthesized anomaly to the supplied mask, directly addressing background corruption and misalignment. To offset the diversity loss that fine-tuning can cause, MAGIC adds two complementary perturbation strategies: (i) Gaussian prompt-level perturbation, applied during fine-tuning and inference, that broadens the global appearance of anomalies while avoiding low-fidelity textual appearances, and (ii) mask-guided spatial noise injection that enriches local texture variations. Additionally, a context-aware mask alignment (CAMA) module forms semantic correspondences and relocates masks so that every anomaly remains plausibly contained within the host object, eliminating out-of-boundary artifacts. Under an identical evaluation protocol on the MVTec-AD dataset, MAGIC outperforms prior state-of-the-art methods on downstream anomaly tasks.
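For intuition, the two perturbation strategies can be sketched in a few lines. This is an illustrative sketch, not the repository's implementation: the function names and the additive-noise formulation are assumptions, and only `text_noise_scale` mirrors a flag that appears in the training and inference commands below.

```python
import torch

def perturb_text_embedding(text_emb: torch.Tensor, text_noise_scale: float = 1.0) -> torch.Tensor:
    """Gaussian prompt-level perturbation (sketch): add zero-mean Gaussian
    noise to the prompt embedding (B, L, D) to diversify global appearance."""
    return text_emb + text_noise_scale * torch.randn_like(text_emb)

def inject_masked_noise(latents: torch.Tensor, mask: torch.Tensor, strength: float) -> torch.Tensor:
    """Mask-guided spatial noise injection (sketch): perturb only latent
    positions inside the anomaly mask (mask: (B, 1, H, W), values in {0, 1})."""
    return latents + strength * mask * torch.randn_like(latents)
```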
This environment is used for training and inference of our Stable Diffusion Inpainting-based anomaly generation model.
- Create and activate the conda environment:

```bash
conda create -n magic python=3.9
conda activate magic
```

- Install CUDA and PyTorch:

```bash
conda install -c conda-forge cudatoolkit=11.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 --upgrade
```

- Install other dependencies:

```bash
pip install -r requirements.txt
pip install opencv-python
```

Download the MVTec-AD dataset from: https://www.mvtec.com/company/research/datasets/mvtec-ad
Extract it to a target directory and use the path when running training or inference.
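After extraction, the dataset follows the standard MVTec-AD layout, shown here for the `screw` category:

```
mvtec_ad/
└── screw/
    ├── train/
    │   └── good/             # normal training images (000.png, 001.png, ...)
    ├── test/
    │   ├── good/
    │   └── <defect_type>/    # e.g., scratch_head, thread_side
    └── ground_truth/
        └── <defect_type>/    # pixel-level anomaly masks
```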
- Source: https://github.com/xuebinqin/U-2-Net
- License: Apache License 2.0
This model is used to extract foreground masks from normal images. Save each generated mask as:

```
<mask_root>/<category>/train/good/<image_filename>_mask.png
```

Example:

```
mvtec_ad/screw/train/good/000.png → obj_foreground_mask/screw/train/good/000_mask.png
```
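If you are adapting U-2-Net's own test script, the final saving step might look like the sketch below. The 0.5 threshold and the helper name are illustrative assumptions; only the output path layout matches what MAGIC expects.

```python
import os
import cv2
import numpy as np

def save_foreground_mask(saliency: np.ndarray, image_path: str,
                         dataset_root: str, mask_root: str) -> str:
    """Binarize a U-2-Net saliency map (float, in [0, 1]) and save it under the
    layout expected by MAGIC: <mask_root>/<category>/train/good/<name>_mask.png."""
    mask = (saliency > 0.5).astype(np.uint8) * 255          # assumed threshold
    rel = os.path.relpath(image_path, dataset_root)         # e.g. screw/train/good/000.png
    stem, _ = os.path.splitext(rel)
    out_path = os.path.join(mask_root, stem + "_mask.png")
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    cv2.imwrite(out_path, mask)
    return out_path
```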
GeoAware-SC is used to compute keypoint matches between normal and defective images, and is required for our CAMA module.
⚠️ This runs in a separate environment from the inpainting model.
- Source: https://github.com/Junyi42/GeoAware-SC
- License: Not specified (assumed all rights reserved)
- Clone GeoAware-SC:

```bash
git clone https://github.com/Junyi42/GeoAware-SC.git
```

- Add the matching script. Move the `CAMA_matching.py` file into the cloned folder:

```
GeoAware-SC/CAMA_matching.py
```

Example command:

```bash
mv CAMA_matching.py GeoAware-SC/CAMA_matching.py
```
- Follow the GeoAware-SC setup guide (e.g., install dependencies).
- Run the following to generate `matching_result.json`:
```bash
python CAMA_matching.py \
    --mvtecad_dir /path/to/mvtec_ad \
    --categories screw \
    --img_size 480 \
    --out_dir /path/to/matching_output \
    --mask_root /path/to/obj_foreground_mask
```

Replace `screw` with the desired category name.
The resulting `matching_result.json` file will be saved inside the specified `--out_dir` folder and is required for inference.
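The exact schema of `matching_result.json` is defined by `CAMA_matching.py`; the sketch below only illustrates the CAMA idea of relocating a mask along keypoint correspondences, assuming each entry stores paired `(x, y)` keypoints on the normal and defective images.

```python
import json
import numpy as np

def relocate_mask(mask: np.ndarray, src_kps: np.ndarray, dst_kps: np.ndarray) -> np.ndarray:
    """Translate an anomaly mask by the mean keypoint offset so it lands on a
    semantically corresponding region of the target object (illustrative only)."""
    dx, dy = np.mean(dst_kps - src_kps, axis=0).round().astype(int)  # kps: (K, 2) in (x, y)
    ys, xs = np.nonzero(mask)
    ys = np.clip(ys + dy, 0, mask.shape[0] - 1)
    xs = np.clip(xs + dx, 0, mask.shape[1] - 1)
    relocated = np.zeros_like(mask)
    relocated[ys, xs] = 255
    return relocated

with open("/path/to/matching_output/matching_result.json") as f:
    matches = json.load(f)  # actual schema is defined by CAMA_matching.py
```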
To train on the `screw` category:

```bash
python run_train.py \
    --base_dir /path/to/mvtec_ad \
    --output_name output_name \
    --text_noise_scale 1.0 \
    --category screw
```

Replace `screw` with any other category name from MVTec AD.
To run inference with trained checkpoints:

```bash
python inference.py \
    --model_ckpt_root /path/to/trained/ckpt \
    --categories screw \
    --dataset_type mvtec \
    --text_noise_scale 1.0 \
    --defect_json /path/to/defect_classification.json \
    --match_json /path/to/matching_result.json \
    --normal_masks /obj_foreground_mask \
    --mask_dir /anomaly_mask \
    --anomaly_stop_step 20 \
    --CAMA \
    --base_dir /path/to/mvtec_ad \
    --anomaly_strength_min 0.0 \
    --anomaly_strength_max 0.6
```

This repository provides four evaluation scripts to assess the quality and diversity of generated anomaly images.
- KID:

```bash
python cal_kid.py --real_path=/path/to/mvtec_ad --generated_path=/path/to/generated_images
```

- IC-LPIPS:

```bash
python cal_ic_lpips.py --mvtec_path=/path/to/mvtec_ad --gen_path=/path/to/generated_images
```

- Train the downstream classifier:

```bash
python train-classification.py \
    --mvtec_path=/path/to/mvtec_ad \
    --generated_data_path=/path/to/generated_images \
    --checkpoint_path=/path/to/save_checkpoints
```

- Test the downstream classifier:

```bash
python test-classification.py \
    --mvtec_path=/path/to/mvtec_ad \
    --generated_data_path=/path/to/generated_images \
    --checkpoint_path=/path/to/save_checkpoints
```

All scripts are located under the `evaluation/` directory. Replace each path with the appropriate location of your data or checkpoints.
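As an independent sanity check (not a replacement for `cal_kid.py`), KID can also be computed with torchmetrics. This sketch assumes `pip install torchmetrics torch-fidelity` and uses random tensors as stand-ins for your real and generated image batches.

```python
import torch
from torchmetrics.image.kid import KernelInceptionDistance

kid = KernelInceptionDistance(subset_size=50)
# Stand-ins for your images: uint8 tensors of shape (N, 3, H, W).
real_imgs = torch.randint(0, 256, (100, 3, 256, 256), dtype=torch.uint8)
fake_imgs = torch.randint(0, 256, (100, 3, 256, 256), dtype=torch.uint8)
kid.update(real_imgs, real=True)
kid.update(fake_imgs, real=False)
kid_mean, kid_std = kid.compute()
print(f"KID: {kid_mean:.4f} +/- {kid_std:.4f}")
```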
See the `licenses/` folder:

- `CREATIVEML-OPEN-RAIL-M.txt` - Stable Diffusion v2
- `GEOAWARE-NOTICE.txt` - GeoAware-SC (no license specified)
- `LICENSE_U2NET.txt` - U-2-Net (Apache 2.0)
This code is licensed for academic research and non-commercial use only.
- You may use, modify, and distribute this code for academic or educational purposes.
- Commercial use is strictly prohibited without explicit written permission from the authors.
- If you use this code in your research, please cite our paper.
- This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License. See https://creativecommons.org/licenses/by-nc/4.0/ for details.
© Jae Hyuck Choi, 2025
