Every time I think I've finally fixed one thing, something else breaks, and then another error pops up, damn it ㅠ........

max_train_steps = 2400

stop_text_encoder_training = 0

lr_warmup_steps = 240
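
(For context, my math on where these numbers come from, going by the dataset info further down: 32 images × 100 repeats = 3200 images per epoch, batch size 4 → 800 steps per epoch, so 2400 steps is 3 epochs, and 240 warmup steps is 10% of the total. A quick sketch:

images, repeats, batch_size = 32, 100, 4
steps_per_epoch = images * repeats // batch_size   # 3200 / 4 = 800
print(steps_per_epoch, 2400 // steps_per_epoch, 240 / 2400)   # 800 3 0.1

)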

Running on local URL:  http://127.0.0.1:7860


To create a public link, set `share=True` in `launch()`.

accelerate launch --num_cpu_threads_per_process=2 "train_db.py" --v2 --enable_bucket --pretrained_model_name_or_path="C:/Users/64126/Downloads/test/stable-diffusion-webui/models/Stable-diffusion/nyanMix_230303Absurd2.safetensors" --train_data_dir="C:/Users/64126/OneDrive/바탕 화면/Ai/img" --resolution=512,512 --output_dir="C:/Users/64126/OneDrive/바탕 화면/Ai/model" --logging_dir="C:/Users/64126/OneDrive/바탕 화면/Ai/log" --save_model_as=safetensors --output_name="last" --max_data_loader_n_workers="2" --learning_rate="1e-05" --lr_scheduler="cosine" --lr_warmup_steps="240" --train_batch_size="4" --max_train_steps="2400" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --optimizer_type="AdamW8bit" --max_data_loader_n_workers="2" --bucket_reso_steps=64 --shuffle_caption --gradient_checkpointing --xformers --persistent_data_loader_workers --bucket_no_upscale --random_crop

Could not find module 'C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\Lib\site-packages\xformers\_C.pyd' (or one of its dependencies). Try using the full path with constructor syntax.

WARNING:root:WARNING: Could not find module 'C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\Lib\site-packages\xformers\_C.pyd' (or one of its dependencies). Try using the full path with constructor syntax.

Need to compile C++ extensions to get sparse attention suport. Please run python setup.py build develop

WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:

    PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.1+cu118)

    Python  3.10.11 (you have 3.10.6)

  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)

  Memory-efficient attention, SwiGLU, sparse and more won't be available.

  Set XFORMERS_MORE_DETAILS=1 for more details
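
In case the version mismatch in that warning matters, here's a minimal check I can run inside the kohya venv (standard imports only, nothing kohya-specific):

import sys
import torch

print(sys.version.split()[0])   # xformers wheel wants 3.10.11; I have 3.10.6
print(torch.__version__)        # wheel was built for 2.0.0+cu118; I have 2.0.1+cu118
print(torch.version.cuda)       # CUDA version this torch build uses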

prepare tokenizer

prepare images.

found directory C:\Users\64126\OneDrive\바탕 화면\Ai\img\100_yeah Ai contains 32 image files

3200 train images with repeating.

0 reg images.

no regularization images / 正則化画像が見つかりませんでした

[Dataset 0]

  batch_size: 4

  resolution: (512, 512)

  enable_bucket: True

  min_bucket_reso: 256

  max_bucket_reso: 1024

  bucket_reso_steps: 64

  bucket_no_upscale: True


  [Subset 0 of Dataset 0]

    image_dir: "C:\Users\64126\OneDrive\바탕 화면\Ai\img\100_yeah Ai"

    image_count: 32

    num_repeats: 100

    shuffle_caption: True

    keep_tokens: 0

    caption_dropout_rate: 0.0

    caption_dropout_every_n_epoches: 0

    caption_tag_dropout_rate: 0.0

    color_aug: False

    flip_aug: False

    face_crop_aug_range: None

    random_crop: True

    token_warmup_min: 1,

    token_warmup_step: 0,

    is_reg: False

    class_tokens: yeah Ai

    caption_extension: .caption
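
(Side note on the subset dump above: as I understand kohya's convention, num_repeats = 100 and class_tokens = "yeah Ai" both come straight from the training folder name 100_yeah Ai, i.e. the number before the underscore is the repeat count and the rest becomes the class tokens.)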



[Dataset 0]

loading image sizes.

100%|█████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 484.80it/s]

make buckets

min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically / bucket_no_upscaleが指定された場合は、bucketの解像度は画像サイズから自動計算されるため、min_bucket_resoとmax_bucket_resoは無視されます

number of images (including repeats) / 各bucketの画像枚数(繰り返し回数を含む)

bucket 0: resolution (192, 192), count: 100

bucket 1: resolution (320, 320), count: 100

bucket 2: resolution (320, 576), count: 100

bucket 3: resolution (320, 640), count: 200

bucket 4: resolution (320, 768), count: 100

bucket 5: resolution (384, 512), count: 400

bucket 6: resolution (384, 576), count: 600

bucket 7: resolution (384, 640), count: 300

bucket 8: resolution (448, 448), count: 200

bucket 9: resolution (512, 448), count: 100

bucket 10: resolution (512, 512), count: 200

bucket 11: resolution (576, 256), count: 100

bucket 12: resolution (576, 320), count: 200

bucket 13: resolution (576, 384), count: 200

bucket 14: resolution (640, 384), count: 300

mean ar error (without repeats): 0.03571981911004995

prepare accelerator

C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\lib\site-packages\accelerate\accelerator.py:249: FutureWarning: `logging_dir` is deprecated and will be removed in version 0.18.0 of 🤗 Accelerate. Use `project_dir` instead.

  warnings.warn(

Using accelerator 0.15.0 or above.

loading model for process 0/1

load StableDiffusion checkpoint: C:/Users/64126/Downloads/test/stable-diffusion-webui/models/Stable-diffusion/nyanMix_230303Absurd2.safetensors

C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\lib\site-packages\safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()

  with safe_open(filename, framework="pt", device=device) as f:

Traceback (most recent call last):

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\train_db.py", line 477, in <module>

    train(args)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\train_db.py", line 102, in train

    text_encoder, vae, unet, load_stable_diffusion_format = train_util.load_target_model(args, weight_dtype, accelerator)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\library\train_util.py", line 3033, in load_target_model

    text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\library\train_util.py", line 2999, in _load_target_model

    text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(args.v2, name_or_path, device)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\library\model_util.py", line 863, in load_models_from_stable_diffusion_checkpoint

    info = unet.load_state_dict(converted_unet_checkpoint)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict

    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:

        size mismatch for down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).

        size mismatch for up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

        size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

        size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).

Traceback (most recent call last):

  File "C:\Users\64126\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main

    return _run_code(code, main_globals, None,

  File "C:\Users\64126\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code

    exec(code, run_globals)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main

    args.func(args)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 923, in launch_command

    simple_launcher(args)

  File "C:\Users\64126\OneDrive\바탕 화면\new ai\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 579, in simple_launcher

    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)

subprocess.CalledProcessError: Command '['C:\\Users\\64126\\OneDrive\\바탕 화면\\new ai\\kohya_ss\\venv\\Scripts\\python.exe', 'train_db.py', '--v2', '--enable_bucket', '--pretrained_model_name_or_path=C:/Users/64126/Downloads/test/stable-diffusion-webui/models/Stable-diffusion/nyanMix_230303Absurd2.safetensors', '--train_data_dir=C:/Users/64126/OneDrive/바탕 화면/Ai/img', '--resolution=512,512', '--output_dir=C:/Users/64126/OneDrive/바탕 화면/Ai/model', '--logging_dir=C:/Users/64126/OneDrive/바탕 화면/Ai/log', '--save_model_as=safetensors', '--output_name=last', '--max_data_loader_n_workers=2', '--learning_rate=1e-05', '--lr_scheduler=cosine', '--lr_warmup_steps=240', '--train_batch_size=4', '--max_train_steps=2400', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--optimizer_type=AdamW8bit', '--max_data_loader_n_workers=2', '--bucket_reso_steps=64', '--shuffle_caption', '--gradient_checkpointing', '--xformers', '--persistent_data_loader_workers', '--bucket_no_upscale', '--random_crop']' returned non-zero exit status 1.
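
After staring at the traceback: every size mismatch above is the same pattern, 768 in the checkpoint vs 1024 in the model being built. As far as I know, 768 is the cross-attention width of SD1.x models (CLIP ViT-L text encoder) and 1024 is SD2.x (OpenCLIP ViT-H), so this looks like an SD1.5 checkpoint being loaded with --v2 set. A minimal sketch to check what a checkpoint actually is (assumes the safetensors package is installed; adjust the path to yours):

# Peek at the safetensors header to tell SD1.x (768) from SD2.x (1024).
from safetensors import safe_open

path = "C:/Users/64126/Downloads/test/stable-diffusion-webui/models/Stable-diffusion/nyanMix_230303Absurd2.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        if "attn2.to_k.weight" in key:
            # last dimension: 768 -> SD1.x, 1024 -> SD2.x
            print(key, f.get_slice(key).get_shape())
            break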

Anyone know why it's doing this now?

It said xformers was the problem, so I deleted it and reinstalled it, but that didn't help ㅠ
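
Next thing I'm going to try, based on the 768-vs-1024 mismatches above: unchecking v2 (i.e. dropping the --v2 flag), since that flag is for SD2.x models and nyanMix looks like it's SD1.5-based. If someone knows better, please correct me.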