Purpose )

Use diff-svc easily, even though it has no web version

Set up the bat files once, and you never have to type commands again!



What it can do )

Fresh training: run the bat file, enter the model name, batch size, endless_ds, and num_ckpt_keep values, and everything after that proceeds automatically

Resuming training: run the bat file, pick a model name, and TensorBoard launches automatically, followed by training

Inference: run the bat file, pick a model name, and it automatically selects the checkpoint with the highest step and renders every track in the raw folder into the results folder



Prerequisites )

https://github.com/wlsdml1114/diff-svc

Open the link above, scroll down to the install links,

and download Anaconda3, ffmpeg, CUDA, diff-svc, and the checkpoints

Install the downloaded Anaconda3, ffmpeg, and CUDA first,

then extract the downloaded diff-svc repo wherever you like

Inside the checkpoints folder of the diff-svc directory,

extract all of the downloaded checkpoints


On the GitHub page, follow the steps up to (but not including) the training-environment setup section,


then paste all of your prepared dataset audio files

into the preprocess folder inside the diff-svc folder

and the preparation is done
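Before running anything, you can confirm the dataset actually landed in the right place. A minimal Python sketch (a hypothetical helper, not part of the diff-svc repo) that mirrors the wav/mp3 existence check the new-training bat performs:

```python
from pathlib import Path

def count_dataset_audio(filenames):
    """Count wav/mp3 entries among the given file names.

    Mirrors the 'if not exist' check the new-training bat performs
    before it starts preprocessing. Hypothetical helper, not part
    of the diff-svc repo.
    """
    return sum(1 for name in filenames
               if Path(name).suffix.lower() in (".wav", ".mp3"))
```

For example, `count_dataset_audio(os.listdir(r"I:\_Diff-svc\preprocess"))` should be nonzero before you start a fresh training run (the `I:\_Diff-svc` path is just this guide's example install location).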





How to create the BAT files )












You can name the bat files anything you like, but the name must end in .bat

When saving, the bat files must be encoded as ANSI

Put the created bat files in the folder where you extracted diff-svc

From then on, every task can be done just by running the bat files





New training.bat  (encoding: ANSI)

title Train(F) Diff-SVC
SETLOCAL ENABLEDELAYEDEXPANSION

REM ================================
REM root is your Anaconda3 install path
REM ================================

set root=C:\ProgramData\anaconda3
set dpath=I:\_Diff-svc

REM ================================
REM dpath is your diff-svc install path
REM ================================


set cpath=%dpath%\data\


if not exist "%dpath%\preprocess\*.wav" ( 
    if not exist "%dpath%\preprocess\*.mp3" ( 
        echo No audio files to train on were found in the preprocess folder
        echo Please read the instructions first
        start chrome.exe --incognito "https://github.com/wlsdml1114/diff-svc"
        goto :file_not_exist
    )
)


:stt1
cls
echo Enter a name for the model to train^!
echo (no spaces or special characters allowed)
set /p user_name= Type a name, then press Enter : 
if "%user_name%" == "" ( goto :stt1 )
set user_name=%user_name: =%
set user_name=%user_name:\=%
set user_name=%user_name:/=%
set user_name=%user_name::=%
set user_name=%user_name:?=%
set user_name=%user_name:"=%
set user_name=%user_name:<=%
set user_name=%user_name:>=%
set user_name=%user_name:|=%

REM echo %user_name%
set ui_binary=data/binary/%user_name%
set ui_raw_data_dir=data/dataset/%user_name%
set ui_speaker_id=%user_name%
set ui_work_dir=checkpoints/%user_name%
echo.


:stt2
echo Enter the maximum number of CKPT (checkpoint) files to keep (default 10, max 100)
set /p user_ckpt= Type a number, then press Enter : 
if "%user_ckpt%" == "" (
set user_ckpt=10
echo.
goto :stt3
) else (
for /L %%a in (10,1,100) do (
    if "%user_ckpt%" == "%%a" (
        set user_ckpt=%%a
        echo.
        goto :stt3
    )
))
echo.
echo Please enter a valid value
echo.
goto :stt2


:stt3
echo Enter the batch size the model trains on at once (default 8, max 128)
set /p user_max_sentences= Type a number, then press Enter : 
if "%user_max_sentences%" == "" (
set user_max_sentences=8
echo.
goto :stt4
) else (
for /L %%a in (8,1,128) do (
    if "%user_max_sentences%" == "%%a" (
        set user_max_sentences=%%a
        echo.
        goto :stt4
    )
))
echo.
echo Please enter a valid value
echo.
goto :stt3


:stt4
echo Setting the endless_ds value
echo Is your dataset longer than 1 hour in total^?
echo     If longer than 1 hour, type 1 and press Enter
echo     If shorter than 1 hour, type 0 and press Enter
set /p ui_endless_ds= Enter a value : 
for /L %%a in (0,1,1) do (
    if "%ui_endless_ds%" == "%%a" (
        set ui_endless_ds=%%a
        goto :stt0
    )
)
echo.
echo Please enter a valid value
echo.
goto :stt4


:stt0
cls
echo Model name : %user_name%
echo Checkpoints to keep : %user_ckpt%
if "%ui_endless_ds%" == "0" (
    set ui_endless_ds=True
    echo Dataset length : under 1 hour
) else (
    set ui_endless_ds=False
    echo Dataset length : over 1 hour
)
echo batch size : %user_max_sentences%
echo.
echo Are these values correct^?
echo     Type y and press Enter to continue
echo     Type n and press Enter to re-enter everything
set /p qs=  Enter y or n : 
if "%qs%" == "y" (
    goto :sttz
) else (
    if "%qs%" == "n" (    goto :stt1 )
    goto :stt0
)


:sttz
call :write_yaml
echo write_yaml complete
echo.


:Cok
call %root%\Scripts\activate.bat %root%
call cd /d %dpath%
if not exist "%root%\envs\diff-svc\" (
    call conda create -n diff-svc python=3.9
)
call conda activate diff-svc
if not exist "%root%\envs\diff-svc\Lib\site-packages\torch\" (
    call pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
)
if not exist "%root%\envs\diff-svc\Lib\site-packages\torchcrepe\" (
    call pip install -r requirements.txt
)
call set PYTHONPATH=.
call set CUDA_VISIBLE_DEVICES=0
call python sep_wav.py
if not exist "%dpath%\%ui_raw_data_dir%\" (
    call md "%dpath%\%ui_raw_data_dir%"
)
call move /y "%dpath%\preprocess_out\voice\*.*" "%dpath%\%ui_raw_data_dir%"
call move /y "%dpath%\preprocess_out\final\*.*" "%dpath%\%ui_raw_data_dir%"
call rd /s /q "%dpath%\preprocess\"
call rd /s /q "%dpath%\preprocess_out\"
call md "%dpath%\preprocess"
call md "%dpath%\preprocess_out"
call echo ^* > "%dpath%\preprocess\.gitignore"
call echo ^!.gitignore >> "%dpath%\preprocess\.gitignore"
call python preprocessing/binarize.py --config training/config_nsf.yaml
call start chrome.exe --incognito "http://localhost:6006/#scalars&_smoothingWeight=0.999"
call start cmd /C tensorboard --logdir "%dpath%\checkpoints\%user_name%\lightning_logs\lastest"
call python run.py --config training/config_nsf.yaml --exp_name %user_name% --reset
endlocal
rundll32 user32.dll,MessageBeep
exit


:file_not_exist
endlocal
rundll32 user32.dll,MessageBeep
pause
exit


:write_yaml
echo write_yaml
echo # setting for users> training/config_nsf.yaml
echo ## original wav dataset folder >> training/config_nsf.yaml
echo raw_data_dir: %ui_raw_data_dir% >> training/config_nsf.yaml
echo ## after binarized dataset folder >> training/config_nsf.yaml
echo binary_data_dir: %ui_binary% >> training/config_nsf.yaml
echo ## speaker name >> training/config_nsf.yaml
echo speaker_id: %ui_speaker_id% >> training/config_nsf.yaml
echo ## trained model will be save this folder >> training/config_nsf.yaml
echo work_dir: %ui_work_dir% >> training/config_nsf.yaml
echo ## batch size >> training/config_nsf.yaml
echo max_sentences: %user_max_sentences% >> training/config_nsf.yaml
echo ## AMP(Automatic Mixed Precision) setting(only GPU) for less VRAM >> training/config_nsf.yaml
echo use_amp: true >> training/config_nsf.yaml
echo. >> training/config_nsf.yaml
echo # setting for developers and advanced users >> training/config_nsf.yaml
echo K_step: 1000 >> training/config_nsf.yaml
echo accumulate_grad_batches: 1 >> training/config_nsf.yaml
echo audio_num_mel_bins: 128 >> training/config_nsf.yaml
echo audio_sample_rate: 44100 >> training/config_nsf.yaml
echo binarization_args: >> training/config_nsf.yaml
echo   shuffle: false >> training/config_nsf.yaml
echo   with_align: true >> training/config_nsf.yaml
echo   with_f0: true >> training/config_nsf.yaml
echo   with_hubert: true >> training/config_nsf.yaml
echo   with_spk_embed: false >> training/config_nsf.yaml
echo   with_wav: false >> training/config_nsf.yaml
echo binarizer_cls: preprocessing.SVCpre.SVCBinarizer >> training/config_nsf.yaml
echo check_val_every_n_epoch: 10 >> training/config_nsf.yaml
echo choose_test_manually: false >> training/config_nsf.yaml
echo clip_grad_norm: 1 >> training/config_nsf.yaml
echo config_path: training/config_nsf.yaml >> training/config_nsf.yaml
echo content_cond_steps: [] >> training/config_nsf.yaml
echo cwt_add_f0_loss: false >> training/config_nsf.yaml
echo cwt_hidden_size: 128 >> training/config_nsf.yaml
echo cwt_layers: 2 >> training/config_nsf.yaml
echo cwt_loss: l1 >> training/config_nsf.yaml
echo cwt_std_scale: 0.8 >> training/config_nsf.yaml
echo datasets: >> training/config_nsf.yaml
echo - opencpop >> training/config_nsf.yaml
echo debug: false >> training/config_nsf.yaml
echo dec_ffn_kernel_size: 9 >> training/config_nsf.yaml
echo dec_layers: 4 >> training/config_nsf.yaml
echo decay_steps: 40000 >> training/config_nsf.yaml
echo decoder_type: fft >> training/config_nsf.yaml
echo dict_dir: '' >> training/config_nsf.yaml
echo diff_decoder_type: wavenet >> training/config_nsf.yaml
echo diff_loss_type: l2 >> training/config_nsf.yaml
echo dilation_cycle_length: 4 >> training/config_nsf.yaml
echo dropout: 0.1 >> training/config_nsf.yaml
echo ds_workers: 4 >> training/config_nsf.yaml
echo dur_enc_hidden_stride_kernel: >> training/config_nsf.yaml
echo - 0,2,3 >> training/config_nsf.yaml
echo - 0,2,3 >> training/config_nsf.yaml
echo - 0,1,3 >> training/config_nsf.yaml
echo dur_loss: mse >> training/config_nsf.yaml
echo dur_predictor_kernel: 3 >> training/config_nsf.yaml
echo dur_predictor_layers: 5 >> training/config_nsf.yaml
echo enc_ffn_kernel_size: 9 >> training/config_nsf.yaml
echo enc_layers: 4 >> training/config_nsf.yaml
echo encoder_K: 8 >> training/config_nsf.yaml
echo encoder_type: fft >> training/config_nsf.yaml
echo endless_ds: %ui_endless_ds% >> training/config_nsf.yaml
echo f0_bin: 256 >> training/config_nsf.yaml
echo f0_max: 1100.0 >> training/config_nsf.yaml
echo f0_min: 40.0 >> training/config_nsf.yaml
echo ffn_act: gelu >> training/config_nsf.yaml
echo ffn_padding: SAME >> training/config_nsf.yaml
echo fft_size: 2048 >> training/config_nsf.yaml
echo fmax: 16000 >> training/config_nsf.yaml
echo fmin: 40 >> training/config_nsf.yaml
echo fs2_ckpt: '' >> training/config_nsf.yaml
echo gaussian_start: true >> training/config_nsf.yaml
echo gen_dir_name: '' >> training/config_nsf.yaml
echo gen_tgt_spk_id: -1 >> training/config_nsf.yaml
echo hidden_size: 256 >> training/config_nsf.yaml
echo hop_size: 512 >> training/config_nsf.yaml
echo hubert_path: checkpoints/hubert/hubert_soft.pt >> training/config_nsf.yaml
echo hubert_gpu: true >> training/config_nsf.yaml
echo infer: false >> training/config_nsf.yaml
echo keep_bins: 128 >> training/config_nsf.yaml
echo lambda_commit: 0.25 >> training/config_nsf.yaml
echo lambda_energy: 0.0 >> training/config_nsf.yaml
echo lambda_f0: 1.0 >> training/config_nsf.yaml
echo lambda_ph_dur: 0.3 >> training/config_nsf.yaml
echo lambda_sent_dur: 1.0 >> training/config_nsf.yaml
echo lambda_uv: 1.0 >> training/config_nsf.yaml
echo lambda_word_dur: 1.0 >> training/config_nsf.yaml
echo load_ckpt: '' >> training/config_nsf.yaml
echo log_interval: 100 >> training/config_nsf.yaml
echo loud_norm: false >> training/config_nsf.yaml
echo lr: 0.0008 >> training/config_nsf.yaml
echo max_beta: 0.02 >> training/config_nsf.yaml
echo max_epochs: 3000 >> training/config_nsf.yaml
echo max_eval_sentences: 1 >> training/config_nsf.yaml
echo max_eval_tokens: 60000 >> training/config_nsf.yaml
echo max_frames: 42000 >> training/config_nsf.yaml
echo max_input_tokens: 60000 >> training/config_nsf.yaml
echo max_tokens: 128000 >> training/config_nsf.yaml
echo max_updates: 1000000 >> training/config_nsf.yaml
echo mel_loss: ssim:0.5^|l1:0.5 >> training/config_nsf.yaml
echo mel_vmax: 1.5 >> training/config_nsf.yaml
echo mel_vmin: -6.0 >> training/config_nsf.yaml
echo min_level_db: -120 >> training/config_nsf.yaml
echo norm_type: gn >> training/config_nsf.yaml
echo num_ckpt_keep: %user_ckpt% >> training/config_nsf.yaml
echo num_heads: 2 >> training/config_nsf.yaml
echo num_sanity_val_steps: 1 >> training/config_nsf.yaml
echo num_spk: 1 >> training/config_nsf.yaml
echo num_test_samples: 0 >> training/config_nsf.yaml
echo num_valid_plots: 10 >> training/config_nsf.yaml
echo optimizer_adam_beta1: 0.9 >> training/config_nsf.yaml
echo optimizer_adam_beta2: 0.98 >> training/config_nsf.yaml
echo out_wav_norm: false >> training/config_nsf.yaml
echo pe_ckpt: checkpoints/0102_xiaoma_pe/model_ckpt_steps_60000.ckpt >> training/config_nsf.yaml
echo pe_enable: false >> training/config_nsf.yaml
echo perform_enhance: true >> training/config_nsf.yaml
echo pitch_ar: false >> training/config_nsf.yaml
echo pitch_enc_hidden_stride_kernel: >> training/config_nsf.yaml
echo - 0,2,5 >> training/config_nsf.yaml
echo - 0,2,5 >> training/config_nsf.yaml
echo - 0,2,5 >> training/config_nsf.yaml
echo pitch_extractor: parselmouth >> training/config_nsf.yaml
echo pitch_loss: l2 >> training/config_nsf.yaml
echo pitch_norm: log >> training/config_nsf.yaml
echo pitch_type: frame >> training/config_nsf.yaml
echo pndm_speedup: 10 >> training/config_nsf.yaml
echo pre_align_args: >> training/config_nsf.yaml
echo   allow_no_txt: false >> training/config_nsf.yaml
echo   denoise: false >> training/config_nsf.yaml
echo   forced_align: mfa >> training/config_nsf.yaml
echo   txt_processor: zh_g2pM >> training/config_nsf.yaml
echo   use_sox: true >> training/config_nsf.yaml
echo   use_tone: false >> training/config_nsf.yaml
echo pre_align_cls: data_gen.singing.pre_align.SingingPreAlign >> training/config_nsf.yaml
echo predictor_dropout: 0.5 >> training/config_nsf.yaml
echo predictor_grad: 0.1 >> training/config_nsf.yaml
echo predictor_hidden: -1 >> training/config_nsf.yaml
echo predictor_kernel: 5 >> training/config_nsf.yaml
echo predictor_layers: 5 >> training/config_nsf.yaml
echo prenet_dropout: 0.5 >> training/config_nsf.yaml
echo prenet_hidden_size: 256 >> training/config_nsf.yaml
echo pretrain_fs_ckpt: '' >> training/config_nsf.yaml
echo processed_data_dir: xxx >> training/config_nsf.yaml
echo profile_infer: false >> training/config_nsf.yaml
echo ref_norm_layer: bn >> training/config_nsf.yaml
echo rel_pos: true >> training/config_nsf.yaml
echo reset_phone_dict: true >> training/config_nsf.yaml
echo residual_channels: 384 >> training/config_nsf.yaml
echo residual_layers: 20 >> training/config_nsf.yaml
echo save_best: false >> training/config_nsf.yaml
echo save_ckpt: true >> training/config_nsf.yaml
echo save_codes: >> training/config_nsf.yaml
echo - configs >> training/config_nsf.yaml
echo - modules >> training/config_nsf.yaml
echo - src >> training/config_nsf.yaml
echo - utils >> training/config_nsf.yaml
echo save_f0: true >> training/config_nsf.yaml
echo save_gt: false >> training/config_nsf.yaml
echo schedule_type: linear >> training/config_nsf.yaml
echo seed: 1234 >> training/config_nsf.yaml
echo sort_by_len: true >> training/config_nsf.yaml
echo spec_max: >> training/config_nsf.yaml
echo - 0.0 >> training/config_nsf.yaml
echo spec_min: >> training/config_nsf.yaml
echo - -5.0 >> training/config_nsf.yaml
echo spk_cond_steps: [] >> training/config_nsf.yaml
echo stop_token_weight: 5.0 >> training/config_nsf.yaml
echo task_cls: training.task.SVC_task.SVCTask >> training/config_nsf.yaml
echo test_ids: [] >> training/config_nsf.yaml
echo test_input_dir: '' >> training/config_nsf.yaml
echo test_num: 0 >> training/config_nsf.yaml
echo test_prefixes: >> training/config_nsf.yaml
echo - test >> training/config_nsf.yaml
echo test_set_name: test >> training/config_nsf.yaml
echo timesteps: 1000 >> training/config_nsf.yaml
echo train_set_name: train >> training/config_nsf.yaml
echo use_crepe: true >> training/config_nsf.yaml
echo use_denoise: false >> training/config_nsf.yaml
echo use_energy_embed: false >> training/config_nsf.yaml
echo use_gt_dur: false >> training/config_nsf.yaml
echo use_gt_f0: false >> training/config_nsf.yaml
echo use_midi: false >> training/config_nsf.yaml
echo use_nsf: true >> training/config_nsf.yaml
echo use_pitch_embed: true >> training/config_nsf.yaml
echo use_pos_embed: true >> training/config_nsf.yaml
echo use_spk_embed: false >> training/config_nsf.yaml
echo use_spk_id: false >> training/config_nsf.yaml
echo use_split_spk_id: false >> training/config_nsf.yaml
echo use_uv: false >> training/config_nsf.yaml
echo use_vec: false >> training/config_nsf.yaml
echo use_var_enc: false >> training/config_nsf.yaml
echo val_check_interval: 2000 >> training/config_nsf.yaml
echo valid_num: 0 >> training/config_nsf.yaml
echo valid_set_name: valid >> training/config_nsf.yaml
echo vocoder: network.vocoders.nsf_hifigan.NsfHifiGAN >> training/config_nsf.yaml
echo vocoder_ckpt: checkpoints/nsf_hifigan/model >> training/config_nsf.yaml
echo warmup_updates: 2000 >> training/config_nsf.yaml
echo wav2spec_eps: 1e-6 >> training/config_nsf.yaml
echo weight_decay: 0 >> training/config_nsf.yaml
echo win_size: 2048 >> training/config_nsf.yaml
echo no_fs2: true >> training/config_nsf.yaml
goto :eof
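The write_yaml routine above simply echoes `key: value` lines into training/config_nsf.yaml. If you want to sanity-check what the bat actually wrote, a short Python sketch (hypothetical, not part of diff-svc) can read back the user-entered values; note that batch `echo key: value >> file` leaves a trailing space after each value, which both YAML parsers and this sketch tolerate:

```python
def read_config_value(config_text, key):
    """Pull one top-level 'key: value' entry out of the generated
    config_nsf.yaml text.

    The bat's echo leaves a trailing space after each value, so the
    result is stripped before being returned.
    """
    for line in config_text.splitlines():
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    return None
```

For instance, after a run that entered batch size 8 with a short dataset, `read_config_value(open("training/config_nsf.yaml").read(), "max_sentences")` should return `"8"` and `"endless_ds"` should return `"True"`.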




Resume training.bat  (encoding: ANSI)

title Train Diff-SVC
SETLOCAL ENABLEDELAYEDEXPANSION

REM ================================
REM root is your Anaconda3 install path
REM ================================

set root=C:\ProgramData\anaconda3
set dpath=I:\_Diff-svc

REM ================================
REM dpath is your diff-svc install path
REM ================================


set cpath=%dpath%\checkpoints\
set "ccnt=0"
set "acnt=0"
set df0=
set df1=0102_xiaoma_pe
set df2=0109_hifigan_bigpopcs_hop128
set df3=hubert
set df4=nsf_hifigan
echo off
cls
cd /d %dpath%
for /f "tokens=*" %%d in ('dir %cpath% /B /a:d') DO (
if %df1% == %%d ( 
REM echo df1 : %%d 
) else (
if %df2% == %%d ( 
REM echo df2 : %%d 
) else ( 
if %df3% == %%d ( 
REM echo df3 : %%d
) else (
if %df4% == %%d (
REM echo df4 : %%d
) else (
REM echo %%d
set df[!ccnt!]=%%d
set /a ccnt+=1
)))))
:arrayLoop
if defined df[%acnt%] (
    set /a "acnt+=1"
    GOTO :arrayLoop
)
if "%ccnt%" GTR "1" ( set /a "acnt-=1" )
:selectLoop
cls
if %ccnt% == 0 ( goto :notrain )
if %ccnt% == 1 (
    set df0=%df[0]%
    goto :Cok
) else (
for /l %%n in (0,1,!acnt!) do (
    echo %%n : !df[%%n]!
)
)
REM echo %acnt%
echo.
set /p UST= Select the model to train (enter the number only) : 
for /L %%a in (0,1,!acnt!) do (
    if "%UST%" == "%%a" (
        set df0=!df[%%a]!
        goto :Cok
    )
)
REM echo f : %UST%
goto :selectLoop


:notrain
endlocal
rundll32 user32.dll,MessageBeep
echo No trained CKPT files were found in the checkpoints folder
pause
exit


:Cok
cls
call %root%\Scripts\activate.bat %root%
call cd /d %dpath%
call conda activate diff-svc
call set PYTHONPATH=.
call set CUDA_VISIBLE_DEVICES=0
call start chrome.exe --incognito "http://localhost:6006/#scalars&_smoothingWeight=0.999"
call start cmd /C tensorboard --logdir "%dpath%\checkpoints\%df0%\lightning_logs\lastest"
call python run.py --exp_name %df0%
endlocal
exit




Inference.bat  (encoding: ANSI)

title Infer Diff-SVC
SETLOCAL ENABLEDELAYEDEXPANSION

REM ================================
REM root is your Anaconda3 install path
REM ================================

set root=C:\ProgramData\anaconda3
set dpath=I:\_Diff-svc

REM ================================
REM dpath is your diff-svc install path
REM ================================


set cpath=%dpath%\checkpoints\
set "ccnt=0"
set "acnt=0"
set df0=
set df1=0102_xiaoma_pe
set df2=0109_hifigan_bigpopcs_hop128
set df3=hubert
set df4=nsf_hifigan
echo off
cls
cd /d %dpath%
for /f "tokens=*" %%d in ('dir %cpath% /B /a:d') DO (
if %df1% == %%d ( 
REM echo df1 : %%d 
) else (
if %df2% == %%d ( 
REM echo df2 : %%d 
) else ( 
if %df3% == %%d ( 
REM echo df3 : %%d
) else (
if %df4% == %%d (
REM echo df4 : %%d
) else (
REM echo %%d
set df[!ccnt!]=%%d
set /a ccnt+=1
)))))
:arrayLoop
if defined df[%acnt%] (
    set /a "acnt+=1"
    GOTO :arrayLoop
)
if "%ccnt%" GTR "1" ( set /a "acnt-=1" )
:selectLoop
cls
if %ccnt% == 0 ( goto :notrain )
if %ccnt% == 1 (
    set df0=%df[0]%
    goto :Cok
) else (
for /l %%n in (0,1,!acnt!) do (
    echo %%n : !df[%%n]!
)
)
REM echo %acnt%
echo.
set /p UST= Select the model for inference (enter the number only) : 
for /L %%a in (0,1,!acnt!) do (
    if "%UST%" == "%%a" (
        set df0=!df[%%a]!
        goto :Cok
    )
)
REM echo f : %UST%
goto :selectLoop




:notrain
endlocal
rundll32 user32.dll,MessageBeep
echo No trained CKPT files were found in the checkpoints folder
pause
exit


:Cok
REM echo %df0%
cls
call %root%\Scripts\activate.bat %root%
call cd /d %dpath%
call conda activate diff-svc
call set PYTHONPATH=.
call set CUDA_VISIBLE_DEVICES=0
call python infer.py "%df0%"
endlocal
rundll32 user32.dll,MessageBeep
exit




infer.py  (keep the encoding as UTF-8; overwrite the existing file)

import sys
import os

import io
import time
from pathlib import Path

import librosa
import numpy as np
import soundfile

from infer_tools import infer_tool
from infer_tools import slicer
from infer_tools.infer_tool import Svc
from utils.hparams import hparams

chunks_dict = infer_tool.read_temp("./infer_tools/new_chunks_temp.json")
target_model = sys.argv[1]
target_model_path = f"./checkpoints/{target_model}"
target_model_ex = r'.ckpt'
target_model_sch = "model_ckpt_steps_"
# Collect the step numbers; note that str.strip removes a character set,
# not a prefix, so slice the known prefix/suffix off instead
target_model_ckpt = [int(file[len(target_model_sch):-len(target_model_ex)])
                     for file in os.listdir(target_model_path)
                     if file.startswith(target_model_sch) and file.endswith(target_model_ex)]
target_model_max = str(max(target_model_ckpt))
print("Target Model  : " + target_model)
print("Model Checkpoint : " + target_model_max)
print("")

def run_clip(svc_model, key, acc, use_pe, use_crepe, thre, use_gt_mel, add_noise_step, project_name='', f_name=None,
             file_path=None, out_path=None, slice_db=-40,**kwargs):
    print(f'code version:2022-12-04')
    use_pe = use_pe if hparams['audio_sample_rate'] == 24000 else False
    if file_path is None:
        raw_audio_path = f"./raw/{f_name}"
        clean_name = f_name[:-4]
    else:
        raw_audio_path = file_path
        clean_name = str(Path(file_path).name)[:-4]
    infer_tool.format_wav(raw_audio_path)
    wav_path = Path(raw_audio_path).with_suffix('.wav')
    global chunks_dict
    audio, sr = librosa.load(wav_path, mono=True,sr=None)
    wav_hash = infer_tool.get_md5(audio)
    if wav_hash in chunks_dict.keys():
        print("load chunks from temp")
        chunks = chunks_dict[wav_hash]["chunks"]
    else:
        chunks = slicer.cut(wav_path, db_thresh=slice_db)
    chunks_dict[wav_hash] = {"chunks": chunks, "time": int(time.time())}
    infer_tool.write_temp("./infer_tools/new_chunks_temp.json", chunks_dict)
    audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)


    count = 0
    f0_tst = []
    f0_pred = []
    audio = []
    for (slice_tag, data) in audio_data:
        print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
        length = int(np.ceil(len(data) / audio_sr * hparams['audio_sample_rate']))
        raw_path = io.BytesIO()
        soundfile.write(raw_path, data, audio_sr, format="wav")
        if hparams['debug']:
            print(np.mean(data), np.var(data))
        raw_path.seek(0)
        if slice_tag:
            print('jump empty segment')
            _f0_tst, _f0_pred, _audio = (
                np.zeros(int(np.ceil(length / hparams['hop_size']))), np.zeros(int(np.ceil(length / hparams['hop_size']))),
                np.zeros(length))
        else:
            _f0_tst, _f0_pred, _audio = svc_model.infer(raw_path, key=key, acc=acc, use_pe=use_pe, use_crepe=use_crepe,
                                                        thre=thre, use_gt_mel=use_gt_mel, add_noise_step=add_noise_step)
        fix_audio = np.zeros(length)
        fix_audio[:] = np.mean(_audio)
        fix_audio[:len(_audio)] = _audio[0 if len(_audio)<len(fix_audio) else len(_audio)-len(fix_audio):]
        f0_tst.extend(_f0_tst)
        f0_pred.extend(_f0_pred)
        audio.extend(list(fix_audio))
        count += 1
    if out_path is None:
        # out_path = f'./results/{clean_name}_{key}key_{project_name}_{hparams["residual_channels"]}_{hparams["residual_layers"]}_{int(step / 1000)}k_{accelerate}x.{kwargs["format"]}'
        out_path = f'./results/{project_name}_{int(step / 1000)}k_{key}key_{clean_name}.{kwargs["format"]}'
    soundfile.write(out_path, audio, hparams["audio_sample_rate"], 'PCM_16',format=out_path.split('.')[-1])
    return np.array(f0_tst), np.array(f0_pred), audio




if __name__ == '__main__':
    # Project folder name used for training
    project_name = target_model
    # Highest-step checkpoint, auto-selected above
    model_path = f'./checkpoints/{project_name}/model_ckpt_steps_{target_model_max}.ckpt'
    config_path = f'./checkpoints/{project_name}/config.yaml'


    # Supports multiple wav/ogg files; put them in the raw folder, with extension
    file_names_path = f"./raw"
    file_names_ex = ['.ogg', '.wav']
    file_names = [file for file in os.listdir(file_names_path) if os.path.splitext(file)[1] in file_names_ex]
    trans = [0]  # Pitch shift in semitones (positive or negative);
                 # one entry per input file above; if there are fewer
                 # entries than files, the list is padded with the
                 # first value automatically


    # Acceleration factor
    accelerate = 20
    hubert_gpu = True
    format='flac'
    step = int(model_path.split("_")[-1].split(".")[0])


    # don't move below
    infer_tool.mkdir(["./raw", "./results"])
    infer_tool.fill_a_to_b(trans, file_names)


    model = Svc(project_name, config_path, hubert_gpu, model_path)
    for f_name, tran in zip(file_names, trans):
        if "." not in f_name:
            f_name += ".wav"
        run_clip(model, key=tran, acc=accelerate, use_crepe=True, thre=0.05, use_pe=True, use_gt_mel=False,
                 add_noise_step=500, f_name=f_name, project_name=project_name, format=format)
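The step-number extraction at the top of infer.py can also be written with a regex, which simply ignores any .ckpt whose name doesn't follow the model_ckpt_steps_N pattern (a hypothetical sketch, not part of the files above):

```python
import re

def latest_step(ckpt_names):
    """Return the highest training step among model_ckpt_steps_<N>.ckpt
    names, skipping any file that doesn't match the pattern."""
    steps = [int(m.group(1))
             for name in ckpt_names
             if (m := re.fullmatch(r"model_ckpt_steps_(\d+)\.ckpt", name))]
    return max(steps)
```

Passing it `os.listdir(f"./checkpoints/{target_model}")` would reproduce the auto-selection the bat relies on, even if stray files like config.yaml sit in the same folder.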






This is a beefed-up version of a post I wrote earlier

It works fine for me, but no idea whether it works in other environments lol