The answer that comes back when you ask ChatGPT what Stable Diffusion is:


ChatGPT returning nothing but nonsense, since it has no training data on the topic



What you get by combining the GPT-3.5 Turbo model with the Bing Search API:

aeon in 🌐 miracle in ~/Devs/openai-api via 🐍 v3.10.9 (venv)
❯ python main.py
어떤게 궁금하신가요?: What is Stable Diffusion?
다음과 같이 검색해보겠습니다: Stable Diffusion
-------------------
Stable Diffusion is a deep learning, text-to-image model that was released in 2022 [1]. It is a generative neural network that is primarily used to generate detailed images conditioned on text descriptions, but it can also be used for other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt. Stable Diffusion is a latent diffusion model, which means that it uses a diffusion process to gradually generate an image in a pixel-by-pixel manner [1]. Stable Diffusion 2.0 delivers a number of big improvements and features versus the original V1 release, which includes robust text-to-image models trained using a brand new text encoder [3]. There is a Stable Diffusion plug-in for Blender that introduces a suite of generative AI tools that can create textures, images, and animations to use in [5]. To run Stable Diffusion, download it from GitHub and the latest checkpoints from HuggingFace.co, and install them. Then, run Stable Diffusion in a special python environment using Miniconda [6].
-------------------
1. https://en.wikipedia.org/wiki/Stable_Diffusion
1. https://en.wikipedia.org/wiki/Stable_Diffusion
3. https://stability.ai/blog/stable-diffusion-v2-release
5. https://www.theverge.com/2023/3/3/23623473/blender-stable-diffusion-ai-plugin-3d-modeling-texture-render
6. https://www.howtogeek.com/830179/how-to-run-stable-diffusion-on-your-pc-to-generate-ai-images/
-------------------
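
For reference, the rough shape of such a pipeline: have the model rewrite the question as a search query, run it through the Bing Web Search API, then feed the numbered snippets back to GPT-3.5 Turbo and ask it to answer with citations. The actual main.py is not reproduced in this post, so the sketch below is only an approximation under those assumptions (names like search_bing, the env vars, and the prompts are illustrative; it targets the pre-1.0 openai Python library that was current with gpt-3.5-turbo).

```python
# Minimal sketch of a "GPT-3.5 Turbo + Bing Search" pipeline like the one above.
# The real main.py is not shown here, so everything below (search_bing, the
# env-var names, the prompts) is an illustrative assumption.
import os

import openai
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]      # assumed env var
BING_KEY = os.environ["BING_SEARCH_API_KEY"]       # assumed env var


def search_bing(query: str, count: int = 8) -> list[dict]:
    """Query the Bing Web Search v7 API and return title/url/snippet dicts."""
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": BING_KEY},
        params={"q": query, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    return [{"title": p["name"], "url": p["url"], "snippet": p["snippet"]}
            for p in pages]


def answer(question: str) -> str:
    # 1) Let the model rewrite the question as a short search query.
    query = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Give one short web search query for: {question}",
        }],
    ).choices[0].message.content.strip()
    print(f"다음과 같이 검색해보겠습니다: {query}")

    # 2) Run the search and number the snippets so the model can cite them.
    results = search_bing(query)
    context = "\n".join(
        f"[{i}] {r['snippet']} ({r['url']})" for i, r in enumerate(results)
    )

    # 3) Ask the model to answer strictly from the numbered snippets.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered search results and "
                        "cite them like [0], [1] after each claim."},
            {"role": "user",
             "content": f"Search results:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(answer(input("어떤게 궁금하신가요?: ")))
```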


A few other questions...

❯ python main.py
어떤게 궁금하신가요?: What is LoRA train?
다음과 같이 검색해보겠습니다: LoRa train
-------------------
LoRA (Low-Rank Adaptation) is a method for quickly fine-tuning and training large language models [6]. It uses fewer parameters than traditional methods and can be used to train on all different aspects of a data set, such as character, object, or style [0]. To train a new LoRA concept, a zip file with a few images of the concept is needed, and these images can be used to generate new images [1]. The LoRA is extremely small, requiring only 8MB or even less, and can be dynamically loaded [0]. LoRA training scripts are available on GitHub and can be used with a diffusion model [3]. When adding LoRA to a model, alpha is used as a constant to control the level of effect; if the LoRA has too much effect, alpha can be set to a lower value, and if it has too little effect, alpha can be set to a higher value [7]. LoRA has minimal inference latency, and only requires calculation of W = W_0 + BA [6].
-------------------
6. https://zhuanlan.zhihu.com/p/514033873
0. https://rentry.org/lora_train
1. https://replicate.com/blog/lora-faster-fine-tuning-of-stable-diffusion
0. https://rentry.org/lora_train
3. https://github.com/Akegarasu/lora-scripts
7. https://github.com/cloneofsimo/lora
6. https://zhuanlan.zhihu.com/p/514033873
-------------------
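
Incidentally, the W = W_0 + BA update quoted in that answer fits in a couple of lines. Below is a purely numeric sketch (arbitrary shapes, nothing tied to a real checkpoint) that also shows why the adapter file stays so small; the alpha/r scaling follows the original LoRA paper, with alpha playing the strength-knob role described in [7].

```python
# Numeric illustration of the W = W_0 + BA low-rank update described above.
# Shapes and values are arbitrary; this is not tied to any real model.
import numpy as np

d, k, r = 768, 768, 8            # frozen weight is d x k, LoRA rank r
alpha = 8.0                      # strength constant, as mentioned in [7]

W0 = np.random.randn(d, k)       # pretrained weight, kept frozen
B = np.zeros((d, r))             # trained low-rank factors (B starts at zero)
A = np.random.randn(r, k) * 0.01

W = W0 + (alpha / r) * (B @ A)   # merged weight used at inference time

# The adapter only stores A and B, which is why LoRA files are tiny:
print(B.size + A.size, "adapter params vs", W0.size, "full params")
# -> 12288 vs 589824 (about 2% of the original matrix)
```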


❯ python main.py
어떤게 궁금하신가요?: How can I train Stable Diffusion with LoRA?
다음과 같이 검색해보겠습니다:

Search terms: train Stable Diffusion LoRa
-------------------
LoRA is a technique that allows easy and efficient fine-tuning of Stable Diffusion models on custom datasets [0]. It is a method that enables training a Stable Diffusion model with just a few images [1]. LoRA scripts are available that can work with as little as 11 GB of GPU RAM [3]. It is faster and requires fewer weights than the initial model size and produces outstanding results [3]. The installation of LoRA can be done via pip install with manual configuration needed to set up the accelerate module properly. After installation, datasets can be prepared for training [2]. LoRA is lightweight, around 2GB for a model, while being on par with Dreambooth with regard to quality [4]. It is less demanding than the VRAM requirements of other image generation models and can be used to create unique and interesting results, such as in fashion modeling [5][7]. There are also pre-trained LoRA extensions available for use compatible with specific versions of Stable Diffusion [8].
-------------------
0. https://huggingface.co/blog/lora
1. https://replicate.com/blog/lora-faster-fine-tuning-of-stable-diffusion
3. https://www.marktechpost.com/2023/02/18/huggingface-publishes-lora-scripts-for-efficient-stable-diffusion-fine-tuning/
3. https://www.marktechpost.com/2023/02/18/huggingface-publishes-lora-scripts-for-efficient-stable-diffusion-fine-tuning/
2. https://ngwaifoong92.medium.com/how-to-fine-tune-stable-diffusion-using-lora-85690292c6a8
4. https://www.reddit.com/r/StableDiffusion/comments/10i81s3/what_is_a_lora_and_how_do_i_use_it_effectively/
5. https://www.youtube.com/watch?v=70H03cv57-o
7. https://www.reddit.com/r/StableDiffusion/comments/11h8i8m/advice_for_creatingtraining_a_great_fashion_model/
8. https://sevenstyles.com/p/artist-lora-for-stable-diffusion-1-5-8209429/
-------------------
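
The HuggingFace workflow referenced in [0] boils down to training LoRA attention weights with the diffusers example script and then attaching them to an ordinary Stable Diffusion pipeline. A minimal loading sketch under that assumption (the path and scale value are placeholders, and the exact call differs across diffusers versions; newer releases expose load_lora_weights instead):

```python
# Sketch of attaching trained LoRA weights to a Stable Diffusion pipeline,
# following the diffusers workflow described in [0]. The weight path and the
# scale value are placeholders; API details vary by diffusers version.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA attention processors produced by the training script.
pipe.unet.load_attn_procs("path/to/lora_weights")        # placeholder path

# "scale" is the strength knob: lower it if the LoRA dominates the output,
# raise it if the effect is too weak.
image = pipe(
    "a portrait photo in the trained style",
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("lora_sample.png")
```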




It's just amazing that it can produce answers this good without any fine-tuning...