Stable Diffusion Built-In to the Blender Shader Editor

  1. Wow. Just wow. If you had told me back when I started that this would be possible, I would have laughed. Thank you for making this.

  2. You can play with the prompts, or maybe try another model that can make a texture tileable. I’d like to incorporate something like that into the addon.

  3. Eh, someone still has to do the texturing, even if it's using an AI to make the textures. Also someone has to put in the prompts since mind reading doesn't exist, yet. Texture artists are still going to be the best at knowing what textures to create and what prompts to use.

  4. This is a perfect use case for the “Init Image” feature. In the UV editor, click UV > Export UV Layout and save it as a PNG somewhere. Then enable the “Init Image” option in Dream Textures and select the UV layout PNG. Type a prompt for the texture to generate, and it should stay within the UV boundaries.
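
     For anyone who wants to script that export step, here is a minimal sketch using Blender’s Python API (run from the Scripting workspace with the mesh in Edit Mode; the path and size are just example values):

         import bpy

         # Export the active mesh's UV layout to a PNG -- the scripted
         # equivalent of the UV > Export UV Layout menu action above.
         # Run with the object in Edit Mode. Path and size are examples.
         bpy.ops.uv.export_layout(
             filepath="/tmp/uv_layout.png",
             mode='PNG',
             size=(1024, 1024),
             opacity=0.25,
         )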

  5. 0.0.3 user here! Should this work on a MacBook Air M1? I used the dream_textures.zip for the install and installed git too. I run it as an admin, but I get the same error a few other people have commented about before. The Python console inside Blender doesn't show any additional info. What can I do?

  6. Apple Silicon should be fully supported (I use a Mac Studio myself). To get the logs on macOS you need to start Blender from the terminal: open Terminal, run "cd /Applications/Blender.app/Contents/MacOS", and then "./Blender". That terminal window will then show the full logs (and the real error).

  7. I really wanna use this, but it keeps freezing and then crashing whenever I click generate image. My PC specs are

  8. It is most likely because your GPU is running out of VRAM. Try reducing the size of the images and potentially disabling “Full Precision” under “Advanced Configuration”.
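
     To see why that helps: half-precision tensors take half the bytes of full-precision ones. A rough PyTorch illustration (not the add-on’s actual code):

         import torch

         # A float16 tensor needs half the memory of the same-shaped
         # float32 tensor, which is roughly why disabling "Full
         # Precision" lowers VRAM use.
         x32 = torch.zeros(512, 512, 4, dtype=torch.float32)
         x16 = torch.zeros(512, 512, 4, dtype=torch.float16)
         print(x32.nelement() * x32.element_size())  # 4194304 bytes (4 MiB)
         print(x16.nelement() * x16.element_size())  # 2097152 bytes (2 MiB)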

  9. Is the “Install Dependencies” button still available in the preferences window? If so, please open “Window” > “Toggle System Console” and then run it again. Otherwise, can you show me the full error you get?

  10. I would try on a fresh Blender 3.3 install. Then, before you install the dependencies, open “Window” > “Toggle System Console”. If an error occurs, it will now show in the console. Also, 0.0.3 is the latest version I have released.

  11. This is pretty cool! What is the reason it has to run as administrator? Or is that just needed during installation, while it installs the actual code?

  12. The Python that ships with Blender does not contain some of the header files needed to build the dependencies for Stable Diffusion. Typically you install these as “python-devel”, on Unix systems at least. So the add-on downloads the headers from Python.org and copies them into Blender’s Python, which requires write access to that install folder.
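
     If you want to verify that yourself, here is a quick sketch you can paste into Blender’s Python console (sysconfig is standard library, so no extra installs are needed):

         import os
         import sysconfig

         # Where this Python expects its C headers, and whether Python.h
         # is actually present. In a stock Blender install it usually
         # isn't, which is what the add-on patches by copying the
         # headers in.
         include_dir = sysconfig.get_paths()["include"]
         print(include_dir)
         print("Python.h present:",
               os.path.exists(os.path.join(include_dir, "Python.h")))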

  13. Oh, and yes. You only need administrator when installing the dependencies. Running the model should work fine after that.

  14. I don’t believe Stable Diffusion works with AMD out of the box because it relies on CUDA. There are ways to do it, however. Give “stable diffusion amd” a search.

  15. Feel free to give it a try. I do not have access to AMD hardware to test it myself, but I don’t think it’s supported by Stable Diffusion, unfortunately.

  16. You might be able to prompt it for that; I haven’t tried it, though. Or another model that makes textures seamless could be run on the output.

  17. Hm, I finally got it all set up, and even validated the installation, but when I go to generate the texture with a prompt, it hangs for a bit and crashes. Any idea why?

  18. I would guess it’s a VRAM issue. Try reducing the image size. If not, can you go to “Window” > “Toggle System Console” and copy the logs? They’ll show the real error.

  19. I wonder if I'm doing something wrong! I finally got it running, but after the process completes, no image node appears in my shader editor. There's no log or context for why it fails, either.

  20. I see it takes a little bit for the noise to settle down. Is this on top-tier hardware like a 20 or 30 series card? I just wanna see if this is viable on my 6600XT.

  21. The demo is from a Mac Studio with an M1 Max and 32GB of unified memory. The demo I showed probably took about 10-15 seconds of real time to generate. Different samplers look different during the process, and some are faster than others. I think DDIM is generally a good, fast sampler.

  22. Reddit may have formatted it wrong. You should first try opening “Window” > “Toggle System Console” and running the install again. That will show you the actual problem that needs to be solved.

  23. Makes me wonder how long until it goes from just a texture (assuming that's all this is) to something that has roughness and normal maps created too. I bet it could be taught, but I imagine that's a lot harder, as there are far fewer material sources that also have depth and roughness.

  24. I want to play with that. You can continue training with a custom dataset and add custom vocabulary, so I’m going to try training it on some textures with multiple maps.

  25. It should be possible to go larger, but you will need a considerable amount of VRAM. For reference, the largest image I’ve generated on an M1 Max with 32GB of unified memory (so, hypothetically, the GPU could use all of it) was 1024x768. Memory consumption was around 23GB.

  26. I've installed it and it appears to work... except after I click OK and it whirrs for a bit, nothing happens; no texture appears or anything. Any tips?

  27. Open “Window” > “Toggle System Console” and try again. A more specific error message will show in the console so you can figure out what needs fixing.

  28. This seems really cool, I've already been using stable diffusion for my textures and stuff but this makes things way easier.

  29. Hello, I don't know if anyone has asked this, but generating the image seems to only be using my CPU and RAM rather than my GPU and VRAM. In Task Manager it's using 30-40% of my CPU and 8 GB of RAM. It still works, but I'm assuming that using my 3080 with 12 GB of VRAM would make it much faster. Is there a way to fix this?
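
     One quick diagnostic, assuming the add-on installs a standard PyTorch build (the CUDA error elsewhere in this thread suggests it does), is to check from Blender’s Python console whether torch can see the GPU at all:

         import torch

         # False means this torch build was installed without CUDA
         # support (or the driver isn't visible), so generation falls
         # back to the CPU.
         print("CUDA available:", torch.cuda.is_available())
         if torch.cuda.is_available():
             print("Device:", torch.cuda.get_device_name(0))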

  30. RuntimeError: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 11.00 GiB total capacity; 4.56 GiB already allocated; 4.32 GiB free; 4.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

  31. That is a normal amount of VRAM. I was able to run on an NVIDIA GPU with 6GB by reducing the image size (256x256, for example) and disabling full precision. An M1 Max with 32GB of unified memory can run at the default settings and higher.
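
     The error text above also points at a possible fragmentation workaround: capping PyTorch’s allocator split size before CUDA is first used. A minimal sketch (128 is just an example value, not a recommendation from the add-on):

         import os

         # Must be set before the first CUDA allocation, i.e. before
         # the model loads. Caps the allocator's split blocks to reduce
         # fragmentation; 128 is an example value.
         os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

         import torch  # import after setting the env var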
