Did you see this
I had not, I need to look into that!
Dream Textures
DOPE plugin!!!!!
Wow. Just wow. If you had told me back when I started that this would be possible, I would have laughed. Thank you for making this
Amazing dude! Cheers
Honest question: is this process deterministic?
When I click OK after giving it a prompt, Blender freezes for like 2 min and then nothing happens. How do I fix it?
You're brilliant!!
Best thing I’ve seen in a long time
What a fantastic idea, superb that you came up with this
I'm so happy that you made the connection between those tools. It's because of people like you that I personally don't fear AI anymore
Haha, thanks!
God bless stable diffusion to be open source
"What a time to be alive"
It works like a charm and also it's pretty fast. Do you plan to add inpainting ?
Yes, I would like to support all features of Stable Diffusion eventually.
This is the single most exciting revolution in 3D modeling right now IMO
Now how do we make it tile :)
You can play with the prompts, or maybe try another model that can make a texture tileable. I’d like to incorporate something like that into the addon.
-Oh so you trained to be an artist for years and think you're irreplaceable? Here's this AI that tells you FU
If you trained that long to be an artist, this tool only replaces the time you spent going to CC0 textures or
I'm not worried it will take artists' jobs.
Eh, someone still has to do the texturing, even if it's using an AI to make the textures. Also someone has to put in the prompts since mind reading doesn't exist, yet. Texture artists are still going to be the best at knowing what textures to create and what prompts to use.
Would it be possible to have it take uv's and UV islands into account?
This is a perfect use case for the “Init Image” feature. In the UV editor click UV > Export UV Layout and save it as a PNG somewhere. Then enable the “Init Image” option in Dream Textures, and select the UV layout png. Type a prompt for the texture to generate, and it should stay in the boundaries.
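If you want to script the masking step outside Blender, here's a minimal NumPy sketch of the same idea. The exported UV layout PNG is transparent outside the islands, so its alpha channel works as a mask; the function and variable names here are mine, not part of the add-on:

```python
import numpy as np

def mask_to_uv_islands(generated, uv_layout_alpha, background=0.0):
    """Keep generated pixels only where the UV layout marks an island.

    generated:       (H, W, C) float array, the generated texture.
    uv_layout_alpha: (H, W) float array in [0, 1]; nonzero inside islands.
    """
    mask = (uv_layout_alpha > 0).astype(generated.dtype)[..., None]
    return generated * mask + background * (1.0 - mask)

# Toy example: a 4x4 "texture" whose UV islands cover only the left half.
tex = np.ones((4, 4, 3))
alpha = np.zeros((4, 4))
alpha[:, :2] = 1.0
out = mask_to_uv_islands(tex, alpha)
```

With init image mode the model itself tends to respect the layout, but a hard mask like this guarantees nothing bleeds outside the islands.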
This is next level
OMG! This changes everything
Hey, looks awesome. I'm working on a stable diffusion plugin for another art program, and found that
Interesting, I’ll take a look!
Awesome work, I can't wait to try it out!
What are the legalities involved with using this for commercial projects?
The license for the model
0.0.3 user here! Should this work on a MacBook Air M1? I used the dream_textures.zip for install and installed git too. I run it as an admin, but I get the same error a few other people commented before. The python console inside Blender doesn't show any additional info. What can I do?
Apple Silicon should be fully supported (I use a Mac Studio myself). To get the logs on macOS you need to start Blender from the terminal: open Terminal, run `cd /Applications/Blender.app/Contents/MacOS`, then `./Blender`. That terminal window will now show the full logs (and the real error).
I really wanna use this but it keeps freezing and then crashing whenever I click generate image. My PC specs are
it is most likely because your GPU is running out of VRAM. Try reducing the size of the images, and potentially disable “Full Precision” under “Advanced Configuration”.
I keep getting the error that it can't find the module named omegaconf??
Is the “Install Dependencies” button still available in the preferences window? If so, please open “Window” > “Toggle System Console” and then run it again. Otherwise, can you show me the full error you get?
I just downloaded v0.0.4. I'm using Blender 3.2
I would try on a fresh Blender 3.3 install. Then, before you install dependencies, open “Window” > “Toggle System Console”. If an error occurs it will now show in the console. Also, 0.0.3 is the latest version I have released.
Have you tried running blender as administrator?
polyfjord
👀
[deleted]
Running locally is always my preference :)
HOLD ONTO YOUR PAPERS!!!
This is pretty cool! What is the reason it has to run as administrator? Or is that just for installation while it installs the actual code?
The Python that ships with Blender does not include some of the header files needed to build the dependencies for stable diffusion. Typically you'd install these as “python-devel”, on Unix systems at least. So the add-on downloads the headers from Python.org and copies them into Blender’s Python, which requires write access to that install folder.
Oh, and yes. You only need administrator when installing the dependencies. Running the model should work fine after that.
I see you mentioned NVIDIA and Apple GPU's, will this work with AMD? If not, any plans to add support for that?
I don’t believe stable diffusion works with AMD out of the box because it relies on CUDA. There are ways to do it however. Give “stable diffusion amd” a search.
Feel free to give it a try. I do not have access to AMD hardware to test it myself, but I don’t think it’s supported by stable diffusion unfortunately.
It's interesting seeing history being made
Stable diffusion is super dope af
amazing - can it generate seamless textures?
You might be able to prompt it for that, though I haven’t tried it. Or another model that makes textures seamless could be run on the output.
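One classic post-process along those lines needs no model at all: cross-fade one edge strip into the opposite side, then crop the strip off, so the wrap-around seam lands on pixels that were originally adjacent. A minimal NumPy sketch (my own helper, not something Dream Textures provides):

```python
import numpy as np

def make_tileable(img, border=16):
    """Blend the right/bottom strips into the left/top strips, then crop
    them off, so opposite edges of the result meet on adjacent pixels."""
    img = img.astype(np.float64)
    h, w = img.shape[:2]
    u = (np.arange(border) + 0.5) / border        # 0 -> far strip, 1 -> own pixel
    out = img[:, :w - border].copy()              # drop the right strip...
    out[:, :border] = (u[None, :, None] * img[:, :border]
                       + (1 - u)[None, :, None] * img[:, w - border:])
    tmp = out[:h - border].copy()                 # ...and the bottom strip
    tmp[:border] = (u[:, None, None] * out[:border]
                    + (1 - u)[:, None, None] * out[h - border:])
    return tmp
```

The result is smaller by `border` pixels in each dimension; the blend hides the hard seam at the cost of some ghosting in the border region.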
Thanks a lot you just made something incredible
When a dream comes true
You have to be kidding me! I've used stable diffusion separately, but as a built-in plugin 😳 oh. my. god
Wow, that's next level - you're a wizard to me!
Yo! My dude. Good job! Will check it out
holy shit batman
Cool
Woooooow absolute fire...gonna make game textures with this
Holy shit dude
Hm, I finally got it all set up, and even validated the installation, but when I go to generate the texture with a prompt, it hangs for a bit and crashes. Any idea why?
I would guess it’s a VRAM issue. Try reducing the image size. If not, Can you go to Window > Toggle Console and copy the logs? They’ll show the real error.
I wonder if I'm doing something wrong! I finally got it running, but after the process completes, no image node appears in my shader editor. There's no log or context for why it fails, either.
Do you see the intermediate images in the image editor while it’s generating (if you have one open)?
Just tried it. All I get is a green square as the texture. It goes through the 25 steps. Any ideas?
Try setting the size to 384x384 and make sure “Full Precision” is enabled.
I see it takes a little bit for the noise to settle down, is this on top tier hardware like 20 or 30 series? I just wanna see if this is viable on my 6600XT
The demo is from a Mac Studio with M1 Max and 32GB of unified memory. The demo I showed probably took about 10-15 seconds real-time to generate. Different samplers look different during the process, and some are faster than others. I think DDIM is generally a good fast sampler.
I installed Git and tried installing the dependencies, and I get this error:
Reddit may have formatted it wrong. You should first try opening “Window” > “Toggle System Console” and run the install again. That will show you the actual problem that needs to be solved.
Looks super useful
I love progress.
Oh my God. This is amazing.
Amazing!
Ho gods above
AI created the visual based on the subject you entered?
Yes
Cool AF
I am glad that 3 years ago I chose Blender to be my tool.
Dang, probably the coolest application of generated textures I've seen.
Makes me wonder how long until it goes from just a texture, assuming that's all this is, to something that has roughness and normal maps too. I bet it could be taught, but I imagine that's a lot harder as there are a lot fewer material sources that also include depth and roughness.
I want to play with that. You can continue training with a custom dataset and add custom vocabulary, so I’m going to try training it on some textures with multiple maps.
truly groundbreaking potential here.
It should be possible to go larger, but you will need a considerable amount of VRAM. For reference, the largest image I’ve generated on an M1 Max with 32GB of unified memory (so hypothetically the GPU could use all of that) was 1024x768. Memory consumption was around 23GB.
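For rough planning you can extrapolate from that data point, with the big caveat that linear-in-pixels scaling is my assumption; attention layers make real memory use grow faster than this, so treat it as an optimistic floor:

```python
# Back-of-envelope only: assumes memory scales linearly with pixel count,
# calibrated to the 1024x768 ~= 23 GB observation above. Real usage grows
# faster than linear, so this is an optimistic lower bound, not a promise.
REF_PIXELS = 1024 * 768
REF_GB = 23.0

def rough_vram_gb(width, height):
    return REF_GB * (width * height) / REF_PIXELS
```

For example, `rough_vram_gb(512, 512)` comes out around 7.7 GB under this crude model.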
I've installed it and it appears to work, except after I click OK and it whirrs for a bit, nothing happens; no texture appears or anything. Any tips?
Open “Window” > “Toggle System Console” and try again. A more specific error message will show in the console so you can figure out what needs fixing.
wow dude. amazing! will give it a shot
This is so freaking amazing!
The next update fixes the flipped images. For now you can use a Mapping node to flip the texture.
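If you'd rather correct the pixels than the shader graph, a vertical flip of the raw image buffer is a one-liner; a sketch assuming the image is already a NumPy array (the sample image is hypothetical):

```python
import numpy as np

# Hypothetical 2x2 RGB image: black top row, white bottom row.
img = np.array([[[0, 0, 0], [0, 0, 0]],
                [[255, 255, 255], [255, 255, 255]]], dtype=np.uint8)

flipped = img[::-1]   # reverse the row order: a vertical flip (same as np.flipud)
```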
Hey
Yes, this is looking very good!!! I'm keeping an
Jesus this is great
I got it to work but all I was left with was an image of my 3rd world graphics card
This seems really cool, I've already been using stable diffusion for my textures and stuff but this makes things way easier.
Look at the troubleshooting section in the addon, it has the fix for the basicsr error.
I get this message when validating
Hello, I don't know if anyone has asked this, but generating the image seems to only be using my CPU and RAM rather than GPU and VRAM. In task manager it's using 30-40% of my CPU and 8 GB of RAM. It still works, but I'm assuming that using my 3080 with 12 GB of VRAM would make it much faster. Is there a way to fix this?
Hey, sorry for posting on an older thread, but I can't seem to find the answer I'm looking for.
You need version 0.0.7, which added upscaling.
Hello, thank you very much for this
[deleted]
Wtf that's hella dope
fck
Awesome man thanks!!
This is insane, ty
RuntimeError: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 11.00 GiB total capacity; 4.56 GiB already allocated; 4.32 GiB free; 4.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
That is a normal amount of VRAM. I was able to run on an NVIDIA GPU with 6GB by reducing the image size (256x256 for example) and disabling full precision. An M1 Max with 32GB unified memory can run at default settings and higher.
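On the error message's own suggestion: `max_split_size_mb` is passed through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which must be set before PyTorch first touches CUDA. A sketch (the value 128 is an arbitrary example, not a recommendation):

```python
import os

# Must run before the first `import torch` in the process, or the CUDA
# caching allocator won't pick it up. 128 MB is just an example split size.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # imported afterwards so the setting takes effect
```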
Amazing