Hm, I was expecting to see an AI generated texture applied to a 3d model. This is a great idea though. I’ll stay tuned!
This is what we are building to! Right now I've just completed the bridge between Blender and Stable Diffusion, which took a couple of days. The next item on my list is to add this functionality.
The coolest thing about Stable Diffusion is that you can generate multiple images of the same prompt, and they all come out differently! We plan on adding a way to cycle through multiple textures like that until you find the one that works for you.
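For readers curious how "same prompt, different results" works in practice, here is a minimal sketch (illustrative only, not the add-on's actual code) that generates several candidate textures from one prompt by varying the random seed, assuming the Hugging Face diffusers library:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative only: the add-on's internals may differ.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "rusty metal plate, flat material texture"

# Same prompt, different seeds -> different candidate textures to cycle through.
for seed in range(4):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"texture_candidate_{seed}.png")
```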
Glad to see Polyfjord’s call for this implementation was heard and is being actively worked on so soon. Great results thus far; looking forward to using this add-on when complete!
His call was the spark for this project; my brother showed me the video he made and I just had to jump on it!
Would be awesome to do something like this in Blender:
Oh wow, great idea! I'll look into this
Looks awesome, good job there 👌
Thank you for the kind words!
There's even a way to make tiling textures with
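The link the commenter had in mind is missing, but one widely shared community trick for tileable outputs is to patch the model's convolutions to use circular padding, so the image wraps at its edges. A rough PyTorch sketch, assuming a diffusers pipeline object named `pipe` (this may or may not be what the commenter meant):

```python
import torch

def make_seamless(pipe):
    # Community trick, not an official Stable Diffusion feature: circular
    # padding makes convolutions wrap around the image borders, which tends
    # to produce textures that tile without visible seams.
    for model in (pipe.unet, pipe.vae):
        for module in model.modules():
            if isinstance(module, torch.nn.Conv2d):
                module.padding_mode = "circular"
```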
This is definitely cool but maybe not super useful just yet. Correct me if I'm wrong, but if it's using local GPU rendering and you have less than 10 GB of VRAM, you're going to be limited to generating 512 x 512 images?
I believe so, yes. However, most textures are square (1:1 aspect ratio), so it's just a matter of using an upscaling AI to increase the resolution.
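As an illustration of that upscaling step, here is a minimal sketch using the Stable Diffusion x4 upscaler from the diffusers library; this is one of several possible upscalers and not necessarily what the add-on will use:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# Take a 512x512 texture generated earlier and upscale it 4x.
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("texture_512.png").convert("RGB")
high_res = upscaler(
    prompt="rusty metal plate, flat material texture",
    image=low_res,
).images[0]
high_res.save("texture_2048.png")
```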
Where would one check back to see updates on progress?
Looking for developers to help?
Absolutely! It's all written in Python and uses Blender's Python API quite heavily. I could definitely use the help!
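To give a sense of what "using Blender's Python API" can look like here, a hedged sketch (function name and paths are illustrative, not the add-on's actual code) that loads a generated image and wires it into a material's Base Color:

```python
import bpy

def apply_generated_texture(obj, image_path, material_name="SD_Texture"):
    """Illustrative helper: load a generated image and hook it into a new
    node-based material on the given object."""
    image = bpy.data.images.load(image_path)

    mat = bpy.data.materials.new(name=material_name)
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    # Feed an Image Texture node into the default Principled BSDF.
    tex_node = nodes.new("ShaderNodeTexImage")
    tex_node.image = image
    bsdf = nodes.get("Principled BSDF")
    links.new(tex_node.outputs["Color"], bsdf.inputs["Base Color"])

    # Assign the material to the object.
    if obj.data.materials:
        obj.data.materials[0] = mat
    else:
        obj.data.materials.append(mat)

# Example: apply to the currently selected object.
apply_generated_texture(bpy.context.active_object, "/tmp/generated_texture.png")
```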
This is exactly what I was looking for yesterday! What a coincidence! What are the minimum specs to run this?
Are we gonna need to use our own cards, or can we use the Stable Diffusion page?
Currently you can only use local graphics cards or CPUs to render the textures; however, we will add this functionality in the near future. Thank you for the question!
Great concept, look forward to seeing where this goes. Though considering how large SD weights are, you should really add an option to point this add-on to a local copy of SD. Storing a separate copy of SD for each app that builds on top of it would be wildly impractical.
Hey, a version of img2img would be cool, so you could maybe input a noise image you've created or something. I've been using Centipede Diffusion for that purpose for the last few months, and choosing the overall layout of the texture before applying diffusion is a great help. Without an input image, I have not managed to get a single texture out of SD that is flat and easily made tileable.
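For context, img2img starts diffusion from an existing image instead of pure noise, which is how a painted layout can steer the result. A minimal sketch with the diffusers img2img pipeline (model name and parameters are just examples):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A rough layout the artist painted (or generated noise) to fix the composition.
layout = Image.open("layout.png").convert("RGB").resize((512, 512))

texture = pipe(
    prompt="weathered cobblestone, flat material texture",
    image=layout,
    strength=0.75,        # how far diffusion may drift from the input layout
    guidance_scale=7.5,
).images[0]
texture.save("texture_from_layout.png")
```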
!remindme 6 months
Might be a good idea to figure out which modifiers help make textures. Like a checkbox that will add something like ‘UV map, material texture, hd texture’ to the prompt.
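A checkbox like that could boil down to appending a fixed suffix to the user's prompt; a tiny illustrative sketch (the names are hypothetical, not part of the add-on):

```python
# Hypothetical prompt helper: append texture-oriented modifiers when enabled.
TEXTURE_MODIFIERS = "UV map, material texture, hd texture"

def build_prompt(user_prompt: str, texture_mode: bool = True) -> str:
    if texture_mode:
        return f"{user_prompt}, {TEXTURE_MODIFIERS}"
    return user_prompt

print(build_prompt("mossy stone wall"))
# -> "mossy stone wall, UV map, material texture, hd texture"
```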
Looks cool! Will there be Linux support in the future?
[deleted]
You could google it, maybe this helps....SMH
Unless I'm mistaken, it sounds like this has barely left the idea phase and you're making a post about an add-on that's still very far from complete. I'm not sure why you're posting about something that's not done yet.
In the open-source software development community it's very common to post and share projects that are in early development. It gives new developers a chance to join in and gain experience, and it gives the project more ideas and inspiration for new features and expansion.