Nlat98
  1. Please consider adding a book by Italo Calvino to your list. I highly recommend Invisible Cities. It is short enough to read in a day, but endlessly re-readable. It is a fascinating attempt to transfer non-propositional knowledge through written language. What does it feel like to be in a city? How would you describe a city's essence? Listing all of the buildings, bricks, windows, people, etc. does not suffice. This book is an attempt to address that problem.

  2. I have a general DreamBooth question for anyone who's done this before. How good is this at training an art style as opposed to a character or concept?

  3. I have not seen anything about DreamBooth for styles. Currently trying to use OP's colab to train a style; will share results.

  4. Thanks. I have since been advised that styles work better as a Textual Inversion, but please let me know how your experiment goes!

  5. https://www.reddit.com/r/StableDiffusion/comments/xpwcjh/can_you_use_dreambooth_to_train_styles_sorta_full/

  6. I trained DreamBooth (using

  7. Wow. Any chance you'll publish the script?

  8. Sure, here is the image fading script for making the init images:

  9. Thank you kind sir. Will report back my own progress!

  10. I also have been playing around with using these fading image combos to train new textual inversion concepts. When it works, it works very well.

  11. I decided to try training a new concept on a series of images that depict the smooth blending of two images. Scroll through the training images.
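A fading series like this can be generated with Pillow. This is a minimal sketch of the idea only, not the script shared in comment 8; the filenames, size, and step count are placeholders:

```python
# Minimal sketch: build a series of frames that fade from one image to another.
# Paths, size, and step count are placeholders, not the original script's values.
from PIL import Image

def fade_series(path_a, path_b, steps=10, size=(512, 512)):
    """Return `steps` frames blending image A into image B."""
    a = Image.open(path_a).convert("RGB").resize(size)
    b = Image.open(path_b).convert("RGB").resize(size)
    # alpha runs from 0.0 (pure A) to 1.0 (pure B) across the series
    return [Image.blend(a, b, i / (steps - 1)) for i in range(steps)]

# Usage (hypothetical files):
#   for i, frame in enumerate(fade_series("a.png", "b.png")):
#       frame.save(f"fade_{i:02d}.png")
```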

  12. Did you use a notebook or your own local installation? Local installations seem to want .pt files, not the (.bin, etc.) files Hugging Face provides in that library.

  13. Couldn't get it to work because it would spit out "AttributeError: 'AutoencoderKLOutput' object has no attribute 'sample'".

  14. I used the colab to make three styles yesterday, but since this afternoon I have been getting that error as well. Maybe one of the dependencies updated and changed something crucial? Not sure what's going on.
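For what it's worth, that AttributeError usually points to a diffusers API change: `vae.encode()` began returning an `AutoencoderKLOutput` whose distribution sits under `.latent_dist`, so older colab code calling `.sample()` directly on the return value breaks. A hedged sketch of the changed call chain, using stand-in classes that only mimic the shapes (the real ones live in diffusers):

```python
# Stand-in classes mimicking the suspected diffusers API change behind the error.
class DiagonalGaussianDistribution:
    def __init__(self, mean):
        self.mean = mean
    def sample(self):
        return self.mean  # the real version draws a noisy sample

class AutoencoderKLOutput:
    def __init__(self, latent_dist):
        self.latent_dist = latent_dist  # note: no .sample attribute here

def encode(pixels):
    """Newer vae.encode() returns an output wrapper, not the distribution."""
    return AutoencoderKLOutput(DiagonalGaussianDistribution(pixels))

out = encode([0.1, 0.2])
# Old colab code: out.sample()  -> AttributeError, as reported above.
# Fixed call chain goes through .latent_dist:
latents = out.latent_dist.sample()
```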

  15. I don't think prompt weighting works in this notebook. I tried a very long prompt that ended in the word 'nightmare', and then tried the same prompt with the first bit and the word nightmare weighted equally, but the results were identical.

  16. I was afraid of that... I tested an online generator today that worked a lot better with weights.

  17. I don't think individual weights work with that link either... But I found a colab that lets you weight prompts! (The UI is set up to only mix two prompts, but it looks like it will be easy to edit the code to allow for more.) Scroll down to the image of the bird/dog.
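Colabs that mix two weighted prompts typically take a convex combination of the prompts' text embeddings. A toy sketch with placeholder vectors (a real pipeline would get these from the CLIP text encoder); extending to more than two prompts is just a longer weighted sum:

```python
# Toy sketch of weighted prompt mixing via embedding interpolation.
# The 4-d vectors are placeholders; real embeddings come from a text encoder.
def mix_embeddings(emb_a, emb_b, weight_a):
    """Blend two embedding vectors; weight_a in [0, 1], weight_b = 1 - weight_a."""
    weight_b = 1.0 - weight_a
    return [weight_a * a + weight_b * b for a, b in zip(emb_a, emb_b)]

bird = [1.0, 0.0, 0.5, 0.2]   # placeholder "bird" embedding
dog  = [0.0, 1.0, 0.5, 0.8]   # placeholder "dog" embedding

half_and_half = mix_embeddings(bird, dog, 0.5)  # equal-weight blend
```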

  18. Thank you so much for the help. It is interesting that a # appeared there. As I understand it, even if I accidentally put it there, it should reverse everything when I reload the page.

  19. Whoever made this colab can make changes, which will apply on your screen when you reload the notebook. I guess they decided to comment out that line, or maybe it was a mistake on their part, who knows. Happy to help!

  20. Oh, it makes sense now. Is it possible to copy this colab so only I can make changes, or something like that?

  21. Yes (File > Save a copy in Drive), but then if there are any cool future updates (like new 1.5 weights being added), you will miss them. If you keep using this version, you will get those updates automatically.

  22. initial generation prompt:

  23. prompt: deteriorating polaroid collage. no people. dark landscapes and jagged black scribbles. creatures. the surface of the moon. the most disturbing image ever created.

  24. when do you get this error?

  25. HOLY CRAP! I was experimenting just now, to see where the trouble was, and it flipping WORKED! Thank you so much, Nlat98! You're a good person!

  26. Glad it worked out! Make and share some cool stuff!

  27. would love to see all 30 originals if you're willing to share

  28. Super cool idea. I see how you lightened the colors of the reflection; maybe try exaggerating that a bit? img2img kinda ignores fine details like lines or very subtle color differences unless you skip like 50% or more of the diffusion steps.

  29. I've just been using img2img for variations at a strength of 0.9, and the variations are incredibly high quality:

  30. This is probably how Midjourney does variations. OP's post is more akin to using a lone image prompt in Midjourney, as opposed to clicking the variation button on an already generated image. I like your idea of using high strength for img2img to generate subtle variations of an image, but OP's post, using image embeddings as a prompt, has vastly different possibilities.
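The "skip 50% or more of the diffusion steps" point above can be made concrete: in diffusers-style img2img pipelines, `strength` sets what fraction of the denoising schedule is actually re-run on the init image. A hedged sketch of that arithmetic (assuming that implementation):

```python
# Sketch of how diffusers-style img2img maps `strength` to the number of
# denoising steps skipped vs. re-run on the init image.
def img2img_schedule(num_inference_steps, strength):
    """Return (steps_skipped, steps_run) for a strength in [0, 1]."""
    steps_run = min(int(num_inference_steps * strength), num_inference_steps)
    steps_skipped = num_inference_steps - steps_run
    return steps_skipped, steps_run

# strength 0.9 re-runs 45 of 50 steps, keeping little of the original image
print(img2img_schedule(50, 0.9))  # -> (5, 45)
# at strength 0.5, half the schedule is skipped and fine detail survives better
print(img2img_schedule(50, 0.5))  # -> (25, 25)
```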

  31. Never used Hugging Face before; how do I get a user token? Will I have to pay for that?

  32. I suspect they're rolling out something that has gone sideways as a colleague can't run my prompt requests right now either (getting errors).

  33. Weird... I have gotten a few errors when trying to run prompts. Usually just resubmitting the same thing a few times eventually works.
