
Thank you. I just read one of your posts, and one thing I have discovered using Midjourney is that so many words are banned. I have already found that a bit limiting, but I'll try to work around it somehow, I guess by coming up with more creative words.

With a local install of Stable Diffusion, you can be a lot more artistic. Some people have even trained their own models on different content than the core dataset.

With Midjourney, you can of course use a thesaurus to get around particular words, but they make it clear in their terms of service that they do not like that.

If, for example, you wanted a historical Athenian statue or bust, you'd have a hard time getting it represented in Midjourney the way such statues are displayed in public squares throughout Europe.

Thank you for this valuable information. I have a few things to ponder now 🙂


Yes, that I understand from my experience so far.

Feel free to ask any questions about the tech; the AI Art & Information community has some really knowledgeable people!

The Stable Diffusion subreddit is also a great place to get more information on the topic, from both an artistic and a technical perspective.

Great, thanks again. I'm not a very technical person, but I know some basic programming at least, so I guess I could get into this more if I feel drawn to it.

No need to know a lot about programming (I've only got a bit of Python under my belt). It's enough to be able to understand the error messages if you get any while running the AI on your local machine, and there's always Google!

Most of the applications are written in Python and then use a web interface that sits on top of the command line. I like the Automatic1111 distribution, but it can be a bit annoying (and slow) to set up. If you're running macOS on a laptop with an M1 or M2 chip, I've been experimenting with Diffusion Bee, which is about as simple as it gets but lacks a lot of the advanced features other options have available.

It is really slow on the Mac compared to my gaming PC, but being able to create using very little power at all is really flexible, even away from an Internet connection.
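If you ever want to see what the Python side looks like without a web UI, here's a rough sketch using the Hugging Face diffusers library (just one common way to run Stable Diffusion from a script, not necessarily what any of these apps use under the hood); the model name and device are placeholders you'd swap for your own setup:

```python
# Rough sketch: generating an image with Stable Diffusion from a Python script,
# using the Hugging Face diffusers library. The model id and device below are
# example placeholders; adjust them for whatever checkpoint/hardware you have.
import torch
from diffusers import StableDiffusionPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"  # example checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"   # use "mps" on Apple Silicon

pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to(device)

# A single text prompt in, a PIL image out.
image = pipe("marble bust of an ancient Athenian philosopher, museum photo").images[0]
image.save("bust.png")
```

On a Mac the same script can target the "mps" device instead of "cuda", but as with Diffusion Bee, expect it to be noticeably slower than a dedicated GPU.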

Lots of people are getting into this stuff for the first time, so there are lots of questions and answers to be found not only on Reddit, but also in the discussion sections of the GitHub repositories where the code is available.

Thank you for your generosity in providing all this information. Really appreciated. I now have something new to explore 🙂

I wish you a nice day 🌿