Tutorials - Windows

March 2024 - Stable Diffusion with AMD on windows -- use zluda ;)

SD is so much better now using ZLUDA! Here is how to run Automatic1111 with ZLUDA on Windows and get all the features you were missing before!

**Only GPUs that are fully or partially supported by ROCm can run this. Check which category yours falls into before starting!**
Check if your GPU is fully supported on Windows here: https://rocm.docs.amd.com/projects/install-on-windows/en/develop/reference/system-requirements.html

Links to files and things:
Git for Windows: https://gitforwindows.org/
Python: https://www.python.org/downloads/
ZLUDA: https://github.com/lshqqytiger/ZLUDA/releases/
AMD HIP SDK: https://rocm.docs.amd.com/projects/install-on-windows/en/develop/

Add to PATH both the HIP SDK and wherever you copied the ZLUDA files:
%HIP_PATH%bin
C:\path\to\zluda\folder

Start the Automatic1111 web UI:
webui.bat

Copy the ZLUDA cublas and cusparse files to ...\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\lib
Delete cublas64_11 and cusparse64_11.
Rename the ZLUDA files: cublas.dll to cublas64_11.dll, and cusparse.dll to cusparse64_11.dll.

Back in the terminal, run the web UI:
webui.bat --use-zluda

If you have issues with cuDNN, open ...\stable-diffusion-webui-directml\modules\shared_init.py and add this after def initialize (see the sketch below):
torch.backends.cudnn.enabled = False

If you have a GPU that is not fully supported by the HIP SDK, follow these instructions: https://github.com/vladmandic/automatic/wiki/ZLUDA
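
A minimal sketch of what that cuDNN edit might look like inside shared_init.py; the function body here is abbreviated, and the exact contents of initialize() in your copy of the fork will differ:

import torch

def initialize():
    # ZLUDA does not provide a working cuDNN, so stop PyTorch from calling
    # into it (the workaround from the video; only needed on cuDNN errors).
    torch.backends.cudnn.enabled = False
    # ... the rest of the original initialize() body continues unchanged ...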

Watch Video on YouTube

Published on: 3/5/2024

AMD GPU + Windows + ComfyUI! How to get comfyUI running on windows with an AMD GPU!

Happy Holidays! ComfyUI on Windows, running on an AMD GPU!
Install Git: https://gitforwindows.org/
Install Miniconda for Windows (remember to add it to PATH!): https://docs.conda.io/projects/miniconda/en/latest/
Complete steps are coming after the holidays calm down a bit; for now you will have to actually watch the whole 6 minutes of video! ;-p

Watch Video on YouTube

Published on: 12/24/2023

Run the newest LLM's locally! No GPU needed, no configuration, fast and stable LLM's!

This is crazy, it can run LLMs without needing a GPU at all, and it runs them fast enough to be usable! Set up your own AI chatbots, AI coder, AI medical bot, AI creative writer, and more!

Install on Linux or Windows Subsystem for Linux 2:
curl https://ollama.ai/install.sh | sh
Install on Mac: https://ollama.ai/download/mac

Pull and run a model:
ollama run [modelname]
Pull and run a 13b model:
ollama run [modelname]:13b
Exit out of a running model:
/bye

Ollama website: https://ollama.ai/
Ollama GitHub: https://github.com/jmorganca/ollama

Start Ollama if it is not running:
sudo systemctl start ollama
Stop Ollama if you want to for some reason:
sudo systemctl stop ollama
Stop Ollama from starting on system boot:
sudo systemctl disable ollama
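
If you want to use a running Ollama server from your own scripts, something like this should work. A minimal sketch, assuming Ollama is listening on its default port 11434 and the model has already been pulled; the model name and prompt are placeholders:

import json
import requests

# Stream a completion from the local Ollama REST API; the server sends
# one JSON object per line until it reports "done".
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?"},
    stream=True,
    timeout=120,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)
    print(chunk.get("response", ""), end="", flush=True)
    if chunk.get("done"):
        print()
        break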

Watch Video on YouTube

Published on: 12/9/2023

Install Stable Diffusion on windows in one click! AMD GPU's fully supported!

Want to run Stable Diffusion on Windows with an AMD GPU? Install and run Shark from Nod.ai in one click. It is the simplest way to get Stable Diffusion up and running, even on AMD.
Page to download the installer: https://github.com/nod-ai/shark/releases/tag/20231009.984
Direct link to the installer: https://github.com/nod-ai/SHARK/releases/download/20231009.984/nodai_shark_studio_20231009_984.exe

Watch Video on YouTube

Published on: 12/4/2023

How to run your own VOIP server. Open source voice server for friends, gaming, and more!

In this video we will go over step-by-step instructions for installing and running your own VOIP server: an incredibly stable voice server that can be used for chatting, gaming, friends, family, and more. Ultra-low latency for clear communication, highly customizable, and able to support hundreds of concurrent users. Client software is available on Windows/Mac/Linux/iOS/Android, so users should be able to connect to your VOIP server from just about any device.

Server installation:
sudo apt update
sudo apt upgrade
sudo apt-get install mumble-server
sudo dpkg-reconfigure mumble-server

Inside reconfigure:
Yes to running on boot
Yes to higher priority
Set the [SuperUser password]

Edit the Mumble configuration file:
sudo nano /etc/mumble-server.ini
Lines to consider editing inside the configuration file:
welcometext
port
serverpassword
users
messageburst
messagelimit
public server registration (only if you really want it to be public)
sslCert
sslKey
autobanAttempts
autobanTimeframe
autobanTime

To connect to your server from the Mumble client:
Server -> Connect -> Add New
address = [ip of server] or [domain or subdomain set up through DNS]
port = [64738]
username = [pick a username]
label = give it a label so you know which server you are connecting to

To connect as SuperUser:
username = SuperUser
password = [super user password]

To allow users to connect from a domain rather than your home IP address, set up a DNS entry pointing to your home [IP address].

Home router port forwarding: find port forwarding and set up a rule as follows (a quick reachability check is sketched below):
from anywhere on port [64738] to [serverIP] on port [64738], accept

You and the friends you give access to should now be able to connect to your Mumble server using the DNS entry you created! You can also install the server on Windows as a Windows service. For more installation instructions on different operating systems, see the Mumble documentation here: https://wiki.mumble.info/wiki/Installing_Mumble
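
Once DNS and port forwarding are in place, here is a quick way to check reachability. A minimal sketch; voip.example.com is a placeholder for the DNS name you created, and it only tests the TCP side (Mumble also uses UDP on the same port for voice), so run it from a machine outside your LAN:

import socket

# Try to open a TCP connection to the Mumble control port.
try:
    with socket.create_connection(("voip.example.com", 64738), timeout=5):
        print("TCP port 64738 is reachable.")
except OSError as exc:
    print(f"Could not reach the server: {exc}")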

Watch Video on YouTube

Published on: 11/30/2023

How to convert civitai models to ONNX! AMD GPU's on windows can use tons of SD models!

I had numerous folks in the comments asking how to convert models from Civitai. I looked at several different ways of doing this, and spent days fighting through broken programs, bad code, and worse documentation, only to find out the Olive tab already worked just fine. For anyone who has no idea what is going on: this uses the Automatic1111 fork for DirectML on Windows 11. See how to install the Automatic1111 fork for DirectML here: https://youtu.be/Db0HuRY2p84

In the Olive tab, optimize from the Optimize Checkpoint tab (don't click on the Optimize ONNX Model tab at all):
copy the entire filename and paste it in at checkpoint name
copy the filename without an extension for both the ONNX model folder name and the output folder name
hit optimize

After a while it should be optimized (a quick way to test the converted model outside the web UI is sketched below). If it complains about a missing config.json file for the safety checker, sorry, I have not been able to get those to work (looking at you, SD 2-1). Also, while most models seem to work fine, I did find some that constantly created "grainy, washed out, faded" looking images. I have not found any way to make those models work properly on ONNX yet; I suspect it is a side effect of the ONNX conversion. Hope this helps! Happy holidays to everyone!
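
One way to test a converted model outside the web UI, if you want a second opinion on the conversion. A minimal sketch, not from the video: it assumes pip install optimum[onnxruntime] onnxruntime-directml, and "path/to/optimized-model" is a placeholder for the output folder the Olive tab produced:

from optimum.onnxruntime import ORTStableDiffusionPipeline

# Load the optimized ONNX pipeline on DirectML so it runs on an AMD GPU.
pipe = ORTStableDiffusionPipeline.from_pretrained(
    "path/to/optimized-model",
    provider="DmlExecutionProvider",
)
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("test.png")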

Watch Video on YouTube

Published on: 11/25/2023

Let's make a dual-boot PC! Windows + Ubuntu 22.04 Desktop!

This goes over creating a dual-boot PC that lets you boot into either Windows or Ubuntu Desktop. **If you are formatting drives, ensure that you choose the correct drive to install Ubuntu Desktop onto!**

Requirements:
An empty USB stick larger than 6 GB
The Ubuntu Desktop 22.04 ISO image (see the checksum sketch below)
The Rufus.exe file

Recommended: a drive entirely dedicated to Linux, ideally 1 TB or larger.

Ubuntu downloads: https://ubuntu.com/download/desktop
Rufus tool for creating a bootable USB drive: https://rufus.ie/downloads/
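
Before writing the ISO to USB, it is worth verifying its checksum. A minimal sketch; the filename and expected hash are placeholders, and the official hash comes from the SHA256SUMS file published alongside the ISO on ubuntu.com:

import hashlib

EXPECTED = "paste-the-official-sha256-here"

# Hash the ISO in 1 MiB blocks so the whole file never has to sit in memory.
h = hashlib.sha256()
with open("ubuntu-22.04-desktop-amd64.iso", "rb") as f:
    for block in iter(lambda: f.read(1 << 20), b""):
        h.update(block)

print("OK" if h.hexdigest() == EXPECTED else "Checksum mismatch; re-download the ISO.")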

Watch Video on YouTube

Published on: 11/12/2023

Beginners guide using automatic1111 and stable diffusion with AMD gpus! Tips, tricks, and gotchas!

This is a simple beginner's tutorial for using Stable Diffusion with AMD graphics cards running Automatic1111. We will learn how to use Stable Diffusion, which things work, and which things do not work properly, so that you can enjoy creating images and don't spend hours trying to get broken features to work.

Commands from the video:
conda info --envs
conda activate [environment name]
webui.bat --onnx --backend directml

If you want to put command-line arguments into your webui.bat file, use this:
set COMMANDLINE_ARGS=--onnx --backend directml

Common command-line arguments:
--lowram
--lowvram
--medvram
--opt-sub-quad-attention

Full set of command-line arguments: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings

Watch Video on YouTube

Published on: 11/5/2023

AMD GPU's are screaming fast at stable diffusion! How to install Automatic1111 on windows with AMD

*Update March 2024 -- there is a better way to do this:* https://youtu.be/n8RhNoAenvM
*Alternatives for Windows:*
Shark - https://youtu.be/4mcdTwFUxMQ?si=COmlj9A1NQgNuZK0
ComfyUI - https://youtu.be/8rB7RqKvU5U?si=pKMK2ss5-FSuo-f-

Getting Stable Diffusion running on AMD GPUs used to be pretty complicated. It is so much easier now, and you can get amazing performance out of your AMD GPU!

Download the latest AMD drivers, then follow this guide: https://community.amd.com/t5/ai/updated-how-to-running-optimized-automatic1111-stable-diffusion/ba-p/630252

Install Git for Windows
Install Miniconda for Windows (add the directory to PATH!)
Open the Miniconda command prompt and run:
conda create --name Automatic1111_olive python=3.10.6
conda activate Automatic1111_olive
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
cd stable-diffusion-webui-directml
git submodule update --init --recursive
webui.bat --onnx --backend directml

If you get an error about "socket_options":
venv\Scripts\activate
pip install httpx==0.24.1

Great models to use:
prompthero/openjourney
Lykon/DreamShaper
If you are looking for models on Hugging Face, they need to have text-to-img libraries; check ONNX.

Download the model from the ONNX tab. Then go to the Olive tab and use Optimize ONNX Model: the ONNX model ID is the same one you used to download, and the input and output folder names should match the location the model downloaded to. Optimization takes a while! A quick check that the DirectML backend is visible to ONNX Runtime is sketched below. Come back and I will have some other videos with tips and tricks for getting good results!
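
To confirm ONNX Runtime can actually see the DirectML backend inside the conda environment, a minimal sketch (it assumes the web UI's dependencies, including onnxruntime-directml, are already installed):

import onnxruntime

# "DmlExecutionProvider" is the DirectML backend that drives AMD GPUs on Windows.
providers = onnxruntime.get_available_providers()
print(providers)
assert "DmlExecutionProvider" in providers, "DirectML provider not found"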

Watch Video on YouTube

Published on: 11/4/2023