|30 Sep 2023|
|WAS||Yeah it's the photoshop method||04:25:54|
|WAS||just, not photoshop||04:26:03|
|WAS||I tried this before, but the blending modes in the existing stuff I use in WAS-NS aren't right||04:26:39|
|WAS||not exact calcs, so the effect was just wrong||04:26:44|
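The "Photoshop method" being discussed — a high-pass layer blended back over the image in Overlay mode — can be sketched like this. The exact blend formula is the point of the exchange above: approximate blending modes give the wrong effect. Function names are illustrative, not actual WAS-NS code, and the sketch assumes float images in [0, 1]:

```python
# Sketch of a Photoshop-style high-pass sharpen: blur, take the
# high-pass residual around mid-grey, then composite it back with
# the exact Overlay formula. Illustrative only, not WAS-NS code.
import numpy as np
from scipy.ndimage import gaussian_filter

def overlay(base, top):
    # Exact Photoshop Overlay: 2ab where base < 0.5,
    # else 1 - 2(1 - a)(1 - b)
    return np.where(base < 0.5,
                    2.0 * base * top,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - top))

def high_pass_sharpen(img, radius=2.0):
    # img: float array in [0, 1]; radius: Gaussian blur sigma
    low = gaussian_filter(img, sigma=radius)
    # high-pass layer, neutral at mid-grey (0.5)
    high = np.clip(img - low + 0.5, 0.0, 1.0)
    return np.clip(overlay(img, high), 0.0, 1.0)
```

Using a linear blend (e.g. plain addition) instead of the piecewise Overlay formula is the kind of "not exact calcs" shortcut that makes the effect come out wrong.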
|ericrollei||I'm glad you are working on more sharpening routines.||04:27:12|
|ericrollei||that looks really good||04:27:19|
|Tommy_plug joined the room.||04:28:42|
|ericrollei||It seems like some sharpening is needed between upscaling passes. Anything that makes a halo early on will get amplified, so avoiding that is super important||04:29:15|
|ericrollei||As a side question - it seems like FreeU B2 settings above 1 can reduce sharpness of the output - at least in my workflow. But I've not played with the advanced settings too much. Do you know what settings are best to avoid that?||04:34:58|
|WAS||That could be because it's multiplying above 1, which is throwing values out of original range, and muddling stuff||04:51:25|
|WAS||Since it's a vision process, maybe at this point, if the image is above "1" in whatever scale it is in before patching, it could just really degrade things, and any benefit is a random fluke and preference. When done on the input blocks I feel this is still the case, but there is more inference process to mask it and get something out of it within the original range of the image.||04:52:41|
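The range argument above can be shown with a toy demo: scaling a feature map by a FreeU-style backbone factor b > 1 pushes values further outside their original range, and nothing downstream is guaranteed to renormalize them. The array here is a random stand-in, not an actual U-Net activation:

```python
# Toy illustration: multiplying a feature map by b > 1 grows its
# peak magnitude, drifting values out of the original range.
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 64)).astype(np.float32)  # stand-in feature map

for b in (1.0, 1.2, 1.5):
    scaled = feat * b
    print(f"b={b}: peak magnitude {np.abs(scaled).max():.3f}")
```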
|WAS||Vivid sharpen made the face-fix area really show through||04:53:12|
|WAS||Pretty high strength for the resolution, and still not really immediately "halo-y" looking like the high-pass methods in WAS-NS||04:53:48|
In reply to @wasasquatch:matrix.org: Yeah, the whole process of gen AI is full of lots of routines and techniques like this that make small improvements (maybe), or could mess things up if done in the wrong combination with other things. But on the other hand, lots of small gains can add up at the end if done well. FreeU seems to help with things like composition and body position, so I'll probably still use it, but I had to fine-tune settings to keep the crisp detail that I love.
In reply to @wasasquatch:matrix.org: Thanks for that, just downloaded, will play with that, and reminds me that I need to check the inpainting as well.
|@spurlos:midov.pl left the room.||08:18:21|
In reply to @wasasquatch:matrix.org: Initial testing ... nice results so far. I'm wondering: does the strength correlate directly to the blur radius used?
|苗峰 joined the room.||08:44:44|
In reply to @ericrollei:matrix.org: Yeah, it's used for the GaussianBlur radius.
|WAS||I figured strength would make more sense since we're not seeing the visual process (the blur in PS for example).||09:04:26|
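The strength-to-radius mapping described above can be sketched as follows. Only the idea that "strength" is passed straight through as the GaussianBlur radius comes from the chat; the function name and the blend (a plain unsharp mask) are illustrative stand-ins, not the actual vivid-sharpen code:

```python
# Sketch: a single "strength" control drives the hidden Gaussian blur
# radius, since the user never sees the blur step (unlike in Photoshop).
# Unsharp-mask blend used here as a stand-in for the real node's blend.
import numpy as np
from PIL import Image, ImageFilter

def vivid_sharpen(img: Image.Image, strength: float,
                  amount: float = 1.0) -> Image.Image:
    # strength doubles as the GaussianBlur radius
    blurred = img.filter(ImageFilter.GaussianBlur(radius=strength))
    a = np.asarray(img, dtype=np.float32)
    b = np.asarray(blurred, dtype=np.float32)
    # add back the high-frequency residual, scaled by amount
    out = np.clip(a + (a - b) * amount, 0, 255).astype(np.uint8)
    return Image.fromarray(out)
```

Exposing one "strength" knob instead of separate radius/amount controls matches WAS's point: when the intermediate blur isn't visible, a single perceptual parameter is easier to reason about.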
|Dekita RPG joined the room.||13:40:28|
|hironow joined the room.||15:14:13|
|BVH||HLL 5.5 goes hard [when mixed with appropriate models], needs a lot of processing, but the results are nice. vpred sure is nice, I wish there was a general anime vpred model like the furries have||15:57:56|