Wgpu Users

Topic: Async all the things

3 Jul 2020
@walther:kapsi.fi (Walther): but correct me if i'm wrong, wgpu compute stuff requires you to write your logic in a shader language instead of rust? [19:33:29]
@kvark:matrix.org (kvark): Walther: wgpu currently works with SPIR-V, it doesn't care how you are generating it. In the future, it will work with WGSL [19:36:03]
@kvark:matrix.org (kvark): so yeah, generally, you aren't writing the shaders in Rust, unless trying https://github.com/MaikKlein/rlsl which is sorta abandoned AFAIK [19:36:34]
@cwfitzgerald:matrix.org (cwfitzgerald): tbh I wouldn't want to be writing shaders in rust, that sounds kinda awful actually [19:37:40]
@cwfitzgerald:matrix.org (cwfitzgerald): maybe something rust inspired, but not rust itself [19:37:48]
@walther:kapsi.fi (Walther): oh for sure, for graphics it perhaps wouldn't make sense to write shaders in rust itself [19:38:54]
@walther:kapsi.fi (Walther): but i was thinking strictly in terms of compute - i've been writing some compute stuff in rust, and would like to keep it in rust because rust is fun and great, and would love to try and get it running on a gpu instead [19:40:39]
@walther:kapsi.fi (Walther): and yeah, i found rlsl as linked, as well as emu-core; both seem interesting and quite experimental [19:41:07]
@walther:kapsi.fi (Walther): one random thought i also had was that there's already great tooling for turning rust into webassembly, and wasm could maybe be an easier target to port - i wonder if there could be something like wasm-to-spirv compute some day, and whether that would end up being easier than compiling rust itself? [19:42:17]
@walther:kapsi.fi (Walther): in more abstract terms: rust has great support for massively parallel compute, including making sure you handle your memory correctly. in theory, that could make for a great match for gpus, right? and with helpers like rayon and others, splitting up that compute shouldn't be too difficult even across a ton of cores / compute units [19:44:07]
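The kind of data-parallel split Walther describes can be sketched on the CPU side in plain Rust using only `std::thread` (rayon's `par_iter_mut` would be the one-line equivalent); the kernel, sizes, and chunking below are purely illustrative:

```rust
use std::thread;

// Illustrative "kernel": square each element. On a GPU this would be one
// invocation per element; on the CPU we split the slice into chunks and
// hand one chunk to each worker thread.
fn kernel(x: f32) -> f32 {
    x * x
}

fn parallel_map(data: &mut [f32], workers: usize) {
    // Ceiling division so every element lands in some chunk; max(1) guards
    // against a zero-sized chunk when the slice is shorter than `workers`.
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        for part in data.chunks_mut(chunk) {
            // Scoped threads may borrow `data`, so no Arc/Mutex is needed.
            s.spawn(move || {
                for v in part.iter_mut() {
                    *v = kernel(*v);
                }
            });
        }
    }); // all worker threads are joined here
}

fn main() {
    let mut data: Vec<f32> = (0..8).map(|i| i as f32).collect();
    parallel_map(&mut data, 4);
    println!("{:?}", data); // each element squared in place
}
```

This is exactly the memory-safety story Walther alludes to: the borrow checker proves the chunks are disjoint, so the threads cannot race.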
@kvark:matrix.org (kvark): there is certainly a lot of potential there [19:44:39]
@cwfitzgerald:matrix.org (cwfitzgerald): (keep in mind, if you're used to something like CUDA, compute shaders are significantly more restrictive than it is) [19:46:25]
@walther:kapsi.fi (Walther): and there's a weird gap currently: if you want to do something on a gpu, you kinda have to learn and use the specific tools and languages that exist for those gpus, usually shader languages (spirv, glsl, etc), whereas on the cpu for decades now you've been able to use a ton of different languages even for the "business logic" of your app itself [19:46:44]
@walther:kapsi.fi (Walther): i'm not used to CUDA; my previous graphics experience is in writing GLSL shaders, and while it was useful for the purpose, it wasn't exactly 100% enjoyable in terms of tooling [19:47:47]
@walther:kapsi.fi (Walther): and right now i have some rust code i'd love to have "gpu-accelerated", and the idealistic naive optimist in me is thinking "wouldn't it be great if there was an abstraction / compiler / tooling for running your rust on a gpu" [19:48:25]
@walther:kapsi.fi (Walther): of course, getting things performant (or even possible) could require you to structure your code in certain ways [19:48:59]
@cwfitzgerald:matrix.org (cwfitzgerald): I've always been a bit skeptical about any kind of "automated" conversion to gpu acceleration [19:51:39]
@cwfitzgerald:matrix.org (cwfitzgerald): there are a lot of factors, even in wgpu, that make gpu acceleration very close to useless for a lot of jobs [19:52:32]
@cwfitzgerald:matrix.org (cwfitzgerald): https://github.com/cwfitzgerald/wgpu-heterogeneous-compute-benchmark was my first real attempt to benchmark the overhead of sending a job to the gpu and getting the result back [19:54:28]
@cwfitzgerald:matrix.org (cwfitzgerald): and the results were not pretty if you have a discrete gpu [19:54:47]
@cwfitzgerald:matrix.org (cwfitzgerald): https://cwfitz.com/5k5ERM.png for things that are memory bandwidth bound, it is never worth it to port to the gpu (on discrete) [19:56:11]
@cwfitzgerald:matrix.org (cwfitzgerald): and even then there is a 2ms overhead to even getting the result back at all [19:56:44]
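cwfitzgerald's point implies a simple break-even model: offloading pays only when the CPU time saved exceeds the cost of copying data both ways plus a fixed dispatch/readback latency. A rough back-of-envelope sketch, where the ~2 ms latency comes from the chat above but the 12 GB/s effective PCIe 3.0 x16 bandwidth is an assumed illustrative figure, not a measurement:

```rust
// Rough cost model for discrete-GPU offload (illustrative numbers, not
// measurements): total overhead = round-trip transfer time + fixed latency.

/// Seconds spent moving `bytes` to the GPU and back over a bus with the
/// given bandwidth (GB/s), plus a fixed readback latency (seconds).
fn gpu_overhead_secs(bytes: f64, bus_gbps: f64, fixed_latency: f64) -> f64 {
    2.0 * bytes / (bus_gbps * 1e9) + fixed_latency
}

fn main() {
    let bytes = 100e6; // 100 MB of input + output data
    let bus = 12.0;    // assumed effective PCIe 3.0 x16 bandwidth, GB/s
    let latency = 2e-3; // ~2 ms readback overhead, per the benchmark above
    let overhead = gpu_overhead_secs(bytes, bus, latency);
    // A memory-bandwidth-bound CPU kernel streams the same 100 MB at tens
    // of GB/s, so the PCIe copy alone can exceed the entire CPU runtime -
    // which is why such jobs never win on discrete GPUs.
    println!("GPU offload overhead: {:.1} ms", overhead * 1e3);
}
```

For this example the overhead is roughly 18.7 ms before the GPU has done any useful work, which matches the shape of the benchmark's conclusion.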
@m4b:matrix.org (m4b): What do you think of work like taichi? https://github.com/taichi-dev/taichi [19:57:35]
@m4b:matrix.org (m4b): Their original paper on it was fascinating. I wish the codebase was more accessible. It’s pretty annoying to build and get running tho. C++ and python are probably my least favorite combo ever [19:58:26]
@cwfitzgerald:matrix.org (cwfitzgerald): honestly haven't heard of it [19:59:19]
@cwfitzgerald:matrix.org (cwfitzgerald): looks pretty similar to cuda in concept [20:00:48]
@m4b:matrix.org (m4b): It’s a little different. It’s more about data structures and auto data-parallel compilation, sort of what was talked about above. this is the paper, which is quite good: http://taichi.graphics/wp-content/uploads/2019/09/taichi_lang.pdf [20:02:36]
@cwfitzgerald:matrix.org (cwfitzgerald): I'll take a look at it in a bit [20:02:58]
@walther:kapsi.fi (Walther): the more down-to-earth practical example: i've been writing a raytracer in rust, based on the raytracing.github.io tutorial. It would be fun and nice to make it work on a GPU, but i'm not particularly interested in rewriting it in glsl/spirv/etc [20:17:34]
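The tutorial Walther mentions is built around ray-sphere intersection via the quadratic formula; its core looks roughly like this in plain Rust (a minimal sketch with illustrative types, not Walther's actual code):

```rust
// Minimal ray-sphere intersection, in the style of raytracing.github.io.

#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn sub(self, o: Vec3) -> Vec3 {
        Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z }
    }
}

/// Nearest ray parameter `t` where `origin + t * dir` hits the sphere, if any.
/// Solves a*t^2 + 2*half_b*t + c = 0 using the half-b discriminant trick.
fn hit_sphere(center: Vec3, radius: f64, origin: Vec3, dir: Vec3) -> Option<f64> {
    let oc = origin.sub(center);
    let a = dir.dot(dir);
    let half_b = oc.dot(dir);
    let c = oc.dot(oc) - radius * radius;
    let disc = half_b * half_b - a * c;
    if disc < 0.0 {
        None // ray misses the sphere entirely
    } else {
        Some((-half_b - disc.sqrt()) / a) // nearer of the two roots
    }
}

fn main() {
    let center = Vec3 { x: 0.0, y: 0.0, z: -1.0 };
    let origin = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    let dir = Vec3 { x: 0.0, y: 0.0, z: -1.0 };
    // Ray fired straight at a sphere of radius 0.5 centered one unit away:
    // it enters the sphere at t = 0.5.
    let t = hit_sphere(center, 0.5, origin, dir).unwrap();
    println!("hit at t = {}", t);
}
```

This per-ray function is exactly the embarrassingly parallel work a compute shader would run once per pixel, which is why the "rust on gpu" idea is so tempting here.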
