Message | Time |
---|---|
31 May 2023 | |
It does work when I give the resources in the same order as I pass the bind group entries. But it is entirely possible that those are in the same order as the binding numbers. | 12:07:38 | |
So it might actually be that Dawn does it correctly, provided that "correct" means following the binding number order. | 12:08:13 | |
Oh, and it is right there in the spec "when bindings are ordered by GPUBindGroupLayoutEntry.binding". | 12:09:49 | |
Facepalms himself | 12:09:58 | |
Yes, false alarm. Sorting the entries by the binding index indeed works. Thanks for the tip! | 12:19:59 | |
No worries! This is a bit confusing for sure! | 12:25:08 | |
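As an aside, the ordering fix discussed above can be sketched in a few lines. The resources below are placeholder strings rather than real GPU objects, and `makeBindGroup` is a hypothetical, browser-only usage of the pure sorting helper:

```javascript
// Pure helper: return bind group entries sorted by their binding number,
// matching the spec's "ordered by GPUBindGroupLayoutEntry.binding" rule.
function sortEntriesByBinding(entries) {
  return [...entries].sort((a, b) => a.binding - b.binding);
}

// Hypothetical browser-only usage (not executed here; `device` and
// `layout` are assumptions):
function makeBindGroup(device, layout, entries) {
  return device.createBindGroup({
    layout,
    entries: sortEntriesByBinding(entries),
  });
}

// Entries declared out of order still end up in binding order. The
// resources here are placeholder strings, not real GPU objects.
const sorted = sortEntriesByBinding([
  { binding: 2, resource: "paramsBuffer" },
  { binding: 0, resource: "cameraBuffer" },
  { binding: 1, resource: "texSampler" },
]);
console.log(sorted.map((e) => e.binding)); // [0, 1, 2]
```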
Hey everyone! The thing is, once my image is ready and I recreate my texture with the new data, I found that I have to regenerate my bindGroup entries and recreate the whole bindGroup. Even though it's not going to happen often, I'm wondering if this could be considered a bad practice? If so, is there any workaround? | 12:43:02 | |
I'm not sure I understand, can't you create the texture and then copy into it? Then you wouldn't need to recreate the bindgroup: the bindgroup would keep pointing to the same texture, but the content of that texture will have been updated. | 13:06:42 | |
I can't know the mandatory size [width, height] of the texture in device.createTexture() before the image is loaded, that's why I'm using a placeholder texture that has a size of [1, 1] while waiting. Ideally I'd like to create the texture just once, yes. A quick test with hardcoded [width, height] values is indeed working without the need to recreate the bind group. | 13:23:11 | |
Ah yeah if you don't know the size then there is nothing better you can do at the moment. | 13:29:57 | |
Alright then, thanks! 🙏 | 13:36:19 | |
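For reference, the "create once" path only works when the size is known up front, which usually means deferring texture creation until the image has loaded. A minimal sketch, assuming the image arrives over the network; the helper name, format, and usage flags are my assumptions, and the async function is browser-only:

```javascript
// Pure helper: build a texture descriptor once the real image size is
// known. The format here is an assumption for illustration.
function textureDescriptorFor(width, height) {
  return { size: [width, height], format: "rgba8unorm" };
}

// Hypothetical browser-only flow (defined but not executed here): wait for
// the image, create the texture once at its real size, then copy the
// pixels in. The bind group can then be created once and never rebuilt.
async function loadTextureOnce(device, url) {
  const blob = await (await fetch(url)).blob();
  const bitmap = await createImageBitmap(blob);
  const texture = device.createTexture({
    ...textureDescriptorFor(bitmap.width, bitmap.height),
    usage:
      GPUTextureUsage.TEXTURE_BINDING |
      GPUTextureUsage.COPY_DST |
      GPUTextureUsage.RENDER_ATTACHMENT, // required for the copy below
  });
  device.queue.copyExternalImageToTexture(
    { source: bitmap },
    { texture },
    [bitmap.width, bitmap.height],
  );
  return texture;
}

console.log(textureDescriptorFor(1, 1)); // the [1, 1] placeholder case
```

The trade-off is that the first frames render before the texture exists at all, rather than with a placeholder, so this only fits when drawing can wait for the load.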
1 Jun 2023 | |
I noticed on my work PC (Win/NVIDIA) earlier that the supported limit for maxStorageBufferBindingSize was lowered to 128 MB on Chrome Canary (forgot to write down the exact version, sorry!), while mainline is fine. maxBufferSize is still 2 GB as expected. Is this a known issue, or intentional? Also noticed that on Mac M1 both maxBufferSize and maxStorageBufferBindingSize are now 4 GB on Canary, great! | 01:02:43 | |
It will be raised higher again soon. We thought there was a limitation on Windows, but it turned out to be a driver bug on a particular GPU instead. | 01:05:48 | |
How much buffer data synchronization/management does WebGPU do? | 06:54:52 | |
Say, on frame 0 I fill up a uniform buffer with data and bind it (perhaps with dynamic offsets) drawing my scene. | 06:54:56 | |
What happens on frame 1 if I reuse the same buffer, again filling it up with data before drawing? Does the actual rendering of frame 0 get corrupted because the GPU is slightly behind the CPU and I am overwriting frame 0's data? Or does WebGPU manage this for me automatically? | 06:55:02 | |
WebGPU manages it automatically | 07:01:36 | |
Nice | 07:03:52 | |
I tried looking for details on this but couldn't actually find any. Is there anything available? | 07:04:23 | |
For example, how does WebGPU avoid overwriting the memory? Does it internally juggle multiple buffers, etc.? | 07:05:02 | |
Or is all of that implementation specific? | 07:05:08 | |
WebGPU either validates this, or introduces the necessary memory barriers to avoid races | 07:06:39 | |
Ah | 07:07:15 | |
If you are, for example, using write_buffer, staging memory will be used so that the copy can happen on the GPU's timeline | 07:07:22 | |
The reason I am asking is because I am considering how to update a uniform buffer multiple times during a frame with a dynamic offset. Each write ("element" if you will) of the buffer contains data related to the batch I am drawing. | 07:08:23 | |
And I was wondering if it would be more performant to have a ring buffer of buffers to cycle through, in order to "help" WebGPU avoid having to wait for memory to become free etc. | 07:09:31 | |
For most use cases, you should be able to use a single write_buffer call to upload your uniform buffers without major problems | 07:11:00 | |
so collect all the data you need to upload over your frame into an array, then issue a write_buffer writing it to the uniform buffer, then submit the work using that buffer | 07:11:34 | |
the implementation should take reasonable steps to make that flow performant | 07:11:55 | |
Alright, sounds very straightforward, thanks! | 07:12:18
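The flow suggested above (collect all per-batch data, one write_buffer, then bind slices with dynamic offsets) can be sketched as follows. The 256-byte stride matches WebGPU's default minUniformBufferOffsetAlignment; the batch shapes and the `drawBatches` helper are illustrative assumptions, and only the pure packing logic is exercised here:

```javascript
// Default WebGPU dynamic-offset alignment for uniform buffers.
const ALIGNMENT = 256;

// Pure helper: round a byte size up to the dynamic-offset alignment.
function alignTo(size, alignment = ALIGNMENT) {
  return Math.ceil(size / alignment) * alignment;
}

// Pack per-batch Float32Array blobs into one CPU-side array, recording the
// dynamic offset (in bytes) at which each batch lands.
function packUniforms(batches) {
  const stride = alignTo(Math.max(...batches.map((b) => b.byteLength)));
  const data = new Float32Array((stride / 4) * batches.length);
  const offsets = batches.map((batch, i) => {
    data.set(batch, (stride / 4) * i);
    return stride * i;
  });
  return { data, offsets };
}

// Hypothetical browser-only usage (not executed here; `device`, `pass`,
// `bindGroup`, and `buffer` are assumptions):
function drawBatches(device, pass, bindGroup, buffer, batches) {
  const { data, offsets } = packUniforms(batches);
  device.queue.writeBuffer(buffer, 0, data); // one upload per frame
  for (const offset of offsets) {
    pass.setBindGroup(0, bindGroup, [offset]); // dynamic offset per batch
    // pass.draw(...) for this batch
  }
}

// Two 16-float (64-byte) batches land one alignment stride apart:
const { offsets } = packUniforms([new Float32Array(16), new Float32Array(16)]);
console.log(offsets); // [0, 256]
```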