
In WebGPU we Rust

3rd Rust Graphics Meetup this Saturday! 16:00 GMT https://gamedev.rs/blog/graphics-meetup-03/



2 Feb 2023
nihal.pasham

Hi, I'm new to GPU programming; I recently started learning GPGPU programming using Rust + wgpu. I was going through the wgpu repo and have a question.

The map_async API lets us map a GPU-allocated buffer so that it can be accessed by the CPU. How does this actually work?

I was able to put together the following with my limited understanding of wgpu.

  • A call to map_async adds the buffer to LifetimeTracker's mapped field (I presume there's only one LifetimeTracker instance per wgpu app).
  • When a call to device.poll is made (usually from another thread), we check whether the GPU is finished with the buffer (i.e. whether the queue submission that uses the buffer has finished), and if it is, we move the buffer (or its id) to the ready_to_map field; from this point on, the CPU is free to work on the buffer.

Is the above correct? If yes, would I also be right to say that mapping host buffers in wgpu does not involve adding any commands to the GPU command queue?

17:37:18
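For reference, the user-facing half of that flow looks roughly like the sketch below. It assumes a wgpu from around the time of this conversation, where map_async takes a callback, and a device and queue that already exist; the mapped and ready_to_map names in the comments refer to the wgpu-core internals described in the question, not to public API.

```rust
use std::sync::mpsc;

fn read_back(device: &wgpu::Device, queue: &wgpu::Queue) {
    // A host-mappable buffer: MAP_READ may only be combined with COPY_DST.
    let staging = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("staging"),
        size: 16,
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });

    // Schedule a write so there is something to read; it is flushed at the
    // next submit.
    queue.write_buffer(&staging, 0, &[1u8; 16]);
    queue.submit(std::iter::empty());

    // Ask for the buffer to be mapped. Nothing is mapped yet; the request is
    // only recorded (the `mapped` list from the question).
    let (tx, rx) = mpsc::channel();
    staging.slice(..).map_async(wgpu::MapMode::Read, move |result| {
        let _ = tx.send(result);
    });

    // poll() checks which submissions have finished and fires the callbacks
    // of buffers whose last use is done (the `ready_to_map` step).
    device.poll(wgpu::Maintain::Wait);
    rx.recv()
        .expect("callback should have fired during poll")
        .expect("mapping failed");

    // The CPU can now read the contents.
    {
        let view = staging.slice(..).get_mapped_range();
        let bytes: &[u8] = &view;
        assert_eq!(bytes[0], 1);
    }
    // The mapped view must be dropped before unmap().
    staging.unmap();
}
```

Note that nothing here records a GPU command for the mapping itself; the only queued work is the write scheduled by write_buffer, which matches the last point in the question.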
kvark: That sounds about right (18:16:52)
cwfitzgerald: The only thing I'd say is that what actually does the last bullet point is a call to device.maintain (an internal call), which is called by both poll and submit (18:22:01)
jasperrlz: If a buffer can be mapped by the CPU, it's probably allocated on the CPU side; it's just that the GPU also has the ability to read/write it across the PCIe bus (18:25:42)
jasperrlz: I believe this is the case for all buffers allocated by wgpu today. (18:25:53)
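One practical consequence of that split is the usual staging-buffer pattern: data the GPU produced in a non-mappable buffer is copied into a MAP_READ buffer before the CPU reads it. A rough sketch follows; gpu_local is a placeholder for whatever COPY_SRC buffer the GPU wrote, not something from the thread.

```rust
fn copy_to_staging(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    gpu_local: &wgpu::Buffer, // e.g. created with STORAGE | COPY_SRC
    size: wgpu::BufferAddress,
) -> wgpu::Buffer {
    // Only this buffer is host-mappable; `gpu_local` never is.
    let staging = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("staging"),
        size,
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });

    // The copy is a real command in the queue; the mapping itself
    // (map_async + poll, as above) still is not.
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
    encoder.copy_buffer_to_buffer(gpu_local, 0, &staging, 0, size);
    queue.submit(Some(encoder.finish()));

    staging
}
```

After the submit, the staging buffer is mapped with map_async and poll exactly as in the earlier sketch.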
nihal.pasham: Any recommendations for reading material? Some of these concepts are pretty new to me. I noticed WebGPU is an in-progress standard and detailed documentation is sparse (18:26:15)
codetoilet: At a high level it's really similar to Vulkan, so references for that are okay as long as you're aware which features are missing/different (18:30:18)
codetoilet: But there is a WebGPU best practices site I like to look at too, and some examples of more complicated things on GitHub (18:30:55)
nihal.pasham
In reply to @jasperrlz:matrix.org
If a buffer can be mapped by the CPU, it's probably allocated on the CPU side, it's just that the GPU also has the ability to read/write from it across the PCIe bus
Interesting
18:35:16
jasperrlz: The map/unmap checks are just making sure that the GPU doesn't have the ability to read/write to the buffer at the same time (18:37:04)
jasperrlz: Vulkan/D3D12 don't do these checks, you have to take care of the data races yourself (18:37:28)
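Concretely, the rule wgpu enforces is an ordering: drop the mapped view and call unmap() before the buffer shows up in GPU work again. A minimal sketch, assuming a queue and an already-mapped MAP_READ | COPY_DST buffer (the names here are hypothetical):

```rust
fn release_then_reuse(queue: &wgpu::Queue, staging: &wgpu::Buffer) {
    {
        let view = staging.slice(..).get_mapped_range();
        println!("first byte: {}", view[0]);
    } // the BufferView must be dropped before unmap()

    // Hand access back to the GPU. Submitting work that touches `staging`
    // while it is still mapped is rejected by wgpu's validation, whereas
    // Vulkan/D3D12 would let the CPU and GPU race.
    staging.unmap();

    queue.write_buffer(staging, 0, &[0u8; 4]);
    queue.submit(std::iter::empty());
}
```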
nihal.pasham
In reply to @codetoilet:matrix.org
but there is a webgpu best practices site I like to look at too, and some examples of more complicated things on GitHub too
Is this the one - https://github.com/toji/webgpu-best-practices ?
18:37:54
codetoilet: I think so, yeah (18:39:34)
codetoilet: I'm also pretty new to the modern rendering APIs, and I found it useful during the growing pains of my renderer (18:40:03)
nihal.pasham: Thank you (18:40:50)
i509vcb: I know there is a downlevel flag for compute. What else was GLES 3.0 using again that meant we couldn't use GLES 2.0 in wgpu? (20:31:47)
cwfitzgerald
In reply to @i509vcb:matrix.org
I know there is a downlevel flag for compute. What else was gles 3.0 using again that meant we couldn't use gles 2.0 in wgpu?
Uniform buffers
20:33:04
cwfitzgerald: We would have to completely change the binding model (20:33:21)
cwfitzgerald: I guess we could go push constants only, which would be a little weird (20:33:39)
i509vcb: I was going to try to take wgpu for a spin on a downlevel config on Asahi Linux, but I guess uniform buffers kind of block that. (Compute pretty much isn't implemented yet.) (20:35:08)
i509vcb: Full GLES 3 will happen eventually anyway, so I probably won't need such a thing (20:37:14)
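For anyone probing a target like that, the downlevel situation can be queried from the adapter at runtime. A sketch, assuming a wgpu of roughly this vintage where the method is get_downlevel_capabilities (older releases called it get_downlevel_properties) and an adapter that already exists:

```rust
fn report_downlevel(adapter: &wgpu::Adapter) {
    let caps = adapter.get_downlevel_capabilities();

    // The compute flag mentioned above.
    if !caps.flags.contains(wgpu::DownlevelFlags::COMPUTE_SHADERS) {
        println!("compute shaders are not available on this adapter");
    }

    // Other requirements (like the uniform-buffer situation discussed above)
    // may not map to a single flag, so the backend and the full flag set are
    // worth logging too.
    println!(
        "backend: {:?}, downlevel flags: {:?}",
        adapter.get_info().backend,
        caps.flags
    );
}
```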
cwfitzgerald: GLES 3.0 can run on basically anything at this point (20:43:34)
i509vcb: Yeah, it's pretty much everywhere, especially with Mesa doing all the API-side stuff (20:47:00)
i509vcb: I'll definitely be one of the first to try out Vulkan when that driver develops on the ARM Macs on Linux (20:47:24)
i509vcb: I've heard that vkcube (somewhat modified) does work on the experimental driver (20:48:04)
