DX11: Do I need to resolve a MSAA texture before using it as a texture resource in a pixel shader?


I apply elaborate pixel shaders to textures, using a simple quad (two triangles covering the whole input texture). The pixel shader uses the result of previous draw calls, stored in an ID3D11Texture2D. Here is a snippet of the C++ code:

        // create a texture and set it as render target:
        Microsoft::WRL::ComPtr<ID3D11Texture2D>         render_memory;
        device->CreateTexture2D(&desc, nullptr, render_memory.ReleaseAndGetAddressOf());
        Microsoft::WRL::ComPtr<ID3D11RenderTargetView>  m_renderTargetView;
        device->CreateRenderTargetView(render_memory.Get(), nullptr,
                m_renderTargetView.ReleaseAndGetAddressOf());

        // <draw useful content into render_memory using OMSetRenderTargets(m_renderTargetView)>

        // set the texture as a shader resource to be used by a pixel shader
        // (nullptr lets the view inherit format and dimension from the texture):
        Microsoft::WRL::ComPtr<ID3D11ShaderResourceView>    m_SRV;
        device->CreateShaderResourceView(render_memory.Get(), nullptr, m_SRV.ReleaseAndGetAddressOf());
        context->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        context->VSSetShader(mdx->shaders.ScreenQuadVS(), nullptr, 0);

In the code above, the “desc” structure defining the texture that will contain the useful content is trivial, i.e. without any multisampling (SampleDesc.Count/Quality is 1/0):

        D3D11_TEXTURE2D_DESC    desc{};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;       // <--    trivial case
        desc.SampleDesc.Quality = 0;     // <-- (no multisampling)
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = BindFlags();    // framework helper; includes D3D11_BIND_RENDER_TARGET and D3D11_BIND_SHADER_RESOURCE
        desc.CPUAccessFlags = 0;         // D3D11_USAGE_DEFAULT does not allow CPU access
        desc.MiscFlags = 0;
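For comparison, this is my assumption of what the multisampled variant of the description would have to look like (the sample count of 4 is just an example; nothing in our framework dictates it):

        // My assumption of the MSAA variant (4x); quality 0 is the baseline pattern:
        UINT quality_levels = 0;
        device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, 4, &quality_levels);
        // (quality_levels must be > 0 for 4x MSAA to be supported with this format)

        D3D11_TEXTURE2D_DESC    msaa_desc{};
        msaa_desc.Width = width;
        msaa_desc.Height = height;
        msaa_desc.MipLevels = 1;              // MSAA textures have a single mip level
        msaa_desc.ArraySize = 1;
        msaa_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        msaa_desc.SampleDesc.Count = 4;       // <-- multisampled
        msaa_desc.SampleDesc.Quality = 0;
        msaa_desc.Usage = D3D11_USAGE_DEFAULT;
        msaa_desc.BindFlags = BindFlags();
        msaa_desc.CPUAccessFlags = 0;
        msaa_desc.MiscFlags = 0;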

If I change SampleDesc to use MSAA, a cascade of changes follows: not only do I need D3D11_RTV_DIMENSION_TEXTURE2DMS in the render target view description, but also D3D11_SRV_DIMENSION_TEXTURE2DMS in the shader resource view description, and even the depth stencil textures and views need adaptation.
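Concretely, I believe the adapted view creation would look roughly like this (my sketch, not the framework code):

        // Render target view for the MSAA texture:
        D3D11_RENDER_TARGET_VIEW_DESC   rtv_desc{};
        rtv_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        rtv_desc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DMS;    // instead of TEXTURE2D
        device->CreateRenderTargetView(render_memory.Get(), &rtv_desc,
                m_renderTargetView.ReleaseAndGetAddressOf());

        // Shader resource view for the same MSAA texture:
        D3D11_SHADER_RESOURCE_VIEW_DESC srv_desc{};
        srv_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        srv_desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2DMS;    // the HLSL side would then need Texture2DMS<float4>
        device->CreateShaderResourceView(render_memory.Get(), &srv_desc,
                m_SRV.ReleaseAndGetAddressOf());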

I am using the company’s DirectX framework, and these changes would be quite invasive (for example, there is a pool of all the depth stencil views, so I would need to break the cache structure to take into account not just the dimensions but also the MSAA settings, as sketched below).
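To illustrate the cache problem, the key would have to grow from something purely dimension-based to something like this (hypothetical names, just to show the extent of the change):

        // Hypothetical cache key for the depth stencil pool, extended with the MSAA settings:
        struct DepthStencilKey
        {
            UINT width          = 0;
            UINT height         = 0;
            UINT sample_count   = 1;    // new: SampleDesc.Count
            UINT sample_quality = 0;    // new: SampleDesc.Quality

            bool operator==(const DepthStencilKey& other) const
            {
                return width == other.width && height == other.height
                    && sample_count == other.sample_count
                    && sample_quality == other.sample_quality;
            }
        };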

Before I go and defend this change in a meeting, I would like to know whether there is a final hurdle waiting at the end: will it work at all?

That is, if everything compiles and runs without introducing breaking changes for my colleagues, will the result be what I expect?

I expect the texture used as the source of my pixel shader to be free of jagged lines, leveraging the GPU’s MSAA capabilities instead of wasting pixel shader compute on a suboptimal smoothing pass (which is the current workaround, described in this question).
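For completeness, this is the extra step I am hoping to avoid (or to confirm that I need): resolving into a second, single-sample texture of the same size and format before sampling it (resolve_target is a hypothetical texture created with SampleDesc.Count = 1):

        // Resolve the MSAA render target into a single-sample texture before binding it as an SRV:
        context->ResolveSubresource(resolve_target.Get(), 0,
                render_memory.Get(), 0, DXGI_FORMAT_R8G8B8A8_UNORM);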

