Another quick update on the material system.
I've implemented a special skin rendering technique based on Jorge Jimenez's (www.iryoku.com)
Separable Screen-Space Subsurface-Scattering.
There are quite a few challenges with this technique because of its special requirements.
For example, you need to output linear view-space depth during the main pass, plus an additional FP16 render target to hold the specular contribution and the SSS mask.
My current setup looks like this:
MainPass:
  RT0: Color [RGB], SSS Strength [A]             // R16G16B16A16
  RT1: Linear View-Space Depth                   // R32_FLOAT
  RT2: Specular contribution [RGB], SSS Mask [A] // R16G16B16A16
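To make the layout concrete, here's a minimal sketch of what the main pass writes per pixel into those three targets. This is illustrative only (the names are made up, not my actual shader code):

```python
def main_pass_output(albedo, sss_strength, view_space_z, specular, sss_mask):
    """Pack the per-pixel outputs for the three render targets."""
    rt0 = (*albedo, sss_strength)    # R16G16B16A16: color + SSS strength
    rt1 = view_space_z               # R32_FLOAT: linear view-space depth
    rt2 = (*specular, sss_mask)      # R16G16B16A16: specular + SSS mask
    return rt0, rt1, rt2
```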
SSS strength is per object (or material ID) and can be stored in a diffuse map's alpha channel to control how strong the subsurface scattering should be. This is important because female skin in particular tends to be subtler in detail, so the strength should not be set too high.
The specular render target is important because subsurface scattering only happens under the skin: it affects only the diffuse component and should under no circumstances be applied to the specular. The extra cost hurts, but I haven't found another solution yet. I believe Jimenez mentioned an approach in his latest paper that attenuates the specular parts to avoid having to render the specular separately and add it back later.
In the alpha channel of this specular target we mask out the pixels where SSS should be applied.
This gets especially tricky around eyes and eye covers: since this technique is a post-process, it cannot be applied only to the skin parts right after the main pass, so you need a way to mask out those screen-space regions where the eyes / eye covers are.
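Conceptually, the final composite then looks something like this sketch (the real work happens per pixel on the GPU; the names here are illustrative): the blurred diffuse is used only where the mask allows, and the untouched specular is added on top afterwards:

```python
def final_color(diffuse, diffuse_blurred, specular, sss_mask):
    """Per-pixel composite: pick blurred diffuse only where the SSS mask
    is set, then add the specular that was kept out of the blur."""
    return tuple(
        (blurred if sss_mask else plain) + spec
        for plain, blurred, spec in zip(diffuse, diffuse_blurred, specular)
    )
```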
There may still be room for improvement on the performance side but all in all I'm quite happy with it.
It is particularly effective when shading several faces / skins at the same time.
Update: I managed to greatly improve performance by dropping MSAA, which made it possible to use the stencil buffer to mask out the parts that should be skin shaded. It also removed the render-target resolves, which took 0.5-0.9 ms each on my GPU (AMD 5750M). That cost might be reasonable on more recent hardware, but it didn't make sense for me. I may reintroduce MSAA as an option in the future, but I'm currently looking into SMAA T2x, which seems the better choice.
The main pass still needs to output linear view-space depth & the specular contribution, but things are still greatly improved thanks to (as said before) the removed resolves plus the lower memory bandwidth needs. I also managed to drop the SSS-mask condition for masking out eyes / eye covers and switched to a stencil approach. The SSS blur now runs at 1-2 ms on average on my hardware.
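The blur itself is separable: one horizontal and one vertical 1D pass instead of a full 2D kernel. Here's a rough CPU-side sketch of a single pass over one scanline, per channel. The weights and depth threshold are hypothetical placeholders; the actual technique also scales the kernel width by depth and uses Jimenez's diffusion profile rather than generic weights:

```python
def sss_blur_row(diffuse, depth, weights, depth_threshold=0.01):
    """One direction of the separable SSS blur over a row of pixels.
    Taps whose depth differs too much from the center fall back to the
    center color, so scattering doesn't bleed across silhouettes."""
    r = len(weights) // 2
    out = []
    for i in range(len(diffuse)):
        acc = 0.0
        for k, w in enumerate(weights):
            j = min(max(i + k - r, 0), len(diffuse) - 1)  # clamp to edges
            # reject samples across depth discontinuities
            sample = diffuse[j] if abs(depth[j] - depth[i]) < depth_threshold else diffuse[i]
            acc += w * sample
        out.append(acc)
    return out
```

Running it once horizontally and once vertically on the diffuse target gives the full 2D blur at a fraction of the cost.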
Update 2: I've made some visual improvements that include the use of a parameter map (cavity, roughness, AO) and the use of my layer system to additively blend in detail normals for high frequency skin details.
An important thing this has taught me is how much ambient occlusion matters to modern renderers. You might think AO is an "old" feature that people used back in the day to fake some small-scale shadowing, but it's more important today than ever before, because every modern renderer nowadays uses image-based lighting techniques for indirect (diffuse and specular) lighting. And the biggest flaw of IBL is the lack of occlusion information. Ambient occlusion, as an approximation for masking out IBL specular reflections, is absolutely crucial.
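In code this masking is almost trivial, which is part of why it's so effective. A sketch, with illustrative names (a real renderer would evaluate the two IBL terms per pixel from its environment maps):

```python
def ambient_lighting(ibl_diffuse, ibl_specular, ao):
    """Attenuate both image-based lighting terms by the AO factor,
    approximating the occlusion information that IBL itself lacks."""
    return ibl_diffuse * ao + ibl_specular * ao
```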
Another thing I've noticed is the importance of shadows in combination with subsurface-scattering.
There's still no support for shadow mapping in my renderer, but the use of AO made its importance much more obvious. The boundaries between shadow and light seem to be key to conveying realism in skin shading (I guess you could say in lighting in general, but even more so here). The other thing I mentioned before is the use of a second material layer with a high-frequency noise normal map to blend in. The blend function is very important, since we want to keep as much detail of the base normal map as possible while augmenting it with the detail normal map. A great technique called Reoriented Normal Mapping (RNM) was described by Stephen Hill and Colin Barré-Brisebois (here); it basically reorients the detail normal depending on the orientation of its base normal.
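The RNM blend itself is compact. Here's a sketch of it in plain Python, following the formulation from Hill and Barré-Brisebois's write-up (the shader version works the same way, just with vector intrinsics). Inputs are tangent-space normals encoded in [0,1] as sampled from the maps; the output is a unit vector in [-1,1] space:

```python
import math

def rnm_blend(base, detail):
    """Reoriented Normal Mapping: reorient the detail normal so it sits
    in the half-space defined by the base normal, preserving both."""
    # base is decoded with z biased to [0,2]; detail with flipped x/y.
    t = (base[0] * 2 - 1, base[1] * 2 - 1, base[2] * 2)
    u = (detail[0] * -2 + 1, detail[1] * -2 + 1, detail[2] * 2 - 1)
    d = t[0] * u[0] + t[1] * u[1] + t[2] * u[2]
    r = (t[0] * d - u[0] * t[2],
         t[1] * d - u[1] * t[2],
         t[2] * d - u[2] * t[2])
    n = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
    return (r[0] / n, r[1] / n, r[2] / n)
```

Two sanity checks: blending two flat normals yields a flat normal, and a flat base simply returns the detail normal, which is exactly the behavior you want from a detail layer.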
Here's some eye candy for you:
New screenshots using parameter map & detail normal layer blending (no shadows yet...)
Without detail normal blending:
With detail normal blending: