
See How to Take Your Surface Detail Workflow to the Next Level

Gabriel Lebel Bernier has demonstrated how the White Tree Frog project was done, focusing on the surface detail workflow in Substance 3D Painter and discussing the lookdev and materials.


Introduction

I am a Montreal-based Senior CG Generalist specializing in lookdev. Throughout my career, I have had the opportunity to work on a diverse range of projects: stylized animated films, video games, realistic VFX films, advertisements, and event shows. If I were to choose the most notable work I have done, I would pick the Predator in ''Prey,'' the latest entry in the Predator filmography. I was in charge of the texturing and lookdev of the gruesome main creature and most of its accessories.

This article focuses on the surface detail workflow I used in Substance 3D Painter for my frog project. Before getting into the subject, I'll provide a quick overview of my general workflow for a character, from ZBrush to Maya, from sculpting to rigging. I'll finish by talking a bit about the frog's lookdev, done in Maya using the Arnold renderer. I am really grateful to have the opportunity to share some insight on this project.

The White Tree Frog Project

The idea for this project started when I randomly came across a picture of a snail crossing a small river, perched atop the head of a frog, halfway over its left eyeball. I was really intrigued by the image, by the way the frog didn't seem to mind the snail, even though it was clearly uncomfortable having it sit directly on its eye. I found myself looking at more pictures like this: pages and pages on Google Images of snails on frog heads, sometimes in reverse. I just thought it was kind of funny, cute, and just so dumb. That was my initial inspiration. However, as time went on, the frog alone became a much larger project than I had anticipated, so I decided to narrow down the scope and focus solely on it.

Image credit: andri_priyadi, iStockphoto

Sculpting

Like most of my projects, I usually kick things off in Maya to get the blocking done. Once I'm satisfied with the basic shapes and volumes, I switch over to ZBrush for the sculpting phase. Inside ZBrush, my workflow is pretty classic.

I start with DynaMesh, which allows me to build up volumes quickly and easily. Once I have the volumes in good shape, I use ZRemesher to start working with subdivisions, which gives me better control while sculpting. After that, it's all about going through each level of subdivision, adding and refining the details as I progress.

At this point, it was clear to me that I wanted to handle the high-frequency detail through textures. The goal was to capture all the primary and secondary volume details in ZBrush and then use displacement textures created in Substance 3D Painter for the finer details.

The reason for this workflow is that it gives me greater control over the final look of the creature. I can adjust detail values and placement in a non-destructive and iterative manner. Moreover, having all this data integrated directly into Substance 3D Painter makes it easier to build the material and drive it from that data.

This is what my ZBrush model looked like. As you can see, it had relatively low surface detail, with only the primary and secondary volumes defined. I also sculpted details that would be hard to achieve using textures, such as the knee and wrinkles resulting from overhanging skin.

Retopology

I created the topology inside Maya using the Quad Draw tool, snapping it onto a decimated model from ZBrush. It was clear to me that I needed good topology for animation, but it didn't need to be ultra-high-quality since the animation and rigging would be somewhat limited. So, my goal was to ensure that most parts would deform correctly without going overboard. This allowed me to cut some corners here and there to save time along the way.

Once the topology was completed, I quickly rigged it using the Advanced Skeleton autorig tool to test the topology and ensure that everything deformed well. I iterated between refining the topology and testing its response in the rig, making adjustments to the edge flow, and adding polygon loops when necessary. Advanced Skeleton is a fantastic tool for creating efficient rigs quickly, especially for artists who aren't professional riggers. You can achieve excellent results with minimal effort (which is great).

Once I was confident that the topology was sound, I began working on the UVs. Knowing that most of the details would come from the textures (and as a personal project with few limitations), I opted for a high number of UDIMs. Since most texturing software nowadays is really robust, as long as the texel density is relatively even and everything is well-unwrapped (with no overlapping), you are pretty much good to go.

UVs can be seen here. I tried to keep them somewhat organized.

As a quick tip, maintaining UV symmetry greatly facilitates the texturing process.

Texturing

The way I like to approach my texture work is by starting with the surface detail. Even with a simple solid grey shader, good surface detail says a lot about the material. Finer, higher-frequency details will break up the specular reflection and give a rough look, while larger, lower-frequency details will tend to read as smoother.

I try to get a good 80% of my surface detail defined and laid down before starting to add any color or roughness. This way, I ensure that those channels don't visually interfere with or distract from the surface detail.

Building a Library

To start, I build a library from a variety of displacement maps from different sources. The tricky part in projects like this is that you won't ever find the exact texture you need. There's no ready-to-use "white tree frog eyelid wrinkle displacement map" out there, so it's a bit of a challenge. You have to get creative and see if you can isolate parts of textures to repurpose for your project.

As an example, I used a human face texture and isolated the lip wrinkles to use on the frog's side head. I also borrowed a section from a bumpy cheek face texture for the frog's back.

Texture Normalizing and Preparation

Once I've built a solid library, I go through each file and ensure they all share the same mid-value. If a map doesn't have the correct mid-value, I simply add or subtract 0.5 to normalize them all together within Nuke (or the free alternative, Natron). After that, I re-export the textures, keeping an eye on compression, bit depth, and format. Personally, I use 0 as my go-to mid-value, as it's easier to work with mathematically: textures add up on top of each other and multiply easily (though both values can work with a little tweaking). Since the data goes below 0, I have to make sure the negative values aren't clamped, so I write the maps out as 16 or 32-bit EXR files.
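As a rough illustration of the math (normally done in Nuke or Natron rather than in code), a minimal numpy sketch of the same 0.5-to-0 shift might look like this; the function name and sample values are just placeholders:

import numpy as np

# Minimal sketch of the mid-value shift described above: a map authored
# around a 0.5 mid-value is moved to a 0 mid-value by subtracting 0.5.
# The result contains negative values, so it must be written to a float
# format (16/32-bit EXR) that won't clamp them.

def to_zero_midvalue(height_map, source_mid=0.5):
    """Shift a displacement/height map so its neutral value sits at 0."""
    return height_map.astype(np.float32) - source_mid

# Stand-in for a loaded EXR with values around 0.5.
height_05 = np.array([[0.5, 0.625, 0.375]], dtype=np.float32)
print(to_zero_midvalue(height_05))  # [[ 0.     0.125 -0.125]]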

The mid-value is an arbitrary value used in the displacement workflow. It tells the rendering software whether a vertex will be pushed or pulled when displaced: the renderer looks up the texture value corresponding to the vertex and compares it to this mid-value. If the texture value is above it, the vertex is pushed outward; if it's below, the vertex is pulled inward. Mid-values are usually either 0.5 or 0.

For this article, I will only explain based on the 0 mid-value.
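To make the push/pull behavior concrete, here is a small, renderer-agnostic sketch of the displacement math; the function and variable names are illustrative, not any particular renderer's API:

import numpy as np

# Conceptual displacement: values above the mid-value push the vertex
# outward along its normal, values below pull it inward.
def displace(position, normal, height_sample, mid_value=0.0, scale=1.0):
    return position + normal * (height_sample - mid_value) * scale

p = np.zeros(3)                  # vertex position
n = np.array([0.0, 1.0, 0.0])    # vertex normal
print(displace(p, n, 0.2))       # above the 0 mid-value -> pushed out to y = 0.2
print(displace(p, n, -0.2))      # below it -> pulled in to y = -0.2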

Surface Detail

As I mentioned earlier, most of the frog's details were created using height maps projected onto the model in Substance 3D Painter. These maps served a dual purpose: they obviously acted as surface detail, but they also served as masks to drive the material.

Here's how I set up my height map in Substance 3D Painter. Please refer to the image below for visual support of this layer and folder setup:

1. I create a folder and name it after the body part I want to work on. In this example, it's going to be ''Face_projection''. I make sure the height blending mode of the folder is set to Linear Dodge (Add).

2. I create a second folder inside my ''Face_projection'' folder. This child folder will be called ''Projection_Folder'', and the height blending mode is set to Normal.

3. At the bottom of the child folder (''Projection_Folder''), I add a fill layer named ''Height_base'' with only the height channel activated. By default, the height is set to 0, which is what I want. I then change the height blending mode of this layer to Replace. This essentially resets the height data to 0 for this folder and acts as a height data blocker for this body part's setup. Since the top folder (''Face_projection'') is set to Linear Dodge (Add), it will add the sum of the ''Projection_Folder'''s height data to the model.

4. Inside the child folder (''Projection_Folder''), over the ''Height_base'' layer (step 3), I create another fill layer. I solely use the height channel, and I link it to a height texture from my library (1). To project that texture, I use the "warp projection" (2) mode of the fill layer and then position the projection lattice, anchoring it to the mesh using the "snap to mesh" option (3). Afterward, I mask out the parts I don't need with a simple painted mask (4). I also always add a level adjustment on top of my mask to fine-tune the displacement influence. This setup gives me great control over every aspect of the displacement and the flexibility to fine-tune its contribution.

Projecting the height texture onto the model

Masking out the part I don't need and tweaking the contribution using a Levels adjustment. The level out value is set to 0.75, which means the height intensity of this specific projection is reduced to 75% of its initial intensity.

Result

5. I repeat step 4 for every projection I want to add to the setup. Every fill has its own projection.

6. At the top of the child folder (''Projection_Folder''), above all the projection layers, I create another fill layer called ''Face_height''. This fill layer is completely empty, and the blending mode of its height channel is set to Passthrough. This allows the layer to store the sum of the height data of everything beneath it within the folder (''Projection_Folder''). I add an anchor point to this layer, allowing me to recall that data later on. The anchor point is only bound to that folder (''Projection_Folder'') because of the blending modes used in this setup (a conceptual sketch of this layer math follows the list).

Final setup

7. I repeated this setup for each body part and other types of detail. This allows me to stay organized.
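As a purely conceptual illustration of the blending logic above (this is not the Substance 3D Painter scripting API, just the math the layer stack performs; array names and sample data are illustrative), the folder behaves roughly like this:

import numpy as np

def projection_folder(projections):
    """'Height_base' (Replace, value 0) blocks anything underneath,
    then each projection is added (Linear Dodge) with its painted mask
    and a levels-style intensity multiplier."""
    result = np.zeros_like(projections[0][0])          # Height_base = 0, Replace
    for height, mask, intensity in projections:
        result = result + height * mask * intensity    # Linear Dodge / Add
    return result                                      # what the anchor point "sees"

# Two fake projections on a 1x4 strip of the mesh.
wrinkles = (np.array([0.3, 0.1, 0.0, 0.0]), np.array([1, 1, 0, 0]), 0.75)
bumps    = (np.array([0.0, 0.0, 0.2, 0.2]), np.array([0, 0, 1, 1]), 1.0)

face_height = projection_folder([wrinkles, bumps])

# The parent 'Face_projection' folder is set to Linear Dodge (Add), so its
# contribution is simply added onto whatever height is already in the stack.
stack_height = np.zeros(4)
stack_height = stack_height + face_height
print(stack_height)  # [0.225 0.075 0.2 0.2]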

Here's how it looks after placing all the surface detail.

Material

Now starts the fun part. The height anchor points created for every body part can now be used to drive the material. As an example, I can easily isolate the top of the dimples or the cavity of wrinkles based on the height map I've projected. I can then link those isolated areas to drive the color, roughness, SSS weight, etc.

Here's how it works:

  1. Create a mask on the layer or folder you want to be controlled by the height data.
  2. Add a fill layer to that mask (1).
  3. Open up the resource menu (2) of the fill layer.
  4. Go over the anchor point tab and select your desired height anchor point (3).
  5. Lastly, select which channel of that anchor point you want to call; in this case, it would be the height (4).

From there, I can play around with the values to create the desired mask. Add a level to crush the values or reverse them, use a histogram, multiply them by a paint layer or grunge texture, etc. Personally, I tend to work extensively on my mask. It's not uncommon for me to have around 6 to 10 nodes in my mask.
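Here is a hedged numpy sketch of that kind of remapping; the levels function and the sample values stand in for the Levels and grunge nodes stacked in the Painter mask, not an actual Painter feature:

import numpy as np

def levels(values, in_low, in_high):
    """Crush the value range, similar to a Levels adjustment."""
    return np.clip((values - in_low) / (in_high - in_low), 0.0, 1.0)

height = np.array([-0.2, 0.0, 0.1, 0.3])   # height recalled from the anchor point
grunge = np.array([0.9, 0.6, 1.0, 0.8])    # a grunge texture breaking up the mask

peaks_mask    = levels(height, 0.05, 0.3)        # isolates the tops of the dimples
cavities_mask = 1.0 - levels(height, -0.2, 0.0)  # inverted: isolates the wrinkle cavities
final_mask    = peaks_mask * grunge              # multiplied by a grunge for variation
print(final_mask)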

Here are a few mask examples I was able to create just by remapping the values.

It is really easy to isolate a specific area and link that to a material channel. 

Color

Roughness

Another example of the surface detail driving the material

I export my textures sooner rather than later. At the beginning of the texturing process, I start a lookdev scene to test the textures in a proper lighting setup. I want to make sure that the height surface detail values are responding well in the render. It can be hard to judge in Substance 3D Painter because the height is displayed as a simple bump, and it can look different once it's used in a real displacement workflow, so some adjustments might be needed. I export my height map as a 16-bit EXR.

Lookdev

For rendering, I used Maya and Arnold.

To test my lookdev, I like to use three HDRIs in different environments. The goal is to ensure that the shader behaves in a PBR (Physically-Based Rendering) manner. It's important to test with various light setups, as you can easily find yourself tweaking the lookdev according to a specific lighting configuration.

I also like to test with a neutral HDRI that has no saturation at all. This way, I ensure that the light doesn't tint the color of the creature.

Materials

A-Displacement

Here's how I set up the displacement for the frog:

1. Create a displacement shader. Ensure the scalar mid-value is set to 0.

2. Create two texture file nodes with the color space set to Raw. One reads the ZBrush displacement, and the other reads the Substance height map. Add them together using the aiAdd node (specific to Arnold).

  • Since the displacement mid-value of both my textures is set to 0, I can simply add them together. If they were set to 0.5, I would have to subtract 0.5 from each one before adding them together.
  • For more information on how to export good displacement from ZBrush using the MultiMap Exporter, refer to this great article.

3. Connect the output (R) of the addition node to the displacement entry of the displacement shader.

In this project, I used 6 subdivision levels on the model while rendering. A great feature in Arnold is the 'auto bump' in the displacement shader. This transfers all the data that is not being displaced (due to a lack of subdivision) into a bump map.
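For reference, a hedged Maya Python sketch of this network (assuming MtoA is loaded) could look like the following. The file paths, node names, and shading group are placeholders, and attribute names such as aiDispZeroValue, aiSubdivIterations, and aiDispAutobump should be double-checked against your Arnold version:

import maya.cmds as cmds

def build_displacement(shading_group, shape, zbrush_disp_path, painter_height_path):
    # Two file nodes read as Raw (no color management on displacement data).
    files = []
    for path in (zbrush_disp_path, painter_height_path):
        f = cmds.shadingNode('file', asTexture=True)
        cmds.setAttr(f + '.fileTextureName', path, type='string')
        cmds.setAttr(f + '.colorSpace', 'Raw', type='string')
        files.append(f)

    # aiAdd sums the two maps; valid because both use a mid-value of 0.
    # (With a 0.5 mid-value you would subtract 0.5 from each map first.)
    add = cmds.shadingNode('aiAdd', asUtility=True)
    cmds.connectAttr(files[0] + '.outColor', add + '.input1')
    cmds.connectAttr(files[1] + '.outColor', add + '.input2')

    # Displacement shader fed by the red channel of the summed maps.
    disp = cmds.shadingNode('displacementShader', asShader=True)
    cmds.connectAttr(add + '.outColorR', disp + '.displacement')
    cmds.connectAttr(disp + '.displacement', shading_group + '.displacementShader')

    # Render-time subdivision and autobump on the mesh shape.
    cmds.setAttr(shape + '.aiSubdivType', 1)        # catclark
    cmds.setAttr(shape + '.aiSubdivIterations', 6)  # 6 levels, as in this project
    cmds.setAttr(shape + '.aiDispZeroValue', 0.0)   # scalar zero/mid-value of 0
    cmds.setAttr(shape + '.aiDispAutobump', 1)      # push undisplaced detail into bump

# Example call with placeholder names:
# build_displacement('frog_SG', 'frogShape', 'frog_zbrush_disp.exr', 'frog_painter_height.exr')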

B-SSS

Here's how I set up the Subsurface Scattering (SSS) for the frog. The frog uses an SSS material throughout, with Arnold's Random Walk v2 computation method.

I controlled the color of the SSS with the 'SSS radius' channel, which determines (more or less) what colors scatter through specific parts of the surface.

For example, a white map (1R, 1G, 1B) allows all light colors to pass through, while a red map (1R, 0G, 0B) scatters only red, blocking all other colors from passing through the surface.

Small tip: It's essential to mind the values and avoid going too dark, as dark values mathematically reduce the SSS. I use other parameters to control the amount of SSS.

For the frog, I opted for a light blood-red color (1R, 0.3G, 0.1B) for most of the body. For the membrane of the legs and feet, I went with a light yellowish color. I used a slightly purplish hue for the ends of the fingers.

To complete the SSS, I used a scale map. This map is a black-and-white texture that acts as an SSS distance multiplier. White allows the light to travel the furthest, and black doesn't allow the light to travel through the surface at all. My approach here is to consider what is inside the surface and how dense it is. For example, the knees have a dark value because the light has to pass through a thin layer of skin and then a few bones, which block the light. In comparison, the ends of the fingers have a very light value because there's not much inside to block the light.
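As a small illustration, here is a hedged Maya Python sketch of these SSS settings on an aiStandardSurface; in the actual setup the radius and scale are driven by painted maps rather than the constants used here, and the randomwalk_v2 enum index should be verified against your MtoA version:

import maya.cmds as cmds

# Hypothetical shader name; the color values come from the article.
shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='frog_skin_SSS')

cmds.setAttr(shader + '.subsurface', 1.0)    # full SSS weight
cmds.setAttr(shader + '.subsurfaceType', 2)  # assumed enum: 2 = randomwalk_v2

# SSS radius color: which wavelengths scatter through the surface.
# Light blood-red for most of the body (1R, 0.3G, 0.1B).
cmds.setAttr(shader + '.subsurfaceRadius', 1.0, 0.3, 0.1, type='double3')

# Scale stand-in: a light value lets light travel far (fingertips),
# a dark value keeps it shallow (knees over bone).
cmds.setAttr(shader + '.subsurfaceScale', 1.0)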

Here are a few images to show the SSS in action:

Specular Roughness

The lighter the value, the rougher the specular will be. The belly is slightly rougher, as it's a bit dirtier and drier since the frog keeps dragging it around. Big cavities are rougher because dirt and oil would accumulate there. The top of the head is the slimiest, so I used a darker value there.

Here's an overview of the material.

I hope this small article was helpful and interesting!

Gabriel Lebel Bernier, Senior CG Generalist

Interview conducted by Theodore McKenzie
