Ethan Clark, a recent graduate of the Gnomon School of VFX, shares some of his workflow and secrets for creating a stylized atmosphere for his latest project, the dreamy Palia House.
Howdy! My name is Ethan Clark, and I’m a recent graduate of the Gnomon School of Visual Effects, Games, and Animation in Los Angeles, having received my diploma in January 2023. I’m an Environment Artist/3D Generalist who loves creating stylized pieces that share some sort of narrative with the viewer. To me, storytelling is the most important part of creating artwork, and I strive to tell stories in all of my pieces.
I started as an oil painter, studying portraiture and illustration at The Art Academy in New Jersey. My background as a painter gave me an acute awareness of the principles of lighting, color, and composition. Eventually, I decided that I wanted to find an artistic outlet that would allow me to work more collaboratively while reaching more people with my stories, so I turned to the film and games industries. I sought out a school that could give me the tools to create industry pipeline-ready artwork that was also uncompromising in its aesthetic beauty, and this led me to Gnomon.
The Palia House was my final work created in my Demo Reel class during my time in Gnomon’s BFA program. In the scene, I didn’t just want to make “a CG environment”. I wanted to create a moment in time.
The Palia House Project
The Palia House is based on a piece of concept art by Etienne Hebinger, a French concept artist currently working as a freelancer. While I’m sure the concept art had its own narrative intention, I was simply a big fan of Etienne’s style and wanted to use his environment as a launchpad to tell my own story.
I created a reference board of other houses, both real and fictional, which inspired me. Using the styles of Studio Ghibli, Disney, and other 3D artists including Luuh Zavala and Allan Bernardo as references, I embarked on my journey to make a peaceful and relaxing environment.
To me, the essence of a beautiful CG environment is to use realistic or semi-realistic textures on stylized geometry. Most “pretty 3D animated movies” follow this, and this is the reason why Pixar movies are consistently visually stunning. Most Disney, Pixar, and Dreamworks environments have a level of “wonk” in their geometry, meaning that no edge is completely straight. They are all at least a tiny bit wobbly to prevent the environment from looking too realistic, even if the textures are created from photo information. When you look at the tower from Tangled, you can immediately recognize that it is not real. It does not look photographic. However, closely inspecting the bricks or the plants will reveal that they are created using photographic textures as a base.
The basic modeling process from my ShotCam
One of my major goals in this project was to preserve the stylized 2D shapes while also creating a structure that made sense from all angles in 3D. As per my usual workflow, I set up a “ShotCam”, which would be the fixed camera from which the house is intended to be viewed. I would do most of my modeling from the perspective camera but refer back to the ShotCam frequently to ensure I was still matching the silhouette of the concept.
I blocked out the house geometry and added supporting edge loops to every mesh so that they would be subdividable. I try to keep my faces as close to square as I can within reason to avoid UV stretching. I create a V-Ray displacement node in my Outliner and drag the geometry of the entire house into it so that it will all be subdivided at render time. This makes the geo look clean and high resolution in my render while keeping it lower-poly in my viewport to optimize the scene.
To displace the roof tiles and “wonkify” the house, I took the whole thing into ZBrush as one object and used the move, move topological, and crumple brushes to wiggle around the geo until it looked properly wonked. There isn’t too much pressure to be overly careful either, as I store a morph target before starting so that I may selectively use the morph brush to undo wonking that occurred in unwanted areas.
Video example of how I use the crumple brush to wonk geo in ZBrush:
Video example of how I use the morph target brush to selectively undo my wonking:
In total, the house took up 20 UDIMs over its 7 different materials. I separated the different materials logically, into things such as Wood, Metal, Cloth, and Roof Tiles. My texturing workflow generally starts with flat colors, and I add on HSL Adjustment layers or flat color layers masked with generators like Light and Curvature to add a bit of variation.
From there, I add layers with more color and roughness variation/breakup, using grunge and noise patterns in the mask so that it adds breakup in random patterns, depending on my needs. For a wetter surface, I may have areas of lower roughness breakup with a more splotchy noise pattern in the mask. Or for an old and dirty surface, I may add a darker color with more roughness in an ambient occlusion mask to signify that there is more dirt in the crevices between things. Whenever using these sorts of generic generators, I find that it is good to multiply or divide some type of noise on top of it in the mask so that there is variation. Variation is the key to texturing.
On top of my color and roughness variation layers, I add a height channel at -0.1 with anisotropic noise in masked-out areas to create cracks in the roof tiles. I create an anchor point on this and make a new layer that fills with the crack mask, blurs, and then subtracts the crack mask so I am left with a new mask that only affects the area around the crack. Making this layer lighter in color adds contrast around the cracks and makes them stand out more.
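The blur-then-subtract trick behind that anchor-point layer can be sketched as plain array math (this is an illustrative stand-in, not Substance Painter's actual node graph; the function names and radius are my own):

```python
# Concept sketch of the crack "halo" mask: blur the crack mask, then
# subtract the original, leaving a soft band around the crack only.

def box_blur(mask, radius=1):
    """Simple 1-D box blur: average each sample with its neighbours."""
    n = len(mask)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

def crack_halo(mask, radius=1):
    """Blurred mask minus original mask, clamped at zero: the result is
    zero inside the crack and nonzero only just beside it."""
    blurred = box_blur(mask, radius)
    return [max(b - m, 0.0) for b, m in zip(blurred, mask)]

crack = [0, 0, 0, 1, 1, 0, 0, 0]   # 1 = inside the crack
halo = crack_halo(crack)           # lightening this mask pops the crack edges
```

Because the subtraction removes the crack interior itself, a lighter color painted through this mask brightens only the rim around each crack, which is exactly what adds the contrast described above.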
On top of this, I add more grunge as well as moss, which gives the tiles so much life. The moss mask is created by combining Substance’s Dirt generator with a Clouds noise pattern and a Moisture Noise and then subtracting a different Clouds noise pattern. After crunching the values with Levels, I drop an anchor point so that I can use this mask in layers on top of it. I then create another layer of “Dense Moss” on top with a slightly different color and more height. I use my “Moss Occlusion mask” anchor point as a base and tweak it with levels, bevel, noise, and warp until I have an interesting look to it. In particular, I think the warp helps to give the mask a fuzzy edge look, like moss.
The final layer is my lighter moss, which uses the “Moss Dense mask” anchor point as a fill and then crunches the levels on it so that just the brightest values become more of a lime green. All of this is in an effort to create interesting variations. In the end, I was very pleased with the breakup it created, and the moisture noise in particular gives very interesting little particles that do resemble moss to me.
Since all three layers are procedural, I am able to tweak the balance slider on the initial moisture noise to increase the overall moss density. I can also change the seeds in any of the noises to vary the look and copy/paste the entire Moss Smart Material with different seeds to add more overall.
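The moss-mask construction can be sketched as simple mask arithmetic. Everything here is an illustrative assumption: the generator outputs are stood in by hand-picked sample values, and I've assumed "combining" means multiplying, which may differ from the actual blend modes used:

```python
# Concept sketch of the moss occlusion mask: Dirt x Clouds x Moisture
# Noise, minus a second Clouds pattern, then a Levels crunch.

def levels_crunch(mask, lo, hi):
    """Remap [lo, hi] to [0, 1] and clamp, boosting contrast."""
    return [min(max((m - lo) / (hi - lo), 0.0), 1.0) for m in mask]

def moss_mask(dirt, clouds_a, moisture, clouds_b):
    """Multiply the contributing noises, subtract the breakup noise,
    then crunch the values so the mask reads as distinct moss patches."""
    combined = [max(d * c * m - b, 0.0)
                for d, c, m, b in zip(dirt, clouds_a, moisture, clouds_b)]
    return levels_crunch(combined, lo=0.1, hi=0.6)

# Three sample texels: mossy crevice, clean tile face, damp shaded corner.
dirt     = [0.8, 0.2, 0.9]
clouds_a = [0.9, 0.5, 0.7]
moisture = [0.7, 0.8, 0.9]
clouds_b = [0.1, 0.3, 0.05]
mask = moss_mask(dirt, clouds_a, moisture, clouds_b)
```

The dense and light moss layers would then reuse this result (the anchor point) with progressively harsher crunches, so only the brightest regions survive into the lime-green top layer.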
It was important to me that the roof tiles had their own material in Maya so that I could use VRayMultiSubTex to add random variation to each individual tile. VRayMultiSubTex is a powerful node that you can connect to any file texture in Maya to randomly adjust the hue, saturation, and gamma within a certain range. Keeping these values very low will give subtle color variation to each individual roof tile, helping to give the whole roof more personality.
Video demonstrating the power of VRayMultiSubTex:
I use the same technique in my grass and fence segments to give more visual interest. As I said before, variation is everything.
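Conceptually, the per-tile jitter that VRayMultiSubTex applies looks something like the sketch below. This is plain Python with seeded randomness standing in for V-Ray's internals; the ranges and seeding scheme are my own assumptions, not the actual implementation:

```python
# Concept sketch of per-ID hue/saturation/gamma variation, as applied by
# a randomizer node like VRayMultiSubTex. Small ranges = subtle variation.
import colorsys
import random

def tile_variation(base_rgb, tile_id,
                   hue_range=0.02, sat_range=0.05, gamma_range=0.1):
    """Return a slightly varied copy of base_rgb, deterministic per tile_id."""
    rng = random.Random(tile_id)        # same tile always gets the same offset
    h, s, v = colorsys.rgb_to_hsv(*base_rgb)
    h = (h + rng.uniform(-hue_range, hue_range)) % 1.0
    s = min(max(s + rng.uniform(-sat_range, sat_range), 0.0), 1.0)
    gamma = 1.0 + rng.uniform(-gamma_range, gamma_range)
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(c ** gamma for c in (r, g, b))

base = (0.55, 0.25, 0.20)               # terracotta roof-tile color
tiles = [tile_variation(base, i) for i in range(5)]
```

Keeping the ranges tiny is the whole trick: each tile reads as the same material, but no two are pixel-identical, which is where the "personality" comes from.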
This project involved a bit of set extension/matte painting – creating separate elements in the background and then compositing them together. The two background elements that had to be created separately were the clouds and the mountain. The clouds were created in Houdini using only eight nodes. They were rendered in Redshift with ACES and then composited into my main render later in Nuke.
I created the base geo for the cloud meshes in ZBrush, by quickly jamming a few spheres into each other and mushing them around with the move and clayBuildup brushes. From there, I exported the geo as a .fbx and brought the file into Houdini. In Houdini, I was able to use Cloud and CloudNoise nodes to quickly develop my desired look.
This project was the first time I had to make my own clouds, which is a task that is usually reserved for the FX track students at my school. I asked for some guidance from my friend and classmate Amanda Kutcher, who explained to me that most of the work in lookdeving a cloud is usually in the shader, adjusting the Density and the Absorption Coefficient until you reach the desired look. The lighting was important as well, as I wanted to match the lighting of my main scene the best I could. I had two lights: a key light, which represented the sun, and a dome light, which represented the sky. I color-matched the dome light to the sky of my reference image so that the cloud would properly reflect the blue sky color in its shadows.
The background mountain was actually very simple to create. The underlying geometry was just two spheres, slightly modified with a texture deformer in Maya. On top of that, I used MASH to scatter instances of a tree that I created in SpeedTree all over, with over a million trees in total. I placed this mountain in its appropriate location in the Maya scene in relation to the house and used the same ShotCam but rendered it separately to optimize render time. With the VRaySun/Sky that I am using, I am able to include bluish atmospheric fog that increases in density with distance from the camera to get the “far away mountain” look. The house and tower on the mountain are also just slightly modified duplicates of the ones in the foreground.
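The "far away mountain" look comes from fog density increasing with distance, which can be sketched with generic exponential fog math (an assumption for illustration; V-Ray's aerial perspective model is more involved than this):

```python
# Sketch of distance-based atmospheric fog: blend toward a bluish haze
# color with transmittance falling off exponentially with distance.
import math

FOG_COLOR = (0.55, 0.65, 0.85)          # bluish haze

def apply_fog(rgb, distance, density=0.002):
    """transmittance = exp(-density * distance); far objects fade to fog."""
    t = math.exp(-density * distance)
    return tuple(c * t + f * (1.0 - t) for c, f in zip(rgb, FOG_COLOR))

green = (0.1, 0.4, 0.1)
near = apply_fog(green, distance=10)     # nearby tree: barely fogged
far = apply_fog(green, distance=2000)    # distant mountain: mostly haze
```

Because the mountain sits thousands of units from the ShotCam, its million trees collapse into that soft blue mass, which is exactly what sells the scale.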
It was important to me that I included characters in this piece. While it still would have been a pretty environment without them, I really wanted to create a piece of artwork rather than just an environment. To me, a finished piece of artwork is the most compelling when it has both strong characters and a strong environment.
The character models themselves went through quite a journey to get to their current state. They actually started as base meshes for the characters in my previous work, The Adventure Boys. For a different project, I collaborated with my friends Rain Rouhani, who is a talented character artist, and David Eisenstadt, a talented generalist who now works as a look development artist at Dreamworks. Rain used my Adventure Boys meshes and transformed them into totally new characters, who we named Young Man Richard and Lilly. David created their clothing in Marvelous Designer. Months later, I gave Richard and Lilly a fresh coat of paint on their textures and clothing and re-posed them for this project.
The narrative itself is simple: Richard holds flowers behind his back. Lilly answers her door and is excited to see him. It could be a first date, or they could be long-time lovers. The sheep look on, excited to see what she’ll say in response to his advance. If you look carefully at the right side of the garden, you’ll notice a clump of dirt in the grass. I included this here to show that Richard had picked these flowers from Lilly’s own garden, perhaps a sign of his poor planning or aloofness.
For the sheep, I started with the ZBrush default dog and deformed the geo until it resembled a sheep. This sped up the sculpting process since I didn’t need to retopologize.
Added bonus – you get to do this:
When I first sculpted the sheep, I was looking at a real photo of a sheep for reference – therefore the result was too realistic for my liking. I attempted to "restylize it" and went through the process of stretching and exaggerating certain features.
The grooming in the scene was done by my friend and classmate Alena Mealy, for both the sheep and the human characters. I sculpted and textured the sheep before sending it over to her to be fluffed with Yeti in Maya. The grooms were done in multiple pieces, with the body fur being separate from the head. This way, the head fur could be worn like a wig or removed for a different look.
It took some iterations to get the look right as I wanted the sheep to look fluffy and dense. While she was working, Alena noted that she would ask herself, “Do I wanna hug this sheep?” And if the answer was no, she knew that she wasn’t done yet. Initially, she created a texture reference object with the original base mesh and then created the grooms in their own file. After grooming and lookdeving, she transferred them over to the main scene with Yeti’s cache groom system so that the groom file could be modified independently and would update in the main scene.
Although the human characters were relatively small in the frame, it was still important to me that they were as high quality as possible. From far away, the shader of the hair becomes very important, and it’s important to have the proper breakup to avoid your characters looking like LEGO minifigures. Shown below are the Yeti grooms that Alena created for Lilly and Richard.
This scene uses a mixture of handmade foliage, modified SpeedTree assets, and Megascans. The grass was hand-sculpted in ZBrush and textured in Substance 3D Painter, and then taken into SpeedTree where I made cutouts for each of the grass blades that I created and distributed them in a few patterns of various densities. These are examples of what I actually exported into Maya and used in my scene. Once it was in Maya, I hand-placed some instances and used MASH to scatter others. I was initially planning on using SpeedTree’s wind animation in Maya so the grass would move slightly, but the files proved too unruly, taking up way too many gigabytes to be worth it once the animation was attached.
The trees in my scene were created by starting with the SpeedTree preset Broadleaf tree, and changing some of the materials and generation variables until I reached the desired look. Luckily, the preset was already similar to what I was going for, but I tried to change it up regardless to avoid having too much of a cookie-cutter tree.
One of the most powerful uses of SpeedTree to me is the ability to grow foliage off of geometry. I used geometry, gnarl, and twist forces to create trunks that wrap around the house geo and grow like real vines. From there, branches, twigs, and even more twigs were piled on top to create all the little vines that would hold the ivy leaves. This helped to reinforce the work I did in the texturing and add the environmental storytelling element that this house has been here for a long time, and it is a safe place that is totally integrated with the nature around it.
Lighting & Rendering
My personal lighting technique involves “overlighting” in Maya and then “re-lighting” in Nuke. What this means is that I use more lights than I need to in Maya, basically including a light anywhere that I think it might look good or be useful and I make them all a little bit brighter than they need to be. I connect every single light to its own V-Ray Light Select render element, and this will save my life later. In the initial render, the scene will look over-exposed and blown out. This is totally fine.
Once I get the render into Nuke as a multichannel EXR, I use the Light Select passes to change the lighting of my scene after the fact. This is extremely powerful and lets me re-light the scene in real time in Nuke. This way, I have more control than if I was doing it in Maya, as I can see the lighting change in perfect 4K resolution instantly rather than having to wait for my noisy IPR to update in Maya. I can even turn lights off entirely or give them different colors.
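The math that makes this work: the beauty pass is, to a close approximation, the sum of the per-light Light Select AOVs, so regrading each AOV and re-summing "re-lights" the frame without re-rendering. A minimal sketch on a single pixel (the light names, values, and weights are made up for illustration):

```python
# Concept sketch of Light Select relighting as done in a Nuke comp:
# rebuild the pixel from per-light AOVs with new intensities/tints.

def relight(light_aovs, weights, tints):
    """Sum per-light RGB contributions, each scaled by a new weight/tint."""
    out = [0.0, 0.0, 0.0]
    for name, rgb in light_aovs.items():
        w = weights.get(name, 1.0)          # 0.0 turns the light off entirely
        tint = tints.get(name, (1.0, 1.0, 1.0))
        for i in range(3):
            out[i] += rgb[i] * w * tint[i]
    return tuple(out)

pixel_aovs = {
    "key_sun":     (0.30, 0.25, 0.18),
    "sky_dome":    (0.05, 0.08, 0.12),
    "window_glow": (0.10, 0.06, 0.02),
}

# Original beauty: every light at full strength, no tint.
beauty = relight(pixel_aovs, {}, {})
# "Overlit" render fixed in comp: dim the sun, kill the window light.
graded = relight(pixel_aovs, {"key_sun": 0.7, "window_glow": 0.0}, {})
```

This is why overlighting is safe: any light that turns out too bright, too warm, or unnecessary can be dialed down or zeroed out in comp, as long as every light got its own Light Select element at render time.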
Example of how Light Select passes can be used in Nuke:
After re-lighting with light select, I composite my background clouds and mountain and then begin color grading. I can use my Cryptomatte to selectively grade certain elements, making their colors more vibrant and improving the overall feel of the image.
Managing my renders was crucial. Since this scene was heavy, it was not an option to render all 500 frames of the entire scene, including the 5 grooms. Therefore, I used Maya/V-Ray’s render region feature to selectively render 500 frames of only the moving elements: the foreground tree, the cloth simulation, and the windmill. I comped these on top of the static render of the entire frame in Nuke and made a fake camera zoom in Nuke by transforming the whole image. The smoke, fire, leaves, birds, and fireflies were also comp elements that I acquired from the ActionVFX website.
I feel that subtle movement is the key to making an aesthetically pleasing environment.
If there are a million things jumping around, your attention will be split and the whole image will feel too busy. Conversely, a completely static image can become boring quickly. Therefore, adding several sources of subtle movement will keep the viewer stimulated without becoming too distracting.
In the end, I add some grain and chromatic aberration as well as defocus the entire image a bit. Of course, the foreground plants are defocused the most to indicate the depth of field. In general, images look better when they are not too sharp and clear because sharp and clear CG looks fake. We want our images to maintain some of the imperfections that real cameras create, such as lens scratches, chromatic aberration, and film grain.
The Palia House is one of my favorite projects that I’ve made to date because of how many different areas of the pipeline I got to incorporate. I felt like a true generalist: from character sculpting to environment creation to cloth simulation to cloud generation to compositing, I got to do a little bit of everything here. The whole project took me about ten weeks from start to finish.
If I could share one takeaway from this, it’s that I think it is important to tell a story with your art. We get very caught up in the technical aspect of what we do, but every once in a while you need to step back and remind yourself why you started making art. You want to tell a story and share something beautiful with others. CG and VFX are just other media to do this, like oils or colored pencils.
Thank you, Gnomon School of VFX, for providing me with the education that made this piece possible, and particularly to Miguel Ortega, Tran Ma, and David Eisenstadt for teaching me what I needed to make this project a reality.
I hope you enjoyed reading about my art. Feel free to reach out to ask questions about this project or just connect on ArtStation, LinkedIn, or via email.
Ethan Clark, Environment Artist & 3D Generalist