The team shared some insights into the production of the award-winning title: from the interactive ropes system and realistic blood behavior to next-level immersion and the beauty of destruction.
Bringing New Elements into the Next Title
80.lv: For The Last of Us Part II, you had quite a good base with Uncharted 4 and Uncharted: The Lost Legacy. What are the main elements of your tech that you brought in and built the new title upon?
Michael Fadollone, Technical Artist: The main element of our tech is the diversification of tools we use within our department. No single tool is used to create an actor or set piece. We have many tools at our disposal, and it's up to the individual artist to compose their own toolset. For example, I use Maya, Fracture FX, Havok, Substance Painter, ZBrush, and just recently added Houdini, whereas someone else might work with Maya, Houdini, Havok, and lean more on scripting. We're all different in one way or another and have a lot to offer in our own creative ways.
Neilan Naicker, Technical Artist: I think one of the foundations that we established with Uncharted 4 and expanded with Lost Legacy, is the sheer amount and scale of destructible and physics elements like plants, chippable covers, and, for a new addition in The Last of Us Part II, breakable glass. We wanted every available surface that the player was likely to interact with to respond in some way, and thanks to a huge performance boost to our physics code by our dedicated physics programmer, Jaroslav Sinecky, we were able to add a lot more than ever before.
80.lv: What destruction features that you've added change the way gamers view the space? Probably, interactive glass comes to mind most often. It really forces us to look at things in the game differently.
Michael Fadollone: Let me preface that I strongly believe that there's no I in team. I would not have been as successful with my work if it wasn't for the support of my team and other departments. I know that I’ve done a great job when my creations harmonize with everything and everyone.
We've been constructing extraordinary destruction ever since the first Uncharted, and it got bigger and better with each game. In the Uncharted series, the player navigates through a destructive sequence while jumping, running, swinging, and dodging through space as the environment alters the player's original path.
In The Last of Us, The Last of Us Left Behind, and The Last of Us Part II, we had a few smaller-scale destructive set pieces to help move the player along or to add dramatic tension to the narrative. This was not the main emphasis like it was in the Uncharted series. We took a more subtle approach to move the player through the environment with our glass tech, chippable cover, and destructible cover.
It wasn't until John Sweeney challenged our department to invent a glass tool that we really got motivated. Christophe Desse accepted the challenge and started to research and develop. One day, I turned and noticed what he was doing, and he asked if I had a solution. I showed him a process to construct breakable glass shards from a texture. It developed further: Charlotte Francis created a fractal glass shader, Jaros Sinecky made a few tweaks to Havok, and Neilan Naicker compiled a bunch of tech under the hood into a simple tool for us. Many more iterations later and voila! A breakable glass tool for us and a new toy for level design.
We also used the destructible and chippable covers to force the player to move in the game. We've used this tech in the Uncharted series, and The Last of Us series as well. It makes it more difficult for campers to maintain their position if there's no cover.
Our destruction creation is only part of the ensemble; sound, lighting, and FX are essential to complete the quartet. One needs the others to experience the full effect.
80.lv: How does destruction work in games today? What are the main approaches to this challenge, and how did you decide to work on different types of destruction in The Last of Us Part II?
Michael Fadollone: Destruction acts as a guide, influencing the player toward a direction by design.
Usually, the designer and animator will have started the layout animatic and general player direction. In the vehicle sequence of the Hillcrest chapter, one of our animators, Bryant Wilson, set up the layout animatic with Daniel Harrison, the designer, to determine the general direction for the vehicle’s path. They asked for necessary objects, like a car, shrubs, or fences to be smashed into by the vehicle. I then tried to find existing Havok objects or create new Havok objects with small-scale debris within them to add secondary dynamic motion into the already exciting scene. Next, we playtest, add, remove, modify, optimize, and refine until we’re satisfied or run out of time.
Either Mike Hatfield or Christophe Desse decides who gets what. We're all assigned levels to maintain and manage, and if an artist is overwhelmed with tasks, they make the adjustments. It's safe to say we're all comfortable knowing each other's strengths and weaknesses and assisting each other to fill the gaps. No egos here.
Assets and Weapons
80.lv: The spaces in the game are filled with a stunning number of very pretty objects. Maybe it's some clever organization of the production cycles, but it seems like every asset was hand-made painstakingly and you rarely see assets being reused. How do you achieve this level of quality? Do you rely on scans in these cases? What’s the tech that eased your life the most in terms of asset production?
Neilan Naicker: Almost all of the assets are assembled by our extensive environment team, but when it comes to destruction or physics behaviour, every single one needs an individual setup of rigid bodies and constraints to look believable. Thankfully, we have more people than ever before authoring the Havok behaviours.
80.lv: How did you optimize assets for the game? How did you set up budgets for them in terms of wireframe and size? When enriching environments with props, did you have some limits for how many assets can be used within one scene?
Neilan Naicker: We make extensive use of automated Level of Detail generation in Simplygon. When we create an asset, we have tools that determine the optimal distances at which to reduce its polycount, and Simplygon handles the rest. We don’t often bump up against huge polygon count limiting factors in the foreground, but one of the biggest caps on quantity is rigid body count, which is how many live physics objects we can have at any given time; anything up to around 1000 is generally okay, but if we’re frequently close to or over that limit, we usually have to dial things back.
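The distance-based LOD selection and rigid-body cap described above can be sketched roughly as follows. This is a hedged illustration only: the function names, the LOD-band scheme, and the exact budget test are assumptions; the only figure taken from the interview is the ~1000 live rigid-body limit.

```python
# Illustrative sketch of distance-driven LOD selection and a rigid-body
# budget check. Names and thresholds are hypothetical, not Naughty Dog code;
# only the ~1000 rigid-body cap comes from the interview.

RIGID_BODY_BUDGET = 1000  # approximate cap on live physics objects

def can_spawn_rigid_bodies(live_count: int, requested: int,
                           budget: int = RIGID_BODY_BUDGET) -> bool:
    """Return True if spawning `requested` more rigid bodies stays in budget."""
    return live_count + requested <= budget

def pick_lod(distance: float, lod_distances: list[float]) -> int:
    """Pick the lowest-index LOD whose switch distance covers `distance`.
    `lod_distances` holds the far edge of each LOD band, sorted ascending;
    beyond the last band we fall through to the coarsest representation."""
    for lod, far in enumerate(lod_distances):
        if distance <= far:
            return lod
    return len(lod_distances)  # coarsest LOD (or culled entirely)
```

In a real pipeline the `lod_distances` per asset would be produced by the tools Neilan mentions, with Simplygon generating the reduced meshes for each band.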
80.lv: Another thing you’ve done here is the incredible weapons. Could you tell us a bit about the way you’ve been approaching these pieces, and more importantly how you worked on texturing? The way the weapons look, the way they stain and wear -– it’s just something extraordinary. We never thought Substance Painter was capable of it.
Inkyo Lee, Technical Artist: There are no specific rules we have to follow when making art for weapons. We all use Substance Painter to make our textures, but each of us has our own unique way of approaching these pieces. That's the one thing I really like about Naughty Dog. We share our techniques and use whatever is most convenient for our work. We all have different approaches, so there is no one answer. In Substance Painter, some of us use custom smart materials, some prefer to hand-paint rust and grime, and others create masking layers to generate rust or grime in the shader layer.
Alex Rivera, Technical Artist: The first thing I do is gather as much reference as I can of real-life weapons, and of ones that were used in combat, to get a good sense of what they look and feel like before I start to model and texture them.
On the modeling side, I always start out by making the model in subd, which makes it easier for me to make changes down the line if somebody wants something to have a different look or if parts of the concept change during approval. I use ZBrush for sculpting detail in places where the weapon would normally have the most wear and on edges where you would see things like chips and dents. Low-poly models are pretty straightforward; normally we try to keep them under a maximum of 25,000 triangles. When we lay out UVs, we try to get as much texel density as possible in order to get good resolution out of our texture maps, and when we bake them in Substance Painter, common methods include mirroring UVs and having certain objects share the same UV space.
On the Painter side, I tend to use custom generators for dirt, edge wear, and rust but mask it out using grunge masks to have some breakup. Everyone tends to do their own thing when they work in Substance Painter but we always come across sharing techniques that we use to show others how we work on our textures.
To make our weapons interactable with combat and the environment so they can feel as real as possible, we used custom shaders that environment artist Steven Tang worked on; they allowed us to add dynamic blood, wetness, and mud effects to all of our weapons. Each effect has its own grunge mask.
80.lv: Please, tell us more about creating grass and implementing it in gameplay. How did you make this game mechanic work? How did you set up a visible distance between the main hero and the enemies?
Jonny Chen, Environment Artist: The creation of grass went through dozens of iterations. Much of the challenge was balancing the density of geometry and performance – making sure we had enough grass geometry to cover up the player, making them feel hidden, but not too much so that the framerate was not affected in a noticeable way. We had to make different types of stealth grass that the player could crouch in as well as prone crawl through. There are visibility volumes created for the grass that determine if the player is inside a clump of stealth vegetation. And we used these to calculate how hidden the player is based on the distance to the enemy and if the player is crouched or prone.
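The visibility-volume idea Jonny describes could be sketched along these lines. Everything here is an assumption for illustration: the axis-aligned box shape, the stance names, and the falloff numbers are invented; only the overall logic (inside a grass volume, stance, and distance to the enemy combine into a hiddenness value) comes from the interview.

```python
# Hedged sketch of stealth-grass visibility volumes. The box shape, stance
# names, and all numeric falloffs are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class GrassVolume:
    min_corner: tuple  # axis-aligned box for simplicity
    max_corner: tuple

def player_in_volume(pos, vol: GrassVolume) -> bool:
    """True if the player position lies inside the grass volume."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(pos, vol.min_corner, vol.max_corner))

def hidden_factor(in_grass: bool, stance: str, dist_to_enemy: float) -> float:
    """0.0 = fully visible, 1.0 = fully hidden. Prone conceals better than
    crouch, and distance to the enemy scales concealment (numbers invented)."""
    if not in_grass or stance == "stand":
        return 0.0
    base = 0.9 if stance == "prone" else 0.6
    # beyond ~15 m the grass conceals at full strength; closer, it tapers off
    return min(1.0, base * min(dist_to_enemy, 15.0) / 15.0)
```

An AI perception system could then compare `hidden_factor` against a detection threshold per enemy.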
Interactive Ropes System
80.lv: Interactive ropes and cables are one more great addition to The Last of Us II gameplay. How difficult was it to let players explore new areas using these elements? How did you set up the mechanics inside your engine?
Pete Ellis, Game Designer: When we were developing the puzzles for the new rope mechanics in our game, we aimed to make them really stand out as unique pieces of gameplay that no other game had included before. We wanted to combine the joy of playing with a fully physicalised rope and seeing how it interacted with the world, with intuitive problem-solving. To achieve this, we had to make sure that the rope puzzles required players to really engage with the environment and piece together a solution based on their expectations of the real world.
The way we achieved this was through ‘Affordances’, which is where the characteristics of an object define its possible uses – how it can, or should, be used. For example, a flat plate on a door affords being pushed as there is not only an area to put your flat palm on, but it also lacks an object to grab in order to pull. Door knobs, on the other hand, afford being turned/twisted, pushed, and pulled.
In terms of our ropes, there were lots of affordances for us to consider. This included things like horizontal bars or poles affording throwing the rope over it to drape down, or a pole sticking up signifying that the rope could be thrown around it to act as a pin for climbing up. We wanted the player to be able to decide how they could solve a puzzle based on real-world expectations which meant we had to lean into real-world affordances and support every interaction they thought they could do.
An example of a rope puzzle that leant on real-world affordances for possible rope interactions was the optional rope puzzle in the abandoned streets in the Ellie Seattle Day 2 level ‘The Seraphites’.
This puzzle told a story of a couple who had been hiding out in an abandoned convention centre and had been found by the enemy faction, the Scars. The husband (who you find the remains of later in the level) had gone out to find medication for the wife, but having been left alone, she’d been caught by the Scars and strung up. When you arrive at the puzzle you find remains of the skeleton underneath the noose she was hung from, the head having detached after decomposition. The hanging noose is what you use to solve the puzzle to get inside her locked hideout to be rewarded with a stash of items and collectibles.
The player climbs a ladder to the floor above to be able to pick up the rope and finds a clear goal of the item stash locked behind a door, which was further highlighted by a camera blend and a unique dialogue, so players knew what the aim of the puzzle was.
It was a small ‘possibility space’, however, which meant there was nothing else to interact with that players could get confused by or fixated on. The only thing to interact with at the start was breakable glass, which throughout the game we had taught could be smashed, so players could play with this element and see what results they get to help puzzle progression.
Smashing the closest window on the door side leaves an unobstructed view of a horizontal bar which has an affordance that the rope can be draped over it. As there is a drop to the ground on the other side of the window, the horizontal bar is the only ‘new’ element that the player has for puzzle solving. Additionally, there is a triangular piece sticking out from the bar support that further affords a place that the player can pin the rope around.
The noose had previously been hanging over some railings when the player entered the environment, which was a subtle way to prime the player into thinking about the possibilities with hanging ropes. The solution isn’t immediately obvious, so players don’t just follow steps they feel a designer is laying out for them. In fact, with this particular puzzle, as it’s an optional space, I wanted the solution to require using a consumable item, such as a brick or bottle, or in their absence, some ammo. There is a broken piece of the horizontal bar that is pointing downwards, which helps both to draw attention to the area the player can pin the rope around and to deny the affordance that the rope can be used on the damaged bar. This suggests to the player, just by its very nature and shape, that if they throw the rope onto this sloped bar the rope will simply slide right off.
Having just primed the player with smashing glass, the solution is to then smash the glass that is across the unbroken horizontal bar (by throwing a brick or bottle at it, or shooting it, as it’s not within reach of a melee swing) in order for the rope to then be thrown over it to create a hanging rope the player can climb.
It was extremely rewarding to hear focus testers verbalise their internal thoughts and say things like “I wonder if I can hook it over that? I’m gonna give that a try” or even “I don’t know if that was the way you wanted me to do it, but I thought it was possible so I tried it and it was awesome that it worked!”. With a more difficult optional rope puzzle like this it was exactly the experience I was aiming for; not having an obvious solution but one where the player can try things out based on what they’re expecting from the real world.
We strived hard to support anything that the player could theoretically do, even if it wasn’t the desired solution or wouldn’t necessarily get players closer to the goal. This was so that immersion wouldn’t be broken if the player discovered areas where the rope didn’t behave as expected, without explanation or reasoning.
The way our engine worked was that the rope considered a flag on the collision to see if it could be draped over it and support the player climbing up it. We made sure that any object or surface that looked like you could hang the rope off it behaved exactly as expected; if it afforded draping, it supported draping.
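A minimal sketch of the collision-flag idea above might look like this. The flag names and bitmask layout are invented for illustration; the interview only tells us that collision geometry carries a flag indicating the rope can be draped over it and support a climbing player.

```python
# Hypothetical collision flags for rope interactions; names and layout are
# assumptions, not the engine's actual representation.

DRAPEABLE = 1 << 0   # rope can be draped over this collision
CLIMBABLE = 1 << 1   # a rope hung here supports the player's weight

def rope_can_drape(collision_flags: int) -> bool:
    return bool(collision_flags & DRAPEABLE)

def rope_supports_climb(collision_flags: int) -> bool:
    # "if it afforded draping, it supported draping": surfaces marked up for
    # draping are also marked climbable during level markup
    return bool(collision_flags & DRAPEABLE) and bool(collision_flags & CLIMBABLE)
```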
This meant that players could experiment with where to place the rope themselves in order to get closer to a solution. It’s a lot more satisfying for the player to figure a puzzle out themselves than to just hammer the buttons on the controller, trying to find out where the designers want them to place the rope, which is what would have happened if we had only supported one place it could be thrown over.
This meant a lot more work for us, like marking up collision to support the rope abilities, optimising the collision so it didn’t blow the physics budget, and clearing areas of any clutter to avoid any awkward rope interactions. There was a lot of back and forth between design, the background artists, and the physics programmers about certain objects that caused issues to see whether they could be removed, smoothed out, or have additional geometry to remove any nooks and crannies that would cause problems. It was a lot of effort, but I think it was worth it as the rope puzzles and physics have received a lot of praise from players!
Supporting any affordances in the environment allowed, in many cases, for the rope puzzles to be solved in more ways than one, which further fed back to our goal of allowing the player to figure the puzzles out based on their real-world expectations. For example, this optional rope puzzle with the noose in the convention centre could be solved not just by hanging the rope from the horizontal bar above, but the player could also throw the rope down across a platform that was further away and climb up from the other side. This too required the use of a consumable item, such as a brick, and so the constraints of the puzzle didn’t change, but it allowed it to be solved in multiple ways which is more player favouring.
Allowing for interactions with the rope as you’d expect in the real world not only let us provide multiple solutions to puzzles, but also add extra little secret spaces that you could only get to by using a rope. Warning, this is a spoiler (SPOILER WARNING!), but in the level ‘Ellie Day 1 – The Gate’ the QZ area with the rope that is too short to reach the circuit breaker has a hidden secret. When we were supporting every interaction with the rope, including running around with it inside the mobile huts and threading it through the windows, we also supported throwing the rope over the top and climbing up it from the other side! If the player does this they are rewarded with a lot of pickups such as ammo, parts, ingredients, and upgrade pills, but also a note and a collectible card! It’s a tough find, but if you notice there’s a chair on the roof and go exploring, you’ll be rewarded by playing with the rope!
In fact, this mobile unit, which wasn’t explicitly part of the solution for the main puzzle, probably had some of the most work and attention given to it so that the rope would work smoothly and seamlessly inside it, around it, and over it. Hats off to the incredibly talented programmer Jaroslav Sinecky, who put extra time into the collision in this section. Going the extra mile to support the player ‘threading the needle’ through the windows and climbing on top of it is an example of the Naughty Dog magic that we put into all our games; rather than taking the easier approach of not supporting it, we add that extra touch of love and care to create a great experience that, hopefully, blows your mind!
Jaroslav Sinecky, Programmer: The tech that allowed us to create these rope puzzles in The Last of Us Part II has been in development at Naughty Dog for quite a few years. It started during the pre-production of Uncharted 4 in 2011. The tech was then used for Drake’s grapple rope and jeep winch.
There are two main, mostly independent components at play. First, there is the simulation of slack rope, which is based on discretizing the rope into a system of connected simulation nodes (or particles). Each pair of neighboring particles has a distance constraint between them, each node has collision constraints based on the collision geometry in its vicinity, and then there is a bending stiffness constraint and a few more. You run this system of nodes and constraints through an iterative solver to figure out how the rope should move each frame. The problem is that, for a long rope with a lot of nodes, this kind of system needs a lot of iterations to get good results, making it very taxing for our engine. And if it moves quickly from frame to frame, it’s hard to make it robust enough to guarantee it will not clip through collision.
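The particle-and-constraint setup Jaroslav describes is close in spirit to position-based dynamics. Below is a minimal, hedged 2D sketch under that assumption: Verlet-style integration followed by an iterative pass that projects the distance constraint between neighbours. The collision, bending, and other constraints he mentions are omitted, and all parameters are illustrative.

```python
# Minimal 2D position-based slack-rope sketch: nodes plus distance
# constraints run through an iterative solver. Collision and bending
# constraints are omitted; this is an illustration, not engine code.

def simulate_rope(positions, prev_positions, rest_len, dt,
                  gravity=-9.8, iterations=20, pinned=(0,)):
    # Verlet integration: each node keeps its previous-frame velocity and
    # accumulates gravity; pinned nodes (e.g. the tied-off end) stay put.
    new_pos = []
    for i, (p, q) in enumerate(zip(positions, prev_positions)):
        if i in pinned:
            new_pos.append(p)
            continue
        vx, vy = p[0] - q[0], p[1] - q[1]
        new_pos.append((p[0] + vx, p[1] + vy + gravity * dt * dt))
    # Iteratively project each neighbour-pair distance constraint; more
    # iterations give a stiffer rope, which is why long ropes get expensive.
    for _ in range(iterations):
        for i in range(len(new_pos) - 1):
            (x0, y0), (x1, y1) = new_pos[i], new_pos[i + 1]
            dx, dy = x1 - x0, y1 - y0
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = (d - rest_len) / d * 0.5
            if i not in pinned:
                new_pos[i] = (x0 + dx * corr, y0 + dy * corr)
            if (i + 1) not in pinned:
                new_pos[i + 1] = (x1 - dx * corr, y1 - dy * corr)
    return new_pos
```

With few iterations per frame the constraints stay slightly stretched, which is exactly the robustness trade-off described: solver cost scales with node count times iteration count.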
So instead of trying to achieve that, we developed a second system to solve a simple geometrical problem: imagine two points in space connected by a straight line. Now when you start moving these two end points around in your level the line that connects them has to correctly wrap around edges of collision and slide along them. This seems like an easier problem to solve but it was still much harder than we originally anticipated, especially if you have to deal with all kinds of complicated intersecting pieces of collision and also add a moving collision into the mix. We also didn’t find any previous work done on the subject.
This second system was the one that we really relied on in Uncharted 4. The rope between Drake’s hand and the grapple hook is always relatively close to taut and even if you see slack rope (provided by the first system) there is an invisible straight line maintained that serves as a fallback. This reduced the requirement of the robustness of the slack rope simulation.
Now for The Last of Us Part II, designers came up with the idea of picking up a coil of rope, tying it at one end, and throwing it around. Players can then go and pick it up at any point along the rope. This seemed like something we should be able to do with our Uncharted 4 rope tech but the crucial difference is that in such cases there is no more safe line to rely on. The moment you throw a coil of rope into the level you lose your safe line and have to fully rely on the slack rope simulation.
So production on The Last of Us Part II had us revamping our rope simulation in a major way again, to provide rope that does not break during gameplay. In the end, puzzle areas are limited by the fact that one end of the rope is always tied to one fixed point. But those areas are still big (the rope can be up to 14m long) and provide thousands of ways to thread the rope around. And we wanted to give players the freedom and confidence to go and try everything they can come up with, so each puzzle had to be carefully tested and problematic collision areas cleaned up or adjusted.
I think the results warrant the effort, as these puzzles are pretty unique. Once we started brainstorming and prototyping ideas for what we could do with this rope, we were all surprised at how there are pretty much unlimited possibilities with unlimited levels of difficulty. The Last of Us Part II is less of a puzzle-oriented game than Uncharted 4, and I feel like we really only scratched the surface of what we could do. A lot of people said we should now do a game that is just about rope puzzles. So maybe that will happen next?
80.lv: The game has some incredibly versatile water effects, including rain, flows, reflections, wet materials. What are your favorite things you’ve worked on? Our favorite is probably when the water streams down on some rocky surfaces. What were the biggest technological advances in the way you’ve worked with water simulations and rain tech? It seems like most of the solutions for water are well known (like cube maps for the reflections) but the level of their implementation is just stunning.
Quinn Kazamaki, Visual Effects Artist: A big part of our workflow was just finding specific attributes in our reference that we thought were an important part of a certain look for water, and figuring out systems for how to implement those looks. Basically, features that make the water in video A special in comparison to the water in video B, etc. I think this helped a lot with making the effects we made feel special and different even when some didn’t necessarily use new or revolutionary techniques.
Many effects definitely did need some tech, however! One of the things we wanted to achieve early on was a shader that could transform from clear, refractive water, to frothy and opaque white water, all the way to water mist in the air, or in other words, a shader that allows phase changing. Our lead, Eben Cook, got this working pretty early on in the game and all the assets that I created for the waterfall particles used it. Another tentpole look for water was emulating the surface and material of turbulent water. A large amount of my time went into developing a system for simulating water flows in Houdini, taking the mesh and data from there, and using it to drive flow and the water surface shape in the game. I spent a while hand-modeling some of the water surface shapes to get exactly what we wanted as well.
A ton of thought went into how rain looks too. Eben and Taylor Duval actually set up a reference shoot and recorded video of a collection of drips and streaming water that was used in most of the final assets; it helped us to integrate rain into the environment. Just the drip effect itself was placed over five thousand times!
I think my favorite water effect that I worked on was probably the waves that wash up onto the shore in a few places in the game. Wataru Ikeda was responsible for the vector displacement and splashes out at sea, and I was responsible for how that wave would interact with the ground plane. A lot of it used our material mod system which writes to a special buffer that materials can then read from to edit their material behaviors. Water running up a beach, making it wet, and then seeping back down to the water level is an example of how it works. There was so much thought and time put into even water effects by the whole team that I can't really mention all the things I want to in a short answer, but hopefully, this gives an idea of how we produce new effects.
80.lv: A big part of the tech was this incredible blood and projectile deformation. The way blood pours out of slain bodies, the way blood leaves a trail in the water. How does it work?
Kirsten England, Visual Effects Artist: I’ll start out by saying blood and gore were a massive team effort! We had many discussions with various departments about these elements and what we wanted for the game. We spent time looking at movie references and talking about how blood behaves, as there were many details we wanted to capture. How blood drips and forms into a pool, how it behaves on different surfaces, how it flies through the air, etc. The question came, how do we do this in the game?
Early on, we asked ourselves if our blood particles could be 3D. We saw that as blood grows in size it actually breaks into thinner strings and droplets, and we wanted to get that look in the game as optimally as possible. We started experimenting with 3D vertex animation through a tool Ray Popka developed in Houdini. It ended up being extremely promising and optimal, so we went with 3D vertex-animated blood! The 3D blood also spawns a blood splatter that skews with the velocity of the blood in the air.
Our team actually started doing blood shoots too. Thanks to Kion Phillips, we made fake practical blood to see how it would behave in many different scenarios. It was actually quite a lot of fun and our lead, Eben Cook, had the idea to start using some of these practical assets in the game. This is how blood pools and drips were created for the game. We would film the blood pools and then I would create a threshold and flow maps to drive the particles. Natalie Lucht then created the thickness maps. Eben also developed the shader for how blood pools and drips behave on different surfaces based on the surface height map as well as the deformation of the melting snow. Eli Omernick gave us the ability to spawn blood pools on different surfaces, which is how we achieved blood behaving differently on default surfaces, water, and snow.
One of the coolest challenges was actually getting the look for blood and wounds on bodies. Most of the characters in the game have a blood map and wound map, dynamic render targets we write to with particles. The maps reveal blood and flesh looks designed by the Character team. Steven Tang developed the character’s blood look to behave differently on cloth versus skin, so we would never get a soaked look on skin. Artem Kovalovs came up with an amazing system that reads the blood buffer and starts to spawn dynamic blood drips that conform to the geometry of the character and obey gravity after the character is shot.
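One way to make a drip "conform to the geometry and obey gravity", as described above, is to project gravity onto the surface's tangent plane so the drip slides downhill along the mesh. This sketch is an assumption about the approach, not the actual system Artem Kovalovs built.

```python
# Hedged sketch: slide a blood drip "downhill" along a surface by removing
# the component of gravity that points into the (unit) surface normal.
# Purely illustrative; not the actual character blood-drip system.

def drip_direction(surface_normal, gravity=(0.0, -1.0, 0.0)):
    """Return gravity projected onto the tangent plane of a unit normal,
    i.e. the direction a drip would slide while staying on the surface."""
    d = sum(g * n for g, n in zip(gravity, surface_normal))
    return tuple(g - d * n for g, n in zip(gravity, surface_normal))
```

On a vertical wall (normal perpendicular to gravity) the drip falls straight down along the surface; on a horizontal floor the tangential component vanishes, so the drip pools instead of sliding.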
This brings me to dismemberment. There’s a lot of logic that detects where a character is hit and whether dismemberment should occur. Character, TDs, Programming, and FX worked together on a system that detects the hit location, weapon type, and geometry, and triggers an effect specific to the type of dismemberment. Eli, David Kim, and Sunny Kim set it up so that when dismemberment occurs, the limb on the main body is hidden, a gnarly piece of gore is revealed on the body, and gore pieces fly through the air supplemented with effects. Oddly enough, dismemberment effects were some of my favorites to do. John Sweeney asked if we could have chunks of gore that stick to walls and ceilings, leaving trails of blood behind, and, through the use of many dot products, counters, and collision detection, I worked on a system of gore particles that could detect angle changes of walls and ceilings to stick to them. It was an incredible team to be a part of.
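The "many dot products" used to tell walls from ceilings might reduce to something like the classifier below: dot the collision normal against world-up and bucket by angle. The threshold value and function names are invented for illustration.

```python
# Illustrative surface classifier for sticking gore particles: a single dot
# product of the collision normal against world-up distinguishes floors,
# walls, and ceilings. Threshold is an assumption.

def classify_surface(normal, up=(0.0, 1.0, 0.0), wall_cos=0.3):
    """Classify a collision surface from its unit normal: 'ceiling' when the
    normal points mostly down, 'floor' when mostly up, else 'wall'."""
    d = sum(n * u for n, u in zip(normal, up))
    if d < -wall_cos:
        return "ceiling"
    if d > wall_cos:
        return "floor"
    return "wall"
```

A gore particle could then decide whether to stick (wall/ceiling) and which blood-trail effect to spawn based on the returned class.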
80.lv: Another very impressive thing you’ve done here is the use of clothes. Not only do they all look incredible, but they also behave in a much more realistic fashion, getting crumpled, wet, and moving with the character. What were the biggest challenges with this part of the production, and what are the restrictions in modern tech that limit realistic cloth simulation in games?
Wasim Khan, Lead Technical Director:
As excited as we are to utilize the power of the next-generation console, it's important to remember that no amount of hardware horsepower will be effective without a smart approach to balancing the desired fidelity against studio-wide needs and schedules. For example, joint count is a known limitation, and multi-layer clothing, self-collision, and multi-layer collision are computationally expensive in game engines. We also consider the downstream effect joint count and rig evaluation have on rebuilding characters and animation. For an iteration-focused studio like Naughty Dog, where frequent reviews and ongoing playtests shape production demands, there's a question of balancing simulation fidelity with the needs of animators and designers for turnaround time on changes.
- Gameplay Cloth
Thankfully, The Last of Us Part II gameplay cloth setup lets us use a hidden lower-resolution simulation mesh that drives the visible high-resolution game mesh, blending it with the non-simulated work by gameplay animators. The lower-poly simulation keeps everything performant without sacrificing details, and also lets us edit and iterate more quickly. The integrated in-engine debugging tools are also crucial here.
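The low-res-drives-high-res idea above can be sketched crudely as follows. The binding scheme (one nearest simulation node per render vertex) and the blend weight are assumptions for illustration; a production system would use multi-node skinned bindings.

```python
# Illustrative sketch of gameplay cloth: a coarse simulated mesh drives the
# dense render mesh, and the result is blended with the animator-authored
# pose. The one-node binding and blend weight are assumptions.

def bind_to_nearest(high_verts, low_verts):
    """For each render vertex, record the index of, and offset from, the
    nearest simulation node (a crude one-node binding for illustration)."""
    binding = []
    for hv in high_verts:
        best = min(range(len(low_verts)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(hv, low_verts[i])))
        offset = tuple(a - b for a, b in zip(hv, low_verts[best]))
        binding.append((best, offset))
    return binding

def drive_and_blend(binding, sim_verts, animated_verts, sim_weight=0.8):
    """Move each render vertex with its simulation node, then blend toward
    the non-simulated animated pose authored by gameplay animators."""
    out = []
    for (node, offset), anim in zip(binding, animated_verts):
        driven = tuple(s + o for s, o in zip(sim_verts[node], offset))
        out.append(tuple(sim_weight * d + (1 - sim_weight) * a
                         for d, a in zip(driven, anim)))
    return out
```

Because only the coarse mesh is simulated, solver cost is independent of render-mesh density, which is what keeps iteration fast.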
- Cinematic Cloth (e.g. Abby intro sleeping bag and Ellie removing her shirt in the theater)
Additionally, in cinematics, we often swap this with offline simulation created with Maya's nCloth or with Houdini. While low-res simulation works well for a shirttail as Ellie runs around Seattle, it can't capture the detailed deformations and collisions of Dina helping Ellie pull the shirt off over her head. So again, we balance the needs of the story (a very emotional moment with Dina and Ellie, which would suffer from distractingly unrealistic cloth) against the needs of gameplay (the detail needed for that cinematic would kill frame rates and hurt iteration time in multiple departments).
And in both gameplay and cinematic cloth, a robust material blending system lets us dynamically drive wetness, wrinkle maps, blood spatter, etc., on cloth that moves naturally and keeps players inside the world of The Last of Us Part II.
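At its simplest, a material blending system like this is a per-texel interpolation between material layers driven by dynamic masks. The sketch below is a minimal illustration, not the game's shader: the darkening and glossing factors and the blood color are assumed placeholder values.

```python
import numpy as np

def blend_cloth_material(base_albedo, base_rough, wetness, blood_mask,
                         blood_color=(0.35, 0.02, 0.02)):
    """Per-texel material blend: darken and gloss the cloth where it is wet,
    then layer blood spatter on top. Masks are HxW arrays in [0, 1];
    base_albedo is HxWx3, base_rough is HxW."""
    wet = wetness[..., None]
    # Wet cloth absorbs more light (darker albedo) and is shinier (lower roughness).
    albedo = base_albedo * (1.0 - 0.5 * wet)
    rough = base_rough * (1.0 - 0.7 * wetness)
    # Blood spatter overrides the underlying color where its mask is set.
    m = blood_mask[..., None]
    albedo = albedo * (1.0 - m) + np.asarray(blood_color) * m
    return albedo, rough
```

In a real engine this lerp would run in a pixel shader with the masks updated at runtime (e.g., by rain exposure or combat events), but the math is the same.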
80.lv: Probably, one of the elements that everyone has noticed in the game is the incredible lighting solution with GI, light bouncing from different surfaces, and nice soft capsule shadows. We're wondering if you could talk about the biggest achievements in this direction and how you reached them?
Mark Shoaf, Lead Level Lighting Artist: We definitely encountered many technical as well as artistic challenges in achieving the lighting look for The Last of Us Part II. While certain daytime sunny levels tend to fall into place without too much trouble, one of the bigger challenges in our lighting was dealing with the variety of overcast and even nighttime levels. By default, even in a physically based lighting setup, overcast levels tend to look, well, overcast and pretty flat overall. The challenge for our team was how to keep the feeling of an overcast day, but still maintain a nice play between light and shadow, as well as a good sense of directionality.
In past games, we’ve had the ability to go into our levels and paint in cloud shadows from the sun to better shape and form our levels, highlighting what is important as well as serving aesthetic goals. In The Last of Us Part II, we created similar tech that allowed us to paint ambient shadows into our levels. This proved to be a very defining piece of tech in achieving the look of the game. Enabling us to quickly paint in shadows around trees, against the edges of buildings, alongside the main player path, or even to sculpt wide mountains, painting ambient shadows really helped us achieve a much more sculpted look in our lighting. While this was a great feature for exterior areas, we also had to come up with a method for dealing with interiors. Just allowing the natural sun/sky light to come into interiors, while accurate, gave us too flat an ambient look inside. We would quite often block off the sun/sky for our interiors and replace them with area lights up and outside of our windows/doors. Using smaller light sources like this allowed us to keep an overall soft ambient feel inside while giving more directionality to the light coming into the spaces.
The majority of our lighting in The Last of Us Part II is achieved through baked lighting. Indirect lighting from the skydome and sun is all baked into our levels. While this gives us a good baseline to start with, we also place numerous lights throughout our levels and manually sculpt each area based on art concepts, or until we hit that aesthetic mark. The trick here is always trying to play off of natural light and direction and push it through our levels. It should never look like unnatural lights are placed anywhere, e.g. a light coming from an unseen source.
In addition to our baked lighting, we also use runtime lights for various scenarios. Lights off of lamps, shining through windows, or even illuminating mountains off in the distance. One of our more notable runtime solutions is the player’s flashlight which calculates bounce light in real-time. This adds a lot to dark areas in the game, bouncing colors off of the various materials in the levels. While this is a really nice effect, we also had to be very careful about where to use it as it has a fairly high frame rate cost.
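A common way to approximate this kind of real-time bounce is to spawn a "virtual point light" where the flashlight beam hits a surface, tinted by that surface's color. The sketch below is a guess at the general technique, not Naughty Dog's implementation; `scene.raycast` is a hypothetical hook that returns a hit position and surface albedo, or `None` on a miss.

```python
def flashlight_bounce(origin, direction, scene, light_color):
    """One-bounce approximation: trace the flashlight beam, and where it hits
    a surface, spawn a virtual point light tinted by the surface albedo.
    `scene.raycast(origin, direction)` is a hypothetical hook returning
    (hit_position, albedo) or None."""
    hit = scene.raycast(origin, direction)
    if hit is None:
        return None  # beam hits nothing; no bounce light this frame
    hit_pos, albedo = hit
    # The bounced light inherits the surface color, dimmed by its reflectance --
    # this is what tints dark rooms with the color of whatever the beam hits.
    return {"position": hit_pos,
            "color": tuple(c * a for c, a in zip(light_color, albedo))}
```

Even one extra dynamic light per frame has a cost, which matches the interview's point about using the effect sparingly.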
These aspects of our lighting, combined with tech such as player ambient shadows, screen space reflections, and our new fog system, really brought the game to an amazing end product.
Mari Wang, Lead Cinematic Lighting Artist: All the achievements made in level lighting raised higher expectations for cinematic lighting. Cinematic lighting relies on runtime lights, which do not come with GI or baked indirect lights. So the challenge for the cinematic lighting artists was that we had to imitate the beautiful bounce and indirect lights we see in the environments on the characters to integrate them better with the background.
Being able to add more lights per shot helped us improve the quality of lighting during cinematic moments in The Last of Us Part II. The variety of optimization features that tech artists and programmers came up with during this project allowed us to utilize more of our engine’s capacity for rendering runtime lights and stay within our technical and memory limits.
Upgrades on character shaders also played a big role in achieving the subtle realism of our cinematic cut scenes. The reference photos for the characters were taken in a way that we could match the lighting scenario in the real world to the game engine. Having skin and eyes that react to the lights based on real-world phenomena helps us put more life into the characters. These enhancements allow our cinematic lighting team to start with a great baseline and focus on the artistic qualities of the scene.
There are so many disciplines that came together to develop the lighting solution that the greatest achievement is the collaboration between all the individual pieces to create something extraordinary.
80.lv: How did you figure out the lighting in your spaces? What are the ways you were positioning the lights and what lighting effects helped you really elevate some of the scenes?
Reuben Shah, Environment Artist: We use a lot of film references, our art directors are great at picking out the color palettes and mood based on the story beats. We then work with them on defining the space and figuring out where the lighting would come from… sometimes a hole in the ceiling or broken windows off to the side. As with everything at Naughty Dog, it's a very big collaborative effort, lighting artists are able to block in lights and mood while we are still solving the space. Our post-processing and Look Up Tables for color and value provide a great deal of our final image.
Jeremy Huxley, Environment Artist: I had the pleasure of working with Boon Cotter on some of the coastal environments in Seattle. We had worked together on previous Uncharted games, so it made our collaboration much smoother. In this particular section where Abby is working toward reaching her goal, which is the Aquarium, Boon positioned the moon at an angle to hit the crashed ferry boat in just the right way to light it up and tell the player that in order to progress to the Aquarium you need to pass through this creepy boat first. Beyond the ferry, you can see the Ferris wheel which we had used as a landmark to lead the player through a large portion of her half of the game.
80.lv: Lighting leads us to the post-effects that you already happened to mention. Especially with the recent update, you’ve added so much in the game. We would like to concentrate on the original effects though. How were they used in the game to accentuate the picture, like motion blur, depth of field, and so on?
Vincent Marxen, Lead Programmer: Ah yes, we had a lot of fun making these new render modes for the Grounded update; my favorite is the comic book filter, it looks so cool.
As for the original game, we support a full chain of post-processing effects. This includes bokeh depth of field, motion blur, film grain, chromatic aberration, tone mapping, and color grading, to list just a few. Their primary goal is to support the storytelling and enhance the player's immersion into our world. For instance, depth of field is mainly used during cutscenes as a way to focus the player's attention on a specific part of the screen, usually the character that is speaking. Color grading's purpose is usually to help set the mood and atmosphere of a particular scene. During gameplay, post effects enhance the player's immersion by depicting our world as it would have been seen through the lens of a real camera.
The cameras in our games are virtual of course but have been modeled after real-world cameras with real physical characteristics such as focal length, aperture size, exposure time, lens quality, or film granularity. By modeling these real-world limitations, our post effects contribute to making our image look more realistic and convincing. Without them, computer-generated images tend to look too raw and fake and not very believable or immersive. This is also why it is critical to polish these effects as much as possible and remove any visual artifacts which can distract the players and take them out of the experience. For The Last of Us Part II, we decided to redo our motion blur from scratch as we were not fully satisfied with our previous implementation used in Uncharted 4. We were more than happy with the end result, which greatly improved the visual quality while maintaining the same GPU cost as the old version. This allowed us to run it at the highest quality setting throughout most of the game while maintaining a stable 30fps framerate. As for bokeh depth of field, we started with our Uncharted 4 version, but spent a lot of time polishing and fixing visual artifacts in order to get the cleanest image quality possible. It was a lot of work but well worth the effort.
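The camera parameters mentioned above map directly onto the optics behind bokeh depth of field. As an illustration of the underlying thin-lens model (not Naughty Dog's implementation), the circle-of-confusion diameter for an object at distance `d` follows from the focal length, f-number, and focus distance:

```python
def circle_of_confusion(d, focus_dist, focal_len, f_stop):
    """Thin-lens circle-of-confusion diameter on the sensor (same units as
    focal_len) for an object at distance d with the lens focused at
    focus_dist. Objects exactly at focus_dist render perfectly sharp."""
    aperture = focal_len / f_stop  # aperture diameter from the f-number
    # Standard thin-lens result: blur grows with aperture size and with
    # the object's relative distance from the focus plane.
    return (aperture * focal_len * abs(d - focus_dist)
            / (d * (focus_dist - focal_len)))
```

A larger aperture (smaller f-stop) widens the blur disc, which is why a virtual camera exposing these physical controls lets lighting and cinematic artists pull focus the way a real cinematographer would.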
Building Story-Driven Environments
80.lv: Naughty Dog has always been at the forefront of the art of environment building. What were the main ideas and key things that guided you during this particular title? What were the important things you wanted to convey here?
Reuben Shah: Locations and the passage of time were key elements of this story. We revisited a lot of spaces as one character and later as another. We showed different seasons in similar settings. The journey of the characters and their actions dictated a lot of the spaces we created. I found that knowing facets of the characters' lives helped define and convey the feelings that we were attempting to display.
Jeremy Huxley: I came to Naughty Dog as a huge fan; I had played Uncharted 2, and upon completing it, I knew that I wanted to work on these games. Going into The Last of Us Part II, we knew how passionate everyone was about the first game.
I always get excited finding ways to infuse color and emotion into my work, and I try to mirror whatever experiences and feelings the characters are going through and convey that to the player as much as possible through the environments.
Anthony Vaccaro, Environment Artist: Duality, or the theme of two sides of the same coin, is a strong and constant theme throughout The Last of Us Part II. Where the first game is seen from one story vantage point, The Last of Us Part II compares and contrasts how each opposing side views and handles the same events. This was not just important in terms of story but also the worlds we built. Having the player experience locations through the eyes of different people with very opposing goals and eliciting a different reaction to those same locations was something we worked very hard on throughout the studio.
The Aquarium, for example, is a level set up to really play to this effect. It is the same environment, but the experience you have playing through it as Abby, the memories and connections you form with the space, couldn’t be further from the experience you get playing through it as Ellie. Building out a space that really showed off the wonder of exploration for Abby, the softening of her guard towards Yara and Lev, and her complicated love for Owen, and contrasting that with Ellie’s hell-bent determination for revenge, the death and destruction in her wake, and her ultimate regret over some of her actions was a pleasure to work on. Two sides of the same coin.
80.lv: How did your process evolve compared to the amazing Lost Legacy? We can see some definite connections in terms of structure, but the overall style and number of assets are completely different. What were the things you've learned during the previous game that helped push The Last of Us Part II to the limit?
Reuben Shah: We learned a lot during the R&D phase of The Last of Us Part II while Uncharted: The Lost Legacy was being developed. About eight members of the team stayed behind to work on The Last of Us Part II and offered support in terms of asset creation and workflow, but the Uncharted: The Lost Legacy team was more than ready for the challenge in front of them. Naughty Dog’s environment workflow has been iterated on for many generations, and Uncharted: The Lost Legacy proved that with a strong foundation, the seemingly impossible tasks at hand were made possible.
Jeremy Huxley: Uncharted: The Lost Legacy was an Uncharted dream team; we had almost everyone from the previous Uncharted games, and we were still raring to go on this additional content we developed. One thing very different about Uncharted: The Lost Legacy, as opposed to The Last of Us Part II, is that we had around a year to create it, but we were able to reuse a lot of the massive library we had already accumulated from the previous titles. On The Last of Us Part II, a lot of our environmental props had to be recreated, since the styles were a bit different, so the process took a lot longer but was worth it.
Anthony Vaccaro: Uncharted: Lost Legacy was a project that from the birth of the idea to the release of the game took around 8 to 9 months and we only had about 80% of the studio working on it while The Last of Us Part II started pre-production.
With that smaller team dynamic and shorter production timeline, we learned a great deal and developed a lot of helpful tools for creating more robust asset libraries that were far easier for the entire environment team to access, helping keep a consistently high quality across all assets and levels.
Uncharted is a very globe-trotting franchise, and individual teams often worked on different areas around the world with unique looks. The Last of Us Part II is built around three main locations: Seattle, Jackson, and Santa Barbara.
Much like on Uncharted: The Lost Legacy, with this smaller world scope we were able to have more artists creating assets that could be utilized by the whole team, since we didn’t have seven different places in the world to create assets for. This allowed locations to feel more invested in and varied thanks to the large pool of assets the team could pull from. These details gave places like Seattle that true lived-in feeling that is revealed while exploring and coming across the more intimate lives of the people who once inhabited this world, as opposed to just a quick vacation spot to see the highlights.
Designing Joel's House
80.lv: One of our favorite levels and maybe one of the saddest ones is Joel's empty house. Could you talk a bit about that? What were the goals here, how did you approach it, what was the way you've worked on all those cowboys, horses, and guitars? The question about the environment storytelling is brought up quite often these days, but it'd be incredibly interesting to learn how you managed to evoke emotions with that trip around the space.
Reuben Shah: When creating smaller story-heavy spaces, every piece of art, decoration, and styling must have a reason and make sense for the characters living in the space. Joel’s house was decorated with all his guitars, vinyl music, wood carvings, and a sense of his Texas ethics. For Ellie and Dina’s farm, we needed to show how the pair lived after the events in Seattle occurred and how they were getting their lives back together. After Santa Barbara, Ellie comes back to an almost haunting home. The emptiness of the house matched her own solace at that point of the story. Knowing the characters and where they are in the story timeline is very important in filling out these spaces. Sometimes, things are used to convey a memory from earlier in the story; we can see how after Joel's house (and death), a lot of the wood carvings and Joel’s other belongings show up in Ellie and Dina’s farmhouse, and then later packed into her room.
Jonny Chen: One of the goals of Ellie going through Joel's house was to have these quiet moments of reminiscence to re-establish what Joel's relationship meant to Ellie. It was purposely designed to slow down the pace and let us gather ourselves after what has happened. Most importantly, it takes some time to remember who Joel was as a person. His interest in cowboys, horses, and guitars shows his country roots as a man from Texas. He was also someone who enjoyed mundane hobbies like woodworking, fishing, and collecting music. We also wanted to show Joel's sentimental side by having all the things he's kept from the first game and things from this game related to his relationship with Ellie – like the pamphlet from the museum and the picture Ellie drew of him.
80.lv: The new game featured a number of very interesting locations, including a lot of really large spaces. One is Seattle, another one is that incredible stadium where Abby lived. Can you tell us a bit about how the process was organized when it came to these environments and what this impressive scale meant to your team? What solutions helped you cope with such a huge task?
Reuben Shah: Anytime a large space is being developed, the more we can figure out in the blockmesh, the easier it is to tackle when it comes to applying art and gameplay to the space. Lines of sight and visual blockers help in the ability to draw more on screen. In the case of the stadium, for example, the large crowds in the mess hall would not have been possible if clever blocking wasn’t provided. In other parts of Seattle, it was important to see the Ferris wheel, so blocking that in with sightlines from gameplay and cinematics was key in creating such an expansive space. The environment art team helped out with the layout and design of quite a few spaces in this game, which was valuable in terms of knowing what and how much can be drawn in a scene and what can work with our engine.
Anthony Vaccaro: The original The Last of Us is what we like to describe as “wide linear”. More open than our traditional Uncharted games but still a bit narrow in focus. For The Last of Us Part II, we wanted to expand this and give more choice and freedom to the player not only to tackle combat as they see fit but also let them feel like they are truly exploring a part of the world on their own and are not just forced down a tunnel of art.
The more expansive levels such as Downtown Seattle are actually a culmination of a lot of learning from previous projects in terms of design, art, and technology. Uncharted 4 saw our largest and most explorable level at the time with the open driving section of Madagascar. We took that concept a step further with the Western Ghats in Uncharted: The Lost Legacy, exploring a mini-hub type of level.
The great thing about Uncharted: The Lost Legacy was that it really gave us a shorter project to test out a lot of ideas and see how they fared; a traditional multi-year dev cycle makes this much harder. That level was instrumental to the creation of Downtown Seattle from the design, art, and tech perspectives, as all three had to be worked on by each department in conjunction instead of the more traditional approach of design handing off to art, then optimizing. Artists worked hand in hand with the designers of the level to lay out the space from its early inception, to make sure it was easy to navigate, beautiful to look at, and had enough sightline blockers so we could not only keep up the high quality of art on a technical level but also keep the player guessing as to what could be around the next turn.
80.lv: In terms of asset production and generation of content, how did you reuse your props? There are thousands of objects in the levels but one cannot but be impressed by the level of creativity you've put into reusing various elements and making them play together so well. What is the main principle that guides your team in the usage of your library of props and how do you make them work in so many versatile ways?
Reuben Shah: When working on certain landscapes and environments, the reuse of props and objects is actually a bit easier. Once we figured out the look of Seattle during look dev, we had most of the props, objects, and foliage needed to create the expansive landscape. We had internal tools that helped us manage our library and let us tag things with keywords. We could preview a given object before importing it and placing it throughout the environments. Outsourcing played a large role with our one-offs such as signs, branding, and unique assets.
Approach to Level Design
80.lv: How do other departments work with level designers? How iterative is the process? Are there some tricks that help you butt heads less?
Reuben Shah: On The Last of Us Part II, a few of us were asked to help the design team block out spaces for gameplay and cinematics. Before we shot any cinematic, we blocked out the full areas where the scenes took place, with character touchpoints and interactions. On the gameplay side, it was easier to work a bit lighter at first (in terms of detail) and wait to add extra details until the design was playtested a few times before committing. Naughty Dog has an organic approach to layout – we change and edit quite a bit before we go to the final stages – but communication and being able to talk things out with the design team help the process.
Anthony Vaccaro: In previous games, we have had some artists work extremely closely with design and even handle the design layout of particular spaces for a better artistic flow to the level. This requires a great amount of communication and back-and-forth work with the designer you are teamed up with.
On The Last of Us Part II, we expanded this a bit more, with a larger number of sections of the game having an environment artist take a pass on design layout spaces to get a more natural feeling. We have a very iterative approach at Naughty Dog, and these spaces would be adjusted throughout the project by the designer and environment artist to create locations that felt believable but were still fun to explore. An artist with a good sense of layout design can quickly and effectively convey the goals of our design team, freeing designers up to focus on more difficult areas.
There will always be instances of artists and designers butting heads, but thankfully, those issues are few and far between. Most people at Naughty Dog view everyone as a “designer” because contributions come from everywhere, even outside the department. Though we have the specific title of Designer, ideas are always encouraged and taken seriously, from QA to the studio President.
80.lv: The game has some spaces where you needed to create the feeling of incredible height, but instead of creating traditional vistas and such, you went for external factors, like camera, wind, some character reactions. Could you name some other examples of how you were able to enhance your spaces?
Reuben Shah: Sound plays a key role in any Naughty Dog game. When thinking of spaces like this, it's good to know what would work in context; while walking on metal grating looks cool, it may not sound great when you are out in the open or setting up a gameplay experience of partial stealth. In the guitar store, we went back and forth quite a bit on the space where Ellie plays for Dina and what we would see in the background. Ultimately, we decided to window up the space for the narrative; it didn’t make sense to play a guitar that could possibly be heard from miles away in an open room. Sometimes we have to make decisions based on what works best for the story rather than the visuals.
Jeremy Huxley: While we were working on the International District, we were constantly trying to find little vistas where the player could stop and see their progress through the level; it needed to feel like Abby was making real progress through this hostile environment. We did have a few “reveals”, like the Chinatown gate or the large, imposing sniper building afterwards, but in my opinion they were definitely more subtle than some of the stuff we have done in the past with Uncharted. That has a lot to do with keeping it more grounded and differentiating the two IPs.
Evoking Dark Emotions
80.lv: The game is filled with the theme of violence and revenge. How do these topics manifest themselves in the environment art? In what way do game scenes underline the message and tell us a bit more about the characters and their actions?
Reuben Shah: Revenge at its essence can be cyclical, and we revisit similar or identical spaces as different characters at different times. When working on any space in this game, we had to pretend we were the characters living in this world and make decisions as they would. It helped provide the proper narrative grounding for our world-building.
Jeremy Huxley: I had the privilege of working on an early tutorial and mountaineering level. It is the first part of the game where the player gets to play as Abby, so you follow her partner Owen and traverse the mountainside. Internally, we named this level “Tracking” because it is eventually revealed to the players that they are tracking someone and they are part of the W.L.F. or “Wolves”. So we snuck a very abstract wolf head into some of the rocks to mirror the “Wolves” tracking their prey. In the end, it was maybe too subtle, but that’s usually the better route.
Anthony Vaccaro: Trying to envision what it would be like to live in a world that has been ravaged by a deadly fungus and has pitted humans against each other, bringing out some of the darkest aspects of humanity, can be difficult. Having long discussions with our story and concept teams really helps flesh out the worlds we are building. Just throwing destruction, foliage, and dead people into a world feels more like fluff; it doesn’t get into the minds of the people living in this world and what they must face.
When planning out an environment and the stories we want to tell within those spaces, we like to think of each space individually. We try to put ourselves in the shoes of the people who currently or once lived there. How did they treat others who might be infected? How did they treat the oppressive military force occupying their city and putting them into lockdown? What inhumane things did they do to survive, and possibly even thrive, in this world? Can we sell a story of people turning on each other at the mere fear of someone being infected, sacrificing people who are not, just to save themselves? What can we do to show the worst humanity has to offer, but then still show the light in the darkness? We strive to make our world believable in this sense, so that spaces tell their own story just by walking around and exploring them.
The Beauty of War between Humanity and Nature
80.lv: How do you find beauty in dilapidation and its oneness with nature?
Reuben Shah: Beauty is everywhere. When we have a destroyed building or wall, we look at what elements in the scene can be our key art and work from there. At times, we start with a vignette of a space; whether it’s an overgrown space shuttle or a crashed truck entering a room, we use those aesthetics to convey the beauty in the world. Using foliage to represent growth and an undying world helps with our colors and the theming of areas and style.
Jeremy Huxley: I grew up in the Pacific Northwest, and out of my own curiosity, I took some environmental classes in college. We had a very passionate teacher in that class, and I remember him saying that nature, in general, is in a war for survival. He even referred to forests as a “battle zone”, which we all laughed at, but he was right: old forests suffocate the new growing ones unless the new plants and trees are able to survive, and the death of older trees can also support life and new growth in its own way. As humans, our success at surviving has been the downfall of many other species, so from that perspective, humanity’s fall and reclamation by nature is a beautiful thing.
Anthony Vaccaro: The dichotomy of man vs nature is what lends itself so well to finding beauty in destruction and dilapidation. Humans desire to conquer nature, own the land, and build amazing things, yet, no matter how hard humans try, Mother Nature is relentless, eternal, and will eventually swallow every bit of humanity back beneath the ivy if you are not always there to trim back the weeds.
Seeing once-towering structures of human ingenuity destroyed, abandoned, and reclaimed by the earth has a hypnotic beauty to it that can be appreciated by anyone on this planet. Playing up the visual language of what an everyday person expects from a city street or a coffee shop, and then showing how time and nature would reclaim it, was an important aspect of creating beautiful destruction.
We as artists have to avoid just throwing damage and foliage everywhere; we need to strategically think: “How did this happen? Is there water running through that can weaken a structure? Is there enough sunlight here to promote vegetation growth? Are the surfaces conducive to shifting, letting dirt seep through so plant life can grow, or would this section remain more bare and dusty?” Thinking about these “hows” is just as important as the wow factor, as it leads to a more believable look and feel.
Impression is Key
80.lv: To summarise our talk, what makes The Last of Us Part II different in terms of environment art? What allows it to make such a big impression on the players?
Reuben Shah: Detailed immersion. We take pride in going the extra mile to make spaces feel believable. Having the story beats and color scripts readily at hand helps convey messages that matter to the player, our characters, and our story. The engagement of feeling and narrative in the landscape has players remembering their actions and their effects on the overall story, which in turn creates a lasting impression.
Jeremy Huxley: I think The Last of Us Part II really struck me personally because I am from the Pacific Northwest. Seeing the freeways destroyed and the iconic library in Seattle overgrown with plants, I imagined what my own small town in that area would look and feel like in this narrative. It was both beautiful and sad and I feel like beauty and sadness are huge themes throughout the whole series and in particular this game, so mirroring that in any way we could was our ultimate goal.
Anthony Vaccaro: As Reuben said, detailed immersion is the key factor for me as well. The environment team tries to make every space have a purpose and add to the overall narrative that we are trying to tell in our worlds. We want to sell this not just as a beautiful world but as one that is truly lived in by people with their own very different lives, beliefs, fears, and breaking points. We make each space feel uniquely handcrafted, not only to tell more about the world and its inhabitants but also to strengthen the main narrative by strategically choosing where and when these environmental storytelling moments occur.
Interview conducted by Kirill Tokarev