#3d graphics – @askagamedev on Tumblr

Ask a Game Dev

@askagamedev / askagamedev.tumblr.com

I make games for a living and can answer your questions.
Anonymous asked:

Plenty of 3D games in the past several years have in-game meters/kilometers or other real-world distance measurements. For lack of a better term, are they accurate? Consistent between games? How do you set what a meter even is in a virtual world?

They're accurate... kind of. Units of measure within 3D game worlds are actually arbitrary. We can choose to make one unit be a kilometer, a meter, an inch, or even something more relative, like one character height/width. As long as we remain consistent with our assets and environments in terms of relative size, things will look ok. If we can't remain consistent, things break.

Let's say, for example, that character modeler Neelo builds a character model where one unit is set to one inch. This means that a six-foot-tall person would be 72 units tall, with a maximum width and depth of roughly the same. Neelo's finished character would comfortably fit within a 72 x 72 x 72 unit cube.

Now imagine that the environment artist Desmal misread the memo and creates the fighting arena using one unit set to one foot, rather than one inch. The fighting arena is 100 feet wide, 20 feet deep, and 20 feet tall because that's what the lead designer wants the arenas to be. Thus, Desmal's finished fighting arena would fit within a 100 x 20 x 20 unit block.

What happens if we put Neelo's character into Desmal's arena? Well, Neelo's character is over 3x the height of the arena and almost as wide, so we'd probably see Neelo's feet and lower legs in the arena, and not much else. In order for things like relative size to work, we need to make sure they're built with the same consistent unit size in mind. We can scale Neelo's character down or we can scale Desmal's arena up (or they can meet halfway), but we need to establish some kind of consistent scale or nothing will work.

One benefit to having this consistent unit size is that we can extrapolate out relative sizes. If we establish that our protagonist is 180 cm tall, it means that 1 kilometer (100,000 cm) within the virtual world is the equivalent of (100,000 / 180) = 555.555 protagonist heights. As long as everything works out in terms of relative size, the virtual world feels like in-game sizes map to the real world.

PS. It's actually quite rare for different games to use the exact same scale for their assets! This is one of the major problems with the NiFTy enthusiast talking point about taking an item or skin or whatever from one game and bringing it into another. On the last game I worked on, one unit was one meter. On the current game I'm working on, one unit is one inch. Bringing assets from the other game over to my current game would mean they would be tiny in comparison!
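
To make that concrete, here's a rough sketch (in C++, with made-up numbers purely for illustration) of what rescaling an imported asset looks like in practice: every vertex position gets multiplied by the ratio between the two games' unit conventions.

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Rescale vertex positions from one unit convention to another.
// The meters-per-unit values are illustrative assumptions, not any real game's settings.
void rescaleAsset(std::vector<Vec3>& vertices, float metersPerUnitSrc, float metersPerUnitDst)
{
    const float scale = metersPerUnitSrc / metersPerUnitDst;
    for (Vec3& v : vertices) {
        v.x *= scale;
        v.y *= scale;
        v.z *= scale;
    }
}

int main()
{
    // A 1.8-unit-tall character authored in a game where 1 unit = 1 meter...
    std::vector<Vec3> character = { {0.f, 0.f, 0.f}, {0.f, 1.8f, 0.f} };

    // ...brought into a game where 1 unit = 1 inch (0.0254 meters).
    rescaleAsset(character, 1.0f, 0.0254f);

    std::printf("Height in destination units: %.1f\n", character[1].y); // ~70.9
    return 0;
}
```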

Anonymous asked:

Hi, Dev! I recently got into modding Skyrim for the first time. I started getting higher fidelity visual mods such as 8k textures and such. This might be a dumb question but how is that even possible? Of course being on a powerful PC is a big part. But I thought games and their engines had memory and graphical limitations of their own.

The first set of constraints is during development - the assets that are created. It won't actually matter if the game can potentially display 4K, 8K, 16K, or 128K textures if there aren't 4K, 8K, 16K, or 128K texture assets to load. Somebody actually has to take the time to create those assets for the game to load them at the appropriate times. We typically don't put in code to upscale textures; the higher-resolution assets need to be created, added to the game's data, and loaded at the appropriate time in-game.

The second set of constraints is at run time - the system resources of the machine the game is running on. If the system has little memory or a slow CPU/GPU, there are fewer overall resources to work with compared to a system with a faster CPU/GPU and more memory. The other potential technical bottleneck is the [architecture of the game]. If the game was built as a 32-bit application, it can only address approximately 4 gigabytes of memory. If it was built as a 64-bit application, the theoretical limit jumps to millions of terabytes - far more memory than any real machine has - so in practice the game is only limited by what's actually in the system.

Thus, if the game is built on 64-bit architecture, runs on a beefy machine, has 8K texture assets, and has code that knows when to load those 8K textures, it should be able to display them even if the original game never shipped with such high-res textures to begin with.
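
To give a ballpark sense of why texture resolution and memory budgets are tied together, here's some back-of-the-envelope arithmetic (uncompressed RGBA with no mip chain, so treat these as rough upper bounds rather than what a real game actually allocates):

```cpp
#include <cstdio>

int main()
{
    // Uncompressed RGBA8 = 4 bytes per pixel. Real games use compressed
    // formats and mip chains, so these are ballpark figures only.
    const int resolutions[] = { 1024, 2048, 4096, 8192 };
    for (int r : resolutions) {
        const double megabytes = (double)r * r * 4 / (1024.0 * 1024.0);
        std::printf("%5d x %-5d texture: ~%.0f MB uncompressed\n", r, r, megabytes);
    }
    return 0;
}
```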

Anonymous asked:

What exactly causes t posing?

A T-pose is the default pose that animators and character artists put characters in. This is because it allows the artists to see every part of the character by rotating and zooming the camera, including all the nooks and crannies typically hidden by other things on camera - under the armpits, between the fingers, behind the knees, inside the elbows, and so on. Character artists and texture artists do their work primarily with the T-posed character, and it is usually the reference pose that animators and riggers start from as well. If we use motion-captured animation, we usually start with the mocap actor in a T-pose in order to get a reference position for animators to start from. Because so many things start from the T-pose, it tends to be the default pose in practically all games.

When you tell the character model to play an animation but it can’t find an animation to play for some reason (missing file, wrong file name, corrupted or bad data, etc.), we need it to do something very visible to indicate that something went wrong. This usually means falling back to some super-simple, super-visible placeholder pose - something no one can mistake for a proper animation. The T-pose fits the bill: everybody recognizes it, and nobody will mistake it for intentional animation.

This has been a rather long explanation to say “T-poses are usually caused by missing or bad animation data”.
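
In engine code, that fallback often looks something like this sketch (the names here are hypothetical, not any particular engine's API): if the requested clip can't be found, the character simply gets left in its reference pose.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

struct AnimationClip { /* keyframe data would live here */ };

class Animator {
public:
    void playAnimation(const std::string& clipName)
    {
        auto it = clips_.find(clipName);
        if (it == clips_.end()) {
            // Missing or bad animation data: fall back to the reference (T) pose
            // so the failure is impossible to miss on screen.
            std::cerr << "Missing animation '" << clipName << "', showing T-pose\n";
            applyReferencePose();
            return;
        }
        // ...otherwise sample keyframes from it->second and pose the skeleton...
    }

private:
    void applyReferencePose() { /* reset every bone to its bind/T-pose transform */ }
    std::unordered_map<std::string, AnimationClip> clips_;
};

int main()
{
    Animator animator;
    animator.playAnimation("attack_01"); // no clips loaded -> T-pose fallback
    return 0;
}
```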

Anonymous asked:

What is triangle count? If a Pokemon model from Sword and Shield has a different triangle count but looks exactly the same as the Sun and Moon model, is it the same model?

You’re probably thinking about a 3D mesh. Some people refer to it as a model, but the word “model” often carries other connotations with it. I said I wouldn’t write anything more about Pokemon specifically, but this is an opportunity to segue into a more general look at how 3D graphics work, and I think that’s a worthwhile topic to cover, so let’s take a dive down that rabbit hole. We’ll start with the mesh. A 3D mesh usually looks something like this:

As you can see, this is a polygonal mesh for Bulbasaur. It is made up of a bunch of points in space called vertices that are connected to each other in varying degrees. These connections between vertices are called “edges”. Typically, edges are grouped into threes to form “triangles”, each of which represents a solid surface on the model. The general complexity of a 3D mesh is proportional to the number of triangles it contains. As a point of reference, Spider-Man PS4's various spider-suit meshes had between 80,000 and 150,000 triangles.
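
Under the hood, a mesh is usually stored as a list of vertices plus a list of indices, where every three indices form one triangle. Here's a minimal sketch (heavily simplified - real vertex formats also carry normals, skinning weights, and more):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex {
    float position[3]; // where the point sits in model space
    float uv[2];       // where it lands on the texture map (more on that below)
};

struct Mesh {
    std::vector<Vertex>   vertices; // the points in space
    std::vector<uint32_t> indices;  // every 3 entries = one triangle

    size_t triangleCount() const { return indices.size() / 3; }
};

int main()
{
    // A single quad built from two triangles that share an edge.
    Mesh quad;
    quad.vertices = {
        {{0, 0, 0}, {0, 0}}, {{1, 0, 0}, {1, 0}},
        {{1, 1, 0}, {1, 1}}, {{0, 1, 0}, {0, 1}},
    };
    quad.indices = { 0, 1, 2,   0, 2, 3 };

    std::printf("Triangles: %zu\n", quad.triangleCount()); // 2
    return 0;
}
```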

However, you may have gathered that this mesh isn’t the entirety of the 3D model. It’s just the shape of the thing. You also need to know what it looks like - what parts are colored what, and so on. This means you need to apply a texture map to it. A texture map is a 2D image that has certain established points “mapped” to vertices on the 3D mesh. It ends up looking something like this:

You can see how specific parts of the Texture Map in the upper left get applied to corresponding parts on the 3D Mesh - there’s a section of the Texture Map for the tail, the face, the body, the paws, etc. We can also have multiple texture maps for a single 3D mesh. Here’s another example - each of these guns uses the same 3D mesh but a different texture map.

Are these models the same? Well… they all use the same 3D mesh, but clearly they have different texture maps applied, each of which required somebody to create them. Let’s hold off on judgement for a moment though, because we’re not done yet.

This is a normal map: 

It encodes the surface direction at each point of a surface that isn’t perfectly smooth, which lets lighting react as if there were fine bumps, grooves, and other detail. This lets us make a smaller number of triangles look like it has a lot more detail without needing to break it up into a significantly higher number of triangles. The two examples here are both using the same 3D mesh, but with different normal maps applied. Are these two models the same? Well, they both use the same 3D mesh and texture map, but not the same normal map. But let’s hold off for a moment again because we’re still not done.

There’s also specular maps…

… and ambient occlusion maps…

… and Shaders…

… all of which can be different while keeping things like the 3D mesh and/or texture map exactly the same, especially between one game and another. And this only covers elements of the model itself - it doesn’t go into things like the rig (animation skeleton) or the individual animations, which can also differ.
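
In engine terms, a "model" usually ends up being a bundle of all of these pieces. Here's a rough sketch of that bundling (purely illustrative - the names and the idea of storing file paths are stand-ins for whatever a real engine's asset system does):

```cpp
#include <string>

// Illustrative only: in a real engine these would be GPU resource handles,
// not file paths, and there would be many more parameters.
struct Material {
    std::string textureMap;          // base color
    std::string normalMap;           // fine surface detail for lighting
    std::string specularMap;         // how shiny each part is
    std::string ambientOcclusionMap; // baked contact shadowing
    std::string shader;              // the program that combines them
};

struct Model {
    std::string mesh;     // the 3D mesh (vertices + triangles)
    Material    material; // everything that controls how the mesh looks
    std::string rig;      // animation skeleton (not covered above)
};

int main()
{
    // Hypothetical asset names, just to show the idea of reuse:
    Model gen7 = { "bulbasaur.mesh",
                   { "bulbasaur_diffuse.png", "gen7.nrm", "gen7.spec", "gen7.ao", "gen7.shader" },
                   "bulbasaur.rig" };

    Model gen8 = gen7;                         // same mesh, same texture map...
    gen8.material.normalMap   = "gen8.nrm";    // ...but a new normal map,
    gen8.material.specularMap = "gen8.spec";   // ...a new specular response,
    gen8.material.shader      = "gen8.shader"; // ...and a new shader.
    return 0;
}
```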

There’s clearly a lot of work that goes into the creation of a 3D model. We obviously have to build the mesh but we also have to build things like the texture map, the normal map, the specular map, the ambient occlusion map, and any specific shaders. For a game like Pokemon, in some cases you might keep the old mesh and the texture map, but create brand new normal maps, specular maps, ambient occlusion maps, and shaders. So you end up with something that looks like this:

Clearly the specularity is different - look at how much shinier and more reflective the Let’s Go version is. The shaders are also different - note how much better defined the lines around the model are in gen 8. The mesh itself is probably the same. The texture map might be the same. The normal map is definitely different - you can clearly see how the shadowing differs between the two, especially around the chin and lower torso. So is this a new model or an old one? My answer is “yes” - some parts of it are new and some are not. But is it the exact same model? Clearly not - there has definitely been work done between the two, and claiming otherwise would be foolish.

Anonymous asked:

I'm working on a personal project and I was wondering, how are cameras implemented? Do you move every instance in the game depending on the screen size or is there another way, because the former sounds a bit heavy on calculations.

I’m assuming you are talking about 3D space and not 2D, since 2D is a lot easier to handle. In my experience, a camera in 3D space is usually its own object that has a position and orientation within the world, plus a set of parameters that determine what it can see from there - a field of view, a focal length, a depth of field, near and far clipping distances, etc. Given that position and orientation, we can limit what the camera sees to a known space called a “viewing frustum” - the volume that includes everything the camera can see, from things that are super close up to things at maximum viewing distance. Conceptually, it looks something like this:

Things that are in front of the near clipping plane aren’t rendered (too close to see). Anything beyond the far clipping plane is too far away to see. Anything outside the viewing angle isn’t visible to the camera, so the renderer can safely skip rendering it. Anything that’s even partially within the viewing frustum can be seen, so the renderer should render it. You can use some basic 3D math to figure out which case applies. This is what’s known as “frustum culling”. Here’s an example of frustum culling:

Only the objects in color are rendered in this case; everything else is skipped. So, to answer your question: objects in the world generally stay where they are unless they decide to move. The camera is one of those objects - it can move, and the viewing frustum must then be recalculated for its new position and orientation. The contents of the viewing frustum are determined every frame, so if the camera moves between frames, what it sees also changes and things must be re-culled and re-rendered.
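
For the curious, the frustum test itself is surprisingly small. A common approach is to treat the frustum as six planes and test each object's bounding sphere against them; here's a sketch (extracting the planes from the camera's matrices is left out, and the box "frustum" in main() is just a stand-in):

```cpp
#include <cstdio>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 normal; float d; };        // points inside satisfy dot(n, p) + d >= 0
struct Sphere { Vec3 center; float radius; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// An object gets culled only if its bounding sphere is entirely behind any one plane.
bool isInsideFrustum(const Sphere& s, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i) {
        if (dot(planes[i].normal, s.center) + planes[i].d < -s.radius)
            return false; // fully outside this plane -> the camera can't see it
    }
    return true; // inside or straddling the frustum -> send it to the renderer
}

int main()
{
    // Stand-in "frustum": an axis-aligned box from -10 to +10 on every axis,
    // in place of the six planes you'd really extract from the camera.
    const Plane planes[6] = {
        {{ 1, 0, 0}, 10}, {{-1, 0, 0}, 10},
        {{ 0, 1, 0}, 10}, {{ 0,-1, 0}, 10},
        {{ 0, 0, 1}, 10}, {{ 0, 0,-1}, 10},
    };
    Sphere nearbyObject{{0, 0, -5}, 1}, distantObject{{0, 0, -50}, 1};
    std::printf("nearby visible: %d, distant visible: %d\n",
                isInsideFrustum(nearbyObject, planes), isInsideFrustum(distantObject, planes));
    return 0;
}
```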

That’s generally how cameras work in 3D graphics. I’ve purposely avoided the nitty-gritty math involved, as well as other rendering optimizations like occlusion culling and backface culling, for the sake of brevity and simplicity. This is the sort of stuff you usually learn about in a university-level computer science course on computer graphics. If you’re interested in learning more, you can try googling “Introduction to Computer Graphics” and picking a university course. For a brief overview, [click here] for a decent introduction to computer graphics from Scratchapixel.

Anonymous asked:

Can you explain the math behind IK?

Distilling a few semesters of upper division university math into a single blog post is kind of hard, but I’ll give it my best shot. IK is short for Inverse Kinematics, so we’ll start with that.

Kinematics is the branch of mechanics that describes motion without considering force. It describes what happens when you keep your wrist and fingers rigid and then move your elbow: your forearm, wrist, and fingers all move without any force being applied to them directly, because they inherit the motion from your elbow. Your elbow moves, and all of the parts of your body attached to it move with it.

Inverse Kinematics means starting from the extremities and working our way backward. If our foot gets placed on a solid object (like a stair step), it cannot go any further. This places a constraint on the bones further up in the hierarchy - our ankle can't move freely because our foot can't move, our knee can't move freely because our ankle can't, and our hip can't move freely because our knee can't. Each of these joints has a maximum range of motion, so by enforcing those ranges as we go up the hierarchy, we end up with a legitimate pose for the skeleton where the foot doesn't penetrate the stair step.

The math comes in when we're figuring out the position and orientation for each of our bones. Our foot bone has a fixed transformation because we know it can't move any farther than where it collided with the stair step. The ankle bone exists somewhere relative to the foot bone and has built-in constraints for how far and at what angles it can move (e.g. a normal ankle can't bend more than maybe 150 degrees in any direction), so we can figure out where the ankle should be relative to the foot bone. From there, we continue up the chain to the knee bone, which has a position relative to the ankle bone that inherits the new ankle position and takes the knee's own constraints into account (e.g. knees can't bend backward past straight). The knee bone is connected to the hip bone, the hip bone is connected to the base of the spine, and so on, up until the bone you're trying to move no longer needs to move because everything under it satisfies the hierarchy's constraints.

The IK math, then, uses these constraints and the expected transforms relative to each child bone to calculate the transforms for bones further up the hierarchy, until the displacement caused by the foot not being able to pass through the stair step has been fully absorbed. In principle, we're distributing the spatial difference between where the foot bone wants to be and where it has to be across all of the bones above it in the hierarchy - the ankle, the knee, the hip, and so on.
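
To give a flavor of the math, here's a minimal sketch of the simplest common case: an analytic two-bone solve (think thigh + shin reaching a foot target) in 2D, using the law of cosines to find the joint angles. Real IK solvers handle full 3D, joint limits, and longer chains (often iteratively, e.g. CCD or FABRIK), but the core idea of turning a desired end position into joint angles is the same.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Two-bone IK in 2D: bones of length l1 (hip->knee) and l2 (knee->foot),
// root at the origin, trying to place the foot at (tx, ty).
void solveTwoBoneIK(float l1, float l2, float tx, float ty,
                    float& hipAngle, float& kneeAngle)
{
    float dist = std::sqrt(tx * tx + ty * ty);
    // Clamp to the reachable range so acos() stays valid (leg fully extended/folded).
    dist = std::clamp(dist, std::fabs(l1 - l2) + 1e-6f, l1 + l2 - 1e-6f);

    // Law of cosines gives the interior angles of the triangle (root, knee, target).
    float cosKnee = (l1 * l1 + l2 * l2 - dist * dist) / (2.0f * l1 * l2);
    kneeAngle = std::acos(cosKnee); // interior bend at the knee (180 deg = straight)

    float cosHipOffset = (l1 * l1 + dist * dist - l2 * l2) / (2.0f * l1 * dist);
    hipAngle = std::atan2(ty, tx) - std::acos(cosHipOffset); // aim the thigh toward the target
}

int main()
{
    float hip, knee;
    solveTwoBoneIK(0.45f, 0.45f, 0.3f, -0.6f, hip, knee); // foot target below and ahead of the hip
    std::printf("hip: %.1f deg, knee: %.1f deg\n",
                hip * 180.0f / 3.14159265f, knee * 180.0f / 3.14159265f);
    return 0;
}
```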

Anonymous asked:

Why don't we see AAA 2D games anymore? Some of my guesses: the interest for 2D is not as high/ realistic 3D is more well received, the tipping point for cost efficiency is really low (a small 2D game is cheap, but the more art and animation you have, the better you can reuse 3D assets), there aren't enough 2D artists left for a AAA 2D game since the industry shifted to 3D.

Your guess is partially correct. I’ve written about this before in my animation primer. You can read that [here], [here], and [here].

Basically, when you do 2D sprite animation, you cannot change camera angles, because each frame is a flat drawn image. You also cannot change the character’s appearance without redrawing the contents of each frame. In a small-scope 2D game this isn’t a problem, because the character doesn’t change appearance and the camera doesn’t change angles. But as you add more depth to gameplay systems and storytelling, you start needing exactly that sort of thing - cinematics need camera angles and different perspectives to help tell the story, and equipment and gear need to be shown in order to convey the character’s growth and build a sense of player ownership.

The move to 3D also helps divide the work up better. Instead of having an army of tweeners each drawing the same character in different frames, you can split the work among modelers, texture artists, riggers, and animators, allowing for more specialization. We can reuse the same animation data instead of recreating it for each different outfit in the game, which allows for many more visual variations overall. It makes the work far more efficient and makes it cheaper to add new visual elements to the game - new outfits, new animations, and so on.

One thing a lot of outsiders don’t realize is that many of the amazing 2D artists of yore were actually the pioneers of 3D art. Most of these pioneers wanted to bring the images and visualizations in their heads to life. 2D sprite art was an accepted technical limitation of the time, not the desired end goal. In order to show and tell stories with the visuals they wanted, they built the way 3D art works today. We shifted to 3D because our artists wanted a better way to make art.

Anonymous asked:

Certain hardware developers appear to believe that Ray-tracing is going to be the next big thing. How does non-raytraced video game lighting work, and why is ray-tracing better?

As far as I understand it (and I am not a graphics programmer, so I really don’t understand a lot of this), most games and simulations use the “Radiosity” model to calculate lighting primarily for performance reasons. 

Radiosity operates in a finite number of passes. It first calculates the direct light from each light source onto all of the elements in the scene that the light shines on. After that, the renderer calculates all of the first-order reflections from the stuff the light source shines on. Something dull doesn’t reflect as much light as, say, something made of chrome, so the second pass would illuminate the things near the chrome object more because of its reflectivity. On the third pass, it calculates the second-order reflections from the first-order reflections, and so on and so forth. The more passes we do, the more accurate the lighting looks, but each subsequent pass tends to provide less additional accuracy than its predecessor (i.e. diminishing returns). In order to maintain a certain level of performance, we usually cap the maximum number of passes the renderer will take. This often gets us a “close enough” result.

Ray Tracing generates a more accurate simulation of what things (should) look like than Radiosity, because Ray Tracing follows all of the relevant reflections instead of stopping at a capped number of passes, but it is much more expensive in terms of performance. The way lighting works in physical space is that light comes from a source (e.g. the sun) and bounces around the physical world until it reaches our eyeballs. Some rays of light will never reach our eyeballs - they point away from us. Ray Tracing ignores all of the light that won’t reach our eyeballs (the camera) and instead works in reverse - if our camera can see something, there must be some path from the camera to the object and beyond, bouncing as many times as necessary, back to the light source. Since we know all about the light source, exactly how many bounces there were, and the material properties at each bounce point, we can use that to calculate what that pixel should look like.

The trouble with Ray Tracing is that those reflections and bounces can number in the hundreds or even thousands before reaching the light source, and the renderer must run those calculations for each pixel the camera can see. We’ve had ray tracers for decades now - writing one is a classic introduction to computer graphics for computer science students. A friend of mine wrote a ray tracer while we were in middle school, but it took the computer of the time five minutes to render a simple scene with just one light. The main difference between then and now is that we’ve finally reached the point where graphics hardware can do those calculations in real time. That’s why ray tracing is now a big deal: it’s more accurate at lighting things than the old Radiosity model, and we can finally afford to run it.
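
For anyone who wants to see what the core of the idea looks like, here's a toy sketch: fire one ray per pixel from the camera, find what it hits, and shade it from the light direction. This is deliberately minimal - a single sphere, direct light only, no shadows or recursive bounces - but it's the same basic loop real ray tracers build on.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Distance along the ray to the sphere, or -1 if the ray misses it.
float hitSphere(const Vec3& origin, const Vec3& dir, const Vec3& center, float radius)
{
    Vec3 oc = origin - center;
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    return (disc < 0) ? -1.0f : -b - std::sqrt(disc);
}

int main()
{
    const int W = 60, H = 30;
    Vec3 eye{0, 0, 0}, sphereCenter{0, 0, -3};
    Vec3 lightDir = normalize({-1, 1, 1}); // direction *toward* the light
    const char* shades = ".:-=+*#%@";      // brighter = later in the string

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One ray per character "pixel" through a simple pinhole camera.
            Vec3 dir = normalize({(x - W / 2) / float(H), (H / 2 - y) / float(H), -1.0f});
            float t = hitSphere(eye, dir, sphereCenter, 1.0f);
            if (t > 0) {
                Vec3 p = eye + dir * t;               // where the ray hit
                Vec3 n = normalize(p - sphereCenter); // surface normal at the hit point
                float brightness = std::fmax(0.0f, dot(n, lightDir)); // simple Lambert shading
                std::putchar(shades[(int)(brightness * 8.999f)]);
            } else {
                std::putchar(' ');
            }
        }
        std::putchar('\n');
    }
    return 0;
}
```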

By the way, Disney Animation Studios put out this video in 2016 that explains how Ray Tracing works much better than I can. If you’ve got nine minutes or so and are interested in a great explanation of computer graphics technology, give it a watch.


So what's the deal with pre-rendered backgrounds? They seem to be very uncommon in games today. Are they harder to make than they look? Do they really turn gamers off? I was surprised when even the Resident Evil 2 remake was announced featuring full 3d graphics compared to the pre-rendered original. Thanks for your insight!


Pre-rendered backgrounds can look really pretty, but the big problem with them (especially in a 3D game) is that you can’t change your view because it’s a flat 2-dimensional image. It might look like it’s 3D with some really nice lighting and perspective, but it is still just a very lifelike and detailed flat image.

This image of Morgan Freeman might look like the real thing, even though it is a hand-drawn image. If we hold the camera in one fixed position the entire time we look at it, and Morgan Freeman the human were somehow able to not breathe or move, we might not be able to tell the difference between the image and the real person. However, if you change the viewing angle at all, the difference suddenly becomes clear - it’s a flat 2D image, not an actual three-dimensional person.

That’s essentially what Capcom wants to be able to do in the Resident Evil 2 remake - they want to move the camera to show you the environments from different angles, and that’s not possible with pre-rendered backgrounds unless you pre-render every single camera shot in the game.

Anonymous asked:

You said in your emulation post that you are limited to whatever the original "box" produces. However, many emulators bump up the resolution of games far beyond what the original box was capable of. For example, the official Mario Galaxy emulator for tablets in China has the game running at 1080p while the original Wii game ran at 480p. Is resolution one of the things that can be "spiced up"?

Kind of. Things like resolution can be tweaked for 3D games because the original game data is still unchanged; it’s just the renderer interpreting that data at a higher resolution than before. Here, let me show you a more obvious visual example. This is a screenshot from Parasite Eve 2, for the original PlayStation:

Observe the general quality of the image here. See how jagged the edges of the polygons are? See how the display resolution is pretty low? Now look at a screenshot from Parasite Eve 2 on an emulator:

As you can see, the resolution has significantly improved. The image looks much better in terms of jagged edges on the models and such. However, if you look at the models themselves, the texture quality has not improved at all. They’re still kind of muddy and smudgy, even though the image is being displayed at a higher resolution. You can clearly see the seams in the texture mapping where the polygons on the model meet, and you can see how blurry the textures are. These assets weren’t meant to be viewed at this level of detail.

3D games are constructed from polygons and textures, and those polygons and textures haven’t changed - only the size of the viewport has. You can upscale the resolution, but the textures and number of polygons on the original model won’t improve because there aren’t any artists to craft higher quality models or textures that look better at higher resolution. This isn’t a crime procedural television show; there’s no magic Enhance button that makes textures or models more detailed. There is only the work of artists who craft the models and textures that are designed to be viewed at a certain resolution. We can change the technology, but without people creating new assets there’s only so much we can do with the original game.


For those interested in the engineering behind DOOM (2016)’s rendering pipeline, this blog post has a lot of interesting tidbits. They did some cool stuff with their order of operations and optimizations in order to make DOOM’s visuals so good.

Anonymous asked:

Hi, can you weigh in on this whole "lazy devs" / "developers are so dumb that they don't know how to enable AF for PS4" topic? I'm tired of reading this crap - I'm certain developers are not dumb and there are real performance-based reasons why a game may not launch with AF. There's a thread on NeoGAF where I feel there's tons of misinformation, and of course I'm from Beyond3D and we've got the same garbage there too.

Two things, really. I checked with one of the graphics programmers about this since I don’t really delve into the deeper graphics stuff very often. 

As a quick primer for those who aren’t sure what AF is: it stands for Anisotropic Filtering. Anisotropic Filtering is a graphics technique that makes textures viewed at a glancing angle look better. Normally, when you view a textured surface from an angle, the texture gets compressed and squashed on screen, and basic filtering like MIP mapping (which is conceptually similar to the technology behind LODs - swapping in pre-shrunk versions of the texture at a distance) tends to blur it into a smear. Anisotropic Filtering compensates by taking extra texture samples along the direction the texture is being squashed, based on the viewing angle, so things like floors and roads stretching away from the camera stay sharp instead of turning muddy.

The thing is that Anisotropic Filtering is a fairly expensive process, so neither my graphics programmer associate nor I would be surprised if more than one studio simply disabled it for the sake of early games on the platform. It’s really tough to say whether the studios in question had to do this without actually looking at their performance numbers, but I can guarantee you it has nothing to do with laziness. Typically, it’s that they tried it and it either didn’t look good enough to justify the cost or they couldn’t take the performance hit under those circumstances.

The other thing is that there was a rumor that the Anisotropic Filtering code provided by Sony may simply have been buggy out of the gate. This shouldn’t be that big a surprise either - the developer tools for new consoles have always been works in progress, and some features aren’t complete when the console ships. We developers get our firmware and SDK updates from Sony and Microsoft (and even Nintendo, for those who still develop on those platforms) as they come, just like you do, and early games tend to lack certain features that are added later. Regarding AF in particular, there was a rumor a few months back that Sony rolled out a fix for the problem sometime in March or April, making AF workable. Whether that’s true or not, I don’t know, but it might be. Keep an eye out for AF in new games as they come out, especially this holiday season.


Game Development Glossary: Graphics 101

The other day, a friend of mine was going through the list of game options that adjust various graphics-related settings, but didn't understand what all of them actually meant. Some of them are pretty self-explanatory (texture quality - low, medium, high, ultra, for example), but there are also several terms that many people have a vague idea about without being quite sure of. Today, I'll try to explain what they are and how they work.

Bloom

Bloom is a shader effect that gives bright light a soft, feathered halo around hard edges. It's what makes light appear to bleed out over and around the corners of things, or off of shiny, reflective surfaces.

VSync, or Vertical Sync

VSync forces your graphics card to synchronize its output with your monitor's refresh rate. This guards against screen tearing - when the display shows parts of two different frames at once because the GPU swapped frames mid-refresh. The tradeoff is that VSync caps your frame rate at the monitor's refresh rate (and can drop it further when the GPU can't keep up), which can mean worse overall performance and a bit of added input latency.

Depth of Field

Depth of Field is when the programmers use shaders to simulate focusing at a specific distance, making things nearer or farther than that distance appear blurry instead of everything being uniformly in focus. It's a shader effect that runs on top of what you would see if everything were in perfect focus.

Motion Blur

When the camera moves faster than the eye could keep in focus, the current image gets blended with what was just on screen to convey a sense of speed. This can make lower frame rates feel smoother, though it costs some graphics processing power, since the renderer has to keep data from the past few frames around and use it to compute what you see.

Anti-Aliasing

Anti-aliasing takes what you should see and smooths each pixel with the surrounding pixels. This helps keep edges from looking too jagged or pixelated. Like the other effects here, it soaks up extra graphics processing power.

Texture Filtering

Similar to anti-aliasing, texture filtering is used to make textures appear less pixelated. It does this by blending each sampled texel with its neighbors, so you can turn down the texture quality/size without textured objects looking too blocky when the texture is stretched across the screen. The important thing to note is that texture filtering operates on the texture itself, while anti-aliasing operates on the final image you see on screen.
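
To make the "blending with nearby texels" part concrete, here's a sketch of the simplest version, bilinear filtering: the sample point almost never lands exactly on a texel, so we blend the four closest texels, weighted by how near the sample is to each.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// A tiny grayscale "texture" with a bilinear sampler over it.
struct Texture {
    int width, height;
    std::vector<float> texels; // one brightness value per texel, row-major

    float texel(int x, int y) const { return texels[y * width + x]; }

    // u, v in [0, 1]: blend the 4 surrounding texels by distance.
    float sampleBilinear(float u, float v) const
    {
        float fx = u * (width - 1),  fy = v * (height - 1);
        int   x0 = (int)fx,          y0 = (int)fy;
        int   x1 = std::min(x0 + 1, width - 1), y1 = std::min(y0 + 1, height - 1);
        float tx = fx - x0,          ty = fy - y0;

        float top    = texel(x0, y0) * (1 - tx) + texel(x1, y0) * tx;
        float bottom = texel(x0, y1) * (1 - tx) + texel(x1, y1) * tx;
        return top * (1 - ty) + bottom * ty;
    }
};

int main()
{
    Texture checker{2, 2, {0.0f, 1.0f, 1.0f, 0.0f}};            // 2x2 black/white checker
    std::printf("%.2f\n", checker.sampleBilinear(0.5f, 0.5f));  // 0.50: halfway between texels
    return 0;
}
```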

FOV (Field of View)

Field of View is the width of the camera's viewing angle. The wider the angle, the more you can see overall. Extremely wide angles, however, distort the proportions of things, especially near the edges of the screen.
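
The underlying math is just trigonometry, which is also why very wide FOVs distort things: the visible width grows with the tangent of half the angle, so it blows up quickly past 90 degrees or so. A quick sketch:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Visible width at a given distance: width = 2 * distance * tan(fov / 2).
    const float distance = 10.0f;                  // meters in front of the camera
    const float fovsInDegrees[] = { 60.0f, 90.0f, 120.0f };

    for (float fov : fovsInDegrees) {
        float fovRadians   = fov * 3.14159265f / 180.0f;
        float visibleWidth = 2.0f * distance * std::tan(fovRadians * 0.5f);
        std::printf("FOV %5.1f deg -> about %4.1f m wide at %.0f m away\n",
                    fov, visibleWidth, distance);
    }
    return 0;
}
```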

God Rays

God Rays are concentrated, visible beams of light breaking through cloud cover or around and through objects in specific areas.

Ambient Occlusion

When deciding how to shade a given pixel, the GPU samples the depth/distance of nearby pixels to figure out how much those neighbors should darken it. This makes crevices, corners, and nooks appear darker than their surroundings.

And that's it for this blog in 2014. I will be on break and traveling for the next two weeks or so, so updates will be sporadic at best until the new year begins.

Further Reading:


Game Optimization Tricks (part 3) - LODs

When you look at what's on screen in a video game, you (as a human) naturally use cues like perspective to judge things such as how far away objects are from each other. Things farther away appear smaller on the screen, while things that are closer appear larger. Naturally, this means you can see nearby things clearly and have a harder time making out things that are far, far away. This is the founding principle behind the optimization concept called "Level of Detail", or "LOD".
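
In code, the LOD decision itself is usually tiny - pick which authored version of a mesh to draw based on how far it is from the camera. A minimal sketch (the distance cutoffs here are made up; real engines tune them per asset and often use projected screen size instead):

```cpp
#include <cstdio>

// Three authored versions of the same object, from most to least detailed.
enum class LOD { High, Medium, Low };

LOD selectLOD(float distanceToCamera)
{
    // Illustrative cutoffs only; these get tuned per game and per asset.
    if (distanceToCamera < 20.0f) return LOD::High;   // full-detail mesh
    if (distanceToCamera < 60.0f) return LOD::Medium; // reduced triangle count
    return LOD::Low;                                  // far away: simplest mesh
}

int main()
{
    const float distances[] = { 5.0f, 35.0f, 200.0f };
    for (float d : distances)
        std::printf("distance %6.1f -> LOD %d\n", d, (int)selectLOD(d));
    return 0;
}
```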


Are system resources used for rendering polygons, lighting, and textures ONLY used when the camera is actively rendering them on the screen? Like, if you have a camera facing north and a textured cube behind the camera to the south, does that cube take up any resources that are reserved for rendering graphics?


System resources are expended to render polygons even when they aren’t visible, unless those polygons are explicitly skipped. A polygon behind the camera will still go through all of the calculations as if it were going to be rendered, but it won’t show up on screen because it isn’t within the camera’s view - just like polygons facing away from the camera. The texture can still be loaded, the lighting can still be calculated, the shaders can still be processed, and so on and so forth.

The renderer doesn’t inherently care about whether the camera can see the polygon or not unless you make it care. That’s what optimization techniques like backface culling are for - they help decide whether or not you can see something. If they decide you can’t see it, the engine can choose not to render it… but the engine will try to render everything it can unless explicitly told not to. Something has to decide what is and isn’t visible. Backface culling is one of the criteria we use to make that decision.
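
The actual backface test is tiny, which is part of why it's such a cheap win: compute which way the triangle faces and compare it against the direction from the camera to the triangle. A sketch:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(const Vec3& a, const Vec3& b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// True if the triangle (a, b, c), wound counter-clockwise, faces away from the camera.
bool isBackFacing(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& cameraPos)
{
    Vec3 normal     = cross(sub(b, a), sub(c, a)); // the triangle's facing direction
    Vec3 toTriangle = sub(a, cameraPos);           // from the camera toward the triangle
    return dot(normal, toTriangle) >= 0.0f;        // pointing away -> safe to skip drawing it
}

int main()
{
    Vec3 cam{0, 0, 5};
    // A triangle in the z = 0 plane, wound so its normal points toward the camera.
    Vec3 a{0, 0, 0}, b{1, 0, 0}, c{0, 1, 0};
    std::printf("back-facing as wound:       %d\n", isBackFacing(a, b, c, cam)); // 0
    std::printf("back-facing, winding flipped: %d\n", isBackFacing(a, c, b, cam)); // 1
    return 0;
}
```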


Game Optimization Tricks (part 2): Backface Culling

Today we continue the talk about optimization in video games and the tricks involved in improving performance. What if you had a way to roughly halve the number of polygons you have to draw - no calculating how the light hits them, no drawing textures on them - with a guarantee that your player would never even notice they're gone? Wouldn't that be great? You'd use less memory and have more CPU and GPU cycles for calculating things that actually matter. This is the concept of backface culling.


Roles in the Industry: The Graphics Programmer

Not every programmer works on gameplay. There's a whole lot of them who don't ever even get close to the rules of the game, or how it feels to the player. Some of them spend their time bringing the hardware to heel and harnessing its power. Others solve networking problems, allowing players to play together across thousands of miles. And then there are graphics programmers, the ones who are absolutely dedicated to one task - making things look and perform better. So today, I'll go into a bit more depth on just what it is these people do.
