#graphics programming – @askagamedev on Tumblr

Ask a Game Dev

@askagamedev / askagamedev.tumblr.com

I make games for a living and can answer your questions.
Anonymous asked:

You have talked at length about standard optimizations (LoD, not rendering things, deleting cars on a distant highway to avoid memory overflow, etc). What are the weirdest optimization strategies you have seen?

One of my favorites was detecting animated background characters on screen, skipping 80% of their animation updates, and interpolating between the animation frames during the skipped updates instead. Since they were just auto-looping background characters that players rarely looked at closely, most players didn't notice a difference, but it saved a lot of animation processing overall.
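If you're curious what that looks like in code, here's a minimal sketch of the idea, assuming a simple fixed-size joint array, a five-frame update interval, and invented names throughout - an illustration, not any shipped game's implementation:

```cpp
#include <cmath>

// Each pose is just a bundle of joint angles for this sketch.
struct Pose { float jointAngles[32] = {}; };

// Linear interpolation between two poses; real engines blend quaternions,
// but the idea is the same.
Pose Lerp(const Pose& a, const Pose& b, float t) {
    Pose out;
    for (int i = 0; i < 32; ++i)
        out.jointAngles[i] =
            a.jointAngles[i] + (b.jointAngles[i] - a.jointAngles[i]) * t;
    return out;
}

struct BackgroundCharacter {
    static constexpr int kUpdateInterval = 5; // full update 1 frame in 5

    Pose prevKey, nextKey, currentPose;
    int framesSinceKey = 0;
    float phase = 0.0f;

    // Stub standing in for real keyframe sampling and skinning math.
    Pose EvaluateFullAnimation() {
        Pose p;
        phase += 0.5f;
        for (int i = 0; i < 32; ++i)
            p.jointAngles[i] = std::sin(phase + 0.3f * i);
        return p;
    }

    void Tick() {
        if (++framesSinceKey >= kUpdateInterval) {
            // Expensive path: only runs once per interval.
            prevKey = nextKey;
            nextKey = EvaluateFullAnimation();
            framesSinceKey = 0;
        }
        // Cheap path: blend between the last two evaluated poses.
        float t = float(framesSinceKey) / float(kUpdateInterval);
        currentPose = Lerp(prevKey, nextKey, t);
    }
};
```

The expensive evaluation runs once per interval; every other frame only pays for the blend.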

Got a burning question you want answered?

Anonymous asked:

Regarding your latest answer about optimizing games - can you elaborate on how the optimization actually happens? Of course they want to make their NPCs look as good as they planned, so how do they optimize without reducing quality in general?

Optimization tends to be tailored to the specific tasks at hand. Programmers on large projects often build (or use off-the-shelf) tools called profilers that measure performance in specific areas of the code. Based on the profiler reports, the engineers can see where the majority of processing time is being spent and then look into optimizing the heaviest offenders.
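As an illustration, here's a minimal hand-rolled version of that idea - a scoped timer that reports how long a named block of code took. Real profilers are far more sophisticated (sampling, call hierarchies, GPU timings), but the measurement principle is the same:

```cpp
#include <chrono>
#include <cstdio>

// Times the enclosing scope and prints the result when the scope exits.
struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point start;

    explicit ScopedTimer(const char* l)
        : label(l), start(std::chrono::steady_clock::now()) {}

    ~ScopedTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld us\n", label, static_cast<long long>(us));
    }
};

void UpdateAnimations() { /* imagine expensive per-frame work here */ }

int main() {
    {
        ScopedTimer timer("UpdateAnimations"); // reported on scope exit
        UpdateAnimations();
    }
    return 0;
}
```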

If we're optimizing the visuals specifically, then it's a question of what saves the most processing time. The best optimization is often not to render anything at all. If the hardware doesn't have to render and animate a thing, it saves a lot of processing time. We do this by not rendering models that the player can't see (e.g. stuff off-camera, stuff behind walls, stuff too far off in the distance, etc.), and also by not rendering the parts of the model that the player can't see (e.g. if you're looking at the front of a building, you can't see the back of the building at the same time without a mirror or something behind it).

That's only one example of optimizations we make. We load cheaper models and/or textures for things that are far away (LODs), we can reduce the frequency of animation updates for characters in the background or on the peripheral edges of the camera that players aren't focusing on, and so on. There are as many ways to save processing time as there are ways to spend it. If you have the time and interest, I suggest looking at the #game optimization tag on this blog, since I've written at length about various means of optimization in past posts.
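For a concrete feel of the LOD side of this, here's a toy sketch of distance-based LOD selection; the thresholds and mesh names are invented for illustration:

```cpp
#include <cstddef>

// Picks progressively cheaper meshes as the object gets further away.
struct LodSet {
    static constexpr std::size_t kLevels = 3;
    const char* meshes[kLevels] = {"npc_high", "npc_medium", "npc_low"};
    float maxDistance[kLevels] = {15.0f, 60.0f, 250.0f};

    const char* Select(float distanceToCamera) const {
        for (std::size_t i = 0; i < kLevels; ++i)
            if (distanceToCamera <= maxDistance[i])
                return meshes[i];
        return nullptr; // beyond the last threshold: don't render at all
    }
};
```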

Got a burning question you want answered?

Anonymous asked:

Hi, Dev! I recently got into modding Skyrim for the first time. I started getting higher-fidelity visual mods, such as 8K textures. This might be a dumb question, but how is that even possible? Of course being on a powerful PC is a big part of it. But I thought games and their engines had memory and graphical limitations of their own.

The first set of constraints comes during development - the assets that get created. It won't actually matter whether the game can potentially display 4K, 8K, 16K, or 128K textures if there aren't 4K, 8K, 16K, or 128K texture assets to load. Somebody has to take the time to create those assets for the game to load them at the appropriate times. We typically don't put in code to upscale textures; the higher-resolution versions need to be added as assets and loaded at the appropriate time in-game.

The second set of constraints comes at run time - the system resources of the machine the game is on. If the system has little memory or a slow CPU/GPU, there are fewer overall system resources to work with compared to a system with a faster CPU/GPU and more system memory. The other potential technical bottleneck is the [architecture of the game]. If the game was built as a 32-bit application, it can only address approximately 4 gigabytes of memory. If it was built as a 64-bit application, it can theoretically address about 16 exabytes (16 million terabytes) - vastly more memory than any real machine has, so the address space effectively stops being the limit.
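The arithmetic behind those two limits, for anyone who wants to check it (2^32 and 2^64 are the theoretical address-space sizes; operating systems expose less than the 64-bit maximum in practice):

```cpp
#include <cstdio>

int main() {
    // A 32-bit pointer can address 2^32 bytes.
    const double bytes32 = 4294967296.0;            // 2^32
    // A 64-bit pointer can in theory address 2^64 bytes.
    const double bytes64 = 18446744073709551616.0;  // 2^64

    const double gib = 1024.0 * 1024.0 * 1024.0;    // 2^30 bytes = 1 GiB
    std::printf("32-bit: %.0f GiB\n", bytes32 / gib);         // 4 GiB
    // gib * gib = 2^60 bytes = 1 EiB (exbibyte).
    std::printf("64-bit: %.0f EiB\n", bytes64 / (gib * gib)); // 16 EiB
    return 0;
}
```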

Thus, if the game is built on 64-bit architecture, runs on a beefy machine, has 8K texture packs, and has code that knows when to load those 8K textures, it should be able to display them even if the original game never shipped with such high-res textures to begin with.

The FANTa Project is being rebooted. [What is the FANTa project?]

Got a burning question you want answered?

Anonymous asked:

Just what is a shader anyway?

In short - a shader is a post-processing effect.

A shader is a small program that runs at certain points during the rendering pipeline and makes changes according to its program. You can think of a shader as a processing pass - it takes the existing information from the game and processes it to produce some kind of different result, often a visual effect. For example, it can take all shades of green on a character model and apply some kind of filter to them, distort them, blur them, modify transparency, brighten them, project some other source onto them like a green screen, or any number of other effects. Most shaders have a format like “every so often, for each thing that matches some (or all) of these specific conditions, do this to the thing”. This can be very powerful for adding specific touches to specific elements - making metal more dully reflective, making glass transparent but also refractive, or even making water volumes react to objects placed into them. Since they can run every frame, you can have real time post-processing effects like these:

Breath of the Wild’s cel shading (and what it looks like without the shader)

Unity’s toon water shader:

Making fur:

They can also be applied to specific objects like how this VFX has shaders to cause distortion on particles emitted from the model here:
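To make the “for each thing that matches these conditions, do this” format above concrete, here's a CPU-side sketch of a trivial per-pixel pass. Real shaders are compiled for and run massively in parallel on the GPU (in languages like HLSL or GLSL), but the logical shape is the same:

```cpp
#include <cstdint>
#include <vector>

struct Pixel { std::uint8_t r, g, b, a; };

// For every pixel matching a condition (predominantly green), apply an
// effect (shift some of that green into the blue channel).
void TintGreensTowardBlue(std::vector<Pixel>& frame) {
    for (Pixel& p : frame) {
        if (p.g > p.r && p.g > p.b) {
            std::uint8_t shift = p.g / 4;
            p.g -= shift;
            p.b = static_cast<std::uint8_t>(
                p.b + shift > 255 ? 255 : p.b + shift);
        }
    }
}
```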

The big takeaway is whenever you see somebody say “shader”, think “post processing effect”.

The FANTa Project is being rebooted. [What is the FANTa project?]

Got a burning question you want answered?

Anonymous asked:

Dynamic resolution scaling has become a common feature/solution in games today. How does the engine determine what resolution to scale to? How would it know before the scene/frame is rendered how to draw out fewer pixels?

Usually this is done with preset resolution steps for which we have rough estimates of the frame rate improvement, based on the relative number of pixels the engine needs to process each frame. We aren't going to scale a 2560x1440 resolution down to 2524x1420; we're going to go with a natural size step - probably 1920x1080, 1707x960, or even 1280x720, depending on what the performance target is. Since we can pre-calculate roughly how much of a performance gain we get from each resolution through benchmarks before the game launches, we can program the game to switch to the appropriate resolution depending on how much performance gain we need.

The decision to switch resolution comes from the game monitoring its own frame time. Game code runs on a loop - it needs to go through the same process each frame in order to update and draw things on screen. The program also has access to the real-time clock - what time it currently is. Thus, it can track how much real time has passed since the last time the frame was rendered and use that to calculate its current frame rate. A frame rate of 30 frames per second means the game must render each frame roughly every ~33 milliseconds of real time (1000 milliseconds / 30 frames = ~33 milliseconds). A 60 FPS frame rate means each frame must render every ~16-17 milliseconds of real time. Thus, if the engine detects that the average frame time for the past five frames has grown to, say, 18-20 milliseconds per frame, it knows that it has dropped below the necessary frame rate. When it realizes this, it can change the resolution and try to get that frame time back under 16 milliseconds. Conversely, if it detects that the frame time is significantly lower than 16 milliseconds and the game is running at below maximum resolution, it can increase the resolution back to the native maximum.
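Here's a sketch of that feedback loop, with invented resolution steps and thresholds; a real engine would average over more frames and tune the hysteresis carefully:

```cpp
// Drops to a cheaper preset resolution when frame time runs over budget,
// and climbs back up when there is comfortable headroom.
struct Resolution { int width, height; };

class DynamicResolution {
public:
    Resolution Update(double avgFrameMs) {
        const double kBudgetMs = 1000.0 / 60.0;  // ~16.7 ms for 60 FPS
        if (avgFrameMs > kBudgetMs * 1.1 && current_ < kStepCount - 1)
            ++current_;                          // over budget: step down
        else if (avgFrameMs < kBudgetMs * 0.8 && current_ > 0)
            --current_;                          // headroom: step back up
        return kSteps[current_];
    }

private:
    static constexpr int kStepCount = 3;
    static constexpr Resolution kSteps[kStepCount] = {
        {2560, 1440}, {1920, 1080}, {1280, 720}};
    int current_ = 0;
};
```

The asymmetric thresholds (110% to drop, 80% to climb) are there so the resolution doesn't flicker back and forth when the frame time hovers near the budget.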

The FANTa Project is being rebooted. [What is the FANTa project?]

Got a burning question you want answered?


Similar to the recent texturing posts, how does subsurface scattering work, and how is it implemented in games?


Subsurface scattering is what happens when light isn’t completely reflected off of a surface, but instead penetrates into the surface by some amount, bounces off of material beneath the surface, and is partially reflected and refracted back out through that depth. This is how it works:

Normally, when you look at an opaque object, what you see is determined by the light reflecting directly off of its surface. That reflection can be shiny or matte, but you see only what is on the surface of the thing. There are also objects that are (semi-)transparent - they may refract or bend light as it passes through them, and filter its color as well. Regardless, the light bounces off of something and enters your eye, allowing you to see it in a process like this:

However, sometimes the surface isn’t completely opaque but it also isn’t transparent. Light will actually penetrate some surfaces to some depth before it reflects back out. This is what the “sub” in sub-surface scattering refers to - it goes under the surface before it reflects back out. The surface isn’t transparent on the way out, so the light coming back out of that area beneath the surface gets scattered based on qualities of the material it’s traveling through. As a result, you get something like this:

The depth to which the light can penetrate beneath the surface, and the effects that material has on the light’s color, intensity, direction, etc., are defined and controlled by the data within the subsurface scattering map. A subsurface scattering map is essentially another kind of texture map - it defines the depth that light can penetrate each particular part of the model, and what kind of effects that penetration and reflection will have on the light that comes back out. This allows technical artists to tweak the individual visual behaviors of different materials like skin, wood, blood, ooze, fur, and fuzz, as well as where on the textured model those materials appear.

The result is that we can now define exact areas where light can penetrate into the model, others where it cannot, and our artists can tweak those values to say how far the light penetrates and what the material does to the light that gets reflected from beneath the surface. This is what lets tech artists do things like make certain parts of the dragon (tail, horns, ridges, etc.) react to light shined on it differently than other parts.
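For a flavor of how cheap approximations of this look in code, here's an illustrative "wrap lighting" style term, where a per-texel thickness value (the kind of data a subsurface map could supply) controls how much light bleeds through. This is a common family of approximations, not any specific engine's implementation:

```cpp
#include <algorithm>

// nDotL: dot product of surface normal and light direction (-1..1).
// thickness: 0 = paper thin, 1 = fully opaque (sampled from an SSS map).
// wrap: how far light is allowed to bleed past the usual terminator.
float SubsurfaceDiffuse(float nDotL, float thickness, float wrap = 0.5f) {
    // Ordinary diffuse would clamp nDotL at zero; wrapping shifts the
    // falloff so light appears to leak around and through the surface.
    float wrapped = std::max(0.0f, (nDotL + wrap) / (1.0f + wrap));
    float translucency = 1.0f - thickness;  // thin areas scatter more
    return wrapped * (0.5f + 0.5f * translucency);
}
```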

The FANTa Project is being rebooted. [What is the FANTa project?]

Got a burning question you want answered?

Anonymous asked:

I've been watching Ars Technica's War Stories series where developers talk about technical hurdles they had to overcome - specifically Prince of Persia for the Apple II and Crash Bandicoot for the PlayStation 1. Both revolved around working with the console's limited resources, memory in particular. What would the analogous hurdles be that modern game developers struggle with to squeeze the most performance out of hardware, now that memory and storage are more abundant than they used to be?

Generally, the eternal struggle is always about maximizing visual fidelity within the limitations of the technology. That hasn’t really changed; it just looks a little different each time. This gaming technology generation’s struggle is probably two major hurdles to overcome:

  1. Making everything look as pretty as possible while still running at 60 frames per second without dropping anywhere
  2. Removing all loading screens and obvious loading zones

With today’s cutting-edge technology, the first challenge basically means trying to juggle ray tracing, physically-based rendering (PBR), high dynamic range lighting (HDR), running shaders, and so on at 4K resolution while still retaining a 60 fps floor. These are all things that could have been done before, but it’s a question of the total number of calculations that must be done each frame. Increasing the display resolution grows the number of calculations per frame much faster than linearly - going from 1080p to 4K, for example, quadruples the pixel count. [Click here] for a post I wrote explaining the kind of growth I’m talking about.

The second problem is us trying to avoid unintentional breaks to the flow state. We want players to be in the zone, fully immersed in the game. That state gets broken by the limits of technology, such as having to wait a few seconds for a loading screen, being forced to crawl slowly through a crawl space, or seeing blurry assets appear that get sharper as time passes. This has always been because we have a finite amount of system memory to store data in; the rest has to sit on the disk somewhere and be loaded on demand. We can’t load that disk-bound data any faster than the hardware allows. In the past, we’ve adopted sneakier approaches like predictively loading data from disk, but that doesn’t always work if players decide to turn around and go back to an area that was just unloaded. New hardware like SSDs will certainly allow us to transfer data from disk at a faster rate than before, which should help with loading screens and such. Still, it’s a very real problem that we’ve been experimenting with for ages.
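A sketch of that predictive-loading idea, using a background task so the frame loop never blocks on the disk; all names here are invented for illustration:

```cpp
#include <chrono>
#include <future>
#include <string>
#include <vector>

// Stub standing in for an actual file read off disk.
std::vector<char> LoadFromDisk(const std::string& assetName) {
    return std::vector<char>(1024, 0);
}

struct AreaStreamer {
    std::future<std::vector<char>> pending;

    // Called when gameplay predicts the player is heading to the next area.
    void Prefetch(const std::string& nextArea) {
        pending = std::async(std::launch::async, LoadFromDisk, nextArea);
    }

    // Polled by the frame loop; never blocks.
    bool Ready() const {
        return pending.valid() &&
               pending.wait_for(std::chrono::seconds(0)) ==
                   std::future_status::ready;
    }
};
```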

Overall there are still very real limitations on the hardware that we have to work around with clever solutions. It’s always been a question of how to fit all of the relevant data we need within the available system resources. It was back in the early days and it’s still the same situation today. Our system resource capacity has increased, but we’ve also greatly increased the sheer amount of data we need to fit into those system resources. It’s an eternal struggle that has worn many faces and names over the decades, but the underlying issue is never truly solved. It is, at best, acceptably solved for now.

The FANTa Project is being rebooted. [What is the FANTa project?]

Got a burning question you want answered?

Anonymous asked:

What is triangle count? If a Pokemon model from Sword and Shield has a different triangle count but looks exactly the same as the Sun and Moon model, is it the same model?

You’re probably thinking about a 3D mesh. Some people refer to it as a model, but the word “model” often carries other connotations with it. I said I wouldn’t write anything more about Pokemon specifically, but this is an opportunity to segue into a more general look at how 3D graphics work, and I think that’s a worthwhile topic to cover, so let’s take a dive down that rabbit hole. We’ll start with the mesh. A 3D mesh usually looks something like this:

As you can see, this is a polygonal mesh for Bulbasaur. It is made up of a bunch of points in space called vertices that are connected to each other in varying degrees. These connections between vertices are called “edges”. Edges are typically grouped into threes to form “triangles”, each of which represents a solid surface on the model. The general complexity of the 3D mesh is proportional to the number of triangles it is made of. As a point of reference, Spider-Man PS4′s various spider-suit meshes had between 80,000 and 150,000 triangles.

However, you may have gathered that this mesh isn’t the entirety of the 3D model. It’s just the shape of the thing. You also need to know what it looks like - what parts are colored what, and so on. This means you need to add a texture map to it in order to see what each part looks like. A texture map is a 2D image that has certain established points “mapped” to vertices on the 3D mesh. It ends up looking something like this:

You can see how specific parts of the Texture Map in the upper left get applied to corresponding parts on the 3D Mesh - there’s a section of the Texture Map for the tail, the face, the body, the paws, etc. We can also have multiple texture maps for a single 3D mesh. Here’s another example - each of these guns uses the same 3D mesh but a different texture map.
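In data terms, the pieces described so far boil down to something like this; the field names are generic, not any particular engine's format:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vertex {
    float x, y, z;  // position of this point in 3D space
    float u, v;     // where this vertex lands on the 2D texture map
};

struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<std::uint32_t> indices;  // every 3 indices form one triangle

    std::size_t TriangleCount() const { return indices.size() / 3; }
};
```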

Are these models the same? Well… they all use the same 3D mesh, but clearly they have different texture maps applied, each of which required somebody to create them. Let’s hold off on judgement for a moment though, because we’re not done yet.

This is a normal map: 

It describes the heights and shape of a surface that isn’t smooth. It lets us make a smaller number of triangles look like it has a lot more detail without needing to break it up into a significantly higher number of triangles. The two examples here are both using the same 3D mesh, but with different normal maps applied. Are these two models the same? Well, they both use the same 3D mesh and texture map, but not the same normal map. But let’s hold off for a moment again, because we’re still not done.

There are also specular maps…

… and ambient occlusion maps…

… and Shaders…

… all of which can be different while keeping things like the 3D mesh and/or texture map exactly the same, especially between one game and another. This also only covers elements of the model, and doesn’t go into things like the rig (animation skeleton) or the individual animations, which can also be different.

There’s clearly a lot of work that goes into the creation of a 3D model. We obviously have to build the mesh but we also have to build things like the texture map, the normal map, the specular map, the ambient occlusion map, and any specific shaders. For a game like Pokemon, in some cases you might keep the old mesh and the texture map, but create brand new normal maps, specular maps, ambient occlusion maps, and shaders. So you end up with something that looks like this:

Clearly the specularity is different - look at how much shinier and more reflective the Let’s Go version is. The shaders are also different - note how much better defined the lines are around the model in gen 8. The mesh itself is probably the same. The texture map might be the same. The normal map is definitely different - you can clearly see how the shadowing differs between the two, especially around the chin and lower torso. So is this a new model or an old one? My answer is “yes” - some parts of it are new and some are not. But is it the exact same model? They are clearly not the exact same model - there has definitely been work done between the two. Claiming otherwise would be foolish.

The FANTa Project is currently on hiatus while I am too busy crunching at work.

Got a burning question you want answered?

Anonymous asked:

I'm working on a personal project and I was wondering, how are cameras implemented? Do you move every instance in the game depending on the screen size, or is there another way? The former sounds a bit heavy on calculations.

I’m assuming you are talking about 3D space and not 2D, since 2D is a lot easier to handle. In my experience, a camera in 3D space is usually its own 3D object that has a position and orientation within the world. It has a set of variables used to figure out what it should be able to see from that position - a field of view, a focal length, a depth of field, and so on. Given that position and orientation in 3D space, we can limit what the camera sees to a known space called a “viewing frustum” - the space that includes everything the camera sees, from things that are super close up to things that are at maximum viewing distance. Conceptually, it looks something like this:

Things that are in front of the near clipping plane aren’t rendered (too close to see). Anything beyond the far clipping plane is too far away to see. Anything outside the viewing angle isn’t viewable by the camera, so the renderer can safely skip rendering it. Anything that’s (partially) within the viewing frustum can be seen, so the renderer should render it. You can use some basic 3D math to figure out whether the camera can see a given object. This is what’s known as “frustum culling”. Here’s an example of frustum culling:

Only the objects in color are rendered in this case; everything else is skipped. So… to answer your question, objects in the world generally stay where they are unless they decide to move. The camera is one of those objects - it can move, and the view frustum must then be recalculated for its new position and orientation. The contents of the viewing frustum are calculated every frame, so if the camera moves between frames, what the camera sees also changes between frames and things must be recalculated and re-rendered.
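The core test behind frustum culling can be sketched with bounding spheres; this assumes the six frustum planes have normalized, inward-facing normals:

```cpp
#include <cstddef>

struct Plane  { float nx, ny, nz, d; };    // inside when n.p + d >= 0
struct Sphere { float x, y, z, radius; };  // object's bounding sphere

bool SphereVisible(const Sphere& s, const Plane (&frustum)[6]) {
    for (std::size_t i = 0; i < 6; ++i) {
        const Plane& p = frustum[i];
        float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.radius)
            return false;  // fully behind one plane: safe to cull
    }
    return true;           // inside or straddling: send to the renderer
}
```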

That’s generally how cameras work in 3D graphics. I’ve purposely avoided the nitty gritty math involved with this, as well as other rendering optimizations like occlusion culling and back face culling for the sake of brevity and simplicity. This is the sort of stuff you usually learn about in a University-level computer science course on computer graphics. If you’re interested in learning more, you can try googling “Introduction to Computer Graphics” and picking a university course. For a brief overview, [click here] for a decent introduction to computer graphics from Scratchapixel.

The FANTa Project is currently on hiatus while I am too busy crunching at work.

Got a burning question you want answered?

Anonymous asked:

Certain hardware developers appear to believe that Ray-tracing is going to be the next big thing. How does non-raytraced video game lighting work, and why is ray-tracing better?

As far as I understand it (and I am not a graphics programmer, so I really don’t understand a lot of this), most games and simulations use the “Radiosity” model to calculate lighting primarily for performance reasons. 

Radiosity operates in a finite number of passes. It first calculates the direct light from each light source onto all of the elements in the scene that the light shines on. After it’s finished calculating that, the renderer calculates all of the first-order reflections from the stuff the light source shines on. Something dull doesn’t reflect as much light as, say, something made of chrome, so the second pass would illuminate the things near the chrome object more due to its reflectivity. On the third pass, it calculates any second-order reflections from the first-order reflections, and so on and so forth. The more passes we do, the more accurate the lighting looks, but the later passes tend to provide less additional accuracy than their predecessors (i.e. diminishing returns). In order to maintain a certain level of performance, we usually cap the maximum number of passes the renderer will take. This often gets us a “close enough” result.
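Here's a toy sketch of that capped multi-pass gathering. Real radiosity computes form factors between patches; this version spreads light uniformly just to keep the illustration short:

```cpp
#include <cstddef>
#include <vector>

struct Patch {
    float emitted;       // light this patch emits (light sources > 0)
    float reflectivity;  // how much incoming light it re-radiates
    float gathered = 0;  // total light arriving over all passes
};

void RadiosityPasses(std::vector<Patch>& patches, int maxPasses) {
    std::vector<float> radiating(patches.size());
    for (std::size_t i = 0; i < patches.size(); ++i)
        radiating[i] = patches[i].emitted;  // pass 0: direct light only

    for (int pass = 0; pass < maxPasses; ++pass) {  // capped for speed
        std::vector<float> next(patches.size(), 0.0f);
        for (std::size_t i = 0; i < patches.size(); ++i) {
            for (std::size_t j = 0; j < patches.size(); ++j) {
                if (i == j) continue;
                // Uniform transfer stands in for a real form factor.
                float transfer = radiating[j] / float(patches.size());
                patches[i].gathered += transfer;
                next[i] += transfer * patches[i].reflectivity;
            }
        }
        radiating = next;  // reflected light feeds the next pass
    }
}
```

Because reflectivity is below 1, each pass carries less light than the one before it, which is exactly the diminishing-returns behavior described above.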

Ray Tracing generates a more accurate simulation of what things (should) look like than Radiosity, because Ray Tracing follows all of the reflections instead of capping out at a maximum number of passes, but it is much more expensive in terms of performance. The way lighting works in physical space is that light comes from a source (e.g. the sun) and bounces around the physical world until it reaches our eyeballs. Some rays of light will never reach our eyeballs - they point away from us. Ray Tracing ignores all of the light that won’t reach our eyeballs (the camera) and instead works in reverse - if our camera can see something, there must be some path from the camera to the object and beyond, bouncing as many times as necessary, back to the light source. Since we know all about the light source, exactly how many bounces there were, and the material properties of each bounce point, we can use that to calculate what that pixel should look like.
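Reduced to a skeleton, that camera-outward loop looks something like this, with the geometry and shading stubbed out (names invented; a real tracer computes actual intersections and reflection directions):

```cpp
#include <optional>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 point, normal; float reflectivity; };

// Stub: a real tracer intersects the ray against the whole scene here.
std::optional<Hit> FindNearestHit(const Ray&) { return std::nullopt; }

// Stub: direct light reaching this point from the light sources.
float DirectLight(const Hit&) { return 1.0f; }

// Follow one ray out from the camera, bouncing a capped number of times
// and accumulating the light picked up along the way.
float Trace(const Ray& ray, int bouncesLeft) {
    if (bouncesLeft <= 0) return 0.0f;
    std::optional<Hit> hit = FindNearestHit(ray);
    if (!hit) return 0.0f;                 // ray escaped the scene

    float light = DirectLight(*hit);
    Ray bounced{hit->point, hit->normal};  // reflection math omitted
    light += hit->reflectivity * Trace(bounced, bouncesLeft - 1);
    return light;
}
```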

The trouble with Ray Tracing is that those reflections and bounces could number in the hundreds or even thousands before reaching the light source, and the renderer must run those calculations for every pixel its camera can see. We’ve actually had ray tracers for decades now - writing a ray tracer is a classic introduction to computer graphics for computer science students. A friend of mine wrote one while we were in middle school, but it took the computer of the time five minutes to render out a simple scene with just one light. The main difference between then and now is that we’ve finally reached the point in graphics hardware where we can actually do those calculations in real time. That’s why ray tracing is now a big deal: it’s more accurate at lighting things than the old Radiosity model, and we can finally afford to run it.

By the way, Disney Animation Studios put out this video in 2016 that explains how Ray Tracing works much better than I can. If you’ve got nine minutes or so and are interested in a great explanation of computer graphics technology, give it a watch.

The FANTa Project is currently on hiatus while I am too busy crunching at work.

Got a burning question you want answered?

Anonymous asked:

Hello. Sorry if you've been asked this question before, but why do developers still insist on using DirectX as the renderer on PC versions of their games, even though Vulkan has proven to be a more viable and better-performing solution?

There are a lot of reasons. Vulkan doesn’t support older versions of Windows like XP, when there are an awful lot of players with min-spec or low-spec PCs who still want to play. Microsoft has a vested interest in promoting DirectX, and a lot of publishers that want to publish on XBone will want to keep a healthy relationship with Microsoft. However, the primary reason is that it just takes a while to adopt a new standard, especially when a huge amount of technology was already written on top of the old one. Vulkan might be the hot new thing, but it was only released a year ago. It’s going to take a little time before big-budget games start showing up using Vulkan en masse.

Almost all current and in-development PC titles were built on a DirectX foundation, and their developers have been building on it for years and years. DirectX is tried and true; even if it isn’t the best-performing solution now, engineers have spent years getting to know its strengths and weaknesses very well. That means that the code they wrote and the systems they created were built around assumptions about the platform they’d be using. Most publishers build a common set of core technologies that gets used by each of their studios, and these publishers usually aren’t willing to risk tens of millions of dollars on barely-tested new technology. Just because Vulkan performs better doesn’t mean the switching process is trivial. All that technology the studios have been using still needs to be rewritten to take the benefits, drawbacks, and idiosyncrasies of Vulkan (instead of DirectX) into account. Even though Vulkan is stable and has been out for a year, the technology built on top of Vulkan still needs its own development time, testing, and deployment to the studios that will use it. And then those studios will need to decide whether to take the time to integrate the Vulkan changes into their currently in-development project, or take the safe route and ship on DirectX before switching to Vulkan for their next project.

Remember that AAA games often take two to three years from inception to ship. Bethesda gave Vulkan a test run with Doom 2016, and it performed well. It was because of this successful trial that Bethesda announced it would continue to support Vulkan in its other upcoming games. But I wouldn’t look for Vulkan in everything for a while - the worst-case scenario is that a studio is in full production mode on a title and then something goes wrong with either Vulkan or the engine tech built on Vulkan, causing hundreds of man-hours of productivity to be lost while the issues are identified and fixed. Remember, it isn’t just the engineers who have to deal with any issues that arise with Vulkan - it could be everybody working on the game. That’s a difficult ask when you have a deadline approaching fast. It’s dangerous to swap important parts of your car while you’re driving it. It’s much safer to wait until you stop.

Got a burning question you want answered?


For those interested in the engineering behind DOOM (2016)’s rendering pipeline, this blog post has a lot of interesting tidbits. They did some cool stuff with their order of operations and optimizations in order to make DOOM’s visuals so good.

Got a burning question you want answered?


What do you think of Vulkan, the successor to OpenGL? I know little about APIs, but from what I hear it's very promising, especially when DirectX has never been too great (or at least not for quite some time). Right now only Croteam has announced support for it, but do you think it will make a difference regardless?


I think it’s going to be a viable alternative, at least in the short term. NVidia, ATi, and Intel have all declared support for it. NVidia and ATi both have drivers for it already. As long as the GPU developers continue to support it, it has a real chance at being used. The real question is whether the publishers adopt it for their games’ development, and that depends on the ease of use, the power it provides, and the continued support from its developers.

Anonymous asked:

Hi, can you weigh in on this whole topic of lazy devs / developers being so dumb that they don't know how to enable AF for PS4? I'm tired of reading this crap - I'm certain developers are not dumb, and there are real performance-based reasons why a game may not launch with AF. There's a thread full of misinformation on NeoGAF, and of course I'm from Beyond3D and we've got the same garbage there too.

Two things, really. I checked with one of the graphics programmers about this since I don’t really delve into the deeper graphics stuff very often. 

As a quick primer for those who aren’t sure what AF is, it stands for Anisotropic Filtering. Anisotropic Filtering is a graphics technique that makes textures viewed at an angle look better. Normally, when you view a textured surface at a steep angle, it gets compressed and squashed on screen, and ordinary texture filtering - which assumes the viewer is looking at the texture head-on - blurs it out. Anisotropic Filtering builds on MIP mapping (the same family of ideas behind LODs), but instead of just picking a pre-shrunk version of the texture based on distance, it takes extra texture samples along the squashed direction so the renderer can reconstruct how the texture should look from that viewing angle.

The thing is, Anisotropic Filtering is kind of an expensive process, so neither my graphics programmer associate nor I would be surprised if more than one studio simply disabled it for the sake of early games on the platform. It’s really tough to say whether the studios in question had to do this without actually looking at their performance numbers, but I can guarantee you it has nothing to do with laziness. Typically, it’s that they tried it and it either didn’t look good enough or they couldn’t take the performance hit under those circumstances.

The other thing is that there was a rumor that the Anisotropic Filtering code provided by Sony could have just been buggy out of the gate. This shouldn’t really be that big a surprise either - the developer tools for new consoles have always been works in progress, and some bugs and effects aren’t complete when the console ships. We developers get our firmware and SDK updates from Sony and Microsoft (and even Nintendo, for those who still develop on those platforms) as they come just like you do, and early games tend to lack certain features that are added later. Regarding AF in particular, there was a rumor a few months back that Sony actually rolled out the fix for the problem sometime in March or April, making AF workable. Whether that’s true or not, I don’t know, but it might be? Keep an eye out for the AF in new games as they come out, especially this holiday season.

Anonymous asked:

Hi, my question is why would a developer lock a game at 30fps? I read that the new batman game was locked at 30fps for computers.

I’ve received a number of questions about frame rates and frame rate locking lately, so I thought I would try to answer them. At the core, it’s a technical problem about how much time you can set aside to do your calculations, and it isn’t very easily solved. This is likely going to be fairly lengthy and possibly technical. You have been warned.

Anonymous asked:

What are some of the key differences between the programming specifically of a video game vs some other kind of software? Aside from the obvious, like working a lot more with vector maths and stuff like that.

There isn’t a lot that’s absolutely unique, but video games tend to combine aspects of software development that are fairly rare elsewhere.

Most business or enterprise software has a very specific set of specifications the code is built on. You know exactly what the scope of the project is, what you need to deliver, and when you need to have things done by. Everything is quantitatively trackable. In video games, the result is measured by an extremely qualitative metric: “fun”. We don’t have a magical dial to raise or lower the amount of fun a game or system is. Sometimes the difference between fun and not fun is the game’s performance, sometimes it’s the content, sometimes it’s the UI, or any of a huge number of other things. Thus, our scope and specs tend to be more malleable - we often have to cut or shoehorn in features to make things work, rather than having everything set in stone from the get-go.

Another major consideration is simply performance - most commercial software doesn’t have to optimize for performance the way games do. The exceptions are those handling real-time critical tasks, like high-speed stock trading, medical software, or safety equipment. As stated before, though, we game developers are tasked with making things “fun”, which means both presenting compelling content and removing elements of frustration. Bad performance is a huge element of frustration, so we need to focus on it far more than, say, the Microsoft Excel team does.

Aside from that, games deal with a lot of graphics and animation on a practical level, and most non-gaming software doesn’t do that sort of thing. You’ll almost never see non-game software that needs to support different types of anti-aliasing, handle high dynamic range imaging, or update dynamic lighting in real time. Even movie studios like Pixar don’t often need to construct highly complex real-time animation blending systems, since they have the benefit of knowing exactly how every shot will be constructed. They can get away with having a render farm crunch their data out, while we need to make it work at 60 frames per second in real time. They practically never need to combine complicated animation systems, particle effects, lighting, and real-time requirements all into one package.

At the end of the day, software engineers are still software engineers. We write code, we fix bugs and optimize things, and we solve problems. Working on games as an engineer tends to show more immediately visible results for your work.

Anonymous asked:

Can graphic programmers be animators?

Short answer: Yes.

Long answer: Technically yes, but realistically you’ll probably never see one person do both. Here’s why:

An animator’s primary job is to create animation assets. Animators work with tools like Maya, 3D Studio Max, and Motion Builder, using animation skeletons (called rigs) to create the individual motions that will be used in game to make the character models move. Animators are artists. Most of what they do involves creating the motion that will be viewed by the player, and that involves principles like squash and stretch, staging, slow in and slow out, using arcs, and so forth. An animator’s job is about creating a motion through a given span of time in such a way that it reads well and the viewer can understand just what sort of action is being taken. An animator has to work within a bunch of other constraints as well - the size and dimensions of the model, the number of bones in the skeleton, how the action reads from potentially multiple camera angles, artistically weighing the importance of how the motion reads against the responsiveness of the controls, and so on.

One of the animators I really admire described animation as “sculpting time”, which I thought was an apt metaphor. The animator creates motions that have to convince the player that these are actual creatures moving, and not just marionettes.

A graphics programmer, on the other hand, is focused primarily on efficiency and accuracy. The graphics programmer is the person who writes the systems that take the data from artists and actually display it on the screen. As such, it is the graphics programmer’s job to take the mathematical equations that govern how things like lighting work and accurately translate them into routines that a computer can handle. The graphics programmer needs to figure out what order various lights and effects will be applied to a specific area, then translate that into screen space so that the hardware can figure out just what color a particular pixel is going to be. In addition to accuracy, the graphics programmer also focuses on efficiency and speed - the most beautiful scene in the world won’t matter in a video game if it doesn’t run at a reasonable frame rate. Thus, a graphics programmer must understand the hardware limitations and work to improve the speed of those calculations - typically by optimizing the math involved with the loading, processing, and rendering of assets.

One of the graphics programmers I worked with once described his job like this: “My greatest joy in life is seeing numbers go down, and I spend all my time trying to make this batch of numbers as small as possible.”

So in answer to your question... it is theoretically possible for an animator to be a graphics programmer or vice versa. However, they are two distinctly different fields with different types of expertise involved, and it is profoundly difficult to find that special person who is not only technically-minded enough to understand and engineer the math behind the way graphics are rendered, but also artistically minded to the point that he or she can create assets that read well to the player. And even if there were those super rare types, animating and engineering are full-time jobs. You can’t reasonably expect someone to do the work of two people on a given project, simply because of human limitations.
