#computer graphics – @askagamedev on Tumblr

Ask a Game Dev

@askagamedev / askagamedev.tumblr.com

I make games for a living and can answer your questions.

For open world games that load everything in real time as you explore the map, how come when you fast travel to someplace close by it goes to a loading screen? Especially if you decide to fast travel to a marker that is within visual distance of you and is technically already loaded in?


Normally, when you travel by walking/running/riding/driving/etc. to a new location in an open world game, the game can stream assets (trees, rocks, buildings, animals, etc.) in as you approach them, so they pop in at maximum draw distance where they are small and hard to notice. When you teleport to a new place, the game must stream everything in at once, both close and far. This doesn't all finish at the same rate, so the terrain, props, buildings, characters, effects, and so on would each pop in whenever their loading completes. As you might imagine, this looks really bad! To avoid showing the player all of that pop-in, we hide it behind a loading screen curtain until everything is finished loading and we're ready to go.

We often have to stream in higher-detail versions of assets as well, like high resolution textures and higher-poly models. The low-detail, low-poly model you see on the horizon is different from the high quality model that gets shown when you're right next to it. Cyberpunk 2077 had a lot of problems with this early on, where the better quality textures would only pop in after a while and everything would look like potato until things finished loading. Thus, even if the assets for far-away things are loaded, it doesn't necessarily mean their high detail up-close versions are loaded. We want to hide the ugly parts as much as we can in order to provide a consistent, polished experience.
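To make the idea concrete, here's a rough sketch of the kind of distance-based streaming logic described above. Every name and threshold here is invented for illustration - real engines do this asynchronously, per asset type, and with far more nuance:

```python
# Rough sketch of distance-based LOD/streaming logic. All names and numbers
# here are made up for illustration; real engines do this asynchronously
# and in far more detail.
from dataclasses import dataclass, field

@dataclass
class StreamedAsset:
    name: str
    distance: float                          # distance from the camera, in meters
    loaded_lods: set = field(default_factory=set)

    def request_lod(self, lod: str):
        if lod not in self.loaded_lods:
            print(f"streaming in {lod} version of {self.name}...")
            self.loaded_lods.add(lod)

def update_streaming(assets, high_detail_radius=50.0, draw_distance=1000.0):
    """Pick which version of each asset should be resident, based on distance."""
    for asset in assets:
        if asset.distance > draw_distance:
            continue                         # too far away to draw at all
        asset.request_lod("low")             # distant objects only need the cheap version
        if asset.distance < high_detail_radius:
            asset.request_lod("high")        # close objects also need full-detail meshes/textures

# Walking toward a building streams things in gradually; teleporting right next
# to it means the low and high versions are all requested in the same instant,
# which is exactly the pop-in a loading screen hides.
update_streaming([StreamedAsset("building_01", distance=40.0),
                  StreamedAsset("tree_17", distance=600.0)])
```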


Anonymous asked:

Just what is a shader anyway?

In short - a shader is a post-processing effect.

A shader is a small program that runs at certain points during the rendering pipeline and makes changes according to its program. You can think of a shader as a processing pass - it takes the existing information from the game and processes it to produce some kind of different result, often a visual effect. For example, it can take all shades of green on a character model and apply some kind of filter to them, distort them, blur them, modify their transparency, brighten them, project some other source onto them like a green screen, or any number of other effects. Most shaders have a format like “every so often, for each thing that matches some (or all) of these specific conditions, do this to the thing”. This can be very powerful for adding specific touches to specific elements, like making metal dully reflective, making glass transparent but also refractive, or even making water volumes react to objects placed into them. Since they can run every frame, you can have real time post-processing effects like these:

Breath of the Wild’s cel shading (and what it looks like without the shader)

Unity’s toon water shader:

Making fur:

They can also be applied to specific objects like how this VFX has shaders to cause distortion on particles emitted from the model here:

The big takeaway is whenever you see somebody say “shader”, think “post processing effect”.
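If you want a feel for what "a small program that runs for each pixel that matches some conditions" looks like in spirit, here's a toy CPU-side sketch. This is plain Python standing in for real shader code (which would be GLSL/HLSL running on the GPU); the effect and names are invented for illustration:

```python
import numpy as np

def greenify_pass(image: np.ndarray) -> np.ndarray:
    """Toy 'shader pass': for every pixel that is mostly green, brighten it.

    Real shaders are small GPU programs (GLSL, HLSL, etc.) that the hardware
    runs for every pixel or vertex in parallel; this is the same idea on the CPU.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    mostly_green = (g > r) & (g > b)              # the "matches these conditions" part
    out = image.astype(np.float32)
    out[mostly_green] *= 1.5                      # the "do this to the thing" part
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # stand-in for a rendered frame
processed = greenify_pass(frame)
```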



similar to the recent texturing posts, how does subsurface scattering work and how is it implemented in games?


Subsurface scattering is what happens when light isn’t completely reflected off of a surface, but instead penetrates into the surface by some amount, bounces off of material beneath the surface, and is partially reflected and refracted back out through that depth. This is how it works:

Normally, when you look at an opaque object, what you see is determined by the light reflecting directly off of its surface. That reflection can be shiny or matte, but you see only what is on the surface of the thing. There are also objects that are (semi-)transparent - they may refract or bend light as it passes through them, filtering its color along the way. Either way, the light bounces off of (or passes through) something and enters your eye, allowing you to see it in a process like this:

However, sometimes the surface isn’t completely opaque but it also isn’t transparent. Light will actually penetrate some surfaces to some depth before it reflects back out. This is what the “sub” in sub-surface scattering refers to - it goes under the surface before it reflects back out. The surface isn’t transparent on the way out, so the light coming back out of that area beneath the surface gets scattered based on qualities of the material it’s traveling through. As a result, you get something like this:

The depth to which light can penetrate beneath the surface, and the effects that material has on the light’s color, intensity, direction, etc., are defined and controlled by the data within the subsurface scattering map. A subsurface scattering map is essentially another kind of texture map - it defines how deep light can penetrate each particular part of the model, and what kind of effects that penetration and reflection will have on the light that comes back out. This allows technical artists to tweak the individual visual behaviors for different materials like skin, wood, blood, ooze, fur, fuzz, etc., as well as where those materials are located on the textured model.

The result is that we can define exact areas where light can penetrate into the model and others where it cannot, and our artists can tweak those values to say how far the light penetrates and what the material does to the light that gets reflected from beneath the surface. This is what lets tech artists make certain parts of the dragon (tail, horns, ridges, etc.) react differently to light shined on them than other parts do.
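As a very simplified sketch of the idea, here's roughly how a per-pixel shading function might blend ordinary surface reflection with a "beneath the surface" term driven by a value sampled from the subsurface scattering map. The parameter names and the blend formula are illustrative assumptions, not any particular engine's implementation (real games use diffusion profiles and other far more sophisticated models):

```python
import numpy as np

def shade_with_sss(base_color, light_color, n_dot_l, sss_depth, sss_tint):
    """Very simplified per-pixel sketch of subsurface scattering.

    n_dot_l   : surface-only diffuse term (normal . light direction), 0..1
    sss_depth : value sampled from the subsurface scattering map
                (0 = fully opaque, 1 = light penetrates deeply)
    sss_tint  : color the material gives to light traveling through it
                (e.g. reddish for skin)
    These names and the blend are illustrative, not a real engine's model.
    """
    base_color  = np.asarray(base_color,  dtype=np.float32)
    light_color = np.asarray(light_color, dtype=np.float32)
    sss_tint    = np.asarray(sss_tint,    dtype=np.float32)

    surface   = base_color * light_color * n_dot_l      # ordinary diffuse reflection
    # Light that went under the surface: softened (less dependent on the exact
    # angle) and tinted by the material it traveled through.
    soft_term = 0.5 * (n_dot_l + 1.0)
    beneath   = base_color * light_color * sss_tint * soft_term
    return (1.0 - sss_depth) * surface + sss_depth * beneath

# An ear lit from behind: n_dot_l is near zero, but a high sss_depth from the
# map still lets some warm light "glow" through.
print(shade_with_sss([0.8, 0.6, 0.5], [1, 1, 1], n_dot_l=0.05,
                     sss_depth=0.7, sss_tint=[1.0, 0.4, 0.3]))
```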


Anonymous asked:

Often in graphics/performance discussions, people will say games tend to be "CPU heavy" and some tend to be "GPU heavy." Does this bear out in your experience? What sorts of things in a game use the CPU more and what things use the GPU more, and how much can those be shuffled around by engineering or clever coding?

These discussions boil down to assigning tasks to the processor that is best suited to handle them. GPUs are very good at a smaller set of tasks, but cannot do others at all. CPUs can do all of the tasks, but aren’t as good as the GPU at the tasks the GPU is good at. This means that all of the calculations the GPU doesn’t want to do (i.e. is bad at doing) should be handled by the CPU. With this in mind, if there are more calculations than the GPU can handle (requiring the CPU to pick up the extra work), the game is GPU-limited. If the GPU is idling because there aren’t enough calculations for it to process while the CPU is still running at full load, the game is CPU-limited.

GPUs are specifically optimized for processing 3D graphics. This usually means they are very good at performing floating point math operations - a lot of addition and multiplication of floating point numbers, really really fast. You may have heard the term "FLOPS” (megaflops, teraflops) before. FLOPS are FLoating point Operations Per Second. Floating point numbers are how we store decimal numbers in data. A FLOP is a mathematical calculation run on some floating point numbers, and FLOPS is how we measure the amount of work a GPU can do over a unit of time (like a second). GPUs are very, very good at performing a lot of small mathematical calculations in parallel (i.e. at the same time), which is also what makes them so sought after for things like bitcoin mining in addition to processing computer graphics.

Everything else that isn’t 3D graphics or heavy math usually gets handled by the CPU. This usually means things like creating and destroying objects, deciding which data to process, and handling more abstract tasks - pathfinding and AI decision-making, netcode, procedural content generation, saving and loading, that sort of thing. A general rule of thumb is that the GPU does all the math, while the CPU has to decide what math needs to be done. The CPU can also do the math if and when needed, just not as fast as the GPU can.
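Here's a small illustration of the split, as a sketch only. NumPy's vectorized math on a big batch of floats stands in for the GPU-style workload (the same tiny operation applied to a huge number of independent values), while a branchy per-entity loop stands in for the decision-making work the CPU keeps; the entity structure is made up for the example:

```python
import numpy as np

# GPU-style workload: the same simple floating-point math applied to a huge
# batch of independent values at once. (NumPy's vectorized ops are just a
# CPU-side stand-in for what a GPU does in parallel across thousands of cores.)
positions  = np.random.rand(1_000_000, 3).astype(np.float32)
velocities = np.random.rand(1_000_000, 3).astype(np.float32)
positions += velocities * (1.0 / 60.0)     # one multiply-add per component, a million times over

# CPU-style workload: branchy, one-at-a-time decisions about *what* work
# needs to happen at all (spawning, pathfinding, save/load, and so on).
def decide_what_to_update(entities):
    to_update = []
    for entity in entities:
        if entity.get("active") and not entity.get("sleeping"):
            to_update.append(entity)
    return to_update

entities = [{"active": True, "sleeping": i % 3 == 0} for i in range(10)]
print(len(decide_what_to_update(entities)))
```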


Anonymous asked:

What is triangle count? If a Pokemon model from Sword and Shield has a different triangle count but looks exactly the same as the Sun and Moon model, is it the same model?

You’re probably thinking about a 3D mesh. Some people refer to it as a model, but the word “model” often carries other connotations with it. I said I wouldn’t write anything more about Pokemon specifically, but this is an opportunity to segue into a more general look at how 3D graphics work, and I think that’s a worthwhile topic to cover, so let’s take a dive down that rabbit hole. We’ll start with the mesh. A 3D mesh usually looks something like this:

As you can see, this is a polygonal mesh for Bulbasaur. It is made up of a bunch of points in space called vertices that are connected to each other. These connections between vertices are called “edges”. Typically, edges are grouped into threes to form “triangles”, each of which represents a solid surface on the model. The general complexity of a 3D mesh is proportional to the number of triangles it contains. As a point of reference, Spider-Man PS4′s various spider-suit meshes had between 80,000 and 150,000 triangles.

However, you may have gathered that this mesh isn’t the entirety of the 3D model. It’s just the shape of the thing. You also need to know what it looks like - what parts are colored what, and so on. This means you need to apply a texture map to it. A texture map is a 2D image that has certain established points “mapped” to vertices on the 3D mesh. It ends up looking something like this:

You can see how specific parts of the Texture Map in the upper left get applied to corresponding parts on the 3D Mesh - there’s a section of the Texture Map for the tail, the face, the body, the paws, etc. We can also have multiple texture maps for a single 3D mesh. Here’s another example - each of these guns uses the same 3D mesh but a different texture map.

Are these models the same? Well… they all use the same 3D mesh, but clearly they have different texture maps applied, each of which required somebody to create them. Let’s hold off on judgement for a moment though, because we’re not done yet.

This is a normal map: 

It describes the heights and shape of a surface that isn’t smooth. It lets us make a smaller number of triangles look like they have a lot more detail without needing to break them up into a significantly higher number of triangles. The two examples here are both using the same 3D mesh, but with different normal maps applied. Are these two models the same? Well, they both use the same 3D mesh and texture map, but not the same normal map. But let’s hold off for a moment again, because we’re still not done.

There are also specular maps…

… and ambient occlusion maps…

… and Shaders…

… all of which can be different while keeping things like the 3D mesh and/or texture map the exact same, especially between one game and another. This also only covers elements of the model, and doesn’t go into things like the rig (animation skeleton) or the individual animations, which can also be different.
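To make the "same model or not?" question concrete, here's a rough sketch of how a renderable model is really a bundle of separate pieces. The structure, field names, and asset file names are all invented for illustration (real engines have far richer asset formats, plus rigs, animations, LODs, and material systems):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mesh:
    vertices: list          # (x, y, z) points in space
    triangles: list         # triples of vertex indices
    uvs: list               # where each vertex lands on the 2D texture maps

@dataclass
class Model:
    mesh: Mesh
    texture_map: str                       # base color
    normal_map: Optional[str] = None       # fake surface detail for lighting
    specular_map: Optional[str] = None     # how shiny each area is
    ao_map: Optional[str] = None           # baked-in crevice shadowing
    shader: str = "standard"

# Tiny placeholder mesh; a real one would have tens of thousands of triangles.
shared_mesh = Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                   triangles=[(0, 1, 2)],
                   uvs=[(0, 0), (1, 0), (0, 1)])

# Hypothetical example: the same mesh and base texture, but new maps and shader.
old_game = Model(shared_mesh, "pika_diffuse.png", normal_map="pika_normal_v1.png",
                 shader="older_standard")
new_game = Model(shared_mesh, "pika_diffuse.png", normal_map="pika_normal_v2.png",
                 specular_map="pika_spec.png", shader="newer_outline")
# Same mesh object in both, different maps and shaders - whether that counts as
# a "new model" depends on which of these pieces you're asking about.
```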

There’s clearly a lot of work that goes into the creation of a 3D model. We obviously have to build the mesh but we also have to build things like the texture map, the normal map, the specular map, the ambient occlusion map, and any specific shaders. For a game like Pokemon, in some cases you might keep the old mesh and the texture map, but create brand new normal maps, specular maps, ambient occlusion maps, and shaders. So you end up with something that looks like this:

Clearly the specularity is different - look at how much more shiny and reflective the Let’s Go version is. The shaders are also different - note how much better defined the lines are around the model in gen 8. The mesh itself is probably the same. The texture map might be the same. The normal map is definitely different - you can clearly see how the shadowing differs between the two, especially around the chin and lower torso. So is this a new model or an old one? My answer is “yes” - some parts of it are new and some are not. But is it the exact same model? Clearly not - there has definitely been work done between the two. Claiming otherwise would be foolish.


Anonymous asked:

I'm working on a personal project and I was wondering, how are cameras implemented? Do you move every instance in the game depending on the screen size or is there another way, because the former sounds a bit heavy on calculations.

I’m assuming you are talking about 3D space and not 2D, since 2D is a lot easier to handle. In my experience, a camera in 3D space is usually its own object that has a position and orientation within the world, along with a set of variables used to figure out what it should be able to see from there - a field of view, a focal length, depth of field, etc. Given that position and orientation in 3D space, we can limit what the camera sees to a known space called a “viewing frustum” - the space that includes everything the camera can see, from things that are super close up to things that are at maximum viewing distance. Conceptually, it looks something like this:

Things that are in front of the near clipping plane aren’t rendered (too close to see). Anything beyond the far clipping plane is too far away to see. Anything outside the viewing angle isn’t visible to the camera, so the renderer can safely skip rendering it. Anything that’s at least partially within the viewing frustum can be seen, so the renderer should render it. You can use some basic 3D math to figure out whether the camera can see an object. This is what’s known as “frustum culling”. Here’s an example of frustum culling:

Only the objects in color are rendered in this case. Everything else isn’t. So… to answer your question, objects in the world generally stay where they are unless they decide to move. The camera is one of those objects - it can move, and the view frustum must then be recalculated for its new position and orientation. The contents of the viewing frustum are calculated every frame, so if the camera moves between frames, what the camera sees also changes between frames and things must be recalculated and re-rendered. 
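Here's a minimal sketch of the "basic 3D math" involved: testing an object's bounding sphere against the frustum's six planes. The plane values below are invented toy numbers, and real engines use normalized planes extracted from the camera matrices plus spatial data structures to avoid testing every object:

```python
import numpy as np

# Minimal frustum-culling sketch. A frustum can be described by six planes
# (near, far, left, right, top, bottom), each as (normal, distance) with the
# normal pointing inward. A bounding sphere is outside the frustum if it lies
# entirely behind any one plane. (Normals should be normalized for the radius
# test to be exact; skipped here for brevity.)

def sphere_in_frustum(center, radius, planes):
    center = np.asarray(center, dtype=np.float32)
    for normal, d in planes:
        if np.dot(normal, center) + d < -radius:   # signed distance to the plane
            return False                            # completely behind it: cull
    return True                                     # at least partially inside: render

# Toy frustum roughly in front of a camera at the origin looking down +z.
planes = [
    (np.array([0.0, 0.0,  1.0]), -0.1),    # near plane at z = 0.1
    (np.array([0.0, 0.0, -1.0]), 1000.0),  # far plane at z = 1000
    (np.array([ 1.0, 0.0, 0.5]), 0.0),     # left
    (np.array([-1.0, 0.0, 0.5]), 0.0),     # right
    (np.array([0.0,  1.0, 0.5]), 0.0),     # bottom
    (np.array([0.0, -1.0, 0.5]), 0.0),     # top
]

print(sphere_in_frustum(center=(0, 0, 50), radius=1.0, planes=planes))   # True: visible
print(sphere_in_frustum(center=(0, 0, -5), radius=1.0, planes=planes))   # False: behind the camera
```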

That’s generally how cameras work in 3D graphics. I’ve purposely avoided the nitty gritty math involved with this, as well as other rendering optimizations like occlusion culling and back face culling for the sake of brevity and simplicity. This is the sort of stuff you usually learn about in a University-level computer science course on computer graphics. If you’re interested in learning more, you can try googling “Introduction to Computer Graphics” and picking a university course. For a brief overview, [click here] for a decent introduction to computer graphics from Scratchapixel.


Anonymous asked:

Certain hardware developers appear to believe that Ray-tracing is going to be the next big thing. How does non-raytraced video game lighting work, and why is ray-tracing better?

As far as I understand it (and I am not a graphics programmer, so I really don’t understand a lot of this), most games and simulations use the “Radiosity” model to calculate lighting primarily for performance reasons. 

Radiosity operates in a finite number of passes. It first calculates the direct light from each light source onto all of the elements in the scene that the light shines on. After that’s finished, the renderer calculates all of the first-order reflections off of the stuff the light source shines on. Something dull doesn’t reflect as much light as, say, something made of chrome, so the second pass would illuminate the things near the chrome object more because of its reflectivity. On the third pass, it calculates the second-order reflections from the first-order reflections, and so on and so forth. The more passes we do, the more accurate the lighting looks, but each subsequent pass tends to provide less additional accuracy than its predecessor (i.e. diminishing returns). In order to maintain a certain level of performance, we usually cap the maximum number of passes the renderer will take. This often gets us a “close enough” result.

Ray Tracing produces a more accurate simulation of what things should look like than Radiosity, because it captures all of the reflections rather than stopping at a capped number of Radiosity passes, but it is much more expensive in terms of performance. The way lighting works in physical space is that light comes from a source (e.g. the sun) and bounces around the physical world until it reaches our eyeballs. Some rays of light will never reach our eyeballs - they point away from us. Ray Tracing ignores all of the light that won’t reach our eyeballs (the camera) and instead works in reverse - if our camera can see something, there must be some path from the camera to the object and beyond, bouncing as many times as necessary, to reach the light source. Since we know all about the light source, exactly how many bounces there were, and the material properties of each bounce point, we can use that to calculate what that pixel should look like.

The trouble with Ray Tracing is that those reflections and bounces could number in the hundreds or even thousands before reaching the light source, and the renderer must run those calculations for every pixel its camera can see. We’ve actually had ray tracers for decades now - writing a ray tracer is a great introduction to computer graphics for computer science students. A friend of mine wrote one while we were in middle school, but it took the computer of the time five minutes to render out a simple scene with just one light. The main difference between then and now is that we’ve finally reached the point where graphics hardware can actually do those calculations in real time. That’s why ray tracing is now a big deal: it’s more accurate at lighting things than the old Radiosity model.
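For a feel of the "trace backwards from the camera" idea, here's a heavily simplified toy ray tracer: one sphere, one light, a single primary ray per character-pixel, and simple diffuse shading. The scene values are invented, and real ray tracers handle many bounces, materials, shadows, and sampling:

```python
import numpy as np

# Heavily simplified "backwards" ray tracing sketch: shoot a ray from the
# camera through each pixel, see what it hits, then point toward the light
# to decide how bright that pixel is. Scene values are toy numbers.

SPHERE_CENTER = np.array([0.0, 0.0, 3.0])
SPHERE_RADIUS = 1.0
LIGHT_POS     = np.array([5.0, 5.0, 0.0])

def hit_sphere(origin, direction):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = origin - SPHERE_CENTER
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace_pixel(x, y, width=40, height=20):
    """Shoot a ray from the camera (at the origin) through pixel (x, y)."""
    direction = np.array([(x + 0.5) / width - 0.5,
                          0.5 - (y + 0.5) / height,
                          1.0])
    direction /= np.linalg.norm(direction)
    t = hit_sphere(np.zeros(3), direction)
    if t is None:
        return " "                                   # ray escaped: background
    hit_point = t * direction
    normal = (hit_point - SPHERE_CENTER) / SPHERE_RADIUS
    to_light = LIGHT_POS - hit_point
    to_light /= np.linalg.norm(to_light)
    brightness = max(np.dot(normal, to_light), 0.0)  # how directly the light hits here
    return ".:-=+*#%@"[int(brightness * 8.999)]      # brighter -> denser character

print("\n".join("".join(trace_pixel(x, y) for x in range(40)) for y in range(20)))
```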

By the way, Disney Animation Studios put out this video in 2016 that explains how Ray Tracing works much better than I can. If you’ve got nine minutes or so and are interested in a great explanation of computer graphics technology, give it a watch.


Anonymous asked:

A lot of games are now using "dynamic" resolution, where the game raises/lowers its resolution depending on how much the system is being taxed. For example, many games on the Nintendo Switch drop below 480p when the action gets tense to keep the framerate stable. My question is, how does dynamic resolution work? Particularly, how does a game "know" it's going to need to drop the pixel count to keep everything running smoothly?

Before we begin, I suggest reading this post I made a while ago about [screen resolution and just how much data we need to process for each frame] if you haven’t yet. There’s a little bit of math involved, but I think it should be easily understandable. You need a bit of understanding about how resolution informs the amount of data we need to process each frame. Got it? Cool.

The amount of data we need to process increases at a quadratic rate as resolution goes up, because we must increase resolution in two dimensions at the same time. We can’t just widen what we display; we have to lengthen it at the same time or it won’t fit right on the screen. Doubling the resolution results in 4x the number of pixels to process, because the total number of pixels is width times height. This works in reverse too - halving the resolution means only ¼ of the pixels to process. As you might imagine, because there are so many fewer pixels to process, downgrading the resolution results in huge savings in terms of calculations.

One of the things we track in most computer programs is time. We know what real time it was (down to a fraction of a millisecond) when the game started, and we can track how much time has passed since then. Because we have absolute time markers, we can use them to find relative time markers, like how long each of the last X frames took to render. If we know how long a frame took to render, we can calculate our frame rate: 1000 milliseconds divided by (X milliseconds to render the frame) = current frame rate. For 60 frames per second, X has to be around 16.66 milliseconds, because 1000ms / 16.66ms = 60.02. For 30 fps, X must be around 33.3 milliseconds. If X suddenly jumps to 50 or 60 milliseconds (16-20 fps) and stays there, that’s an unacceptable frame rate. We have to intervene somehow.

When the game detects this situation (by checking a running average of the last several frame times each time it finishes a frame), it can ratchet down the resolution to decrease the total amount of data it needs to process, then carry on as normal. The resolution change should affect the average frame time, which then gets worked into the running average. If frames are now finishing faster than needed, the extra time is spent idling until the next frame; the game can consider the spike to be over and decide to return to a higher resolution. The goal is to keep the frame time in the sweet spot - the best resolution that still hits the target frame rate. If the frame time is too fast, we increase resolution. If it’s too slow, we decrease it.
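Here's a sketch of that feedback loop. The thresholds, step sizes, window length, and resolution list are all invented for illustration; real implementations tune these carefully and often scale the render target continuously rather than in steps:

```python
from collections import deque

# Sketch of a dynamic-resolution feedback loop. Thresholds, step sizes, and
# the resolution list are invented for illustration.

TARGET_FRAME_MS = 1000.0 / 30.0          # aiming for 30 fps -> ~33.3 ms per frame
RESOLUTIONS = [(1280, 720), (1600, 900), (1920, 1080)]

class DynamicResolution:
    def __init__(self, window=30):
        self.frame_times = deque(maxlen=window)   # running window of recent frame times
        self.level = len(RESOLUTIONS) - 1         # start at full resolution

    def on_frame_end(self, frame_ms):
        self.frame_times.append(frame_ms)
        avg = sum(self.frame_times) / len(self.frame_times)
        if avg > TARGET_FRAME_MS * 1.05 and self.level > 0:
            self.level -= 1                       # consistently too slow: drop resolution
        elif avg < TARGET_FRAME_MS * 0.80 and self.level < len(RESOLUTIONS) - 1:
            self.level += 1                       # plenty of headroom: raise it again
        return RESOLUTIONS[self.level]

dr = DynamicResolution()
for ms in [33, 34, 45, 48, 47, 46, 33, 30, 25, 24, 24, 24]:   # a spike, then recovery
    print(ms, "ms ->", dr.on_frame_end(ms))
```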



Game Development Glossary: Graphics 101

The other day, a friend of mine was going through the list of game options that adjust various graphics-related settings, but didn't understand what all of them actually meant. Some of them should be pretty self-explanatory (texture quality - low, medium, high, ultra, for example), but there are also several other terms that many people have a vague idea about but aren't quite sure of. Today, I'll try to explain what they are and how they work.

Bloom

Bloom is a shader effect that allows lighting to create a sort of feathered halo-ish effect around hard edges. It's what makes light sort of bleed out over and around corners on things, or when it is reflected on shiny surfaces.

VSync, or Vertical Sync

VSync forces your graphics card to synchronize its output with your monitor's refresh rate. This is supposed to guard against screen tearing - when the monitor shows parts of two different frames at once because the GPU swapped frames mid-refresh. It doesn't always work, though. Turning on VSync can also force your GPU to wait and output frames at a lower rate, which can hurt overall performance.

Depth of Field

Depth of Field is when the programmers use various shaders to simulate focusing on a specific distance, making things further or closer appear blurry instead of uniformly focused. It's a shader effect that gets run on top of what you should be seeing if everything were in perfect focus.

Motion Blur

When the camera moves faster than the eye can focus, the image gets interpolated with what has just passed to simulate speed. This makes things at lower frame rates feel better, though it will cost graphical computing power since it has to retain what it has just seen for the past few frames and use that data to interpolate what it is you see.

Anti-Aliasing

Anti Aliasing takes what you should see and smooths each pixel with the surrounding pixels. This helps keep edges from looking too jagged or pixelated. As a post-processing effect, it also soaks up graphical processing power.

Texture Filtering

Similar to Anti-Aliasing, it's used to make textures appear less pixely. As such, you can turn down the texture quality/size without making the textured objects look too blocky if upscaled. It does this by interpolating the pixels on a texture with nearby pixels. The important thing to note here is that texture filtering occurs on the texture itself, while anti-aliasing occurs on what you see on screen.
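To show the interpolation texture filtering does, here's a minimal sketch of bilinear filtering - blending the four nearest texels around a sample point. This is just the simplest form; real GPUs also blend across mipmap levels (trilinear) and viewing angles (anisotropic filtering):

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Sample a texture at (u, v) in [0, 1] by blending the four nearest texels.

    This is bilinear filtering, the simplest kind of texture filtering; GPUs
    also blend across mipmaps (trilinear) and view angles (anisotropic).
    """
    h, w = texture.shape[:2]
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0                       # how far we are between texels
    top    = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom

# A tiny 2x2 "texture": a black and white checker. Sampling in the middle gives
# grey instead of a hard pixel edge -- that's the smoothing this option controls.
checker = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
print(sample_bilinear(checker, 0.5, 0.5))   # 0.5
```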

FOV  (Field of View)

Field of View is the width of the camera's view angle. The wider the angle, the more you can see overall. Extreme width will, however, cause distortions in the proportionality of things.

God Rays

God Rays are when you have concentrated, visible light beams break through cloud cover or around/in specific areas.

Ambient Occlusion

When deciding how to draw a given pixel, the GPU will sample the depth/distance from nearby pixels to figure out how much shadow/depth the nearby pixels should affect the given pixel. This makes crevices and nooks darker than the surroundings.

And that's it for this blog in 2014. I will be on break and traveling for the next two weeks or so, so updates will be sporadic at best until the new year begins.


Anonymous asked:

I was thinking about it last night, and I came to a dumb conclusion: Now that we've hit a graphical peak, where the last generation's graphics are similar or indistinguishable from the current gen, the only way to go is Larger (2K! 4K!) and Smoother (60fps! 120fps!). That said, in my opinion what should be innovated upon is the experience; you can see this already with second-screen gameplay (Wii U, NDS/3DS, potentially the PS4/Vita) and virtual reality (Rift, Morpheus, Google Cardboard).

While this isn’t really a question, it does raise a question of my own. What makes you think we’ve hit a graphical peak? Launch titles on any console have always looked about the same as the final batch of titles from the last generation, and have never been a good measuring stick for the potential to be brought out of the hardware.

Here, just take a look. A friend of mine (grapeykins) helped put together a comparison of titles from near launch compared to similar titles released near the end of the console’s lifespan for the past three Playstation generations.

Launch titles being indistinguishable from the prior generation isn’t anything new. People tend to think that we've hit that point of diminishing graphical returns with each successive generation, but when you look back at where you started you can see how far things have actually come. 

While the hardware for each console generation might be fixed, the software and drivers that take advantage of that hardware certainly aren't. As engineers optimize the drivers, fix bugs, and improve the programming interfaces, game developers unlock progressively more potential from the hardware. We've only just begun to unlock the potential of the current generation. I think there's still plenty of room to grow, especially with things like skin, hair, body deformations, animations, lighting, materials, reflection, refraction, etc.

I'm not saying that game developers won't continue to innovate in other ways - my previous post shows that we have and will continue to do so (barring another global economic crisis). I'm just saying that we're going to continue to do so in the visual fidelity arena as well.

Anonymous asked:

So, I know that all games/animations are rendered in polygons (triangles), but what do you do when you want to render a sphere? I've seen programs/games with 3D spheres, that seem perfect to the naked eye. Now, a perfect sphere would require infinite polygons, which would take infinite computing power to render. Are these spheres just regular old models that only look like they are perfect because the polygons are so small on the model, or is there some sort of programming trickery involved?

It’s trickery, and it’s actually a pretty interesting trick. What you’re seeing is the combined effect of shading and normal mapping.

Shading is when the renderer uses positional data to calculate the color of a particular pixel on screen. There are a number of different techniques for this, with differing results. Here is the first one, a technique called “flat shading”. With flat shading, the renderer computes a single color for each polygon, based on the lighting it calculates at the center of that polygon.

You can pick out the individual polygons pretty easily here, right?

Now we’ll switch to a different shading technique: Phong shading. Instead of computing one color for the entire polygon, the renderer takes the lighting values at the corners of each polygon and interpolates them smoothly across its surface (true Phong shading actually interpolates the corner normals and recomputes the lighting at every pixel). This results in shading that is much more gradual and smooth, as you can see here:

If you look closely, you can see that the number of polygons here actually hasn’t changed. You should still be able to pick out the vertices along the outline of the “sphere” here. But it certainly looks a lot rounder, doesn’t it?

But this still has issues, because we might want something that would be very polygon-intensive to model properly, like a cobblestone street. This poses a problem - we want streets to be flat in terms of polygons, because it’s a street and you walk on it, but it should still visually look like cobblestones. You don’t want to spend extra GPU cycles rendering extra polygons for the street when you could spend them on hair or fingers or facial expressions or something, but you don’t want it to look flat either. So how do you fix this?

Have you ever seen the movie Indiana Jones and the Last Crusade? There was a scene in the movie where Indy has to take a step out into what looks like a bottomless gorge:

But he’s not really stepping onto a bottomless gorge, is he? If you look closely, you can see it. When you change the angle of the camera, you can easily see what’s actually going on:

The step of faith here is actually a cleverly painted (and flat) bridge to make it look like there’s a huge drop. From the viewer’s perspective, it looks 3D even if it actually isn’t. And since we have a computer fast enough to do all the calculations for us every frame, we can make it calculate what the 3D surface would look like from different angles and repaint it on the fly, even if the polygon we’re displaying is actually still flat.

This is called bump mapping (or often normal mapping, which is a specific kind of bump mapping). The way it works is that you apply a texture like this to the polygon, but instead of being displayed directly on the polygon, it’s used by the renderer to figure out which direction the surface would be facing at each point - as if it had real height - even though the polygon is flat there. It simulates a bunch of different heights and depths, even though the polygon is still actually flat. The result is what you see to the top right - lighting as if the surface were bumpy or pock-marked, but without needing additional polygons to get the visual effect.
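Here's a minimal sketch of that trick: the lighting calculation simply uses the direction stored in the normal map instead of the flat polygon's real normal. The numbers are illustrative, and real normal maps pack these directions into RGB in tangent space with a lot more machinery around them:

```python
import numpy as np

def lambert(normal, light_dir, base_color):
    """Basic diffuse lighting: brightness depends on the angle to the light."""
    normal = normal / np.linalg.norm(normal)
    light_dir = light_dir / np.linalg.norm(light_dir)
    return base_color * max(np.dot(normal, light_dir), 0.0)

light_dir  = np.array([1.0, 0.0, 1.0])        # light coming in from the upper right
base_color = np.array([0.6, 0.6, 0.6])

# The polygon itself is flat: its real geometric normal points straight out.
flat_normal = np.array([0.0, 0.0, 1.0])

# The normal map says "at this pixel, pretend the surface tilts away from the
# light" (e.g. the side of a painted-on cobblestone). Values are illustrative;
# real normal maps store these directions packed into RGB in tangent space.
mapped_normal = np.array([-0.5, 0.0, 0.85])

print("flat polygon:   ", lambert(flat_normal,   light_dir, base_color))
print("with normal map:", lambert(mapped_normal, light_dir, base_color))
# Same flat polygon, but the second pixel comes out darker, as if the surface
# really bulged -- which is exactly the trick bump/normal mapping plays.
```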

The results of this can be pretty interesting. Take a look at these. This is a model with a lot of polygons in it to create a bunch of different shapes:

And here is a completely flat polygon with a normal map based off of the above shape applied to it:

You can see that the stuff that really sticks out far like the cone doesn’t look right, but the stuff that only pokes out a little bit like the donut and the hemisphere actually look pretty good for taking up no additional polygons at all. If you looked at both of them from directly above, without the side angle view, it’d actually be pretty tough to tell them apart without touching them. And that’s the point - it’s a way to fake heights and depths without adding extra polygons. This is why it works best on flat surfaces like walls and the ground that you view from (nearly) straight on:

These are both flat polygon roads, but the right side looks a lot more like it’s made of real stones than the left. There are other effects also at play, like specular maps (which are used to calculate how shiny/reflective or dull an object is) and more, but they also operate on the same sort of mathematical principles.

It takes a good artist to create the proper map textures for 3D models, and it takes a graphics programmer with a solid understanding of math to create the renderer that can do all of the proper calculations to take those maps and figure out exactly what color each pixel actually is. I will say that it can be pretty fascinating stuff.
