I came in 8 minutes late since Calgary just had a fresh serving of snow and traffic was slow. The class had waited for me which was nice. We started at 9:08
Friday the 3rd:
Also talked about as game mechanics. It's the interactivity which makes games unique.
- The game knows all and can do all so making perfect AI is easy
- The real goal is to make fun AI
- Represent the world to the AI by choosing a data structure
World representation considerations:
- path finding
- collision detection
- sending messages to groups of entities
- dynamically loading sections of the world
World representation types:
- Single array list
- KD trees
- Spatial hashing
An array works well for small games and does not have any surprising usage edge cases. Big games use multiple techniques to maximize performance.
In the hulk games when you smash things the resulting debris is a real entity.
Most of the complex methods are spatial, which allow log(n) lookup, but if an object sits between cells then the code gets complicated.
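The between-cells wrinkle can be shown with a minimal spatial hash sketch (all names and the cell size are my own, not from the course): an entity whose bounds cross a cell boundary has to be registered in every cell it overlaps.

```python
# Minimal spatial hash sketch. Entities are bucketed by integer cell
# coordinates; a lookup touches only one cell instead of scanning everything.
from collections import defaultdict

CELL = 10.0  # cell size in world units (assumed tuning value)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

class SpatialHash:
    def __init__(self):
        self.cells = defaultdict(set)

    def insert(self, entity, x, y, radius):
        # An entity whose radius crosses a cell boundary must be registered
        # in every cell it overlaps -- this is the "object between cells"
        # case that complicates real implementations.
        cx0, cy0 = cell_of(x - radius, y - radius)
        cx1, cy1 = cell_of(x + radius, y + radius)
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                self.cells[(cx, cy)].add(entity)

    def query(self, x, y):
        return self.cells.get(cell_of(x, y), set())
```

Insertion cost grows with the number of cells overlapped, which is the price paid for the fast lookup.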
Sphere of influence means instead of simulating everything in the world the game maintains only a bubble around the player. I remember this in Just Cause where cars will disappear as soon as they drive past you. If you turn around and try to capture a cool car you just saw you’ll miss it if you let it get out of view.
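The bubble idea is simple enough to sketch (function and entity names are illustrative, not from the lecture): each tick, only entities within a radius of the player stay simulated, and anything outside is dropped.

```python
# Sphere-of-influence sketch: keep only entities inside a bubble around
# the player; everything that drifts out is despawned.
import math

def update_bubble(player_pos, entities, radius):
    """Return the entities still inside the simulation bubble."""
    px, py = player_pos
    alive = []
    for name, (x, y) in entities:
        if math.hypot(x - px, y - py) <= radius:
            alive.append((name, (x, y)))
    # Entities outside the bubble are simply dropped -- which is why that
    # cool car in Just Cause is gone once you let it leave the bubble.
    return alive
```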
Dynamic entities might be
- players (human & ai)
Modeled with a simplified spatial representation for collision detection. They might have a state machine to handle behaviour plus attributes like health.
We want our entities to do interesting things. We can program the behaviour or we can simulate it. Simulation would be to use generic behaviours, like Givesdamage = 1 for bullets or rockets. Or Explodes = 2, etc.
Behaviour attributes make the behaviour richer but make tuning harder. The world will be more active but less controlled.
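A sketch of how those generic behaviour attributes might combine (the splash-damage rule and all names here are my own assumptions, not from the course):

```python
# Data-driven behaviour attributes: entities are just dictionaries of
# generic attributes, and systems interpret whatever attributes exist.
def apply_hit(target_health, projectile):
    damage = projectile.get("GivesDamage", 0)
    if projectile.get("Explodes", 0):
        # Assumed rule for illustration: explosions add splash damage.
        damage += projectile["Explodes"]
    return target_health - damage

bullet = {"GivesDamage": 1}
rocket = {"GivesDamage": 1, "Explodes": 2}
```

The tuning problem is visible even here: changing what `Explodes` means changes every entity that carries it.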
Triggers are a common way to start entity behaviours. Common types are: volume, surface, time. I remember using these in my Fallout mod. Trigger conditions can send events.
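A volume trigger is the easiest type to sketch (class name and event payload are illustrative): it fires an event the first time an entity enters its box.

```python
# Volume trigger sketch: fires its event once, the first time a position
# falls inside the trigger's box.
class VolumeTrigger:
    def __init__(self, xmin, xmax, ymin, ymax, event):
        self.box = (xmin, xmax, ymin, ymax)
        self.event = event
        self.fired = False

    def update(self, x, y, send_event):
        xmin, xmax, ymin, ymax = self.box
        if not self.fired and xmin <= x <= xmax and ymin <= y <= ymax:
            self.fired = True
            send_event(self.event)
```

Surface and time triggers would follow the same shape, just with a different condition in `update`.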
Used for simple entities. Most games use ad hoc state machines without a formal generic engine. Prototype (a game by Radical) had a formal, generic, GUI-edited state machine.
Mapping events to intentions
The AI interprets a button press as an intention. Any key press should be expressed as an intention which then gets mapped. The intention then goes into the player state machine as input, but conditions like isFacingWall could act as blockers.
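The key → intention → state machine flow might look like this sketch (the keymap, intention names, and blocker scheme are all assumptions for illustration):

```python
# Map raw key presses to intentions, then let blocker conditions veto the
# intention before it reaches the player state machine.
KEYMAP = {"w": "MOVE_FORWARD", "space": "JUMP"}

def resolve_intention(key, blockers):
    """Return the intention for a key press, or None if unmapped/blocked."""
    intention = KEYMAP.get(key)
    if intention is None:
        return None
    if blockers.get(intention, False):  # e.g. isFacingWall blocks MOVE_FORWARD
        return None
    return intention
```

The point of the indirection is that rebinding keys or adding blockers never touches the state machine itself.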
Not always rigid body simulation. Could be as simple as adding vectors. A full dynamics engine can be emulated much more cheaply if done right. What the engines bring you is collision detection. Physics can also drive procedural animation. We’ll look at FIFA for an example.
AI use of physics
In Prototype the cars are not simulated by the physics. They move along on rails until a collision. If one collides then the cars are put into the physics simulation until they come to rest. This means AI-controlled entities have no physical attributes, yet the AI must have realistic attributes to hand to the physics simulation when handed off.
Hulk 2 example: when Hulk elbows a car, the angular velocity is chosen to make it launch up into the air, tumble end-over-end, and then land behind him. This is all fake and given to the physics engine by the AI. Thrown objects are not simulated until they hit something. So if you throw a car at a helicopter we want the car to hit. Except in a full simulation the car’s trajectory would have a fixed curvature. In the game the AI instead drives the thrown object towards the helicopter like a homing missile.
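The homing behaviour can be sketched in a few lines (the turn rate and tick count are invented tuning values): instead of integrating a ballistic arc, each tick the object just closes a fraction of the remaining distance to the target.

```python
# Homing thrown-object sketch: the AI steers the object toward the target
# each tick rather than simulating a fixed ballistic trajectory.
def steer(pos, target, turn_rate=0.5):
    """Move pos a fraction of the remaining distance toward target."""
    return tuple(p + turn_rate * (t - p) for p, t in zip(pos, target))

def simulate_throw(pos, target, ticks=8):
    for _ in range(ticks):
        pos = steer(pos, target)
    return pos
```

Because the target can move between ticks, the "missile" still hits a helicopter that a fixed curve would miss.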
Physics is emergent, which can be good but hard to debug. For example if the AI lets a car drive into an object then the physics will throw the object out at high velocity. You’ll also have to break real-world physics or the game will be boring. For example in a hockey game a real-world speed for the players feels slow.
AI camera models are motivated by gameplay goals: the need to see the player, to see important AI entities, and to improve controls.
- simple fixed camera like in pong
- tracking camera / following camera
- instant replay camera
- artist controlled camera
In Simpsons Hit & Run, if a car goes over a jump then an instant replay camera will show the jump.
Driving games camera models
- first person – glued to the bumper with N field of view
- third person – should have lag, else it will feel like the car is nailed to the center of the screen.
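The lag on a third-person camera is often just exponential smoothing. A sketch (the smoothing constant is an assumed tuning value): each frame the camera closes only a fraction of the gap to its desired position behind the car.

```python
# Lagging follow-camera sketch: exponential smoothing toward the desired
# position, so the car isn't nailed to the center of the screen.
def follow(cam, target, smoothing=0.2):
    """Move the camera a fraction of the way toward its desired position."""
    return tuple(c + smoothing * (t - c) for c, t in zip(cam, target))
```

With `smoothing` near 1 the camera snaps (no lag); near 0 it floats loosely behind.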
- gameplay touches on everything
- We’ve skipped some areas like path finding, enemy ai, or ai animation control.
Gameplay section was done at 9:54. We took a break until 10:00.
It’s 10:04 and we’re starting on Graphics
A glimpse at a graphics programmer’s job.
- main job is creating infrastructure for artists. Work with the art director. Common for the art team to have 3x artists vs programmers. Programming scales better than art; there are only so many things to do.
- Integrate renderer with components.
- They need to know OpenGL/DX but this is only the base of the job.
- Performance tuning is heavy in graphics: memory bandwidth, GPU time.
Towards the goal of helping the art design they must deconstruct what happens in a scene or concept art. For example we tried to pick out elements from a photo of a car: things like shadows, reflections, lighting, and motion blur. Then we looked at some concept art for Prototype. We noticed that the lighting model was not realistic (too bright) but still had higher-contrast shadows than typical games.
Art production requirements:
- efficiently generate buildings & varied store fronts
- generating reusable materials
- decide geo vs texture detail
- create signage props
- workflow for placing lights and signs
- solution for creating light volumes
- pipeline for static lights
- scalable texture solution for applying grime textures
- build art for multiple Levels of Detail (LOD)
A requirement for this course is to have taken the introduction to graphics course, so this section is review. Pipeline:
- Content tools
- Asset conditioning
- scene management
- submit rendering
- geometry processing
Artists create resources: models, textures. Some games have the artists create shaders; others leave shaders to the graphics programmers.
Assets then need offline processing for optimization and transformation for the game:
- Export geometry
- Optimization of geometry
- Merge vertices
- texture compression
- shader compilation
PC may require some of the above steps to be done at runtime. For example shaders must be compiled at runtime. In theory offline processing should help startup time, and yet consoles manage to have silly slow startup times.
Done on the CPU: we cull things before we send the vertices for rendering.
- Frustum culling – skip things out of the camera view
- occlusion culling – quick reject objects behind other objects
There are complex data structures for this (octree, KD-tree, etc.); otherwise a linear search can work.
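The per-object test behind frustum culling is just a signed-distance check against each frustum plane. A sketch reduced to 2D (a real frustum tests six planes; the function name is my own):

```python
# Frustum-culling building block: reject a bounding sphere that lies
# entirely on the negative side of a view plane. A full frustum cull
# repeats this check for all six planes.
def outside_plane(center, radius, plane_point, plane_normal):
    """True if the sphere is wholly behind the plane (cullable)."""
    dx = center[0] - plane_point[0]
    dy = center[1] - plane_point[1]
    dist = dx * plane_normal[0] + dy * plane_normal[1]
    return dist < -radius
```

Spheres that straddle a plane are kept, which makes the test conservative: it never culls something visible.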
Since graphics is not my area, I took the time to fix my FxOS patch.
We took a break around 11:10 and came back at 11:16 to look at Uncharted 3 with a critical eye. I’ll stop typing now. Long loading screen.
We took a lunch break at 12:05. Came back at about 13:00. One of the developers gave some interesting behind the scenes info on the new consoles. Pretty sure the info was not intended to leave the room. Nothing that should have been under NDA but still interesting tidbits we’ll hear in a few years.
Back to graphics.
GPU architecture and internals
Missed 40 minutes of notes here, was working on some emails. Interesting aspects of GPU programming.
Use cube maps, aka 3D maps: a way to store static data for fast runtime lookup.
Ambient occlusion map: biases the light level. A texture which darkens the lighting.
Dynamic lights: many high-power dynamic lights are slow since every light must be evaluated in the shader.
Deferred rendering separates surface property calculation from lighting.
- Blobs – just stick a blackish blob under something. Example seen from Simpsons Hit & Run.
- Projected texture – shadow based on simplified model
- Stencil – complex magic with something called a stencil buffer. Looks very nice, clean edges.
- Shadow maps – render the scene from the light’s eye and store that depth buffer. Still complex but cheaper than stencil. As you might be able to tell: I’m not my team’s graphics expert. Instead I’m going to do our AI, and some art.
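The shadow-map test itself is small enough to sketch, flattened to 1D for clarity (names and the bias value are my own): first store the nearest depth per map coordinate as seen from the light, then a point is shadowed if something closer to the light already wrote a smaller depth there.

```python
# Shadow-map sketch in 1D: build a depth map from the light's view, then
# test points against it.
def build_depth_map(occluders, width):
    depth = [float("inf")] * width
    for x, d in occluders:  # (map coordinate, distance from light)
        depth[x] = min(depth[x], d)
    return depth

def in_shadow(depth_map, x, d, bias=1e-3):
    # The small bias keeps surfaces from shadowing themselves
    # ("shadow acne").
    return depth_map[x] < d - bias
```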
Frame Buffer Effects
Render the scene to a texture then perform post-processing:
- motion blur
- refraction / reflection
- color correction – less correction and more mangling. Things like making grass green or adding bloom.
Billboards: a sprite which is aligned with the camera on N axes.
Point sprites: GPU-powered primitives which are aligned on 3 axes to the camera.
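For the one-axis billboard case, the alignment is just a yaw rotation toward the camera. A sketch (axis convention and names are my own assumptions):

```python
# Axis-aligned billboard sketch: compute the yaw (rotation about the
# vertical Y axis, in radians) that turns a sprite to face the camera.
import math

def billboard_yaw(sprite_pos, camera_pos):
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[2] - sprite_pos[2]
    return math.atan2(dx, dz)  # yaw measured from the +Z axis
```

A full (three-axis) billboard would instead build a rotation from the camera's basis vectors on the GPU, which is what point sprites give you for free.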
Skinning: a character mesh which can be deformed to match movement.
Display latency: some games use triple buffering so that they can be rendering 100% of the time, with no need to wait for the double buffer to blit.
Graphics programming wrap up.
- graphics programming is a collection of disciplines
- know the hardware
- know the techniques
- follow the research
- look at other games
- check out demos from the graphics makers
- read Real-Time Rendering (Akenine-Möller, Haines, Hoffman)
Break taken at 14:27. We’re back at 14:36.
We’re exploring Assassin’s Creed 4. That took an hour. Now we are discussing the project and work allocation. Most work has been assigned. On to tomorrow!