Category Archives: School

Gettext’s bindtextdomain() ignoring directory

If you run strace on a program that uses the gettext internationalization library, you may see something like the following:

open("/usr/lib/locale/ru/LC_MESSAGES", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)

This means gettext is looking in the default locale directory, which is /usr/lib/locale on my Ubuntu machine. This was a bug for me because I had used bindtextdomain() to change the directory for the locale’s .mo files.

The cause of gettext ignoring my custom locale directory was that I had set my LANG environment variable to just the language code, for example LANG=ja.

Instead LANG should use the full xx_CC format, for example:

LANG=ja_JP.UTF-8
You can check what your terminal’s locale environment variables are set to with `locale`. An example of this is below.

danieru@danieru-x1:~$ locale

Above was a correct and valid set of locale variables. What caused gettext to not find my translation’s .mo files was the following broken set.

danieru@danieru-x1:~$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory

Notes from Day 5 of UCalgary CPSC 585 Winter 2014

Final day!

Tuesday 7th:

Console Architecture

Our lecturing developers are quite console-focused, so they should know lots of interesting tidbits.

We’ll talk about

  • what is a console
  • console components
  • differences between consoles and PCs
  • benefits of console development
  • development environment
  • console game design
  • PS3 in detail

What is a console

  • dedicated game machine
  • Nintendo Wii (U)
  • GameCube
  • N64
  • NDS
  • etc

Console history

  • Playstation (1995) – 33MHz MIPS, 2MB RAM, CD storage
  • Playstation 2 (2000) – 300MHz MIPS, 32MB RAM, DVD storage
  • Playstation 3 (2006) – 3GHz PowerPC, 256MB RAM and 256MB VRAM, dual layer Blu-ray
  • Playstation 4 (2013) – 1.6GHz 8-core x86, 8GB RAM, four layer Blu-ray + HDD

Differences between consoles and PCs

  • TV vs higher resolution monitor
  • TV vs high color accuracy monitor
  • No HDD on some consoles
  • No virtual memory (on Linux malloc will never fail since the memory is not allocated until you write to a page).
  • No keyboard or mouse – makes FPS and RTS games hard to implement

Console benefits

  • Fixed target, you can make assumptions without crying in a month
  • “dedicated” hardware, not sure how Xbone falls under this
  • consoles are “cheaper”
  • consoles are “more secure”, “less” copyright infringement “Please ignore the dreamcast”
  • more people buy games; meanwhile I have 144 games on Steam. Yes, I’m not a console gamer
  • “Consoles are where the money is for games developers, certainly compared to pc”

Console liabilities

  • underpowered
  • little or no operating system
  • Lots of hardware level programming, DMA, task scheduling
  • closed production and distribution models, I agree

Development environment

  • Games are written on host machine
  • cross-compiled; Visual Studio for Xbox, GCC and SN for PS3, Clang for PS4, CodeWarrior for Nintendo Wii.
  • Download to the dev console over network or USB.

The Wii was a bit crazy. During development it would be connected over Ethernet, USB, SCSI, and serial. All at the same time.

Testing and debugging

  • game runs on dev console
  • debugging is done on host connected by network
  • Xbox debugged through Visual Studio
  • PS3 debugged through the SN debugger
  • Emulate the DVD instead of burning a disc

Development libraries

  • No operating system, but often have libraries
  • Lots of variation between consoles
  • early-generation support will be weak
  • sometimes poorly translated from Japanese
  • “On the PS2 Sony had a knack for writing documentation which was correct, and on time, and useless”. “This sentence describes the exact behaviour of this instruction but not what or why you might use it. It could be useful for clipping but they don’t tell you”. “The assembly documentation gave everything in opcode order. If you knew the opcode of the instruction then fine, but otherwise you’d be doing a linear search. Oh, and there was no index.”

Game design on consoles

  • limited memory
  • lower resolution
  • played in a living room and sometimes in a party

“The GPU in the PS2 was called the Emotion Engine; granted, it did make us feel emotions. Like rage, sadness, and hopelessness”

PS3 architecture


PPU, the main core:

  • PowerPC
  • Two hardware threads required to capture full performance
  • 2 x 32K L1

7 “synergistic” processing units (SPUs), co-processors:

  • custom instruction set
  • 256K embedded SRAM
  • 128 x 128bit SIMD registers
  • main memory access via DMA only


High latency but high throughput. Every PS3 has 8 SPUs, but they were having problems with silicon yield: on average every chip had a defect killing at least one SPU.


  • fast to read from GPU (22 GB/s)
  • fast to write from CPU (4GB/s)
  • slow to read from CPU (16MB/s!)

RSX graphics chip

  • 550MHz
  • based on GeForce 7800
  • 8 vertex shaders
  • 24 pixel shaders
  • 24 texture filtering units
  • 8 texture addressing units
  • peak theoretical pixel fill rate 4.4 Gpixel/s

The GPU was what held the PS3 back; it betrayed the complex architecture. You cannot use the SPUs for interesting things because they are being used to make up for the weak GPU. Each SPU is in theory equal to one of the 360’s main CPU cores, but the SPUs are all being used elsewhere.

Development Environment

  • Compilers: was a GCC fork by Sony, now SNC made by Sony
  • IDEs: Visual Studio plugin available
  • Debuggers: ProDG from Sony
  • OpenGL-like PSGL
  • GCM used for high performance

Playstation 3 issues

  • Memory dichotomy, everything is split
  • heterogeneous CPU architecture is hard to program
  • RSX performance is poor; it was an afterthought, not designed in from the beginning
  • the 360 had a more gradual and less rocky path to multithreading

Console Transitions

Warning that he cannot talk much about our current transition due to NDA and not knowing all the interesting tidbits.

  • transitions tend to happen at the same time
  • this generation some exclusive games are launching on both an old and new console.
  • the current transition was started by the Wii U; this might have sparked Sony and Microsoft to release a year later
  • business turmoil, someone might get “dreamcasted”

Loading times on consoles

  • Each generation RAM increases by 10-16x but optical disk bandwidth only by 2x
  • So for the PS4 and Xbone the goal is getting data on the hdd
  • But a full install takes an hour or more so now you can play a game while installing

We took a break at 10:45 until 11:00, now we’re talking about future of gaming.

Lunch at 12:00

Coming back at 13:00 I asked him about Mantle. He said he could not talk about that, which I think is good news since we can talk about the Ouya.

Had lecture on Memory & Data

Now lecture on Project Management

Took 15 minute break at 15:00. At 15:15 we started looking at Fifa 14 and NBA Live 14.

Our last lecture is a talk on getting a job in the games industry. If you’re interested in working in the games industry you’ll want to take this course. Sorry, I will not be blogging this part; I’ll leave it as a treat for future students.

Notes from Day 4 of UCalgary CPSC 585 Winter 2014

Monday 6th:

Game Engines & Middleware

Some subsystems are hard to create, so they often get factored out into middleware. Common middleware exists for physics, trees, movie players, rendering, and sound. Middleware became popular around 2000, during the PS2 era.

Bink is easy to integrate: you must give it a way to allocate memory, output rendering, and do file I/O.

Physics is harder to integrate.

Game Engines:

  • Quake Engine (1996) – 3d acceleration
  • Unreal Engine (1998) – modular arch, unreal script
  • CryEngine (2004)
  • Unity (2005) – multiplatform

Why use an engine?

  • content tools & pipeline
  • state of the art rendering
  • cross-platform support
  • cross-domain integration
  • easy gameplay prototyping
  • Saves you from writing glue code for a bunch of middleware
  • Peace of mind, the code is not junk

We looked at Unity and Unreal. Licensing fees were discussed but I assume again the discussion was not meant to be public.

Unity was designed without a single game in mind. Meanwhile Unreal was forked from an FPS; thus Unreal is built around levels, and a sports game must fit itself around that.

Why we do not use an engine in this course: they want us to learn the basics, and what an engine does and how. More words were used but I’ve simplified.


Optimization

  • Competition and your artists are going to be pushing your hardware’s boundaries.
  • Targeting a test may make you miss the real use case.
  • Takes more time to develop than in the faster languages
  • You’ll be running optimization on a compiler-optimized build, so debugging will be hard.
  • Hard to find the real gains
  • 90/10 rule, most code has little performance impact
  • watch for pitfalls like virtual functions
  • Cache can cause inversion of expectations

How to optimize:

  • Profile and find performance bottlenecks
  • Fix them
  • Goto step 1

Instrumented profiling versus sampled profiling versus system trace.

Structure of Arrays versus Array of Structures
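A sketch of the difference, with illustrative particle fields. A loop over one field in the AoS layout drags the unused fields through cache; the SoA layout gives that loop a dense, SIMD-friendly stream:

```cpp
#include <vector>

// Array of Structures: one struct per particle. A loop that only needs x
// still pulls y, z, and mass into cache.
struct ParticleAoS { float x, y, z, mass; };

float sumX_AoS(const std::vector<ParticleAoS>& p) {
    float sum = 0;
    for (const ParticleAoS& q : p) sum += q.x;  // 3/4 of each cache line wasted
    return sum;
}

// Structure of Arrays: each field gets its own contiguous array, so the
// same loop reads nothing but the floats it actually uses.
struct ParticlesSoA {
    std::vector<float> x, y, z, mass;
};

float sumX_SoA(const ParticlesSoA& p) {
    float sum = 0;
    for (float v : p.x) sum += v;
    return sum;
}
```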

We talked about optimization tricks and avenues. It’s now 12:00 and we’re taking a break for lunch. I’m hungry.

Coming back at 13:05 we’re talking about C++ internals.

This talk is pretty cool, I should be writing it down =\

Took ten minute break and now we’re back at 14:00, going into networking gameplay.

Sound was covered, I’ve been working on integrating bullet.

Now we’re talking about debugging and recounting our hardest bug stories. Good stories, haha. In theory I should be writing this down but I won’t since I’m lazy and you may want to enrol if you have the chance.

Notes from Day 3 of UCalgary CPSC 585 Winter 2014

Arrived early. It is Saturday so traffic is low at 8:00, but the snow is still on the road. In Calgary after a snowfall we get pseudo-lanes which may or may not match the real lanes.
Saturday the 4th:

The PS4 crashed and now crashes after every startup.

We’re starting on physics at 9:10.


Types of representations:

  • Arbitrary mesh – NO
  • triangle
  • convex hull – hard to make
  • simplified volume – boxes, spheres, cones, easy math
  • height field
  • implicit surface – spline, subsurface, but requires transformation to triangles

Collision detection

  • Collision is inherently O(n^2)
  • Games are on average better and few things collide
  • Easy assumptions, static objects, etc

Discussed detection in more detail. Discussed some edge cases and their solutions.

Break taken just before 10:00.  Back at 10:04.

Driving physics.

  • Fun is the goal, not realism.
  • vehicle control interface should be the same for players and ai
  • have tunable numbers which may not match real physics

Brick prototype

  • gravity should be 2-3x real world
  • make a brick
  • map user input to forces on the brick
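The brick prototype’s update step might look something like this. The tuning constants are made up, with gravity boosted 2.5x per the notes, and the brick is treated as having unit mass:

```cpp
struct Brick {
    float x = 0, y = 0;    // position
    float vx = 0, vy = 0;  // velocity
};

// Hypothetical tuning numbers -- not real-world physics, by design.
const float GRAVITY = -9.81f * 2.5f;  // 2-3x real gravity feels better
const float THRUST  = 40.0f;          // force applied while accelerating

// Map user input directly to forces on the brick, then integrate.
void step(Brick& b, bool accelerate, float dt) {
    float fx = accelerate ? THRUST : 0.0f;
    float fy = GRAVITY;  // mass treated as 1 for the prototype
    b.vx += fx * dt;
    b.vy += fy * dt;
    b.x += b.vx * dt;
    b.y += b.vy * dt;
}
```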

I did not take notes for this section; I was working on porting Phil’s work to Linux. We took a break around 11:20. We’re examining Big Planet Racing’s driving model. At 11:50 we took a quick look at Need for Speed: Most Wanted. It’s 12:00 and we’re pausing for lunch.

We’ve come back at 13:00. We played Need for Speed for 11 minutes.

Driving AI continued.

You may be able to guess that I am not handling our driving handling. That’s Kyle’s job =)

Took a break at about 14:25. We came back, but the PS4 has broken and we cannot use NBA as an example of animation.

We’re not using animation but this subject is being covered to give us background.

At the end we had an hour to go over our project. We tried to simplify our game and make our goals concrete.

Notes from Day 2 of UCalgary CPSC 585 Winter 2014

I came in 8 minutes late since Calgary just had a fresh serving of snow and traffic was slow. The class had waited for me which was nice. We started at 9:08

Friday the 3rd:


Gameplay

Also talked about as game mechanics. It’s the interactivity that makes games unique.


  • The game knows all and can do all so making perfect AI is easy
  • The real goal is to make fun AI
  • Represent the world to the AI by choosing a data structure

World representation consideration:

  • visibility
  • audibility
  • path finding
  • collision detection
  • sending messages to groups of entities
  • dynamically loading sections of the world

World representation types:

  • Single array list
  • KD trees
  • BSP
  • Grid
  • Graph
  • Spatial hashing

An array works well for small games and does not have any surprising usage edge cases. Big games use multiple techniques to maximize performance.

In the hulk games when you smash things the resulting debris is a real entity.

Most of the complex methods are spatial, which allow log(n) lookup, but if an object is in between cells then the code gets complicated.
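A minimal spatial hash sketch, with an illustrative cell size. The complicated case mentioned above, an object straddling a cell border, is ignored here; a real engine would also check adjacent cells:

```cpp
#include <unordered_map>
#include <vector>
#include <cmath>
#include <cstdint>

// World space is divided into CELL-sized square cells; each entity id is
// bucketed by the cell its position falls in.
const float CELL = 10.0f;

int64_t cellKey(float x, float y) {
    int32_t cx = int32_t(std::floor(x / CELL));
    int32_t cy = int32_t(std::floor(y / CELL));
    return (int64_t(cx) << 32) ^ uint32_t(cy);  // pack both coords into one key
}

struct SpatialHash {
    std::unordered_map<int64_t, std::vector<int>> cells;

    void insert(int id, float x, float y) {
        cells[cellKey(x, y)].push_back(id);
    }

    // Entities sharing the query point's cell are the candidate neighbours.
    const std::vector<int>* near(float x, float y) const {
        auto it = cells.find(cellKey(x, y));
        return it == cells.end() ? nullptr : &it->second;
    }
};
```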

Sphere of influence means instead of simulating everything in the world, the game maintains only a bubble around the player. I remember this in Just Cause, where cars disappear as soon as they drive past you. If you turn around and try to capture a cool car you just saw, you’ll miss it if you let it get out of view.

Dynamic entities might be

  • players (human & ai)
  • props
  • power-ups
  • rockets

Modeled with a simplified spatial representation for collision detection. They might have a state machine to handle behaviour, plus attributes like health.

Entity Behaviour

We want our entities to do interesting things. We can program the behaviour or we can simulate it. Simulation would be to use generic behaviours, like Givesdamage = 1 for bullets or rockets. Or Explodes = 2, etc.

Behaviour attributes make the behaviour more rich but make tuning harder. The world will be more active but less controlled.


Triggers

A common way to start entity behaviours. Common types are: volume, surface, time. I remember using these in my Fallout mod. Trigger conditions can send events.

State Machines

Used for simple entities. Most games use ad hoc state machines without a formal generic engine. Prototype (a game by Radical) had a formal generic state machine edited through a GUI.
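An ad hoc state machine of the kind most games hand-roll might look like this; the states and events are made up for illustration:

```cpp
// A hand-rolled entity state machine: no generic engine, just a switch.
enum class State { Idle, Chasing, Dead };
enum class Event { SawPlayer, LostPlayer, TookFatalHit };

State step(State s, Event e) {
    switch (s) {
        case State::Idle:
            if (e == Event::SawPlayer) return State::Chasing;
            break;
        case State::Chasing:
            if (e == Event::LostPlayer) return State::Idle;
            break;
        case State::Dead:
            break;  // terminal state, ignores everything
    }
    // A fatal hit kills the entity from any living state.
    if (s != State::Dead && e == Event::TookFatalHit) return State::Dead;
    return s;
}
```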

Mapping events to intentions

The AI interprets a button press as an intention. Any key press should be expressed as an intention, which then gets mapped. The intention then goes into the player state machine as input, but conditions like isFacingWall can act as blockers.
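A sketch of that mapping, with a world-state condition acting as a blocker; the key bindings and the isFacingWall condition are illustrative:

```cpp
// Raw input is translated to an intention rather than acted on directly.
enum class Intention { None, MoveForward, Jump };

Intention mapKey(char key) {
    switch (key) {
        case 'w': return Intention::MoveForward;
        case ' ': return Intention::Jump;
        default:  return Intention::None;
    }
}

// The player state machine consumes the intention, but a condition such
// as isFacingWall can block it before it has any effect.
Intention filter(Intention i, bool isFacingWall) {
    if (i == Intention::MoveForward && isFacingWall) return Intention::None;
    return i;
}
```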


Not always rigid-body simulation. Could be simple, like adding vectors. A full dynamics engine can be emulated much more cheaply if done right. What the engines bring you is collision detection. It can be used for animation, with procedural animation. We’ll look at FIFA for an example.

AI use of physics

In Prototype the cars are not simulated by the physics. They move along on rails until a collision. If a car collides then it is put into the physics simulation until it comes to rest. This means AI-controlled entities have no physical attributes, yet the AI must hand realistic attributes to the physics simulation at the handoff.

Hulk 2 example: when Hulk elbows a car, the angular velocity is chosen to make it launch up into the air, tumble end-over-end, and then land behind him. This is all fake, given to the physics engine by the AI. Thrown objects are not simulated until they hit something. So if you throw a car at a helicopter we want the car to hit. Except in a full simulation the car’s trajectory would have a fixed curvature, so in the game the AI instead drives the thrown object towards the helicopter like a homing missile.

Tuning physics.

Physics is emergent, which can be good but hard to debug. For example, if the AI lets a car drive into an object then the physics will throw the object out at high velocity. You’ll also have to break real-world physics or else the game will be boring. For example, in a hockey game a real-world speed for the players feels slow.


AI camera models are motivated by gameplay goals: the need to see the player, to see important AI entities, and to improve controls.

Camera models:

  • simple fixed camera like in Pong
  • tracking camera / following camera
  • instant replay camera
  • artist-controlled camera

In The Simpsons: Hit & Run, if a car goes over a jump then an instant replay camera will show the jump.

Driving games camera models

  • first person – glued to the bumper with N field of view
  • third person – should have lag, else it will feel like the car is nailed to the center of the screen.
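That lag can be sketched as a camera that moves only a fraction of the way toward its target each frame; the smoothing constant is a made-up tunable:

```cpp
struct Vec2 { float x, y; };

// Each frame the camera closes only part of the gap to its target, so the
// car is not nailed to the centre of the screen.
Vec2 follow(Vec2 cam, Vec2 target, float smoothing, float dt) {
    float t = smoothing * dt;  // e.g. smoothing = 5.0, dt = 1/60
    if (t > 1.0f) t = 1.0f;    // never overshoot the target
    cam.x += (target.x - cam.x) * t;
    cam.y += (target.y - cam.y) * t;
    return cam;
}
```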

Gameplay summary

  • gameplay touches on everything
  • We’ve skipped some areas like path finding, enemy ai, or ai animation control.

Gameplay section was done at 9:54. We took a break until 10:00.

It’s 10:04 and we’re starting on Graphics


A glimpse at a graphics programmer’s job.

  • main job is creating infrastructure for artists. Work with the art director. Common for the art team to have 3x artists versus programmers. Programming scales better than art; there are only so many things to do.
  • Integrate the renderer with components.
  • They need to know OpenGL/DX but this is only the base job.
  • Performance tuning is heavy on graphics. Memory boundaries, GPU time.

Towards the goal of helping the art design, they must deconstruct what happens in a scene or in concept art. For example we tried to pick out elements from a photo of a car: things like shadows, reflections, lighting, and motion blur. Then we looked at some concept art for Prototype. We noticed that the lighting model was not realistic, too light, but still had higher contrast shadows than typical games.

Art production requirements:

  • efficiently generate buildings & varied storefronts
  • generating reusable materials
  • decide geometry vs texture detail
  • create signage props
  • workflow for placing lights and signs
  • solution for creating light volumes
  • pipeline for static lights
  • scalable texture solution for applying grime textures
  • build art for multiple levels of detail (LOD)

Rendering Pipeline:

A requirement for this course is to have taken the introduction to graphics course, so this section is review. Pipeline:

  1. Content tools
  2. Asset conditioning
  3. scene management
  4. submit rendering
  5. geometry processing
  6. rasterization

Artist pipeline:

Artists create resources: models, textures. Some games have the artists create shaders; others leave shaders to the graphics programmers.

Assets then need offline processing for optimization and transformation for the game:

  • Export geometry
  • Optimization of geometry
  • Merge vertices
  • texture compression
  • shader compilation

PC may require some of the above steps to be done at runtime. For example, shaders must be compiled at runtime. In theory doing this work offline should help startup time, and yet consoles manage to have silly slow startup times.

Scene management:

Done on the CPU: we cull things before we send the vertices for rendering.

  • Frustum culling – skip things out of the camera view
  • occlusion culling – quickly reject objects behind other objects

Complex data structures exist for this, like octrees, KD-trees, etc. Otherwise a linear search can work.
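The frustum test itself is simple; a sketch assuming bounding spheres and frustum planes whose normals point inward:

```cpp
// Plane in the form ax + by + cz + d = 0, normal pointing into the frustum.
struct Plane { float a, b, c, d; };
struct BSphere { float x, y, z, r; };

// A sphere is fully behind a plane when its centre's signed distance is
// below -radius.
bool behindPlane(const Plane& p, const BSphere& s) {
    float dist = p.a * s.x + p.b * s.y + p.c * s.z + p.d;
    return dist < -s.r;
}

// Cull if the sphere lies entirely behind any of the six frustum planes.
bool culled(const Plane planes[6], const BSphere& s) {
    for (int i = 0; i < 6; ++i)
        if (behindPlane(planes[i], s)) return true;
    return false;
}
```

This is conservative: an object outside the frustum but not fully behind any single plane survives the test, which is fine because it just costs a wasted draw.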

Since graphics is not my area, I took the time to fix my FxOS patch.

We took a break around 11:10 and came back at 11:16 to look at Uncharted 3 with a critical eye. I’ll stop typing now. Long loading screen.

We took a lunch break at 12:05. Came back at about 13:00. One of the developers gave some interesting behind the scenes info on the new consoles. Pretty sure the info was not intended to leave the room. Nothing that should have been under NDA but still interesting tidbits we’ll hear in a few years.

Back to graphics.

GPU architecture and internals

Missed 40 minutes of notes here, was working on some emails. Interesting aspects of GPU programming.

Environment maps:

use cube maps, aka 3D maps. A way to store static data for fast runtime lookup.

Ambient occlusion map: biases the light level. A texture which textures the lighting.

Dynamic lights: high power dynamic lights are slow since every light must occur in the shader.

Deferred rendering separates surface property calculation and lighting.


Shadows

  • Blobs – just stick a blackish blob under something. Example seen in The Simpsons: Hit & Run.
  • Projected texture – shadow based on a simplified model
  • Stencil – complex magic with something called a stencil buffer. Looks very nice, clean edges.
  • Shadow maps – render the scene from the light’s eye and store that depth buffer. Still complex but cheaper than stencil. As you might be able to tell: I’m not my team’s graphics expert. Instead I’m going to do our AI, and some art.

Frame Buffer Effects

Render the scene to a texture, then perform post-processing:

  • motion blur
  • depth-of-field
  • refraction / reflection
  • color correction – less correction and more mangling. Things like making grass green or adding bloom.

Billboards: a sprite which is aligned with the camera on N axes.

Point sprites: GPU-powered primitives which are aligned on 3 axes to the camera.

Skinning: character which can be deformed to match movement.

Display latency: some games use triple buffering so that they can be rendering 100% of the time. No need to wait for the double buffer to blit.

Graphics programming wrap up.

  • graphics programming is a collection of disciplines
  • know the hardware
  • know the techniques
  • follow the research
  • look at other games
  • check out demos from the graphics makers
  • read Real-Time Rendering (Akenine-Möller, Haines, Hoffman)

Break taken at 14:27. We’re back at 14:36.

We’re exploring Assassin’s Creed 4. That took an hour. Now we are discussing the project and work allocation. Most work has been assigned. Onto tomorrow!