Introduction to MIPS

My Mac has died.  I’m waiting for my backup to copy to my desktop before trying anything funky, which coincidentally gives me enough time to start right into MIPS.

MIPS is the assembly language used on MIPS RISC architecture machines.  As with any assembly, the precise set of instructions and operations changes from processor to processor.  For consistency (and sanity), I’m going to base this series of articles on the R3000 processor.  This processor is a 32-bit system that offers us a very nice feature for our use in a learning environment: we have a simulator.  SPIM is a robust simulator for the R3000 RISC processor, on which you can run your MIPS assembly code.

There are many modern versions of SPIM with plenty of GUI to go around, but I’m still a fan of the console-based simulator.  All of the code I’m going to write will be assembled and executed using SPIM ver. 8.0 for Linux, on the command line.  I’m also going to limit this first series to non-floating point calculations for simplicity.  I’ll go back and address FP operations later.  Now that the administrative stuff is out of the way, it’s time to dive right into the fun!

To begin with, our processor features 32 registers for use in our programming.  While several are reserved for special purposes, the majority of these registers can be used for general purpose computing.

$zero     0   Hardwired Zero value
$at       1   (Reserved for Assembler Use) Assembler Temporary
$v0-$v1  2-3  Evaluation Results
$a0-$a3  4-7  Used for Arguments
$t0-$t7  8-15 Used for Temporary Storage (Caller Saved)
$s0-$s7 16-23 Used for Temporary Storage (Callee Saved)
$t8-$t9 24-25 Used for Temporary Storage (Caller Saved)
$k0-$k1 26-27 (Reserved for OS Kernel Use)
$gp       28  Global Pointer (to the static data segment)
$sp       29  (Reserved for Stack Use) Stack Pointer
$fp       30  Frame Pointer
$ra       31  Return Address

These are the registers we will be using for our temporary storage throughout our programs.  While I’ll get into procedures in a future post, I do need to address one aspect of them with regard to register usage.  In the table above, I have several registers marked as Caller Saved and several marked as Callee Saved.  These are essential, albeit non-enforced, conventions that determine how you use these registers.  A caller-saved register (the $t registers) may be freely modified by any procedure you call.  Since those values may be clobbered, if the current procedure wishes to keep them for use following a procedure call, it must save them before the call and restore them afterwards.  The complement, a callee-saved register (the $s registers), is guaranteed to survive any call unchanged, so the caller has no need to back it up.  On the other hand, if you want to use any callee-saved registers in your own procedure, you must first back them up and then restore them at the end of your procedure, to fulfill your guarantee to the procedure that called you that you wouldn’t mess with those registers.  I’ll cover this in more detail when I get into procedure calls though.

RISC assemblers distinguish themselves by offering a very small, yet robust set of processor operations.  Moreover, every instruction on our machine uses the same fixed size: 32 bits.  This lies in stark contrast to the x86-based processors, which feature many more instructions, each of which varies greatly in size from one operation to the next.  For this reason, you will occasionally see no-op (NOP) instructions in x86 code for purposes of alignment.  Within MIPS, because each instruction is precisely the same width, you can make a few abstractions and come up with a couple of common formats for packing essential instruction data.

RISC uses only three different instruction formats for all of its non-floating point operations.  These are known as the R (arithmetic-based), I (immediate-based), and J (jump-based) formats.  I’m going to cover these formats for reference; however, these formats describe only the machine language encodings that the RISC processor uses.  The MIPS assembly instructions that generate these encodings follow this section.

R Format

This format is used for all operations which do not use immediate values (values which are stored inside of the instruction directly) or direct jumps.   The following shows the encoding  format for these types of instructions:

OPCODE  RS  RT  RD  SHAMT  FUNCT
   6     5   5   5    5      6

In this format, the OPCODE does not correlate directly with the mnemonic for the operation to execute.  It is a code used to provide control signals to, among other things, the ALU.  All of the R type instructions use an OPCODE of 000000, which enables the ALU for processing.  RS is a 5-bit code that specifies which of the 32 registers to use as the first source for the arithmetic operation.  RT is a second 5-bit code that specifies which of the 32 registers will be used as the second source.  RD is a third 5-bit code that specifies which of the 32 registers will store the output of the ALU.  Notice that no memory addresses are mentioned in these instructions: the R format works only with registers.

SHAMT is a field that is used to determine the amount to shift by for shifting operations.  FUNCT is only used by R type instructions to specify which operation will be executed by the ALU.  This is the key block of 6-bits in the R instruction to specify the operation to perform.

Common  FUNCT Codes
Hex Operation
 0  Shift Left Logical (Using SHAMT)
 2  Shift Right Logical (Using SHAMT)
 3  Shift Right Arithmetic (Using SHAMT)
 4  Shift Left Logical (Using Reg)
 6  Shift Right Logical (Using Reg)
 7  Shift Right Arithmetic (Using Reg)
 8  Jump (Using Reg)
 9  Jump and Link -- Stores the address of the next instruction in $ra -- (Using Reg)
20  Add
21  Add Unsigned
22  Subtract
23  Subtract Unsigned
24  And
25  Or
26  XOR
27  NOR
2a  Set Less Than
2b  Set Less Than Unsigned

I Format

This format is used for all operations which use immediate values.   The following shows the encoding  format for these types of instructions:

OPCODE  RS  RT  IMMEDIATE
   6     5   5      16

In this format, the OPCODE again does not correlate directly with the mnemonic for the operation to execute.  Unlike the R format, this encoding contains no RD, SHAMT, or FUNCT fields; those fields would be superfluous, as the necessary data can be packed into the OPCODE, RS, and RT.  Notice that the OPCODE, RS, and RT are in the same positions and have the same sizes as in the R format instruction.  This is a feature of RISC that allows very rapid processing of instructions: the processor can break an instruction into its fields and begin working with them even before the OPCODE is deciphered.

For these instructions, RT serves as the destination, RS serves as the first source, and IMMEDIATE serves as the second source.  Notice that whereas our native size on a 32-bit system is a 32-bit word, our immediate value here is only 16 bits.  This means your immediate values can only contain 16 bits of actual data.  The processor immediately runs this IMMEDIATE value through an extender to get it up to 32-bits, so it can be used in processing.  Whether it is sign-extended or zero-extended depends on the OPCODE.

Common OPCODEs for I Instructions
Hex Operation
 4  Branch on Equal
 5  Branch on Not Equal
 8  Add Immediate
 9  Add Immediate Unsigned
 a  Set Less Than Immediate
 b  Set Less Than Immediate Unsigned
 c  And Immediate
 d  Or Immediate
 e  XOR Immediate
 f  Load Upper Immediate
20  Load Byte
23  Load Word
24  Load Byte Unsigned
25  Load Halfword Unsigned
28  Store Byte
29  Store Halfword
2b  Store Word

J Format

This format is used for all non-branch jumps:

OPCODE  ADDRESS
   6      26

Notice that the OPCODE is the same size and in the same position as in the above two instruction formats.  This enables the RISC processor to strip out the OPCODE and begin processing it immediately, regardless of which format the instruction is in.  Also of note here is that the address field is now 26 bits in size.  It is still not a full 32-bit address, but it holds far more than the 16-bit immediate of an I format instruction.

There are only 2 J format instructions.  The first, j (OPCODE hex value 2), performs an unconditional jump to the given jump address.  The second, jal (OPCODE hex value 3), does the same while also storing the address of the next instruction in $ra.  The target address is determined by taking the upper 4 bits of the program counter, appending the entire 26-bit address field, and then padding the two lowest order bits with zeroes.  This forms a 32-bit address which is used for jumping.

Now that I’ve covered the instructions that RISC receives, let me begin to cover the actual MIPS instructions for programming.  I’m going to wrap this entry up pretty quickly, so I’m just going to cover enough to code up a simple “Hello World”.

In order to do this, however, we will need to look at two more concepts: Segments and Interrupts.

A program on your computer is nothing more than a simple binary file that contains large amounts of different types of data.  Your compiled source code is merely one element of this data.  It joins things such as your symbol table, hardcoded strings, and other initialized data that is necessary to operate your program.  The operating system executes the program by extracting the information from its various data sections.  These sections are called segments and are used to partition your program file up into logical arrangements.  For this article, I’m going to cover your two most important sections for MIPS programming: .text and .data


.text

This section contains all of your program’s source code.  In here, you’ll intermix MIPS instructions and assembler directives that instruct the assembler how to assemble the code into the final file.


.data

This section contains all of your hardcoded data.  This is typically used to store your strings; however, it is also used to set aside memory for complex data types, such as arrays and structs.

For our Hello World, we will need to use both of these sections.  Fortunately, in MIPS, switching sections is as easy as using those two directives: .text and .data.  These can be used throughout the code and in as many places as needed to switch back and forth between them.  When the program is assembled, all of the data segment sections will be combined together to form a single data segment.

The last thing we’ll need to know is the Interrupt table for the system.  This table contains all of the system calls that are made to perform system level operations.  In our case here, we want to use the system to write a message on to the screen.  The following table represents the common system calls:

Operation      $v0   Arguments                    Result
Print an Int    1    $a0 - Int to print           Prints $a0 to the screen
Print a String  4    $a0 - String to print        Prints $a0 to the screen
Read an Int     5                                 Reads an Int from the keyboard into $v0
Read a String   8    $a0 - Buffer, $a1 - Length   Reads a string into the buffer at $a0
Exit           10                                 Exits the program

Now, we have all of the knowledge needed to produce a program with only one exception.  We don’t know any MIPS instructions yet.  Fortunately, for the purposes of this Hello World, we don’t need much.

MIPS instructions typically use a tri-element format.  Arithmetic instructions are generally of the form INSTR DEST, SRC1, SRC2 and perform the operation using SRC1 and SRC2 as inputs, with DEST as the output.  For instructions dealing with an immediate or an address, this condenses into a two-argument format: INSTR DEST, SRC.  This is the format that we are going to use for our first program.

Looking at the system call table, we see that we want to print a string.  So, we need to load the value 4 into the register $v0, and then we need to load the address of the string into $a0.  For this, we need to learn our first two MIPS instructions: li and la.

li  (Load Immediate)

Usage: To load an immediate value into a register.
Format:  li $reg, imm
Example:  li $t0, 1
Outcome: Loads the value of 1 into the register $t0.

la  (Load Address)

Usage: To load an address into a register.
Format:  la $reg, address
Example:  la $t0, g_string1
Outcome: Loads the address of the string labeled g_string1 from the data segment into register $t0.

syscall (Call the System Interrupt)

Usage: To make a System Call
Format: syscall
Outcome: Calls the System Interrupt.  The System will process the command stored in $v0 and use the other arguments as specified.

So, let’s take all of this and put it together into a program.  All strings have to be placed into the data segment.  We do this by using the .data keyword to switch to the data segment and then allocating space to store the string into the segment.  This is done using a processing directive (which I’ll cover in the next section) called .asciiz which takes the given ASCII string and places it into the data segment, using the specified label to refer to the address.  The ‘z’ in asciiz is crucial here as it refers to zero-termination.  This is the null terminator that specifies the end of strings.

Following this, we’ll switch back to the .text segment, which is where our source code lies.  In this segment, I’m going to load 4 into $v0 to specify that I want to output a string onto the screen, then I’m going to load the address (using the label of the string) into $a0.  Once I have this loaded, I’ll make a system call.  After this, I’m going to load 10 into $v0 to specify that I want the program to exit, followed by another system call.

        .data
g_s1:   .asciiz "Howdy World!\n"
        .text
main:   li $v0, 4
        la $a0, g_s1
        syscall
        li $v0, 10
        syscall

And the output:

kandrea@zeus:~$ spim -f hw.mips
SPIM Version 8.0 of January 8, 2010
Copyright 1990-2010, James R. Larus.
All Rights Reserved.
See the file README for a full copyright notice.
Loaded: /usr/local/lib/spim/exceptions.s
Howdy World!

Assembly programming relies on a tremendous amount of background knowledge; however, now that most of it is out of the way, future posts will dive straight into the instructions and get some good code going early.

- Kevin Andrea


An Initiation to the Secret Society of Assemblers

One of the best skillsets I have learned over the past decade has been assembler programming.  I have long been an avid fan of programming in C, however, like many people, I often struggled with pointers and managing memory.  It was only after I began programming in assembly that I really developed an appreciation for the operations on the machine level.  Once I understood what these operations were truly doing and how the language was translated into machine instructions, my programming in C improved almost overnight.

In this series of posts, I am going to attempt to impart some of my knowledge in programming in assembly.  I’m planning to do two series, back to back.  The first will be on an introduction to assembly programming on a RISC architecture machine, using MIPS.   The following series will be on x86 programming using IA32.  I’ll focus on presenting the AT&T syntax (used by GCC) as there are very few resources online for this assembly; however, I’ll also intermix some of the more common Intel syntax (used by MASM on DOS/Windows and NASM on Linux).  If I do not get bogged down again with course work, I would also like to chase these two with a follow-up series on the Intel64 (AMD64) assembler.

First off, I would like to list a few reasons why learning an assembler is still practical in today’s world.  As a Teaching Assistant for the Computer Science department, I often get glares and questions from students when I mention that they will have to study anything lower level than Java.  We have a lot of students who complain with a sort of visceral anguish over having to code in C, which is perceived to be obsolete and useless in modern computing.  With this near-universal panning of C for being low-level, assembly must seem even more absurd to learn for programming purposes.  Fittingly, though, our biggest supporters typically come from across the hallway: the Electrical Engineering students.

Assembly is, on its face, nothing more than a language of mnemonic codes that can be translated one for one with machine instructions.  Now this is, of course, a bit of an over-simplification as some modern assemblers are built on more complicated macros that provide the programmer with some additional capacity.  That said, assembly exists, for all purposes, at that one-to-one level with machine programming.  Every line of assembly you write will be loaded onto the processor at some point and executed atomically.

So, how does this become a beneficial skill to know?  First, if you are working with any custom hardware or using any microcontrollers, odds are that you will not have any of these high level languages available to program in.  The team that created the microcontroller or microprocessor has designed the hardware so that it responds to various states natively.  These states, represented by high and low levels on individual wires, are passed into multiplexers and are used to change the control logic on the Arithmetic Logic Unit (ALU), to select the registers for use, to select the memory location to fetch, and so forth.  These individual bits of data are routed throughout the processor to control instruction execution, input, and the output of data.

Setting these bits used to be a mechanical process and was done on the front panel of computers.  An operator would have a table of binary codes that equated to specific instructions and codes that represented registers, then they would use physical switches to set wires to high and low levels and then another button to advance the clock to execute the instruction and give the operator time to input the next instruction.  This process was automated using a technique from the player piano days by using punch cards to pre-load the sequence of switch inputs for an instruction.  These cards would be fed through a reader like a deck of cards and each one would set the processor’s next instruction.  On modern systems, this has been replaced by digital means, using bits to represent individual switch states in an instruction.  These bits are still encoded using the same techniques.  For example, the following is a machine language line of code for a RISC processor:

000000 01010 01001 01000 00000 100000

This sequence is loaded into the processor and separated out.  The first block 000000 is sent into a Control multiplexer.  This multiplexer translates the sequence into a bit that enables the ALU Control Unit.  The second block consists of 5 wires and is sent into the Register Control unit.  This unit activates the register $t2 and enables it to send its full data to the first input of the ALU.  The third block activates register $t1 and enables it to send its full data to the second input of the ALU.  The fourth block selects register $t0 as the destination for the output of the ALU to be stored into.  The fifth block is fed into a sign extender and into a shifter to set how much to shift the value by.  This result is then enabled or disabled based on the operation.  In this case, the operation is not a shift operation, so it is not used.  The final block is the arithmetic operation to be performed.  These six wires are fed into a multiplexer that drives the selector of the ALU.  This case selects addition in the ALU as the operation to perform.

In the end, this cryptic sequence of numbers sets 32 physical wires to either high or low states to configure the processor to perform a single operation.  Once this operation completes, the resulting data from the ALU is sent back into the register control block, which stores the data into a register or into physical memory as specified by the instruction.  For programming purposes, we could code entirely in these sequences of bits; however, it is very difficult to accurately remember and use sequences of bits, and it is even harder to debug them by visual inspection.  This problem gave birth to a very simple solution: use mnemonic codes.  Each set of the above numbers can be replaced by a simple human-readable code of ASCII characters.  The assembler then reads the codes and translates them back into the binary digits for the machine to use.  The above sequence of bits is normally written in MIPS assembly in the following way:

add $t0, $t2, $t1

This is a much simpler way to write programs!  This contains all of the data needed by an assembler to translate it back to the above binary sequence.  Notice here that there are only four symbols, whereas above there are six binary strings.  Since the add instruction does not use the shifter (shamt) field, it is omitted in programming.  The assembler translates add back to 000000 SSSSS TTTTT DDDDD 00000 100000, where the SSSSS is the source register code ($t2 = 01010), TTTTT is the second source register code ($t1 = 01001), and DDDDD is the destination register code ($t0 = 01000).  These three register codes are manually specified in the instruction, but the first, fifth, and sixth bit strings are added directly from the operation code alone.

So, assembly is just machine code with mnemonics for human readability.  How is this a good thing to know these days?  Going back to an earlier thought, microcontrollers and custom processors all use the same sort of physical stages that I described earlier.  The problem is that once these chips are manufactured, there is often little to no direct programming support for them.  Programming on these custom chips is commonly done by taking machine code (which is unique to the processor) and loading it onto an EEPROM or some other form of memory incorporated into the controller.  This machine code then executes at power-on of the processor.  Often, this execution is done without any form of operating system; it is merely your code that runs at startup.  Since this machine language is custom, finding a compiler that can produce it can be impossible.  So you are limited in your programming to using the manufacturer’s assembler and their mnemonics.  Knowing the fundamentals of assembly programming can save you if you want to program certain custom hardware.

If you want to be a hero and rise to the level of demi-god, there is another really cool thing you can do at this stage.  We all know that compilers take a high level language, such as C, and convert it into machine language executables, ready to run on a target machine.  Compilers accomplish this in multiple steps, however.  The first step is a syntactic analysis of your high level code, to ensure that you are issuing correct instructions.  The second step is a semantic analysis to interpret the meaning of each of your instructions.  This step is accompanied by various levels of optimization, and it results in Intermediate Code (IC) generation.  This IC is commonly in the form of assembly!  Your compiler converts your high level language into assembly, and this assembly is then assembled into native machine language.

If you wanted to be cool, you could then take this new microcontroller, with its custom assembly, and write a compiler that will compile C (or some other C-like language) into your custom assembly.  This would enable you to program natively on your custom hardware in C!  This is commonly done by the manufacturers, who will release their hardware along with a custom version of gcc or some other compiler for the architecture.  Even with this custom compiler, you will often not have full access to hardware!  There are limitations to the C language and many of the custom hardware operations will have no equivalency.  So even with a custom compiler, it is often necessary to do some inline programming in assembly.  This entails writing your code in both C and assembly as needed.  For example:

#include <stdio.h>

int main(void) {
    int x = 1337;
    int y = 42;
    printf("X is %d, Y is %d\n", x, y);
    asm ("movl %%eax, %%edx;"
         "movl %%ecx, %%eax;"
         "movl %%edx, %%ecx;"
         : "=a"(x), "=c"(y)
         : "a"(x), "c"(y)
         : "%edx");
    printf("X is %d, Y is %d\n", x, y);
    return 0;
}

This is a simple C program that uses inline assembly (IA-32, AT&T syntax).  It is a trivial example to show how this can be accomplished in a program.  In this case, I’m declaring two integers in C, then swapping their values in assembly, and verifying the swap in C again.  This produces the expected output:

kandrea@zeus:~$ ./at
X is 1337, Y is 42
X is 42, Y is 1337

This is a crucial trick to know how to do if you plan to explore some of the non-standard features of a processor that you are programming for in a high level language.

Finally, in addition to programming for custom hardware or writing a compiler, assembly is essential to know if you plan to do any reverse engineering of code.  Decompilers exist for C, but they are by and large useless, because compilers excel at optimizing away logic.  Logic is irrelevant to a compiler; all it cares about is speed and accuracy.  I once wrote a very long C program to show myriad examples of assembly operations for a student.  I compiled the code and disassembled it to show the student the assembly, but was shocked when this was all I found:

pushl %ebp
movl %esp, %ebp
movl $0, %eax
movl %ebp, %esp
popl %ebp
ret

The compiler inspected my code and saw that I was only using local variables inside of a function.  These local variables were never returned or referenced anywhere.  There were no printf statements, and none of these values resulted in any meaningful change to the system.  The compiler realized this and classified the entire function as a no-op, so it stripped out all of my code and replaced it with a single line to return 0.  While this is perhaps the most extreme example of optimization, your compiler will frequently destroy all logical meaning from your original C code and reorder things just enough to be unrecognizable in the assembly.  The only saving grace is that the operations will produce the same output, meaning that, with a solid knowledge of assembly, you can interpret the tea leaves of disassembled functions and determine what they are doing.  From this analysis, you can often come up with an equivalent function in C.  This level of reverse engineering is actually best performed from the assembly directly.

Of course, these reasons to learn assembly also gloss over the comprehension aspect of understanding assembly programming.  My ability to program in every language benefited greatly from learning how to program in assembly.

I will leave this introductory post here and will begin the series on programming in MIPS for a RISC architecture with my next post.  I will be using a simulator (SPIM) and will include enough code and sample programs to demonstrate some of the fundamental programming constructs.

Kevin Andrea ("Demi-God of CS367")


Concatenating Strings in C

As I continue studying for finals, I have come to realize that this is something a lot of people have trouble with.  While there is a standard string library function to concatenate strings, there are a lot of other means that provide a great amount of flexibility in formatting.

Here is a simple example in C to do a straight concatenation using the sprintf function.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char *concat(char *str1, char *str2) {
  char *conc = (char *)malloc(sizeof(char) * (strlen(str1) + strlen(str2) + 1));
  sprintf(conc, "%s%s", str1, str2);
  return conc;
}

int main() {
  char hello[10] = "Hello ";
  char world[10] = "World!";
  char *hw = concat(hello, world);
  printf("%s\n", hw);
  free(hw);
  return 0;
}

This is a simple example, but one that shows how easy it can be to work with strings in C.  It starts in main by creating two arrays that can each hold 9 characters (plus a null terminator) and initializes them to “Hello ” and “World!”.  It then passes the addresses of these two arrays into a function I wrote called concat.

Once in concat, I create a brand new array on the heap that is sized exactly for the non-null characters of both strings, plus one extra space for the new null terminator.  sprintf then lets me format a string with all sorts of elaborate options.  In this case, I’m doing a simple concatenation, so my format string is “%s%s” and the arguments are my two string parameters.  sprintf builds the new string from the format codes and places the characters into the array specified by its first argument, my pointer to the heap array.

Once I return, I am now free to use this new concatenated string as I wish.  Of course, I will free it when I am done.

Now, this was just a simple example.  Below are some examples of creative sprintf uses I have used in the last few weeks of classes:

sprintf(message, "Person [%d]: Notification that a %s is leaving the room.\n", id, gender ? "male" : "female");

sprintf(variable->label, "g_%s%d", name, varctr++);

Just was thinking about this as I was looking at materials for finals.



Extending C

I love C.  I enjoy being able to implement my ideas quickly in a manner that will compile, with optimization, and run relatively fast.  I also enjoy, academically, having to truly understand the structures I will be employing.  Without a solid understanding of a concept, I will not be able to implement it directly in C; this is a sanity check for my learning.

C is also a beautiful system for processing data very fast at a low level.  For example, I am presently writing a line and circle detection algorithm for a set of robots that will be competing in the RoboCup competition.  The vision processing that provides the foundation environment from which I will be able to work is all rapidly implementable at this level.  The senior level course we teach on basic computer vision focuses mainly on the algorithms, but then switches to matlab for their use in projects.  It is certainly one thing to effectively use a Sobel filter to prepare an image for applying the Hough transform, however, it is another thing entirely to understand these two transforms enough to be able to write them.  There is certainly no doubting that the algorithms in matlab are expertly written and are the pinnacle of optimization, but there is also no question that matlab cannot be run on an embedded system in a realtime environment.

As much as I love C, I also realize that there are features of other languages that are simply foolish to attempt to emulate within C.  Imagine trying to create an associative array that is able to store key:value pairs, where either, or both, of those elements are strings.  This is certainly implementable, though it would certainly be significantly more daunting to do so than to just fire up Python and use a dictionary structure.   For rapid prototyping, I would be a fool to spend half a day on this problem in C, when an entry-level student could write a similar program in Python in fifteen minutes flat.

It would also be idiotic to, in the same program, expect Python to run tight loops of operations on pixel-arrays in anywhere near the time that it could be done in C.  Generally, this predicament has led me to weigh the benefits and problems with different languages and then pick the language with the more natural implementation and runtime speed for the problem at hand.  In many cases, however, this chosen language will be underperforming in select areas.  There is a third solution that has always been in the back of my brain, but never actually made it to my conscious thought-process.  This is to use the more powerful, low-level language, and then extend its functionality through the use of an embedded scripting language.

Yesterday I finally got around to turning this dormant thought into reality.  I wrote a quick program, for a classroom demonstration, that adds the ‘dictionary’ functionality of an associative array to C using embedded Lua code.  This has the advantage of using compiled C for speed while borrowing Lua’s table data structure to emulate a ‘hashmap’ or ‘dictionary’ feature without any heavy C coding.  In fact, writing a full-featured associative array in Lua was precisely this easy:

hashmap = {}

function addEntry(key, value)
  hashmap[key] = value
end

function retrieveEntry(key)
  return hashmap[key]
end
All I had to do was create matching functions in C that push the arguments onto an argument stack, make the Lua function call, and let Lua run its own code and return its value(s), which are then received on the return stack and processed by C.  At this point, C is back to its normal compiled efficiency to do with that data what it needs to.

Here is one of the function calls I used to interface with the Lua functions:

int hashSearch(char *name) {
    int err;
    lua_getglobal(LS, "retrieveEntry"); // Designate the function in Lua to call
    lua_pushstring(LS, name);           // Push in the parameter
    // Call the function with 1 argument, expecting back 1 return value
    err = lua_pcall(LS, 1, 1, 0);
    if (err) return 0;
    // Retrieve the first element on the return stack from Lua and convert it to a number
    else return lua_tonumber(LS, -1);
}

Code format on this blog notwithstanding, it’s not a terribly complicated process.  The argument stack is set manually, which gives the benefit of allowing multiple arguments and multiple return values from Lua.  There is also very little overhead code needed to create the Lua environment for embedding.

Once embedded and compiled against the Lua libraries, the executable is portable, with the added benefit that the .lua script is still just a plaintext Lua script file.  This makes end-user modifications effortless: your users can write or modify the Lua code executed by the compiled program without having to recompile.  Obviously, that would be a bad thing to let users play with in this particular example, but, when done properly, Lua extensions to your codebase can give users a great deal of flexibility and power over the program without any need to recompile.

This was a fun experiment and something that I look forward to practically using at a later time.

Kevin Andrea


Interesting Side Effect when Bit Shifting

Last week I received a question from a student in the course I TA for, one I had no immediate answer to.  The student was playing with basic bit-shifting operations in C when they attempted to perform a left shift by 32 bits.  Each shift to the left by one performs a simple set of operations: the most significant bit (MSB) of the bit sequence is pushed into the Carry Flag (CF) on the processor, all of the remaining bits move one place to the left, and the least significant bit (LSB) is set to 0.  For example, shifting the 4-bit number 0010 left by 1 results in 0100.  Doing the same operation again gives 1000.  Once more gives 0000, since the last remaining 1 bit gets shifted off.

As this question pertained to a 32-bit system, the expected result of shifting any bit sequence left by 32 would be 0, since every 1 bit would have been shifted off the end to the left, leaving nothing but 32 0s in its wake.  The problem the student noticed was that when they used literal values in the statement (-1 << 32), they got the expected result of 0; however, when they used a variable with the exact same value (-1) and ran the statement (a << 32), the answer they got back was the original value of a.

For me, this was a very interesting question.  Logically, both of these statements should be precisely equivalent: shifting anything left by 32, regardless of value, should result in 0.  Since the C code (being a high-level language) is abstracted from the processor, I knew I’d need to drop down a level to find out, first, why these two statements gave dramatically different results and, second, why one of them was simply wrong.

To go one level under the hood, I wrote a simple C program, compiled it, and immediately opened the executable inside gdb (the GNU Debugger) to examine the generated assembly code.

/* Relevant fragment of my C code */
int pftest1() {
    return (-1 << 32);
}

int pftest2() {
    int a = -1;
    return (a << 32);
}

/* Output of Assembly, with my comments added after the // marks */
080483a4 <pftest1>:
mov $0x0,%eax // === return 0;

080483ae <pftest2>:
sub $0x10,%esp
movl $0xffffffff,0xfffffffc(%ebp) // Moves -1 (the value to shift) into a temporary stack slot
mov 0xfffffffc(%ebp),%eax // Moves -1 from the stack slot into EAX (a general-purpose register)
mov $0x20,%ecx // Moves 32 (the number of bits to shift) into ECX (another general-purpose register)
shl %cl,%eax // Shifts the contents of EAX left by the count in CL, the low byte of ECX
// This last line transliterates to SHL 32, -1.

Even though the two functions were logically identical in C, in the compiled code the first function (which properly returned 0 for the student) consists of nothing but the equivalent of ‘return 0;’.  It literally returns 0 immediately.

For the second function, which again was logically identical, we can see that the assembly is actually performing the requested operation.  It moves the variable’s value into a register, shifts the register left by 32, then returns the resulting value.  This is exactly what we asked for, which is ironic in that it also gives the wrong answer.

So, what happened?  Here’s what I was able to figure out from this test.  For the first statement, the one that used hardcoded values on both sides of the << operator, the compiler intercepted the intent of the programmer, which was to wipe all of the bits off the slate, and thus generated assembly code that simply outputs 0.  Not only does this achieve the user’s intention, it also runs really fast.

The second block is more fun: since there was a variable in the equation, the compiler was unable to determine at compile time exactly what would happen.  Even though the << 32 was hardcoded, once the compiler saw a variable it realized it couldn’t predict the variable’s possible values, so it generated assembly to pass the buck to the processor.  In this case, that means instructions for the processor to load the value and perform the requested shift.

This is where it gets fun.  In the specification for IA32 processors, in the description of the SHL opcode, I found this: “The count range is limited to 0 to 31”.  This is something the geniuses who actually design processors came up with to keep us from breaking their stuff.  They won’t allow a shift of more bits than exist in the register.  To keep the count within that range, the designers of this particular processor mask the shift count down to its low five bits, a sort of modulo-32 operation, before the SHL instruction executes.

This means that when the student tried to run ‘a << 32’, the processor interpreted it as ‘a << (32 % 32)’, which is equal to ‘a << 0’.  Hence, the result was the same as doing no shift at all.  Running ‘a << 33’ would then be the equivalent of running ‘a << 1’.

This is off topic for most of the people who would read this blog, but it provides a wonderful insight into the difficulties of optimizing code.  As it happens, the C standard actually leaves a shift by the full width of the type (or more) undefined, which is exactly why the compiler and the processor were free to disagree here.  In a more robust coding project, this would have been very hard to trace as an error, especially since a logical review of the code would yield a flawless-looking, albeit weird, statement.

Kevin Andrea


Indexing Media

Last summer I had a task to play sounds from a large library in response to both user- and system-generated events.  The recording artist delivered the product as a single raw session containing all of the relevant phrases, spoken at a slow cadence.  Not having any experience with audio processing, I first looked for a way to chop the audio up into approximately 100 different audio files.  That immediately became an issue: a large number of very small files to load onto a mobile device.

I looked into the recording session again and saw something I had missed the first time through.  The recording artist had a natural cadence, with almost exactly one second between the starts of consecutive phrases.  Moreover, each phrase was small enough to fit within a one-second clip.  Being an initiate coder, I decided to make use of the given environment: I played with loading the sound file into memory as a whole, then using a form of indexing to play only the sounds I was interested in before closing the file.

I first went in with Soundbooth to do a little trimming and thresholding on the audio, so that each sound begins precisely on an even 1000 ms interval.  I also researched the sound system of Corona, the framework I was coding under (not using their OpenAL library directly), and confirmed that I was not shooting myself in the foot by loading an audio stream of that size.  At that point, I created a partitioned space, breaking the audio down into categories with a few chosen key index values, then referencing the exact sound of interest by adding an offset.

table.insert(mainView.sound.queue, {mainView.sound.recording, soundIndex, 1000})

local stream = obj[1]
audio.seek(obj[2], stream)
audio.play(stream, {channel = 2, duration = obj[3], onComplete = playSound})

The first table entry (1-indexed?  Really?  Thanks, Lua) lets me select which of the main recording streams I am interested in.  The key here is the second table entry, which provides the position in the stream to seek to (here in milliseconds).  This gave me a very easy way of selecting sounds and playing them back without having to worry about unloading and reloading the next cut.  Furthermore, since this was an active stream, I was able to enqueue sound requests and play them back as fast as the player could receive the command to seek to the new location and play.  This turned out to be a very important feature in this implementation: playing back a set of audio clips without any unnatural pauses in between.

This was a cool little trick that I was able to get away with here because the loaded audio file sat well within the memory constraints I was working under.  It allowed very fast switching between sets of sounds, and it gave me a fast, rather elegant way of selecting the sounds to be played, using simple key values and offsets for indexing.

Kevin Andrea



Resuming Operations

Four months and one sanity ago, I embarked upon a perilous journey through a semester fraught with impending doom.  My online and social presence was snuffed by courses in computer architecture design (reverse engineering CPU design through assembly instruction expected results), artificial intelligence, physics (I was sealed in a Gaussian bubble until I understood Maxwell’s equations), and numerical analysis (the programmer’s math course), all while working 5 hours a week as a peer advisor (drop-in tutoring for engineering students), and 10 hours a week as a TA for a systems programming course (yay C and Assembly!).

I am now in a much happier place, with a much lighter course load and a corner desk in a robotics lab.  As the first step in rebooting my life, I plan to resume posting the coding insights, tips, tricks, and revelations I have while working on coursework (compiler design: I’m presently writing a recursive descent parser that reads in a grammar and outputs properly formatted C code for a recursive descent parser recognizing strings in that language) and while working on the side for the GMU RoboCup team (currently implementing a Hough transform for circle detection for our robots).  I’ll also be resuming work on finishing a planner that would ultimately help me adopt a consistent schedule for my own physical health.

I’m going to vary topics quite a bit as I post, so I shall make ample use of tags.  I’m expecting genetic programming, programming language design, and computer vision to dominate the next few posts.

isCoding = 1;

- Kevin


Home Office

I’ve been mostly down with a pretty nasty cold for the last bit of this week.  One of those lovely deals where it feels like acid is burning away the inside of your nose and throat.  One of the best parts of my current work schedule is being able to adjust by performing a >> 2 on the week and kicking out some good hours over the weekend.

I’m working on CoreData right now, using data from two different entities across several controllers.  Once I have the data being created, accessed, removed, and sent to the proper notifications, I’ll be posting quite a bit on using CoreData.  For now though, during my dinner break, I decided to add some information about my home office and the equipment.   I don’t have much to my name, but I’ve found some pretty good ways of extending what I have to enhance productivity.

I took some pictures and added them to a set on Flickr.

Starting off with the main desk, I’m running a MacBook Pro 2,2, currently running OS X 10.6.8.  This computer has a 2.16 GHz Intel Core 2 Duo processor, 2GB of DDR2 SDRAM at 667MHz, a Radeon X1600 video card,  and a new 500GB HDD that runs at 7200RPM.  I also have a 320GB external HDD for my TimeMachine.  I have to admit, TimeMachine is an essential piece of software.  When I recently replaced my hard drive to upgrade performance, I was able to do a clean install and reload to the point of operating as it was before the swap using only the OS X install disc and my TimeMachine backup.  Amazing software, that.

The MBP is accompanied by two external monitors.  The first, the largest of the three, is an Acer X223w, a 22″ widescreen LCD connected directly to the external DVI port of the MBP.  The second external monitor is a KDS Visual Sensations VS190i CRT.  I have this plugged into an amazing little adapter that drives a VGA display through USB: a SIIG USB 2.0 to VGA Pro adapter that I found on the shelf at MicroCenter.  I plugged it in, installed the software, and it came up instantly.  Moreover, it’s managed directly through the configuration tools in OS X, so it’s as good as native.  The only performance issue I’ve found is running full-screen video, which is to be expected.  Anything less than fullscreen seems to do quite well.

For the mobile devices on the main desk, I have a 2nd Generation iPod Touch, which I picked up in 2009, a 4th Generation iPod Touch that I got a few weeks ago, and a 1st Gen iPad that I’ve had for about 6 months now.  As an iOS developer, I’ve found all three of these devices essential.  The differences between the 2nd and 4th Gen iPod Touches are manifold: they have different hardware capabilities, which affect the display and performance of apps; they run different iOS versions (the 2nd Gen is limited to iOS 4.2.1 by Apple); and the 2nd Gen does not handle multitasking.  For a developer whose customer base can include both of these devices, knowing how to write and test code that handles background transitions, as opposed to the older termination calls, is absolutely required.

Squirreled away in the drawer, I also have a Pharos Traveler GPS v535 PDA and a Compaq iPAQ RX1955.  These are both rather capable PDAs that came with Windows Mobile 5.0, Microsoft Office Mobile applications, and basic media software.  The great thing at the time was the active development community that created some amazing programs to load.  I had Flash running, used the Kinoma media player, and was up on all of the messaging protocols.  I could watch two movies on a single charge on the iPAQ back in 2005, a full 2 years before the first iPod Touch came out.  I upgraded eagerly to the Pharos PDA, which had similar capabilities, plus more power, a GPS, and a better display.  The GPS on that got me from California to Virginia without any problems and is still in my car to this day.  I really wish Apple had added a GPS to the iPod Touch.  At any rate, because neither had a Windows button on the device, they technically could not be upgraded to WM 6.0/6.1 and were obsoleted almost overnight as programs were redistributed only for the 6.0 OS.  A shame, too; for a long time there was literally no love for someone who wanted a PDA that wasn’t a phone.

On my glass desk I have my two PCs.  The desktop is a custom build I did.  The case is a Silverstone Temjin TJ-06, which is an oddity for me as I’ve long been a supporter of Antec; this is an amazing case though, and I couldn’t pass it up.  I had an amazing 850W power supply from Antec until two months ago, when it suddenly died.  I now have a Corsair Professional Series 650W supply.  The drop in power was made possible by replacing my aging ATI Radeon 3850X2, which was both the pride of the fleet back in its day and a monumental power drain.  It had dual high-end video cards mounted inside the frame, each requiring active cooling and power from the supply directly.  My new card looks pathetic next to that, but outperforms it in every possible way: I am now running a Radeon HD5670.  The processor is a bit antiquated; it was one of the first quad-core deals on the market, an AMD Phenom 9600 quad-core at 2.31 GHz.  I’m running that on an Asus M3A32-MVP Deluxe WiFi motherboard with 4GB of RAM.  The board has copper cooling that covers all four memory banks and the bridges, quite a nice layout to boot.  The system is running Windows 7 Ultimate with close to 2.5TB of HDD space.  My boot and swap drive is a 10,000 RPM drive.

If you look carefully, you might recognize the wallpaper in the picture.  It’s a section of the Voynich Manuscript.  This is a document that was dated to around 1420 and contains around 200 vellum pages.  Most of the document contains observation notes of plants and agriculture, though there are sections on astronomy and other topics.  The language has never been deciphered.  Scholars to this day have no idea about who wrote the manuscript, what language it might be written in, or even where the observations were made — the plants appear to be mainly unidentifiable.  One of the only clues to its origin comes from the appearance of Western European style architecture in the drawings and the fact that its first record of possession (and its mysterious contents) dates to the 17th century in Prague.

The second computer is an HP Pavilion laptop, which was billed as an Entertainment Laptop back in 2008.  It features an Intel Core 2 Duo 5550 1.83 GHz processor with 4GB of DDR2 SDRAM and a GeForce 8600M graphics card with half a gig of dedicated video RAM and over 2GB available for its use. It runs Windows 7 Professional and I currently have that as my PC development platform.

Hidden from view, but evidenced by the small keyboard and mouse tucked under the CRT, is my Linux server.  This is a large desktop server that dates back to 2002, meaning it was scrounged up and saved from the giant bit bucket beyond.  Modern flavors of Linux don’t care for the old hardware, causing install failures on multiple builds of Ubuntu, Mepis, and Debian.  I got around this with what I should have done in the first place: a Gentoo install.  Everything on the system was compiled and installed for this specific box, including the compiler and the kernel.  It’s a long process, but it has always given me the absolute best Linux system in the end.

For development, I have Xcode 4 running on my MBP as my primary environment, with Xcode 3 also on the MBP as a secondary environment.  One trick I use, as seen in the picture, is to keep my main programming project on the main screen in Xcode 4, with a secondary project on the MBP monitor under Xcode 3.  I also use jEdit for single-file viewing of multiple language types, as currently seen on the CRT.  My desktop runs MS Visual Studio 2008 Professional and my PC laptop runs MS Visual Studio 2010 Professional.

For the remainder of my office, my bookshelves are filled with books on all subjects from programming to history.  The bookshelf is capped with my collector’s edition house from Invader Zim, along with my GIR action figure.  I also have my BSG Blu-ray series set up top at present.  I alternately display my three Blizzard collector’s sets (Warcraft III, Warcraft III: The Frozen Throne, and WoW: Burning Crusade), not because I play them anymore, but because they were signed by the managers, developers, programmers, and artists who came out to the various midnight releases at Fry’s Electronics.  One of my favorite pieces among them is a message on a poster that came with the original Warcraft III collector’s edition, which one of the producers of the series signed ‘Thank you for keeping America free’ after he discovered I was in the Marine Corps.  I also have a signed box set of RvB from the guys over at RoosterTeeth, who are all more than awesome at each of their public appearances.

The large whiteboard is 3′ x 4′ tileboard that I bought from Lowe’s Home Improvement.  With the plastic frame around it, the total cost of that whiteboard was about $15.  I’ve been using it for about 8 months now and it’s still brilliant white when I clean it.  (It was digitally erased after the photo was taken, leaving a lot of smudges in the picture).  The picture of my peripherals features my other main whiteboard, a silver 2′ x 1.5′ board from Staples, which cost me more than twice as much and is half as big as my main whiteboard.  On the wooden desk, I also have an awesome Expo dual-sided whiteboard mounted in aluminium.  I use this for TA work at the university as well as for lots of notes and small drawings.  I also have a smaller whiteboard off to the left and another one over my MBP for calendar events.

Everything else is pretty cut and dry.  I use active cooling stands for both laptops.

With that, I believe I’ll be heading back to work.  I have notes on CoreData that I intend to post in the future, as well as a book of notes on programming in Lua, using the Corona SDK, and some tricks I came up with on how to stream sounds.

- Kevin, Chaotic Sorcerer, Initiate Coder


Forward Declaration

In the spirit of keeping this as a personal blog to record my own progress, I have decided to limit my posts to observations and overviews, instead of writing any form of tutorials or guides.

For my first insight: the sorcerer in me tried to blast through the creation of a series of interconnected controllers while working in Objective-C a little while ago.  I know that Obj-C uses #import instead of my more familiar #include because #import keeps track of what has already been brought in.

In my preferred C, I guard a common header with the following to ensure I never load the same header twice:

#ifndef settings
#define settings

/* ... header contents ... */

#endif

The beautiful aspect of #import is that it does this automatically, so you can sort of pave right through the design thinking and #import everything, knowing that it’ll only do one load of the code therein.

My problem here came from a subconscious reliance on this to solve all header-related problems, which, in this case, involved circular dependencies.

My current project has the following model in place; I shall attempt to present the bits relevant to my problem in pseudocode.

MyAppDelegate -> MainViewController -> SettingsViewController -> PickerViewController

MyAppDelegate.h:
--- #import "MainViewController.h"

MainViewController.h:
--- #import "MyAppDelegate.h"
--- #import "SettingsViewController.h"

SettingsViewController.h:
--- #import "MyAppDelegate.h"
--- #import "PickerViewController.h"

PickerViewController.h:
--- #import "MyAppDelegate.h"

Now, looking at it sort of abstracted shows immediately that MyAppDelegate depends on MainViewController, which depends on MyAppDelegate, which depends on MainViewController, ad infinitum.  This is a problem that a wizard would have discovered long before putting quill to Xcode, but it’s something that took me a bit of work to discover.  Implementing this, I received two errors on MyAppDelegate.h, both of which read:

"MyAppDelegate.h:14: error: expected specifier-qualifier-list before 'MainViewController'"

Now, I’ve been coding for a while, but that struck me as mostly cryptic.  What this basically refers to is the lack of a definition for MainViewController at each of the two places in the header where I reference it.  This had me floored for a while because the #import “MainViewController.h” line is quite present and MainViewController.h is a complete header.

As it turns out, the #import machinery will, at some stage in the build process, break the cyclic dependency and simply refuse to play that game: one of the headers gets processed before the other has defined its class, which robs my MyAppDelegate of the loaded definition for MainViewController, triggering this particular error.  Objective-C, like C++, does have a rather nice solution for this though: forward declaration.

Since this is a compiled language, you don’t necessarily need to have all of your eggs defined in the master basket before first referring to something in your code.  In this particular case, the compiler is plenty happy knowing that something will be defined at a later stage, when it’s actually needed.  Here, I’m not using anything that requires knowledge of precisely how MainViewController works; all I’m doing is telling MyAppDelegate that it has a variable available to it of the type MainViewController.  Because all I need is knowledge of its existence, I can forward-declare MainViewController in the header and leave the actual import to the source ‘.m’ file.

MyAppDelegate.h:
--- @class MainViewController; // Forward Declaration Statement

MyAppDelegate.m:
--- #import "MyAppDelegate.h"
--- #import "MainViewController.h"

MainViewController.h:
--- #import "MyAppDelegate.h"
--- #import "SettingsViewController.h"

SettingsViewController.h:
--- #import "MyAppDelegate.h"
--- #import "PickerViewController.h"

PickerViewController.h:
--- #import "MyAppDelegate.h"

Now I can use a definition for MainViewController in the header of MyAppDelegate and use a definition for MyAppDelegate in MainViewController without any circular references.

- Kevin, Chaotic Sorcerer, Initiate Coder


A beginning

Several weeks ago I began my first proper job in the field of my choosing.  I’ve been a productive member of society for a bit over a decade, mind you, but this is the job that I spent that decade telling my coworkers that I would do when I grew up.  I spent my free time after work reading books on computers and programming.  I spent a number of years between contracts at a community college, learning this craft.  I studied full-time at university for several more years.  I’ve been chapter president and a regional vice president of an honor society, served at several levels of student government, graduated thrice with high honors and made Dean’s List every semester I’ve been eligible for it at university.  I’ve glided through classes with ease, have been accepted into an accelerated Master’s program in my field, and was even honored by the ITEA with a scholarship in recognition of these, and many other achievements I’ve made.

Yet, after nearly a decade and a half of preparation, albeit around a decade of it spent in military service, I now find myself mostly harmless as a programmer.  These past three weeks, the first of my professional career, have taught me that I have been, thus far, woefully unprepared for working as a programmer.  My techniques are unsure and my book knowledge is spotty; I can whip up some amazing implementations of common algorithms just as a master bard would write a short sentence, but the rest of my writing would better fit a young children’s reader.  I find myself hammering out solutions to problems with as much delicacy as Thor swatting at a fly, and then having to repair the surrounding code much as he would what was left of his wall.

Don’t get me wrong about the quality of my education in this arena.  My university is an amazing environment for these studies.  Many of the faculty are quite literally at the front of their fields, their names appearing in nearly every journal paper on the topic.  Our first-semester, non-department programming students wrote a game of Battleship with an AI opponent on a graphical interface.  Even after taking into account the simplicity of their implementations, the overall level of programming knowledge gained far surpasses what I would expect at any other institution.  But I find myself as prepared to go out into the world and ply my trade as a kindergartner would be to succeed as an artist after being told by their mother that their picture was beautiful.  I know how to draw some very complicated shapes, but the picture as a whole is mostly just random crayon lines.

I suppose, to borrow from a common series, I would be most like an 11-year-old Draco Malfoy.  This is someone who, I would imagine, had been highly educated on components of the magical world, and even allowed to play a little with some common incantations and some of the vehicles of the trade.  I can imagine that he would have been well indoctrinated with the culture of the wizard and had great amounts of experience seeing master wizards at work.  He would have been groomed for this lot in life, given the tools he needed to succeed.  But on his first day, he would, if even only to himself, realize that he was little more than a cheap sorcerer, able to conjure just enough magic to prove to himself that he could.  His skill would have had little formal training; it would mainly have been uncontrolled and as flighty as his emotional state.  He would conjure spells that were wildly unpredictable, and only through sheer force of will would he control his craft enough to complete the task at hand.

Naturally, in the stories, the kids were all formally educated and learned not only to control their magic, but to wield it skillfully.  This is where I would change the classification from cheap sorcerer to wizard.  Right now, I feel like I’m a cheap sorcerer.  I’m able to scrawl out incantations in various magical languages, such as Lua, C, Objective-C, C++, C#, Java, Python, SQL.  I can even throw myself into a caffeine bender and go on a vision quest, where I wake up two days later and find cryptic writing all over my whiteboards and a program that seems to technically work.  This is the sort of thing that Dumbledore would have shaken his head at and walked off, disappointed.

I am purposing myself to achieve that higher level, to become a true wizard.  My goal is to be able to listen to a person’s problem, then recede into my study and prepare beautifully crafted solutions — not in terms of visual structure, but beautiful in terms of software elegance, versatility, and correctness.  I aim to use this blog as a scroll to record the progress, through insights, inquiries, code snippets, and advancements in my thinking and knowledge, to chart the progress of a two-bit sorcerer on his quest to become a powerful wizard.

And with that, the lethargy has ended and the age of awakening has begun.

- Kevin, Chaotic Sorcerer, Initiate Coder
