CareApp uses CI to test every commit of our mobile app, and to build and deploy every merge to master, for both iOS and Android. A few weeks ago I felt very seen by a tweet about painfully slow CI pipelines.

What follows was an adventure to cut our CI times by 12 minutes.

Read all about it on the new CareApp Engineering Blog

Join the conversation!

Hit me up on Twitter or send me an email.
Posted on November 13, 2020 · Programming · Tags: Dart, Flutter, CI

Connect your Flutter app’s Redux Store to the Redux Devtools from the web!

I really like Flutter, and I like using Redux when building mobile apps. There’s a great Redux implementation for Dart and Flutter, and a time travel capable debug store.

The Javascript world is spoilt with the fantastic Redux DevTools plugin. It allows you to inspect actions and application state in a web browser, time travel, and play back changes to your app’s state. There is an on-screen time travel widget for Flutter, but that means sacrificing screen space for the UI.

So why not combine the Redux DevTools from the Javascript world with Redux.dart? Now you can, with the redux_remote_devtools package!

Debug your Redux Store with Flutter and Remote DevTools

This article gives a quick overview of how to get setup. The Git repository contains examples to help get you started.

Getting Started

Add the library to your app’s pubspec.yaml:

dependencies:
  redux_remote_devtools: ^0.0.4

And add the middleware to your app, and provide it a reference to your store so time travel actions from the remote can be dispatched:

var remoteDevtools = RemoteDevToolsMiddleware('YOUR_HOST_IP:8000');
await remoteDevtools.connect();
final store = new DevToolsStore<AppState>(searchReducer,
  middleware: [remoteDevtools],
);
remoteDevtools.store = store;

Start up the remotedev server, then run your Flutter app:

npm install -g remotedev-server
remotedev --port 8000

You can then browse to http://localhost:8000 and start using Remote DevTools to debug your Flutter app!

Encoding Actions and State

In the Javascript world, Redux follows a convention that your redux state is a plain Javascript Object, and actions are also Javascript objects that have a type property. The JS Redux Devtools expect this. However, Redux.dart tries to take advantage of the strong typing available in Dart. To make Redux.dart work with the JS devtools, we need to convert actions and state instances to JSON before sending.

Remember that the primary reason for using devtools is to allow the developer to reason about what the app is doing. Therefore, exact conversion is not strictly necessary – it’s more important for what appears in devtools to be meaningful to the developer.

To make your actions and state JSON encodable, you have two options: either add a toJson method to all your classes, or use a package like json_serializable to generate the serialisation code at build time. The GitHub search example demonstrates both approaches.

If your store is simple then you may be using enums for actions. These encode just fine without any extra effort.

Time Travel

If you have configured your app to use the DevToolsStore from redux_dev_tools, then you can time travel through your app state using the UI.

Time Travel through your app

Remember that there are limitations to time travel, especially if you are using epics or other asynchronous processing with your Redux store.

This is a new library, so there are still things to work out. PRs are welcome if you’re up for helping out.

Now go build something cool with Flutter!

* Photo by Tim Mossholder on Unsplash

Posted on October 09, 2018 · Programming · Tags: dart, flutter, redux, remote-devtools

Nobody likes apps that crash or stop working properly. Handling and recovering from errors is obviously an important task for any developer; we should not assume that everything will run smoothly.

In this post we’re talking about what to do on top of your regular error handling — the last resort.

Read on the NextFaze Blog

* Photo by Kris Mikael Krister on Unsplash

Posted on February 26, 2018 · Programming · Tags: error-handling, ionic, rollbar, typescript

Drag and drop sorting lists of records in an Ember 2 application, using JQuery UI’s sortable plugin! Working example up on GitHub.

I’ve been rebuilding Three D Radio‘s internal software using Ember JS. One aspect is to allow announcers to create playlists to log what they play on air. I wanted announcers to be able to reorder tracks using simple drag-n-drop. In this post I’ll explain how to do it.

Firstly, this post is based on the work by Benjamin Rhodes. However, I found that his solution didn’t work out of the box. Whether that is due to API changes from Ember 1.11 to Ember 2.x I’m not sure. So what I’m going to do here is bring his technique up to date for 2016 and Ember 2.6.

Starting an Ember Project

I’ll build this from scratch so we have a complete example. You shouldn’t have problems integrating this into an existing project though. We’ll create a new Ember CLI project called sortable-demo, and install JQuery UI:

ember new sortable-demo
cd sortable-demo
bower install --save jqueryui

We need to add JQuery UI to our build as well.

// ember-cli-build.js
var EmberApp = require("ember-cli/lib/broccoli/ember-app")

module.exports = function (defaults) {
  var app = new EmberApp(defaults, {
    // Add options here
  })

  // Pull JQuery UI (installed by Bower) into the build.
  // The exact path depends on how Bower installed the package.
  app.import("bower_components/jqueryui/jquery-ui.js")

  return app.toTree()
}


We are going to need a model for the data we are going to sort. Here’s something simple:

ember generate model note

Inside the note model we’ll have two attributes, the content of the note, and an index for the sorted order:

// app/models/note.js
import Model from "ember-data/model"
import attr from "ember-data/attr"

export default Model.extend({
  index: attr("number"),
  content: attr("string"),
})
Fake data with Mirage

For the sake of this example, we’ll use Mirage to pretend we have a server providing data. Skip this bit if you have your REST API done.

ember install ember-cli-mirage

And provide some mock data:

// app/mirage/config.js
export default function () {
  this.get("/notes", function () {
    return {
      data: [
        {
          type: "notes",
          id: 1,
          attributes: {
            index: "1",
            content: "Remember to feed dog",
          },
        },
        {
          type: "notes",
          id: 2,
          attributes: {
            index: "2",
            content: "You will add tests for all this, right?",
          },
        },
        {
          type: "notes",
          id: 3,
          attributes: {
            index: "3",
            content: "Learn React Native at some point",
          },
        },
      ],
    }
  })
}

A Route

We will need a route for viewing the list of notes, and a template. Here’s something simple that will do for now:

ember generate route list

And in here we will simply return all the notes:

// app/routes/list.js
import Ember from "ember"

export default Ember.Route.extend({
  model() {
    return this.store.findAll("note")
  },
})

A template? No, a component!

We are going to display our notes in a table, but the Sortable plugin also works on lists if that’s what you’d like to do.

You may be tempted to just put your entire table into the list template that Ember created for you. However, you won’t be able to activate the Sortable plugin if you try this way. This is because we need to call sortable after the table has been inserted into the DOM, and a simple route won’t give you this hook. So, we will instead create a component for our table!

ember generate component sortable-table

We will get to the logic in a moment, but first let’s render the table:

// app/templates/components/sortable-table.hbs
<table>
  <thead>
    <tr>
      <th>Index</th>
      <th>Content</th>
    </tr>
  </thead>
  <tbody class="sortable">
    {{#each model as |note|}}
      <tr>
        <td>{{note.index}}</td>
        <td>{{note.content}}</td>
      </tr>
    {{/each}}
  </tbody>
</table>

The important part here is to make sure your table contains <thead> and <tbody> elements. We add the class sortable to the tbody, because that’s what we will make sortable. If you were rendering as a list, you would add the sortable class to the list element.

Finally, in the template for our route, let’s render the table:

{{sortable-table model=model}}

We should have something that looks like this:

Our table component rendering the notes

A quick detour to CSS

Let’s make this slightly less ugly with a quick bit of CSS.

/* app/styles/app.css */

body {
  font-family: sans-serif;
}
table {
  width: 500px;
}
table th {
  text-align: left;
}
tbody tr:nth-child(odd) {
  background: #ddd;
}
tbody tr:nth-child(even) {
  background: #fff;
}

Which gives us a more usable table:

Just make the table a tiny bit less ugly

Make Them Sortable

Moving over to our component’s Javascript file, we need to activate the sortable plugin. We do this in the didInsertElement hook, which Ember calls for you once the component has been inserted into the DOM. In this method, we will look for elements with the sortable class, and make them sortable!

// app/components/sortable-table.js
import Ember from "ember"

export default Ember.Component.extend({
  didInsertElement() {
    this.$(".sortable").sortable()
  },
})

Persisting the records

At this point we have a sortable table where users can drag and drop to re-order elements. However, this is purely cosmetic. You’ll see that when you reorder the table the index column shows the numbers out of order.

We can now reorder notes, but the index fields are not updated

Open up Ember Inspector and you will see the models’ index is never being updated. We’ll fix this now.

The first step is to store each note’s ID inside the table row that renders it. We will make use of this ID to update the index based on the order in the DOM. So a slight change to our component’s template:


// app/templates/components/sortable-table.hbs
{{#each model as |note|}}
    <tr data-id="{{note.id}}">

Next is to give sortable an update function. This gets called whenever a drag-drop is made.

// app/components/sortable-table.js
didInsertElement: function() {
    let component = this;
    this.$(".sortable").sortable({
      update: function(e, ui) {
        let indices = {};

        $(this).children().each( (index, item) => {
          indices[$(item).data('id')] = index + 1;
        });

        component.updateSortedOrder(indices);
      }
    });
},

This function iterates over all the sortable elements in our table. Note that JQuery gives them to us in their order in the DOM (ie the new sorted order). So we build a map, using each item’s ID as the key to store its new index. Note that I’m adding 1 to my indices to give values from 1 instead of 0. The next step is to use this map to update the records themselves:

// app/components/sortable-table.js
updateSortedOrder(indices) {
    this.get('model').forEach((note) => {
      var index = indices[note.get('id')];
      if (note.get('index') !== index) {
        note.set('index', index);
        note.save();
      }
    });
},
We update and save the record only if its index has actually changed. With long lists, this greatly reduces the number of hits to the server. (Wish list: a method in Ember that will save all dirty records with a single server request!)

Now when we reorder the index fields are updated correctly!

And we’re done! A sortable list of Ember records that persists those changes to the server*.

Have a look on GitHub!

(Note: If you’re using Mirage, you’ll get errors about saving records, because we need code to handle patches).

Posted on June 16, 2016 · Programming · Tags: emberjs, javascript, jquery, jqueryui, sortable

I worked for South Australia’s youth circus organisation Cirkidz on their production We The Unseen. Using the same 3D projection mapping technology I developed at UniSA, and expanding on the work I did with Half Real, we built several interactive projection-based special effects to complement the performance. Let’s have a look at the Storm.

So what’s going on here? We have two Microsoft Kinects, either side of the stage, tracking the performers in 3D. We can then use that information to make projected effects that respond to the performers’ movements.

For Storm, I created a particle simulation that would provide a (deliberately abstract) storm effect. We have two particle emitters; one at the top of the triangle at the back of the stage, and another at the front. This gives the illusion that particles are travelling from the sail out onto the floor. Then, we have a couple of forces attached to the performer. The first is a rather strong attractor, which draws the particles to the actor. The next is a vortex, which manipulates the direction.

The result is a particle system that appears to dance with the performer.

We The Unseen’s projected effects were developed over a 6 week period. The first step was to figure out what was actually possible to track with the Kinect. These are circus performers, not people in their living rooms doing over the top gestures!

Tracking multiple performers

Having multiple actors on stage is fine, even on unicycles:

Skeleton tracking not so much

The main problem was the size of the stage. For this reason we used two Kinect devices, which were connected to separate PCs and sent tracking data using a simple, custom network protocol. Calibration meant the tracking data would be in the same coordinate system. And again, due to the size of the stage, there was almost no interference between the two devices.

In fact, if there was more time, we would have tried four devices.

One of the things we thought about for Half Real but never got done was using projectors as dynamic light sources. In We The Unseen, we had a chance:

It mostly works, but you start to see the limits of the Kinect. No matter how precisely you calibrate, depth errors cause misalignment of the projection. There’s also a bit of a jump when the tracking switches devices. But overall, it works pretty well.

In a smaller space, you could do some very nice lighting effects with projectors and decent tracking data.

Another problem discovered during Half Real was controlling the projection system. The operator was busy dealing with lighting cues, sound cues, and then an entirely separate projection system.

For We The Unseen, I had time to integrate the projection system with QLab, using Open Sound Control. This allowed the operator to program the show exclusively in QLab, and OSC messages told the projection system what to do.

There were additional effects that we built but didn’t make it into the show. For example, we had this idea for some of the acrobatics to create impact effects for when performers landed from great heights. The problem here was mostly aesthetic. The lighting designer of course wants to light the performers for the biggest visual impact. For these acrobatic scenes there was simply too much stage lighting for the projectors to cut through. Projectors are getting brighter and cheaper, but still can’t compete with stage lighting. So those effects we left unseen.

We The Unseen – pretty amazing youth circus, with a couple of special effects by me where they worked.

Tech runs

Posted on June 14, 2016 · Programming, Software Projects · Tags: Augmented Reality, c++, circus, cirkidz, kinect, opengl, particle system, Programming, projection, sar, theatre

tl;dr: I’ve made a GitHub repo that makes FTGL work on OSX again.

FTGL is a library that makes it super convenient to render TrueType text in OpenGL applications. You can render text as textures and geometry, making it very flexible. There’s just one problem: if you’re using MacPorts or Homebrew on OSX, FTGL doesn’t work! Here’s how to work around it.

FTGL makes use of FreeType to actually render text. In newish versions of FreeType, some of their source files have been moved around and renamed. This is a problem on OSX since, by default, we are on a case-insensitive filesystem. We now have a name clash where both FTGL and FreeType seem to have a file named ftglyph.h. All of a sudden software that uses FTGL will no longer compile because the wrong files are being included!

The fix for this is fairly straightforward. Since FTGL depends on FreeType, FTGL should be modified to remove the name clash. Unfortunately, FTGL seems to have been abandoned, and has not had any updates since 2013. In the bug report linked above I have provided a patch that renames the file and updates references to it. I’ve also created a GitHub repository with the patch applied.

This problem doesn’t show up on Linux because on a case sensitive filesystem like Ext4, the FreeType file is ftglyph.h, while the FTGL file is named FTGlyph.h. No name clash.

So there, uninstall FTGL from MacPorts or Homebrew, clone my GitHub repo, and build/install from source. FTGL will work on OSX once more.

Long term you may want to look at moving away from FTGL in your own software. It is great at what it does, but hasn’t been updated in a long time. It uses OpenGL display lists internally, so it will not work with modern OpenGL core profiles. But at least you can now use it if you need to.

Posted on September 16, 2015 · Programming · Tags: ftgl, osx, Programming

A little while ago I worked on a mixed media theatre production called If There Was A Colour Darker Than Black I’d Wear It. As part of this production I needed to build a system that could send and receive SMS messages from audience members. Today we’re looking at the technical aspects of how to do that using SMS Server Tools.

There are actually a couple of ways to obtain incoming text messages:

  • Using an SMS gateway and software API
  • Using a GSM modem plugged into the computer, and a prepaid SIM

The API route is the easiest way to go from a programming perspective. It costs money, but most gateways provide a nice API to interface with, and you’ll be able to send larger volumes of messages.

BLACK had a few specific requirements that made the gateway unsuitable.

  1. We were projecting out of a van in regional South Australia. We had terrible phone reception, and mobile data was really flakey.
  2. We were going to be sending text messages to audience members later, and needed to have the same phone number.

So, we got hold of a USB GSM modem and used a prepaid phone SIM. This allowed us to receive unlimited messages for free. However, we couldn’t send messages as quickly as we would have liked.

Modem Selection

There are quite a few GSM modems to choose from. You are looking for one with a USB interface and a removable SIM. GSM modems that use wifi to connect to computers won’t work. You need to be able to remove the SIM because most mobile data SIMs won’t allow you to send or receive SMS messages. The other big requirement is Linux drivers, and Google is really your friend here. The main thing to watch out for is manufacturers changing the chipsets in minor product revisions.

We ended up going with an old Vodafone modem using a Huawei chipset. The exact model I used is the HUAWEI Mobile Connect Model E169. It shows up in Linux like this:

ID 12d1:1001 Huawei Technologies Co., Ltd. E169/E620/E800 HSDPA Modem

SMS Tools

SMS Tools is an open source software package for interfacing with GSM modems on Linux. It includes a daemon, SMSD, which receives messages. SMSD is configured to run your own scripts when messages are received, allowing you to do pretty much anything you want with them.

Installation is straightforward on Ubuntu et al:

sudo apt-get install smstools

Next you’ll need to configure the software for your modem and scripts.

Configuration File

The configuration file is a bit unwieldy, but thankfully it comes with some sane default settings. Edit the file in your favourite text editor:

sudo vim /etc/smsd.conf

Modem Configuration

First up you will need to configure your modem. The modem configuration is at the end of the config file, and the exact parameters will vary depending on what modem you have. Let’s have a look at what I needed:

device = /dev/ttyUSB0
init = AT^CURC=0
incoming = yes
baudrate = 115200

device is where you specify the file descriptor for your modem. If you’re using a USB modem, this will almost always be /dev/ttyUSB0.

init specifies AT commands needed for your modem. Some modems require initialisation commands before they start doing anything. There are two strategies here, either find the manual for your modem, or take advantage of the SMSTools Forums to find a working configuration from someone else.

incoming is there to tell SMSTools you want to use this device to receive messages.

baudrate is, well, the baud rate needed for talking to the device.

Like I said, there are many options to pick from, but this is the bare minimum I needed. Check the SMSTools website and forum for help!

Event Handler

The other big important part of the config file is the event handler. Here you can specify a script/program that is run every time a message is sent or received. From this script you can do any processing you need, and could even reply to incoming messages.

eventhandler = /home/michael/smstools/sql_insert

My script is some simple Bash which inserts a message into a database, but more on that in a moment.

Sending Messages

Sending SMS messages is super easy. Smsd looks in a folder, specified in the config file, for outgoing messages. Any files that appear in this folder get sent automatically. By default this folder is /var/spool/sms/outgoing.

An SMS file contains a phone number to send to (including the country code, but without the +) and the body of the message. For example:

To: 61412345678

This is a text message sent by smstools. Awesome!

Easy! Just put files that look like this into the folder and you’re sending messages.
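As a concrete sketch, here is one way to queue that message from a script. Two parts are my additions, not from smsd itself: a temp directory stands in for the real spool so the sketch runs anywhere, and the message is written elsewhere first and then moved into the spool, so the daemon never picks up a half-written file.

```shell
#!/bin/bash
# Sketch: queue an outgoing SMS for smsd.
# A temp directory stands in for the spool so this runs anywhere;
# in production you would use: OUTGOING=/var/spool/sms/outgoing
OUTGOING=$(mktemp -d)

# Write the file somewhere else first, then move it into the spool,
# so smsd never sees a partially written message.
MSG=$(mktemp)
cat > "$MSG" <<'EOF'
To: 61412345678

This is a text message sent by smstools. Awesome!
EOF
mv "$MSG" "$OUTGOING/msg.$$"
```

Anything that can write a file can now send a text message, which makes it trivial to hook into other software.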

Receiving Messages

Let’s have a better look at the event handler. Remember, this script is called every time a message is sent or received. The information about the message is given to your program as command line arguments:

  1. The event type. This will be either SENT, RECEIVED, FAILED, REPORT, or CALL. We’re only interested in RECEIVED here.
  2. The path to the SMS file. You read this file to do whatever you need with the message.

You can use any programming language to work with the message. However, it is very easy to use formail and Bash. For example:


#!/bin/bash
#run this script only when a message was received.
if [ "$1" != "RECEIVED" ]; then exit; fi;

#Extract data from the SMS file
SENDER=`formail -zx From: < $2`
TEXT=`formail -I "" < $2 | sed -e "1d"`

From there you can do whatever you want. I put the message into a MySQL database.
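For illustration, here is a hedged sketch of that kind of handler. It is not my original script: it parses the SMS file with grep/sed instead of formail so it runs anywhere, fakes the incoming file and event type that smsd would normally pass as $2 and $1, uses made-up table and column names, and only echoes the SQL it would run. The real script would pipe that statement into the mysql client.

```shell
#!/bin/bash
# Sketch of an smsd event handler. smsd passes the event type as $1 and
# the SMS file path as $2; both are faked here so the sketch is
# self-contained.
SMSFILE=$(mktemp)
cat > "$SMSFILE" <<'EOF'
From: 61412345678
Received: 12-04-23 19:21:13

Hello from the audience!
EOF
EVENT="RECEIVED"

# Only act on received messages.
if [ "$EVENT" != "RECEIVED" ]; then exit 0; fi

# Headers run up to the first blank line; the body follows it.
SENDER=$(grep -m1 '^From:' "$SMSFILE" | sed 's/^From: //')
TEXT=$(sed -e '1,/^$/d' "$SMSFILE")

# The real handler would pipe this into mysql (hypothetical schema):
SQL="INSERT INTO messages (sender, body) VALUES ('$SENDER', '$TEXT');"
echo "$SQL"
rm -f "$SMSFILE"
```

A real version should also escape the message body before building the SQL (audience members will send you quotes and semicolons), which is another argument for the formail-plus-prepared-statements route.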


That’s all you need to write programs that can send and receive SMS messages on Linux. Once you have smsd actually talking to your modem it’s pretty easy. However, in practice it’s also fragile.

The smsd log file is incredibly useful here. It lives in /var/log/smstools/smsd.log

Here are some of the errors I encountered and what to do about them:

Modem Not Registered

The log will report that the modem is not registered on the network. This means the modem has lost reception, and is trying to re-establish a connection. Unfortunately there is nothing you can do here but wait or, using a USB extension cable, try to find a spot with better reception.

Write To Modem Error

An error like this:

GSM1: write_to_modem: error 5: Input/output error

means the software can no longer communicate with the modem. This is usually caused by the modem being accidentally unplugged, the modem being plugged in after the system has powered up, or by an intermittent glitch in the USB driver. To fix this, do the following:

  1. Stop smsd (sudo service smstools stop)
  2. Unplug the modem
  3. Wait 10 seconds or so
  4. Plug the modem back in
  5. Start smsd (sudo service smstools start)

Cannot Open Serial Port

You may see this error:

Couldn’t open serial port /dev/ttyUSB0, error: No such file or directory

This occurs if you started the computer (and therefore smsd) before plugging in the modem. Follow the steps above to fix it.


So there you have it. Follow these steps and you can send and receive SMS messages on Linux, using a cheap prepaid SIM and GSM modem.

In the next post we’ll be looking at exactly what I used this setup for.

Posted on April 23, 2015 · Programming · Tags: Linux, mysql, sms, smstools

So, for the last few months I’ve taken a break from the PhD to do some work for a theatre show for The Border Project, Half Real.

There’s a lot of technology in the show. In particular, most of the set is projected, and we are using a Microsoft Kinect to track the actors on stage, and modifying the projections based on their location.

I’m working on Linux, and using OpenNI for interfacing with the Kinect. Things almost worked perfectly. In this post I will document the trials and tribulations of getting the Kinect to work for Half Real.

I often fall into Not Invented Here Syndrome, and slowly I’m trying to get out of it. Obviously, interfacing with hardware like the Kinect is not something I really wanted to do during a 3 month theatre development. My Spatial Augmented Reality framework is built on Linux, so I basically had the option of Libfreenect or OpenNI. OpenNI appears to be more mature, and so that’s what I went with.

As you can see, I’m only really tracking the position of the actors – we aren’t using any of the gesture recognition stuff.

During development everything looked peachy. However, during production week when we started running through the whole show, a major issue popped up. It turns out there is a bug buried deep in OpenNI that eventually rears its ugly head if you have a few people running around at the same time:

Program received signal SIGSEGV, Segmentation fault.
 0x00007ffff215574d in Segmentation::checkOcclusion(int, int, int, int)

This is a big problem. See, this is a theatre show, where the entire set is projected. If the system crashes, the stage goes black. The operator has to restart and bring the projections up to the right point in the show. It turned out that in our tech previews, the software was crashing 2-3 times per show. This was simply unacceptable.

Thankfully, I was only interested in the positions of the actors. This meant I could run the tracking in a completely different process and send the data to the projection system without too much overhead. So, on the day before I finished working for the project, I had to completely rewrite how the tracking worked.

The Data We Need

As I said, we only need position. I didn’t have to send through any camera images, gesture information, etc. All I needed was:

struct KinectMessage
{
    uint8_t actor_id;
    float   quality;
    float   x;
    float   y;
    float   z;
};
The process that interfaces with the Kinect simply sent these messages over a TCP connection to the projection system for every actor on stage. TCP worked pretty well. Both processes run on the same system, and the Kinect only updates at 30fps anyway. At 17 bytes per message and 30 updates per second, that’s only 510 bytes per second, per actor. If I was transferring images, a better IPC technique would be required.

While True

At this point, the hard work was done. Simply wrap the tracking process in a shell script that loops forever, rerunning the process when the segfault occurs. The projectors never go to black, and the worst case is the tracking lags for a couple of seconds. Not perfect, but infinitely better.
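The wrapper really is just a few lines. This is a hedged sketch rather than the script I actually used: the tracker binary name is a placeholder, and here it is faked with `false` (a command that always exits non-zero) with the loop capped at three iterations so the sketch terminates. In production the loop body would run the real tracking binary inside `while true`.

```shell
#!/bin/bash
# Restart-on-crash wrapper, demonstrable form. In production:
#   while true; do ./kinect_tracker; done
TRACKER=false       # stand-in for the tracking binary; always "crashes"
restarts=0
while [ "$restarts" -lt 3 ]; do
    if $TRACKER; then
        status=0
    else
        status=$?
    fi
    restarts=$((restarts + 1))
    echo "tracker exited with status $status, restart #$restarts" >&2
done
```

The one-second worst-case lag mentioned above comes from the time it takes the tracker process to restart and reacquire its targets, not from the loop itself.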

I guess the moral of this post is to be wary of relying on 3rd party libraries that are not particularly mature. And if you have to (you don’t have much choice if you want to talk to the Kinect), wrap it up so it can’t screw you over. TCP worked for me, because I didn’t need to transfer much data. Even if you were doing the skeleton tracking and gestures, there isn’t a lot of data to send. If you need the images from the camera, TCP may not be for you. But there are plenty of other IPC techniques that could handle that amount of data (even pipes would do it). I guess the good news is OpenNI is Open Source, so in theory someone can get around to fixing it.

Hope this helps someone.


Posted on September 22, 2011 · Programming · Tags: Border Project, c++, Half Real, kinect, Linux, OpenNI, ubuntu

UPDATE October 2015: Verified working in Ubuntu 14.04 LTS and 15.04!

I’ve spent all this morning trying to talk to the Microsoft Kinect using OpenNI. As it turns out, the process is not exceptionally difficult; it’s just that there doesn’t seem to be any up-to-date documentation on getting it all working. So, this post should fill the void. I describe how to get access to the Kinect working using Ubuntu 12.04 LTS, OpenNI 1.5.4, and NITE 1.5.2.

Please note that since writing this tutorial, we now have OpenNI and NITE 2.0, and PrimeSense has been bought by Apple. This tutorial does not work with version 2 (though 1.5 works just fine), and there is talk of Apple stopping public access to NITE.

To talk to the Kinect, there are two basic parts: OpenNI itself, and a Sensor module that is actually responsible for communicating with the hardware. Then, if you need it, there is NITE, which is another module for OpenNI that does skeletal tracking, gestures, and stuff. Depending on how you plan on using the data from the Kinect, you may not need NITE at all.

Step 1: Prerequisites

We need to install a bunch of packages for all this to work. Thankfully, the readme file included with OpenNI lists all these. However, to make life easier, this is (as of writing) what you need to install, in addition to all the development packages you (hopefully) already have.

sudo apt-get install git build-essential python libusb-1.0-0-dev freeglut3-dev openjdk-7-jdk

There are also some optional packages that you can install, depending on whether you want documentation, Mono bindings, etc. Note that on earlier versions the install failed if you didn’t have doxygen installed, even though it is listed as optional.

sudo apt-get install doxygen graphviz mono-complete

Step 2: OpenNI 1.5.4

OpenNI is a framework for working with what they call natural interaction devices. Anyway, this is how it is installed:

Check out from Git

OpenNI is hosted on Github, so checking it out is simple:

git clone

The first thing we will do is check out the Unstable 1.5.4 tag. If you don’t do this, then the SensorKinect library won’t compile in Step 3. From there, change into the Platform/Linux-x86/CreateRedist directory, and run the RedistMaker script. Note that even though the directory is named x86, this same directory builds 64 bit versions just fine. So, don’t fret if you’re on 64bit Linux.

cd OpenNI
git checkout Unstable-
cd Platform/Linux/CreateRedist
chmod +x RedistMaker
./RedistMaker
The RedistMaker script will compile everything for you. You then need to change into the Redist directory and run the install script to install the software on your system.

cd ../Redist/OpenNI-Bin-Dev-Linux-[xxx]  (where [xxx] is your architecture and this particular OpenNI release)
sudo ./

Step 3: Kinect Sensor Module

OpenNI doesn’t actually provide anything for talking to the hardware, it is more just a framework for working with different sensors and devices. You need to install a Sensor module for actually doing the hardware interfacing. Think of an OpenNI sensor module as a device driver for the hardware. You’ll also note on the OpenNI website that they have a Sensor module that you can download. Don’t do this though, because that sensor module doesn’t talk to the Kinect. I love how well documented all this is, don’t you?

The sensor module you want is also on GitHub, but from a different user. So, we can check out the code. We also need to get the kinect branch, not master.

git clone
cd SensorKinect

The install process for the sensor is pretty much the same as for OpenNI itself:

cd Platform/Linux/CreateRedist
chmod +x RedistMaker
./RedistMaker
cd ../Redist/Sensor-Bin-Linux-[xxx] (where [xxx] is your architecture and this particular release)
chmod +x
sudo ./

On Ubuntu, regular users are only given read permission to unknown USB devices. The install script puts in some udev rules to fix this, but if you find that none of the samples work unless you run them as root, try unplugging the Kinect and plugging it back in so the new rules apply.
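For reference, such a rule is just a file under /etc/udev/rules.d/ that loosens the permissions on the Kinect's USB IDs. The file name and IDs below are illustrative assumptions on my part, not copied from the install script:

```
# /etc/udev/rules.d/55-openni.rules (example name, not from the installer)
# Kinect USB IDs: 045e:02ae camera, 045e:02ad audio, 045e:02b0 motor
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
```

After editing the file, run `sudo udevadm control --reload-rules` (or just replug the device) for the rules to take effect.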

Step 4: Test the OpenNI Samples

At this point, you have enough installed to get data from the Kinect. The easiest way to verify this is to run one of the OpenNI samples, for example the Sample-NiSimpleViewer binary.

cd OpenNI/Platform/Linux-x86/Bin/Release
./Sample-NiSimpleViewer

You should see a yellow-black depth image. At this point, you’re left with (optionally) installing the higher level NITE module.

Step 5: Install NITE 1.5 (optional)

Firstly, you need to obtain NITE 1.5.2: download the release for your platform.

Extract the archive, and run the installer:

sudo ./

At some point, you may be asked for a license key. A working license key can be found just about anywhere on the Internet. I don’t think PrimeSense cares, or maybe this is a non-commercial license or something. Whatever the case, just copy that license into the console, including the equals sign at the end, and NITE will install just fine.


After following these steps, you will be able to write programs that use the Microsoft Kinect through the OpenNI and NITE middleware. I hope this helps someone, because I spent a lot of time this morning screwing around trying to get it all to work. Like I said, the process is pretty straightforward, it just hasn’t been written down in one place (or I suck at Google).

Posted on June 30, 2011 · Programming · Tags: featured, kinect, microsoft, NITE, OpenNI, ubuntu

Hey Everyone

So this week I became a member sponsor on 3DBuzz. The first thing I had a look at was their XNA Behaviour Programming videos, which are the first in their set on AI programming. However, not being particularly interested in XNA, I implemented the algorithms presented in the videos on Android instead.

Here’s a video of the demo running on my Nexus One:

Since I was on Android, using only the Android and OpenGL ES libraries, I had to write a lot of low-level code to replace the XNA functionality that 3DBuzz’s videos rely on. I also had to implement an on-screen joystick. I might write up a couple of posts soon on the more interesting parts of the code (the parts not covered in the videos).
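Until then, the core of an on-screen joystick boils down to a little vector math: take the touch position relative to the stick’s centre, scale by the stick’s radius, apply a dead zone, and clamp to the rim. A sketch of that idea (the names and the 0.1 dead zone are my own assumptions, not code from the demo):

```cpp
#include <cmath>

// Joystick output: each component in [-1, 1].
struct Stick { float x, y; };

// Convert a touch position into a joystick direction vector.
// (cx, cy) is the joystick centre on screen, radius its size in pixels.
Stick joystickInput(float touchX, float touchY,
                    float cx, float cy, float radius)
{
    float dx = (touchX - cx) / radius;
    float dy = (touchY - cy) / radius;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 0.1f)          // dead zone: ignore tiny movements
        return Stick{0.0f, 0.0f};
    if (len > 1.0f) {        // clamp to the joystick's rim
        dx /= len;
        dy /= len;
    }
    return Stick{dx, dy};
}
```

The resulting vector can be fed straight into whatever steering behaviour needs a desired direction.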


Posted on January 13, 2011 · Programming · Tags: 3dbuzz, ai, android, nexus one, opengl, Programming

So some of my work at uni involves programming with OpenSceneGraph. Anybody who has used OSG before will know that, as powerful as it may be, it is seriously lacking in the documentation department. So, this article describes how to do dual-screen graphics on Linux using OpenSceneGraph. First we’ll look at the X Screens approach, which is easier but probably not the best solution. Then we’ll look at a method that works with a single X screen.

Multiple X Screens

The easiest way to do dual screen output is if you have your X server configured so each output is its own X screen. The first thing you need to do is make sure you have enough screens. Finding this out is easy enough:

int getNumScreens()
{
    osg::GraphicsContext::WindowingSystemInterface* wsi =
        osg::GraphicsContext::getWindowingSystemInterface();
    osg::GraphicsContext::ScreenIdentifier si;
    return wsi->getNumScreens(si);
}

You should do this before attempting to create your screens, to make sure the X server is configured correctly; otherwise OSG will throw an error when you try to create a graphics context for a screen that doesn’t exist. Setting up the viewers is fairly straightforward:

//Create main scene viewer
osg::ref_ptr<osgViewer::CompositeViewer> compositeViewer =
    new osgViewer::CompositeViewer;

// create the first view on the first X screen
osgViewer::View* v0 = new osgViewer::View();
v0->setUpViewOnSingleScreen(0);

// add the view to the composite viewer
compositeViewer->addView(v0);

// do the same with the second view, on the second screen
osgViewer::View* v1 = new osgViewer::View();
v1->setUpViewOnSingleScreen(1);
compositeViewer->addView(v1);
And that’s it. You also need to set the scene data for each of your views (setSceneData) and run the viewer, but that is the basic idea. The problem with this method is that it creates two graphics contexts, which in most cases causes a performance hit.

Single X Screen, Single Context

I looked into this method because I use TwinView on my Linux desktop boxes. TwinView means there is only one X screen spanning both monitors, so the getNumScreens function above will return 1. I have also set up our projector setup to use TwinView, so I don’t need one lot of code for testing on the desktop and something completely different when using the projectors. The other benefit of this approach is that you only create one graphics context.

First thing we do is get the dimensions of the screen, which will be the combination of both screens.

// get the total resolution of the xscreen
osg::GraphicsContext::WindowingSystemInterface* wsi = osg::GraphicsContext::getWindowingSystemInterface();
unsigned width, height;
wsi->getScreenResolution(osg::GraphicsContext::ScreenIdentifier(0), width, height);

Once that is done we can create our (single) graphics context.

// create a context that spans the entire x screen
osg::ref_ptr<osg::GraphicsContext::Traits> traits =
    new osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = width;
traits->height = height;
traits->windowDecoration = false;
traits->doubleBuffer = true;
traits->sharedContext = 0;
traits->overrideRedirect = true;
osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());

The important one here is overrideRedirect. Some window managers (I’m looking at you, Gnome) will redirect the position of your graphics context, so it won’t appear where you want it. The overrideRedirect option is fairly new; it does not exist in the version of OSG shipping with Ubuntu 8.10, so I am running the latest stable release (2.6) compiled from source.

To get the equivalent of 2 screens to draw on, we create 2 views like before. However, we have to set their viewport manually. Here we just make v0 use the left half of the screen, and v1 use the right half. Easy enough?

//first screen
osgViewer::View* v0 = new osgViewer::View();
osg::ref_ptr<osg::Camera> cam = v0->getCamera();
cam->setGraphicsContext(gc.get());
cam->setViewport(0, 0, width/2, height);

//second screen
osgViewer::View* v1 = new osgViewer::View();
osg::ref_ptr<osg::Camera> cam2 = v1->getCamera();
cam2->setGraphicsContext(gc.get());
cam2->setViewport(width/2, 0, width/2, height);

setViewport sets the viewport of the camera. The first two parameters are the position within the context’s window, the next two are the dimensions. So each view gets width/2 for its width, and the second view’s position is offset by half the screen width, meaning it starts on the second monitor.
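One thing to watch with half-width viewports is the aspect ratio: each view now renders an image that is (width/2) by height, so a projection matrix computed for the full span will look stretched. A small helper makes the correction explicit (the function name is mine, not part of OSG):

```cpp
// Aspect ratio of one half of a context that spans two monitors:
// each view is width/2 pixels wide and height pixels tall.
double halfScreenAspect(unsigned int width, unsigned int height)
{
    return (width / 2.0) / static_cast<double>(height);
}
```

You would then apply it to each camera, e.g. `cam->setProjectionMatrixAsPerspective(45.0, halfScreenAspect(width, height), 1.0, 1000.0);` (the field of view and clip planes here are placeholder values).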


And there you have it: two methods for dual screen rendering with OpenSceneGraph. Looking at the code, it is fairly simple; however, after browsing the doxygen docs for OSG it was not at all obvious to me. The osg-users mailing list was a big help here; in fact, there is a thread on the list covering exactly this.


Posted on December 14, 2008 · Programming · Tags: c++, dual screen, Linux, opengl, openscenegraph, Programming