Archive for the Programming Category

Updating a Database in a Deployed Mobile App

May 22, 2013

One of the issues I had to deal with in building my mobile app was the evolving database schema as more features were added and the app grew.  I needed a way to update the database schema without blowing away the old one and forcing the app to redownload all the data.  While I could have done that without any loss of data (there is no user-supplied data to be lost), the main data set is over 8 MB in size, and that isn't nice to users' wireless plans.  What I needed was a way to incrementally change the schema as the app was updated.

First Attempt

My first attempts just used if() statements: if the current app version was one value and the new app version was another, apply this update.  This worked well enough for my development versions, where I was strictly controlling the upgrade versions.  But I quickly realized that if I tried to do that with versions of the app out in the wild, where I had no control over when they were updated, I'd quickly have a mess of conditionals to deal with.  So I scrapped that idea and went looking for something new.
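To see why, here is a rough sketch of what that first approach looks like (the version values and function names are hypothetical, invented for illustration; they are not from the actual app):

// Hypothetical sketch of the version-pair approach; names are invented
// for illustration and not from the actual app.
function upgradeByVersionPair(oldAppVersion, newAppVersion) {
  if (oldAppVersion === '1.0' && newAppVersion === '1.1') {
    // apply the 1.0 -> 1.1 schema changes
  }
  if (oldAppVersion === '1.0' && newAppVersion === '1.2') {
    // apply the 1.0 -> 1.1 changes, then the 1.1 -> 1.2 changes
  }
  if (oldAppVersion === '1.1' && newAppVersion === '1.2') {
    // apply the 1.1 -> 1.2 schema changes
  }
  // Every new release adds another set of version pairs to handle.
}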

Final Solution

I don't think this solution is original to me, as I'm sure I saw it somewhere to give me the idea, but I couldn't tell you where off the top of my head.  The answer was a switch() statement, with each case block holding a bit of the upgrade path from the old schema to the new one.  In order for this to work, you need to track a version number for the database schema that increases each time you make a change.  Simply start at one and increment the value every time you change the schema.  This value, which I called current_version, is simply hard-coded in the application.

Additionally, the app needs a second value, stored as part of the application data, which holds the database version that is currently installed (installed_version).  Then, each time the application is run, you check these values.  There are three possibilities[1]:

  1. There is no stored value for installed_version. This should only be true the very first time the application is run.  In this case, you simply need to create the entire database schema as it now stands and store the current_version as the installed_version.
  2. The current_version equals the installed_version.  This is the simplest case of all.  Since the database schema is all up-to-date you don't have to do anything.  Continue on to the main part of the application.
  3. The current_version is greater than the installed_version.  In this case there is work to do and we now need to do the update.

To make the update, we simply enter a switch() statement based on the installed_version.  The case labels should start at 1 and increase by one each time.  The first case block (i.e. case 1:) holds the commands necessary to upgrade from version 1 to version 2.  The second holds the commands to modify the schema from version 2 to version 3, and so forth.

None of the case blocks should have a break command, so that after one executes, control falls through to the next and cascades all the way to the end.  That way, if a user skips one or more app updates that include database upgrades, each upgrade will be applied in turn and none will be missed.

Finally, the default: block at the end of the switch statement should contain the code to update the stored installed_version so that it, too, is up-to-date for the next time the application is run.

Here is a bit of sample code that shows how this works.  It is lifted, in modified form (with additional comments added), from my application, which is written using Titanium Studio's Mobile SDK.  It should be noted that the c object is a configuration object that holds information such as the database name, the table names, and the column names, among other things.

// Check that we have a stored value for the installed database version
if(!Ti.App.Properties.hasProperty('installed_db_version')){
  // if not, the database has never been created
  createDB();
} else {
  var installed_version = Ti.App.Properties.getInt('installed_db_version');
  if (c.current_version > installed_version){
    Ti.API.info("Performing an upgrade from database version "+installed_version+" to version "+c.current_version);
    var db = Ti.Database.open(c.name);
    // We need to do an upgrade to the new version based on the old version.
    // We'll do this incrementally adding in all the changes since the installed
    // version was created.
    switch(installed_version) {
      case 1:
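        // Commands to upgrade the schema from version 1 to version 2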
        db.execute("REPLACE INTO daily_data (source,start,duration,flux_1000_300000,error_1000_300000,ul_1000_300000," +
        "flux_300_1000,error_300_1000,ul_300_1000,flux_100_300000,error_100_300000,ul_100_300000,test_statistic)" +
        "VALUES (1,56000,86400,10,1,0,9,2,0,1,0.5,0,32)");
        db.execute("REPLACE INTO daily_data (source,start,duration,flux_1000_300000,error_1000_300000,ul_1000_300000," +
        "flux_300_1000,error_300_1000,ul_300_1000,flux_100_300000,error_100_300000,ul_100_300000,test_statistic)" +
        "VALUES (1,57000,86400,12,2,0,10,1.5,0,2,0.5,1,32)");
      case 2:
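        // Commands to upgrade the schema from version 2 to version 3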
        db.execute("DELETE FROM " + c.T_DAILY);
        db.execute("DELETE FROM " + c.T_WEEKLY);
        Ti.App.Properties.setString('last_lc_data_update','0');
      case 3:
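        // Commands to upgrade the schema from version 3 to version 4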
        db.execute(DATABASE_CREATE_GCN);
        Ti.App.Properties.setString('last_GCNNotice_date', '0');
      case 4:
      ...
      ...
      default:
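        // Record the now-current schema version for the next run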
        Ti.App.Properties.setInt('installed_db_version',c.current_version);
        break;
    }
    db.close();
  }
}

In version 1, I was using some dummy data that I was setting explicitly.  In version 2, I had updated the code to get the data from a web service so I needed to remove all the old data and add a stored variable to track the date of the last data point.  In version 3, I added a table to the database and an internal stored variable to track the date that data was last updated.

At the beginning of the code block, I check to see if the database is installed at all.  If not, I call a createDB() function.  This function simply contains the code that creates the current version of the database schema.  Another option would be to set installed_version to 0 and have a case 0: block that creates the original database schema that all the modifications are then applied to, but I believe it is faster and cleaner to just create the current schema rather than run through all the old ones, applying one update at a time.  While that's probably not an issue if you only have a few changes, as the number of modifications grows, so does the time involved in creating the new database.
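For reference, here is a minimal sketch of what a createDB() function along these lines might look like.  The CREATE TABLE statement and the column types are illustrative assumptions; only the column names are taken from the insert statements above:

// A minimal sketch of createDB(): build the current schema in one shot.
// The column types here are assumptions for illustration, not the app's
// actual schema.
function createDB() {
  var db = Ti.Database.open(c.name);
  db.execute("CREATE TABLE IF NOT EXISTS daily_data (" +
    "source INTEGER, start INTEGER, duration INTEGER, " +
    "flux_1000_300000 REAL, error_1000_300000 REAL, ul_1000_300000 INTEGER, " +
    "flux_300_1000 REAL, error_300_1000 REAL, ul_300_1000 INTEGER, " +
    "flux_100_300000 REAL, error_100_300000 REAL, ul_100_300000 INTEGER, " +
    "test_statistic REAL)");
  // ...create any other tables the current schema version needs...
  db.close();
  // Record that the freshly created schema is fully up to date.
  Ti.App.Properties.setInt('installed_db_version', c.current_version);
}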

And there you have it.  A simple, straightforward way to make sure that your database updates go smoothly across updates of your application code.

One final note: you don't have to run this check each time the app starts, only the first time the database is used.  I suspect that in most cases the database is used every time the application runs, but if there are sections of your app that don't use the database, this check can be delayed until the first use of the database.  This allows the app to load a bit faster, although the difference may not be noticeable.
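As a sketch of how that deferral might look, the whole check can be wrapped in a function that runs only on first access (checkAndUpgradeDB() is a hypothetical wrapper around the version-check code above):

// Lazy database access: run the upgrade check only on first use.
var db = null;
function getDB() {
  if (db === null) {
    checkAndUpgradeDB(); // hypothetical wrapper around the version check above
    db = Ti.Database.open(c.name);
  }
  return db;
}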


[1] There is a fourth possibility and that is that current_version is less than installed_version.  But if you’re properly incrementing current_version then this should never happen unless you roll back a distributed version of your application.  If you do that, all bets are off and you’ll need to find another way to handle the update.  Of course this is only a worry if the older code can’t work with the newer schema.

Building a prototype

December 3, 2012

Once you have an idea for an application, it really helps to get a prototype up and running to better understand how the application will function, and even whether it is possible.  Building the mobile data application for the Fermi Science Support Center was no different.

This is the third of several articles about the journey of developing an application for the Fermi Gamma-ray Space Telescope mission.  Earlier articles can be found here:

When I started this project, my mobile app development experience comprised a sum total of about two weeks of playing around with the Android SDK.  I had it installed and running, had read a few tutorials, and had started on a small app that was to be a character generator for an old role-playing game, just to have something (semi-)practical to work on.  But beyond that I was a complete novice.

However, that was infinitely more experience than I had with iOS development.  That fact, combined with the facts that I owned an Android smartphone (a Droid X) and that the only Apple devices in my house were my daughter's 3rd gen iPod Nano and a work MacBook I used occasionally to test builds of our science software, led me to build the prototype as a native Android app written in Java.

Breaking New Ground

This was a completely uphill journey, although not necessarily a difficult or unpleasant one.  First was the fact that I was working in Java.  I'm primarily a C++/Perl programmer, although I'm familiar with many other languages.  I had a passing knowledge of Java but had never really used it for a real project.  So working in Java required more than a bit of looking up syntax and functions.

Doing event-driven GUI programming wasn't completely new to me.  One of my side projects that I work on off and on is a computer version of the spaceship combat game from the old Star Frontiers role-playing game.  That game is GUI-based, so the concept of responding asynchronously to input events wasn't new.  In fact, on a smaller scale, the main data server that I built for the mission does the same thing, except the events come from sockets instead of a user interface.

And of course the entire Android development process was new.  Screen layouts, the SDK, and so on were virgin territory.  I spent a lot of time looking at the developer documentation to figure things out and to learn about the various classes, what they did, and how they interacted.

What to Build in the Prototype

The next question was what the prototype should contain.  The goal of the prototype was two-fold.  First, I wanted it to demo the basic functionality of the application so I could show it to the project scientist and get approval to do the full application.  And second, it was a way for me to explore how to do things in the mobile space and how the whole mobile development paradigm worked.

In the end I decided to implement three major components:

  1. Plotting the light curves – This was to be the major functionality of the application to begin with, so it made sense to try it out from the get-go.  It didn't have to pull data from the web or perform updates, but it did need to plot regular data points with errors, as well as upper limits, and give source and data range selection options.
  2. Show the Fermi Sky Blog – I mainly did this one because it was easy, as it was effectively just providing a link to the blog's web page.  This provided some experience in launching other tasks (Activities, in Android parlance) that were external to the application I was writing.  And while everything I learned here could also have been learned from implementing the ATels (#3 below), it did provide a little more functionality to have in the demo.
  3. Building and displaying the list of Fermi-related Astronomer's Telegrams – The main draw of implementing this one was that it provided some experience with reading an RSS feed and parsing the related XML, both of which I'd never done before.  It also provided some experience with using the network interface to go out and get data.

Putting It All Together

It took me about a week and a half to get the main application screen and get the basic plotting implemented.  I know because a week and a half after I got back from the AAS meeting I was on the plane headed out to NASA Goddard.  The trip was to help train a new staff member on our data server but I would also be pitching the mobile application to the project scientist.

One thing I was reminded of during that time is how effective and efficient you can be when you are really excited about what you are working on.  I was really excited about this project and wanted to pursue it and so I found myself very focused and working hard to get this done.  It’s amazing what you can accomplish when you don’t let yourself get distracted.

While at Goddard, I had a chance to show the prototype as it existed to the scientist who I had worked on the design with as well as the project scientist.  Both of them liked what I had so far and the project scientist gave the green light for me to work on this officially and implement the full version.

I still had work to do on the prototype to fulfill its role as an exploratory tool for some of the techniques used in building the app, but it had fulfilled at least one of its main purposes as a demo tool.  In the end, the functionality of the prototype was expanded somewhat to include other things like notifications, automatic data updates, and pulling live data from our servers.  Especially in the latter case, it served as a test client for the back-end services I needed to develop for the final applications, but that is the topic for another post.  All along the way it was a tool to help me learn what was possible, what wasn't, and how to implement those things.  And in the end, that was really its primary purpose, which it fulfilled admirably.

My First Mobile App

November 15, 2012

For the past few months I’ve been working on a mobile application for work.  While it’s nothing fancy, considering the fact that I had no experience with mobile development when I started, I think this first version of the app turned out pretty well.  Although the real test will come as the number of users increases.

The journey from concept to initial version has been fun, interesting, and educational, as well as filled with frustrations and roadblocks along the way.  This post is only an introduction to a series of posts that I'm planning to write to document a bit of that journey.  For now I just want to tell you a little about the app.

The app is geared towards scientists who use the data from the NASA mission I work on, the Fermi Gamma-ray Space Telescope, although there is nothing that prevents others from using the app to see what is going on.  As part of the mission, we provide a variety of “high level” data on various sources as well as publish alerts and news about sources and events in the gamma-ray sky.  The goal of the app was to aggregate some of this information onto the computer you’re always carrying with you and make it available even when you are off-line for some reason.

Currently the application provides access to four types of data:

  • Light curves of monitored gamma-ray sources – These are sources that we’ve either been monitoring since the beginning of the mission or which have flared up to be very bright since launch and we’ve been monitoring since they first crossed a predefined luminosity threshold.
  • Gamma-ray Coordinates Network (GCN) notices produced by Fermi – When there is a gamma-ray burst or other very bright transient event (e.g. a solar flare), the satellite sends down a real-time alert about the event and its location on the sky.  We aggregate these alerts to provide a summary of the best possible data for each event.
  • Fermi-related Astronomer's Telegrams – These are notices sent out by astronomers about objects of interest that have been detected and analyzed using Fermi data.  While there are dozens of telegrams each week, the app filters and collects the Fermi-related ones.
  • Access to the Fermi Sky Blog – This is a weekly summary of events and objects of interest in the gamma-ray sky.

If you're interested in checking the application out, it is called the Fermi Data Portal; it is available on both the Google Play Store and the Apple App Store and works on both phones and tablets.  Check back for more about the development process and lessons learned from building the app.

Cleaning Out Your Code

February 21, 2011

In my spare time, I'm working on a game.  (I wonder if there is any programmer who can't say that to some extent?)  The details aren't important (it's based on the old Star Frontiers RPG), but if you want to check it out you can find it on my gaming site in its own forum topic.

I've been working on it off and on for several years now.  It's primarily been a learning exercise for me, but it's getting to the point now where it is actually playable.  I've implemented something like 90% of the rule set.  And therein lies my dilemma.  My efforts on the project have come in spurts: a couple of weeks spending 10 or more hours on it, and then months where I don't do anything.

During the “on” times, I’m fired up and want to get things implemented and add new features and get them out the door as quickly as possible.  And so I do whatever I can to “just make it work” and get it out there for people to play with.  And so along the way I’ve incurred a lot of technical debt and the code has accumulated a bit of code cruft as well (well, probably much more than a bit).

I was first introduced to the concept of technical debt when reading Jeff Atwood's post Paying Down Your Technical Debt.  I could definitely relate to the topic, as I was going through the same thing both in my code at work and in my game code.

And it seems all I’ve done since then is build up more debt.  I’ve got these last little bits of the game’s rules to implement and it seems that every addition requires reworking things I’ve done before.  But I’m getting better and have started to pay down the debt and clean up the code to make it better and more modular.

As developers, we probably all have to deal with this at some point.  And if we are developing things for others to use, they like to see progress and additional functionality with each release.  Unless the bugs were show-stoppers, a release that just fixes bugs isn't very interesting to your users; they want it to do something more than the last one.

But you still need to pay down your technical debt and clean up your code.  And so for the last couple of releases that I’ve done, I’ve adopted a new strategy for dealing with this.  In each release, I’ve added one piece of new functionality, hopefully something that will pique the users’ interest so that they’ll grab the new version and try it out.  But at the same time, I’m working really hard in the background to clean up and refactor the code.  For every visible addition to the game, there have been two or three backend changes that the user doesn’t see (since it doesn’t impact the UI or game play) but which help me get the code under control and pay down my debt.

And it seems to be working, at least to some extent.  In this last go around I fixed several bugs and in the end, the main code was actually smaller than the original and a little easier to understand.  So I must have done something right.

It’s an on-going battle.  In the end, I guess if you are shipping code, you’re winning.  But the faster you can ship, the bigger the win.  And cleaning out your code and paying down your technical debt just makes things easier and makes it possible to have the bigger win.

Complex Systems

February 10, 2011

I hate Windows, it seems that all my problems at work come from having to deal with Windows.  And Mac OS X, I hate Mac OS X as well for the same reason.

Actually, I don't really hate those operating systems, but it got your attention.  I actually think they are both perfectly fine operating systems.  But they do cause all my headaches at work.  I'm a Linux user by default, and venturing into the realms of Windows and OS X always seems to give me headaches.

And it is not really the operating systems that cause me the headaches.  The real issue is the complexity of the systems that I have to work with.  As the main part of my job, I maintain and (try to) enhance and extend two fairly complex systems.  One is the public data server for data from the primary instrument on a NASA satellite mission, the other is the software build system for the primary instrument team for that mission.

Both of these systems suffer to some extent from the second-system effect as described by Fred Brooks in The Mythical Man-Month, as both are follow-ons to earlier systems that worked quite well.  And both second systems were written by the author of the first system.

In the case of the data server, I have only myself to blame, since I am the original author.  I did all the trade studies, wrote the requirements and design documents, and implemented the system.  In fact, knowing about the second-system effect, I tried really hard to avoid suffering from it.  And for the most part, I think I succeeded.  It's a relatively small, focused system that does one thing really fast.

But it is still complex.  And it still gives me headaches when things go wrong.  And I wrote it; I understand intuitively what it is supposed to be doing and how it works.  I can only imagine the headaches the guy who was maintaining it for the year I was off working on a different project had.

The other system, on the other hand, was not written by me, and I don't have the intuitive grasp of it that the original developer did, although I'm getting a better feel for it every day.  And in many ways, this system is much more complex than the data server.  It's an automated build system.  When a user checks in and tags new code, the build system launches a series of processes that check out the code, build it, run all the associated tests, bundle up user, developer, and source distributions, and publish all the results (including e-mailing developers about any of their packages that failed to compile or pass their tests).

It’s a fairly standard build system.  Except that it all has to run on seven different operating systems.  With six different compilers.  And it runs on a batch queuing system and talks to four different databases on two different MySQL servers.  Did I mention it was fairly complex?

Just to enumerate, the operating systems we currently support are 32 and 64 bit Redhat Enterprise Linux 4 & 5, Mac OS X 10.6 (Snow Leopard), Mac OS X 10.4 (Tiger, going away as soon as the Snow Leopard support is fully functional) and Windows XP (with Windows 7 support looming soon).  The compilers we currently support are four versions of gcc (3.4, 4.0, 4.1 and 4.2) and two versions of Visual Studio (2003 and 2008).  It’s not actually as bad as it sounds.  With the exception of two versions of VS running on Win XP, there is only one compiler supported per *nix style OS.  This variety is actually a good thing as it helps keep the codebase clean since it has to work everywhere.

The real trouble comes from the infrastructure supporting the system and the ways it interacts (or doesn’t) with these different operating systems.

The programs that run the build system were written in C++ using the Qt library.  Now, I didn't know anything about Qt when I acquired responsibility for the project, but after sifting through the code, I think I can understand why it was chosen.  One of the main reasons was the timer and process-control functionality, used both to launch checks at specific intervals and to kill build or, more importantly, test processes that have hung and are taking too long.  Only the latter doesn't seem to work on Snow Leopard, as we found out when one of our packages was segfaulting in its tests and, instead of dying, went into an infinite loop.  And since the build system code didn't properly kill it, the entire system hung up for that OS.  And right now I can't tell if the problem is Qt, the underlying OS, how we're applying them, or some combination of the three.  Complexity.

This build system has a lot of moving parts.  And I think the reason is that it is built around the central batch queuing system at the national laboratory where it runs.  In theory and at the beginning, that was a good idea.  We were sometimes triggering new builds every 15 to 30 minutes and the entire build process takes about an hour or two to run (there’s a lot of code to be compiled and tested).  By using the batch farm, we could have all these builds running and not piling up on one another by leveraging a tiny fraction of the thousands of CPU cores available in the farm.

But that came with tradeoffs.  For example, since the various parts of the process could potentially (and usually do) run on different machines, you can't use local storage and have to use network disks (via AFS in our case) to hold all the code and test data.  This doesn't seem to be an issue for the *nix systems, but for some reason accessing the network disks from Windows is sloooooow.  A process that takes 10 minutes on the *nix boxes can take 30-40 minutes (or more sometimes) on the Windows boxes, reason unknown.  There are other tradeoffs as well, all increasing the complexity.

And then a couple of things happened.  The lab never really supported Mac OS X in the batch farm, so we had to get our own OS X boxes.  And we somewhat pioneered Windows usage, so we had to get those boxes ourselves as well.  And then they dropped Redhat EL 4 and 32-bit Redhat EL 5.  So now, the only supported OS in the main farm that we're supposed to use is 64-bit Redhat EL 5.  Everything else runs on our own project-purchased machines, but we're still wedded to this complex infrastructure of using the batch farm.

Luckily, we’re starting to move away from that.  But it’s painfully slow, mainly since I seem to spend all my time running around propping up the beast to keep it in production and have little time to work on an alternative.  But at least there is motion in the right direction.  Towards simpler systems.  And away from complexity.
