Skycam Controller for The Last Shuttle Project

rcfisher Skycam Controller

This week, while visiting Los Angeles, I took the opportunity to have breakfast with Robert C. Fisher of The Last Shuttle Project. One neat thing I got to see, pictured above, is the hardware used to control the ‘Skycam’ that captured some cool video footage of the final space shuttle launch last year. Check out the video!

The hardware is a stock Arduino, with a protoshield on top. It features an RTC, status lights, a test switch, a piezo to detect the sound of the launch, and an opto-isolator to trigger focus and shutter lines. All that is protected by a sturdy little Pelican case. Quite a nice compact setup.

To handle the unique timing requirements of this shoot, the board is running an Arduino sketch I wrote, the Camera Controller.

This sketch was my first attempt to write a clock-based camera controller. I learned a bunch of lessons along the way that I’ve since incorporated into the next version.

Launch Windows

This shoot confronts a unique problem. The entire area is closed off hours before a scheduled launch, so cameras must be set up and ready to roll without human intervention. Also, the launch can happen at any time within a precisely-timed “window”, optimized for getting into orbit smoothly. If NASA doesn’t get the launch off within that window, the launch is scrubbed for the day, and everyone comes back tomorrow.

Ideally, one could program all the windows at once, so the camera operators would not need to tinker with the controller setup every time the operation missed a window.

The window timings were published online, so I wrote an awk script to automatically translate the web page into a structure the sketch could parse:

window_c sts134_windows[] =
{
    window_c ( 5,16,11, 8,55,42,AM, 8,56,28,AM, 9, 1,28,AM ),
    window_c ( 5,17,11, 8,28,56,AM, 8,33,56,AM, 8,38,56,AM ),
    window_c ( 5,18,11, 8, 8, 9,AM, 8, 8,12,AM, 8,13,12,AM ),
    window_c ( 5,19,11, 7,40,41,AM, 7,45,41,AM, 7,50,41,AM ),
};
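The window_c class itself isn’t shown in the post, so here is one hypothetical way it might be shaped to match those initializers: a calendar date plus three timestamps (window open, preferred T-0, window close), stored as seconds since midnight so the sketch can compare them against the RTC. All names and the internal representation here are my guesses, not the real sketch’s code.

```cpp
enum meridian { AM, PM };

// Hypothetical sketch of window_c: a date plus open / preferred / close
// times, each kept as seconds since midnight for easy comparison.
struct window_c
{
    int month, day, year;          // e.g. 5,16,11 = May 16, 2011
    long open_s, t0_s, close_s;    // seconds since midnight

    static long to_seconds(int h, int m, int s, meridian mer)
    {
        if (mer == PM && h != 12) h += 12;
        if (mer == AM && h == 12) h = 0;
        return h * 3600L + m * 60L + s;
    }

    window_c(int mo, int d, int y,
             int oh, int om, int os, meridian omer,
             int th, int tm, int ts, meridian tmer,
             int ch, int cm, int cs, meridian cmer)
    : month(mo), day(d), year(y),
      open_s(to_seconds(oh, om, os, omer)),
      t0_s(to_seconds(th, tm, ts, tmer)),
      close_s(to_seconds(ch, cm, cs, cmer))
    {
    }

    // Is the given RTC time (seconds since midnight) inside the window?
    bool contains(long now_s) const
    {
        return now_s >= open_s && now_s <= close_s;
    }
};
```

With something like this, await_window_open() reduces to scanning the table for a window whose date matches today and whose contains() goes true.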

Logic of Operation

My goal with the logic was to map the real-world problem closely to the code, so that as we got a better idea of what was really required, a small change in requirements could be handled with a small change in code. Likewise, I tried to write it declaratively, so the code explained what it was doing along the way; that way other people would be able to make changes, and I could easily remember what it did months later.

This is the basic loop:

void loop(void)
{
    await_window_open();
  
    set_status(window_is_open);
    start_listening();
    while ( window_open() || test_switch_on() )
    {
        bool test_on = test_switch_on();
  
        if ( sound_is_on() || ! use_piezo )
        {
            set_status(cameras_are_firing);
  
            if (use_focus)
            {
                digitalWrite(focus_pin,HIGH);
                delay(focus_delay);
            }
            
            if ( test_on )
                test_pulses();
            else
                camera_pulses();
  
            if (use_focus)
            {
                digitalWrite(focus_pin,LOW);
            }
  
            set_status(cameras_are_waiting);
            start_listening();
        }
    }
    set_status(window_is_closed);
}
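The helpers called from the loop aren’t shown in the post. As one hypothetical example, camera_pulses() might ‘press’ the shutter line through the opto-isolator once per frame. Here is a sketch with desktop stand-ins for digitalWrite() and delay() so the pulse logic can be exercised off-hardware; the pin number, frame count, and delay values are made up, not the real sketch’s.

```cpp
#include <vector>

// Desktop stand-ins for the two Arduino calls, so the pulse logic can
// be checked without hardware. Each digitalWrite level is recorded.
static const int HIGH = 1, LOW = 0;
static std::vector<int> shutter_events;

static void digitalWrite(int /*pin*/, int level) { shutter_events.push_back(level); }
static void delay(unsigned long /*ms*/) { }  // no-op off-hardware

const int shutter_pin = 8;       // illustrative pin number
const int frames_per_burst = 5;  // illustrative frame count

// One plausible shape for camera_pulses(): press and release the
// shutter line once per still frame.
void camera_pulses(void)
{
    for (int i = 0; i < frames_per_burst; ++i)
    {
        digitalWrite(shutter_pin, HIGH);
        delay(250);                      // hold the shutter 'pressed'
        digitalWrite(shutter_pin, LOW);
        delay(750);                      // rest between frames
    }
}
```

A video-capture variant would instead hold the line HIGH for the whole window rather than pulsing it.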

Configuration

The original idea was for different kinds of cameras to be controlled by the same code with similar but slightly different logic. Some would need to wait for an attached piezo to detect the loud sound of the launch, others would not. Some would pulse the shutter line to capture multiple still photos, others would keep the shutter open the entire time to capture video. Some need a focus line ‘pressed’ first, others do not.

To handle this, the configuration lives in the config.h file, and a single #define selects which set of configuration parameters is used for that camera type. Those parameters are later tested in the loop logic to decide whether to use each feature.
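A hypothetical excerpt of what that config.h might look like. The flags use_piezo and use_focus come from the loop code above; the camera-type defines and the particular values are illustrative only.

```cpp
// Select exactly one camera type (illustrative names):
#define STILL_CAMERA
//#define VIDEO_CAMERA

#ifdef STILL_CAMERA
const bool use_piezo = true;   // wait for the sound of the launch
const bool use_focus = true;   // 'press' the focus line before the shutter
#endif

#ifdef VIDEO_CAMERA
const bool use_piezo = false;  // roll as soon as the window opens
const bool use_focus = false;  // video rig needs no focus press
#endif
```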

Unfortunately, the goals of readable code and high configurability collide somewhat, because the configuration if’s clutter up the logic too much.

EEPROM Logger

One of my biggest concerns for this project was testing, to make sure it would work flawlessly come launch day. Happily, it did, and everything went according to plan. Yay! One strategy for testing the unit was for Bob to perform his own field tests in conditions as close to real life as possible. As much as I could test it at my desk, there’s no substitute for Bob using his own expertise to simulate what he’d really be putting it through.

The problem is, he’s a thousand miles away, so if something goes wrong, how can I even tell? Thus was born the idea of creating a log in EEPROM. Every time the system does something, it’s logged in EEPROM in a compressed form. Then, when the unit is started again, the data is reported back out in a human-readable form. That way, the unit can be tested far away from a computer, and we can see what went wrong later back in the lab.
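The post doesn’t describe the record format, but the idea can be sketched like this: pack each event into a small fixed-size record (an event code plus the RTC time) and write records to successive EEPROM addresses. Here a byte array stands in for the EEPROM, and the codes and layout are illustrative, not the real sketch’s.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical compressed log record: one event code plus the RTC time.
struct log_record
{
    uint8_t  event;     // e.g. 1 = window open, 2 = cameras firing
    uint32_t seconds;   // RTC seconds when the event happened
};

// A small array stands in for the Arduino's EEPROM here.
static uint8_t eeprom[64];
static int eeprom_cursor = 0;

// Append one record at the next free EEPROM address.
void log_event(uint8_t event, uint32_t seconds)
{
    log_record rec = { event, seconds };
    std::memcpy(eeprom + eeprom_cursor, &rec, sizeof(rec));
    eeprom_cursor += sizeof(rec);
}

// Read record #index back out, e.g. for the human-readable startup dump.
log_record read_event(int index)
{
    log_record rec;
    std::memcpy(&rec, eeprom + index * (int)sizeof(log_record), sizeof(rec));
    return rec;
}
```

On startup, the sketch would walk the records and print each one over serial in readable form.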

Lessons Learned

Along the way, I learned a few things that I’ve since incorporated into the next version. We’re testing that one right now in preparation for a future shoot, and someday I’ll write it all up too. The code for the new version is on github as TheCameraMachine.

Parallel logic

Having a single code loop was too confining. If something happens during a delay(), nothing can react to it. Moreover, a single code loop broke my basic principle of mapping the solution closely to the problem. In real life, multiple things can happen at once, which became increasingly hard to represent in a single loop.

As a solution, I settled on a set of interconnected objects, each of which represents a single real-world component. So there’s an object for the test switch, an object for the focus line, an object for each LED, etc. Each object has its own state, its own logic, and its own way of communicating with other objects. This became my Tictocs library.

Better testing

Even though the unit was successful in the field, I was never happy with the level of testing it got. At the time I wrote it, I had not yet figured out how to implement test-driven development on Arduino. Also, it’s burdensome to test on real hardware after every little change, because sometimes the real hardware was not available, and very subtle bugs are hard to debug there.

The solution to this problem turned out to be writing a Native Core. This allows me to compile the same code which will run on Arduino, yet run it on my Mac or Linux machine. It gives me a command shell where I can set or monitor pins, watch the serial output, monitor the EEPROM, etc. This way I could test out new logic quickly from the comfort of my laptop wherever I happened to be. And I can run the debugger!

Best of all, since the code was compiling on the Mac or on Linux, I could run the usual suite of unit testing tools on it, and finally develop using TDD on Arduino. Happiness.

More general logger

While the logger was quite nice and useful, it felt a bit too specialized. It seemed like the principles of the logger for this sketch could be applied to just about anything. Around the same time, I started using the Tictocs library more, and discovered that I was writing the same printf’s for debugging in the same places in every sketch. Hmm.

So the solution turned out to be quite simple. I abstracted the camera controller logging into the Tictocs library, so now it simply logs every communication between objects in the entire system. Because the objects are so fine-grained, they have to communicate with another object to accomplish anything, so logging the chatter between objects turned out to give plenty of information about what’s happening in the system. Then the best part is that any sketch built with Tictocs automatically gets logging as a benefit.
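The trick can be sketched in a few lines (again, an illustration rather than Tictocs’s real code): if every object routes its communication through a single emit() call, one log statement at that choke point traces the entire system.

```cpp
#include <string>
#include <vector>

// Captured log lines; on the Arduino these would go out over serial.
static std::vector<std::string> chatter;

struct Object
{
    const char* name;
    std::vector<Object*> listeners;
    Object(const char* n): name(n) {}
    void connect(Object* o) { listeners.push_back(o); }
    void emit(const char* signal)
    {
        for (size_t i = 0; i < listeners.size(); ++i)
        {
            // The one line of logging that covers every interaction
            // between objects in the whole system.
            chatter.push_back(std::string(name) + " -> "
                              + listeners[i]->name + ": " + signal);
            listeners[i]->on_signal(signal);
        }
    }
    virtual void on_signal(const char*) {}
    virtual ~Object() {}
};
```

Because no object can accomplish anything without emitting to another, this single hook yields a complete trace for free in every sketch built on the library.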
