1. Introduction
Digital cameras automatically record the capture date and time of every image and video file. Such time-aware imagery is leveraged throughout science and society, from forecasting weather and monitoring global change from space to solving crimes with webcams (e.g. http://www.whatdotheyknow.com/request/scotland_yard_internal_cctv_repo/), retracing city visitation patterns through internet photograph collections (e.g. Crandall and others, 2009) and broadcasting plant phenophases from smartphones (e.g. Graham, 2010). In glaciology, the recognized importance of short-term variability motivates ever higher-frequency observations. Re-photography pairs, traditionally used to document glacier retreat over years to centuries (e.g. http://nsidc.org/data/glacier_photo/), are increasingly complemented by time-lapse cameras imaging the continual evolution of ice dynamics and surface conditions (e.g. Ahn and Box, 2010; Chapuis and others, 2010; Dumont and others, 2011; http://data.eol.ucar.edu/codiac/dss/id=106.377). Recent investigations of calving source mechanisms have demanded second- to subsecond-frequency time-lapse and video sequences, synchronized to GPS, seismic and other instrumental records (e.g. Amundson and others, 2010; Bartholomaus and others, 2012; Walter and others, 2012).
Although advanced camera systems, such as those used by research satellites and astronomical observatories, are meticulously calibrated against a known time source, most digital cameras were never intended for precise temporal observation. Nevertheless, consumer-grade digital cameras are widespread and provide generally excellent image quality, and many tools have been developed for analyzing and interpreting the resulting images. Szeliski (2010) provides an overview of state-of-the-art computer vision applications. These circumstances suggest that a better understanding of the timekeeping limitations of these cameras, together with accessible calibration procedures that extend their application and reliability as scientific instruments, would benefit not just glaciologists but the broader scientific community. Whether the required accuracy is subsecond or on the order of minutes or more, confidently matching observations made by a camera to other time-aware datasets requires careful evaluation and calibration of the camera’s internal clock.
In this paper, we discuss the timekeeping limitations of consumer-grade camera and reference clocks and demonstrate an optimized and accessible approach to calibrating cameras to a reference for absolute timing. Any use of trade, firm or product names is for descriptive purposes only and does not imply endorsement by the US Government. Scripts implementing many of the steps are provided as supplementary material at www.igsoc.org/hyperlink/12j126/. Two time-critical applications are presented: (1) using camera positions, interpolated from GPS tracklogs, as geodetic control in aerial photogrammetric surveys (Section 5.1); and (2) correlating high frame-rate observations of glacier-calving events to synchronous seismic waveforms (Section 5.2).
2. Limitations of Camera Clocks
Any long-term consumer-grade camera deployment seeking second to minute accuracy while relying on the camera’s internal clock will need to account for the magnitude and variability of the clock’s intrinsic drift (Section 2.1). For subsecond-critical applications, two additional factors should be considered: whether and with what resolution a camera reports subsecond decimals (Section 2.2), and the true precision of the reported timestamps (Section 2.3).
2.1. Drift
Camera clocks drift and the drift can be substantial: daily subsecond to second drift can accumulate to multi-minute offsets within a few months. Clock drift varies between cameras of the same make and model and, although otherwise highly linear, is especially sensitive to changes in temperature, as expected of any circuit-integrated oscillator (Sundaresan and others, 2006).
Table 1 lists mean clock drift and temperature for a variety of cameras and treatments. The Nikon D200 (#1) and Canon 40D digital single lens reflex (DSLR) cameras were kept in a heated indoor space and compared repeatedly against Coordinated Universal Time (UTC) over 31 and 76 days, respectively. The weighted least-squares linear fits confirm that, at near-constant temperature, clock drift is indistinguishable from linear over month timescales (Fig. 1). In contrast, the drift rates of the Nikon D300S and Nikon D200 (#2), deployed year-round at Columbia Glacier, Alaska, vary by a factor of four between summer and winter. Although drift rates are specific to the individual camera, the pro-level Canon 5D Mark II and Nikon D2X are the best performers by a wide margin, suggesting that some camera models may benefit from substantially superior clock hardware and design.
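For readers wishing to reproduce such fits, the following Python sketch estimates a drift rate from repeated camera–reference offset measurements by weighted least squares; the measurement values shown are hypothetical, not those of Table 1.

```python
import numpy as np

# Minimal sketch of a weighted least-squares linear fit of camera clock drift.
# The days, offsets and uncertainties below are hypothetical example values.
days = np.array([0.0, 7.0, 14.0, 31.0])        # days since the first comparison with UTC
offsets = np.array([0.00, 0.42, 0.87, 1.90])   # camera minus reference time (s)
sigma = np.array([0.05, 0.05, 0.05, 0.05])     # per-measurement uncertainty (s)

# np.polyfit takes weights proportional to 1/sigma for Gaussian errors
drift_rate, intercept = np.polyfit(days, offsets, 1, w=1.0 / sigma)
print(f"drift: {drift_rate:+.3f} s d-1")
```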
2.2. Resolution
Digital cameras record image-capture times following the Exchangeable image file format (Exif) standard (http://www.exif.org/Exif2-2.PDF). The DateTimeOriginal tag contains the year, month, day, hour, minute and second of original data generation. Subsecond decimals, if reported, are written to the SubSecTimeOriginal tag. While no unifying standard exists for videos, capture start times can typically be found within equivalent video file tags or as Exif in accompanying image thumbnails. Whether and at what resolution a camera records subsecond decimals is of foremost concern to subsecond applications.
A survey of the camera, camcorder and camera phone models of 17 leading manufacturers, using photo-sharing and camera-review websites, found only a handful of Nikon DSLR, Canon DSLR, Kodak EasyShare compact and Nokia phone cameras implementing the SubSecTimeOriginal tag (see supplementary material at www.igsoc.org/hyperlink/12j126/12j126Sect2.2.pdf). Furthermore, most of the cameras that do report subsecond information do so in a manner inconsistent with expectation. Figure 2 compares frequency distributions of the SubSecTimeOriginal tag (a value ranging from 0 to 99 × 10−2 s) for all capable Nikon and Canon DSLR camera models, compiled from thousands of user-submitted photographs on the photo-sharing website Flickr (www.flickr.com). All Nikon (Fig. 2a–e) models record subsecond time with an effective resolution coarser than the expected 10−2 s, whether by clipping (Fig. 2a), rounding (Fig. 2b–d) or subtly favoring values at discrete intervals (Fig. 2e). For most Canon (Fig. 2f–h) models, the SubSecTimeOriginal tag serves only to distinguish between images taken in rapid succession and otherwise strongly favors 0 (Fig. 2f) or 3 (Fig. 2g). Only the Canon 5D Mark II and Canon 500D approximate a uniform distribution (Fig. 2h) – the ideal behavior we would expect.
2.3. Precision
Although a camera may report subsecond decimals with high resolution, the true precision of the image times may be much less. Figure 3 compares the clocks of a Nikon D2X and Canon 5D Mark II against UTC over a 40 min sample period. Although the SubSecTimeOriginal tag of the Canon 5D Mark II has a resolution approaching 0.01 s (Fig. 2h), the finest of any model evaluated in Section 2.2, the timestamps reported by the camera deviated from UTC by as much as 0.8 s. The Nikon D2X timestamps, in contrast, had a precision on par with the model’s ∼0.08 s resolution (Fig. 2e).
The sources of such errors are unknown. Camera manufacturers are reluctant to disclose engineering details, considering them trade secrets (personal communication from Canon Professional Services, 2010, 2011; personal communication from Nikon Support, 2010, 2011). Given the potential for substantial precision loss, the consistency of reported capture times should be evaluated for any camera considered for time-critical applications.
3. Limitations of Reference Clocks
Although relative timing may be sufficient, in most situations comparison to UTC will be desired for absolute timing. Possibly the most accessible UTC reference is the Web Clock (www.time.gov), provided as a free service by the US National Institute of Standards and Technology (NIST) and the US Naval Observatory (USNO). The online applet prints the current time at 1 s resolution, announcing the beginning of each new second (or ‘second rollover’), alongside the calculated accuracy, typically 100 ms on a broadband connection. Alternatively, the NIST Automated Computer Time Service driving the Web Clock may be queried directly by analog modem to stream second rollovers with 5–20 ms accuracy in a computer terminal (Levine and others, 2002). A subsecond resolution display can be achieved by synchronizing a computer clock to UTC over the internet via the Network Time Protocol (NTP) and streaming the system time in a terminal. The accuracy, typically 5–100 ms over the public internet (http://www.ntp.org/ntpfaq/NTP-salgo.htm#Q-ACCURATE-CLOCK; http://www.eecis.udel.edu/∼mills/exec.html), is largely a function of the stability and reciprocity of the connection to the chosen NTP time servers, as well as the synchronization distance of those servers to a stratum-0 (reference) device (e.g. atomic clock), the sophistication of the software used and the refresh rate of the computer monitor.
GPS satellites each maintain four onboard atomic clocks. GPS receivers, once a position lock is achieved, keep time internally to nanoseconds (Allan and others, 1997). However, consumer-grade GPS receivers print time to their screens with as little as 1 s accuracy because, on some models, the display subroutines are given lower priority by the software and the single CPU (personal communication from Garmin Engineering, 2010). Figure 4 demonstrates this issue by comparing the time displayed by two handheld GPS units (DeLorme PN-40 and Garmin eTrex Vista HCx) against an NTP-synchronized Unix system clock (using photographs as shown in Fig. 5).
The Red Hen Blue2Can included in the study has no time display; rather, it communicates by wireless Bluetooth signal with an external GPS to retrieve the time (and position) of each image capture. The GPS refreshes its broadcast of time and position only once per second, and this information is passed to the Blue2Can and the camera with additional latency (personal communication from Red Hen Systems, 2011). The GPS date and time (including subsecond decimals) are written to the image Exif tags GPSDateStamp and GPSTimeStamp. In practice, the time associated with an image precedes capture by about 1 s, and potentially much more if the GPS signal is lost. Equivalent errors arise in cameras equipped with onboard GPS, with the added concern that the user may not be notified of signal loss. Furthermore, most models currently do not write subseconds to the GPSTimeStamp tag.
Consumer radio clocks, which synchronize to terrestrial radio time signals, are limited by the temporal accuracy of their displays, may not issue a warning when the time signal is lost and usually synchronize only periodically to the radio signal, otherwise relying on the oscillator inside the device (Gust and others, 2009). Although insufficient for subsecond applications, the 1 s precision of consumer GPS and radio clock displays is adequate for many situations and can (and should) be used to measure the large offsets that accumulate from nonlinear drift during extended field deployments.
4. Calibration of Camera Clocks
The evaluations of camera-clock precision and drift presented in Section 2 relied on comparing the camera clocks with a calibrated reference. In this section, we describe a suite of methods for measuring the offset between camera time and the time displayed by a reference clock. The underlying principle of our approach is that from a picture of the reference taken with the camera of interest (e.g. Fig. 5) the offset can be calculated as

offset = T_c − T_r,    (1)

where T_c is the capture time of the image as reported by the camera and T_r is the time displayed by the reference clock in the image. If recording video, the capture time of each frame is calculated by multiplying the frame number by the frame interval (the reciprocal of the video frame-rate) and adding the result to the reported capture start time of the video clip.
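As a simple illustration of Eqn (1) and of the frame-time calculation for video, consider the following Python sketch; the clip start time, frame number and reference reading are hypothetical values.

```python
from datetime import datetime, timedelta

def frame_capture_time(clip_start, frame_number, frame_rate):
    """Capture time of a video frame: clip start plus frame number times the frame interval."""
    return clip_start + timedelta(seconds=frame_number / frame_rate)

def clock_offset(t_camera, t_reference):
    """Eqn (1): offset = T_c - T_r, in seconds."""
    return (t_camera - t_reference).total_seconds()

# Hypothetical example: frame 150 of a 29.97 frames/s clip that started at 12:00:00
t_c = frame_capture_time(datetime(2010, 5, 25, 12, 0, 0), 150, 29.97)
t_r = datetime(2010, 5, 25, 12, 0, 4, 600000)   # reference clock read off that frame
print(clock_offset(t_c, t_r))                   # camera leads the reference by ~0.4 s
```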
4.1. Subsecond: both camera and reference
In the simplest case, both T_c and T_r include subsecond components and Eqn (1) yields a subsecond resolution measurement of the offset between the camera and reference clocks. However, a subsecond offset can be estimated even when one or both of the devices report only second rollovers. In these cases, a multi-second sequence of images of the reference clock is taken at the camera’s maximum frame-rate, using an exposure time shorter than the frame interval. To avoid buffer overrun in writing to memory, which can lead to slowing frame-rates in still image sequences and dropped frames in videos (and consequently to a loss in the precision of the offset measurement), low-resolution output and fast memory cards are recommended for these procedures. The following subsections describe how to analyze the resulting image sequence to calculate a subsecond offset and an error estimate. Although we simply read the reference time off the photographs, a character-recognition algorithm could be developed to automate the procedure.
4.2. Subsecond: either camera or reference
In the case of a 1 s resolution reference clock and subsecond resolution camera clock, calculating the offset at each image yields a repeating pattern, such as the example from a Nikon D200 in Table 2. As a result of the stepwise nature of the reference clock sequence, the offset will reach a local maximum a_i at the frame preceding each second rollover and a local minimum b_i at the frame following each second rollover. Since the true second rollover occurs at a time between each [a_i, b_i] pair, it follows that the offset can be no more than the smallest b_i and no less than the largest a_i minus 1 s, i.e.

max(a_i) − 1 s ≤ offset ≤ min(b_i).    (2)
For the sequence in Table 2, in which the true offset is constrained to the overlapping [a_i − 1 s, b_i] intervals [0.26 s, 0.51 s], [0.34 s, 0.50 s] and [0.32 s, 0.49 s], max(a_i) − 1 s = 0.34 s and min(b_i) = 0.49 s, yielding the interval intersection 0.34 s ≤ offset ≤ 0.49 s, or more explicitly offset = 0.415 ± 0.075 s. In the case of only one second rollover in the sequence, the range of the offset is equal to the time between the single [a − 1 s, b] pair, by definition the frame interval of the camera between those two images. Sampling a greater number of second rollovers drives the range of the offset estimate towards zero by increasing the probability that max(a_i) − 1 s ≈ min(b_i). If we assume a uniform probability distribution function over the full range (that is, we assume the camera is being triggered randomly and we ignore any sampling bias due to the camera’s regular frame-rate), the 95% confidence interval (CI) for the example above is ±0.073 s. In the reverse scenario, that of a subsecond resolution reference clock and 1 s resolution camera clock, the offset T_c − T_r reaches a local maximum a_i at the frame following each second rollover and a local minimum b_i at the frame preceding each second rollover. In this case the true offset can be no less than the largest a_i and no more than the smallest b_i plus 1 s, i.e.

max(a_i) ≤ offset ≤ min(b_i) + 1 s.    (3)
Although ignored here for clarity, in practice both clocks have finite precision, and this imprecision needs to be added to the error estimate. This is made especially clear in cases when max(a_i) − 1 s > min(b_i).
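A minimal Python sketch of the bounds in Eqn (2) follows; rather than the actual Table 2 data, it simulates a hypothetical camera with a true offset of 0.415 s photographing a 1 s resolution reference clock every 0.205 s.

```python
import math

def offset_bounds(offsets):
    """(lower, upper) bounds on offset = T_c - T_r from a burst spanning >= 1 rollover."""
    a, b = [], []           # a: frame preceding each reference rollover; b: frame following
    for prev, curr in zip(offsets, offsets[1:]):
        if curr < prev:     # measured offset drops by ~1 s when the reference rolls over
            a.append(prev)
            b.append(curr)
    return max(a) - 1.0, min(b)

true_offset, frame_interval = 0.415, 0.205
camera_times = [100.0 + i * frame_interval for i in range(20)]        # camera timestamps (s)
measured = [t - math.floor(t - true_offset) for t in camera_times]    # T_c minus whole-second T_r
low, high = offset_bounds(measured)
print(f"{low:.3f} s <= offset <= {high:.3f} s")   # brackets the true offset of 0.415 s
```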
4.3. Subsecond: neither camera nor reference
In the case that both the camera and reference lack subsecond reporting, a more careful analysis is needed. Each time either the camera or reference clock rolls forward one second, the offset alternates between two consecutive values (e.g. −1 and 0, or 2 and 3), yielding a periodic binary integer sequence. Stripping subseconds from the offset sequence in Table 2 yields the following integer sequence: 0 1 1 | 0 0 0 1 1 | 0 0 0 1 1 | 0, where | denotes a second rollover by the (trailing) reference clock. In this example, the repeating five-frame pattern indicates that the camera fired at ∼5 frames s−1, or 1 frame every 0.2 s, a result that agrees well with the average spacing between subsecond camera times in Table 2 (0.205 s) and the advertised ‘5 frames per second’ of the Nikon D200. Since the pattern is consistent over two full cycles, or ten frames, the apparent mean frame interval f over that period could not differ from this estimate by >10%, i.e. f = 0.2 ± 0.02 s. The use of ‘apparent’ must be stressed because, besides variations in the frame-rate of the camera, errors in the second rollover timing of both the reference and camera clocks can alter the sequence. In practice, since errors by both clocks may mask one another, the combined precision of the two clocks should be used instead if known to be coarser. Since the camera clock led the reference clock by 1 s for two frames each cycle (n = 2), the offset can be no smaller than (n − 1)f (the camera clock advanced immediately before the first image in the cycle was taken, and the reference clock advanced immediately after the second or nth image was taken) and no larger than (n + 1)f (the camera clock advanced immediately after the first image in the cycle was taken, and the reference advanced immediately before the third or (n + 1)th image was taken). This scenario for n = 2 is depicted in Figure 6. The (fully bounded) offset can be expressed more generally as

offset = min(o_i) + nf ± (f + df),    (4)
where min(o_i) is the smallest or most negative of the offset measurements o_i, n is the average number of consecutive o_i larger than min(o_i) within a full cycle in the sequence, f is the mean frame interval and df is the uncertainty in f.
Unlike in Section 4.2, where we assumed a uniform distribution over the range of the offset, the probability distribution determined by this method is that of an upright isosceles triangle (Fig. 6c). The likelihood of a given offset decreases linearly with distance from the mean (where the cumulative range of compatible pairs of camera and reference rollovers is maximized), until it approaches zero probability at a distance (f + df) from the mean (where the range of compatible camera and reference rollovers narrows to zero). Given this probability distribution, the 95% confidence interval is given by

CI_95 = ±(f + df)(1 − √0.05).    (5)
In our example, where min(o_i) = 0, n = 2, f = 0.2 s and df = 0.02 s, this yields offset = 0.40 ± 0.17 s, which agrees with the result derived in Section 4.2, 0.415 ± 0.073 s.
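The following Python sketch implements Eqns (4) and (5) for the worked example above; the run-counting logic is one possible interpretation of how n is computed and is not taken from the supplementary scripts.

```python
import math

def offset_from_integer_sequence(offsets, f, df):
    """Estimate offset and 95% CI from whole-second offsets o_i = T_c - T_r.

    f is the mean frame interval (s) and df its uncertainty; n is the mean number
    of consecutive offsets per cycle that exceed the minimum offset.
    """
    o_min = min(offsets)
    runs, count = [], 0                 # lengths of runs of offsets above the minimum
    for o in offsets:
        if o > o_min:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    n = sum(runs) / len(runs)
    estimate = o_min + n * f                       # Eqn (4): centre of the triangular distribution
    ci95 = (f + df) * (1 - math.sqrt(0.05))        # Eqn (5): 95% CI half-width
    return estimate, ci95

# Worked example from the text: 0 1 1 0 0 0 1 1 0 0 0 1 1 0, f = 0.2 s, df = 0.02 s
print(offset_from_integer_sequence([0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0], 0.2, 0.02))
# -> approximately (0.40, 0.17)
```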
5. Case Studies
We present two glaciological case studies where accurate knowledge of image acquisition time was critical: georeferencing aerial photogrammetric surveys at Columbia Glacier with in-flight camera positions time-interpolated from GPS tracklogs (Section 5.1); and synchronizing high frame-rate observations of iceberg-calving events to the resulting seismic waveforms at Yahtse Glacier, Alaska (Section 5.2). The case studies are included only to highlight some of the (possibly many) applications for calibrated camera clocks; therefore, lengthy analyses of the scientific results are not conducted here.
5.1. Accurate geotagging for DEM creation
Leading image-based approaches to three-dimensional (3-D) modeling (reviewed by Hartley and Zisserman, 2004) are now capable of performing automated scene reconstruction on even the largest and most poorly documented image collections: for instance, rebuilding Rome from thousands (Agarwal and others, 2009) to millions (Frahm and others, 2010) of street-level photographs. This accomplishment hinges on flexible structure-from-motion (SfM) algorithms, which triangulate, from overlapping images taken from multiple perspectives (‘motion’), both the relative camera geometry and the 3-D scene (‘structure’) that gave rise to the images. Therefore, although conventional ground control can be (and is best) used to scale and orient the resulting model to absolute coordinates (e.g. Dowling and others, 2009; Ployon and others, 2011), surveyed camera positions are theoretically sufficient – a significant advantage for measurement programs where fixed bedrock control is unavailable due to topographical or logistical constraints.
On 25 May 2010, airborne vertical stereo photographs (0.40 m ground resolution) were acquired over Columbia Glacier, positioned and oriented using previously surveyed ground control, and processed to a 2 m accuracy digital elevation model (hereafter the conventional DEM). Same-day oblique imagery was acquired with a Nikon D2X from the window of a second small aircraft flying the path shown in Figure 7 at a mean distance of 1.4 km above the glacier surface. A DeLorme PN-40 handheld GPS logged a position every second, equivalent to 32 m nominal point spacing at the average flight speed of 114 km h−1. At such velocities, time-interpolating accurate camera positions from the tracklog requires subsecond calibration of the camera clock. The tracklog is believed to be tightly coupled to the internal rather than displayed time of the GPS device (the information required to confirm this assumption is not available from the manufacturer at this time) and therefore calibration to UTC was performed. At the onset of the 1 hour photograph acquisition period, we captured a sequence of images of the GPS, which later revealed that the camera lagged behind the GPS clock display by 2.485 ± 0.054 s (95% CI). Following our return from the field, we compared the GPS display to an NTP-synchronized computer clock with millisecond accuracy and found that, in 151 second rollovers over 2 months, the GPS display lagged behind UTC by 0.173 ± 0.095 s (95% CI). A correction of +2.658 ± 0.109 s (95% CI) should thus provide the best agreement with the GPS tracklog. Camera clock drift was ignored due to the short photograph acquisition period and the small nominal drift of the Nikon D2X used (Table 1; −0.044 ± 0.004 s d−1).
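The arithmetic behind the quoted correction can be summarized in a few lines of Python; the assumption that the two 95% confidence intervals combine in quadrature is ours, but it reproduces the ±0.109 s quoted above.

```python
import math

camera_vs_gps = (2.485, 0.054)   # camera lagged the GPS display (s, 95% CI)
gps_vs_utc = (0.173, 0.095)      # GPS display lagged UTC (s, 95% CI)

correction = camera_vs_gps[0] + gps_vs_utc[0]         # total camera-to-UTC correction
ci95 = math.hypot(camera_vs_gps[1], gps_vs_utc[1])    # independent errors combined in quadrature
print(f"+{correction:.3f} ± {ci95:.3f} s")             # +2.658 ± 0.109 s
```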
The photographs from our aerial survey were first processed with the open-source SfM package Bundler (http://phototour.cs.washington.edu/bundler/) to simultaneously calculate a sparse, relatively oriented point cloud (964 075 points) and the corresponding camera positions, orientations and lens calibration parameters. To test the time calibration estimate, absolute camera positions for all 383 images were linearly time-interpolated from the GPS tracklog for a range of camera time corrections. The SfM-computed and GPS-interpolated camera positions were then used to calculate a least-squares seven-parameter (Helmert) transformation (Challis, 1995) for orienting and scaling the SfM model. The root-mean-square (rms) 3-D distance between the transformed model camera positions and the time-interpolated GPS camera positions reaches a minimum (13.09 m) at +2.69 s, within the expected range of the camera time correction (Fig. 8, bold line). In this case, precise knowledge of the camera–UTC offset markedly improves the agreement of the GPS positions with the relative camera geometry computed by SfM.
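Conceptually, the interpolation step can be sketched as follows in Python; the function name and the assumption of a regularly sampled tracklog are ours, and the Helmert fit itself is omitted.

```python
import numpy as np

def interpolate_positions(track_times, track_xyz, image_times, correction):
    """Linearly interpolate tracklog positions (N x 3) at image times shifted by a clock correction (s)."""
    t = np.asarray(image_times) + correction
    return np.column_stack([np.interp(t, track_times, track_xyz[:, k]) for k in range(3)])

# A sweep like that of Fig. 8 would, for each candidate correction, fit a Helmert
# transformation between the SfM camera positions and interpolate_positions(...)
# and retain the correction that minimizes the rms 3-D residual.
```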
The rms elevation error between the conventional DEM and the transformed SfM point cloud exhibits an equivalent but lower-amplitude relationship to the camera time correction (Fig. 8, thin line). Since elevation differences depend on local slope, they offer a less sensitive confidence measure (e.g. an XY error in the transformation would result in no vertical error wherever the ground is flat, in error contours wherever the ground is sloped, and in randomly distributed error spikes over glacier crevasses). To better reflect the accuracy over the glacier surface, poorly constrained SfM points in the glacier forebay (occupied by ice melange) and on the peaks above 600 m (13% of total points) were discarded. Furthermore, the SfM elevations were corrected for a systematic −8.02 m bias (determined from bedrock regions of known elevation) likely associated with the single-frequency handheld GPS. The rms elevation error between the conventional DEM and the transformed SfM point cloud reaches its minimum of 6.10 m at +2.69 s, providing a second independent assessment of the camera time correction. The errors are nearly evenly distributed (mean −0.77 m) and could be due largely to differences in crevasse depth penetration resulting from the varying viewing angles of the oblique images or the smoothing algorithms used by Inpho Match-T in computing the conventional DEM.
Automated software packages, both commercial and open-source, are becoming available (e.g. Agisoft Photoscan: http://www.agisoft.ru/products/; VisualSFM: http://homes.cs.washington.edu/∼ccwu/vsfm/), increasing the accessibility of the SfM method. Adoption of SfM technology and application of camera time calibration methods have enabled us to produce DEMs of comparable accuracy to conventional vertical photogrammetry using only oblique photographs and a consumer-grade tracklog of camera position.
5.2. Icequake source mechanisms
Since iceberg calving was first identified as a source of seismic energy (Qamar, 1988), glaciologists have been attempting to identify the specific sources of that energy. These studies have largely been motivated by attempts to learn about and predict calving (i.e. develop ‘calving laws’) or to remotely monitor calving fluxes for mass-balance and dynamical studies of tidewater glaciers. Various portions of the calving process have been proposed as the sources of calving seismicity, including hydrofracture and resonating water-filled cracks (O’Neel and Pfeffer, 2007), basal slip (e.g. Wolf and Davies, 1986) and the rotation and terminus push-off of large icebergs (Amundson and others, 2008; Tsai and others, 2008).
In the present case study, we use the camera time calibrations developed in Section 4.3 to synchronize video of iceberg-calving events with seismograms, allowing us to draw correspondences between the directly observable calving process and the seismic record. Our efforts to constrain the seismic source of calving took place at Yahtse Glacier, an advancing tidewater glacier on the Gulf of Alaska (60.15° N, 141.38° W). Video was recorded with a Canon EOS 7D at 29.97 frames s−1. We tested two commercially available standards for absolute timing by placing them together within the video: a wall clock synchronized to radio signal WWVB (Radio Shack, model 63-247) and a handheld GPS (Garmin eTrex Vista HCx). In 13 comparisons of 3–5 s each, we found that the radio clock lagged the GPS display by 0.56 ± 0.57 s. Following our return from the field, we compared the GPS to an NTP-synchronized computer clock with millisecond accuracy. In 14 measurements over 2 months, we found that the GPS lagged behind UTC by 0.01 ± 0.44 s. We therefore selected the GPS unit as our time standard and used it to synchronize each of our video clips to UTC. We conservatively assume that our video is accurate to within 1 s.
Figure 9 presents a sequence of video frames from a typical calving event with contemporaneous seismic data (the full video is available as supplementary material at www.igsoc.org/hyperlink/12j126/12j126Fig9.mov). At a distance of 1.8 km from the calving event, the digitizer of the broadband seismometer is connected to a GPS antenna and is synchronized to UTC at the time of recording. On the same amplitude scale, we present two passbands of vertical channel data, 1–5 and 5–50 Hz. Waveforms were filtered using a four-pole Butterworth filter that does not artificially offset the waveform timing, and the timing of the seismogram was adjusted to account for the travel time from the source to the receiver. We assumed a velocity of 1.9 km s−1, the velocity measured for the peak amplitude as it moved through a local network of seismometers.
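A zero-phase filter and travel-time shift of the kind described above could be sketched in Python as follows; the sampling rate, and the use of scipy's forward-backward (filtfilt) filtering as a stand-in for the exact four-pole, phase-preserving filter used here, are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate (Hz)
data = np.zeros(60 * int(fs))                # placeholder for the vertical-channel seismogram

# Forward-backward filtering of a 2-pole Butterworth leaves the waveform timing unshifted
b, a = butter(2, [1.0, 5.0], btype="bandpass", fs=fs)
low_band = filtfilt(b, a, data)              # 1-5 Hz passband

travel_time = 1.8 / 1.9                      # source-receiver distance (km) / velocity (km/s)
time = np.arange(data.size) / fs - travel_time   # shift times so the seismogram aligns with the source
```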
These methods reveal that, in this case, the largest-amplitude seismic signals are at relatively low frequencies (<5 Hz) and are best associated not with ice fracture but with the splash of water seen erupting from sea level ∼9 s after the calving event initiates. This result, and those from similar videos, indicates that calving seismicity is greatly influenced by calving style (i.e. submarine or subaerial calving) and how far from its neutrally buoyant position an iceberg is released (Bartholomaus and others, 2012). Seismic methods are best suited to monitor iceberg-calving rates when the calving process is energetic, as is the case for subaerial events and submarine events released at great depths. Shallowly released submarine calving events, including some of the largest events at Yahtse Glacier, generate only gentle splashes and thus only weak seismic waves.
Owing to the 15–20 s duration of this and many other calving events at Yahtse Glacier, we have been able to correlate the visual record of calving to the seismic record with an accuracy of ∼1 s. In the absence of clock synchronization, there would be no rigorous method for relating the mechanical calving sequence to seismic signals, hampering further analysis of calving icequakes. If care were taken to better calibrate the timing of the video sequences, more might be learned about the connections between high-frequency seismicity and ice fracturing, particularly at the initiation of the calving event (0 ≤ t ≤ 6 s in this example). However, the 0.5 s dominant period of this seismogram and uncertainties pertaining to the wave travel time would still hinder some attempts at higher-accuracy analyses.
6. Sample Code
The suite of perl and bash scripts available as supplementary material at www.igsoc.org/hyperlink/12j126/12j126sect6/ provides the basic tools needed for evaluating camera–clock offset, drift, subsecond resolution and precision, and for subsequently removing measured offset and drift from image-capture times. The code package requires the ExifTool command-line application and perl library (http://www.sno.phy.queensu.ca/∼phil/exiftool/) for reading and writing Exif, and the ImageMagick command-line application (http://www.imagemagick.org) for stamping capture date and time onto images. The functions are introduced below; complete documentation can be found as comment blocks inside the code.
local_time.pl prints out the system time at the specified interval and resolution. When run on a computer calibrated to a stable time source, this script provides a high-accuracy, subsecond resolution reference clock for measuring camera-clock precision, offset and drift.
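A minimal Python equivalent of this behavior, assuming the script simply prints the system time at a fixed interval and resolution, is:

```python
import time

def stream_local_time(interval=0.05, decimals=2):
    """Print the system clock every `interval` seconds with `decimals` subsecond digits."""
    while True:
        now = time.time()
        frac = int((now - int(now)) * 10**decimals)
        stamp = time.strftime("%Y:%m:%d %H:%M:%S", time.localtime(now))
        print(f"{stamp}.{frac:0{decimals}d}", flush=True)
        time.sleep(interval)

# stream_local_time()  # run on an NTP-synchronized computer and photograph the display
```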
extract_time.pl reads the DateTimeOriginal and SubSecTimeOriginal tags from all image files in the specified directory, writing the results alongside image filenames to a tab delimited text file in the formats ‘YYYY:MM:DD hh:mm:ss’ and ‘00’, respectively.
timestamp.sh stamps the DateTimeOriginal and SubSecTimeOriginal values, formatted as ‘YYYY/MM/DD hh:mm:ss.ss’, onto new versions of all jpeg images in the specified source directory. Timestamped images can help to evaluate a camera’s clock against reference clocks photographed with the camera.
embed_time.pl copies the values of the DateTimeOriginal and SubSecTimeOriginal tags to the XMP Description tag as ‘<DateTimeOriginal = YYYY:MM:DD hh:mm:ss[.ss]>’ for all jpeg images in the specified directory. This backup of the original capture date and time to another metadata field is recommended before any adjustments are applied so that the information can be restored with the reverse function restore_time.pl if an error is later realized.
adjust_time.pl adjusts the capture date and time of all jpeg images in the specified directory according to the camera-specific drift, the camera–reference offset and the camera date and time at which the specified offset was measured. The original DateTimeOriginal and SubSecTimeOriginal values are overwritten with the new adjusted values.
7. Future Work
Although low in cost, hardware requirements and power consumption, the methods presented here are labor-intensive and in some situations only partial solutions. For instance, we provide a script to correct for the mean clock drift calculated from two measurements of camera–reference offset bounding the period of interest, but accounting for undocumented deviations from this mean due to temperature modulations would require both a temperature record spanning the camera’s deployment and a physically or empirically derived model of the clock drift’s temperature dependence. Ultimately, continuous calibration to a reliable visual or electronic time signal – effectively bypassing camera timekeeping entirely – may provide the final solution for camera impartiality and immediate guaranteed precision, just as GPS has for countless other instruments. Integrated GPS already exists in some camera and camera-phone models and may soon negate the need for an external time reference in applications where second precision is sufficient and GPS signals are available. For the time being, however, the latency and slow refresh rates of these systems, as discussed in Section 3, prevent their use for subsecond observation.
For time-critical applications, we suggest two purely electronic strategies: the triggering of the camera by a time-calibrated device and, alternatively, the recording of the camera trigger by a time-calibrated device. A popular solution for precise and portable UTC timekeeping is a GPS receiver equipped with a dedicated pulse-per-second (PPS) time port, which announces the start of every second with millisecond or better accuracy. This electrical signal is used routinely to discipline a computer clock, which could subsequently be used to initiate video recording or still capture at predetermined times. Similarly, any GPS receiver chip could, with custom software and hardware, be used to discipline the clocks onboard time-lapse intervalometers, an appropriately low-power solution to the challenge of filtering temperature-dependent drift from extended time-lapse deployments.
The reverse scheme – recording rather than triggering image capture – could be made possible by the hot shoe or flash port provided on most if not all pro-level still cameras. These interfaces are used by the camera to send an electronic signal to trigger external flashes and alternatively could be connected to a GPS event logger. By design, the timing of this signal must be precisely synchronized to the opening of the shutter, since flashes fire very short bursts of light (≤0.02 ms). In practice, consumer SLR cameras are capable of flash synchronization speeds of 17 ms (1/60 s) to upwards of 2 ms (1/500 s). The alignment of image capture and flash trigger can be quickly constrained by photographing the triggered flash at a range of shutter speeds.
Finally, continuous calibration to visual time signals is possible but more labor-intensive to process. Placing conventional robust reference clock displays (e.g. a time-calibrated computer or research-grade GPS) within the camera field of view for the duration of the deployment is impractical in many field situations (especially if the focus is set at a great distance from the camera), but more compact solutions do exist. Precision GPS time video inserters, commonly used in the amateur astronomy community for timing occultations (Nugent, 2007), embed millisecond accuracy timestamps directly onto intercepted analog video streams. No conventional equivalent yet exists for digital video; instead, the timestamp can be introduced optically (for video, one solution could consist of a light-emitting diode blinking to a GPS PPS signal, blinking twice every minute for easier determination of the absolute time). Any of these methods can be evaluated by photographing a reference time display with the tethered camera, as previously discussed.
8. Conclusions
We have reviewed obstacles to precise camera timekeeping, tested the capabilities of available consumer-grade camera models and UTC reference clocks and demonstrated calibration procedures for absolute timing, all in an effort to extend the application and reliability of digital still and video cameras for use in scientific observation. Camera clock drift is singled out as the unique source of multi-second to minute errors, while timestamp precision and resolution, onboard GPS refresh rates, and GPS and radio clock display latency pose challenges for subsecond observation. With proper calibration, subsecond imagery is well within the reach of select consumer-grade digital cameras. This represents a potentially pervasive addition to the instrumental record, both for relative timing (e.g. measuring the rates of rapid processes) and for confidently matching visual (and acoustic) observations to the many other data already synchronized to a time standard.
At Yahtse Glacier, time-calibrating video footage of iceberg-calving events has allowed us to draw conclusions about the sources of the observed seismic signals. At Columbia Glacier, refining image-capture times has allowed us to considerably improve estimates of in-flight camera positions, which subsequently allowed us to orient our photogrammetric models without the need for additional ground control. Ultimately, careful management of camera time errors is applicable to the study of all rapid processes observable in the visible and near-infrared spectra. Processes for which second to subsecond timing is significant are admittedly rare in the cryosphere (supraglacial lake drainage (e.g. Das and others, 2008), snow avalanche triggering (e.g. Lacroix and others, 2012) and iceberg calving as previously discussed); however, they permeate the physical Earth: volcanic eruptions (e.g. Walter, 2011), structural failure during earthquakes (e.g. Papazoglou and Elnashai, 1996; Priestley and others, 1999), meteorological conditions (Sentman and others, 2003; Lorenz and others, 2010), nearshore wave dynamics (e.g. Holman and others, 2003; Yoo and others, 2010) and animal behavior (e.g. Tammero and Dickinson, 2002; Wilson and others, 2002), to list just a few.
Acknowledgements
We thank the reviewers at Steve’s Digicams and Digital Photography Review, as well as the thousands of users on Flickr, whose image Exif we surveyed, for making their photographs publicly available online. We are grateful for the significant assistance provided by Extreme Ice Survey’s Adam LeWinter, James Balog and others in deploying, servicing and calibrating the time-lapse cameras at Columbia Glacier. We thank Phil Harvey, Scientific Editor John Woodward and four anonymous reviewers for assistance and comments. W.T. Pfeffer, S. O’Neel and E.Z. Welty were funded by the US National Science Foundation grant IPY-0732726, and T.C. Bartholomaus by grant EAR-0810313. S. O’Neel and E.Z. Welty were also funded by the United States Geological Survey (USGS) Climate and Land Use Change Mission and the Department of Interior Alaska Climate Science Center. T.C. Bartholomaus acknowledges Chris Larsen and Michael West for their mentoring and for the design and set-up of the Yahtse Glacier project. We thank the International Glaciological Society for financial assistance.