
Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections

Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2016
Motion Correction for Passive Radiation Imaging of Small Vessels in Ship-to-Ship Inspections

K.P. Ziock a, C.B. Boehnen a, J.M. Ernst a, L. Fabris a, J.P. Hayward a,b, T.P. Karnowski a, V.C. Paquit a, D.R. Patlolla a, D.G. Trombino c

a Oak Ridge National Laboratory, Oak Ridge, TN, USA
b Department of Physics and Astronomy, University of Tennessee, Knoxville, TN, USA
c Lawrence Livermore National Laboratory, Livermore, CA, USA

1 ABSTRACT

Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. The complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.

© 2015. This manuscript version is made available under the Elsevier user license http://www.elsevier.com/open-access/userlicense/1.0/
2 INTRODUCTION

Maritime settings are among the most challenging arenas for detecting illicit nuclear materials. While considerable progress has been made in addressing nuclear smuggling on large oceangoing vessels and the millions of cargo containers they transport [1], relatively little advance has been made in dealing with the smuggling opportunities presented by small vessels. In a study on this issue [1], the Department of Homeland Security points out that there are more than four million small vessels with unencumbered access to 95,000 miles of coastal and inland waterways. While the morphology of ports, bays, and rivers often allows for the use of large passive radiation sensors at navigational choke points [2], the use of passive sensors represents only one aspect of a layered approach. An additional component of an overall system to deal with the small vessel problem includes ship-to-ship inspections. Typically, close inspections are conducted by vessels of modest size (~9 to 15 m in length), and inclusion of radiation sensors on such vessels is one more layer of an overarching system.

In terrestrial searches for illicit radioactive materials, it has been shown that the use of radiation imaging can increase sensitivity by removing systematic background variations and reducing source confusion issues [3–7]. While radiation background levels are generally lower and less variable in maritime settings, they are still an issue in coastal waters, where many small ship-to-ship inspections occur. Results from detection exercises indicate that variations of up to a factor of two are not unusual. Further, such waters are often crowded, leading to problems of source confusion. Imaging provides the additional benefits of locating a source within a suspect vessel while also providing background-subtracted spectra for source identification.
Unfortunately, in ship-to-ship inspections the application of radiation imaging is complicated by the fact that both the inspecting and target vessels are in continuous motion due to wave action. These motions blur radiation images, significantly decreasing the sensitivity of the technique. To address this problem, an approach that relies on automated target-tracking software, using images from visible-light video cameras to determine the relative location and orientation of the two vessels, has been developed. An instrument was constructed that uses the tracking information to compensate for the motion
by projecting the gamma-ray images into a stationary volume defined in a coordinate system fixed to the target. This technique digitally removes the motion blur in the gamma-ray images and, with sufficient motion, provides a three-dimensional (3D) tomographic location for a source in the reconstruction volume.

3 MOTION MEASUREMENT

Before the imaging system for use in ship-to-ship inspections was constructed, a study was undertaken to determine the requirements of such an instrument. A small sensor package (Fig. 1) capable of measuring all six degrees of motion of the host vessel was assembled with a three-axis micro-electro-mechanical system accelerometer [9] and a three-axis rotational sensor and compass [10]. The sensor package was combined with a gyro-stabilized (two-axis) video camera system [8] that could be used to track target vessels. This component also included a Global Positioning System (GPS) device to provide the spatial location and orientation of the host vessel.

Fig. 1. Three-axis motion sensor and video camera mounted on the roof of a 33-ft police vessel (arrow on left) and a close-up of the package (right).
The black spherical object is the gyro-stabilized video camera of a TASE inertial package [8]; the small box under the support plate on the right contains both a three-axis accelerometer and a magnetometer.

The system was deployed aboard a number of local law enforcement vessels in the Los Angeles area, the San Francisco Bay, and the New York–New Jersey harbor, each of which was conducting inspections. The primary results from tests with this system included the field of view required to track the target vessels and the angular and linear accelerations of the targets in the video images. The data included the motion of the target in the video image, the motion of the host, and the motion of the pan-and-tilt axis of the tracking system. Sample data are shown in Fig. 2.

Fig. 2. Sample data obtained during an encounter with a target vessel showing the video image (top left), the GPS track (top right), the accelerometer (middle set of plots), and the electronic compass (bottom set of plots). The accelerometer and compass outputs are for the portion of the GPS track indicated by the circle.

The outputs from the various sensors were used to generate plots of the field of view needed in the vertical (tilt) and horizontal (pan) directions to contain a full encounter with a target vessel. Results for a number of encounters are shown in Fig. 3.

Fig. 3. Plots of the fraction of time that the target vessel is in the field of view as a function of the field of view for the horizontal (left) and vertical (right) directions.

The primary findings from the measurements were that the vertical and horizontal fields of view needed to be 50º and at least 180º, respectively. The latter field of view requirement was due to different inspection scenarios, particularly one that involved approaching a target vessel, stopping near the vessel during an inspection, and receding from the vessel.
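Containment curves like those of Fig. 3 can be produced from recorded pointing data with a simple calculation: for each candidate field-of-view width, count the fraction of time samples whose target bearing relative to boresight fits inside it. The following sketch illustrates the idea; the function name and the sample bearings are our own, not values from the study.

```python
import numpy as np

def fov_containment(bearings_deg, fov_widths_deg):
    """For each candidate field-of-view width, return the fraction of time
    samples whose target bearing (relative to boresight) lies inside it."""
    offsets = np.abs(np.asarray(bearings_deg, dtype=float))
    widths = np.asarray(fov_widths_deg, dtype=float)
    # A target at angular offset theta is contained when |theta| <= width / 2.
    return np.array([(offsets <= w / 2.0).mean() for w in widths])

# Hypothetical pan offsets (degrees) recorded during one encounter:
pan = [-100, -40, -10, 0, 5, 30, 80, 170]
fractions = fov_containment(pan, [60, 180, 360])
```

Sweeping the width argument over a fine grid yields one curve per encounter, as plotted in Fig. 3.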
In fact, with the relative freedom of motion enjoyed by small vessels, the desired horizontal field of view could be as large as 360º. Based on this requirement, a motorized gimbal system was selected that could provide tracking in the horizontal (pan) direction. Once the mount was motorized, the addition of a motorized tilt axis was relatively minor.

The overall design specifications for the final instrument are given in Table 1. In addition to the axis motion, another primary design consideration was to have a man-portable system that could be assembled and mounted to different host vessels. This required both limiting the mass and using a general-purpose mounting scheme (based on lashing the system to a deck using straps). In addition, the useful range of operations was limited from near contact (5 m) to a range of 20 m. The range limitation was selected based on the detector system mass limits. Finally, to ease integration of the video and gamma-ray data, an emphasis was placed on maximizing the gamma-ray detector area so that sources could be seen in relatively short exposure times. To maintain a low mass, this meant optimizing the response to low-energy gamma rays.

Table 1. System Design Specifications

Property                 | Specification    | Requirement
-------------------------|------------------|--------------------------------
Mass                     | < 200 lb         | Mounting concerns
Single Component Mass    | < 100 lb         | Two-man portability
Vertical Field of View   | ~ 50º            | Target vessel size + motion
Horizontal Field of View | 180º or 360º     | Relative target motion, concept of operations
Operational Range        | 5–20 m           | Boarding, gamma sensitivity
Energy Resolution        | ~ 7% at 662 keV  | Isotope ID
γ Spatial Resolution     | ~ 1 m at 20 m    | Voxel size
Voxel Size               | 50–100 cm        | Target vessel size, computational requirements, relative coordinate determination
Power                    | < 500 W          | Available power
γ-detector Area          | > 100 cm²        | Gamma sensitivity

4 INSTRUMENT DESIGN

The final instrument, mounted on the back of a patrol vessel, is shown from two angles in Fig. 4.
It comprises a coded-aperture gamma-ray camera, a high-definition (HD) stereo video imager, and a two-axis gimbal. More detail on each of these components is given below.

Fig. 4. Photos showing the front and side views of the imaging instrument mounted on the back of a police vessel. Labeled components include the stereo imager, Anger camera, mask, tilt rotary stage/axis, shock mounts, pan rotary stage, and mounting base.

4.1 GAMMA-RAY IMAGER

The gamma-ray imager is based on a custom commercial Anger camera [11]. The imager uses a 1 cm thick, 28.0 × 26.5 cm² CsI(Na) scintillator crystal as the detection element. A full-face exposure of a calibration mask with holes on a 1 cm pitch is shown in Fig. 5. The system was calibrated several times throughout the life of the project (several years), and the spatial resolution deteriorated during that time. It had an initial lateral spatial resolution at 122 keV of ~3.5 mm full width at half maximum (FWHM) that degraded to ~6.6 mm FWHM at the last calibration. In general the spatial resolution varies somewhat over the face of the detector, degrading toward the edges as edge effects alter the response of the Anger logic. The system also achieves an energy resolution of ~13% at the 122 keV line from a 57Co source.

Fig. 5. (Left) Anger camera exposure to the calibration mask; the source spots are separated by 1 cm. (Right) Gamma-ray image of a 57Co point source; black is more intense.

The gamma-ray detector is paired with a rank-31 modified uniformly redundant array (MURA) coded-aperture shadow mask [12] made of 1.5 mm thick tungsten. The pixel pitch of the mask, 6.34 mm, was selected so that at closest approach to a target vessel (5 m) the magnified pattern of the mask on the detector is not larger than the active detector area. The mask itself was laser-cut in four parts and sandwiched between two sheets of 0.8 mm thick aluminum (see Fig. 6).
That assembly is sandwiched between two square "rings" of stainless steel to keep the entire assembly flat. The relatively thin mask was chosen to minimize the mass and moment of inertia of the experimental system; this design choice practically limited experimental data collection to low-energy sources, yet the overall mask holder design allows for the use of multiple mask layers to increase the stopping power of the mask if higher-energy sources are of interest. That capability was not used during the study.

Fig. 6. Coded-aperture mask. The mask is made of four segments that are mounted in a frame and then sandwiched between two sheets of thin Al. Pins (visible on the top, left, and right of the picture) are used to maintain alignment.

The mask and imager are mounted at the center of the two-axis gimbal described in more detail in Sect. 4.3. The focal length of the system is 15 cm, providing a resolution at the target of 21 cm at the minimum target range (5 m) and 84.5 cm at the maximum target range (20 m). A sample image of a 57Co source is shown in Fig. 5. The image shows a high dynamic range even though it does not use the mask/anti-mask imaging approach frequently used to suppress systematic noise in coded-aperture images [13]. Further, when a moving target is imaged, residual systematic noise should be suppressed by the dynamic nature of the imaging (i.e., the source location moves throughout the integration, so systematic noise that does not translate with the image will tend to wash out [14]).

4.2 STEREO VIDEO IMAGER

The video images used by the system are collected with a custom commercial high-definition (HD) stereo imager. This system is based on the TYZX DeepSea platform [15] that was modified for HD cameras and a 40-cm camera separation distance.
In addition to providing video images, the system uses an embedded processor to generate range or "disparity" images that give the distance to points within the image. Sample images are shown in Fig. 7.

4.3 MOTION MOUNT

The two-axis gimbal mount is primarily made of tubular aluminum to minimize mass and moment of inertia. The gamma-ray imager is mounted so that it balances at the tilt axis, with the combined mass of the coded-aperture mask and stereo imager balanced by the Anger camera mass. A series of slots combined with pins allows one to change the focal length of the camera. To maintain the center of balance on the axis, the complete camera system (mask and detector) can be shifted with respect to the axis. Thus, while focal length changes are possible in the field, they do require some effort and are not expected for normal operation.

Fig. 7. Video image (right) and corresponding disparity image (left, with overlaid text) of a target vessel. The gray scale in the disparity image corresponds to the distance from the camera plane to that point in the image. Lighter locations are at greater distances. The sky contains locations with no data (black) and some erroneous distances found from mismatched points in the clouds.

The video camera mounts to the top of the mask frame using a Losmandy V-Series dovetail [16] quick disconnect, allowing alignment of the system to be maintained if the two components are separated.

To protect the video and gamma-ray systems from hard impacts, as might occur if a vessel skips from wave to wave at higher speeds, spring-mounted shock dampers are used to mount the camera assembly to the tilt axis. They maintain rigid contact until a sufficiently strong impulse overwhelms the spring force that maintains the contact. Above that impulse, the springs compress, reducing the accelerations of the camera platform.
The camera/tilt assembly mounts to a frame sitting on a rotary table connecting the mount feet to the frame. Friction feet on the ends of the four legs are used to keep the system stable during vessel motion. Normal forces and system mounting are provided by straps that run through handles that are also used to lift the entire assembly.

4.4 DATA ACQUISITION SYSTEM

It became clear early in the project that the motion correction would require significant processing power, so a multi-computer architecture for the software was designed around Transmission Control Protocol/Internet Protocol (TCP/IP) Ethernet communications between the different computers. Later we found a single 48-core Windows server [17] that could handle the processing and that used the same TCP/IP protocols within the single machine for communication between the different software modules. The data comprise list-mode gamma-ray events from the gamma-ray imager and the visible-light and disparity images from the stereo imager. The gamma-ray data are obtained from a Linux server that receives and buffers the events from the Anger camera. The data are collected and passed to the gamma-ray processing software. The camera and Linux server are on a dedicated Ethernet link that includes the main computer. Data are transmitted from the camera to the Linux server via the User Datagram Protocol (UDP). The data stream includes (1) gamma-ray events, including energy and the lateral position coordinates, (2) a millisecond timer that includes the count of an internal clock, and (3) the count of an external pulse also supplied to the stereo camera. The latter signal is generated by a universal serial bus (USB) pulser [18] under control of the main computer and is also used as the shutter release for the video camera.

The energies and locations of the gamma-ray events recorded by the imager are based on calibrations made off line.
The calibrations were performed by using a 57Co source to collect a flood image of a pinhole shadow mask mounted to the front of the detector (Fig. 5). The mask has holes at a 1 cm pitch, and the known hole locations were used to generate a linearized map for the event locations and energies. Because the calibrations were performed at a single energy, the camera results are best when used at that energy.

Data from the stereo imager are also obtained via a dedicated Ethernet link. Once started, the camera provides both video and disparity images of the scene. The data were collected synchronously with the external pulse that was also recorded by the gamma-ray imager. As with the gamma-ray imager, the raw data (images) were recorded to disk.

Prior to testing, the alignment between the gamma-ray and video imagers was established by collecting images of a source mounted behind an optical target and moved through the field of view by a 2 × 2 m² translation stage. Such data were collected from a number of different distances to determine the four-dimensional matrix used to transform the gamma-ray and video images at any given source distance.

5 SYSTEM SOFTWARE/DATA HANDLING

A schematic diagram of the different software modules and data paths is shown in Fig. 8.

Fig. 8. Software modules and communications channels.

5.1 VISUAL ODOMETRY

At the heart of the software system is the visual odometry (VO) module. It receives the visible-light and disparity images from the stereo imager and uses routines from the Open Source Computer Vision (OpenCV) library [19] to perform its task of determining the location and orientation of the imager in a coordinate system aligned to the target vessel.
The following steps are taken to obtain the coordinate transformation between an existing frame and a new frame:

(1) The first step after receiving a video image is to find "good features to track" [20] in the existing frame using the OpenCV library. This finds locations in the 2D video image where there are strong gradients in more than one direction. Such points are generally unique and can be found in subsequent frames.

(2) The software then obtains the distance to those points from the disparity image, giving a set of 3D points that are used to find a coordinate system attached to the target vessel.

(3) The system calculates the location and orientation of the imager using the coordinate system found in step 2.

(4) The software then uses optical flow routines [21] to find the same set of points in the next frame obtained from the camera.

(5) Once the set of points has been found in the new frame, the software obtains the distance to those points from the disparity images. After that step, the system has a series of 3D points from both the existing frame and the new frame.

(6) Using this common set of 3D points between the two frames, a 4 × 4 transformation matrix that takes the coordinate system in the existing frame to the new orientation in the new frame is determined. The 4 × 4 transformation chosen is the transformation that minimizes the error in the least-squares sense [22].

(7) The last step in processing a frame is to calculate the location and orientation of the video camera in the 3D coordinate frame attached to the target vessel, as in step 3 above.

5.1.1 Reference Frames

In principle, the simple procedure outlined above can be repeated for each new video frame acquired by the system, and the sequential coordinate transformations used to determine the instantaneous location and orientation of the imager.
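The least-squares 4 × 4 transform between the two matched 3D point sets is a standard absolute-orientation problem with a closed-form SVD (Kabsch) solution. A minimal sketch, with illustrative names and not necessarily the routine of Ref. [22], is:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares 4x4 rigid transform taking 3D points P onto Q
    (rows are points; at least three non-collinear points are needed).
    Uses the SVD (Kabsch) solution for the rotation."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # 3x3 covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections:
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Applying the inverse of this transform to the imager pose gives its location and orientation in the target-fixed frame.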
However, there will be errors incurred each time a transformation is calculated, and using a continual series of small transforms will lead to large errors for the overall trajectory. To help minimize such cumulative errors, the VO module uses a reference-frame approach. In this approach, the calculations to determine the orientation of each new frame are not performed with respect to the frame immediately preceding it, but rather with respect to the last viable reference frame. New reference frames are only created periodically, when the scene has shifted so much that not enough of the features being tracked are still visible to the system. The goal is to minimize the cumulative errors by minimizing the number of reference frames used in a measurement. To this end, a quality factor has been defined to determine when to generate a new reference image. When the quality factor falls below a threshold (determined empirically, see Section 7), the system will define the image with the last acceptable quality factor as the new reference frame. Subsequent video frames will be compared with the new reference frame. In addition, the current location and orientation of the imager are based on the combined transformations of the reference frames, from the original frame through the current reference frame, including the transformation from that frame to the current video image.

Fig. 9. Visual odometry quality factor method for determining when to generate a new reference frame.

The procedure for determining when a new reference frame is required is shown in Fig. 9. The optical flow matches points in the current video frame with those from the most recent reference frame. The matched points are used to calculate an initial 3D transform, and the pairing of points is checked again. Those points that no longer match well are removed, and a new alignment is calculated.
If insufficient points remain after poor matches are removed, then a new reference frame is generated. Both the minimum number of points and the alignment filtering threshold are parameters that were empirically chosen to minimize tracking error in the data sets collected (see Section 7).

5.2 TOMOGRAPHY

Once the instantaneous location and orientation of the imager are determined, that information is sent to the tomography module. The module uses the information to request a gamma-ray image from the gamma-ray imager module and provides the start and end times (in frame number since the start of the run) for the events that are to be used to form the image. The alignment of the gamma-ray image to the video image is known from the system calibration performed off line. The tomography module uses this information to project each of the voxels in the reconstruction volume back onto the 2D gamma-ray image. The data from the gamma pixel that is closest to the center of the projected voxel (assuming it is within the field of view) are then added to the accumulated gamma-ray data in the voxel. This simple approach can be calculated rapidly because it avoids the more complex calculations required to determine the fractional overlap volume between the beam representing each gamma-ray pixel and the reconstruction voxels through which it is projected. Due to the relative motion between the vessels, the tendency of this approach to link some voxels and pixels in a regular pattern (for instance, a moiré interference) will wash out over the course of the measurement.

The reconstruction volume is currently selected as a cube 9 m on a side that is centered on the tracking point used in initializing the system, as described in the video-imaging module. The voxel size is set to 15 cm as a balance between resolution and system speed.
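The nearest-pixel backprojection can be sketched as follows. This is a simplified illustration: a pinhole projection model stands in for the calibrated coded-aperture geometry, and all names are our own assumptions.

```python
import numpy as np

def backproject(gamma_img, T_cam_from_target, voxel_centers, focal_px, accum):
    """Map each voxel centre (target frame) into the camera frame, project it
    onto the 2D gamma-ray image, and add the nearest pixel's value to that
    voxel's accumulator. Voxels outside the field of view are skipped."""
    n_rows, n_cols = gamma_img.shape
    homog = np.c_[voxel_centers, np.ones(len(voxel_centers))]
    cam = (T_cam_from_target @ homog.T).T[:, :3]      # voxels in camera frame
    in_front = cam[:, 2] > 0
    z = np.where(in_front, cam[:, 2], np.inf)         # guard against z <= 0
    u = np.round(cam[:, 0] / z * focal_px + n_cols / 2).astype(int)
    v = np.round(cam[:, 1] / z * focal_px + n_rows / 2).astype(int)
    ok = in_front & (u >= 0) & (u < n_cols) & (v >= 0) & (v < n_rows)
    accum[ok] += gamma_img[v[ok], u[ok]]
    return accum
```

Repeating this for every short-exposure image, each with its own pose `T_cam_from_target` from the VO module, accumulates the tomograph in the target-fixed volume.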
5.3 GAMMA-RAY IMAGING MODULE

The function of the gamma-ray imaging module is to interface with the gamma-ray camera. It is used to start and stop the imager as well as to collect and record the data. In addition, it generates the images requested by the rest of the system. To this end, the events are stored in a time-ordered buffer as they come in. When a request for a gamma-ray image is received, the system uses the events from the buffer that fall within the request time window and generates an image using just those events. In principle, different images can be requested based on the range to the voxels being projected; however, that complexity has not been implemented. For the results here, the distance to the source was assumed to be fixed, and a range of 5 m was used.

The distance to a source changes the size of the mask pattern projected onto the detector, and this is most important in the extreme near field. To first order, an error in distance does not change the estimated location of a source; however, it will reduce the image contrast and can introduce modest artifacts in the rest of the image [23].

To minimize the calculations required to generate the image, the software precalculates the magnified mask pattern size and determines a detector binning that oversamples the mask pattern size by a factor of two [24]. A "preimage" for a single event in each pixel of the detector is then saved for use in generating the images [23]. If fewer than 3600 events are recorded in the detector during the dwell time of a single frame, then the composite image sent to the tomography module is generated by adding the correct preimages associated with each event. If the number of counts recorded in the detector is equal to or more than 3600 counts, then the full cross-correlation calculation is used to generate the image [25].
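The two image-formation paths, summing per-event preimages below the 3600-event threshold and full cross-correlation above it, produce the same image because decoding is linear in the recorded counts. A one-dimensional sketch makes the equivalence explicit; the decoding array G here is arbitrary and illustrative, not the rank-31 MURA decoder.

```python
import numpy as np

def decode_events(event_pixels, G):
    """Per-event path: add a precomputed 'preimage' (a cyclic shift of the
    decoding array G) for each recorded event position."""
    img = np.zeros(len(G))
    for p in event_pixels:
        img += np.roll(G, p)           # preimage for an event in pixel p
    return img

def decode_histogram(counts, G):
    """Batch path: full circular cross-correlation of binned counts with G."""
    n = len(G)
    return np.array([sum(counts[x] * G[(s - x) % n] for x in range(n))
                     for s in range(n)])

events = [0, 2, 2, 5]
G = np.array([3.0, -1.0, -1.0, -1.0, 3.0, -1.0, -1.0])
counts = np.bincount(events, minlength=len(G))
print(np.allclose(decode_events(events, G), decode_histogram(counts, G)))  # True
```

The per-event path wins when events are sparse (a few additions per event); the batch path wins once the number of events exceeds the cost of one full correlation, which is the origin of the fixed crossover threshold.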
5.4 VIDEO CONTROL MODULE

The function of the video control module is to interface with the stereo video camera. It is used to start and stop the imager as well as to collect and record the images and pass them on to the VO module. It has several additional tasks associated with control of the overall system, including starting a data collection. This is complicated by the need to synchronize the visual and gamma-ray data streams, and by the fact that the TYZX unit requires that external trigger pulses be received about every 10 s or it times out. Finally, before the start of an acquisition run, the system must fill the 100-frame-deep processing buffer of the VO module. These images are processed by the VO module to find the distance to the target fiducial volume and to generate the initial reference frame image that it uses.

The gamma-ray camera only counts the shutter release frames, so synchronization between the two data streams requires that the first frame processed by the VO module correspond to the first trigger pulse received by the gamma-ray camera. Since the delay between the time that the system is started and the time the data acquisition commences is not fixed, a means of uniquely identifying the first frame was required. To achieve this, a programmable USB-based pulser is used to generate the trigger signals. This is normally set to run at the acquisition frequency (10 Hz in these experiments) and runs at that rate until the operators are ready to start an acquisition. When the video control code receives a start command from the operator, it stops the pulser output long enough (5 s) for the gamma-ray camera to change from standby to the data collection mode. It then restarts the pulser.
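Locating the deliberate pause in the recorded frame timestamps is then a simple scan for an inter-frame spacing far above the nominal 10 Hz period. This sketch is our own illustration; the half-gap threshold is an assumption, not the paper's value.

```python
def first_frame_after_gap(frame_times, gap=5.0):
    """Return the index of the first frame whose spacing from its predecessor
    indicates the deliberate pulser pause; None if no gap is found."""
    for i in range(1, len(frame_times)):
        if frame_times[i] - frame_times[i - 1] > gap / 2.0:
            return i
    return None

# Frames at 10 Hz with the ~5 s pulser pause before acquisition start:
idx = first_frame_after_gap([0.0, 0.1, 0.2, 5.3, 5.4])  # idx == 3
```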
The start time of the gamma-ray imager then corresponds to the first pulse it sees; the corresponding time is associated with the first image received after the 5 s gap (the images are time tagged by the stereo imager). The length of the gap was selected both to make sure that all systems had time to respond and to make it easier to pick out the gap in the data files.

In addition to starting the system and passing images to the VO module, the video control code performs the target-tracking function by controlling the pan and tilt motors of the instrument. Only a simple tracking algorithm is used. The user can place a marker in the visible-light image, and the system finds a number of good-features-to-track points near that location. The weighted average of the points is then used to define the tracked location. The motors have several parameters, including initial velocity, maximum velocity, acceleration, and deceleration, that specify the motor's operation. The motor is given the difference between the current center of the field of view and the desired center, and the code uses this difference to calculate the motion based upon the specified parameters. As the view of the target changes, the set of good-features points will change, so the tracked point on the target can wander. This is consistent with the overall goals of the system because the tracking requirement is only to keep the target near the center of the field of view. Precise aspect information is provided by the VO module.

6 DATA COLLECTION

For data collection, the instrument was mounted to the roof of a 35-ft Mooseboat [26] (Fig. 10). A second vessel (a 33-ft Safeboat [27]) was used as the target (Fig. 10). A 148 MBq (4 mCi) 133Ba source was mounted on the target vessel at the location shown in the figure. During the runs, the target vessel was kept "stationary" while the host vessel circled it.
This configuration was used to ease station keeping while the instrument was initialized, and it approximates one of the possible inspection scenarios. First tests were conducted in the relatively sheltered waters of Aquatic Park off San Francisco; additional tests were conducted in the unsheltered bay near the Hyde Street pier.

Fig. 10. Picture of the target (top) and host (bottom) vessels for the exercise. The instrument is mounted on top of the cabin of the host vessel. For most of the source runs the source was suspended from the mast indicated by the arrow.

The data for a sample run, viewed from overhead, are shown in Fig. 11. The target vessel and gamma tomograph are inside the cube at the center of the image. The track of the host vessel is given by the series of "points," where each point is really a pyramid indicating the instantaneous pointing of the imager. The output is fully 3D and can be manipulated as such when viewed online. The source is seen as the darker region to the top left of the center of the cube. A "ghost" source is also seen in the same direction but further from the center of the cube. To show the significance of the detection, Fig. 12 shows the tomographic results collapsed to the x-y, x-z, and y-z planes. Although the detection is clearly made, the projection reveals that the image is blurred over many voxels.

Fig. 11. Sample results for a single long run. The track of the host vessel is shown circumnavigating the target volume inside the cube. Each point shown on the track is really a pyramid corresponding to a vector showing the instantaneous heading of the imager.

Fig. 12. Gamma-ray results from the run in Fig. 11 with the data collapsed to the x-y, y-z, and x-z planes. The left plot shows the full results; the plot on the right has a threshold set at 50% of the maximum value.
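The reconstructions in Figs. 11 and 12 are built by the simple tomographic back-projection described earlier: each short-exposure gamma-ray image is projected as a beam through the reconstruction volume along the imager's instantaneous pointing. The following is a deliberately stripped-down sketch of that accumulation; the 2-D grid, flat list, and parameter names are illustrative assumptions (the deployed system works in 3-D voxels with poses supplied by the visual odometry).

```python
import math


def back_project(volume, n, origin, angle, strength, step=0.5, max_range=20.0):
    """Accumulate one short-exposure frame's gamma intensity along its
    line of sight through an n x n grid (flattened, row-major).

    origin   -- imager position in grid units (from tracking/VO data)
    angle    -- instantaneous pointing of the imager, in radians
    strength -- gamma-image value projected along that direction

    Each frame contributes a beam through the grid; where beams from many
    vantage points cross, the source intensity builds up.
    """
    dx, dy = math.cos(angle), math.sin(angle)
    r = 0.0
    while r < max_range:
        x = math.floor(origin[0] + r * dx)
        y = math.floor(origin[1] + r * dy)
        if 0 <= x < n and 0 <= y < n:
            volume[y * n + x] += strength
        r += step
```

Even two beams from perpendicular vantage points already localize a source at their crossing; this is why circling the target, as in the runs above, sharpens the reconstruction, and why a short arc leaves the source elongated along the mean line of sight.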
Fig. 13. Three-dimensional, 30 s duration gamma reconstructions from the complete run shown in Fig. 11 (panels a-j: 0-30, 30-60, 60-90, 90-120, 120-150, 150-180, 180-210, 210-240, 240-270, and 300-370 s).

One of the goals of using the strong 148 MBq source was to allow short integrations to be analyzed to see how the image developed, and from that to understand tracking issues. By replaying the list-mode data in segments, we were able to see the tomographic image develop as a function of time. A sequence of short (30 s) integrations from the full 370-s-long run is shown in Figs. 13 and 14. The latter figure includes the imager track, revealing that each segment represents ~110º of motion of the imager around the target. While the source generally develops in the same region of the reconstruction volume, its location is seen to wander both in distance from the center of the reconstruction region (indicated by the black dot) and in absolute location (here on the x-y plane). This wandering broadens the source width in the overall results. Examination of the different segments also shows that the strength of the detection varies from segment to segment, with strong detections in segments a, e, f, g, h, and j, and minimal detections in the others. Across the segments, the significance of the detection varies primarily with the accuracy of the tracking. This can be seen in Fig. 14, where the visual odometry output is included for segments d (weak detection) and g (strong detection). In the former image, the path is erratic, indicating a low-quality path reconstruction, whereas the path in the latter is very smooth and likely corresponds better to the true path. Poor tracking is also indicated by the streaks toward the bottom of the reconstruction volume.
They are likely short segments where the pointing information for the imager is significantly offset from the true location, so the projected gamma-ray data go in the wrong direction. (The instantaneous projection of gamma-ray data from any single image is a beam through the reconstruction volume.) Such streaks are also visible in some of the other images with weak detections. In fact, the sources with a strong detection are generally elongated along the central line of sight of the path arc. That is the expected result from the incomplete tomographic data represented by the shorter integration.

Fig. 14. Reconstructed volumes with visual odometry paths for 30-s portions of the run shown in Fig. 11: segment d (90-120 s) and segment g (180-210 s).

7 DISCUSSION

The results of the study clearly indicate that data fusion of the visible and gamma-ray results can compensate for vessel motion and localize a source in vessel-to-vessel inspections. However, there remains a long way to go to realize a practical system. The sensitivity of the current system is limited by the accuracy of the VO tracking. To understand the impact of tracking errors, ideally one would compare the calculated path from the VO code with the true path of the vessel. Unfortunately, this would require accurate knowledge of the position and orientation of both vessels. In particular, the orientation of the imager is very important because even a small rotational change can greatly alter the corresponding relative location of the target vessel. Sensors to obtain such information exist, but they are very expensive and were not available for the study. To estimate (and optimize) the performance, we determined a one-dimensional approximation of the angle that the host vessel had travelled around the target by manually observing the aspect of the target vessel on the video. This provided ground-truth information estimated to be good to ~5º.
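One simple way to score a VO run against such sparse manual marks is the mean absolute angular error at the marked frames. A hypothetical sketch, assuming a per-frame list of VO rotation angles and hand-read (frame, angle) pairs; none of these names or structures come from the deployed software:

```python
def vo_angle_error(vo_angles, truth_marks):
    """Mean absolute error (degrees) of the VO rotation estimate.

    vo_angles   -- vo_angles[i] is the VO-estimated rotation at frame i
    truth_marks -- list of (frame, angle) pairs read off the video by
                   hand (good to ~5 degrees, per the text)
    """
    errors = [abs(vo_angles[frame] - angle) for frame, angle in truth_marks]
    return sum(errors) / len(errors)
```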
The performance of the system with respect to this metric is shown in Fig. 15, where the degrees of rotation are plotted as a function of frame number (the cameras ran at 10 frames per second). The black dots represent the estimated ground truth; the solid black line shows the angles generated by the VO algorithm.

Fig. 15. Angle of target boat to police boat from visual odometry (black line) and video ground truth (black dots), for runs 1, 2, 3, 4, 6, 7, 9, and 11 (panels a-h).

These results are significantly better than what was originally obtained by the system, where the VO results frequently underestimated the amount of rotation by ~30%. The improved final results were obtained by varying the parameters of the VO algorithm (e.g., the minimum number of common points found between the current video frame and the current video reference frame, and the quality factors used in finding good features to track). The impact of varying these parameters is also shown in the figure by the light-gray lines, each of which represents the results obtained with a different set of tracking parameters. While it is clear from these plots that choosing the correct visual odometry parameters is important, the reasonable results obtained with a single set of final parameters (black lines) mean that the parameters need not be estimated by the system but can simply be chosen correctly ahead of time.

The overall design of the instrument was selected to maximize the signal for system development while still maintaining an acceptable profile for use on working patrol vessels. For the close-in inspections likely with such hardware, once tracking issues are resolved, smaller imagers would easily achieve sensitivity comparable to or better than that demonstrated with this system.
This would allow development of a device that is easier to install, provides more reliable mechanical tracking, and requires less space on the host vessel. In addition, the mask thickness could be increased to enhance the response to the higher-energy gamma-ray lines that are of interest in such work.

8 CONCLUSION

A ship-to-ship gamma-imaging system for detection of sources aboard small vessels has been described and shown to work as a prototype. Gamma sources were detected and located in a 3D reconstruction that accounts for relative ship motion at sea. Traditional gamma-ray detection systems are affected by unknown background radiation, but because this system images 3D space, it can differentiate between high background radiation and a point source that produces the same amount of radiation. The results from this prototype show that such a system could be an important part of the effort to control illicit radioactive materials transported on small vessels.

9 ACKNOWLEDGEMENTS

The authors dedicate this work to Glenn Knoll and his contributions to the field of radiation detection; he will be sorely missed by the community. Even this work benefited from his insights, as he participated as an independent outside reviewer for DOE, providing feedback to make sure that the project was on track. To those of us fortunate to know him more personally, he provided a role model with integrity and a well-maintained balance of priorities across his personal and professional lives. He also wanted the best for his students and all the students in his program. He took time to invest in all of us, as he clearly did with his own family. In short, Glenn was a gift to all of us, and his absence is keenly felt.
This work would not have been possible without the help and support of the LA Sheriff's Department, the New Jersey State Police, and the Alameda County Sheriff's Department in collecting the initial motion data. While the Marine Unit of the SFPD at the Hyde St. Pier also helped collect motion data, the authors particularly want to acknowledge their patient support during the final data collection campaign.

This work was performed under the auspices of the US Department of Energy by Oak Ridge National Laboratory under Contract DE-AC05-00OR22725 and by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The project was funded by the U.S. Department of Energy, National Nuclear Security Administration, Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D).

10 REFERENCES

1. "Small Vessel Security Strategy," available from http://www.dhs.gov/xlibrary/assets/small-vessel-security-strategy.pdf.
2. K.P. Ziock, et al., Nucl. Inst. Meth. in Phys. Res. A, 652 (2011) 10.
3. K.P. Ziock, et al., IEEE Trans. Nucl. Sci., 55 (2008) 3643.
4. K.P. Ziock, et al., IEEE Trans. Nucl. Sci., 60 (2013) 2237.
5. R.D. Penny, et al., Nucl. Inst. Meth. in Phys. Res. A, 652 (2011) 578.
6. S. Zelakiewicz, et al., Nucl. Inst. Meth. in Phys. Res. A, 652 (2011) 5.
7. M.V. Hynes, et al., Proc. of SPIE, 7310 (2009) 731003-1.
8. Cloud Cap Technology, 205 Wasco Loop Suite 202, Hood River, OR.
9. GPB2X USB Accelerometer, Sensr, 690 East Bridge St., Elkader, IA 52043.
10. OS5000-S Digital Compass, Ocean Server Technology, Inc., ATMC, 151 Martine St., Fall River, MA 02723.
11. Proteus, Inc., 120 Senlac Hills Drive, Chagrin Falls, OH 44022.
12. S.R. Gottesman and E.E. Fenimore, Appl. Opt., 28 (1989) 4344.
13. C. Brown, J. Appl. Phys., 45 (1974) 1806.
14. J.E. Grindlay and J. Hong, Proc. SPIE, 5168 (2004) 402.
15. TYZX, Inc., 3895 Bohannon Drive, Menlo Park, CA 94025.
16. Losmandy Astronomical Products, Hollywood General Machining, Inc., 1033 N. Sycamore Ave., Los Angeles, CA.
17. Thinkmate model HPX QS5-4410, 159 Overland Road, Waltham, MA 02451.
18. USB Pulse 100, Elan Digital Systems, info at: http://web.archive.org/web/20110903030603/http://www.elandigitalsystems.com/measurement/usbpulse100.php.
19. OpenCV, http://opencv.org.
20. J. Shi and C. Tomasi, Proc. IEEE Comp. Soc. Conf. Comput. Vis. Pattern Recognit., 1994, 593.
21. B.D. Lucas and T. Kanade, Int. Jt. Conf. Artif. Intell., 3 (1981) 674.
22. A.-M. Legendre, Nouvelles méthodes pour la détermination des orbites des comètes [New Methods for the Determination of the Orbits of Comets] (in French), Paris: F. Didot (1805).
23. K.P. Ziock, et al., Nucl. Inst. Meth. in Phys. Res. A, 505 (2003) 420.
24. E. Caroli, et al., Space Sci. Rev., 45 (1987) 349.
25. E.E. Fenimore and T.M. Cannon, Appl. Opt., 17 (1978) 337.
26. Moose Boats, 274 Sears Point Rd, Port Sonoma Marina, Petaluma, CA.
27. Safe Boats International, LLC, 880 SW Barney White Road, Bremerton, WA.