Motion tracking aims to accurately localize a moving lesion during radiotherapy to ensure the accuracy of radiation delivery. Ultrasound (US) imaging is a promising modality for guiding radiation therapy in real time. This study proposes a deep learning-based motion tracking method to track moving lesions in US images. To reduce the search region, a box regression-based method is adopted to predefine a region of interest (ROI). Within the ROI, a feature pyramid network (FPN), which uses a top-down architecture with lateral connections, extracts image features, and a region proposal network (RPN), which learns an attention mechanism from the annotated anatomical landmarks, then yields a set of proposals. Training of the networks is supervised by three objectives: a bounding-box regression loss, a proposal classification loss, and a classification loss. In addition, a long short-term memory (LSTM) network is employed to capture temporal features from the US image sequence. Weights obtained through transfer learning were used to initialize our network. Two-dimensional liver US images from 24 patients, together with the corresponding annotated anatomical landmarks, were used to train the proposed method. In testing experiments on 11 patients, our method achieved a mean tracking error of 0.58 mm with a standard deviation of 0.44 mm at a temporal resolution of 69 frames per second. The proposed method provides an effective and clinically feasible solution for monitoring lesion motion during radiation therapy.
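The three-term training objective described above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the exact loss forms or their weights, so the smooth-L1 form for box regression, softmax cross-entropy for the two classification terms, and the uniform weights `w` are all assumptions.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Smooth-L1 (Huber) loss, a common choice for bounding-box regression
    # (assumed here; the paper's exact regression loss is not stated).
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).mean()

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single classification target.
    z = logits - logits.max()                     # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def total_loss(box_pred, box_gt, prop_logits, prop_label,
               cls_logits, cls_label, w=(1.0, 1.0, 1.0)):
    # Weighted sum of the three objectives named in the abstract:
    # box regression, proposal classification, and classification.
    # The weights w are illustrative placeholders.
    return (w[0] * smooth_l1(box_pred, box_gt)
            + w[1] * cross_entropy(prop_logits, prop_label)
            + w[2] * cross_entropy(cls_logits, cls_label))
```

For example, `total_loss(np.array([10.0, 20.0]), np.array([10.5, 19.0]), np.array([2.0, -1.0]), 0, np.array([0.0, 0.0]), 1)` returns a positive scalar that a training loop would minimize over all three terms jointly.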