Design of a
Portable Observatory
Control System
Vincent Suc
Doctoral Thesis
Author:
Vincent Suc
Thesis Director:
Dr. Santiago Royo
Tribunal members:
Dr. Salvador Ribas
Dr. Salvador Bará
Dr. Marco Rocchetto
The time has finally come to write these lines, lines which have been on my mind throughout the writing of
this thesis, and I would like to dedicate a few words to all the people who supported me along the
years and made this achievement possible.
First of all, I would like to thank my wife Macarena for her constant help and presence throughout
this extensive work. There are no words able to describe adequately her immense contribution, her sweetness,
and her moral support. Not only did she make it possible, but she gave me the strength to carry it
forward together with my work and the development of the company born out of it. Thank you for the
patience you had in driving me through this process.
Thanks to my family, my parents, and my sister, who always supported my choices whatever they
were. Thanks for being present and for your constant advice during all this process. Thanks also to
my uncle and aunt for showing me the path to a life in science.
I am sincerely grateful to Santi, my advisor, for trusting in this project, for his patience,
and for his advice. Without his direction and guidance, writing this work would not have been possible.
Thanks to Andrés for his revisions, guidance, and dedication from the very first moments of
this project.
I would also like to dedicate a thought to the ObsTech team, Samuel and Rodrigo, who proved to be much
more than coworkers and made the applications of this work come true.
I also want to thank my dear friends Esther and Pepe for their long talks, the time they dedi-
cated to reading this work, their corrections and clarifications, and their help, which went far beyond
the technical point of view.
Thanks to my CD6 friends Francisco, Miguel, Jordi, and Reza, who participated in this project,
the construction of its first prototype, and the first and intense installation.
Finally, I would like to dedicate this document to my grandparents, gone too early to see it, but
whose sense of humor, positivity, sweetness, and professionalism made most of the person I
am today and gave me the tools to reach this moment.
Table of Contents
Table of Contents V
List of Figures XI
Glossary XV
1 Introduction 1
4 Algorithmics 49
4.1 High Precision Drive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.1.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.1.1.1 Reduction and motors . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.1.1.2 Encoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.1.3 Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.1.4 Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.2 Performance in standard configuration . . . . . . . . . . . . . . . . . . . . 55
4.1.2.1 Ipec device under open loop control . . . . . . . . . . . . . . . . . 55
4.1.2.2 Ipec device under closed loop control . . . . . . . . . . . . . . . . 57
4.1.3 Data fusion from the two encoders . . . . . . . . . . . . . . . . . . . . . . . 57
4.1.3.1 Data fusion by direct interpolation . . . . . . . . . . . . . . . . . . 58
4.1.3.2 IIR filtering of the axis encoder data . . . . . . . . . . . . . . . . . 59
4.1.3.3 Sensor fusion based on an extended Kalman Filter . . . . . . . . . 60
4.1.4 Closed loop control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.2 Advanced Telescope Control System . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.2.1 Double Pointing Model as New States in a Telescope Pointing Machine . . 72
4.2.2 Applications of a Double Pointing Model: Blind Acquisition and Autoguiding 73
4.2.2.1 Calculation of the exact position by astrometrical reduction . . . . 73
4.2.2.2 Autoguiding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.2.3 On-Sky Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.2.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.3 Portable Observatory Software Architecture . . . . . . . . . . . . . . . . . . . . . 79
4.3.1 Objects Distribution Over the Network . . . . . . . . . . . . . . . . . . . . 79
4.3.2 Object Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3.3 Common Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.3.4 Browser Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.3.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.4 The SAPACAN Hexapod system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.4.1 Description of the proposed system . . . . . . . . . . . . . . . . . . . . . . . 93
4.4.2 Static positioning algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.4.3 Path computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.4.4 Resolution analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.4.5 Accuracy analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.4.6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.4.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.5 Controlling and measuring collimation from single images . . . . . . . . . . . . . . 107
4.5.1 Step 1: Calibration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.5.1.1 Single image analysis . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.5.1.2 Local evolution of entropy . . . . . . . . . . . . . . . . . . . . . . 109
4.5.1.3 Optical instrumental model . . . . . . . . . . . . . . . . . . . . . . 109
4.5.2 Step 2: Model of the Behaviour of Entropy as a function of Absolute Defocus 111
4.5.2.1 Analysis of different image metrics . . . . . . . . . . . . . . . . . . 111
4.5.2.2 Entropy behavioral model . . . . . . . . . . . . . . . . . . . . . . . 112
4.5.3 Step 3: Determination of the position of the focus . . . . . . . . . . . . . . 113
4.5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6 Conclusions 143
6.1 An Improved Tracking System based on a novel encoder arrangement . . . . . . . 143
6.2 An Advanced Telescope Control System (TCS) for Improved Guiding . . . . . . . 143
6.3 A Novel Communication and Visualization Protocol Optimized for Network Operation 144
6.4 Low-cost High Precision 5-DOF Positioning System . . . . . . . . . . . . . . . . . 144
6.5 Standalone and Robust Focusing Method . . . . . . . . . . . . . . . . . . . . . . . 145
6.6 Future Work and Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Bibliography 152
Appendices 163
List of Figures
4.3.4 Example of XML configuration file for the PUC40 telescope installed at Santa Mar-
tina’s Observatory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.3.5 Observatory's Database tables structure at Santa Martina's Observatory . . . . . . 88
4.3.6 Content of table Puc40Peripherals . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.4.1 Lateral view of a prime focus telescope. Chief (green) and marginal (blue) rays
coming from stars at infinity are represented. . . . . . . . . . . . . . . . . . . . . . 92
4.4.2 Kinematic of the device, including indication of the respective local reference frames
of each scissor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.4.3 Lateral view of the device with description of its mechanical components with all
relevant points used in the calculations detailed. Only Arm 1 is presented, Arms 2
and 3 are fully equivalent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.4.4 Lateral view of the system in a subtensed condition. Only one arm is represented
with the definition of the distances described in the text. . . . . . . . . . . . . . . . 96
4.4.5 Path computation flowchart. The algorithm can be assimilated to a recursive di-
chotomy. The output is the path made of a list of positions. . . . . . . . . . . . . . 99
4.4.6 Focus range and resolution against a) X shift; and b) Y shift. . . . . . . . . . . . . 100
4.4.7 Focus range and resolution against a) X tilt; and b) Y tilt. . . . . . . . . . . . . . . 101
4.4.8 Accuracy of a) focusing; and b) tensioning radius prior to the homing procedure
described in the text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.4.9 Accuracy of a) focus position; and b) tensioning radius for different homing errors
once the homing procedure described in the text has been applied. . . . . . . . . . 103
4.4.10 3D render of the designed system. . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.4.11 First prototype built. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.4.12 Field before correcting shift, tilt and focus. . . . . . . . . . . . . . . . . . . . . . . 105
4.4.13 Field after correcting shift, tilt, and focus. . . . . . . . . . . . . . . . . . . . . . . 105
4.5.1 Organization of boxes in the image. . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.5.2 Entropy measured on one image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.5.3 Selection of boxes of measurements displayed in Fig.4.5.4 . . . . . . . . . . . . . . 111
4.5.4 Evolution of the mean entropy on 10 zones of the field as a function of the focus
position. The 10 selected zones are presented in Fig.4.5.3 . . . . . . . . . . . . . . 112
4.5.5 b value (mean of the Gaussian fit of the entropy metric) over the field of HS1.4 . . 113
4.5.6 Optical zOmt model, or field curvature model of HS1.4 (a) and residuals (b). Fig.(a)
represents the map of the expected best focus positions over the field, in focuser
steps, according to the zOmt model. The lowest RMS defocus over the field is
obtained for the focus position z = 264.99 steps. The model converged to an
optical center at coordinates (XShift, YShift) = (273.55, 131.16) pixels off the
geometric image center. The tilt found was (XTilt, YTilt) = (3.91, 1.24) arcmin.
Fig.(b) represents the difference, in focuser steps, between the model presented in
Fig.(a) and the measurement of the best focus positions over the field presented in
Fig.4.5.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.5.7 FWHM metric measured values as a function of absolute defocus (b parameter of
the quadratic fit of the evolution of FWHM across focus, per zone). . . . . . . . . 115
4.5.8 Ellipticity metric measured values as a function of absolute defocus (b parameter of
the quadratic fit of the evolution of ellipticity across focus, per zone). . . . . . . . 115
4.5.9 Entropy metric measured values as a function of absolute defocus (b parameter of
the quadratic fit of the evolution of entropy across focus, per zone). . . . . . . . . 116
4.5.10 Entropy value against defocus (blue) and fit to a 1D smoothed spline (red) . . . . 116
4.5.11 Entropy as a function of the absolute defocus. The 1-D representation in (a) ex-
presses all the measured and modeled values of the entropy vs the absolute defocus.
In the 2-D representation in (b) we show the color map of the measured entropy
against the radial distance from the optical center. Every black dot represents the
position of an entropy measurement in defocus vs distance from the optical
center. The colors in between the dots are the interpolated values of the entropy
between the measurements used to build the map. . . . . . . . . . . . . . . . . . . 117
4.5.12 (a) is a smoothed b-spline version of the map presented in Fig.4.5.11 (b). The
graph on the right represents the residuals expressed by the difference of (a) and
Fig.4.5.11 (b). Fig.4.5.12 (b) and Fig.4.5.11 (a) show a clearly better convergence
than Fig.4.5.10 thanks to the use of a 2D B-Spline model. . . . . . . . . . . . . . . 117
4.5.13 Mean residuals for an image 6000 units before focus . . . . . . . . . . . . . . . . . 118
4.5.14 Mean residuals for an image 1000 units before focus . . . . . . . . . . . . . . . . . 118
4.5.15 Comparison of the results given by the entropy method and the usual FWHM
method during an actual observation night. We show the evolution of the results
given by both algorithms when the telescope is tracking and pointing (an arrow
marks the moment when the target was changed). We also show the
evolution of the temperature during the night. During this test the initial best focus
guess was only 600 motor steps away from the best focus. It is possible to see that
both algorithms converge quickly. The entropy method is as efficient as the usual
FWHM method in this case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.5.16 Comparison of the results given by the entropy method and the usual FWHM
method during an actual observation night. We show the evolution of the results
given by both algorithms when the telescope is tracking and pointing (an arrow
marks the moment when the target was changed). We also show the
evolution of the temperature during the night. During this test the initial best focus
guess was more than 2000 motor steps away from the best focus. It is possible to
see that the entropy algorithm converges quickly while the FWHM method can
get lost. The entropy method is much more robust than the usual FWHM method
in this case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.1.1 The Virtud-50cm telescope Rowe Coma Corrector commercially packaged by Baader
Planetarium. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.1.2 The Virtud-50cm telescope mounted in its final location. It is possible to appreciate
the truss tube design and the SAPACAN 5-DOF remotely controlled collimation
system next to the prime focus of the telescope. . . . . . . . . . . . . . . . . . . . . 123
5.1.3 Attachment method of the primary mirror to the tube. Since the telescope is Alt-
Azimuthal it is not necessary to hold the mirror at the top. The mirror is sustained
by two slings, which have the advantage of self-centering the mirror inside the tube
and providing a homogeneous effort on the periphery of the flange. . . . . . . . . . 123
5.1.4 The Virtud-50cm Cell support design. The mirror is supported by 6 floating triangles
(18 contact points). Each pair of triangles is united by a joining lever. The left
drawing represents the positioning of the triangles and the position of the supports
relative to the mirror, while the right plot shows the resulting mirror deformations
at the zenith (in m) according to the FEM model simulated in this configuration. . 124
5.1.5 Aerial view of the refurbished dome at Sirene’s Observatory, France. . . . . . . . . 125
5.1.6 Installation schematics of the rotation wheel of the dome. . . . . . . . . . . . . . . 125
5.1.7 Interface board for reading the position of the Axis encoder over a serial or serial/usb
Port. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.2.1 General view of the PUC40 telescope installed at Santa Martina's Golf Club. It is
possible to appreciate the typical Boller & Chivens offset Hour Angle axis. . . . . 128
5.2.2 Motor swap and encoder fitting on the Hour Angle Axis of the telescope. . . . . . 129
5.2.3 Telescope Control System interface Screenshot. The figure shows the TCS interface
as it appears to the user during a remote observing session. . . . . . . . . . . . . . 130
5.2.4 Camera Interface Screenshot. The figure shows the real-time interface used for the
remote operation of the Main Imaging Camera . . . . . . . . . . . . . . . . . . . . 131
5.2.5 Integration of a 2h session on the NGC104 globular cluster using the PUC40cm
telescope imager. The result presented is the combined sum of 40 consecutive 180s
exposures with a Johnson & Cousins R filter with no auto-guiding. The roundness
of the star images shows that the system can perfectly handle 3-minute exposures
without the need of guiding, partially thanks to the effect of using the advanced
pointing model described in Section 4.2. . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.1 Initial state of the ESO50 telescope when still installed at La Silla . . . . . . . . . 133
X
LIST OF FIGURES
5.3.2 ESO50 after being moved down to Santa Martina’s observatory . . . . . . . . . . . 134
5.3.3 The new control box and electronics of the 50cm telescope. The TCS and associated
subsystems are embedded in the left-hand box, while imaging-related equipment
has been installed in the right-hand box . . . . . . . . . . . . . . . . . . . . . . . . 135
5.4.1 Old TCS in the ESO1m telescope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.4.2 Mechanical setup of each axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.4.3 New TCS installed, showing the position of the two new electronic boxes which
implement the TCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.4.4 Most useful pointing machine states and transitions according to SLALIB and to
the Telescope Pointing Machine definitions. . . . . . . . . . . . . . . . . . . . . . . 140
5.4.5 New states added in the telescope pointing machine (in dark gray) and their asso-
ciated transitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.5.1 Pointing residuals of the telescope control system. The graph shows the pointing
error in Right Ascension versus the pointing error in Declination, expressed in arc-
seconds, for 15 bright stars randomly selected in the sky. . . . . . . . . . . . . . . . 141
5.5.2 CoolObs TCS GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
List of Tables
Glossary
Instrument Neutral Distributed Interface (INDI) The Instrument Neutral Distributed In-
terface is a distributed control system (DCS) protocol to enable control, data acquisition
and exchange among hardware devices and software front ends, emphasizing astronomical
instrumentation. 79, 80
Internet Communications Engine (ICE) The Internet Communications Engine, or Ice, is an
open source RPC framework developed by ZeroC. It provides SDKs for C++, C#, Java,
JavaScript, MATLAB, Objective-C, PHP, Python, and Ruby, and can run on various op-
erating systems, including Linux, Windows, macOS, iOS, and Android. ICE implements a
proprietary communications protocol, called the Ice protocol, that can run over TCP, TLS,
UDP, and WebSocket. As its name indicates, ICE can be suitable for applications that
communicate over the Internet, and includes functionality for traversing firewalls. 79, 80
Specification Language for ICE (SLICE) The Specification Language for ICE (SLICE) is a
simple object-oriented definition language in which we define the member functions of every
possible server object present in an ICE collection of peripherals. 80
1 Introduction
In this thesis, we synthesize the development of a new concept for the operation of small robotic tele-
scopes operated over the Internet. Our design includes a set of improvements in the control algorithmics
and hardware of several critical points in the chain of subsystems necessary to obtain suitable data
from a telescope.
We can synthesize the principal contributions of this thesis into five independent innovations:
An advanced drive closed-loop control We designed an innovative hardware and software
solution for controlling a telescope position with high precision and high robustness.
A complete Telescope Control System (TCS) We implemented a light and portable soft-
ware using advanced astronomical algorithm libraries to optimally compute the telescope
position in real time. This software also provides a new system of multiple simultaneous point-
ing models using state machines, which allows reaching higher pointing precision and
longer exposure times with external guiding telescopes.
A distributed software architecture (CoolObs) CoolObs is the implementation of a ZeroC-
ICE framework allowing the control, interaction, and communication of all the peripherals
present in an astronomical observatory.
A patented system for dynamic collimation of optics SAPACAN is a mechanical parallel
arrangement and its associated software used for active compensation of low-frequency aber-
ration variations in small telescopes.
Collimation estimation algorithms A sensor-less AO algorithm has been applied through the anal-
ysis of images obtained with the field camera. This algorithm can detect the effects of poor col-
limation. The measured misalignments can later feed corrections to a device like SAPACAN.
Due to the constant arrival of new technologies, astronomy has been one
of the first fields to introduce equipment which was not yet democratized at the time, such as Charge-
Coupled Devices, the Internet, adaptive optics, and the remote and robotic control of devices.
However, every time one of these new technologies was included in the field, it was necessary to
design a software protocol according to the state of the art of the epoch. Then, with the democ-
ratization of the same devices years after the definition of their protocols, the same communication
rules tend to remain in use to keep backward compatibility with old - and progressively unused - devices.
When building on such a large amount of accumulated software knowledge, as in robotic observing, we can
identify several inconsistencies in the commonly used architectures due to the previously explained reasons.
This situation is the reason why we propose in the following a new concept which considers
an observatory as a single entity and not as a separate list of independent peripherals. We will describe
the application of this concept in the field of robotic telescopes and implement it in various com-
pletely different examples to show its versatility and robustness.
First of all, we will give a short introduction to the astronomical concepts which will be used all
along the document; in a second part, we will expose the state of the art of the current solutions used
in the different subsystems of an observing facility and explain why they fail to be usable in small
telescopes. The principal section will be dedicated to detailing and explaining each of the five innovations
enumerated previously, and finally, we will present the fabrication and integration of these solutions.
We will show how the joint use of all of them allowed obtaining outstanding
results in the robotic use of a new prototype and in the adaptation of several existing refurbished
telescopes. Finally, we dedicate the last chapter of this thesis to summarizing the conclusions of our
work.
2 Astronomical and Telescope Mechanics
0° and 360°, counted from the north direction, positive clockwise for the observer. The elevation is
measured vertically by an angle between 0° for an object at the horizon and 90° for a target at the zenith.
Earth’s rotation
Zenith
North
an
di
Target
Meri
West
tion
Azimuth
Eleva
East
South
Even if the Alt-Azimuthal notation is quite convenient for Earth-referenced cases, because
the main axis is aligned with the gravity vector, which makes things very convenient for flexure or
atmosphere related computations, it is not the most adequate for the absolute referencing of star
positions. Since the Earth rotates around the Sun and about its own axis, the path traced by the
stars along the night or the year is not straightforward when expressed in this
reference frame.
The equivalent of the terrestrial longitude, called the Right Ascension, is measured along the
celestial equator as the angle between the vernal point and the target object. Declination is mea-
sured from -90 to +90 degrees as the angle to the celestial equator, like the terrestrial parallels.
The right ascension is measured in hours of time, corresponding to the time an observer, observing
from a fixed position on Earth, must wait between the instant when the vernal point crosses the merid-
ian and the instant when the measured position crosses it. As a result, right ascension is measured
from 0h00m00.0s to one sidereal day, the time in which the Earth does a full 360 degree rotation, or
approximately 23h56m4.1s.
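As a simple numerical illustration of the relation between right ascension, local sidereal time and the local hour angle used later in this document, a minimal sketch (with hypothetical values) is:

```python
def hour_angle_deg(lst_hours, ra_hours):
    """Local hour angle in degrees, from local sidereal time and right ascension (in hours).
    One hour of time corresponds to 15 degrees of Earth rotation."""
    return ((lst_hours - ra_hours) % 24.0) * 15.0

# Example: a target at RA = 5h30m observed when the local sidereal time is 7h00m
# has an hour angle of 1.5 h, i.e. 22.5 degrees west of the meridian.
print(hour_angle_deg(7.0, 5.5))   # -> 22.5
```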
The biggest benefit of using an equatorial reference frame is that over long periods of time (a
few years) the coordinates of a distant star can be considered as constant.
However, the Earth's axis is affected by a periodic movement called precession, with a period of about
26,000 years, introducing a progressive drift of the vernal point over the years. The inclination of this
same axis with respect to the ecliptic also suffers some oscillation, called nutation.
The Earth's rotation also suffers from speed variations, which implies that some corrections must be
applied in order to know the proper angle of the vernal point with respect to the Earth.
As a result, sky coordinates are always referenced to a given date, and a set of corrections must
be performed for the instant at which we observe. As an example, the corrections to be applied
depending on the environmental conditions of the telescope, which noticeably affect pointing quality,
are presented in Fig.2.1.3, which shows a typical sequence of corrections proposed by P.T. Wallace in the
SLALIB library[5, 6], from the catalog position to the real telescope position.
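As a minimal illustration of this chain of corrections, the following sketch uses the astropy library (not the implementation used in this work) to transform a catalogue ICRS position into a topocentric observed Alt-Az position, including refraction; the site, epoch and target are hypothetical placeholders:

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Hypothetical observing site, epoch and catalogue position
site = EarthLocation(lat=-33.27 * u.deg, lon=-70.53 * u.deg, height=1450 * u.m)
when = Time("2020-01-01T03:00:00", scale="utc")
star = SkyCoord(ra=201.298 * u.deg, dec=-43.019 * u.deg, frame="icrs")

# The AltAz frame applies precession/nutation, aberration and Earth rotation, and,
# when weather data are given, a refraction model, yielding the observed place.
observed = star.transform_to(AltAz(obstime=when, location=site,
                                   pressure=750 * u.hPa,
                                   temperature=10 * u.deg_C,
                                   relative_humidity=0.2,
                                   obswl=0.55 * u.micron))
print(observed.az.deg, observed.alt.deg)
```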
The main disadvantage of British mounts comes from their control. For every position on the
sky to be pointed at, there are two mathematically possible positions of the telescope, but only
one of them is valid (the one with the tube over the counterweight); otherwise it touches the pillar
of the mount. Heavy calculations are necessary in order to avoid this if we do not want to limit the
range of action when we pass the meridian.
This means that the tube must be to the East of the mount when the telescope is pointing West
and to the West of the mount when pointing East. That makes control and error correction more
complicated. An example of this mechanical mount may be seen in figure 2.2.2. Another relevant
problem to be taken into account is that pictures are flipped in x and y when the telescope is flipped
during a long exposure.
[Figure: the equatorial reference frame, showing the Earth's rotation axis with its precession and nutation, the observer and the Greenwich meridian, the celestial equator and poles, the ecliptic, the Sun, and a target star.]
[Figure 2.1.3 flowchart: Heliocentric Mean FK5 J2000.0 (after precessing to J2000.0) → Heliocentric Parallax → Geocentric Mean FK5 J2000.0 → Geocentric Parallax → Topocentric Mean FK5 J2000.0 → Light Deflection → Aberration → Precess to Date → Date coordinates → Nutation → Topocentric Apparent FK5, Current Equinox → Earth's Rotation → Topocentric Apparent (Ha, Dec) → AltAz Conversion → Topocentric Apparent (Az, El) → Refraction → Topocentric Observed (Az, El) → AltAz Conversion → Topocentric Observed (Ra, Dec).]
Figure 2.1.3: Most useful pointing machine states and transitions according to SLALIB and to the Telescope Pointing Machine definitions.
Nowadays, in the design of very large and giant telescopes (6-40m in diameter), all designs
converge to azimuthal mounts, because of the difficulties implied by an equatorial mount in such
large systems. The other big advantage of an azimuthal configuration in a large telescope is the
possibility of having two Nasmyth foci in which we can install all required heavy instrumentation,
which only needs to rotate about its own axis to follow field rotation. Heavy instrumentation can also be
used on equatorial mounts using a coudé focus, although this implies a higher number of reflections and a
lower throughput efficiency in the system.
2.3.1 Newtonian
In a Newtonian telescope, the primary mirror is parabolic and there is a flat secondary mirror,
folded at 45 degrees, between the prime focus and the primary mirror that sends the focal plane
outside the tube. This design is very simple but needs a corrector if we want to observe wide fields
because of the importance of field coma (fig. 2.3.1).
2.3.2 Cassegrain
In a Cassegrain telescope, there is a concave parabolic primary mirror and a convex hyperbolic
secondary mirror which sends the focal plane through the primary mirror, in which we need a
hole. This configuration has the advantage of being very compact and is a classical solution for
amateur astronomers who need a transportable solution. In this configuration, the effect of field
coma remains strong and these telescopes still need a strong coma corrector in order to give proper
images as soon as you try to observe away from the optical axis (fig.2.3.2).
This configuration, along with the Newtonian one, is the most common among amateur
astronomers. The Newtonian one is more stable in terms of vibrations and parts stability,
while the Cassegrain one gives the advantage of a smaller tube.
2.3.3 Ritchey-Chrétien
The Ritchey-Chrétien telescope is a variation of the Cassegrain telescope where both primary
mirror and secondary mirrors are hyperbolic and the shapes of the surfaces are adapted to each
other. This configuration gives better results far away from the optical axis (fig.2.3.2).
2.3.4 Gregorian
In a Gregorian telescope, the optical configuration is equivalent to that of a Cassegrain one but for
the fact that the secondary mirror is concave too, and is placed after the prime focus. This has the
advantage of giving a real pupil between the primary and the secondary mirror when we look from
the detector position. It can be used to install a removable flat field screen usable for spectroscopic
calibrations, avoiding that way a screen of the same size of the primary mirror (fig.2.3.3).
in order to avoid large aperture obstruction. On the other hand, it is a setup which only has one
reflector (the primary), so there is a single surface to polish, making the system cheaper in some
cases (here we should notice that, as a consequence of the configuration, the tube must be longer and
that implies a bigger dome, fig.2.3.4). This is the design that has been selected for the first prototype
in the present project.
Figure 2.3.4: Lateral view of a prime focus telescope. Chief (green) and marginal (blue) rays
coming from stars at infinity are represented.
3 State of the Art
In the field of telescope design, a long path has been travelled since the 1960s, but most of the soft-
ware or hardware solutions which are used in modern professional telescopes are based on solutions
and restrictions which were established several decades ago.
For this reason we make in this chapter a review of the existing methods and hardware
solutions used to facilitate the acquisition of automatic data. Our goal is to analyse the way large
professional telescopes acquire data, from the point of view of the several steps of the acquisition chain.
This means, from the software and hardware point of view, from the movement of the telescope to the data
analysis and its feedback in order to get better image quality.
Along this study, we will analyse the status and feasibility of their possible porting to small
telescopes.
As described in the previous chapter, the Earth rotates about its axis at a constant speed of
15 arcseconds per second, and as a first approximation this is the needed tracking speed on the right
ascension axis of an equatorial telescope. The tracking precision needed by a telescope is usually
driven by the size of a star on the telescope imager, which must be rigorously kept at the very
same position on the detector while it is integrating the light of the target. On the other side, the
atmospheric turbulence degrades the image quality and makes the best achievable star point
spread function on the detector hardly better than 0.3 to 2 arcseconds in diameter, depending
on the quality of the observing site.
As a result, for telescopes which are not equipped with adaptive optics the limitation is driven
by the atmospheric quality, but the maximum acceptable tracking error during the observation, if
we want to keep correct shapes of the stars, still remains below a fraction of an arcsecond, which is
still mechanically hard to achieve.
As a result, if we want to keep the star within 0.3 arcseconds of its nominal position while
tracking at sidereal speed (15.04 arcsec/s approximately), it is necessary to choose a control time
sampling Δt so that the angular distance travelled by the star would be less than 0.3 arcseconds at
this speed. As a consequence, it was established early on that the axis control loop needs a time
sampling shorter than the time in which the star drifts by the maximum acceptable error, which would be around 20 ms.
Since this time sampling was not sufficient for the computers existing at this epoch
to carry out the astronomical computations and corrections, Wallace proposed to split the positioning
control loop of a telescope into two loops running at different speeds. The first one would blindly
control the motor speed, closing the loop with a tachometer or an encoder, while the second one would
control the position of the telescope at a lower rate, allowing the astronomical corrections to be computed.
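The structure of such a split control loop can be sketched as follows. This is only an illustrative skeleton under assumed interfaces; the callables, rates and names are placeholders, not the implementation described later in this thesis:

```python
import time

FAST_DT = 0.005   # 200 Hz velocity loop, closed on the motor encoder/tachometer
SLOW_DT = 0.050   # 20 Hz position loop, slow enough for astronomical corrections

def tracking_loop(read_encoder, set_motor_velocity, corrected_demand, pid, run_time_s=10.0):
    """Two-rate tracking loop skeleton.

    read_encoder():        returns the current axis position (degrees)
    set_motor_velocity(v): sends a velocity set point to the motor driver
    corrected_demand(t):   returns (position, velocity) after astronomical corrections
    pid:                   an object with an update(error) -> correction method
    """
    t0 = time.monotonic()
    demand_pos, demand_vel, last_slow = 0.0, 0.0, -1.0
    while time.monotonic() - t0 < run_time_s:
        t = time.monotonic() - t0
        if last_slow < 0 or t - last_slow >= SLOW_DT:
            demand_pos, demand_vel = corrected_demand(t)   # slow position loop
            last_slow = t
        # Fast loop: extrapolate the demand and servo the motor speed on the error.
        error = demand_pos + demand_vel * (t - last_slow) - read_encoder()
        set_motor_velocity(demand_vel + pid.update(error))
        time.sleep(FAST_DT)
```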
Computing performance and the availability of closing a position control loop with a sampling
period below 100 ms also allowed the design of the first alt-azimuthal telescopes able to give an
image quality similar to that of the equatorial ones. The Multiple Mirror Telescope, which started operating
in 1979, is one of the first big telescopes to use an alt-azimuthal design. The advantage of this
configuration is a more compact mechanics and dome, which allows building a bigger telescope for
a fixed budget, and fewer mechanical restrictions in terms of flexures and mechanical tensions. The
counterpart of this is the fact that it becomes absolutely necessary to recompute the transformation
from equatorial coordinates to mount coordinates at each loop iteration in order to have proper
tracking, since both axes have to track at different and non-constant speeds all the time in
order to keep the star in the field of the sensor.
A similar control loop was implemented at the Multi Mirror Telescope (MMT) from its first
initial design [10], where the speed control loop runs at 4 kHz while the position control loop runs at
50 Hz. For the new design of this telescope by the Smithsonian Astrophysical Observatory (SAO)
and the University of Arizona, high resolution on-axis encoders coupled to specific control algo-
rithms have been used.
A PID consists of a control loop that sends a set-point speed to the motor. An encoder measures
the position or speed error, and the PID corrects the set-point speed of the motor by a sum of three
terms. The first term is proportional to the last measured error, the second to the sum of all the
previous errors, and the third to the rate of change of the error. This specific algorithm drives
DC servomotors that have their own high resolution PID control embedded. The PID parameters
are actively tuned by the algorithmics when the speed of the motor changes, allowing to get a
better response of the motor at every speed. This strategy is going to be adapted to the active
telescope we are proposing.
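A minimal discrete PID corrector of this kind can be sketched as follows; this is an illustrative sketch only (gains, units and the interface are assumptions), not the MMT implementation:

```python
class PID:
    """Minimal discrete PID corrector for a velocity set point."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        # Proportional term: reacts to the last measured error.
        p = self.kp * error
        # Integral term: accumulates past errors to remove steady-state drift.
        self.integral += error * self.dt
        i = self.ki * self.integral
        # Derivative term: reacts to the rate of change of the error.
        d = 0.0 if self.prev_error is None else self.kd * (error - self.prev_error) / self.dt
        self.prev_error = error
        return p + i + d   # correction added to the motor set-point speed

# Example: pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.02); new_speed = set_speed + pid.update(err)
```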
Fig.3.1.1 is a representation of the organization of the control loops of the elevation
axis of the MMT (the azimuth control loop is very similar and thus not displayed). First of all
we remark the presence of a GPS timing board for a precise timing of the absolute position,
which is indissociable from precise pointing. Then we can note that the motors are controlled
by a standard PID which receives the signal from a high resolution relative encoder. The refer-
ence for the encoders is given by an absolute encoder of lower resolution, and the control computer
unit sends the PID parameters to the PID controller according to the current speed of the telescope.
[Figure 3.1.1: organization of the control loops of the MMT elevation axis; the diagram shows a GPS/NTP time server and Ethernet connections.]
However, closed-loop controlled tracking has for a long time been reserved to the bigger telescopes,
where the latest technologies were implemented, and remained that way out of reach of small
scientific projects or amateur class telescopes. Implementing such a control usually implied the
use of a full rack of computers and custom designed control boards whose cost and maintenance
were prohibitive for the 20cm-1m class telescopes.
However, these systems implemented only basic coordinate transformations or mount cor-
rections running in open loop on the axes, since the computing capabilities of these massified
microcontrollers did not allow such corrections.
Shen et al.[11] proposed a distributed architecture, which we can see in Fig:(3.1.2), where the fastest
and simplest operations could be run in an embedded control loop on a standard PIC16F876 micro-
controller, while positioning could be run on an embedded computer and the most complex operations
on a standard personal computer. This architecture allowed exporting the most costly operations to
more powerful systems, while the fastest operations could be run inside dedicated real-time hardware.
This system does not offer closed-loop control of the positioning, but allows distributing the power
and the speed of the system inside a low-cost overall architecture.
The following effects, however, depend only on the mechanics of the telescope
itself, and must be calibrated using the same instrument and weight balance to be used in the final
observations. They also depend on the position the telescope is pointing at, and may
severely affect the position of the object we try to observe on the science camera.
• The telescope is not properly aligned towards the pole (for an equatorial mount) or the zenith
(for an azimuthal mount)
• The zero positions of the encoders used to control the motor position have some offset relative
to the true zero position
• The tube or the camera holder have flexures. This flexure changes as a function of the
position where the telescope is pointing, so this effect will require a specific calibration to
characterize the error introduced depending on the position
• Some optics can slightly move in their respective cells, or the cell itself may be deformed
depending on the position of the telescope
[Figure 3.1.3 flowchart: Proper Motion → Parallax → Light Deflection → Annual Aberration → Precession → Nutation → Apparent [α, δ]; Sidereal Time → Apparent [h, δ]; Diurnal Aberration → Topocentric Place; Refraction → Observed Place → Instrument Place.]
Figure 3.1.3: Corrections to be considered between the catalogue position and the telescope position. α is the Right Ascension, δ is the Declination and h is the Local Hour Angle.
To a greater or lesser extent, these errors in pointing are unavoidable and need to be corrected. The
philosophy of the equations defining a pointing model is presented next. It is a general problem
which can be reduced to a least-squares matrix inversion. We will define three matrices which
will represent a vector containing the parameters of interest, a matrix describing the equations in
the considered pointing model, and a vector containing the pointing error to be calculated. Let us
call:
• p1..pn the n parameters of the model defining the mount
• (Φ, Θ) the position of the mount (local hour angle and declination)
• f1(Φ, Θ)..fn(Φ, Θ) the contribution function of every parameter on the Φ axis
• g1(Φ, Θ)..gn(Φ, Θ) the contribution function of every parameter on the Θ axis
• (dΦ, dΘ) the position error on every axis of the mount at a given position.
We can assume the following relationships for a given position, assuming the errors are linear:

$$ d\Phi = \sum_{i=1}^{n} p_i\, f_i(\Phi,\Theta) \qquad\qquad d\Theta = \sum_{i=1}^{n} p_i\, g_i(\Phi,\Theta) $$
$\vec{F}$ and $\vec{G}$ are vectors containing constant values characteristic of the arrangement of the telescope
optics and mount. Then the two previous relationships can be expressed as

$$ d\Phi = \vec{F}_{(\Phi,\Theta)} \cdot \vec{P} \qquad (3.1.3) $$

$$ d\Theta = \vec{G}_{(\Phi,\Theta)} \cdot \vec{P} \qquad (3.1.4) $$

We can now observe a number m of theoretical positions (Φi, Θi) using our telescope; for every one
of these positions the telescope will observe the real position (Φ̃i, Θ̃i), which may present differences
with the theoretical ones. The real position is deduced using a common technique called an
astrometrical reduction of the stars present on the field of the CCD [16]. It becomes obvious that
the error vector is

$$ [d\Phi_i,\, d\Theta_i] = [\,\Phi_i - \tilde{\Phi}_i,\ \Theta_i - \tilde{\Theta}_i\,] \qquad (3.1.5) $$
Now we can construct the following relation, which is a matrix expression of the previous one:

$$
\begin{pmatrix}
f_1(\Phi_1,\Theta_1) & \cdots & f_n(\Phi_1,\Theta_1) \\
g_1(\Phi_1,\Theta_1) & \cdots & g_n(\Phi_1,\Theta_1) \\
\vdots & \ddots & \vdots \\
f_1(\Phi_m,\Theta_m) & \cdots & f_n(\Phi_m,\Theta_m) \\
g_1(\Phi_m,\Theta_m) & \cdots & g_n(\Phi_m,\Theta_m)
\end{pmatrix}
\cdot
\begin{pmatrix} P_1 \\ \vdots \\ P_n \end{pmatrix}
=
\begin{pmatrix} d\Phi_1 \\ d\Theta_1 \\ \vdots \\ d\Phi_m \\ d\Theta_m \end{pmatrix}
\qquad (3.1.6)
$$

So if we have a series of observations we can fill the first and the third matrices of the previous
expression and deduce the parameter vector $\vec{P}$ using a least-squares solution of the matrix inversion
as follows:

$$
\begin{pmatrix} P_1 \\ \vdots \\ P_n \end{pmatrix}
=
\begin{pmatrix}
f_1(\Phi_1,\Theta_1) & \cdots & f_n(\Phi_1,\Theta_1) \\
g_1(\Phi_1,\Theta_1) & \cdots & g_n(\Phi_1,\Theta_1) \\
\vdots & \ddots & \vdots \\
f_1(\Phi_m,\Theta_m) & \cdots & f_n(\Phi_m,\Theta_m) \\
g_1(\Phi_m,\Theta_m) & \cdots & g_n(\Phi_m,\Theta_m)
\end{pmatrix}^{-1}
\cdot
\begin{pmatrix} d\Phi_1 \\ d\Theta_1 \\ \vdots \\ d\Phi_m \\ d\Theta_m \end{pmatrix}
\qquad (3.1.7)
$$
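In practice, Eq. 3.1.7 is solved in a least-squares sense (a pseudo-inverse) rather than by a direct inversion. A minimal sketch of such a fit, using numpy and hypothetical function lists, could be:

```python
import numpy as np

def fit_pointing_model(f_funcs, g_funcs, phi, theta, dphi, dtheta):
    """Least-squares solution of Eq. 3.1.6/3.1.7 for the parameter vector P.

    f_funcs, g_funcs : lists of the n contribution functions f_i(phi, theta), g_i(phi, theta)
    phi, theta       : m observed mount positions (hour angle, declination), in radians
    dphi, dtheta     : m measured pointing errors on each axis
    """
    m, n = len(phi), len(f_funcs)
    A = np.zeros((2 * m, n))
    b = np.zeros(2 * m)
    for k in range(m):
        A[2 * k] = [f(phi[k], theta[k]) for f in f_funcs]
        A[2 * k + 1] = [g(phi[k], theta[k]) for g in g_funcs]
        b[2 * k], b[2 * k + 1] = dphi[k], dtheta[k]
    P, *_ = np.linalg.lstsq(A, b, rcond=None)   # pseudo-inverse solution of Eq. 3.1.7
    return P
```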
TPoint is able to solve a model based on linearised equations like the ones in Tab. 3.1.1 and
Tab.3.1.2.
Term   Description                          h                      δ
IH     h index error                        IH
ID     δ index error                                               ID
CH     Collimation error                    CH sec δ
NP     h/δ non-perpendicularity             NP tan δ
MA     Polar axis left-right misalignment   -MA cos h tan δ        MA sin h
ME     Polar axis vertical misalignment     ME sin h tan δ         ME cos h
Table 3.1.1: Model of the six primary parameters for TPoint correction
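As a usage example for the previous sketch, the six primary terms of Tab. 3.1.1 can be written as contribution functions (h the hour angle and δ the declination, in radians). These lambdas are only an illustration following the reconstructed table above:

```python
import numpy as np

# Contribution functions of the six primary terms (h and δ columns of Tab. 3.1.1).
f_funcs = [lambda h, d: 1.0,                       # IH : index error in h
           lambda h, d: 0.0,                       # ID
           lambda h, d: 1.0 / np.cos(d),           # CH : collimation, sec(δ)
           lambda h, d: np.tan(d),                 # NP : non-perpendicularity
           lambda h, d: -np.cos(h) * np.tan(d),    # MA : polar axis left-right misalignment
           lambda h, d: np.sin(h) * np.tan(d)]     # ME : polar axis vertical misalignment
g_funcs = [lambda h, d: 0.0,
           lambda h, d: 1.0,                       # ID : index error in δ
           lambda h, d: 0.0,
           lambda h, d: 0.0,
           lambda h, d: np.sin(h),                 # MA
           lambda h, d: np.cos(h)]                 # ME
# P = fit_pointing_model(f_funcs, g_funcs, phi, theta, dphi, dtheta)
```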
The obtained parameters model the telescope-dependent effects. The possibilities offered by applying
a pointing model can be seen in figures 3.1.4(a) and 3.1.4(b). In these graphs we can see that, without
applying any advanced correction, a high quality mechanism points with a 35 arcsecond rms
precision, and this value can be divided by a factor of 10 with an appropriate mathematical model.
With the increase of computational possibilities, TPoint was then implemented not only for cor-
recting pointing but also to include dynamic tracking speeds as a function of where the telescope
is pointing.
Term   Description       h                                    δ
TF     Tube flexure      TF cos φ sin h sec δ                 TF (cos φ cos h sin δ - sin φ cos δ)
FO     Fork flexure                                           FO sec h
DAF    δ axis flexure    DAF (cos φ cos h + sin φ tan δ)
Table 3.1.2: Flexure terms for the TPoint correction

(a) Offsets correction only   (b) Correction of flexures, misalignments and non-perpendicularities
Figure 3.1.4: The HALE 200-inch telescope with two kinds of corrections
Latest generation big telescopes such as Keck[17] or Gemini[18] all use these functions
in their tracking control loops. Using this method, the telescope can track longer without drift due
to flexure effects or mechanical misalignments.
Terret et al. [19] proposed in 2006 a C++ class library for large scale telescopes, layered on
TCSpk and SLALIB, that implements virtual telescope objects for generating mount and rotator
positions and speeds. It also permits the interaction with TPOINT[13].
Percival et al. [1] proposed the Telescope Pointing Machine Library (TPM). TPM is less com-
plete than the previous one but proposes multi-language backends such as Python.
As we can see in fig. (3.1.2), the telescope is considered as a state machine and a transformation of
coordinates is considered as a state transition. This concept makes the change from
one state to another more versatile, as well as its further implementation in a custom control loop.
[Figure: Telescope Pointing Machine state diagram, Version 1.14, 20-Nov-2008, based on the Keck pointing flow by P. T. Wallace; states S08-S15 are linked by transitions such as "T09 - Aberration".]
As soon as the image misalignment becomes more important than the image of the star affected
by atmospheric turbulence (the seeing disk), all telescopes become sensitive to this effect, so
approaches used in the case of larger telescopes could be interesting to analyze and to extend
to smaller telescopes. When the MMT (Multiple Mirror Telescope) [20] was changed from a seg-
mented seven-mirror primary arrangement to a 6.5 m monolithic one, its designers were confronted with
the issue of positioning the f/5 secondary mirror. The secondary mirror was then placed onto a
hexapod (a positioning system with six degrees of freedom) and a dedicated control algorithm was
used to recenter it depending on the position of the telescope, in order to compensate misalign-
ments due to flexures. In a technical memorandum[21] of the MMT, a method is defined in order to
compensate the mirror positioning as a function of the telescope position. The method consists of
three main steps:
• We place a camera at the prime focus, replacing the secondary mirror by a doughnut-
shaped counterweight of equal mass.
• We point the telescope at several altitudes and register the variation of the position of the
aimed star relative to the encoder position, and store it in a flexure model that can be inter-
polated by a simple polynomial function.
• When the telescope is in normal operation, we compensate the flexure by tilting and shifting the
secondary mirror around its cardinal zero-coma point, in an open-loop mode.
It has been shown that the telescope mount behavior can be assumed to be repetitive, and how such a
correction can drastically improve the general image quality of the instrument. A similar correction
can be applied in a quite straightforward way on a smaller telescope like ours, and the model could
be done with the same science camera in case we use it in a prime focus configuration.
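A minimal sketch of such an open-loop flexure compensation, using hypothetical calibration data and a simple polynomial model, could look as follows:

```python
import numpy as np

# Hypothetical calibration data: star image drift on the prime-focus camera (in pixels)
# registered at several telescope altitudes, with the secondary replaced by a counterweight.
alt_deg = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
drift_px = np.array([12.1, 9.8, 7.4, 5.6, 3.9, 2.1, 0.8])

# Low-order polynomial flexure model, interpolated during normal operation.
flexure_model = np.polynomial.Polynomial.fit(alt_deg, drift_px, deg=2)

def open_loop_correction(current_alt_deg, px_to_micron=9.0):
    """Shift (in microns, placeholder scale) to apply to the secondary at the current altitude."""
    return flexure_model(current_alt_deg) * px_to_micron

print(open_loop_correction(45.0))
```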
• It is hard to install a microlens array or a collimating lens at the same focal plane because
we physically don't have enough space. Moreover, these sensors must be close to the optical
axis in order to have small off-axis aberrations and measure wavefronts undistorted by the
sensor arrangement.
• When the Shack-Hartmann system can be moved around the science detector in order to find
a proper wavefront sensing star, it is very useful to have a very small aperture of the SH
(pinhole) in order to reduce the contribution of the sky background and be sure of having a
single star in the aperture. Of course the problem is the same lack of practicable space in
small telescopes, so we cannot install moving stages close to the focus.
• We cannot use any beam splitter or any device that introduces light losses to the science
detector. Light cannot be wasted in instruments where the light gathering capability is
limited by the aperture size.
• In a Shack-Hartmann system, the light of a single star is divided into many sub-pupils, thus
reducing the number of stars available for wavefront sensing procedures. On a small telescope
we could become limited very quickly by the low number of bright enough stars.
As a result, a Shack-Hartmann sensor may not be the right solution for such an instrument.
That is the reason why we searched for a solution in comparable optical configurations where
Shack-Hartmann sensors are rather difficult to operate, such as in microscopy. Using this strategy
we could isolate several sensorless adaptive optics methods. In our application, these techniques
are expected to retrieve the aberrations of the incoming wavefront out of the image registered in
the science field by making use of some algorithmics and some measurement strategy.
A brief overview of the main techniques which could be applied to our system with this purpose
is presented in the following.
Let’s call (~r) the aberration wave front phase, (~r) the included o↵set and F {} The Fourier
transform operator . We obtain the two following measurements I1 and I2 on the sensor:
These aberration can then be expressed using a decomposition of orthogonal functions fn with a
or b defining the influence of the mode n and we have:
2
ZZ
I1,2 = exp(jafk ± bfi ) dA (3.1.10)
A
And the output of the sensor can be expressed as the di↵erence of intensities of the same sensor
a↵ected by the positive and negative o↵set aberration:
Neil et al.
I = I1 I2 (3.1.11) 1099
Vol. 17, No. 6 / June 2000 / J. Opt. Soc. Am. A
Fig. 1. Schematic description of the aberration sensor that uses biasing elements and Fourier transform lenses.
Figure 3.1.6: A standard modal sensor
where F denotes the FT and v is the coordinate vector de- and it follows that the sensitivity is given by
Then,plane.
scribing the detector in order to be able to close the loop properly , we must know which is the sensitivity of
# "" "" $
This sensor is a generalization
a mode n to an actuationof theofcurvature
a mode m(defo-
in order to be aware of any eventual crossover degeneracy.
cus) sensor In
in fact,
whichdepending
one detector pinhole
on the modesis weplaced in decompose
use to S!the4 signal
Im inexp
these
$ jbforthogonal
i % dA
functions
fk exp$ #jbfi %we
dA . (7)
front of the nominal
may notfocal plane
assure of an
that a lens and oneon
actuation behind
a mode does not a↵ect another A A
one. A sensitivity matrix
it.1,5 For the special
can case in which
be generated thethe
getting biasgradient
aberration is output of the sensor on a mode n with respect to the
of the
chosen such input
that !(r) " r 2 , the
aberration sensor
when this of Fig. 1 is opti-
aberration Thezero.
a is close to two exponential terms can be expanded as a Maclau-
cally identical to the curvature sensor. rin series to separate the imaginary parts, giving
The performance of the sensor depends on the aperture
@ In
% "" ""
shape, the chosen bias aberration, and the size and shape
""
Sn,a = (3.1.12)
of the pinhole, all of which we may choose. In general, @a
a=0 S ! 4 b fi dA fk dA # b fi fk dA
our aim is to obtain the magnitude of the various modes A A A
"" &
in an orthogonal modal expansion of the input phase ab-
erration #(r). We now proceed to show that if we choose $ dA & ¯ . (8)
our bias aberration to be exactly one of those orthogonal A
modes, then, to first order, the sensor will be sensitive to
that mode in the input wave front while rejecting all oth- So, if the bias b is sufficiently small that only first-order
ers. Let us consider the case in which the detection pin- terms are significant and the aberration modes fi and fk
holes are infinitely small and positioned on the optical have zero mean across the aperture, only the second term
axis. We take our sensor output signal to be the differ- remains on the right-hand side of Eq. (8). It follows that
ence between the intensities at the two detectors. Let if fi and fk are members of a set of functions orthogonal
!(r) ! afi (r) and #(r) ! bfk (r), where fi and fk are or- over the aperture, A, then the sensitivity will be given by
thogonal functions representing the modal expansion of
the phase aberration and a and b are representing the
magnitude of the input aberration and the applied bias.
The intensities at the two detectors are then given by
S ( #4bA
A
""
fi fk dA ! #4bAC ) ik , (9)
I 1,2$ 0% ! ! ""
A
!
exp$ jafk " jbfi % dA .
2
(3)
where ) ik is a Kronecker delta and C is the constant of
orthogonality. S will be nonzero only if fi and fk are the
22 is
The sensor output same function. The sensor would therefore respond only
to an input aberration mode that is identical to its bias
3.1. APPLIED ALGORITHMICS TO ROBOTIC OBSERVING
This becomes very helpful when we want to use the guide chips as modal sensors, as described
in the previous subsection. We only need to scale both guide stars, and by making the difference of
the two windows we obtain the same output as defined in equation 3.1.11. Figure 3.1.8 shows the
output of the modal sensor when the optical path is affected by several types of basic aberrations.
It shows that we can easily measure low-order Zernike coefficients, from defocus to astigmatism,
coma or sphericity, that could be used to send corrections of these modes to the secondary mirror
positioning system, and to some primary mirror actuators. Defocus information can basically be
retrieved from the sum of the difference image, while the higher order aberrations must be fitted
using the shape of the difference image.
In practice, in the MEGACAM imager it appears that we only need to use the defocus guiding
information, because the telescope has an embedded Shack-Hartmann wavefront sensor which runs
hourly, coupled to the primary mirror server, which interpolates the astigmatism with a FEM
analysis of the primary mirror deformation due to changes in its inclination. This approach yields
good enough results, although it is hardly applicable to a small telescope.
Figure 3.1.8: Using the Megacam guide sensors as modal sensors to retrieve information about
astigmatism, coma and sphericity
As shown in Fig 3.1.9, Debarre's set-up is quite simple. It consists of re-imaging the pupil on a
deformable mirror and then sending it to the sensor. The imaging sensor is therefore affected by
the unknown aberration that arrives at the entry of the system plus the known aberration included
in the deformable mirror.
[Figure 3.1.9: D. Debarre's setup for optimizing a wavefront based on direct image measurement and a deformable mirror. The reproduced schematic shows a light emitting diode illuminating the object and relay lenses (L1 = 150 mm, L2 = 120 mm, L4 = 200 mm) re-imaging the pupil onto the deformable mirror and the camera.]
D. Debarre showed that if the included modal o↵sets by the deformable mirror are decomposed
image are FT artefacts arising from the sharp image boundaries.
in Lukosz modes instead of classical Zernike modes, their sensitivity related to the low frequencies
of the image becomes decoupled. In other words, the computation of the sensitivity matrix of the
equation 3.1.12 gives
• The actuation on the deformable mirror follows a Lukosz decomposition

• We consider only the lowest frequencies of the Fourier transform of the obtained images

• We try to maximize the sum of these lower frequencies using a bilinear interpolation of one start point with two known offsets on each mode

In that case we can observe the following relations:

S_{n,a}|_{n=a} = 1    (3.1.13)

S_{n,a}|_{n \neq a} = 0    (3.1.14)

meaning that every mode is linearly independent of the others in the linear decomposition, so they do not contaminate each other. Making 2N+1 measurements based on a start point plus two known
offsets in each mode we want to maximize, we can make a bilinear interpolation and thus find the N-dimensional point where the sum of the lower frequencies is maximum. This N-dimensional point represents the values to be used for the set of modes we want to optimize. In other words, if we simplify the problem to the optimization of one single mode, we can call G_0 the base position affected by an unknown aberration, and G_+ and G_- the same position affected by a known offset ±b in this mode. The optimized position G_{Best} in this mode will then be:
G_{Best} = \frac{-b\,(G_{+} - G_{-})}{2G_{+} - 4G_{0} + 2G_{-}}    (3.1.15)
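As a worked illustration of equation (3.1.15), the following minimal Python sketch (an assumed helper, not the thesis implementation) computes the optimal offset for one mode from the three metric evaluations G-, G0 and G+:

# Minimal sketch: optimal offset for a single mode from three metric samples,
# following equation (3.1.15). Function name and example values are illustrative.
def optimal_mode_offset(g_minus, g_zero, g_plus, b):
    """g_minus, g_zero, g_plus: metric measured with offsets -b, 0 and +b
    applied to the deformable mirror; b: the known bias amplitude."""
    denominator = 2.0 * g_plus - 4.0 * g_zero + 2.0 * g_minus
    if denominator == 0.0:
        return 0.0  # flat response: no correction can be derived
    return -b * (g_plus - g_minus) / denominator

# Example: metric samples whose parabola peaks near +0.33 of the bias unit
print(optimal_mode_offset(g_minus=0.40, g_zero=0.55, g_plus=0.52, b=1.0))  # ~0.33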
Figure 3.1.10: Example of optimization for a single mode.
3.1.4.4 Direct star measurement
It is also possible to measure some aberrations by directly measuring the shape of a point source.
Since the telescope is pointing at stars, we can easily find a number of point sources all over the field. Grisan & Naletto [25][26] demonstrated in 2007 that we can simply measure and minimize some aberrations on the image of a point source whenever this one is sufficiently sampled on the detector.
Grisan & Naletto used an iterative method to minimize and detect the aberrations one by one, starting with the spherical aberration and sending corrections until no further improvement could be measured, then proceeding with coma, defocus and astigmatism. An example is shown in Fig 3.1.11. For the optimization of each mode, a box is drawn around the point source we want to analyse and assigned a merit function. The aberration is then measured and a correction is sent to the deformable mirror according to this measured aberration. Here we give some examples of these merit functions.
For spherical aberration, as a first step we can maximize the overall intensity in the box as in function M1, and then maximize the mean intensity of the non-zero pixels as in function M2:

M_1 = \iint_{Area} I(x, y)\, dx\, dy    (3.1.16)

M_2 = \frac{1}{N_{I(x,y)>0}} \iint_{I(x,y)>0} I(x, y)\, dx\, dy    (3.1.17)
For coma, we can define the merit function M3 as follows: let us call P1 the unweighted barycentre of the non-zero pixels in the box and P2 the barycentre of the non-zero pixels weighted by their respective intensities; then we have:
M_3 = \left(\frac{1}{Area}\iint I^2(x, y)\, dx\, dy\right) \left\| P_2 - P_1 \right\|^2    (3.1.18)
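The following minimal numpy sketch illustrates, under assumptions, how the merit functions (3.1.16)-(3.1.18) could be evaluated on a small box extracted around a star; the exact normalization of M3 follows our reading of equation (3.1.18), and the function names and test values are hypothetical:

# Minimal sketch of the merit functions on a star box (assumed implementation).
import numpy as np

def merit_m1(box):
    """M1: total intensity inside the box."""
    return box.sum()

def merit_m2(box):
    """M2: mean intensity of the non-zero pixels."""
    nonzero = box[box > 0]
    return nonzero.mean() if nonzero.size else 0.0

def merit_m3(box):
    """M3: squared distance between the unweighted (P1) and intensity-weighted (P2)
    barycentres, scaled by the mean squared intensity (one reading of eq. 3.1.18)."""
    ys, xs = np.nonzero(box)
    if xs.size == 0:
        return 0.0
    p1 = np.array([xs.mean(), ys.mean()])                 # unweighted barycentre
    weights = box[ys, xs]
    p2 = np.array([np.average(xs, weights=weights),
                   np.average(ys, weights=weights)])      # weighted barycentre
    return (box ** 2).sum() / box.size * np.sum((p2 - p1) ** 2)

box = np.zeros((9, 9))
box[3:6, 3:6] = [[1, 2, 1], [2, 8, 4], [1, 2, 1]]         # slightly asymmetric spot
print(merit_m1(box), merit_m2(box), merit_m3(box))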
S(\rho, \theta) = \frac{I_1(\rho, \theta) - I_2(\rho, \theta + \pi)}{I_1(\rho, \theta) + I_2(\rho, \theta + \pi)}    (3.1.19)
It is interesting to note that, due to the inversion of the image through focus, the conjugate pixel of a pixel in the image before focus is rotated by an angle π around the optical axis in the image after focus, as we can see in figure 3.1.12. We can then find a relationship between the sensor output S and the actual wavefront. According to Roddier [28], the irradiance transport equation at the pupil yields equation 3.1.20:
S(x, y) = \left(\frac{\partial W}{\partial n}\,\delta_c - P\,\nabla^2 W\right) z    (3.1.20)
where:
• δ_c is a Dirac distribution equal to one on the edge of the pupil and zero everywhere else
l being the distance between the focal plane and the image of the cross section of the beam through the telescope. Inserting equation 3.1.21 into equation 3.1.20 yields:
S(x, y) = \frac{f(f - l)}{l}\left(\frac{\partial W}{\partial n}\,\delta_c - P\,\nabla^2 W\right)    (3.1.22)
Roddier [27] implemented an iterative solution to this equation that gives a precise estimation of the wavefront. The convergence of the algorithm is fast, so it can be used in closed-loop applications. The algorithm was tested on the ESO New Technology Telescope (NTT) at La Silla Observatory, actuating on the primary mirror to correct the wavefront deformation. Fig. 3.1.13 shows the effect of the algorithm for reducing coma on a star image. It can be used for collimation in our system.
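As an illustration of the sensor signal of equation (3.1.19), the short numpy sketch below (assumed code, not the thesis implementation) builds S from an intra-focal and an extra-focal image, using the fact that the conjugate pixel is the one rotated by π about the optical axis:

# Minimal sketch of the curvature-sensor signal of eq. (3.1.19); test images are synthetic.
import numpy as np

def curvature_signal(intra, extra, eps=1e-9):
    """Normalized difference between the intra-focal image and the
    pi-rotated extra-focal image, pixel by pixel."""
    extra_rotated = extra[::-1, ::-1]      # rotation by pi around the optical axis
    return (intra - extra_rotated) / (intra + extra_rotated + eps)

rng = np.random.default_rng(0)
intra = rng.uniform(1.0, 2.0, size=(64, 64))
extra = intra[::-1, ::-1] * 1.05           # slightly brighter, inverted copy
print(curvature_signal(intra, extra).mean())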
• The first few terms of the Zernike functions express optical aberrations familiar to opticians, as they may be related to the classical third-order Seidel aberrations (tilt, defocus, coma, sphericity, astigmatism, etc.)
• An analytical solution of the wavefront can be found provided we know the derivative of the wavefront, which is the result we obtain from a Shack-Hartmann wavefront sensor
• An analytical solution of the residuals of any wavefront can be found
• When the wavefront is affected by Kolmogorov turbulence, Zernike polynomials allow evaluating the solution integrals in a non-iterative process.
The main problem of Zernike polynomials when applied to astronomy is the fact that most telescopes are affected by a central obstruction due to the secondary mirror, which gives an annular pupil, different from the circular pupil domain over which Zernike polynomials are defined. Since Zernike polynomials are orthogonal only on a circular pupil, relevant mode crosstalk appears in annular pupils.
For that reason, Mahajan [31] developed a set of polynomials comparable to the Zernike ones, with the property of taking into account the central obstruction and being orthogonal over an annular pupil. If we consider r as the pupil radius normalized to one and ε the obstruction factor, the Zernike annular polynomials can be written as:
U_n^m(r, \epsilon) = \begin{cases} \sqrt{2(n+1)}\, R_n^m(r, \epsilon) \cos(m\theta) & m > 0 \\ \sqrt{2(n+1)}\, R_n^{|m|}(r, \epsilon) \sin(|m|\theta) & m < 0 \\ \sqrt{n+1}\, R_n^0(r, \epsilon) & m = 0 \end{cases}    (3.1.27)
with

Q_j^m(r^2) = \frac{2(2j + 2m - 1)}{(j + m)(1 - \epsilon^2)\, Q_j^{m-1}(0)} \sum_{i=0}^{j} \frac{h_j^{m-1}\, Q_i^{m-1}(0)\, Q_i^{m-1}(r^2)}{h_i^{m-1}}    (3.1.30)

h_j^m = -\frac{2(2j + 2m - 1)}{(j + m)(1 - \epsilon^2)} \frac{Q_{j+1}^{m-1}(0)}{Q_j^{m-1}(0)}\, h_j^{m-1}    (3.1.31)

Q_j^0(r^2) = R_{2j}^0(r, \epsilon)    (3.1.32)

h_j^0 = \frac{1 - \epsilon^2}{2(2j + 1)}    (3.1.33)

\text{when } m < 0, \quad R_{2j+m}^{m}(r, \epsilon) = R_{2j+m}^{|m|}(r, \epsilon)    (3.1.34)

\text{when } m = n, \quad R_n^n(r, \epsilon) = \frac{r^n}{\sqrt{\sum_{i=0}^{n} \epsilon^{2i}}}    (3.1.35)
In 2001, Restaino et al. [32] used a Roddier imaging method to test the surface quality of the Naval Observatory Flagstaff Station 1 m telescope. Using the same sensor output, which is the difference of two out-of-focus images, they computed the wavefront aberration by decomposing the signal into normal circular Zernikes as a first approximation and then with annular Zernikes. The comparison of the results obtained from the same raw images shows that if the obstruction ratio
is large, the circular Zernike decomposition suffers a degeneracy compared to annular Zernikes. They concluded that using annular Zernikes would be more precise when analyzing the absolute aberration of the optical system. On the other hand, they affirm that in adaptive optics this degeneracy would not be a particularly important effect, as typically the wavefront analysis does not use the Zernike modes explicitly but rather sets the deformable element to null out the wavefront error. Dai & Mahajan [33] confirmed later that even if the error introduced by using standard Zernikes instead of annular Zernikes would be dominated by the turbulence, it is interesting to use annular Zernikes to have a better estimation of the image degradation.
Maeda [35] developed algorithms for optimizing a wavefront using a direct measurement of the PSF, without including offsets. Maeda directly recovered the wavefront using a method defined by Gerchberg and Saxton [36][37]. The idea is to fit a model to find the best set of aberrations for the measured PSF; then, according to the measured phase aberration, we can send the corresponding correction to the actuator. Ideally no iteration is necessary and the correction is very fast, but a high level of sampling is needed in order to get a proper estimation of the pupil.
telescope will give an image with no astigmatism on the axis of the telescope. On the other hand, only when both mirrors are perfectly collimated will the field astigmatism be minimized with a radial profile. In a Cassegrain configuration, the coma-free point Z is located on the secondary axis at a distance L0 behind the vertex, expressed in equation (3.2.1), where Rs is the secondary vertex radius of curvature, m is the system magnification and k2 is the conic constant of the secondary.

L_0 = \frac{R_s\,(m + 1)}{m + 1 - k_2\,(m - 1)}    (3.2.1)
For this reason it is necessary to be able to move the secondary mirror with 5 degrees of freedom (two rotations, x tilt and y tilt, and three translations) in order to be able to align the optics in an optimal way.
3.2.1.2 Parallel platforms
Historically, open loop serial chain manipulators were presented first because of their simplicity of use and manoeuvrability similar to a human arm. On the other hand, for high load applications parallel systems look much more suitable because of their higher strength and lower sensitivity to vibrations. The main particularity of parallel manipulators is that, since it is necessary to move every parallel axis at the same time, the whole system cannot be run in open loop mode and a closed loop must be implemented.
Stewart [41] designed in 1965 a 6-DOF parallel mechanism to simulate flight conditions for helicopter pilot training by generating general motion in space. This design, as seen in fig(3.2.1), consists of a triangular platform linked to three controllable linear actuators with ball joints. Each actuator is linked to the ground by an independent two-axes rotary actuator; one of these axes is controllable while the other is free.
Among the numerous responses to Stewart's paper, Hunt proposed an analysis for the application of the platform into robotics. Although Gough's system is much closer than Stewart's, the name of Stewart platform has remained in further applications; some authors later refer to it as the Stewart-Gough platform.
Figure 3.2.2: Implementation of the Gough platform in the tyre testing machine, and modification of the Stewart platform according to Gough's comments by Hunt using CV joints
Generic Stewart platform kinematics is commonly defined as follows, as we can see in Fig(3.2.3):
• We consider n legs [L1 ... Li ... Ln] of respective lengths [l1 ... li ... ln]
• The attachment point on the base of the leg Li is defined by the vector \vec{b}_i = \vec{O_1 B_i} in the referential (O1, R_Base)
• The attachment point on the platform of the leg Li is defined by the vector \vec{p}_i = \vec{O_2 P_i} in the referential (O2, R_Platform)
• The translation vector \vec{T} between the referential R_Base and the referential R_Platform is defined by \vec{T} = \vec{O_1 O_2}
• The rotation between the two referentials is represented by the Euler angles φ (rotation around the x axis), θ (rotation around the y axis) and ψ (rotation around the z axis).
We can then compute the length of each leg Li according to equations (3.2.2) and (3.2.3):

\vec{l}_i = \vec{T} + R_B \cdot \vec{p}_i - \vec{b}_i    (3.2.2)

l_i = \left\| \vec{T} + R_B \cdot \vec{p}_i - \vec{b}_i \right\|_2    (3.2.3)
where:

R_B = R_z(\psi) \cdot R_y(\theta) \cdot R_x(\phi)    (3.2.4)

R_B = \begin{bmatrix} \cos\psi \cos\theta & -\sin\psi \cos\phi + \cos\psi \sin\theta \sin\phi & \sin\psi \sin\phi + \cos\psi \sin\theta \cos\phi \\ \sin\psi \cos\theta & \cos\psi \cos\phi + \sin\psi \sin\theta \sin\phi & -\cos\psi \sin\phi + \sin\psi \sin\theta \cos\phi \\ -\sin\theta & \cos\theta \sin\phi & \cos\theta \cos\phi \end{bmatrix}    (3.2.5)
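A minimal numpy sketch of this inverse kinematics is given below; it is an assumed illustration (three legs only, with hypothetical anchor coordinates), not a design value, and simply evaluates equations (3.2.2)-(3.2.5) to obtain the leg lengths for a given platform pose:

# Minimal sketch of Stewart platform inverse kinematics (assumed geometry).
import numpy as np

def rotation_zyx(phi, theta, psi):
    """R_B = Rz(psi) @ Ry(theta) @ Rx(phi), equations (3.2.4)-(3.2.5)."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(psi), np.sin(psi)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

def leg_lengths(T, phi, theta, psi, base_points, platform_points):
    """l_i = || T + R_B p_i - b_i ||, equation (3.2.3), one row per leg."""
    RB = rotation_zyx(phi, theta, psi)
    return np.linalg.norm(T + platform_points @ RB.T - base_points, axis=1)

# Hypothetical anchor points in their own frames (metres), three legs for brevity
base = np.array([[0.5, 0.0, 0.0], [-0.25, 0.43, 0.0], [-0.25, -0.43, 0.0]])
plat = np.array([[0.3, 0.0, 0.0], [-0.15, 0.26, 0.0], [-0.15, -0.26, 0.0]])
print(leg_lengths(np.array([0.0, 0.0, 0.4]), 0.01, -0.02, 0.0, base, plat))

For a real hexapod the same computation is simply applied to six legs; only the anchor point arrays change.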
Even if the control and dynamics of the Stewart platform were widely studied during the 80's and 90's, the first application of such a device for positioning a secondary mirror appears with the upgrade from six primary mirrors to one monolithic primary mirror of the MMT (Multiple Mirror Telescope) at Mt Hopkins, and with the design of the LBT (Large Binocular Telescope) at Mt Graham [44]. A set of six secondary mirrors of different optical configurations, focal ratios and applications was designed for these two telescopes in order to fulfill different functions at each telescope. A similar support was designed at LBT and MMT, as seen in Fig.(3.2.4), in order to be able to make the primary and secondary optical axes coincident: a Stewart platform, also called hexapod, was implemented. This technical solution allows precise and vibrationless positioning over 6 axes of a heavy weight at such a hanging position.
Starting from this period, every large diameter telescope of more than 4 m diameter, such as
Figure 3.2.4: MMT 6.5m telescope and detail of adaptive secondary mirror and its Stewart platform
support
Effects of periodical errors due to worm gearing have been known since the beginning of the 20th century [51], and the first attempts to correct for these effects can be found in the 1960's. Kron first proposed to slightly decenter one end of the worm in its bearing to produce a counter periodic error [52], but only the first harmonic can be corrected using this technique.
Hardie and Ballard [53] proposed in 1962 the use of several gears between the motor and the worm. These gears would have an adjustable eccentricity. By adjusting the number, ratio, phase and eccentricity of each gear, they managed to mechanically correct for several harmonics of the periodical error by introducing a known aberration that compensates the existing worm error. The result of their technique is shown in Fig.3.2.6, which is a scan of a photographic plate obtained by setting a non-null speed in declination while the telescope is tracking a star. The effect of the drift due to speed variations can be appreciated with and without the correction.
A more direct solution to reduce the worm error is to obtain a better transmission from the beginning. This way we can avoid correcting for the periodic errors. Groenveld [54] analyzed in 1969 the use of non-standard worm-gear shapes in order to optimize the contact surface, which would minimize the variation errors at the extremely low rotation speeds used in telescopes. One way to maximize the contact surface between the teeth and the worm pitch is to use smaller pressure angles than the standard ones. An illustration of the definition of the pressure angle is detailed in Fig.3.2.7. In the standard industry, where the precision at very low speed is not that important,
Figure 3.2.6: Enlargements of stellar images trailed in declination to reveal periodical drive error. Left: original error; right: error reduced by compensation.
the commonly used angles are 14.5, 20 and 25 degrees. These relatively high values minimize the contact surface in order to increase the system efficiency. In our case the power is not a limitation, so the contrary may be interesting, in the sense that increasing the contact surface makes the micrometric variations statistically compensate each other. The overall movement in this case is then smoother. In order to obtain a better transmission while keeping a good manufacturability, Groenveld proposed the use of 10 degree pressure angles for the construction of a telescope at Mt Stromlo Observatory in Australia.
Several ways can be used to remove the backlash of a gear. In the case of a horizontal gear, the most common way is to split the gear horizontally, as shown in Fig.3.2.8, and offset both parts by an angle, removing this way any possible backlash. This technique, as defined in [55], is efficient but increases the friction between both gears by a considerable amount.
Additionally, the transmission ratio needed in a telescope between the motor and the main axis is typically between 100X and 400X, which cannot be achieved with only two of these drives. As a result, a minimum of 3 to 5 stages of such gearing would be necessary to complete the transmission ratio, also multiplying the friction level. This solution is commonly used for single reduction stages between the motor and the worm but is not suitable for the complete transmission chain.
In order to remove backlash in high reduction stages such as worm gears, one solution is to split the worm into two parts, as detailed in figure 3.2.9, and adjust the space between the two parts so that there is no gap remaining between the worm and the gear teeth. In this way it is possible not only to reduce the backlash but also to increase the drive precision by reducing the periodic errors.
On the other hand, this solution is quite costly since a more complex worm mechanism has to be designed with high precision. Moreover, it also increases the friction factor, negatively affecting the efficiency of the gearing.
The most commonly used solution so far is the use of a preload cable and weight on each axis or, as defined in [56], a single cable passing through both gears without any need to pass through the main axis of the telescope. Fig.3.2.10 shows how Hannel preloaded a 1 m telescope at La Silla Observatory. The telescope has to be correctly balanced before installing the cable and
the weight; the torque introduced by the weight slightly unbalances each axis so that the gear teeth always push the worm on the same side, removing this way any possible backlash. The weight hung from the cable can be adjusted so that it is sufficiently heavy to unbalance the telescope but with the minimum amount, so that the gear efficiency and the friction between the worm and the gear are affected by a negligible amount.
identify precisely the positioning of the tape relative to the read head. An absolute positioning is possible by using various contiguous scales printed on the tape.
This solution has shown to be extremely precise and reliable and has been used in many large professional telescopes, such as the William Herschel Telescope, a 4.2 m telescope installed at La Palma in 1985. The TCS upgrade was performed in 1993 [57] and the first use of such encoders, compared to the techniques of the epoch, demonstrated the viability of such hardware.
However, the use of a tape encoder and a read head may generate positioning errors, periodical or not, due to two main factors. Warner pointed out this problem in 2008 [58] and proposed a technique to take these variations into account.
Periodical errors are mostly due to a miscalibration of the offsets and gains of the sine and cosine waves, and their frequency corresponds to the grating spacing of the tape. They cannot be corrected by a large lookup table since their frequency reaches the telescope encoder bandwidth and depends on the telescope mount rates. As a result, he proposes to model and correct for the amplitudes and offsets of each encoder signal inside the control loop.
On the other hand, the non-periodic errors, mostly induced by local variations of the distance between the tape and the head, have to be modeled using a lookup table and Kalman filtering.
In the end, tape and read head absolute encoders require very high machining precision of the tape support, because any variation of the distance between the read head and the tape will induce a variation of the grating period as seen from the read head, introducing positioning errors with an important number of harmonics and a 1-turn period, which can hardly be corrected using classical pointing model equations.
Unfortunately, their cost still represents an important proportion of the overall cost of a 1 m class telescope or smaller. As a result, it is very unusual to find high precision encoding tapes and high resolution on-axis encoders on amateur-class telescopes of 50 cm diameter and less.
However, it is possible to find some companies proposing affordable control systems making use of an absolute on-axis encoder, such as Gemini Telescope Design [59], which makes this approach affordable for this class of telescopes. The Gothard Astrophysical Observatory of Eotvos University, Szombathely, in Hungary carried out the robotization of a 50 cm telescope [60] using an integrated mount from Gemini Telescope Design. Even if in this commercial device the absolute encoder allows keeping track of the mount's position even after a failure or power-off, its resolution is not high enough to have an active control of periodical errors or wind bursts. These still need to be corrected using a separate guiding telescope and a remote camera head.
In the commercial amateur-oriented market, the Sidereal Technology TCS [61] also offers the option to install high resolution relative encoders placed on the main axes of the telescope. [62][63]
Figure 3.2.13: Coil and magnets organization of a direct drive brushless motor
Every newly designed large-scale and very large-scale telescope tends to use a drive technology based on direct drive motors. This can be seen by reviewing the system definitions of most of the recently constructed large telescopes: the four 8.2 m Very Large Telescope (VLT) units [64] of the European Southern Observatory (ESO), or the Gemini 8 m telescopes [65] in Hawaii and Chile, use this technology. Additionally, the Preliminary Design Reviews (PDR) and Final Design Reviews (FDR) of the next telescope generation, such as the Thirty Meter Telescope (TMT) [66] or the next generation of Chinese observatories [67], will also make use of this technology.
Figure 3.2.14: Stator and rotor of the defined direct drive brushless motor
Direct drive system applications for telescopes were first proposed in the 90's [68], spread progressively [69] and recently became a standard in large telescope applications [70]. The success on large telescopes is mostly due to the following reasons:
• Absence of periodical and non-periodical errors
• High efficiency due to the absence of a reduction stage
• Absence of backlash
• High range of available speeds, from very slow to extremely fast movements (tens of degrees per second) compared to geared systems
However, in order to fulfill the precision specifications inherent to a telescope drive, the position control loop of a direct drive motor has to be closed using an encoder with a precision and resolution of at least 1/20th of an arcsecond. This restriction implies an important minimum cost for the encoder and limits the applicability of this technique to small telescopes (50 cm - 1 m range). As a result, this technical solution has been used in very few small systems, such as in [71], since the cost of the drive system would remain very high compared to the overall cost of such a telescope.
3.2.2.5 Auto-guiding
The last method we mention to compensate and correct for aberrations related to the telescope mechanics consists of adding a second detector, either on the focal plane of the telescope or behind a second, smaller telescope mounted in parallel with the main instrument.
The goal is to obtain an image as close as possible to the target field, where a bright star can be selected and its centroid computed with a periodicity between 1/10 s and a few seconds. At every centroid computation its value is compared to the one taken at the beginning of the observation and a tracking error is computed. An offset in arcseconds is derived and sent to the telescope mount to correct the positioning. This method has been extensively detailed in [72]. It has been shown in [73] that the precision can be increased by better computing the centroid of the target star.
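A minimal numpy sketch of one such guiding cycle is given below (assumed code, not the guiders described in [72][73]); the helper names are hypothetical and the 0.5 arcsec/pixel default plate scale is only an example value:

# Minimal sketch of one auto-guiding cycle: weighted centroid and offset in arcsec.
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (x, y) in pixels."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return np.array([(xs * image).sum() / total, (ys * image).sum() / total])

def guiding_offset(reference_xy, current_image, plate_scale_arcsec_per_pix=0.5):
    """Offset (arcsec) between the reference centroid and the current one."""
    drift_pix = centroid(current_image) - reference_xy
    return drift_pix * plate_scale_arcsec_per_pix

ref_image = np.zeros((21, 21)); ref_image[10, 10] = 100.0
ref_xy = centroid(ref_image)
drifted = np.zeros((21, 21)); drifted[10, 12] = 100.0   # star drifted 2 px in x
print(guiding_offset(ref_xy, drifted))                  # -> [1.0, 0.0] arcsec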
In this way we can measure and correct for the aberrations related to imperfect mechanics. Even if this method has been widely used in every range of telescope sizes, it presents some drawbacks. The main complexity is to find a proper guiding star, which must be bright enough to allow a proper centroid computation at short exposures without saturating the detector, and which must also be close to the target and found inside the guiding field. Since the guiding field is usually a few arcminutes wide and the guide telescope is less powerful than the main one, a proper guiding star may not be found easily.
Another problem of this method is that, since the guiding telescope is installed in parallel with the main one, the support and flexures of the devices are not necessarily the same as those of the main telescope. As a result, during long exposures, the drift recorded in the guider will not be the same as the one in the main instrument and, even if the star was properly guided, we can observe a differential drift in the science detector.
Figure 3.2.15: Autoguiding principle and its possible implementation on an amateur's setup
The same Telescope Control System was also used in the defense area in projects such as the
Raven Automated Telescope System used at NASA [80] in order to track and discover space debris
constantly at the same speed. This need makes this subsystem unusable in this configuration for
our application when we want the system to be tracking at very high precision for several hours.
other side, Plug-In based and ASCOM compatible software use a similar structure. ASCOM or Plug-In drivers can be developed by any user as long as they respect the defined common protocol, independently of the astronomical software using these peripherals. However, the Plug-In architecture forces the control software to use a specific programming language, while with the ASCOM protocol the astronomical software can connect to the hub independently of the programming language it uses, which makes it a more versatile solution. In both cases, the big advantage compared to the monolithic UI architecture is that integrating new features or new hardware drivers only requires releasing a new plugin or a new ASCOM driver, independently of the client program and its programming language.
The ASCOM protocol is widely used in amateur astronomy, even in remote operation, and was also used in some scientific implementations such as the previously cited Mossop's upgrades [82]. The open source protocol allowed many independent software packages, with an important variety of specialities, to connect to it and operate a station remotely. However, it presents two major restrictions if we want to operate an observatory in an autonomous configuration in a robust and optimized way:
• ASCOM software only runs on Windows, thus every driver and client software must be run on the same platform
• Every peripheral driver has to be coded using the C# programming language, but not all peripheral manufacturers provide a C# API. Additionally, this restricts collaborative programming, since not all collaborators would feel comfortable with this language
• Every peripheral and client has to be connected to the same computer, which can become software-heavy in some cases of observatories with an important number of peripherals. Additionally, the Windows operating system tends to give erratic and non-constant responses when several peripherals are connected to the same computer (port numbers can change after a reboot, and similar issues that can be handled but require an important, time-consuming effort at the moment of putting a new telescope in operation)
Figure: INDI architecture layout — an INDI server hosting C++ drivers (mount, focuser, camera, interlock) to which control GUIs and daemons connect over the Internet.
The INDI architecture is organized in layers: the lower level is the hardware layer, where the drivers programmed in the C++ language are included into the servers, while the users interact with the servers from the user layer over the Internet. The public protocol of interaction over the TCP/IP network is managed by XML commands (a minimal client-side example is sketched after the following list). This architecture brings several advantages:
• Commands are sent over the network, so the peripheral load can be distributed over several computers, which makes the system more reliable
• The servers run on Unix/Linux systems, which are more stable than Windows, but since the control is done through XML commands over TCP/IP it remains independent of the user's Operating System (OS)
• New hardware drivers can be programmed by the user using standard C/C++ APIs
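As a small illustration of this XML-over-TCP interaction, the sketch below (assumed client code) opens a socket to an INDI server on its usual port, 7624, and sends the standard getProperties message, printing whatever property definitions the server returns; host name and timeout are placeholders:

# Minimal sketch of an INDI client exchange over TCP (assumed code, for illustration).
import socket

def list_indi_properties(host="localhost", port=7624, timeout=2.0):
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"<getProperties version='1.7'/>\n")  # initial INDI handshake
        sock.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # stop reading once the server goes quiet
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(list_indi_properties())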
Most of the multi-platform astronomical software packages with the possibility of connecting to hardware, as seen in [89], have implemented their interfacing through the INDI protocol architecture. The fact that a whole observatory's peripherals can be installed on separate computers or even single board computers such as Raspberry Pis, together with its versatility and robustness compared to ASCOM, has led many scientific robotic projects, such as the 60 cm robotic telescope of Montsec Observatory [90][91], to prefer this technical solution.
However, even if the INDI platform is multi-platform compatible, which means clients can run on any platform, the servers need to run in a Unix/Linux environment. As a consequence, every driver has to be developed in C/C++ only and will run under Linux only, which also limits the compatible hardware, since some peripheral vendors only provide Windows APIs. Another interesting point is that the fact that drivers can be developed only in the C/C++ language also limits the size of the collaborative community, even if the software is open source. Many potential collaborators in terms of driver programming will prefer other programming languages; as a result, fixing the interface to a single language limits the number of people able to collaborate on the project. Finally, we can point out that since the INDI server only runs under Linux and ASCOM is Windows only, both systems remain totally exclusive from each other and it is not possible to develop any kind of bridge system that would allow peripherals with an ASCOM driver to be controlled from an INDI client and vice-versa.
Figure 3.2.21: Example RTS2 environment. Two basic setups are present on the site—a telescope
with a guiding CCD and a main CCD equipped with a filter wheel, and an all sky camera. There
are two domes (one for telescope, second for all sky camera), three CCD detectors, executor and
selector services controlling telescope setup and scriptor, a simplified executor service, controlling
the all sky camera. Both observatories offer external access through the XML-RPC protocol, provided
by XMLRPCD. This is used by the Graphical User Interface and the Web server. XMLRPCD also
provides a Web browser with direct access to some functions. Access to XML-RPC and Web
functions can be protected with a password.
The RTS2 platform has constantly evolved [96][97] since its creation. Its latest features include scheduling based either on the classical dispatched [98, 99] or queued scheduling implemented in big observatories such as the Very Large Telescope (VLT) [100], but it also provides a scheduling facility based on Genetic Algorithms (GA Scheduling) [101, 102, 103]. The versatility, modularity and stability of the system make it the most used integrated software in robotic astronomy projects.
4 Algorithmics
In this chapter we will present the five major contributions of this Thesis related to the control of robotic observations. Data obtained through a telescope can be affected by errors introduced at several levels of the acquisition chain, and this has been tackled along the Thesis through different methods.
As for the hardware level subsystems, in the first section of this Chapter we will present the hardware level improvements applied to the telescope drive control. This first contribution consists of a novel approach based on Kalman filter fusion of the data coming from multiple rotary encoders placed at strategic positions in the mechanical axis chain. The data from these rotary encoders feed a closed loop algorithm which allows obtaining a very high resolution measurement of the telescope speed and an accurate position, using hardware affordable for astronomical projects with small budgets.
The software level subsystems will be covered in sections 2 and 3. In Section 2 we will present a software for integrated control which offers the possibility of integrating multiple pointing models in the same telescope, in order to improve the guiding quality. Such an approach is proposed and implemented for the first time, to our knowledge. Currently, most commercial telescope control systems for small telescopes do propose pointing model software, but in most of them it only corrects the telescope position at the moment of aiming at the target at the beginning of the observation. Very few of them have the capability of tracking the pointing model evolution in a closed control loop as the telescope tracks the target. The technique we propose would be the first to allow small telescopes to have a dedicated pointing and tracking model for a guiding system. Such a simple proposal is able to improve the guiding precision of the overall system, improving the quality of imaging, especially when long exposures are involved.
The third contribution of this Thesis involves a new distributed and portable software architecture for sharing sets of peripherals in an observatory, which will be presented in detail in Section 3. This system is specifically designed to facilitate the collaboration of a developer community in order to easily integrate new compatible hardware, a usual situation in many small-sized amateur observatories. As defined in the State of the Art, the two most used software platforms for controlling telescope peripherals are currently ASCOM and INDI. Both solutions implement a modular structure of peripherals allowing collaborative development, and a hardware abstraction layer for the end-user's observatory control software. In addition to this, the solution we propose offers two drastic improvements: the possibility of distributing servers and peripherals over the network, which is not possible with ASCOM, and multi-platform compatibility for both servers and clients. The proposed system is designed to be compatible with ASCOM and INDI, which currently are not compatible with each other, so the software may also be used as a bridge between them.
Sections 4 and 5 will respectively present two improvements allowing a better positioning of the optics in order to consistently obtain a high image quality. While Section 4 describes in detail a system for active repositioning of the telescope optics using a simplified, yet effective, mechanical approach, the proposal presented in Section 5 describes an approach based on software analysis of the defocused images near the focal plane. This analysis shows how it is possible to generate a model of the focal plane behavior of the optics, in order to precisely position the detector using a system such as the one described in Section 4, from single image shots, using the entropy of the image as the main parameter in a merit function.
The first goal of our design is to obtain a cost-effective drive mechanism, suitable for advanced amateur astronomers, able to track sidereal objects on the sky smoothly, with no visible jumps or periodic errors larger than the typical sky turbulence jitter of the stars. As we previously saw in the State of the Art Section, closing the control loop with an absolute high resolution band encoder placed on the main axis of the telescope would easily fulfill this requirement, but its cost is way above the overall budget for the mechanics of a typical amateur astronomer telescope. This is why the second goal of our design is to fit the cost of electronics, motors and positioning sensors within the same order of magnitude as typical amateur astronomer hardware.
In the following subsections we will first define the hardware components we chose to drive each of the two axes of the telescope. We then expose the specifications of the encoders we used to measure the speed and position of each axis and their positioning in the chain. Then, we will present the electronics used to power the motors and close the loop with the previously defined encoders, and how all the components are embedded together to handle all the previously detailed hardware and sensors. Finally, we will expose the requirements this system must fulfill in order to be able to track a star on the sky smoothly, so that the hardware does not affect the image quality in any sense.
• A horizontal axis is used for azimuth pointing and tracking using a single motor
• A vertical axis is used for elevation pointing and tracking using a single motor
Both motors are DC motors, model Pittman 16 AMP 14204 24V. Their specifications are shown in Tab.4.1.1 and the electrical performance of the motor is shown in Fig.4.1.2. Each axis is driven by a worm/gear assembly with 359 teeth, which induces a reduction factor of 359X between the worm axis and the main axis. In addition to this reduction, the worm is driven by a 7X reductor from APEX Dynamics, whose specifications can be seen in Table 4.1.2. The reductor itself is driven by a belt drive with an additional 3X reduction factor.
As a result, the total reduction factor between the main axis and the motor shaft may be calculated as:

\omega_{axis} = \frac{\omega_{motor}}{(\text{Number of teeth}) \cdot (\text{Reductor factor}) \cdot (\text{Pulley factor})}    (4.1.1)

\omega_{axis} = \frac{\omega_{motor}}{7539}    (4.1.2)
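The short Python example below simply restates this arithmetic: it computes the 7539X total reduction and the motor shaft speed required for a given axis speed. The sidereal rate of about 15 arcsec/s used here is a standard value taken for illustration, and the 10000 arcsec/s figure corresponds to the maximum speed mentioned later for this drive.

# Worked example of the reduction arithmetic (values from the text above).
GEAR_TEETH = 359          # worm/gear reduction
REDUCTOR = 7              # APEX reductor
PULLEY = 3                # belt drive
TOTAL_REDUCTION = GEAR_TEETH * REDUCTOR * PULLEY      # = 7539

def motor_rpm(axis_speed_arcsec_per_s):
    """Motor shaft speed (rpm) needed for a given main-axis speed."""
    axis_rpm = axis_speed_arcsec_per_s / (360.0 * 3600.0) * 60.0
    return axis_rpm * TOTAL_REDUCTION

print(TOTAL_REDUCTION)      # 7539
print(motor_rpm(15.04))     # sidereal tracking -> about 5.2 rpm
print(motor_rpm(10000.0))   # 10000 arcsec/s slewing -> about 3490 rpm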
Figure 4.1.1: Layout of the drive chain for one axis: DC motor, motor pulley, belt, reductor, worm and worm encoder, gear, and axis encoder.
4.1.1.2 Encoders
Each axis is coded in position and speed by two independent encoders of different resolutions. While the first encoder, of higher resolution, is positioned on the main axis, the second one measures the angle and speed of the worm.
Table 4.1.1: Specifications of the Pittman 14204 series motor (from the manufacturer's datasheet).
Figure 4.1.2: Motor Performance
Their respective configuration can be detailed as follows:
• The axis encoder (with higher resolution) is a quadrature incremental Gurley RS158S encoder. Its specifications are detailed in Table 4.1.3. Considering the inner interpolation of the sinusoidal signals, this encoder delivers quadrature signals of 250000 steps per revolution. With the proper reading interpolation, the final resolution of this encoder can reach 1.000.000 steps per revolution. The encoder is installed directly on the main axis as shown in Fig.4.1.1. The total resolution on the main axis in arcseconds per step may be calculated as:

Res_{AxisArcseconds} = \frac{360 \times 3600}{Res_{AxisStepsPerTurn}}    (4.1.3)

Res_{AxisArcseconds} = 1.296 \ Arcsec/step    (4.1.4)
• The worm encoder is a quadrature incremental ELCIS 58-10000, whose specifications are detailed in Table 4.1.4. This encoder gives a direct output of 10.000 periods per revolution. With the proper reading interpolation the final resolution can reach 40.000 counts per revolution. This encoder is installed on the input axis of the 7X reductor. The mean total resolution in arcseconds per step on the sky will then be:

Res_{WormArcseconds} = \frac{360 \times 3600}{359 \times 7 \times 40000}    (4.1.5)

Res_{WormArcseconds} = 0.013 \ Arcsec/step    (4.1.6)

However, although the resolution of the second encoder is effectively finer, the absolute accuracy of the system will be limited by the errors of the reductor and of the worm/gear assembly. As a result, the axis encoder will give a low resolution with high accuracy, while the worm encoder will give a high resolution with low accuracy.
4.1.1.3 Drivers
The connection layout of the drivers used to control speed and position is detailed in Fig.4.1.3. For each axis we use an IPECMOT 48/10 driver controller. This driver handles the connection of one DC motor (or brushless, if brushless motors were preferred for the setup), its corresponding limit switches and the axis encoder. The connection to the IPECMOT is done using a TCP socket from the Versalogic single board computer. The IPECMOT can handle position reading and TCP connection every 0.5 ms and encoder frequencies of up to 400 kHz. The local position counter on the board is incremented every 0.5 ms and saved to the Electrically Erasable Programmable Read-Only Memory (EEPROM) as soon as the supply voltage goes down to 7 V. As a result the absolute position is always recorded, even when the system is shut down. This is useful whenever the system is power cycled or started after being off for a while: it will remember its actual position as long as it was not moved after being parked. Since in most cases the system is powered off every day and is not moved while it is not used, storing the last known position allows the system to immediately know its position even if it does not use absolute encoders. This avoids the need to find a home position for each axis at the beginning of operation, which can be a tedious operation.
The reading of the worm encoder is done using a dedicated PIC 18F26K22 microcontroller connected to the single board computer serial port. Reading can be done every 5 ms and the device can handle encoder signal frequencies up to 1 MHz.
Figure 4.1.3: General layout of the drivers for motors and encoders
4.1.1.4 Specifications
The precision in motor positioning and tracking depends essentially on one main parameter, which is the sampling on the CCD. The sampling was designed to oversample stars by a factor of two under normal seeing conditions, in order to satisfy the Shannon-Nyquist sampling theorem. As a result we chose a pixel sampling of 0.5 arcsec/pixel on the sky.
However, for photometric applications, which are one of the key applications enabled by small telescopes, the telescope must be able to track a star so that the mean position over an exposure stays within a tenth of a pixel. This permits avoiding photometric perturbations due to sensitivity differences within the same pixel.
Regarding the speed limits, the requirements depend on the position in the sky and on the slewing speed. For the first item we express the tracking speed in azimuth and elevation as a function of azimuth and elevation in figure 4.1.4. Speeds are computed for a site at 44° North latitude. The drawing expresses the speeds on the elevation and azimuth axes as a function of azimuth and zenithal distance. We can see that the azimuth tracking speed should be infinite at the zenith, but if we limit the observations to 85° of elevation, the tracking speed goes from 0 arcsec/sec to 200 arcsec/sec, which are reasonable values. This is obviously at the price of a blind tracking cone of only 5 degrees around the zenith.
Then, in order to take advantage of the best seeing conditions, the tracked star must stay within 1 arcsec RMS of its nominal position. If such a condition is attained, the error in the tracking speed over a measurement sampling period does not depend on the setpoint of the tracking speed. In order to keep the actual speed of the motor as close as possible to the target speed, the motor is controlled with a Proportional-Integral-Derivative (PID) controller. This algorithm consists of measuring the target and actual speed at a fixed timing period; at each of these measurement samples, the target instantaneous input voltage of the motor will be a combination of three values:
• The instantaneous error between the target and measured speed multiplied by a proportional fixed gain Kp
• The cumulated sum of the error and previous error measurements multiplied by an integral fixed gain Ki
• The difference between the error and the previous error measurement multiplied by a derivative fixed gain Kd
The acceptable error then only depends on the PID sampling time and can be expressed as in equation 4.1.7. Two typical examples of the calculated acceptable error at two PID frequencies are presented in equations 4.1.8 and 4.1.9.

AcceptableError = \frac{SpeedError}{PID_{Freq}}    (4.1.7)

AcceptableError|_{PID_{Freq}=10Hz} = 5 \ Arcsec/sec/sample    (4.1.8)

AcceptableError|_{PID_{Freq}=20Hz} = 10 \ Arcsec/sec/sample    (4.1.9)
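The following minimal sketch (assumed gains and class names, not the IPEC firmware) shows the structure of such a speed PID loop: each sample combines the proportional, integral and derivative contributions and clamps the result to the signed PWM full scale used by the drive.

# Minimal sketch of a speed PID loop (gains and example values are placeholders).
class SpeedPID:
    def __init__(self, kp, ki, kd, output_limit=32767):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0
        self.output_limit = output_limit   # signed PWM full scale

    def update(self, target_speed, measured_speed):
        error = target_speed - measured_speed
        self.integral += error
        derivative = error - self.previous_error
        self.previous_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.output_limit, min(self.output_limit, command))

pid = SpeedPID(kp=120.0, ki=8.0, kd=40.0)
print(pid.update(target_speed=15.0, measured_speed=12.5))   # first correction sample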
set using either a TCP socket or the internal web interface of the device. For our application we use the TCP socket, since it can be directly controlled by our Python scripts from a console. Additionally, the TCP socket is compatible with the ICE network distributed architecture of the complete control system, to be presented later.
Despite the important reduction ratio between the main axis and the motor, the minimum speed requirement is so low that the motor needs to turn at sub-rpm speeds, which cannot be obtained in standard designs and thus has to be developed on purpose.
On the other hand, the maximum speed requirements for slewing do not permit a larger reduction ratio, as otherwise the motor would have to work beyond its maximum speed limit. Table 4.1.5 shows the necessary RPM on the motor axis depending on the rotation speed of the main axis.
Table 4.1.5: Motor speed required at minimum and maximum tracking speeds
In a first test, we determined experimentally the minimum acceptable PWM ratio before the motor stalls at low speed. In graph 4.1.5 we express the measured speed on the main axis as a function of the PWM value. The PWM value is normalized on a 16-bit signed scale, so 32767 means 100% clockwise, -32767 means 100% counter-clockwise, while e.g. 16384 means 50%.
For each PWM value measured, we recorded the speed on the telescope main axis during 20 s. Graph 4.1.5 represents the mean speed measured over these 20 s as a function of the PWM value. Error bars represent the standard deviation of the speed during the related measurement.
In the graph we can observe that the motor stalls at a PWM value of 1800 units, which is about 5% of its full speed, both in the clockwise and counter-clockwise directions. The minimum speed before the motor stalls is around 100 arcsec/s on the main axis, which is not enough to fit our requirements. This stalling value is essentially due to the 2 ms electrical time constant of the motor, which filters the PWM pulses when they become shorter than 2 ms.
We can also observe that the speeds increase by steps of 50 arcsec/s. This effect is due to the digitization inside the PWM hardware; as a consequence, introducing a fast closed loop control at the lowest possible level is absolutely necessary if we want to be able to smoothly move the motor in the complete range of speeds from 0.1 arcsec/sec to 10000 arcsec/sec.
In Fig. 4.1.6 we plot the mean speed measured over 20 s against the target speed sent to the Ipec device. Every 20 s measurement is obtained by computing the mean of 20 measurements of 1 s. Error bars represent the standard deviation of the individual measurements within the 20 s measurement.
We can see that the obtained mean for each measured speed is accurate over the complete desired range, and that the standard deviation is also stable along the complete measurement. Thus, the implementation of the closed-loop working mode in the Ipec is mandatory, and it has been shown to enable our specified range of speeds.
• The axis encoder data does not suffer propagation or mechanical errors, since it is directly installed on the axis, and its step changes every 1.296 arcseconds (we consider 5.000 sinusoidal periods per revolution in the encoder, with a 50X internal interpolation, a 4X quadrature interpolation and no reduction stage)
• The worm encoder has a resolution of 0.18 arcseconds per step (we consider 5.000 sinusoidal periods per revolution in the encoder, with no internal interpolation, a 4X quadrature interpolation and one 360X worm/gear reduction stage)
• The worm gear affects the position of the worm encoder with a period of 1/360 of a turn of the main axis, corresponding to one turn of the worm. The amplitude of this sine wave periodic aberration is set to ±19.1 arcseconds
• Local aberrations of the worm gear and of the general transmission are modeled as brownian noise. Each step is affected by a random error between -0.2 and +0.2 arcseconds which gets propagated to the next step
• We also include two mechanical eigenfrequencies affecting the worm encoder position, as experimentally measured on the same encoder. These two eigenfrequencies have respective periods of 3.0 and 8.2 arcseconds and amplitudes of 0.2 arcseconds
• Robustness of the algorithm can be tested against a step of 4.0 arcseconds affecting only the axis encoder, meaning the maximum step jump acceptable to the algorithms will be that value
• Robustness of the algorithm can also be tested against a simulation of a wind burst, consisting of an exponentially damped oscillation with an amplitude of 10.0 arcseconds and a 500 arcseconds pseudo-period (a simplified sketch of this simulated encoder signal is given below)
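The numpy sketch below is a simplified version of the simulation described above; the phases, the random generator and the uniform step error are assumptions made only for illustration, and the step and wind-burst perturbations are omitted for brevity.

# Simplified simulation of the two encoder readings (assumed phases and noise model).
import numpy as np

AXIS_STEP = 1.296            # arcsec per axis-encoder step
WORM_STEP = 0.18             # arcsec per worm-encoder step
WORM_PERIOD = 3600.0         # arcsec of axis motion per worm revolution (1/360 turn)

def simulate_encoders(true_positions, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    axis_reading = np.floor(true_positions / AXIS_STEP) * AXIS_STEP
    periodic = 19.1 * np.sin(2 * np.pi * true_positions / WORM_PERIOD)
    eigen = (0.2 * np.sin(2 * np.pi * true_positions / 3.0)
             + 0.2 * np.sin(2 * np.pi * true_positions / 8.2))
    brownian = np.cumsum(rng.uniform(-0.2, 0.2, true_positions.size))
    worm_reading = np.floor((true_positions + periodic + eigen + brownian)
                            / WORM_STEP) * WORM_STEP
    return axis_reading, worm_reading

positions = np.linspace(0.0, 9000.0, 5000)
axis, worm = simulate_encoders(positions)
print(axis[:3], worm[:3])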
Fig.4.1.7 shows a close-up view of the response of both encoders as a function of the real position of the mount on the sky over a movement of 30 arcseconds. By comparing the relative positions given by the worm encoder and the axis encoder with the real position, we can clearly see the effect of the resolution of both encoders and the high frequency aberration introduced by the worm encoder.
In order to see the effect of the main worm periodic aberration, we have to refer to Fig.4.1.8. This plot shows the positioning error of the worm encoder and of the axis encoder over a movement of 9000 arcseconds. In this plot we can clearly see the large periodic error due to the worm effect. The relatively lower resolution of the axis encoder is expressed as a thicker zero-mean uncertainty around the exact position.
Figure 4.1.8: Positioning error of the worm encoder and of the axis encoder over a movement of 9000 arcseconds.
• The fused position and the position of the encoders are initialized with the latest known fused position
• At each sampling time we measure the position in arcseconds given by both encoders
• If the position of the axis encoder has changed since the last measurement, the fused position is set to the axis position
• Otherwise, in case the axis encoder position has not changed since the last measurement, the fused position is set to the last known axis position plus a correction corresponding to the movement measured by the worm encoder since the last change in position of the axis encoder (a minimal sketch of this fusion rule is given after this list)
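A minimal sketch of the fusion rule above, with assumed variable names, could look as follows:

# Minimal sketch of the direct-interpolation fusion rule (assumed naming).
class DirectFusion:
    def __init__(self, initial_position):
        self.fused = initial_position
        self.last_axis = None
        self.worm_at_axis_change = None

    def update(self, axis_position, worm_position):
        if axis_position != self.last_axis:
            # The coarse but accurate axis encoder moved: trust it directly.
            self.last_axis = axis_position
            self.worm_at_axis_change = worm_position
            self.fused = axis_position
        else:
            # Between axis steps, add the fine worm-encoder movement measured
            # since the last axis-encoder change.
            self.fused = self.last_axis + (worm_position - self.worm_at_axis_change)
        return self.fused

fusion = DirectFusion(initial_position=0.0)
for axis, worm in [(0.0, 0.00), (0.0, 0.54), (0.0, 1.08), (1.296, 1.35)]:
    print(fusion.update(axis, worm))   # 0.0, 0.54, 1.08, 1.296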
This simple algorithm has been shown to be extremely robust, since it completely filters out most of the long-period aberration of the worm gear and significantly increases the final resolution of the system compared to the value given by the data from the axis encoder alone. However, this approach will not reduce the high-frequency noise included in the worm encoder due to its resolution, nor the aberrations introduced by the gearing system with periods on the order of magnitude of the resolution of the axis encoder. The effect of the direct interpolation method can be seen in Fig.4.1.9, where the result can be compared to the axis encoder and worm encoder values.
Figure 4.1.9: Response of direct fusion interpolation and effect of IIR filtering on the worm data
[Figure 4.1.10 (log-log plot): amplitude as a function of frequency (1/arcsec) for the filter response, the worm data, and the filtered worm data.]
Figure 4.1.10: Direct fusion response and effect of IIR filtering on the worm data
One possible solution to take this effect into account is to add an additional IIR filter on the data from the axis encoder. However, the low cut-off frequency required introduces a delay in the measurement of the position which cannot be calibrated, and which deteriorates the true position measurement obtained. Another potential solution to filter out this effect would be to use a lookup table of offsets for each of the 200 steps of the sinusoid, but this becomes technically impossible since the phase of the sinusoidal signal of this encoder may be lost when the system is shut down.
Such an effect has been known for some years on tape encoders, as we can see in [104], and also in the case of analog encoders [105]. In both cases, the measurement errors induced were compensated using an Extended Kalman Filter as defined in [106][107]. We propose to implement a similar technique in order to correct for the periodic interpolation error of the quadrature encoder
we are using. As a result, the best solution is to evaluate dynamically the phase and amplitude of the periodic encoder error according to the data from both encoders.
The goal of a Kalman filter is to estimate the real state of a system by statistically evaluating which part of the measured signal is noise and which part is the real state. In practice, we use a priori information on the system state in order to predict a measurement vector according to the previous state, and we evaluate the noise of the obtained measurement according to the deviation of the measurement from the predicted one.
The output estimated state is in fact a weighting of the predicted measurement and the measured one. The weighting is expressed in a matrix called the Kalman gain. In other words, the closer the predicted measurement is to the measurement itself, the more the estimated state will take the measurement into account, and vice-versa.
In our situation, we measure the position from two encoders about which we know the following information:
• The worm encoder has a high resolution but is affected by the periodic error and noise of the worm/gear system, which is hardly predictable.
• The axis encoder has a lower resolution and is not affected by the worm/gear noise and periodic errors. However, it is affected by an interpolation error with a period of 200 steps and an amplitude of roughly ±3 arcseconds, whose phase is unknown.
Our goal is to obtain a proper estimation of the real position of the axes by fusing these two measurements and removing their respective noises, using a priori knowledge of the system such as the speed and the amplitude of the axis encoder interpolation error. The phase of this error is then constantly estimated and re-evaluated as a state parameter of the system.
This said, we propose to fuse the data of both encoders using a non-linear, or Extended, Kalman Filter (EKF) defined as follows. Let us assume that the state vector $x_k$ of the system at the sampling time k is defined by the following four states, as depicted in equation 4.1.15:
• $p_a$ is the position of the axis
• $\dot{p}_w$ is the speed of the axis given by the first derivative of the worm encoder
• $a_1$ and $a_2$ are the linearized amplitude parameters of the periodic interpolation error of the axis encoder, defined below
In this case, the periodic error e of the axis encoder may be expressed as a single-harmonic sinusoidal function of angular frequency ω corresponding to a period of 200 interpolated steps of the axis encoder, as we can see in equation 4.1.10. To improve the linearity of the system, we prefer to use the linearized version of the function as expressed in equations 4.1.11 to 4.1.14.
$e = a\cos(\omega p_a + \phi)$  (4.1.10)
$e = a\left(\cos(\omega p_a)\cos(\phi) - \sin(\omega p_a)\sin(\phi)\right)$  (4.1.11)
$a_1 = a\cos(\phi)$  (4.1.12)
$a_2 = -a\sin(\phi)$  (4.1.13)
$e = a_1\cos(\omega p_a) + a_2\sin(\omega p_a)$  (4.1.14)
$x_k = \begin{bmatrix} p_a \\ \dot{p}_w \\ a_1 \\ a_2 \end{bmatrix}$  (4.1.15)
The state transition model that allows passing from the state at sampling time k to the state at sampling time k+1 can then be expressed as in equation 4.1.16:
$x_{k+1} = f_k(x_k, u_k) + w_k$  (4.1.16)
where $u_k$ is the input from the actuator and $w_k$ is the estimated noise introduced in the transition. In our case, f is linear and can be expressed as in equation 4.1.17.
$f_k(x_k, u_k) = \begin{bmatrix} 1 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k$  (4.1.17)
While the standard Kalman filter uses a linear state transition matrix, the extended version allows the use of a non-linear function for the state transition. The matrix used for the state transition is then the Jacobian matrix, or matrix of partial derivatives, constructed from the non-linear transition functions.
Since the state transition function $f_k(x_k, u_k)$ is time invariant, its Jacobian is equal to the state transition matrix, as seen in equation 4.1.18.
$J_{F_k} = \begin{bmatrix} 1 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (4.1.18)
The state measurement function $h_k(x_k)$ gives the relationship between the obtained measurement vector $z_k$ and the actual state of the system $x_k$, as expressed in equation 4.1.19. In our case, the measurement vector is composed of two values, given by our two sensors: the position $m_1$ of the axis encoder and the speed $v$ given by the first derivative of the worm encoder. We can then compute the state measurement function, in which we take into account the non-linearity of the axis encoder, according to equation 4.1.20.
$z_k = h_k(x_k)$  (4.1.19)
$h_k(x_k) = \begin{bmatrix} m_1 \\ v \end{bmatrix} = \begin{bmatrix} p_{a_k} + a_{1_k}\cos(\omega p_{a_k}) + a_{2_k}\sin(\omega p_{a_k}) \\ \dot{p}_{w_k} \end{bmatrix}$  (4.1.20)
The Jacobian $J_{H_k}$ of the measurement function $h_k(x_k)$ can then be expressed as shown in equations 4.1.21 and 4.1.22.
$J_{H_k} = \begin{bmatrix} \frac{\partial m_1}{\partial p_a} & \frac{\partial m_1}{\partial \dot{p}_w} & \frac{\partial m_1}{\partial a_1} & \frac{\partial m_1}{\partial a_2} \\ \frac{\partial v}{\partial p_a} & \frac{\partial v}{\partial \dot{p}_w} & \frac{\partial v}{\partial a_1} & \frac{\partial v}{\partial a_2} \end{bmatrix}$  (4.1.21)
$J_{H_k} = \begin{bmatrix} 1 - \omega a_{1_k}\sin(\omega p_{a_k}) + \omega a_{2_k}\cos(\omega p_{a_k}) & 0 & \cos(\omega p_{a_k}) & \sin(\omega p_{a_k}) \\ 0 & 1 & 0 & 0 \end{bmatrix}$  (4.1.22)
Once the measurement function is known, the Kalman filter needs a priori knowledge of three covariance matrices.
The process covariance matrix This matrix expresses how the target state can evolve. Since each state parameter is independent from the others, it is initialized empirically as a diagonal matrix. The diagonal of this matrix is obtained by setting an initial guess of the squared standard deviation of the movement of each state parameter.
The process noise covariance matrix This matrix expresses how the state can deviate from its target when affected by movement noise. Since each state parameter is independent from the others, it is initialized empirically as a diagonal matrix. The diagonal of this matrix is obtained by setting an initial guess of the squared standard deviation of the white noise of each state parameter.
The measurement noise covariance matrix This matrix expresses how the measurement is affected by noise. Since each measured parameter is independent from the others, it is initialized empirically as a diagonal matrix. The diagonal of this matrix is obtained by setting an initial guess of the squared standard deviation of the noise of each sensor.
Let us consider as initialization parameters the initial process covariance matrix $P_{0|0}$, the process noise covariance matrix Q and the measurement noise covariance matrix R as diagonal matrices. The amplitude of the diagonal values is initialized as the squared value of the estimated process standard deviation and noise standard deviation. This leads to the matrices expressed in equations 4.1.23, 4.1.24 and 4.1.25.
$P_{0|0} = \mathrm{diag}(0.1,\; 0.1,\; 0.1,\; 0.1)$  (4.1.23)
$Q = \mathrm{diag}(10^{-5},\; 10^{-5},\; 10^{-5},\; 10^{-5})$  (4.1.24)
$R = \mathrm{diag}(0.01,\; 0.01,\; 0.01,\; 0.01)$  (4.1.25)
Once the model is properly defined, we can apply it at each measurement step in order to obtain the estimated prediction of the next state of the system $\hat{x}_{k|k-1}$ together with the estimated prediction of the covariance matrix $P_{k|k-1}$. The prediction equations for the next state are expressed in equations 4.1.26 and 4.1.27:
$\hat{x}_{k|k-1} = f_k(\hat{x}_{k-1|k-1}, u_k)$  (4.1.26)
$P_{k|k-1} = J_{F_k} P_{k-1|k-1} J_{F_k}^{\top} + Q$  (4.1.27)
The filtered estimated state is then computed at each step as the sum of the predicted state $\hat{x}_{k|k-1}$ and an influence of the latest measurement innovation $\tilde{y}_k$ expressed in equation 4.1.28. As we can see in equation 4.1.29, the latest measurement innovation is weighted by a gain matrix $K_k$ recomputed at each step according to the last measurement and updated according to the new measurements. The estimation of the position is then obtained by solving the Riccati equation.
$\tilde{y}_k = z_k - h(\hat{x}_{k|k-1})$  (4.1.28)
$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \tilde{y}_k$  (4.1.29)
The gain matrix is in fact an expression of the reliability of the latest measurement itself. In simple words, the closer the measurement is to its predicted value, the larger its relative influence in the estimation will be. The Kalman gain $K_k$, together with the estimated covariance matrix, is obtained by solving the Riccati equation, whose solution is expressed in equations 4.1.30 to 4.1.32; a sketch of the resulting filter loop is given after these equations.
$S_k = J_{H_k} P_{k|k-1} J_{H_k}^{\top} + R_k$  (4.1.30)
$K_k = P_{k|k-1} J_{H_k}^{\top} S_k^{-1}$  (4.1.31)
$P_{k|k} = (I - K_k J_{H_k}) P_{k|k-1}$  (4.1.32)
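Putting the prediction and update steps together, the complete filter loop can be sketched in Python/NumPy as follows. This is a minimal sketch under the definitions above: the sampling time, the initialization of the state, and the use of a 2x2 measurement noise matrix for the two-element measurement vector are our assumptions, not the embedded implementation.

```python
import numpy as np

def ekf_fuse(measurements, dt=0.1, omega=2 * np.pi / 259.2):
    """Fuse axis-encoder positions and worm-encoder speeds with the EKF defined above.

    measurements: sequence of (m1, v) pairs, m1 = axis-encoder position (arcsec),
    v = speed from the worm-encoder derivative (arcsec/s). Returns the filtered
    position estimate p_a at every step.
    """
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])  # [pa, pw_dot, a1, a2]
    P = np.diag([0.1] * 4)          # initial covariance, eq. (4.1.23)
    Q = np.diag([1e-5] * 4)         # process noise, eq. (4.1.24)
    R = np.diag([0.01, 0.01])       # measurement noise for the two sensors
    F = np.array([[1.0, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])  # eq. (4.1.17)
    estimates = []
    for m1, v in measurements:
        # Prediction, eqs. (4.1.26)-(4.1.27)
        x = F @ x
        P = F @ P @ F.T + Q
        pa, _, a1, a2 = x
        # Measurement prediction and Jacobian, eqs. (4.1.20) and (4.1.22)
        h = np.array([pa + a1 * np.cos(omega * pa) + a2 * np.sin(omega * pa), x[1]])
        H = np.array([[1 - omega * a1 * np.sin(omega * pa) + omega * a2 * np.cos(omega * pa),
                       0.0, np.cos(omega * pa), np.sin(omega * pa)],
                      [0.0, 1.0, 0.0, 0.0]])
        # Innovation and update, eqs. (4.1.28)-(4.1.32)
        y = np.array([m1, v]) - h
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```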
The previously defined filter is then applied to simulated data representing the output of the axis encoder when the worm is moved at a constant speed. The simulations take into account the periodic aberration of the worm gear coupling, with a period of 7200 arcseconds, which corresponds to a system with 180 teeth similar to our available setup, and with an amplitude of 15 arcseconds which has been experimentally measured on the worm/gear couple.
The aberration of the axis encoder is also taken into account, with a period of 259.2 arcseconds and an amplitude of 3.5 arcseconds. Finally, we also added to the system a Brownian noise of 0.2 arcseconds RMS amplitude, which simulates the local friction aberration of the gearing and mechanics of the setup. The simulated result is shown in Fig.4.1.11, where we plot the difference between the position in the axis encoder and the position of the worm encoder when the worm moves at constant speed. The telescope encoder position is displayed following the value given by
the axis encoder in green, and the value obtained after Kalman filtering is displayed in black. We can clearly see the effect of the filter, which successfully removes the aberration due to the encoder periodic error without affecting other kinds of fast variations, which remain clearly identifiable in the filtered data.
Figure 4.1.11: Response of the data fusion using an extended Kalman filter
Testing of the proposed data fusion algorithm based on Kalman filtering under real conditions was carried out using a commercial amateur mount, an Orion AzEq-G, and its standard tracking controller. A TTL quadrature encoder Gurley R158S with 50X interpolation was adapted and installed on the axis of the mount using a 3D-printed part designed ad hoc to replace the polar scope, as can be seen in Fig.4.1.12.
The encoder output consists of two lines giving a square signal of 250,000 periods per turn with a π/4 phase shift. After quadrature interpolation, the described setup gave a resolution of 1.296 arcseconds per step. The signal is affected by a periodic interpolation error of ±3.5 arcseconds in amplitude and 200 steps of period, most likely due to an interpolation done internally by the encoder. The Kalman filtering will then help us remove this interpolation error in order to obtain a precise measurement of the real position of the telescope axis.
The previously defined Kalman filtering algorithm was coded and embedded in an Arduino board, in a script reading the encoder position with a time sampling of 1 ms. A Python script running on a Raspberry Pi computer performed a readout of the position at 10 Hz, while the mount was tracking at a theoretical sidereal constant speed of 15.04 arcsec/s for 180 s, which is the time corresponding to one complete turn of the worm for this setup. A sketch of such a readout script is given below; the results obtained are presented in Fig.4.1.13.
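A minimal version of such a readout script could look as follows; the serial port name, baud rate and the one-character position request are assumptions made for illustration, since the actual protocol between the Arduino and the Raspberry Pi is not reproduced here.

```python
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=0.05)  # port name is an assumption
samples = []
t0 = time.time()
while time.time() - t0 < 180.0:        # one complete worm turn at sidereal speed
    ser.write(b"P\n")                  # hypothetical 'send position' request
    line = ser.readline().decode(errors="ignore").strip()
    if line:
        samples.append((time.time() - t0, float(line)))  # (elapsed time, position in arcsec)
    time.sleep(0.1)                    # 10 Hz readout
ser.close()
```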
It may be appreciated that, even if the encoder aberration is not exactly sinusoidal, a second-order harmonic remains present in the processed signal. However, most of the effects have been removed by the Kalman filter whilst keeping a fast response to high-speed positioning variations. The approach is thus considered adequate. A similar measurement over a longer period, corresponding to 6.5 turns of the worm gear, is shown in Fig.4.1.14, which also shows a measurement of the mechanical periodic error of the mount.
As a result, we have shown that Kalman filtering provides a precise method to estimate the exact position of the mount, taking the best advantage of the high precision of the on-axis encoder, which is not affected by mechanical aberrations, and of the higher resolution of the worm encoder. Once we can properly measure the position of the axis, the next step is to close the control loop in order to keep the system moving within the specifications previously exposed.
Figure 4.1.12: Picture of the setup and the encoder installation on the main axis of the commercial
mount
Our first attempt to implement a closed-loop control in the tracking system was by adding, to the open-loop measurement setup described in the last section, a relay output controlling the guide port input of the commercial mount, as depicted in Fig.4.1.15. In this setup, the position of the axis of the mount is read by a Gurley encoder placed on the right ascension axis of the mount using an Arduino board in an infinite loop. The corresponding positions are then retrieved using a Raspberry Pi computer equipped with a Linux Debian operating system and Python scripts reading the data from the Arduino over USB.
The guiding port strategy implemented in commercial mounts is standardized to the following widely used protocol, usually referred to as the ST-4 port:
• When conductivity is established between pins 2 and 3 of the mount guide port, the integrated mount controller increases the tracking speed on the right ascension axis by 1.5 arcsec/s.
• When conductivity is established between pins 2 and 6 of the mount guide port, the integrated mount controller decreases the tracking speed on the right ascension axis by 1.5 arcsec/s.
A standard ST-4 port and its pinout are shown in Fig.4.1.16. Conductivity between the pins can be established using relays, for instance.
As a consequence, the control loop implemented on this test setup is a simple proportional position control loop in the Arduino C++ code, with a time sampling Ts = 10 ms, whose flow chart is represented in Fig.4.1.17. This is a very simple control strategy, since it only consists of relay activation/deactivation when we detect that the mount is delayed with respect to, or ahead of, the theoretical position; the logic of one iteration is sketched below. The speed control, being handled by the integrated commercial controller of the mount, does not need to be taken into account at the Arduino level in this case.
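The logic of one iteration of this loop can be sketched in Python as follows; the actual implementation runs as Arduino C++ code, and the deadband value, relay names and polarity convention are illustrative assumptions.

```python
def st4_proportional_step(target_pos, measured_pos, deadband=0.3):
    """Decide which ST-4 relay to energize for one 10 ms control iteration.

    Returns 'RA+' to increase the tracking speed, 'RA-' to decrease it,
    or None to release both relays.
    """
    error = target_pos - measured_pos
    if error > deadband:      # the mount lags the theoretical position: speed up
        return "RA+"
    if error < -deadband:     # the mount is ahead of the theoretical position: slow down
        return "RA-"
    return None               # within tolerance: no correction
```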
Figure 4.1.13: Response of the data fusion on real data using an extended Kalman Filter along a
complete turn of the worm encoder
Figure 4.1.14: Response of the data fusion on real data using an extended Kalman Filter along 6.5
turns of the worm encoder.
After implementing the described setup on the commercial prototype presented previously, we recorded the behavior of the telescope position using the Raspberry Pi computer while the telescope was tracking. The closed-loop position control was activated after 1150 s of recording. We can see in the results in Fig.4.1.18 that, at this precise moment, the variable drift completely disappears and the mount starts tracking its target within ±0.3 arcseconds. This value is two orders of magnitude below the previous ±30 arcseconds positioning error due to mechanical aberrations in the gearing of the mount.
In order to test the robustness of the system, we left it activated and generated a perturbation by abruptly displacing the telescope tube mechanically by 5 mm and releasing it after 1 s. This action is stronger than any actual external perturbation expected on the mount, such as e.g. a wind burst which may happen under real operating conditions (we are omitting strong earthquakes, which will not be treated). The obtained result is shown in Fig.4.1.19. In this figure, the mechanical impulse applied to the mount at the instant t = 270 s of the record is clearly visible. We also observe that, even under such a level of perturbation (a pulse inducing an error of almost 20 arcsec), the system does not lose stability after the perturbation and recovers the tracking accuracy within less than 5 s, which is acceptable considering the strength of the artificially induced perturbation applied.
In conclusion, the simple and very economical setup proposed completely fulfills the requirements for astronomical object tracking when adapted to a commercial setup. Further, it could be packaged as a standalone precision tracking system. On the other hand, other conventional commercial telescope control systems have many additional precision restrictions. These restrictions are not immediately evident, but are usually consequences of the communication protocol being used and its software implementation:
• The commonly used LX200 standard returns or accepts truncated values with a precision of only 1 arcsecond in declination and 1 s of right ascension, corresponding to as much as 15 arcseconds.
• These systems do not accept non-sidereal speed inputs and so cannot track using real-time pointing data models in order to reduce the tracking drift due to non-perpendicularities or flexure effects.
• Internally, these systems cannot generate pointing models with more than 3 to 6 stars, which is indeed another strong limitation, since it is not enough to take into account extended parameters. Typically, the first 6 parameters represent the polar alignment of the mount and
[Flow chart content of Fig.4.1.17 (residue): test whether the target speed has changed; timers t0 = now() and t = now(); initial target position = current position.]
Figure 4.1.17: Flow chart of the proportional position control loop proposed.
Figure 4.1.18: Data fusion response on real data using an extended Kalman filter and a closed-loop position control. The loop is closed after 1150 s of data recording, and its effects are easily visible on the error residuals.
the non-perpendicularities, but extended parameters such as tube or fork flexures do have an important effect on the pointing residuals.
As a consequence, the setup finally implemented in our work slightly differs from the one we applied on a commercial mount. The motor control is handled, as described previously, by a TCP industrial controller taking care of an internal PID control loop of the DC motor and its encoder. The position of the motor encoder and the speed of the motor can be read through the TCP protocol from a Raspberry Pi, while the position of the axis encoder is still obtained via an Arduino board. The output sent to the TCP motor controller corresponds directly to the computed error of the control loop multiplied by a proportional control gain Kp.
The main difference between this setup and a commercial one is that, on a commercial mount, the tracking speed is applied blindly in an open loop and is raised or lowered by 1.5 arcsec/s while one of the guide inputs is activated. In our case, the system directly measures the positioning error and applies the correction speed inside the control loop, which increases precision and responsiveness.
In this configuration the Kalman filtering is implemented inside the Raspberry Pi, but since the parameters and speeds of the control loop are the same, the results and performances are equivalent. The main difference is driven by the fact that this last configuration, presented in Fig.4.1.20, does not have any major precision restriction in terms of protocol and speed when it gets integrated in the complete Telescope Control Software.
Figure 4.1.19: Data fusion response on real data using an extended Kalman filter. This plot shows the effect of a manual effort on the telescope tube while tracking, to simulate the action of external perturbations such as a wind burst. The effort is generated at the instant 270 s of the recording, and we can see that the system recovers in less than 5 s.
4.2 Advanced Telescope Control System
Accurate pointing and guiding of a telescope is a task of utmost importance for any obser-
vatory. Achieving such a goal entails a transformation from the mean coordinates and velocities
of an astronomical target at a given epoch and equinox to a coordinate system attached to the
mechanical axes of a given telescope. Most of these transformations can be accurately carried out
based on astronomical and geographical data, and geometric considerations.
If a telescope could be made that was a perfectly aligned rigid body, ultimately the transforma-
tion required for pointing the telescope to a given astronomical source would entail transforming
the right ascension, declination and proper motion of a given source into the corresponding angles
of the telescope mechanical axes, either an altitude-azimuth or hour angle-declination pair, at the
time of the observations. Alas, some level of misalignment and flexure is unavoidable. Therefore,
an additional transformation, done with a so-called pointing model [12], is made to go from the topocentric hour angle and altitude predicted from the astronomical coordinates to the actual positions at which the telescope axes angles have to be set. The pointing model corrections will depend on where the telescope points to. Such corrections have been implemented either as lookup tables that are interpolated, or as analytic models depending on a number of adjustable parameters, which are determined by pointing at a number of stars evenly distributed in the sky and minimizing the difference between the catalog stellar coordinates and those predicted by the model.
Once accurate pointing is achieved, the telescope needs to track objects accurately across the night sky. In some sense, this is not a different problem from pointing, as tracking can be thought of as just a series of movements determined by the evolution of pointing with time. That said, any remaining errors in the pointing model will as a rule accumulate as we track, resulting in a drift of the astronomical sources on the detector. Lacking a perfect mechanical system, the solution to this drift is to provide a guiding system, with which the centroid of a bright source is monitored at a relatively high frequency (typically in the range 1–10 Hz) and any drifts of this source are used to correct the pointing of the telescope via a closed control loop.
The positioning control loop is a Proportional Integral Derivative (PID) controller, which we present in Fig.4.2.1. We can see that the controller is fed with a model and features a feedforward predictive capability, as explained in [108]. In practice, the input of the system is the target position of the axis. From this position, the model computes a target position achievable by the system depending on a set of parameters representing the behaviour of the axis. This set of parameters is composed of the previous positions and speeds, the maximum speed, and the maximum acceleration and deceleration of the device.
Additionally, the feedforward facility uses this same set of parameters and, using the previous target position, computes an achievable target speed of the system, which is added to the output of the PID controller. This sum is then fed as an input speed to the axis. The presence of the model and of the feedforward loop minimizes the input error rm of the PID. This minimizes the integrated error, which avoids oscillations and overshoots, while remaining very responsive in order
to minimize the slew time between one target and the next. In practice, such an approach is much more efficient than an anti-windup strategy, which only consists in saturating the integrated error. A minimal sketch of such a controller is given after the block diagram below.
[Figure 4.2.1 (block diagram): the reference r passes through the Model to give r_m; the error e feeds the Controller, whose output u_c is summed with the feedforward term u_f to give the system input u; disturbances act on the System, whose output y is measured as y_m and fed back.]
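The following Python sketch illustrates such a loop. The gains, the speed limit and the simple one-sample reachability model stand in for the actual axis model and are illustrative assumptions only.

```python
class FeedforwardPID:
    """Position loop of Fig. 4.2.1: PID output summed with a model-based feedforward speed."""

    def __init__(self, kp, ki, kd, vmax, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.vmax, self.dt = vmax, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.prev_target = None

    def step(self, target, measured):
        if self.prev_target is None:
            self.prev_target = measured
        # Model: clamp the raw target to a position reachable within one sample (r -> r_m)
        max_step = self.vmax * self.dt
        reachable = self.prev_target + max(-max_step, min(max_step, target - self.prev_target))
        # Feedforward: achievable target speed derived from successive model outputs (u_f)
        u_ff = (reachable - self.prev_target) / self.dt
        self.prev_target = reachable
        # PID on the remaining (small) error e = r_m - y
        error = reachable - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u_pid = self.kp * error + self.ki * self.integral + self.kd * derivative
        return u_pid + u_ff    # commanded axis speed u
```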
When considering instruments mounted on large professional telescopes, guiding is usually done using stars very close to the instrument's field of view, via imagers that gather light on the outskirts of the focal plane. Even in this most favorable case there will be differences between the drifts of the guide stars and the center of the focal plane, and more than one guide star is required to avoid, for example, field rotation [12]. A minor complication with this setup is to find a proper guide star, bright enough to allow a proper centroid computation at short exposures without saturating the detector, and which must also be close to the target. Since the guiding field is usually a few arcminutes across, a proper guide star may not be found easily.
Thus, the most usual guiding setup for small telescopes is to have a second, smaller telescope attached to the main one. With a small telescope with a wider field, finding appropriate guide stars is not an issue in this setup, but the axis alignment and flexures of the small telescope need not be identical to those of the bigger telescope. Therefore, discrepancies between the pointing corrections needed for the guide star and the science targets can be significantly larger than when having a guider in the same focal plane as the instrument, usually resulting in noticeable drifts during long exposures. We note that guiding can also be done using the science frames, in the sense that we can keep a series of exposures aligned by computing the corrections needed to bring the image back to a given reference image of the field, a scheme successfully implemented in the DONUTS software package [109]. While this algorithm works well in the short-exposure regime or with a very robust mechanical setup, it fundamentally cannot correct for drift within the exposure, so it is in general not suited for long exposures. Thus, the fundamental problem when using a second guiding telescope is that the pointing model of the main telescope will not be accurate enough for the second guider telescope, which is the one determining the correction.
In this work we describe and test on sky the use of a double pointing model that allows accurate tracking when using a second guiding telescope, by accounting for the differences in the alignment and flexures of the axes between the main and guider telescopes. The work is structured as follows. In § 4.2.1 we describe the double pointing model as states in a telescope pointing machine; in § 4.2.2 we describe the implementation of the presented double pointing model in two real-world applications, namely blind acquisition and autoguiding; in § 4.2.3 we present the on-sky performance of the autoguiding application; and finally in § 4.2.4 we conclude.
corresponding to state vectors, which are 6-element vectors specifying the position and velocity of a given target. All the knowledge of the TPM is encoded in a table indexed by the initial and final desired states. Each entry in the table, labeled by one of the states defined in the machine, gives an operation connecting to the nearest state and the label of the resulting new state; by following this procedure iteratively until the desired state is reached, any operation can be efficiently encoded (a minimal sketch of this mechanism is given below). This scheme has many advantages, but the most important for our purposes is that it is very easy to add new states: one just needs to define a new state and provide the transformation (and its inverse) to the nearest state only, not to all existing states.
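The following minimal Python sketch illustrates this table-driven mechanism; the state labels, the specific chain of transitions and the placeholder operations are illustrative only and do not reproduce the actual TPM library content.

```python
def precess(v):          return v   # placeholder operations on a 6-element state vector
def apply_aberration(v): return v
def to_observed(v):      return v

# table[(current_state, desired_state)] -> (operation towards the nearest state, new state)
TABLE = {
    ("S02", "S20"): (precess,          "S06"),
    ("S06", "S20"): (apply_aberration, "S16"),
    ("S16", "S20"): (to_observed,      "S20"),
}

def transform(state_vector, current, desired):
    """Chain operations through the table until the desired state is reached."""
    while current != desired:
        operation, current = TABLE[(current, desired)]
        state_vector = operation(state_vector)
    return state_vector

# e.g. observed = transform([0.0] * 6, "S02", "S20")
```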
In what follows we will concentrate on the case of an equatorial mount, but everything that follows is equally applicable to an alt-az system (we will indicate the required modifications below). The first step when pointing is a series of transformations required to go from heliocentric mean right ascension α and declination δ to topocentric observed h and δ at a given observatory. The relevant states (denoted by the letter S followed by a number) and transitions (denoted by T followed by a number) in the TPM library for the equatorial mount are illustrated in Figure 4.2.2. The arrows in the transitions indicate the direction of the direct transformation as defined in the TPM, while going in the opposite direction requires the inverse transformation. Once we have the topocentric observed coordinates (h, δ), as explained in § ?? we have to convert to the hour angle and declination of each of the mechanical axes, (h_axis, δ_axis), using a pointing model. In the context of TPM, this can be implemented by just adding an additional state. Given that we have both a guiding and a main telescope, we propose to add two new states, each associated with its own pointing model and corresponding to each of the telescopes, as illustrated in Figure 4.2.3.
The basic idea we propose in this work is to use the new states, which for an equatorial mount are connected to a single state S20 in TPM (topocentric observed h and δ), in order to properly propagate guiding information to the main instrument. We note that there is one possible final state for each pointing model. Depending on which instrument in the system we want to use for tracking and which for pointing, we only have to select which is the desired final state. The telescope control system will then use the target hour angle and declination (h_axis,a, δ_axis,a) of state S23a if we want to track and point for the main instrument, or those of state S23b if we want the telescope to aim at a particular position for the guider telescope. The target hour angle and declination of the desired state will be recomputed at each iteration of the control loop, recalculating in the process the new target speed for each of these axes.
For the case of an alt-az mount, the difference is that the new states are incorporated differently into the TPM scheme. We show in Figure 4.2.4 how the pointing model states would be incorporated in this case. Other than this difference, all that follows applies just by considering alt-az pairs of angles for the mounts instead of the equatorial ones.
[Figure 4.2.2 (diagram content): S02 Heliocentric Mean FK5, any equinox → T02 Precess to J2000.0 → S06 Heliocentric Mean FK5 J2000.0 → S13 → T09 Aberration → S14 → T11 Nutation → S16 Topocentric Apparent FK5, current equinox → T12 Earth's Rotation → S17 Topocentric Apparent (HA, Dec) → T14 Refraction → S19 Topocentric Observed (Az, El).]
Figure 4.2.2: Most useful pointing machine states and transitions according to the TPM definitions
[1].
[Figure 4.2.3 (diagram content): S20 - Topocentric Observed (HA, Dec), connected to the new states S23a and S23b.]
Figure 4.2.3: New states S23a and S23b added to the TPM (in dark gray) and their associated
transitions for the case of an equatorial mount.
[Figure 4.2.4 (diagram content): S19 - Topocentric Observed (Az, El), connected to the new states S23a and S23b.]
Figure 4.2.4: New states S23a and S23b added to the telescope pointing machine (in dark gray)
and their associated transitions for the case of an alt-az mount.
following sequence of 12 steps using the secondary pointing model and its wide field imager in
order to blindly send the target very close to the field center or the spectrograph slit of the main
instrument:
1. Set the state S06 of a TPM state machine, which we denote SM1, as the target state of the telescope control loop. At every iteration of the loop, the control algorithm will transform the target state S06 to state S23a. The output position of S23a is then used as the current floating target position of the telescope's drives. In practice, this means that we will point and track the telescope according to the main imager.
2. Take an image I1 with the guiding (wide field) telescope.
3. Store the current celestial coordinates of the state S23a of the machine SM1 in a variable P1, as expressed in equation (4.2.1), below this list of actions.
4. Initialize a new state machine SM2 with the coordinates of S23a, i.e. P1 .
5. Transform the state of SM2 from the state S23a to the state S23b representing the guider telescope and store the position of the state S23b in a variable P2, as expressed in equation (4.2.2).
6. Transform from the state S23b to the heliocentric mean FK5 J2000 coordinates represented
by state S06, in order to obtain the expected celestial position P3 of the guider, as expressed
in equation (4.2.3).
7. Build the expected world coordinate system (WCS) of the guider imager according to the
coordinates of P3 .
8. Run astrometry software such as astrometry.net [110] or similar on the guider image in order to obtain the actual (real) coordinates of the guider imager, which we denote as P4, as expressed in equation (4.2.4).
9. Set the position vector of the machine SM2 , which is in state S06, to P4 .
10. Transform the state of SM2 back to S23b in order to obtain the axis coordinates P5 of the
mount that correspond to position P4 in the guider image, according to equation (4.2.5).
11. Compute the pointing offset dP, the difference between the expected and real axis coordinates of the guider, according to equation (4.2.6).
12. Apply the offset to the target coordinates P1 in order to get the final, corrected coordinates P6 of the state machine SM1, according to equation (4.2.7). A code sketch of this whole procedure is given after the equations below.
$P_{1,S23a} = \begin{pmatrix} h_{\mathrm{current}} \\ \delta_{\mathrm{current}} \end{pmatrix}_{\mathrm{Imager}}$  (4.2.1)
$P_{2,S23b} = \begin{pmatrix} h_{\mathrm{current}} \\ \delta_{\mathrm{current}} \end{pmatrix}_{\mathrm{Guider}}$  (4.2.2)
$P_{3,S06} = \begin{pmatrix} \alpha_{\mathrm{expected}} \\ \delta_{\mathrm{expected}} \end{pmatrix}_{\mathrm{Guider}}$  (4.2.3)
$P_{4,S06} = \begin{pmatrix} \alpha_{\mathrm{real}} \\ \delta_{\mathrm{real}} \end{pmatrix}_{\mathrm{Guider}}$  (4.2.4)
$P_{5,S23b} = \begin{pmatrix} h_{\mathrm{real}} \\ \delta_{\mathrm{real}} \end{pmatrix}_{\mathrm{Guider}}$  (4.2.5)
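The sequence above can be condensed into the following Python sketch. The state-machine objects and their methods, the astrometric solver wrapper, and the sign conventions standing in for equations (4.2.6) and (4.2.7) are all assumptions made for illustration, not the actual implementation.

```python
import numpy as np

def blind_acquisition_correction(sm1, sm2, guider_image, solve_astrometry):
    """One pass of the blind acquisition procedure using the two pointing models."""
    P1 = np.asarray(sm1.position("S23a"))        # current axis coords for the main imager
    sm2.set_position("S23a", P1)                 # second machine initialized with P1
    P2 = np.asarray(sm2.transform_to("S23b"))    # expected guider axis coordinates
    P3 = np.asarray(sm2.transform_to("S06"))     # expected guider celestial coordinates
    P4 = np.asarray(solve_astrometry(guider_image, hint=P3))  # real guider celestial coords
    sm2.set_position("S06", P4)
    P5 = np.asarray(sm2.transform_to("S23b"))    # real guider axis coordinates
    dP = P5 - P2            # pointing offset of eq. (4.2.6); sign convention assumed
    P6 = P1 - dP            # corrected target of eq. (4.2.7); correction direction assumed
    return P6
```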
As a result, in a case like this, where for example we cannot have an astrometric reduction on the main instrument, the pointing error can be improved by explicitly accounting, via the two pointing models, for the differences between the guider and main telescopes. As long as the calibration data for both pointing models is taken simultaneously, the measuring points are equally distributed, and there are no erratic differential movements between both telescopes, the residuals of the pointing quality should be determined by the standard deviation of the difference of both pointing models.
4.2.2.2 Autoguiding
A similar approach may be used for autoguiding, which as stated in §1 can be seen as differential pointing. In what follows, we assume that the main instrument was properly recentered before the start of autoguiding. At the end of the procedure described in the last section, the imager field will be centered to a precision characterized by the typical difference in accuracy between the two pointing models. To exactly center the imager, or a target in a spectrograph slit, one can use several methods, for example centroiding on the target, which is by now very close to the center given that the former procedures have been properly performed. We denote the difference in position between the fine-tuned center and the center obtained using the method of the previous section as dP_init. Note that if we run the centering procedure of the previous section after fine-tuning the center, the procedure will tell us that we need to correct by this value to center (assuming an insignificant amount of time has elapsed since we ran the pointing procedure; otherwise the value will start evolving as the sky moves and we start sampling different regions in the pointing model). This just reflects that we are working at the precision limit of the pointing models, and the value will be of a magnitude typical of the pointing model errors.
Once we have determined dP_init, autoguiding is run in a continuous loop in which we successively repeat steps 1 to 11 of the list we used in the previous section for exact astrometric reduction. Step 12 is replaced by a similar operation, where the offset applied to the machine SM1 is defined by dP_guiding as expressed in equations (4.2.8) and (4.2.9). These two equations replace equations 4.2.6 and 4.2.7 above; the value of dP_S23b is given by equation 4.2.6.
The process is thus essentially the same as the one we used for the blind pointing procedure, but taking into consideration the initial offset dP_init. Performing the process in this way, the autoguider will guide on a drifting star in the guider telescope, and this drift will correspond to the evolving differences between the pointing models given by the instances of states S23a and S23b. That is, the commanded drift of the star in the guider is such that the star stays centered on the main imager.
4.2.4 Conclusions
We have proposed and implemented a new method using multiple pointing models, which is able to perform high-precision recentering and autoguiding of a small telescope using an external imaging train installed in parallel to the main optical tube assembly. Precision autoguiding is usually achieved by having a guide sensor in the same focal plane as the main imager, either using a multichip imager with a dedicated chip for this use, or using a pick-off mirror for guiding purposes. Both of these solutions require a reduced field of view for the guider, which cannot be used for recentering purposes, and can be limited in finding guide stars in low-density fields or when used behind a narrowband filter. With the solution we have proposed, both high-accuracy recentering and tracking can be done using the guiding imager, which is not affected at all by the filter we use on
Figure 4.2.5: Right ascension drift in the main imager (green) (corrected by our proposal), measured
drift in the guider (red), and accumulated corrections sent to the mount (blue).
the main optical train or the position of the star field under analysis. Since we can use a wide-field telescope, we can choose the external guider sensor size and related optics so that a much wider set of guide stars can be considered, and astrometry can be performed regardless of the target field or moonlight pollution. An improvement of a whole order of magnitude in the quality of the tracking in a conventional autoguiding application was experimentally demonstrated.
Our scheme is especially useful for telescopes where total cost is a major design consideration. In our test setup, the flexures of the telescope and the differential flexures between the main telescope and the guider are significant, and our solution solves this problem in an effective and low-cost fashion, via software. One can always improve the mechanics and achieve improved overall performance without using multiple pointing models, but our scheme provides an additional tool to increase the performance of small-telescope observations, which have played an increasingly prominent role in astronomy in recent years, due in part to the increase in their use fueled by research in exoplanets.
4.3 Portable Observatory Software Architecture
Both solutions propose a list of drivers for the most common peripherals used in astronomy-related projects, grouped by type of interface (camera, mount, weather station, focuser, rotator, and so on). Each kind of interface then exposes a public common Application Programming Interface (API), which is expected to be independent of the hardware used. The interface to these APIs is provided by a software hub which asynchronously translates requests from the common API to the specific peripheral's device. As a result, client software can connect and send orders to the API regardless of the brand or driver of the hardware connected to the computer.
Under Windows systems, ASCOM allows device hardware drivers and clients to be programmed in any Microsoft .NET language, since the communication is done using Windows COM objects. On the other hand, it does not provide a network distribution layer for these objects. As a consequence, when using an astronomical setup handled through ASCOM, every peripheral and client has to be installed on the same computer, unless a specific driver or client including a dedicated workaround to communicate with other devices on the network is present, which is not always the case. This bottleneck represents the major limitation of the ASCOM platform when integrating it in a remotely controlled environment, which may involve an observatory local network with some shared peripherals.
As for Linux systems, INDI provides a hub interface similar to the one proposed by ASCOM. This interface is developed in C++, and drivers for new peripherals can be installed in the folder where the INDI hub is running. These drivers have to be compiled in C++ and need to be run on a Linux system. Regarding communication with the clients, the hub proposes an XML-based communication language exported through TCP/IP sockets. These communication protocols allow running the client on any platform, regardless of the system or machine the server is running on. This feature represents the breakthrough of INDI with respect to what is proposed by ASCOM. However, even if INDI allows distributing peripherals and their respective drivers and clients over multiple machines on a local network, it is still necessary to develop the drivers in a common language running on Linux machines, which is not straightforward.
In this section, we will first present a novel software architecture based on the Internet Communications Engine (ICE) library and its implementation for the management of observatory peripherals. The main advantage over the previously described existing architectures is that the system is genuinely distributed, since both the servers and the clients are entirely system independent. In the second part of this section, we will present the list of peripherals and their inheritances currently handled by the proposed software, and how new types of peripherals may be added. Then, we will present how the whole configuration of a complete observatory can be grouped in a single XML file, together with its architecture and management. Finally, we will present how we can additionally include a fast web-socket based communication technique for each driver, allowing real-time communication with clients over the web interface, which lets the client communicate with any kind of device without the need to install any additional software on the client computer.
In both architectures, the centralization of communication is performed by a specific piece of software called a hub, which receives orders from various clients and dispatches them to their respective drivers. ASCOM runs on Windows systems and proposes to export COM objects between applications. These COM objects can be retrieved from clients programmed in any .NET language. ASCOM is widely used among the amateur astronomers' community because most hardware vendors do propose compatible drivers. However, clients and hardware using ASCOM must run on the same Windows computer. The reason for this is that the ASCOM hub is a particular driver developed for each client, which needs to run on the same computer.
In order to communicate between peripherals, drivers and clients, we propose to use the ZeroC-ICE framework. When compared to ASCOM, ZeroC-ICE provides all the benefits that INDI has demonstrated over ASCOM. On the other hand, it is better to use ICE rather than INDI for the following reasons:
• In ZeroC-ICE, servers can run on different platforms and operating systems, which is not the case with INDI, where the servers need to run on Unix-based systems only. Additionally, all the INDI drivers must be programmed in C++, which may add unnecessary complexity to the coding of some peripherals. On the other hand, the modularity of ZeroC-ICE allows, as an example, the graphical interfaces of all the cameras to be a single universal Python client exporting the interface as a web server. As a result, it can be accessed from any computer and any system without the need to install anything on the user's computer. On an INDI-driven system, the drivers necessarily run on Linux and have to be programmed in C++. Users then have to install a client on their computer for all the peripherals, and every interface has to be programmed within the same software, and thus within the same programming language. This makes the ZeroC-ICE interface much more versatile and much easier to use for both the programmer and the user.
• Servers and clients can also be programmed using any language, not being limited to just C++.
• Servers can run in standalone mode and do not need to be started through a node. As an example, a server can also use client objects and become a client of other servers, to which it can connect directly without going through third entities. This greatly simplifies the system architecture and the overall understanding of the relationships of a typically complex collection of peripherals.
• The ZeroC-ICE interface also provides callback functions over the network, which are not available in INDI and allow programming completely asynchronous objects.
• Finally, ZeroC-ICE provides the ICEGrid tool, which automatically checks and keeps the needed servers running, or starts them as soon as they are needed, which also simplifies the programming of the complete architecture.
The implementation of the architecture is performed by defining every server object using the Specification Language for ICE (SLICE). SLICE is a simple object-oriented definition language in which we define the member functions of every possible server object present in our setup. For the case of the test setup developed in this thesis, we will need the following objects to be specified:
Axis Motor controller: The motor server will be in charge of interacting with the Ipec motor controller and will receive orders from the mount (target speeds and position requests).
Dome: A dome server will interact with the dome electronics to move it according to orders received from the mount, or from other peripherals, and to open or close it. As a typical example, the weather station will be able to directly send an order to close the dome in case of possible rain.
Field Rotator This server will move the motor rotating the camera to position the field correctly, dynamically compensating for the field rotation which appears in the case of an alt-azimuthal telescope. Like the dome, it needs to be synchronized with the mount server.
Focuser The focuser server will move the focus motor of the camera, under direct orders from the user after visual inspection of the images, or from the camera manager after an automated focus analysis.
Filter Wheel The filter wheel server will interact with the motor positioning the proper filter in front of the detector. It will receive orders from the Acquisition Manager.
Mount The mount server contains the implementation of the Advanced Telescope Control System as defined in Section 4.2. As a consequence, it will continuously monitor the target object and transform its coordinates to the pointing machine state S23.
Camera The camera object includes all possible functions needed to handle a standard CCD. It will receive direct orders from the Acquisition Manager. Since most camera APIs are defined in C or C++, a significant part of these drivers will be programmed in this language. This will be an exception, as we chose to program all other servers in Python for convenience. Images are sent back to the Acquisition Manager using callback functions, which allow a fully asynchronous handling of this critical peripheral. As a result, the camera and Acquisition Manager objects will not remain blocked by any process while the camera is exposing or retrieving images from the device.
Acquisition Manager This server is the top level of the acquisition chain and should be able to connect to all the previously defined systems (directly or indirectly) to perform the observation of an object. It should be able to communicate with the camera to take an image and retrieve it, to process it, to analyze the data in order to refocus or correct the mount position if necessary, and to image again as soon as the mount is on target. We can consider the acquisition manager as the chief director of all peripherals, making them work together smoothly.
WeatherStation Above the other peripherals, the weather station can monitor the environmental conditions and decide to send orders to the proper peripherals to protect the system if a dangerous weather condition is detected.
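To give the flavour of the implementation, the following minimal Python sketch shows how one of these device servers could be exposed with ZeroC-ICE, assuming a Slice file CoolObs.ice that declares a Focuser interface mirroring the FocuserDriver member functions of Fig.4.3.2; the file name, interface name and port number are illustrative assumptions.

```python
import sys
import Ice

Ice.loadSlice("CoolObs.ice")   # generate the Python stubs from the (assumed) Slice file
import CoolObs

class FocuserI(CoolObs.Focuser):
    """Servant implementing the Focuser interface; hardware access is omitted."""
    def __init__(self):
        self.position = 0.0

    def GetFocus(self, current=None):
        return self.position

    def GotoFocus(self, position, current=None):
        self.position = position   # a real driver would command the focus motor here

    def FindHome(self, current=None):
        self.position = 0.0

with Ice.initialize(sys.argv) as communicator:
    adapter = communicator.createObjectAdapterWithEndpoints("FocuserAdapter", "default -p 5905")
    adapter.add(FocuserI(), Ice.stringToIdentity("Focuser"))
    adapter.activate()
    communicator.waitForShutdown()
```

A client on any machine of the network could then reach this server with, for instance, `CoolObs.FocuserPrx.checkedCast(communicator.stringToProxy("Focuser:default -p 5905"))`, regardless of the language or operating system of either side.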
Another significant advantage of being able to distribute object servers on different computers is that it becomes simple to place the computers so as to avoid cables passing through moving mechanical parts. In our case, we identify three primary locations for placing computers that allow a minimal number of moving cables:
Concrete pier We chose to use a computer placed on the ground, which controls the peripherals that are not moving, such as the azimuth axis controllers, the dome rotation or the weather station.
Mount fork A single board computer is placed on the mount’s fork to control the elevation axis.
Telescope tube Another single board computer is placed on the telescope tube and will handle
the camera, focuser, rotator and filter wheel.
With this specific arrangement, only power cables for computers and peripherals need to be brought to the mount and the tube of the telescopes, reducing the complexity of the setup and preventing complications due to cable wrapping. All the communication can be performed wirelessly, using WiFi in our case. We present in Fig.4.3.1 a schematic lateral view of an alt-azimuthal telescope and its components, putting in evidence this optimal positioning of the three computers to limit cabling passing through the axes.
[Figure 4.3.1 (diagram labels): telescope tube with tube computer box, camera, rotator, focuser and filter wheel; mount fork with elevation axis, gear, worm, motor and rotation; concrete pier with azimuth axis, worm, gear, motor, rotation and the ground computer box.]
Figure 4.3.1: Alt-azimuthal telescope and components with optimal placement of computers to limit the possibility of cable wraps. The software architecture allows maximizing intercommunication between peripherals using WiFi, so we only need power cables on the mount and tube.
For each class, Fig.4.3.2 presents:
• A list of the most relevant class member functions with their respective parameters
Simple arrows point towards the class the departure class derives from, while rhombus-shaped arrows point towards the classes containing the type of object connected at the departure of the arrow. Rhombus-shaped arrows show at their side the number and names of the objects instantiated in the class indicated by the arrow. It is essential to keep in mind that, for simplicity of understanding and visualization, only the most relevant components have been displayed. An extended definition of each class and SLICE object can be found in Annex A.
It is possible to see from this graph that the different objects which make the observatory work together communicate precisely in the same way as if they were a single object-oriented program. However, the ZeroC-ICE architecture we propose for the system allows each of these components to be included in separate pieces of software, which can even be programmed in different languages or run on different computers. For our example, we present in Fig.4.3.3 the UML deployment diagram. This diagram shows the same objects presented previously in Fig.4.3.2, placed in the hardware environment with their associated servers and connections, as follows:
• Red entities represent the hardware peripherals with their respective names
• We represent the servers running on each computer inside their associated computer, as green boxes carrying the name of the device server
• We represent physical hardware connections as continuous black lines between the peripherals and their respective drivers. The communication type is indicated on the server side, next to the line
However, even if this is an efficient approach to the architecture of the system when it does not evolve, it may bring several issues when the configuration tends to change. Consider the case where we change a parameter of one peripheral. The configuration will have to be changed accordingly in the file on the local computer where the peripheral is running. However, we will also have to synchronize it with the respective configuration files of every computer in the network to keep all the files consistent with each other. This becomes a more and more tedious operation as the complexity of the overall system increases. We successfully managed to get around this problem by centralizing the complete configuration inside a single MySQL database running on one single computer. Every peripheral driver, at startup, connects to this database to retrieve its configuration and acts accordingly. Since the location and parameters of the database will not change frequently (or, in practice, will not change), it is possible in this case to store the configuration of the database itself
[Figure 4.3.2 (class diagram content): CoolObsObject base class with a Status attribute and GetStatus(); AcquisitionManager (holding Camera, Focuser, FilterWheel and Mount references; SetTargetPosition, SetTargetFilter, SetSequence, StartSequence, StartGuiding, StopGuiding, AutoFocus); CameraDriver (SetExposureType, StartExposure with callback, StopExposure, AbortExposure, SetBinning, GetBinning); FilterWheelDriver (GetFilter, GetFilters, GotoFilter, FindHome); FocuserDriver (GetFocus, SetFocus, GotoFocus, FindHome); DomeDriver (GetAzimuth, GotoAzimuth, Open, Close, FindHome); MotorDriver (GetPosition, SetPosition, SetSpeed, Stop, FindHome); WeatherStation (GetWeather; holding Mount and Dome references); TelescopeMount (holding Axis1, Axis2 and Rotator MotorDrivers, the PointingModels, the Dome and a control loop thread; GetPositionJ2000, GotoPositionJ2000, SyncOnCurrentTarget, SyncOnJ2000, GuideOffset, Park, SetSpeed, Stop, FindHome, ComputeControl).]
Figure 4.3.2: Simplified UML class diagram of the CoolObs objects architecture
[Figure 4.3.3 (deployment diagram content): device servers exposed on ICE ports 5901 to 5908 on the ground, fork and tube computers, connected to their peripherals over USB and TCP/IP.]
85
Observatory Control System
in an XML document in each computer. The software will then first look for the configuration file
on the local drive to get the parameters of the location and credentials of the database. Then use
them to connect to the database and retrieve the configuration of the desired peripheral.
We now present the implementation of our configuration system on the 40cm telescope at Santa
Martina's Observatory (Pontificia Universidad Catolica de Chile, PUC). The XML file is stored on
each computer at the path $HOME/.CoolObs/Config.xml, encoded as shown in Fig.4.3.4, which
is the usual configuration in the mentioned telescope.
Figure 4.3.4: Example of XML configuration file for the PUC40 telescope installed at Santa Mar-
tina’s Observatory.
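The startup sequence described above can be summarized by a short sketch. The snippet below is only an illustration of the mechanism and not the actual CoolObs code: the XML tag names, the Puc40Peripherals table name and the choice of the PyMySQL client library are assumptions made for the example.

import os
import xml.etree.ElementTree as ET

import pymysql  # any MySQL client library could be used here


def load_database_location(path=os.path.expanduser("~/.CoolObs/Config.xml")):
    # The local XML file only stores where the central database lives.
    root = ET.parse(path).getroot()
    return {
        "host": root.findtext("DatabaseHost"),
        "user": root.findtext("DatabaseUser"),
        "password": root.findtext("DatabasePassword"),
        "database": root.findtext("DatabaseName"),
    }


def load_peripheral_config(peripheral_name):
    # Connect to the central MySQL database and fetch one peripheral's parameters
    # from the (hypothetical) master table of peripherals of this telescope.
    db = load_database_location()
    connection = pymysql.connect(**db)
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT * FROM Puc40Peripherals WHERE Name = %s",
                           (peripheral_name,))
            return cursor.fetchone()
    finally:
        connection.close()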
Regarding the implementation of the database itself, considering a site with multiple telescopes,
we decided to use a single database per site, with one master table of peripherals per telescope.
This master table of peripherals groups every device composing the telescope together with the
table where its extended definition can be found. We can then define one shared table format per
type of peripheral.
Figure 4.3.5 shows a real example of the database implemented at Santa Martina's
Observatory in Chile. Every table represents a type of peripheral, in which each line represents
one peripheral with its associated properties. All the peripherals of the observatory are expressed
this way in the same database. It is then essential to notice the existence of two different types of
table:
The Puc40Peripherals Table: We define one specific table like this one per telescope on the
site. In this table, a list of all peripherals related to one single telescope is detailed. As a
consequence, the software in charge of starting the system only has to look over this
table to start the necessary components. The table has one line per telescope component,
and we define each component by five fields. A real example of the Puc40Peripherals table
from Santa Martina's Observatory, where all the components of a 16" telescope are properly
stored, is presented in Fig.4.3.6 to show a practical implementation of the proposed architecture.
The Logs Table: All the logs of all peripherals are stored in this table in their order of appear-
ance, with one line per log. The table is composed of 5 columns:
• Date represents the instant of the event
• Name is the name of the peripheral initiating the current log line
• Level represents the type of log information (INFO, WARNING, ERROR, etc.)
• Action is a text field recording the function from which the log was initiated
• Message is the corresponding logged message
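As an illustration, a log line matching the five columns listed above could be written from Python as follows; the SQL statement and the connection helper are hypothetical, and only the column set follows the text.

import datetime


def write_log(connection, name, level, action, message):
    # Append one log line for a peripheral, in order of appearance.
    with connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO Logs (Date, Name, Level, Action, Message) "
            "VALUES (%s, %s, %s, %s, %s)",
            (datetime.datetime.utcnow(), name, level, action, message))
    connection.commit()

# Example: write_log(conn, "Mount", "WARNING", "GotoPositionJ2000", "Target below horizon")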
On Windows-based systems, each ASCOM driver conventionally provides its own interface, which
can only be accessed from the computer it is running on, with evident complications in a
distributed system like the one we are proposing. On the other hand, on Linux-based systems, the
client itself provides the interface, which can be accessed from the machine the client is running
on. The user thus needs to install the client software and its dependencies on their machine to access
the interface of each peripheral.
1. Every interface should be accessible from any web-enabled device equipped with
a standard modern web browser, without the need of adding plug-ins or add-ons (including
at least Windows/OSX/Linux/Android/iOS systems)
2. Each server must be able to display the data and the description of some critical internal
variables several times per second
3. Each web interface should give access to commands corresponding to most of the SLICE
functions it exports. Each server must also be able to expose in its interface commands to
the SLICE functions of its children clients whenever it could be useful
In a first approach to a browser implementation, frameworks like Tornado or Django seemed
to be the best option.
However, first tests showed that their standard configuration using POST requests becomes too slow
to smoothly display the data evolution in real time. As a result we finally based our user interface
on a simple WebSocket handler class, so that each ZeroC-ICE server should provide the following:
• A JavaScript applet provides a WebSocket linked with the ZeroC-ICE server (requirement #2)
• The same JavaScript provides a list of functions which can be called from the server for
visualization updates (requirement #3)
• It also provides a list of functions which can be called from HTML5
buttons, which translate the functions of the SLICE interface into a visual form and
communicate with the server accordingly (requirement #3)
• Finally, the webpage should be built using HTML5 and CSS3 (requirement #4)
From a practical point of view, JavaScript functions handle the visualized data on the web
page of the peripheral. These functions, along with their respective parameters, are called from the
Python server's WebSocket on each connected client browser. The advantage of using the
WebSocket is its speed: since these functions only refresh numerical or text values on the pages,
they can be executed more than ten times a second, transforming a webpage into a real-time
display for the user. Further, the optimal way to send a list of parameters from the Python server
to the WebSocket is to use a single parameter sent to the refresh function, which is a JavaScript
Object Notation (JSON) formatted string. This string can be easily created on the Python side
by serializing dictionaries or sets of parameters using the Python JSON library. On the web
side, it can be easily parsed using the JSON.parse( ) JavaScript function or created using the
JSON.stringify( ) function.
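A minimal sketch of this push mechanism is shown below, written with Tornado (one of the frameworks mentioned above); the handler class name, the refresh period and the payload fields are illustrative assumptions, not the actual CoolObs implementation.

import json

import tornado.ioloop
import tornado.web
import tornado.websocket


class TelemetrySocket(tornado.websocket.WebSocketHandler):
    clients = set()

    def open(self):
        TelemetrySocket.clients.add(self)

    def on_close(self):
        TelemetrySocket.clients.discard(self)

    @classmethod
    def push_status(cls):
        # Build the JSON string from a plain dictionary and push it to every
        # connected browser; the page refreshes its fields with JSON.parse().
        payload = json.dumps({"ra": 83.822, "dec": -5.391, "tracking": True})
        for client in cls.clients:
            client.write_message(payload)


if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws", TelemetrySocket)])
    app.listen(8888)
    # Push an update ten times per second (100 ms period).
    tornado.ioloop.PeriodicCallback(TelemetrySocket.push_status, 100).start()
    tornado.ioloop.IOLoop.current().start()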
As the main conclusion of this browser subsection, an embedded web server using the WebSocket
protocol for each of the peripherals successfully allows the user to have a shared, user-friendly and
recognizable User Interface regardless of the operating system in use, and without any need to
install additional software, add-ons or plugins. Any recent browser is able to give access to
the complete system in real time, so the system adds the simplicity of technical implementation to
a speedy response which enables data display in real time.
4.3.5 Discussion
Within this complete section, we have successfully defined and implemented an object-oriented
software architecture which can be used to interact with any observatory peripheral. This software
allows programming servers and clients in a variety of programming languages at the convenience
of the programmer. It is even possible to select the optimal programming language depending on
the performance of the hardware or on the performance of the algorithmics. This becomes an
advantageous approach for the integration of the servers in a community, as clients become independent
from the hardware they are used for: the programmer sees peripherals as black-box objects from
the client side, so that they will have a similar behavior whatever the hardware connected behind
the server.
One of the main advantages of the proposed architecture is that the deployment of new systems
can quickly be done using automation scripts and modern programming tools such as GitHub,
allowing a highly dynamic development inside large software teams, keeping updates controlled and
making the deployment of incoming tools transparent and user-friendly.
Beyond this modularity and transparency, one of the primary goals of this work was proposing
highly accessible visualization methods for the indicators of the performance of the system. We
successfully managed to define a method which does not need any installation of third-party tools
or plug-ins by the user, while keeping the ability to interact with every peripheral of the observatory
in real time, equivalently as if it were a local GUI.
Additionally, our method also allows a total abstraction from the network layer, so the use of
a VPN connection can even allow opening a peripheral interface over a Wide Area Network (WAN)
from any smartphone or tablet computer using minimal bandwidth. This is a very desirable feature
for robotic and automated control of remote observation stations.
4.4 The SAPACAN Hexapod System

Figure 4.4.1: Lateral view of a prime focus telescope. Chief (green) and marginal (blue) rays
coming from stars at infinity are represented.
However, keeping that positioning accuracy of the focus of the primary mirror while the tele-
scope tracks a field in the moving night sky is a mechanically demanding problem, as flexures and
misalignments happen while the telescope moves [115][116]. This is usually solved by spending a
large amount of money on a rigid, non-flexible mechanical mount, or by using hexapod systems
which reposition the secondary mirror or detector to meet the focus of the primary mirror. Such
arrangements are thought to relax the rigidity requirements of the mounts of the telescopes. The
most popular hexapod system used with this purpose in telescope optics is the Stewart platform
[41], which has been described and analysed extensively [117][118][46]. However, such an approach is
usually limited to telescopes with primary mirrors over 2m [116].
cost of installing a Stewart platform behind the secondary mirror becomes too large to be consid-
ered when compared to the total cost of the project. Additionally, the increase of weight and size
at the primary focal plane due to such device implies the use of heavier and stronger mechanics.
As a result, in small class telescopes only focus is usually controlled, while the other four possible
collimation positioning axes are only manually adjustable with tilt screws, which become fixed
once the observation starts.
Within this Chapter we present a new, simple, 5-DOF (Degrees Of Freedom) parallel mechanism
based on a cost-effective and simple mechatronic manipulator [119][120] which works in open-loop
conditions and may be used to precisely position secondary mirrors or detectors in multiple
mirror systems [121] on the main axis of a telescope at the desired focal point. The proposed
approach overcomes the cost limitations which prevent the use of hexapod systems such as the
Stewart platform in small telescopes. The proposed mechanism is a 3T2R (3 Translations, 2
Rotations) device intended to position and orient a plane in space, working under open-loop
conditions. The structure provides the 5 degrees of freedom required to position the hardware with
an active adjustment of tip, tilt, defocus, and lateral shift along two directions, while leaving a
central ring; O1 , O2 , and O3 are the points of fixation of the scissor on the telescope body, while
S1 , S2 and S3 are the attachment points of each flexible lame on its corresponding 2D actuator.
We also define the points Bi , Ci and Ii , Ji whose relative distances define angles between the lower
scissor and the telescope tube, and the relative angle between the lower scissor arm and the top
scissor arm. Each motor can perform 200 individual steps per turn, and each actuates on a worm
screw of 2mm per turn, so the minimum possible displacement along the distances Ii Ji and Bi Ci
is 0.01mm. Three flexible lames make the union between the central part where the detector will
be fixed and the three actuators. The lames were obtained using stainless steel sheets of thickness
0.1mm. The length of the sheets was cut to link the central ring supporting the detector with each
of the arms of the device placed at its middle-range position. The width of such lames was chosen to
be strong enough to support the weight of the detector and associated mechanics without flexures.
Fig.4.4.3 shows the detail of all the components of one of the three independent actuators involved.
The system is thus simple, efficient, suitable for small telescopes and very cost-effective.
The main difference between this device and other 5 or 6-DOF parallel systems previously
proposed [122][123] is its simplicity and cost-effectiveness, plus the presence of the flexible lames,
which allow the system to be driven to released (subtensed) positions before moving, so each motor
can be moved independently from the others over small distances, and subsequently can be run in
open loop as if the lames were flexible strings [124]. Obviously, the performance of the system
regarding the attainable positioning range, resolution and accuracy will not match that of a Stewart
platform, although it will be enough to achieve the mentioned specifications on a small-size telescope,
as will be discussed later in this Chapter.
Figure 4.4.2: Kinematics of the device, including indication of the respective local reference frames
(Rmec1, Rmec2, Rmec3) of each scissor and of the points O1–O3, P1–P3 and S1–S3.
Figure 4.4.3: Lateral view of the device with a description of its mechanical components and all
relevant points used in the calculations. Only Arm 1 is presented; Arms 2 and 3 are fully
equivalent. [Labeled components: top screw, top nut, top scissor, top motor, lame, low scissor,
low nut, low screw, low motor; labeled points: I1, J1, S1, D1, O1, C1, B1.]
The relationship between the target coordinates of the hexapod and the parameters required to
reposition the detector is established in equations (4.4.1) to (4.4.3) using trigonometric relationships,
where α_x and θ_y denote the tilts of the detector plane about the X and Y axes, (t_x, t_y, t_z)
the lateral shifts and defocus, and R_P the radius of the circle passing through P_1, P_2 and P_3.
P_1^{R_{mec1}} =
\begin{pmatrix}
\sin(\theta_y)\sin(\alpha_x)\,R_P + t_x \\
\cos(\alpha_x)\,R_P + t_y \\
\cos(\theta_y)\sin(\alpha_x)\,R_P + t_z
\end{pmatrix}    (4.4.1)

P_2^{R_{mec2}} =
\begin{pmatrix}
-\frac{\sqrt{3}}{4}\cos(\theta_y)R_P + \frac{1}{4}\sin(\theta_y)\sin(\alpha_x)R_P - \frac{t_x}{2} + \frac{\sqrt{3}}{2}\left(-\frac{1}{2}\cos(\alpha_x)R_P + t_y\right) \\
-\frac{\sqrt{3}}{2}\left(\frac{\sqrt{3}}{2}\cos(\theta_y)R_P - \frac{1}{2}\sin(\theta_y)\sin(\alpha_x)R_P + t_x\right) + \frac{1}{4}\cos(\alpha_x)R_P - \frac{t_y}{2} \\
-\frac{\sqrt{3}}{2}\sin(\theta_y)R_P - \frac{1}{2}\cos(\theta_y)\sin(\alpha_x)R_P + t_z
\end{pmatrix}    (4.4.2)

P_3^{R_{mec3}} =
\begin{pmatrix}
\frac{\sqrt{3}}{4}\cos(\theta_y)R_P + \frac{1}{4}\sin(\theta_y)\sin(\alpha_x)R_P - \frac{t_x}{2} - \frac{\sqrt{3}}{2}\left(-\frac{1}{2}\cos(\alpha_x)R_P + t_y\right) \\
\frac{\sqrt{3}}{2}\left(-\frac{\sqrt{3}}{2}\cos(\theta_y)R_P - \frac{1}{2}\sin(\theta_y)\sin(\alpha_x)R_P + t_x\right) + \frac{1}{4}\cos(\alpha_x)R_P - \frac{t_y}{2} \\
\frac{\sqrt{3}}{2}\sin(\theta_y)R_P - \frac{1}{2}\cos(\theta_y)\sin(\alpha_x)R_P + t_z
\end{pmatrix}    (4.4.3)
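As a simple numerical illustration of equation (4.4.1), the position of P1 in its local frame can be evaluated as follows; the function name and the example values are purely illustrative and not part of the thesis software.

import numpy as np


def p1_local(alpha_x, theta_y, tx, ty, tz, r_p):
    # P1 expressed in the R_mec1 frame from the detector-plane parameters, eq. (4.4.1).
    return np.array([
        np.sin(theta_y) * np.sin(alpha_x) * r_p + tx,
        np.cos(alpha_x) * r_p + ty,
        np.cos(theta_y) * np.sin(alpha_x) * r_p + tz,
    ])

# Example: p1_local(np.radians(0.5), np.radians(-0.2), 1.0, 0.15, 0.5, r_p=150.0)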
Using these equations we can compute the position of points S1, S2 and S3 in their respective
coordinate reference systems Rmec1, Rmec2 and Rmec3. Every YZ mechanical stage can move on
the YZ plane of its corresponding reference frame. The setpoints S1, S2 and S3 are each linked to
P1, P2 and P3 respectively by the three metallic flexible lames of equal length L. We define RS as
the radius of the circle passing through the points S1, S2 and S3. We also introduce the tensioning
offset parameter ε, a virtual value which we subtract from the previously defined radius RP in
order to be able to adjust the tensioning of the lames. The same offset value ε is applied to the
three lames. As a result we obtain a new virtual radius value R′, which we will call the tensioning
radius, defined by R′ = RP − ε. If ε > 0 the algorithm will compute positions for the points S1, S2
and S3 so that the respective distances SiPi will be smaller than the actual length L of the flexible
lames, so the whole system will not be completely tensed in this configuration. If ε is null the
system will be perfectly adjusted to its nominal position and the points Si and Pi will be coplanar.
It is important to notice that ε should not take a negative value, since in this case the distances
SiPi would become bigger than the actual length L; the lames are flexible but not elastic, so this
situation must be avoided.
Figure 4.4.4: Lateral view of the system in a subtensed condition. Only one arm is represented,
with the definition of the distances RS, R′, RP and ε described in the text.
The coordinates of S1, S2 and S3 along the X axis of their corresponding reference frames are
set to be null. This arrangement leads to the coordinates of the setpoints S1, S2 and S3 being
expressed as in equations (4.4.4) to (4.4.9).
S_1^{R_{mec1}} =
\begin{pmatrix}
0 \\
\cos(\alpha_x)\,R' + t_y + a \\
\cos(\theta_y)\sin(\alpha_x)\,R' + t_z
\end{pmatrix}    (4.4.4)

S_2^{R_{mec2}} =
\begin{pmatrix}
0 \\
-\frac{\sqrt{3}}{2}\left(\frac{\sqrt{3}}{2}\cos(\theta_y)R' - \frac{1}{2}\sin(\theta_y)\sin(\alpha_x)R' + t_x\right) + \frac{1}{4}\cos(\alpha_x)R' - \frac{t_y}{2} + b \\
-\frac{\sqrt{3}}{2}\sin(\theta_y)R' - \frac{1}{2}\cos(\theta_y)\sin(\alpha_x)R' + t_z
\end{pmatrix}    (4.4.5)

S_3^{R_{mec3}} =
\begin{pmatrix}
0 \\
\frac{\sqrt{3}}{2}\left(-\frac{\sqrt{3}}{2}\cos(\theta_y)R' - \frac{1}{2}\sin(\theta_y)\sin(\alpha_x)R' + t_x\right) + \frac{1}{4}\cos(\alpha_x)R' - \frac{t_y}{2} + c \\
\frac{\sqrt{3}}{2}\sin(\theta_y)R' - \frac{1}{2}\cos(\theta_y)\sin(\alpha_x)R' + t_z
\end{pmatrix}    (4.4.6)

with:

a = \sqrt{L^2 - \left(\sin(\theta_y)\sin(\alpha_x)R' + t_x\right)^2}    (4.4.7)

b = \sqrt{L^2 - \left(-\frac{\sqrt{3}}{4}\cos(\theta_y)R' + \frac{1}{4}\sin(\theta_y)\sin(\alpha_x)R' - \frac{t_x}{2} + \frac{\sqrt{3}}{2}\left(-\frac{1}{2}\cos(\alpha_x)R' + t_y\right)\right)^2}    (4.4.8)

c = \sqrt{L^2 - \left(\frac{\sqrt{3}}{4}\cos(\theta_y)R' + \frac{1}{4}\sin(\theta_y)\sin(\alpha_x)R' - \frac{t_x}{2} - \frac{\sqrt{3}}{2}\left(-\frac{1}{2}\cos(\alpha_x)R' + t_y\right)\right)^2}    (4.4.9)
Once the local coordinates of S1, S2 and S3 are known, we can obtain the lengths
I1J1, I2J2, I3J3, and B1C1, B2C2, B3C3, which fix the position of the mechanical axis
on the corresponding worm and are directly related to the distance to the home position of the
corresponding stepper motor. Since the computation is equivalent for the three reference frames,
we will only express the equations for computing the distances IiJi and BiCi as a function of
the coordinates of Si in the reference frame Rmeci. Obtaining these distances can be performed as
follows. First we compute the position of the points Di indicated in Fig.4.4.3 by solving (4.4.10),
and then we solve the coordinates of Ii, Ji and Ci using (4.4.11), (4.4.12), and (4.4.13) respectively.
\begin{cases} \vec{O_iS_i} + \vec{S_iD_i} + \vec{D_iO_i} = \vec{0} \\ \vec{O_iD_i}\cdot\vec{y_i} > 0 \end{cases}    (4.4.10)

\begin{cases} \vec{O_iI_i} + \vec{I_iD_i} + \vec{D_iO_i} = \vec{0} \\ \vec{O_iI_i}\cdot\vec{y_i} < 0 \end{cases}    (4.4.11)

\begin{cases} \vec{O_iJ_i} + \vec{J_iD_i} + \vec{D_iO_i} = \vec{0} \\ \vec{O_iJ_i}\cdot\vec{y_i} < 0 \end{cases}    (4.4.12)

\begin{cases} \vec{O_iC_i} + \vec{C_iD_i} + \vec{D_iO_i} = \vec{0} \\ \vec{O_iC_i}\cdot\vec{y_i} < 0 \end{cases}    (4.4.13)

\begin{cases} \vec{O_iB_i} + \vec{B_iC_i} + \vec{C_iO_i} = \vec{0} \\ \vec{O_iB_i}\cdot\vec{y_i} < 0 \end{cases}    (4.4.14)
The inverse transformation permits obtaining the five parameters of the system and is performed
in a similar way. At first, the positions of the points Ci can be obtained by solving (4.4.14)
according to the coordinates of Bi and Oi and the lengths BiCi; then the coordinates of Di can be
obtained by solving (4.4.13). Given this we can recover Ii according to (4.4.11), which leads us to
obtain Ji from (4.4.12). Finally we can deduce the coordinates of Si by solving (4.4.10).
Once the coordinates of the points Si are known, the vector [α_x, θ_y, t_z] is obtained by solving
for the coordinates of the center Oc of the circle passing through S1, S2, and S3; α_x (x tilt) and
θ_y (y tilt) are obtained according to (4.4.15) and (4.4.16).
\alpha_x = \arctan\left( \frac{(\vec{S_1S_2} \wedge \vec{S_1S_3}) \cdot \vec{y_1}}{(\vec{S_1S_2} \wedge \vec{S_1S_3}) \cdot \vec{z_1}} \right)    (4.4.15)

\theta_y = \arctan\left( \frac{(\vec{S_1S_2} \wedge \vec{S_1S_3}) \cdot \vec{x_1}}{(\vec{S_1S_2} \wedge \vec{S_1S_3}) \cdot \vec{z_1}} \right)    (4.4.16)
Each path can then be split into 3 sub-paths from position A to position B. First the system will be
driven from position A to A′, where A′ has the same five positional parameters as position A, but
with ε > 0 (see Fig. 4.4.4), meaning the lames are first set to an undertensed position. In practice,
setting ε to a value of 3mm allows every movement to be performed without overconstraining the
motors because of the potential existence of paths containing unreachable positions. This way the
tension applied to the three lames is slightly relaxed and the motors can move without risk of
overload. Position A′ is then driven to position B′, which is described by the position of B but
with the same R′ value as A′. In the last step, the system is driven from position B′ to position B
by progressively driving the value of ε to zero, so the lames are tensed again.
As we can see in Fig.4.4.5 we use a recursive dichotomy algorithm [125] to find the intermediate
positions of the path between two given positions. The algorithm developed permits generating a
path driving the system from a position Nm (characterized by the abovementioned six parameters)
to a position Nn following recursively a path {N1, [...], Nm, Nn, [...], Nk}, where m and n are respectively
the indexes of the start and end positions of the path chain to be generated. The algorithm first
calculates the maximum distance dmax to be covered by any motor in a given number of steps
to go from Nm to Nn. When the maximum number of steps to be done by a motor is smaller
than the maximum distance allowed for a single iteration, it means the system has reached its final
position and the algorithm ends, returning the generated path. If the maximum number of steps
to be done by a motor is higher than this distance, a six-axis position Nj between Nm and Nn
is interpolated linearly and added to the path in between these two positions. The algorithm is
then called recursively twice with this updated path, using as checkpoints the pairs (Nm, Nj) and
(Nj, Nn).
Due to the loose constraint in terms of speed we specified for the system, dynamic analysis is
not included in the scope of this document. Positioning corrections will be done when the detector
is not collecting light, between observations. As a result the movements can be considered as a
simple sequence of semi-static positions. Higher-frequency effects susceptible to degrading the image
quality, such as wind bursts, atmospheric turbulence or drive dynamics, will be corrected, if
necessary, by separate subsystems.
[Flowchart: compute the A–A′–B′–B decomposition; compute path(*path, Index_start, Index_end);
compute the distance between the two checkpoints; if it exceeds the allowed step, compute the
intermediate position Pos_mean = Mean(path[Index_start], path[Index_end]), insert it into the path,
and recurse on the two halves (Index_start, Index_mean) and (Index_mean, Index_end).]

Figure 4.4.5: Path computation flowchart. The algorithm can be assimilated to a recursive
dichotomy. The output is the path, made of a list of positions.
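The recursion of Fig. 4.4.5 can be sketched as follows. This is a simplified, functional variant written for clarity under the assumption that a position is the tuple of the six motor distances and that positions can be interpolated linearly; it is not the thesis code, and the maximum step per iteration is an arbitrary example value.

MAX_MOTOR_STEP_MM = 0.5  # largest travel allowed for any motor in one iteration


def interpolate(pos_a, pos_b):
    # Six-axis position halfway between two positions (linear interpolation).
    return tuple((a + b) / 2.0 for a, b in zip(pos_a, pos_b))


def compute_path(pos_start, pos_end):
    # Return the chain of positions leading to pos_end (pos_start excluded), inserting
    # intermediate positions recursively until no motor moves more than the allowed step.
    dmax = max(abs(a - b) for a, b in zip(pos_start, pos_end))
    if dmax <= MAX_MOTOR_STEP_MM:
        return [pos_end]
    pos_mean = interpolate(pos_start, pos_end)
    return compute_path(pos_start, pos_mean) + compute_path(pos_mean, pos_end)

# Full path from the relaxed position A' to B': [pos_a] + compute_path(pos_a, pos_b)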
a serial system. This has the additional advantage that equations describing restrictions between
axes are not required, notably simplifying the mechanical behavior of the system.
Thus, for each position of the other axes we will find a change in focus, according to the inverse
of the transformations described in equations (4.4.1) to (4.4.16), which may be plotted. Inverse
equations can be analytically obtained in a straightforward manner using the same approach used
for the direct equations. In such a graph, non-plotted points will be regions where the device
cannot reach due to mechanical constraints, so the graphs will also depict the range of actuation
of the device on the displayed axis. Fig.4.4.6 presents the resolution in focus displacement and the
range of allowed values of the primary focus against the range of X or Y lateral shifts, when X and
Y tilts are null (meaning they correspond to a geometrical focus plane perfectly normal to Z axis).
In the graphs of Fig.4.4.7 we display the resolution of the displacement and range of positioning
of the primary focus against the range of X or Y tilt when X and Y lateral shifts are null. As an
example, we can read on Fig.(4.4.6a) that the position where focus=+5mm, xshift= +30mm,
yshift= 0mm, xtilt= 0 and ytilt= 0 is not coloured, meaning it is unreachable, while on Fig.(4.4.6b)
we can read that at the position where focus=+5mm, xshift= 0mm, yshift= +20mm, xtilt= 0 and
ytilt= 0, the movement of one step of each motor will give a focus variation of approximately 2.2µm.
99
Observatory Control System
The maximum change in focus position at all positions is below 4µm, exceeding the specification
presented for focus adjustment.
Figure 4.4.6: Focus range and resolution against a) X shift; and b) Y shift.
It may also be appreciated how the smallest possible displacement for a motor step (and thus,
the best resolution) in focus positioning is obtained when the focus is set at Z close to its maximum
value, while the largest one (the worst) is obtained when Z is close to its minimal position. This
is in close relationship with the lengths Bi Ci in these respective positions. Since they are maximal
when the focus position Z is at the lower regions, and the angle of the scissor at this position is close
to 0, a single step of the motor will induce a bigger displacement in this position compared to the
one performed when Z is close to its maximum, and then Bi Ci is minimal. In this case the angle
of the scissor is bigger and the variation provided by a single step of the motor is also bigger, since
it depends on the arctangent of the angle of the scissor. Thus, close to the maximum in Z we reach
the best resolution, although also the minimum movement range, something coherent with the
mechanical constraints of the proposed mechanical design. An identical situation, for equivalent
reasons, happens when comparing focus with xtilt and ytilt (see Fig.4.4.7), yielding minimum (more
precise) resolution values at the highest focus positions and much larger displacement values when
the scissor is at the lower points of its trajectory.
Figure 4.4.7: Focus range and resolution against a) X tilt; and b) Y tilt.
The accuracy in focus position has obvious effects on the overall image of the observed field, and
the accuracy with which the tensioning radius is fixed leads to lames being under- or overtensed and
thus to potential focus errors. A situation with overtensed lames will keep the system in a proper
position, but with the side effect of losing the exact homing position of the motors. A configuration
with undertensed lames will make the system flex slightly under the effect of gravity and, as a
result, the points Si and Pi will not be coplanar anymore, introducing a critical error between the
theoretical and the actual focus position, and a subsequent degradation in image quality.
In order to perform the analysis of the accuracy of the system, we will numerically induce
various sizes of mechanical homing errors on the distances BiCi and IiJi through errors in the
positions of the motors [m1..m6]. Homing errors and lost motor step counts are the main source
of inaccuracies in the system, if the mechanical elements are stable, and they are conditions prone to
happen if overtensed lame conditions are reached, which is a realistic situation. The effects of these
errors across equally distributed sets of positions of x shift, y shift, x tilt, y tilt, and focus within
the possible actuation range have been studied according to the inverse transformations previously
described in equations (4.4.1) to (4.4.16). In particular, we will concentrate on the measurement of
the absolute error in focus position and the absolute error in tensioning radius they induce, as
errors on these two parameters have a dramatic effect on image quality. Such errors are measured
in mm, and they are physically related to problems in homing or in positioning of the motors, as
stated above.
The error in positioning along each of the six axes is computed using the inverse transformations,
so the distances BiCi and IiJi may be computed in a perfect system and then in a system with
positioning misadjustments due to lost motor counts or bad homing. This gives us the associated
position of the points Si in Rmeci. Using these points we compute the 3D circle they define. Tilts over
X and Y are computed from the 3D normal vector to the plane in which the circle is drawn, and
lateral shifts and defocus are deduced from the position of the centre of the mentioned circle. We
then define a total RMS homing error as the root mean square of the errors of the six individual
motors. The accuracy of compensation of tilts, lateral shifts and defocus is obtained by respectively
comparing the values obtained in the perfect system and in the mechanically misadjusted one. The
accuracy of R′ (that is, the accuracy in lame tensioning) will be defined by the difference between the
value RP and the theoretical radius R′ obtained through R′ = RS − L, where RS is the radius of
the circle passing through the points S1, S2 and S3.
Fig.4.4.8 shows that the absolute positioning errors in focusing and in tensioning radius are of
the same order of magnitude. Positioning errors may be seen to be unacceptably large. However,
since the positioning errors of the motors are mostly due to bad trigger distance estimation and/or
repeatability of the magnetic proximity homing sensors, these aberrations can be considered as
constant once the system has been homed. The range of variability of the triggering of the sensors we
used was measured using a distance measuring gauge and was estimated at ±200µm. It should be
noted that relevant RMS motor positioning errors may result in small focus or tensioning errors.
Figure 4.4.8: Accuracy of a) focusing; and b) tensioning radius prior to the homing procedure
described in the text.

Figure 4.4.9: Accuracy of a) focus position; and b) tensioning radius for different homing errors
once the homing procedure described in the text has been applied.
4.4.6 Results
A prototype of the system was adapted to a 50cm telescope using the usual process of design
(Fig.4.4.10) and construction (Fig.4.4.11). The telescope mount was built using conventional com-
mercial aluminium profiles, yielding a very light and economical structure, but with less rigidity than a
conventional telescope. In order to check the performance of the device on such a small telescope,
we verified that the system was performing as desired by observing series of open and globular star
cluster fields, chosen to obtain a sufficient star density all over the detector so that image quality could
be evaluated. We show as an example one of the results obtained in Fig.4.4.12 and Fig.4.4.13. Both
corners of the detector and the central area of the image, so the image aberrations at the edges are
properly highlighted. Fig.4.4.12 was taken with the hexapod device in its initial homing position.
The stars imaged at the edge of the field of view show significant defocus, and vignetting may
even be appreciated in the left corner, due to misalignment of the optical axis. The size of the stars
in all corners is also very variable, due to improper tilt of the detector. The corrections to be done
were obtained by visually inspecting the evolution of defocus and coma of the stars across the field
in order to obtain the tilt and lateral shift parameters, while the best focus position was obtained
also by analysing the size of the stars. Fig.4.4.13 shows the same field after applying the proper
corrections to the device, resulting in the following parameters on the axes: focus offset: 500µm,
xshift: 1000µm, yshift: 150µm, xtilt: 58 arcmin, ytilt: 28 arcmin. The images now show
unvignetted stars and a comparable size of objects in all areas of the image. It is
important to notice that the two images seem to point at slightly different star fields, because
the observed point was slightly modified by adjusting the Xtilt and Ytilt parameters
of the device.
Finally, we show in Tab.4.4.1 a comparison of the main features of the presented device
and a standard commercial hexapod based on a Stewart platform of similar size and weight.
It is important to notice that although the presented device cannot match the performance of
a commercial one in terms of range, repeatability or speed, the cost has been dramatically reduced,
making it feasible for use in the considered range of telescopes, while mechanically keeping the
optical path clean, as the actuators are left in the outer part of the telescope tube.
Additionally, the range of movements we need for our device is still much smaller than what the
device offers, since we only want to compensate for flexures due to the mechanical mount proposed.
The property of having the actuators outside the optical path also offers the potential
of using larger, even more geared-down hardware for higher resolution positioning at a low cost of
manufacture. The gain in resolution in this case, however, would be paid off with a slower system
response and a smaller actuation range, which would not be useful for the telescope application we
are considering.
4.4.7 Discussion
We have proposed a low cost hardware system able to position efficiently an optical device over 5
degrees of freedom. The system has been adapted to a small telescope, showing the capability of
adjusting the focus point position along a tracking process, a procedure previously unavailable to this
telescope class due to the large cost of the associated Stewart platforms. The system has been designed
to use a minimal number of actuators and a mechanical arrangement which minimizes the footprint
of the device in the optical path of the telescope, using a set of flexible lames between the actuators
and the central ring containing the sensor. The proposed geometry removes the need of constraining
all actuators together and enables the device to perform adequately in open-loop conditions. The
description of the main equations and algorithms used in the control software has been presented,
as well as the algorithmics for path computation between neighbouring points. A methodology for
computing both the resolution and the accuracy of the proposed positioning system has been discussed,
showing its agreement with the desired specifications but also the nonlinearity and limitations in
range imposed by the low cost approach used. The proposed prototype has been built and tested
on sky, showing significant improvements in the images when the position of the device is optimized.
As future work, we plan to integrate the one-shot focusing algorithm described in the next Section,
which extracts aberration information directly from the images acquired on the CCD [126], so that it
can automatically measure and compensate the aberration values after each image acquisition and
the detector is kept in its optimal position at all times.
Table 4.4.1: Comparison of the proposed device and a commercial hexapod based on a Stewart
platform of comparable features.

                           Our device                Hexapod
Supported weight           10 kg                     10 kg
Device weight              10 kg                     10 kg
Size                       3 arms, 200x300x50 mm     diameter 300 mm x 200 mm
Obstructs optical field    no                        yes
Shift range                ±10x10x15 mm              ±50x50x25 mm
Angular range              ±3°x3°x0°                 ±15°x15°x30°
Repeatability              3-10 µm                   1-2 µm
Speed                      < 0.1 mm/s                > 20 mm/s
Price                      < 1000 USD                > 15000 USD
4.5 Controlling and Measuring Collimation from Single Images

The proposed one-shot focusing procedure consists of four steps:
• First, we calibrate the field curvature, sensor tilt, and the position of the optical center using
a set of images which are affected by different amounts of known mean defocus.
• In step two, we determine a model function that allows us to associate an absolute defocus
value with a set of values of a given image quality metric.
• The third step consists in measuring the value of the image quality metric chosen in step 2
in di↵erent parts of the image.
• Finally, a minimization algorithm will find the mean defocus value which best fits the data
obtained in step 3 using the model obtained in step 2.
Steps 1 and 2 can be done a single time for the full instrument lifetime (as long as it is stable
and the user does not intervene), while we will run steps 3 and 4 every time we want to compute
the best focus position as determined by a single image. In this work, we use data of focus series
taken with four telescopes of the HAT-South project [129] to illustrate the procedure outlined
above. The four HAT-South telescopes are based on the same optical design but may be affected
by different alignments of the sensors relative to the corresponding optical axes. We show that the
local entropy of a sub-image around the detected stars is the best image metric for our application,
rather than other image metrics more usual in the definition of astronomical image quality, such
as, for example, the full width at half maximum. Finally, we briefly discuss how stable the proposed
technique is under variations of the model, e.g., variations in the sensor tilt which can arise
due to instrumental flexure. We will analyze the four steps described above in detail along the
text.

Footnote: The HATSouth hardware was acquired by NSF MRI NSF/AST-0723074, and is owned by Princeton University.
The HATSouth network is operated by a collaboration consisting of Princeton University (PU), the Max Planck
Institute for Astronomy (MPIA), and the Australian National University (ANU). The station at Las Campanas
Observatory (LCO) of the Carnegie Institution for Science is operated by PU in conjunction with collaborators at
the Pontificia Universidad Catolica de Chile (PUC), the station at the High Energy Spectroscopic Survey (HESS)
site is operated in conjunction with MPIA, and the station at Siding Springs Observatory (SSO) is operated jointly
with ANU.
e = -\sum_{i=0}^{\mathrm{len}(P)} P_i \log(P_i)    (4.5.1)
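A possible implementation of this metric on a sub-image (box) extracted around a detected star is sketched below; the normalization of the pixel values into a probability-like distribution is an assumption made for the example, since the exact definition of P is not reproduced here.

import numpy as np


def local_entropy(box):
    # Shannon entropy (in bits) of the pixel distribution inside a 2-D sub-image.
    p = np.asarray(box, dtype=float).ravel()
    p = p - p.min()              # make the values non-negative
    total = p.sum()
    if total == 0.0:
        return 0.0
    p = p / total                # normalize so the values behave as probabilities
    p = p[p > 0]                 # ignore empty bins: 0 * log(0) -> 0
    return float(-np.sum(p * np.log2(p)))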
We define the box's size so that the star in the maximum-defocus image we use still fits inside
the box. The measured values for each star are then smoothed using a 10x10 zone grid covering the
full field of view. In practice, we take the available field of view and divide it into a mesh of 10x10
independent boxes. We present the organization of the boxes within the visible field of view and a
typical density of detected stars in Fig.4.5.1.
Every box of the mesh has an associated value of the metric at its center, which is the mean
of the values of all the sources present in the considered box. The value of the metric at an arbitrary
position in the box is obtained by interpolating from the grid. An example of an entropy map
calculated on a single image is presented in Fig 4.5.2.
An extended analysis of several images affected by different values of mean defocus shows that
images before focus tend to present lower entropy values close to the optical center rather than
on the edges, while images taken behind the focus tend to present better results on the edges
rather than on the center of the field. This asymmetry is due to the effect of field curvature, and
as a result, a detailed analysis of any defocus image that can measure this asymmetry should in
principle be sufficient to determine on which side of the focus we are and how far we are from
the best focus position. Thus, the local behavior of entropy becomes a vital issue in the process
as it potentially contains information to ascertain the position of the focus, and deserves further
attention.
[Figure 4.5.1: organization of the boxes over the image and the detected stars.]

We then fit a Gaussian function of the form

f(x) = a\, e^{-\frac{(x-b)^2}{c}} + c    (4.5.2)
to the variation of a given metric in each zone of the image. We then can extract from the fit
the value of the Gaussian mean b (i.e., the decentering of the Gaussian of each metric), and plot
its value over the entire field of view of the image. Since b corresponds to the decentering of the
Gaussian, the fitted value of b for a given zone will give the absolute best focus position for this
position in the field. We show the best fit value of b for entropy over the field obtained with our set
of focus images for HS1.4 in Fig 4.5.5. It is now clear how the best focus position is not constant
over the field. We can see the radial nature of the focus distribution and how the focal plane is
curved.
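For reference, the per-zone extraction of b can be sketched with a standard least-squares fit. In the sketch an independent offset d is used instead of reusing c, and the initial guesses and names are illustrative, not the thesis implementation.

import numpy as np
from scipy.optimize import curve_fit


def gaussian(x, a, b, c, d):
    # Gaussian of mean b, in the spirit of equation (4.5.2), with an independent offset d.
    return a * np.exp(-((x - b) ** 2) / c) + d


def best_focus_of_zone(focus_positions, entropies):
    # Return b, the focuser position minimizing the entropy of this zone.
    x = np.asarray(focus_positions, dtype=float)
    y = np.asarray(entropies, dtype=float)
    p0 = [y.min() - y.max(),          # a: depth of the entropy dip (negative)
          x[np.argmin(y)],            # b: rough best focus
          (np.ptp(x) / 2.0) ** 2,     # c: width of the dip
          y.max()]                    # d: entropy level far from the dip
    popt, _ = curve_fit(gaussian, x, y, p0=p0, maxfev=10000)
    return popt[1]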
The map obtained from this figure gives us the value of the best focus as a function of the
position in the field (x, y), calculated from the entropy value. This shows how it is possible to
make a radial model of the best focus as a function of the distance to the optical center. From this
information we can estimate the parameters {z, O, m, t} defined previously. To achieve this, we
use the map of b values obtained to fit the model of equation (4.5.3), where:

r^2 \equiv (x - O_x)^2 + (y - O_y)^2 .    (4.5.4)
We show the multidimensional model for HS1.4 and its residuals in Fig. 4.5.6, where we see
that a reasonable-quality fit is obtained, enabling all parameters to be calculated, including the
best focus position for the image (in this case, 264.99mm). We could, of course, obtain a better
fit with a 3rd or higher order model, but this would have made the fitting and later evaluation
processes slower and, as we will show in the following, a second order model proves to be accurate
enough for our purposes.
Thus, we obtained a consistent model of the focus position of the telescope based on the analysis
of the evolution of the entropy function. The model allows the determination of the best focus for
the image plus the tilt, vertex position, and field curvature. Fig.4.5.5 shows a map of the fitted
value b as a function of the position in the field. Since b represents the offset of the Gaussian fit
of the evolution of the entropy when the focal plane moves across the focus, the value of b gives
the focus position of the entropy minima, and thus the best focus for the corresponding measured
zone of the field. It is now clear that the best focus is obtained at di↵erent positions of the focuser
depending on which part of the focal plane we are watching. The evolution is radial, and we can
also see a linear evolution from the bottom of the field to the top which is due to the focus tilting.
The same work has been performed using the ellipticity and the measured Full Width at Half
Maximum (FWHM) of the stars, and we plot their respective evolution across focus in Fig 4.5.7
and 4.5.8. We can naturally assimilate the FWHM to a Gaussian, and its minimum is directly related
to the best focus position. However, it is possible to see in Fig 4.5.7 that the measurement quickly
deviates from any possible Gaussian when we get away from the best focus, because the shape of
the stars is not Gaussian anymore and the annular pupil starts to be visible. On the other hand, we
can still assimilate the evolution of ellipticity to a Gaussian as we get away from the best focus
position, but its minimum is not related to the real best focus.
As a consequence, we decided to use only the entropy as a measurement metric, since it remains
robust as we get far away from the focus and still varies smoothly when we are close to the focus.
This lets us conclude that the entropy is, in fact, the best metric of focus quality both for
out-of-focus and in-focus images.
From now on we will introduce the term Absolute Defocus. While the Focus position is defined
as the position, in steps, of the focuser motor, the Absolute Defocus represents, for a given zone of
the field, the difference, in focuser steps, between the actual position and the best focus position for
this zone.
We can then plot the evolution of any given metric measured on a per-star basis as a function of
its Absolute Defocus (i.e., the value of its defocus relative to the best focus position at the position
of the source).

Figure 4.5.4: Evolution of the mean entropy on 10 zones of the field as a function of the focus
position. The 10 selected zones are presented in Fig.4.5.3.

The best metric will be the one which shows a nearly constant behavior across the
whole field of view, since this will allow using this single function to model the effects of defocus
on our instrument. We carried out this exercise for the three image metrics mentioned
above, namely FWHM, ellipticity, and entropy.
We display the results in Fig 4.5.7, 4.5.8 and 4.5.9. It is clear from the figures that entropy is
a significantly more stable metric at all positions of the field of view. Its behavior as a function of
absolute defocus is more homogeneous across the field, making it the best metric for our purposes
and enabling the prediction of the best focus position of the telescope from single images. As a
comparison, ellipticity is notably unstable, as a clear minimum region where to set the focal point is
hard to find.
Once entropy has been fixed as the optimal image metric for our purposes, it is now straightforward
to define a behavioral model which associates an entropy value with a known absolute
defocus. We based this model on a functional fitting of the entropy measurements shown in
Fig. 4.5.9. When testing quadratic, cubic and Gaussian functions as basis functions for the fit, the
results showed an insufficient-quality fit to the set of points, so we turned to non-parametric
functions to model entropy. Our first trial was a 1-dimensional smoothed spline. We can see this
model overlaid on the experimental data points in Fig. 4.5.10.
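A minimal sketch of such a non-parametric model using SciPy's 1-D smoothing spline is given below; the sorting step and the default smoothing factor are implementation choices made for the example, not necessarily those used for the figures.

import numpy as np
from scipy.interpolate import UnivariateSpline


def fit_entropy_model(absolute_defocus, entropy):
    # Return a callable model giving entropy as a function of absolute defocus.
    order = np.argsort(absolute_defocus)          # spline input must be increasing
    x = np.asarray(absolute_defocus, dtype=float)[order]
    y = np.asarray(entropy, dtype=float)[order]
    return UnivariateSpline(x, y)                 # default smoothing chosen by SciPy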
If we plot the evolution of entropy in a 3D plot including absolute defocus and distance to the
center of the image, it may be seen how the behavior of the focal point has a particular dependence
on the distance to the optical axis (see Fig. 4.5.11). Since we are plotting the entropy as a function
of the absolute defocus, it appears the distance to the optical center is the only parameter which
still has an influence on the shape of the evolution of entropy when we are getting away from
the absolute focus. This is because the distance to the center does have an effect on the coma and
vignetting, which may be neither constant nor symmetric when we get away from the best focus on
one side or another. However, since we measure as a function of the absolute defocus, which is the
distance from the best focus position in this zone of the image, the tilt parameters do not have any
influence since they are included into the absolute defocus.
To further strengthen the model and make it more accurate, we also tried the fit of a 2-dimensional
model based on a smoothed B-spline, which simultaneously fits entropy as a function of defocus
and of the distance to the center. We present the fit of the 2D B-spline model and the associated
data in Fig. 4.5.11, enabling to see the dependence of entropy on the distance to the center and
how the B-spline enables a proper fit of the data. The residuals obtained after the fit for both the
1D and the 2D smoothed spline fits can be seen in Fig. 4.5.12, where, clearly, the 2D spline shows
a better behavior.

Figure 4.5.5: b value (mean of the Gaussian fit of the entropy metric) over the field of HS1.4.
This result directly gives us the position of the best focus for the global image. Figure 4.5.13 shows
the evolution of ||e − m(b)|| as a function of b for a real image obtained with HS1.3. For this image,
it was necessary to move the focus 6000 steps to get the best overall focus in the image. We
obtained Figure 4.5.14 from another image taken with HS1.3; in this case, it was necessary to
move the focus by 1000 motor steps to obtain the best overall focus of the image. In both cases, we
can see that the root mean square of the comparison between the vector of measured star entropies
and the entropies predicted by the model at a given focus position reaches an absolute and stable
minimum at the same point to which we had to move the focus to get to the best focus in practice.
Figure 4.5.13 and Figure 4.5.14 show how, both for large and small values of defocus, the
minimum of the norm coincides with the amount of defocus present in the system. Thus, the
calculation of the norm enables a correct estimate of the value of defocus to be corrected from
the images. In the case of small defocus, however, the model presents a broad plateau around the
best focus value we expect, so in practice many positions of the focus could be used equivalently.
Figure 4.5.6: Optical zOmt model, or field curvature model, of HS1.4 (a) and residuals (b). Fig.(a)
represents the map of the expected best focus positions over the field, in focuser steps, according
to the zOmt model. The lowest RMS defocus over the field is obtained for the focus position
z = 264.99 steps. The model converged to an optical center at coordinates (X_shift, Y_shift) =
(273.55, 131.16) pixels off the geometric image center. The tilt found was (X_tilt, Y_tilt) =
(3.91, 1.24) arcmin. Fig.(b) represents the difference, in focuser steps, between the model presented
in Fig.(a) and the measurement of the best focus positions over the field presented in Fig.4.5.5.
As a last phase of the process, we implemented a minimization algorithm to find the best
fit between the data of Step 3 and the model of Step 2. We used a variety of algorithms to try to
find the best focus condition by minimizing χ² (Levenberg-Marquardt, simplex, and brute-force
minimum search over an interval). It happens, however, that the error function presents several
minima around the absolute best focus, which makes convergence with local search algorithms
not very stable. In practice, the most robust solution was obtained by generating an entropy vector
for a grid of focuser positions inside an interval around a guess position. The algorithm returns
the position with the absolute minimum norm of residuals as the absolute best focus, and an
interpolation around this position can be used as a local refinement.
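The brute-force search can be sketched as follows, assuming a model(defocus, radius) callable built in Step 2 that already folds in the per-zone best focus, and measured entropies and star radii given as NumPy arrays; all names and grid parameters are illustrative.

import numpy as np


def best_focus_grid_search(measured_entropy, star_radii, model, guess, half_window, step):
    # Return the focus offset minimizing ||e_measured - m(b)|| over a grid of candidates.
    candidates = np.arange(guess - half_window, guess + half_window, step)
    norms = []
    for b in candidates:
        predicted = model(b, star_radii)   # expected entropy of each star at trial offset b
        norms.append(np.sqrt(np.mean((measured_entropy - predicted) ** 2)))
    return candidates[int(np.argmin(norms))]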
We tested the reliability of the entropy-based algorithm compared to an FWHM algorithm and
their evolution over a night on the HAT-South telescope HS1.3. To exercise the robustness of
the algorithm, we obtained the calibration of the behavior model with telescope HS1.4, which has
the same optical configuration as HS1.3 but a different tilt. The tilt angle difference is what we
typically observe from one telescope to another, which is between one and two arcminutes on
each axis. Figure 4.5.15 shows the evolution of both algorithms over a night of observation and
their correlation with the ambient temperature. We can observe that both algorithms give similar
results in these conditions, which are favorable to the FWHM algorithm. Under the more complicated
conditions shown in figure 4.5.16, where the first field is further away from the best focus, we observe
that the FWHM algorithm hardly converges while the entropy method gives an acceptable focus
position on the first iteration.
As a consequence, we could check the robustness of the algorithm under harsh conditions, and it
showed to be more robust than the traditional FWHM measurements, even though the calibration was
done using a telescope affected by a different tilt.
4.5.4 Discussion
We have presented a new technique for one-shot focusing based on an entropy-based merit function.
Our results show that this technique can be more robust than the usual techniques based on
the use of FWHM or ellipticity as metrics of image quality, and it becomes a simple computational
approach which enables the implementation of a sensorless adaptive optics approach for small telescopes.
Such robustness can be explained by the shape-independent nature of the entropy function when
compared to the other metrics.
Figure 4.5.7: FWHM metric measured values as a function of absolute defocus (b parameter of the
quadratic fit of the evolution of FWHM per zone across focus).

Figure 4.5.8: Ellipticity metric measured values as a function of absolute defocus (b parameter of
the quadratic fit of the evolution of ellipticity per zone across focus).
For both ellipticity and FWHM, we compute the metric from a 2D
Gaussian fit of the star. This fit can become unstable as soon as the star is affected by a significant
amount of defocus or coma, since the shape to be fitted is not approximately Gaussian anymore.
As a result, the focusing technique we have described can become a useful tool when applied to
fast f/d ratio or wide-field telescopes. In these configurations, the focusing precision is critical to
keep quasi-Gaussian-shaped images over the field, as the coma aberration affecting defocused stars
near the edge of the field may prevent classical techniques from converging correctly.
As part of the future work of this Thesis, we are currently applying the proposed technique to a
control program capable of analyzing in real time images taken in a series and computing the focus
correction to apply according to the focus position recorded in the image header. The computed
correction can be fed either directly to a focuser peripheral or to a telescope control scheduler.
Figure 4.5.9: Entropy metric measured values as a function of absolute defocus (b parameter of
the quadratic fit of the evolution of entropy per zone across focus).

Figure 4.5.10: Entropy value against defocus (blue) and fit to a 1D smoothed spline (red).
Figure 4.5.11: Entropy as a function of the absolute defocus. The 1-D representation in (a)
shows all the measured and modeled values of the entropy vs the absolute defocus. The 2-D
representation in (b) shows the color map of the measured entropy against the radial distance
from the optical center. Every black dot represents the position of an entropy measurement in
defocus vs distance from the optical center; the colors in between the dots are the interpolated
values of the entropy between the measurements used to build the map.

Figure 4.5.12: (a) is a smoothed B-spline version of the map presented in Fig.4.5.11 (b). The graph
on the right represents the residuals given by the difference of (a) and Fig.4.5.11 (b). Fig.4.5.12
(b) and Fig.4.5.11 (a) show a clearly better convergence than Fig.4.5.10 thanks to the use of a 2D
B-spline model.
Figure 4.5.14: Mean residuals for an image 1000 units before focus.
Figure 4.5.15: Comparison of the results given by the entropy method and the usual FWHM
method during an actual observation night. We show the evolution of the results given by both
algorithms when the telescope is tracking and pointing (an arrow marks the moment where the
target has been changed). We also show the evolution of the temperature during the night.
During this test the initial best focus guess was only 600 motor steps away from the best focus.
It is possible to see that both algorithms converge quickly. The entropy method is as efficient as
the usual FWHM method in this case.
Figure 4.5.16: Comparison of the results given by the entropy method and the usual FWHM
method during an actual observation night. We show the evolution of the results given by both
algorithms when the telescope is tracking and pointing (an arrow marks the moment where the
target has been changed). We also show the evolution of the temperature during the night.
During this test the initial best focus guess was more than 2000 motor steps away from the
best focus. It is possible to see that the entropy algorithm converges quickly while the FWHM
method can get lost. The entropy method is much more robust than the usual FWHM method
in this case.
5 Construction, Implementation and Results
In this last chapter before conclusions, we will present several applications of the solutions proposed
in chapter 4 on actual astronomy projects executed on small telescopes which are the main goal of
the developments in this Thesis.
As a first implementation, we will present the construction of Virtud50, a 50cm alt-azimuthal
telescope built from scratch to test and tune the solutions previously presented. This 50cm telescope
is currently installed at Observatoire SIRENE (Silo REhabilité pour Nuits Etoilées) in France, in
a refurbished 6.5m dome which was unused at the time. This part consisted of modifying and
automating the dome and, especially, of constructing and testing the 50cm alt-azimuthal
telescope equipped with the SAPACAN 5-DOF positioning system at its prime focus. We then use
this telescope for imaging purposes.
The second application described refers to the Tololo 40cm and ESO 50cm telescopes. Both stations operate from Santa Martina's Observatory in the surroundings of Santiago, Chile, and currently belong to the Pontificia Universidad Catolica de Chile (PUC). The Tololo 40cm telescope is one of the first two 16-inch telescopes deployed by Kitt Peak National Observatory (KPNO), later moved to Tololo's Observatory, Chile, for the Landolt photometric survey. It was then donated to the PUC in Santiago where, in 2014, it was equipped with the very first version of the CoolObs system described above, and it has been used on a daily basis ever since for education purposes in the undergraduate programs of the university. It is equipped with a 36mm front-illuminated SBIG imager. The ESO 50cm telescope is a Cassegrain telescope installed on a fork-equatorial mount. It was designed and manufactured in the 1960s and installed at La Silla Observatory in 1970. Decommissioned in 2000, it was then donated to PUC and installed at Santa Martina in 2003. After a first telescope control system was retrofitted [131], it became a tool for high-resolution spectroscopy with the installation of the PUCHEROS [132] Echelle fiber-fed spectrograph designed at PUC. The CoolObs TCS system was installed in 2015 and allowed a more precise and stable operation of the spectrograph.
In the third section, we describe the robotization and upgrade of the ESO 1m telescope located
at La Silla Observatory. The ESO 1m telescope was the first telescope installed in La Silla in 1966.
It now hosts as a primary instrument the FIber Dual EchellE Optical Spectrograph (FIDEOS),
a high-resolution spectrograph designed for precise Radial Velocity (RV) measurements on bright
stars. To meet the special requirements of this project, it was required to upgrade the Telescope Control System (TCS) and some of its mechanical peripherals. We upgraded the existing TCS into
a modern and robust software running on a group of single board computers interacting together as
a network, using the CoolObs TCS architecture described before. One of the particularities of the
CoolObs TCS is that it allows fusing the input signals of 2 encoders per axis, so that high precision
and resolution of the tracking can be achieved with encoders of moderate cost. In the approach
taken here, one encoder directly reads the angular position of the telescope’s right ascension axis,
while the other encoder returns the actual position of the motor's axis. The TCS also integrates itself
with the FIDEOS instrument in such a way that the operator can interact with all the subsystems
through the same remote user interface. Our modern TCS unit allows the user to run observations
remotely through a secured Internet web interface, minimizing the need for an on-site observer and
thus opening a new age in robotic astronomy for the ESO 1m telescope.
Figure 5.1.1: The Virtud-50cm telescope's Rowe Coma Corrector, commercially packaged by Baader Planetarium.
The support of the primary mirror is composed of 6 floating triangles, which constitute the cell support system and compensate for the deformation of the mirror itself. Concept design and FEM analysis were performed using the PLOP freeware software. Results of the optimization and
the positioning of the 18 floating points of the primary mirror are presented in Fig.5.1.4.
The lateral support of the mirror was built using two crossed steel cables, ensuring a continuous
Figure 5.1.2: The Virtud-50cm telescope mounted in its final location. It is possible to appreciate the truss tube design and the SAPACAN 5-DOF remotely controlled collimation system next to the prime focus of the telescope.
lateral force and minimizing the induced astigmatism. The cables go along the width of the mirror
in a plane crossing the Center of Gravity (CG). We present the mechanism we used to sustain the
mirror in Fig. 5.1.3.
Figure 5.1.3: Attachment method of the primary mirror to the tube. Since the telescope is alt-azimuthal it is not necessary to hold the mirror at the top. The mirror is sustained by two slings which have the advantage of self-centering the mirror inside the tube and provide a homogeneous force on the periphery of the flange.
Beyond the tube structure, made of aluminum profile, the base and toothed gear wheels of the mount were built on demand by the manufacturer Roberto Castillo, in Chile. A cast aluminum fork mount in an alt-azimuthal configuration was purpose-built for the Virtud 50cm. The drive uses a 360-tooth gear and a fitting worm. Encoders of 100,000 steps per revolution equip each of the worm axes, and encoders of 1,000,000 steps per revolution equip the Azimuth and Elevation axes respectively.
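As a quick consistency check of the on-sky resolutions these counts imply (a back-of-the-envelope calculation, not a measured value), one worm revolution advances the 360-tooth gear by one degree, i.e. 3600 arc-seconds, so that

\[
\Delta\theta_{\mathrm{worm}} = \frac{3600''}{100\,000\ \mathrm{steps}} = 0.036''\ \mathrm{per\ step},
\qquad
\Delta\theta_{\mathrm{axis}} = \frac{1\,296\,000''}{1\,000\,000\ \mathrm{steps}} \approx 1.3''\ \mathrm{per\ step}.
\]

These are the same orders of magnitude as the worm-encoder and on-axis-encoder resolutions later quoted for the dual-encoder configuration of the ESO 1m telescope in Section 5.4.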
Figure 5.1.4: The Virtud-50cm Cell support design. The mirror is supported by 6 floating triangles
(18 contact points). Each pair of triangles is united by a joining lever. The left drawing represents
the positioning of the triangles and the position of the supports relative to the mirror, while the
right plot shows the resulting mirror deformations at the zenith (in m) according to the FEM
model simulated in this configuration.
The positions of the worm and axis encoders for each axis are measured inside a dedicated RPI.
We perform the encoder fusion inside this embedded computer following the procedure described in
chapter 4.2.3. Two magnetic limit sensors were installed on each axis to avoid mechanical problems
along the telescope motion control, or in case of malfunction. Regarding the elevation control, the
upper limit (+90 deg) is active only when the motor moves positively and the negative limit when
the target speed is negative. For the azimuth axis, the problem is more complex as the system
must be able to point beyond 360 degrees.
Figure 5.1.5: Aerial view of the refurbished dome at Sirene’s Observatory, France.
Figure 5.1.7: Interface board for reading the position of the axis encoder over a serial or serial/USB port.
As a result, the upper limit (400 degrees) is active only when the azimuth angle is greater than 180 degrees and the speed is positive. The negative limit is active when the latest known position of the telescope azimuth is lower than 180 degrees and the target speed is negative.
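This rule can be summarized in a few lines of logic. The sketch below (Python, with illustrative names, not the CoolObs implementation) only captures the direction-dependent behavior described above.

def motion_allowed(axis, target_speed, last_position_deg,
                   upper_limit_active, lower_limit_active):
    """Return True if the commanded speed may be applied to this axis."""
    if axis == "elevation":
        # The +90 deg limit only blocks positive motion; the lower limit only
        # blocks negative motion, so the telescope can always back away.
        if upper_limit_active and target_speed > 0:
            return False
        if lower_limit_active and target_speed < 0:
            return False
    elif axis == "azimuth":
        # Azimuth can travel beyond 360 deg, so the last known position tells
        # which physical limit the triggered switch corresponds to.
        if upper_limit_active and last_position_deg > 180 and target_speed > 0:
            return False
        if lower_limit_active and last_position_deg < 180 and target_speed < 0:
            return False
    return True

# Tracking into the +90 deg elevation limit is refused, backing away is allowed.
assert not motion_allowed("elevation", +0.1, 89.9, True, False)
assert motion_allowed("elevation", -0.1, 89.9, True, False)

The key point is that a triggered limit switch never prevents the axis from moving back towards its working range.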
5.1.1.5 Software
After installing CoolObs Software on the computer network, the configuration has been set to use
the S20 state of the telescope pointing machine as a final state. As a consequence, the control
system performs in alt-azimuthal mode, in a completely transparent way to the user.
A July 2015 service mission allowed a significant upgrade to the telescope control system ca-
bling and connections. All cabling was changed, and installed in a new electronic control box.
Unfortunately, a major lightning strike during the 2016 winter damaged most of the electronics, including the CCD, and the telescope remained unused until the August 2016 service mission. A final service mission in July 2017 allowed upgrading the control system to the latest version and the control boards from RPI1 to RPI3. The telescope is currently operative and used in observations
from Sirene.
5.1.3 Results
The telescope demonstrated its ability to track and keep stars over several-minute exposures, even in alt-azimuthal mode, up to 75 degrees above the horizon. Focusing, tilting and collimation were successfully performed using the method defined in Section 4.5.
5.2 The PUC 40cm Telescope Imager
Although the university staff attempted a first upgrade of the control system in early 2000, soon after the telescope's installation on the Santiago campus, the result obtained was not robust enough to be used easily by students. The astronomy department later moved the telescope to the teaching observatory inside the Santa Martina's Golf Club in the outskirts of Santiago. After fitting a commercial telescope control system (Servocat/Argo Navis), it could be observed that the communication protocol (LX200), the one used in most commercial telescope control systems, had strong inherent limitations which did not allow the telescope to point at field positions robustly enough to obtain good quality images, even when using a reference commercial pointing software (TPOINT). As a consequence, it was decided to implement the first version of the CoolObs TCS on this telescope and to use it as a test platform. In this section we show how we implemented various technical solutions presented in this thesis. This includes the high-precision drive control presented in Section 4.1, the Telescope Control System presented in Section 4.2, and the ZeroC-ICE-based software architecture presented in Section 4.3.
Figure 5.2.1: General view of the PUC40 telescope installed at Santa Martina's Golf Club. It is possible to appreciate the typical Boller & Chivens offset Hour Angle axis.
with a standard NEMA34 stepper motor. We replaced them with a pair of Pittmann DC054B Series brushed DC 24V motors, which have the advantage of allowing a configuration that includes a 5000 ticks/revolution encoder on the motor axis.
As the encoders embedded within these motors were of high quality, a solution using them could in principle have been feasible. However, we considered it significantly safer to upgrade to Gurley R158s encoders placed on the worm, with 25000 pulses/revolution offering 0.036 arcsec relative resolution when projected on the sky. The mechanical implementation and fitting of the encoder on the mount can be seen in Fig. 5.2.2. The control loop between the motors and the encoders was closed using the same IPECmot 48/10 industrial DC servo controller used in the Virtud 50cm alt-azimuthal telescope described in the previous section.
Position and speed control of both axes were initially driven using a single RaspberryPi 1.0, but
after several software upgrades, we upgraded the control devices to three independent RaspberryPi
3.0 units, with the first and second ones used to control the speed of each axis, and the third one for
control of the position. All the electronic subsystems described here fit inside the white electronic
box visible in the lower left corner of Fig.5.2.1, yielding a quite compact setup.
Figure 5.2.2: Motor swap and encoder fitting on the Hour Angle Axis of the telescope.
The telescope is controlled from the interface seen in Fig. 5.2.3, which is the final layer between all the implemented algorithms and the astronomer. The operator can be remotely located and thus perform a fully robotic observation. The user interface is composed of four main parts:
• The top left part is dedicated to the sky map, showing the respective current positions of the
telescope and the target.
• The top right part is the main control center and provides three different areas. The first, top one is the action board with the list of possible operations which can be performed by the
mount. The second, below, represents the movement and offset status of the telescope at
the current moment. The last one, at the bottom of the area, displays the current variables
defining the position of the telescope and the state of its motion.
• The bottom left part shows the airmass and visibility graph of the current target.
• The bottom right part is a 3D simulated plot representing the current position of the tele-
scope.
On the camera side, we present its interface in Fig. 5.2.4. The last image taken with the camera is visible on the left part of the display in a JS9 applet able to dynamically adjust the contrast of a 16-bit image for proper visualization. In addition to the standard observation tools, JS9 also offers some handy observing tools integrated into the applet, such as a radial plot of stars to check proper focusing, or catalog overlays. The astrometric solution is seamlessly integrated into the image header so that the exact current mouse position is continuously coherent with the displayed coordinates.
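As a minimal sketch of what such header-embedded astrometry enables (assuming astropy; the file name and pixel position are placeholders, and this is not the code running behind the JS9 display), any pixel can be converted to sky coordinates directly from the stored WCS:

from astropy.io import fits
from astropy.wcs import WCS

# "last_image.fits" and the pixel position stand in for the image currently
# shown in the display and the mouse location over it.
with fits.open("last_image.fits") as hdul:
    wcs = WCS(hdul[0].header)

x_pix, y_pix = 1024.0, 1024.0
sky = wcs.pixel_to_world(x_pix, y_pix)
print("RA = %.5f deg, Dec = %.5f deg" % (sky.ra.deg, sky.dec.deg))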
Figure 5.2.3: Telescope Control System interface Screenshot. The figure shows the TCS interface
as it appears to the user during a remote observing session.
Figure 5.2.4: Camera Interface Screenshot. The figure shows the real-time interface used for the
remote operation of the Main Imaging Camera
see that the shape of the stars has been kept perfectly round during the full exposure time giving
rise to a suitable result given the size of the telescope used.
Figure 5.2.5: Integration of a 2h session on the NGC104 globular cluster using the PUC40cm telescope imager. The result presented is the combined sum of 40 consecutive 180s exposures with a Johnson & Cousins R filter and no auto-guiding. The roundness of the star images shows that the system can perfectly handle 3-minute exposures without the need for guiding, partially thanks to the effect of using the advanced pointing model described in Section 4.2.
5.3 The ESO 50cm Telescope with PUCHEROS Echelle Spectrograph
Figure 5.3.1: Initial state of the ESO50 telescope when still installed at La Silla
the outdated low-level hardware still in use did not allow a robust operation of the complete system. Thus, a new refurbishment was mandatory, and it was taken as an opportunity to turn the telescope into a fully robotic system using the algorithms and methods described in Section 4.
Figure 5.3.2: ESO50 after being moved down to Santa Martina’s observatory
The telescope is equipped with PUCHEROS [135], a fiber-fed Echelle spectrograph, so the field of view of the camera placed in the focal plane is too small to build an automatic pointing model based on an astrometric solution. To overcome the problem, the pointing model is generated by centering 30 stars one by one in the pinhole of the instrument. Obviously, this method is significantly more tedious than the one used on the PUC40, although it is not less precise and enables the telescope to achieve a pointing precision as small as 15 arcseconds. In this case, the high precision of the worm/gear made it unnecessary to install an on-axis encoder to correct for the periodic error in motor displacement. In such conditions, the system can perfectly track a star for more than 20 minutes without correction, within the 2 arcsecond precision required by the diameter of the fiber projected on the sky.
The right part of Fig. 5.3.3 shows a second control box in which the computers and electronic boards related to the control of the different peripherals of the instrument have been placed. While the first box is dedicated exclusively to the telescope control subsystems, additional equipment was installed in the second box, including the computer controlling the wide-field camera operating in parallel to the telescope, and the motor controller for the flip mirror which selects whether the light impinging on the fiber comes from the sky or from a calibration lamp.
Figure 5.3.3: The new control box and electronics of the 50cm telescope. The TCS and associated subsystems are embedded in the left-hand box, while imaging-related equipment has been installed in the right-hand box.
Thanks to the stability and versatility of the implemented Telescope Control System presented in Section 4, continuous observations performed with the telescope and the PUCHEROS spectrograph since the TCS installation in 2012 allowed obtaining the scientific results listed below, which serve as a reference for the quality of the implementation performed:
• Izzo, L., Mason, E., Vanzi, L., Fernandez, J. M., Espinoza, N., Helminiak, K., & Della Valle,
M. (2013). Spectroscopic observations of Nova Cen 2013. The Astronomer’s Telegram, 5639.
• Ederoclite, A., 2013, May. T Pyx: towards a new paradigm for nova explosions. In Highlights
of Spanish Astrophysics VII (pp. 539-542).
• Berdja, A., Vanzi, L., Jordán, A. and Koshida, S., 2012, September. An Echelle Spectro-
graph for precise radial velocity measurements in the near IR. In Ground-based and Airborne
Instrumentation for Astronomy IV (Vol. 8446, p. 844681). International Society for Optics
and Photonics.
• Zapata, A., Vanzi, L., Jones, M. and Brahms, R., 2017, September. Radial Velocity Challenge
at the AIUC. In ESO Calibration Workshop: The Second Generation VLT Instruments and
Friends.
To describe the tasks done in detail, we first present the initial status of the telescope before starting the upgrade. Next, we present the hardware which needed to be changed, both electronic and mechanical, and which solutions were chosen. In the next section we present the new telescope control system, showing its functioning principles, a description of its software, the structure of the electronic boards, its Graphical User Interface (GUI), and how the system constructs the telescope's pointing model. Finally, we present the results obtained up to the time of writing. A final section describes further work to be done.
was not required. The positioning motors (declination and right ascension) were in good condition, so they could be kept operative in the new system. The focusing system, however, was composed of a DC motor using old limit switches, which needed to be replaced by a new, more precise and versatile solution, chosen to be a stepper motor. The primary mirror M1 shutter motors could also be kept, as they were functioning quite well and their mechanical interface was in perfect condition. Despite the generally good state of the motors, all encoders needed to be replaced by modern models, as the precision of these systems has increased considerably since the construction of the original telescope 50 years ago. There was also an additional problem associated with the availability of spare parts for these encoders, which potentially might have rendered the telescope inoperative in the long term.
5.4.1.2 Optics
Regarding the optics, the primary mirror M1 was quite dirty and degraded, so it needed to be cleaned and aluminized urgently, as its reflectance was only around 50%. The secondary mirror M2, in contrast, seemed to be in good condition.
5.4.1.3 Dome
The dome needed to be restored and upgraded in order to enable remote operations. The slit did not close properly due to some mechanical imperfections, although both the rotation and the slit motors were in good condition. The old control system of the dome was also working in perfect condition, so what was required was just an interface between this old system and the new control system to enable general remote telescope operation.
The former TCS ran on an HP1000 computer, which is nowadays very impractical to operate. It did not allow any kind of remote operation and, by now, nobody at La Silla is capable of using it. Additionally, as the encoders needed replacement, the whole telescope and its peripherals required a new control system.
This diagnosis led to the conclusion that, beyond some minor upgrades, a full new Telescope
Control system was required for the ESO 1m telescope following the algorithmic approaches dis-
cussed and detailed in Section 4.
Regarding the main right ascension and declination drives, the original DC motors of both axes were kept in place, but the encoders connected to the worms, measuring the motor axis angle, were changed to quadrature encoders delivering 100,000 steps per turn with TTL signal levels.
The right ascension and declination axes were equipped with medium-resolution encoders installed on-axis with no reduction, giving a step size of approximately 1.3 arc-seconds. Since these encoders are placed directly on the axis, they are not affected by either gear periodic errors or backlash effects. This dual-encoder per axis arrangement is the same as the one presented in Section 4.1, which was later implemented in the Virtud50 telescope presented in Section 5.1. In this configuration, the worm encoder provides high resolution on sky and a smooth movement, while the on-axis encoder gives an absolute reference and dynamically corrects for the periodic error of the worm, which cannot be measured from the first encoder. Fig. 5.4.2 represents the mechanical setup and encoder placement of the gearing of each axis. The same concept is used for the right ascension and the declination axes.
(Figure 5.4.2 component labels: Axis Encoder, Reductor, Worm Encoder, Gear, DC Motor, Belt, Motor Pulley.)
Focusing of the telescope is in this case provided by the movement of the secondary mirror.
This movement is performed using a DC motor coupled to a planetary reductor. Feedback on
position was returned by a linear absolute encoder. In the modified system, the reduction gear
was kept but the DC motor was replaced by a stepper motor, and the position feedback encoder was removed, as the position can be obtained with sufficient precision from the step count returned by the stepper motor.
The dome rotation and opening system was also integrated into the control system by modifying the original interface. In this case the original motors and relay-based control electronics were kept in their original configurations, as they were in good condition and delivered adequate functionality. After the changes, commands and interfacing with the TCS are performed using Arduino boards and relay shields which communicate through RS232 commands. The three computer racks in the control room were removed and replaced by two electrical boxes beside the telescope, which may be seen in Fig. 4.1.1. The first box (above) hosts the power supplies, the power electronics and the computer for the telescope position control, while the second one, below, hosts the power supply and control electronics for the mirror covers, focuser and dome control. The physical integration of the TCS is shown in Fig. 5.4.3. Every connector and enclosure was designed to respect the IP65 standard.
Figure 5.4.3: New TCS installed, showing the position of the two new electronic boxes which
implement the TCS
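The dome and the other relay-driven peripherals mentioned above are commanded through simple ASCII exchanges over RS232. The sketch below (using pyserial) only illustrates the style of such an exchange; the port name, baud rate and command strings are hypothetical and are not the actual protocol of the ESO 1m interface.

import serial  # pyserial

# Illustrative exchange with a relay-shield controller over a serial line.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as port:
    port.write(b"DOME_OPEN\r\n")                     # hypothetical ASCII command
    reply = port.readline().decode(errors="replace").strip()
    print("controller replied:", reply)              # e.g. an acknowledgement string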
The first two Raspberry Pi units are in charge of the speed control of the respective right ascension
(Diagram labels: S02 Heliocentric Mean FK5 of any equinox; S06 Heliocentric Mean FK5 J2000.0; S07 Geocentric Mean FK5 J2000.0; S15; S16 Topocentric Apparent FK5 of the current equinox; S17 Topocentric Apparent (Ha,Dec); transitions T02 Precess to J2000.0, T06 Heliocentric Parallax, T11 Nutation, T12 Earth's Rotation.)
Figure 5.4.4: Most useful pointing machine states and transitions according to SLALIB and to the
Telescope Pointing Machine definitions.
and declination axes. Since we use two encoders per axis (one placed on the motor axis before reduction and the second one on-axis after reduction), the two encoders do not have the same resolution and precision on sky. The on-axis encoder gives a 1.3 arc-second per step resolution after interpolation and is not affected by mechanical gearing errors or backlash, while the motor encoder gives a 0.03 arc-second per step resolution which can be affected by periodic errors or backlash whose amplitudes can be between 5 and 15 arc-seconds. Thus each of these two Raspberry Pi computers performs the encoder signal fusion using a nonlinear (extended) Kalman filter (EKF) [106][107], following what was described in Section 4.1. The fusion of both encoders can then give a real precision of 0.25 arc-second on sky. The closed-loop speed control of the fused position is also performed for each axis in the corresponding Raspberry Pi.
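To make the idea concrete, the sketch below shows a deliberately simplified, one-dimensional Kalman-style fusion of the two encoder streams. It only illustrates the predict/correct structure: the actual system uses the extended Kalman filter of Section 4.1, and the noise values here are indicative only.

import math

class EncoderFusion:
    """One axis: predict with the fine motor encoder, correct with the coarse
    absolute on-axis encoder (a plain 1-D Kalman update standing in for the EKF)."""

    MOTOR_STEP = 0.03   # arcsec per motor-encoder step (fine, drifts with gear errors)
    AXIS_STEP = 1.3     # arcsec per on-axis encoder step (coarse, absolute)

    def __init__(self):
        self.pos = 0.0                                   # fused position estimate (arcsec)
        self.var = 10.0 ** 2                             # its variance (arcsec^2)
        self.q = 0.05 ** 2                               # process noise per 100 ms cycle
        self.r = (self.AXIS_STEP / math.sqrt(12.0)) ** 2  # axis-encoder quantisation noise

    def update(self, delta_motor_steps, axis_reading_steps):
        # Prediction: integrate the fine motor-encoder increment.
        self.pos += delta_motor_steps * self.MOTOR_STEP
        self.var += self.q
        # Correction: absolute (quantised) measurement from the on-axis encoder.
        z = axis_reading_steps * self.AXIS_STEP
        k = self.var / (self.var + self.r)   # Kalman gain
        self.pos += k * (z - self.pos)
        self.var *= (1.0 - k)
        return self.pos

Because the coarse measurement is absolute, the drift accumulated from the periodic error of the worm stays bounded, while the fine encoder keeps the short-term motion smooth.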
The third Raspberry Pi hosts the pointing and tracking software itself and is in charge of controlling the speed and position of the previously defined axes. The pointing and tracking software is Python-based and uses the Telescope Pointing Machine (TPM) library proposed by Percival [1] and described in Section 4.2. Mechanical non-perpendicularity, polar alignment errors, flexures and similar behaviors are modelled using the P. T. Wallace pointing model equations presented in [138, 17]. The mount control consists of a control loop running with a 100 ms sampling period, so that at each iteration the target coordinates are transformed from the FK5 state to the local mount coordinates (state s22). As described in Section 4.2, a new state is added in the TPM which includes the position correction computed by the pointing model. As a result, the tracking takes the pointing model corrections into account in real time, which allows longer exposures in addition to a better pointing. Fig. 5.4.5 also shows that this configuration allows including multiple pointing models as independent states of the system, which provides an easy way of handling several separate instruments installed on the telescope, each with its own pointing model. Passing from the imager to the spectrograph installed at two separate foci of the telescope can result in different offsets, non-perpendicularities and flexure behavior; this technique allows changing the instrument dynamically without affecting the pointing or tracking quality of the telescope.
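The overall structure of one tracking iteration can be sketched as follows. This is a simplified illustration using astropy for the sky-to-mount conversion; the pointing_model callable and the axis objects are placeholders, and it is not the TPM-based CoolObs code, which also handles precession, nutation, aberration and refraction through the state machine of Fig. 5.4.4.

import astropy.units as u
from astropy.time import Time
from astropy.coordinates import EarthLocation

LA_SILLA = EarthLocation(lat=-29.257 * u.deg, lon=-70.738 * u.deg, height=2347 * u.m)

def tracking_cycle(target_ra_deg, target_dec_deg, pointing_model, ha_axis, dec_axis):
    """One iteration: target (RA, Dec) -> topocentric (HA, Dec) -> corrected axis demands."""
    now = Time.now()
    lst_deg = now.sidereal_time("apparent", longitude=LA_SILLA.lon).deg
    ha_deg = (lst_deg - target_ra_deg) % 360.0
    # Pointing-model correction folded into the demanded coordinates, so that
    # tracking, and not only pointing, benefits from it.
    dha, ddec = pointing_model(ha_deg, target_dec_deg)
    ha_axis.command_position(ha_deg + dha)
    dec_axis.command_position(target_dec_deg + ddec)

Repeating this cycle every 100 ms keeps both the sidereal motion and the pointing-model corrections folded into the demanded axis positions.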
(Diagram labels: S20 - Topocentric Observed (Ra,Dec); transitions T23a and T23b.)
Figure 5.4.5: New states added in the telescope pointing machine (in dark gray) and their associated
transitions.
Figure 5.5.1: Pointing residuals of the telescope control system. The graph shows the pointing error in right ascension versus the pointing error in declination, expressed in arc-seconds, for 15 bright stars randomly selected in the sky.
sioning until the end of 2018. The GUI is shown in Fig. 5.5.2. As seen, it is based on the interface model presented in Section 4.3 of this Thesis. It allows the user to control the telescope, add new points to the pointing model and control all the telescope's peripherals.
Recent observations and measurements of radial velocities obtained with the ESO 1m telescope using FIDEOS, thanks to the stability of the telescope control system implemented and described in Section 4, resulted in the following publication:
• Vanzi, L., Zapata, A., Flores, M., Brahm, R., Tala Pinto, M., Rukdee, S., Jones, M., Ropert,
S., Shen, T., Ramirez, S. and Suc, V., 2018. Precision stellar radial velocity measurements
with FIDEOS at the ESO 1-m telescope of La Silla. Monthly Notices of the Royal Astro-
nomical Society, 477(4), pp.5041-5051.
6 Conclusions
In this Thesis, we presented a complete integrated set of solutions to improve the remote, automatic and unattended operation of a telescope. As detailed in Section 4, we proposed five different innovations to the usual remotely controlled telescope setup, which were afterwards totally or partially implemented in up to 4 different telescope robotization upgrades.
We can define an automatic observational station as an acquisition chain made of a multitude of elements. The work presented here aims to identify bottlenecks in the robustness of operation of an automatic station at specific points of this data acquisition chain, and to improve the concept of use of a robotic telescope by addressing these bottlenecks, trying to contribute general-purpose, universal solutions valid for more than a single system. The innovations we presented are located at separate points of the data acquisition chain, spread from the hardware level to the automatic data processing.
We list here a summary of the presented improvements, their implementation on existing telescopes
and the results obtained.
present in it. Instead of selecting a single star with a limited range of brightness to obtain proper guiding, the external guider allows using the complete field of stars and guiding on an astrometric solution, with no intervention of the user and no possible mistake when selecting the guide star. In this case, the telescope control system would use a separate pointing model for the external guider and continuously compute differential drifts between the main imager and the guider. The corrections applied to the mount would then only correspond to the residuals affecting the main imager. As a consequence, the guider guides on a controlled drifting field in order to suppress any residual drift in the main imager.
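A toy numerical illustration of this differential scheme (not the actual implementation) is given below: the guider measures a drift, the two pointing models predict the expected differential motion between guider and imager, and only the residual attributable to the main imager is sent to the mount.

def mount_correction(measured_guider_drift, predicted_guider_drift, predicted_imager_drift):
    """All quantities in arcsec; returns the correction to apply to the mount."""
    # Part of the measured drift is the intended differential motion between the
    # two foci; removing it leaves the residual that actually affects the imager.
    differential = predicted_guider_drift - predicted_imager_drift
    return measured_guider_drift - differential

# Example: the guider field is expected to drift 0.75" while the imager should
# stay fixed; a measured 1.0" guider drift therefore commands only a 0.25" correction.
print(mount_correction(1.0, 0.75, 0.0))   # -> 0.25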
We implemented this advanced telescope control system on most of the telescopes presented in Section 5, such as the Virtud 50cm telescope in France (Section 5.1), and the PUC40 (Section 5.2) and ESO 50cm (Section 5.3) telescopes at Santa Martina Observatory in the outskirts of Santiago, used for teaching and science purposes. It was also successfully implemented on the ESO 1m telescope presented in Section 5.4 at La Silla Observatory, fully dedicated to radial velocity measurements. In every telescope, the installation has been a success and showed that the precision and stability of the system allowed acquiring premium quality data used for science measurements, as shown by the different publications obtained using the instruments.
could demonstrate the effectiveness of the system, which was able to position the camera around the prime focus of the telescope in a repeatable way and with high precision, compensating the flexures due to tracking in a low-cost, aluminum-profile mount. We could demonstrate that the relative positioning precision was better than any possible visible misalignment. The simplicity of the system also showed its advantage in robustness, since it uses simple mechanical elements and electronics. We showed that in our case it could replace a hexapod at 1/10th of its cost. Since we perform movements at low frequency (one positioning every 10 minutes), we do not need the responsivity of a hexapod in our case. The last difference with a hexapod is the possibility of rotation around the optical axis, which is not a requirement in our case either, since the optics are rotationally symmetric. In addition to adequately fulfilling our requirements, this design could ideally fit other optical applications which need precise relative positioning over five axes with limited space in the optical train: it keeps all the mechanics outside the optical plane, and only the three slings retaining the primary mirror would cross it.
tronic box could control a complete 10-20cm class telescope for a cost within a range comparable to that of one of these telescopes. The system is currently under testing at Obstech-Observatorio El Sauce as an automatic seeing monitor based on a commercial Celestron 11 telescope in which we retrofitted the CoolObs software.
Once this testing is complete, the goal is to package the system for most of the commercial mounts and telescopes usually purchased by the amateur astronomy community, to facilitate and democratize automatic observing applied to backyard astronomy. We also aim to allow these typically simple observing stations to be used as professional devices for remote observing, astrophotography and collaborative science.
7 List of Publications
7.1 Patents
• Royo, S.R. and Suc, V., Pontificia Universidad Catolica de Chile and Universitat Politecnica
de Catalunya, 2016. Method and system for compensating optical aberrations in a telescope.
U.S. Patent 9,300,851.
• Ropert, S., Suc, V., Jordan, A., Tala, M., Liedtke, P. and Royo, S., 2016, July. TCS and
peripheral robotization and upgrade on the ESO 1-meter telescope at La Silla Observatory.
In Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation II
(Vol. 9912, p. 99124W). International Society for Optics and Photonics.
• Galaz, G., Milovic, C., Suc, V., Busta, L., Lizana, G., Infante, L. and Royo, S., 2015. Deep
optical images of Malin 1 reveal new features. The Astrophysical Journal Letters, 815(2),
p.L29.
• Shen, T.C., Soto, R., Reveco, J., Vanzi, L., Fernández, J.M., Escarate, P. and Suc, V.,
2012, September. Development of telescope control system for the 50cm telescope of UC
Observatory Santa Martina. In Software and Cyberinfrastructure for Astronomy II (Vol.
8451, p. 84511T). International Society for Optics and Photonics.
• Royo Royo, S., Arasa Marti, J., Ares Rodrı́guez, M., Atashkhooei, R., Azcona Guerrero, F.J.,
Caum Aregay, J., Riu Gras, J., Sergievskaya, I. and Suc, V., 2009. Nuevas lı́neas de trabajo
en metrologı́a óptica en el CD6 de la UPC. In Reunión Nacional de Óptica: Zaragoza, 4-7
de septiembre 2012: libro de abstracts (pp. 461-464).
• Suc, V., Ropert, S., Jordan, A. and Royo Royo, S., 2018. Bringing old telescopes to a new robotic life. Rev. Mex. AA [ACCEPTED].
• Vanzi, L., Zapata, A., Flores, M., Brahm, R., Tala Pinto, M., Rukdee, S., Jones, M., Ropert,
S., Shen, T., Ramirez, S. and Suc, V., 2018. Precision stellar radial velocity measurements
with FIDEOS at the ESO 1-m telescope of La Silla. Monthly Notices of the Royal Astro-
nomical Society, 477(4), pp.5041-5051.
• Brahm, R., Hartman, J.D., Jordán, A., Bakos, G.Á., Espinoza, N., Rabus, M., Bhatti, W.,
Penev, K., Sarkis, P., Suc, V. and Csubry, Z., 2018. HATS-43b, HATS-44b, HATS-45b, and
HATS-46b: Four Short-period Transiting Giant Planets in the Neptune–Jupiter Mass Range.
The Astronomical Journal, 155(3), p.112.
• Bento, J., Schmidt, B., Hartman, J.D., Bakos, G.Á., Ciceri, S., Brahm, R., Bayliss, D.,
Espinoza, N., Zhou, G., Rabus, M. and Bhatti, W., 2017. HATS-22b, HATS-23b and HATS-
24b: three new transiting super-Jupiters from the HATSouth project. Monthly Notices of
the Royal Astronomical Society, 468(1), pp.835-848.
• Bayliss, D., Hartman, J.D., Zhou, G., Bakos, G.Á., Vanderburg, A., Bento, J., Mancini,
L., Ciceri, S., Brahm, R., Jordán, A. and Espinoza, N., 2018. HATS-36b and 24 other
transiting/eclipsing systems from the HATSouth-K2 Campaign 7 program. The Astronomical
Journal, 155(3), p.119.
• Bakos, G.Á., Csubry, Z., Penev, K., Bayliss, D., Jordán, A., Afonso, C., Hartman, J.D.,
Henning, T., Kovács, G., Noyes, R.W. and Béky, B., 2013. HATSouth: a global network of
fully automated identical wide-field telescopes. Publications of the Astronomical Society of
the Pacific, 125(924), p.154.
• Chen, Y.T., Kavelaars, J.J., Gwyn, S., Ferrarese, L., Côté, P., Jordán, A., Suc, V., Cuillandre,
J.C. and Ip, W.H., 2013. Discovery of a new member of the inner Oort cloud from the Next
Generation Virgo Cluster Survey. The Astrophysical Journal Letters, 775(1), p.L8.
• Kavelaars, J., Suc, V., Chen, Y.T. and Gwyn, S., 2013. 2010 GB174. Minor Planet Electronic
Circulars, 2013.
• Chen, Y.T., Kavelaars, J.J., Gwyn, S., Parker, A., Suc, V., Jordan, A. and Ip, W.H., 2012,
May. The Population of Sedna-Like Objects. In Asteroids, Comets, Meteors 2012 (Vol.
1667).
• Szentgyorgyi, A., McLeod, B., Fabricant, D., Fata, R., Norton, T., Ordway, M., Roll, J.,
Bergner, H., Conroy, M., Curley, D. and Epps, H., 2012, September. The f/5 instrumentation
suite for the Clay Telescope. In Ground-based and Airborne Instrumentation for Astronomy
IV (Vol. 8446, p. 844628). International Society for Optics and Photonics.
• Penev, K., Bakos, G.Á., Bayliss, D., Jordán, A., Mohler, M., Zhou, G., Suc, V., Rabus,
M., Hartman, J.D., Mancini, L. and Béky, B., 2012. HATS-1b: The first transiting planet
discovered by the hatsouth survey. The Astronomical Journal, 145(1), p.5.
• Mohler-Fischer, M., Mancini, L., Hartman, J.D., Bakos, G.Á., Penev, K., Bayliss, D., Jordán,
A., Csubry, Z., Zhou, G., Rabus, M. and Nikolov, N., 2013. HATS-2b: A transiting extrasolar
planet orbiting a K-type star showing starspot activity. Astronomy & Astrophysics, 558,
p.A55.
• Bayliss, D., Zhou, G., Penev, K., Bakos, G.Á., Hartman, J.D., Jordán, A., Mancini, L.,
Mohler-Fischer, M., Suc, V., Rabus, M. and Béky, B., 2013. HATS-3b: An inflated hot
Jupiter transiting an F-type star. The Astronomical Journal, 146(5), p.113.
• Jordán, A., Brahm, R., Bakos, G.Á., Bayliss, D., Penev, K., Hartman, J.D., Zhou, G.,
Mancini, L., Mohler-Fischer, M., Ciceri, S. and Sato, B., 2014. HATS-4b: A dense hot
Jupiter transiting a super metal-rich G star. The Astronomical Journal, 148(2), p.29.
• Zhou, G., Bayliss, D., Penev, K., Bakos, G.Á., Hartman, J.D., Jordán, A., Mancini, L.,
Mohler, M., Csubry, Z., Ciceri, S. and Brahm, R., 2014. HATS-5b: A transiting hot saturn
from the HATsouth survey. The Astronomical Journal, 147(6), p.144.
• Hartman, J.D., Bayliss, D., Brahm, R., Bakos, G.Á., Mancini, L., Jordán, A., Penev, K.,
Rabus, M., Zhou, G., Butler, R.P. and Espinoza, N., 2015. HATS-6b: a warm Saturn
transiting an early M dwarf star, and a set of empirical relations for characterizing K and M
dwarf planet hosts. The Astronomical Journal, 149(5), p.166.
• Zhou, G., Bayliss, D., Hartman, J.D., Bakos, G.Á., Penev, K., Csubry, Z., Tan, T.G., Jordán,
A., Mancini, L., Rabus, M. and Brahm, R., 2013. The mass–radius relationship for very low
mass stars: four new discoveries from the HATSouth Survey. Monthly Notices of the Royal
Astronomical Society, 437(3), pp.2831-2844.
• Mohler-Fischer, M., Mancini, L., Hartman, J.D., Bakos, G.A., Penev, K., Bayliss, D., Jordan,
A., Csubry, Z., Zhou, G., Rabus, M. and Nikolov, N., 2013. VizieR Online Data Catalog:
HATS-2b griz light curves (Mohler-Fischer+, 2013). VizieR Online Data Catalog, 355.
• Bakos, G.Á., Penev, K., Bayliss, D., Hartman, J.D., Zhou, G., Brahm, R., Mancini, L.,
de Val-Borro, M., Bhatti, W., Jordán, A. and Rabus, M., 2015. HATS-7b: A Hot Super
Neptune Transiting a Quiet K Dwarf Star. The Astrophysical Journal, 813(2), p.111.
• Bayliss, D., Hartman, J.D., Bakos, G.Á., Penev, K., Zhou, G., Brahm, R., Rabus, M., Jordán,
A., Mancini, L., de Val-Borro, M. and Bhatti, W., 2015. HATS-8b: A low-density transiting
super-neptune. The Astronomical Journal, 150(2), p.49.
• Brahm, R., Jordán, A., Hartman, J.D., Bakos, G.Á., Bayliss, D., Penev, K., Zhou, G., Ciceri,
S., Rabus, M., Espinoza, N. and Mancini, L., 2015. HATS9-b and HATS10-b: Two Compact
Hot Jupiters in Field 7 of the K2 Mission. The Astronomical Journal, 150(1), p.33.
• Rabus, M., Jordán, A., Hartman, J.D., Bakos, G.Á., Espinoza, N., Brahm, R., Penev, K.,
Ciceri, S., Zhou, G., Bayliss, D. and Mancini, L., 2016. HATS-11b AND HATS-12b: Two
Transiting Hot Jupiters Orbiting Subsolar Metallicity Stars Selected for the K2 Campaign 7.
The Astronomical Journal, 152(4), p.88.
• Rabus, M., Jordan, A., Hartman, J.D., Bakos, G.A., Espinoza, N., Brahm, R., Penev, K.,
Ciceri, S., Zhou, G., Bayliss, D. and Mancini, L., 2016. VizieR Online Data Catalog: Spec-
troscopy and photometry of HATS-11 and HATS-12 (Rabus+, 2016). VizieR Online Data
Catalog, 515.
• Mancini, L., Hartman, J.D., Penev, K., Bakos, G.Á., Brahm, R., Ciceri, S., Henning, T.,
Csubry, Z., Bayliss, D., Zhou, G. and Rabus, M., 2015. HATS-13b and HATS-14b: two
transiting hot Jupiters from the HATSouth survey. Astronomy & Astrophysics, 580, p.A63.
• Mancini, L., Hartman, J.D., Penev, K., Bakos, G.A., Brahm, R., Ciceri, S., Henning, T.,
Csubry, Z., Bayliss, D., Zhou, G. and Rabus, M., 2015. VizieR Online Data Catalog: HATS-
13b and HATS-14b light and RV curves (Mancini+, 2015). VizieR Online Data Catalog,
358.
• Zhou, G., Bayliss, D., Hartman, J.D., Bakos, G.A., Penev, K., Csubry, Z., Tan, T.G., Jordan,
A., Mancini, L., Rabus, M. and Brahm, R., 2015. VizieR Online Data Catalog: 4 transiting
FM binary systems (Zhou+, 2014). VizieR Online Data Catalog, 743.
• Zhou, G., Bayliss, D., Hartman, J.D., Rabus, M., Bakos, G.Á., Jordán, A., Brahm, R.,
Penev, K., Csubry, Z., Mancini, L. and Espinoza, N., 2015. A 0.24+ 0.18 M double-lined
eclipsing binary from the HATSouth survey. Monthly Notices of the Royal Astronomical
Society, 451(3), pp.2263-2277.
• Ciceri, S., Mancini, L., Henning, T., Bakos, G., Penev, K., Brahm, R., Zhou, G., Hartman,
J.D., Bayliss, D., Jordán, A. and Csubry, Z., 2016. HATS-15b and HATS-16b: Two Massive
Planets Transiting Old G Dwarf Stars. Publications of the Astronomical Society of the
Pacific, 128(965), p.074401.
• Brahm, R., Jordán, A., Bakos, G.Á., Penev, K., Espinoza, N., Rabus, M., Hartman, J.D.,
Bayliss, D., Ciceri, S., Zhou, G. and Mancini, L., 2016. HATS-17b: A Transiting Compact
Warm Jupiter in a 16.3 Day Circular Orbit. The Astronomical Journal, 151(4), p.89.
• Brahm, R., Jordan, A., Bakos, G.A., Penev, K., Espinoza, N., Rabus, M., Hartman, J.D.,
Bayliss, D., Ciceri, S., Zhou, G. and Mancini, L., 2016. VizieR Online Data Catalog: Spec-
troscopy and photometry of HATS-17 (Brahm+, 2016). VizieR Online Data Catalog, 515.
• Penev, K., Hartman, J.D., Bakos, G.A., Ciceri, S., Brahm, R., Bayliss, D., Bento, J., Jordan,
A., Csubry, Z., Bhatti, W. and de Val-Borro, M., 2017. VizieR Online Data Catalog: Sloan
i follow-up light curves of HATS-18 (Penev+, 2016). VizieR Online Data Catalog, 515.
• Bhatti, W., Bakos, G.Á., Hartman, J.D., Zhou, G., Penev, K., Bayliss, D., Jordán, A.,
Brahm, R., Espinoza, N., Rabus, M. and Mancini, L., 2016. HATS-19b, HATS-20b, HATS-
21b: Three Transiting Hot-Saturns Discovered by the HATSouth Survey. arXiv preprint
arXiv:1607.00322.
• Bento, J., Schmidt, B., Hartman, J.D., Bakos, G.Á., Ciceri, S., Brahm, R., Bayliss, D.,
Espinoza, N., Zhou, G., Rabus, M. and Bhatti, W., 2017. HATS-22b, HATS-23b and HATS-
24b: three new transiting super-Jupiters from the HATSouth project. Monthly Notices of
the Royal Astronomical Society, 468(1), pp.835-848.
• Espinoza, N., Bayliss, D., Hartman, J.D., Bakos, G.Á., Jordán, A., Zhou, G., Mancini, L.,
Brahm, R., Ciceri, S., Bhatti, W. and Csubry, Z., 2016. HATS-25B THROUGH HATS-
30B: A HALF–DOZEN NEW INFLATED TRANSITING HOT JUPITERS FROM THE
HATSOUTH SURVEY. The Astronomical Journal, 152(4), p.108.
• Espinoza, N., Bayliss, D., Hartman, J.D., Bakos, G.A., Jordan, A., Zhou, G., Mancini, L.,
Brahm, R., Ciceri, S., Bhatti, W. and Csubry, Z., 2017. VizieR Online Data Catalog: i filter
photometry for HATS-25 through HATS-30 (Espinoza+, 2016). VizieR Online Data Catalog,
515.
• de Val-Borro, M., Bakos, G.Á., Brahm, R., Hartman, J.D., Espinoza, N., Penev, K., Ciceri,
S., Jordán, A., Bhatti, W., Csubry, Z. and Bayliss, D., 2016. HATS-31b through HATS-
35b: Five Transiting Hot Jupiters Discovered By the HATSouth Survey. The Astronomical
Journal, 152(6), p.161.
• de Val-Borro, M., Bakos, G.A., Brahm, R., Hartman, J.D., Espinoza, N., Penev, K., Ciceri,
S., Jordan, A., Bhatti, W., Csubry, Z. and Bayliss, D., 2017. VizieR Online Data Catalog:
Photometry for HATS-31 through HATS-35 (de Val-Borro+, 2016). VizieR Online Data
Catalog, 515.
• Bayliss, D., Hartman, J.D., Zhou, G., Bakos, G.Á., Vanderburg, A., Bento, J., Mancini,
L., Ciceri, S., Brahm, R., Jordán, A. and Espinoza, N., 2018. HATS-36b and 24 other
transiting/eclipsing systems from the HATSouth-K2 Campaign 7 program. The Astronomical
Journal, 155(3), p.119.
• Bento, J., Hartman, J.D., Bakos, G.Á., Bhatti, W., Csubry, Z., Penev, K., Bayliss, D., de Val-
Borro, M., Zhou, G., Brahm, R. and Espinoza, N., 2018. HATS-39b, HATS-40b, HATS-41b,
and HATS-42b: three inflated hot Jupiters and a super-Jupiter transiting F stars. Monthly
Notices of the Royal Astronomical Society, 477(3), pp.3406-3423.
• Brahm, R., Hartman, J.D., Jordán, A., Bakos, G.Á., Espinoza, N., Rabus, M., Bhatti, W.,
Penev, K., Sarkis, P., Suc, V. and Csubry, Z., 2018. HATS-43b, HATS-44b, HATS-45b, and
HATS-46b: Four Short-period Transiting Giant Planets in the Neptune–Jupiter Mass Range.
The Astronomical Journal, 155(3), p.112.
• Henning, T., Mancini, L., Sarkis, P., Bakos, G.Á., Hartman, J.D., Bayliss, D., Bento, J.,
Bhatti, W., Brahm, R., Ciceri, S. and Csubry, Z., 2018. HATS-50b through HATS-53b:
four transiting hot Jupiters orbiting G-type stars discovered by the HATSouth survey. The
Astronomical Journal, 155(2), p.79.
• Zhou, G., Bayliss, D., Hartman, J.D., Rabus, M., Bakos, G.A., Jordan, A., Brahm, R., Penev,
K., Csubry, Z., Mancini, L. and Espinoza, N., 2017. VizieR Online Data Catalog: Differential
photometry of the EB* HATS551-027 (Zhou+, 2015). VizieR Online Data Catalog, 745.
• Sarkis, P., Henning, T., Hartman, J.D., Bakos, G.Á., Brahm, R., Jordán, A., Bayliss, D.,
Mancini, L., Espinoza, N., Rabus, M. and Csubry, Z., 2018. HATS-59b, c: A Transiting
Hot Jupiter and a Cold Massive Giant Planet Around a Sun-Like Star. arXiv preprint
arXiv:1805.05925.
• Schöller, M., Argomedo, J., Bauvir, B., Blanco-Lopez, L., Bonnet, H., Brillant, S., Cantzler,
M., Carstens, J., Caruso, F., Choque-Cortez, C. and Derie, F., 2006, June. Recent progress
at the Very Large Telescope Interferometer. In Advances in Stellar Interferometry (Vol. 6268,
p. 62680L). International Society for Optics and Photonics.
Bibliography
[2] R. Spangenburg and K. Moser, Observing the Universe, ser. Out of This World.
Scholastic Library Publishing, 2004. [Online]. Available: https://books.google.cl/books?id=
ekH9GwAACAAJ
[3] R. Burnham, Burnham’s Celestial Handbook, Volume One: An Observer’s Guide to the
Universe Beyond the Solar System, ser. Dover Books on Astronomy. Dover Publications,
2013. [Online]. Available: https://books.google.cl/books?id=z3 CAgAAQBAJ
[5] P. Wallace, “Proposals for keck telescope pointing algorithms,” University of Hawaii, 1994.
[6] ——, “Slalib, positional astronomy library,” Starlink User Note67, vol. 61, 1995.
[7] J. Cheng, The Principles of Astronomical Telescope Design, ser. Astrophysics and
Space Science Library. Springer New York, 2010. [Online]. Available: https:
//books.google.cl/books?id=z-mQWTu7zFoC
[10] D. Clark, “MMT Mount Control System operation and performance,” SAO, Tech. Rep. #02-2, November 2002.
[11] T.-C. Shen, R. Soto, J. Reveco, L. Vanzi, J. M. Fernández, P. Escarate, and V. Suc, “Devel-
opment of telescope control system for the 50cm telescope of uc observatory santa martina,”
in SPIE Astronomical Telescopes+ Instrumentation. International Society for Optics and
Photonics, 2012, pp. 84 511T–84 511T.
[12] P. T. Wallace and K. P. Tritton, “Alignment, pointing accuracy and field rotation of the
uk 1.2-m schmidt telescope,” Monthly Notices of the Royal Astronomical Society, vol. 189,
no. 1, pp. 115–122, 1979. [Online]. Available: http://mnras.oxfordjournals.org/content/189/
1/115.abstract
[13] P. T. Wallace, “A rigorous algorithm for telescope pointing,” Proc. SPIE, vol. 4848, 2002.
[14] P. T. Wallace, “TPOINT – Telescope Pointing Analysis System,” Starlink User Note, vol.
100, 1994.
[15] TheSky6 Getting Started Guide Revision 1.11 Copyright 1984-2004 Software Bisque, Inc.
[17] P. Wallace, “Pointing and tracking algorithms for the keck 10-meter telescope,” in
Instrumentation for Ground-Based Optical Astronomy, ser. Santa Cruz Summer Workshops
in Astronomy and Astrophysics, L. Robinson, Ed. Springer New York, 1988, pp. 691–706.
[Online]. Available: http://dx.doi.org/10.1007/978-1-4612-3880-5 72
[18] P. T. Wallace, “Pointing and tracking software for the Gemini 8-m Telescopes,” in Optical
Telescopes of Today and Tomorrow, ser. Society of Photo-Optical Instrumentation Engineers
(SPIE) Conference Series, A. L. Ardeberg, Ed., vol. 2871, Mar. 1997, pp. 1020–1031.
[19] D. L. Terrett, “A C++ class library for telescope pointing,” in Society of Photo-Optical
Instrumentation Engineers (SPIE) Conference Series, ser. Society of Photo-Optical Instru-
mentation Engineers (SPIE) Conference Series, vol. 6274, Jun. 2006, p. 12.
[20] P. A. Strittmatter, “Multiple mirror telescopes,” in Optical Telescopes of the Future, F. Pacini,
W. Richter, & R. N. Wilson, Ed., 1978, pp. 165–184.
[21] D. R. Blanco, “Specifying and holding collimation tolerance on fast cassegrain telescopes,”
SAO, Tech. Rep. #87-3, August 1987.
[22] T. W. Mark A. A. Neil, Martin J. Booth, “New Modal Wave-front Sensor: a theoretical
analysis,” J. Opt. Soc. Am, vol. 17, no. 6, June 2000.
[23] SAO, Ed., Megacam: Paving the focal plane of MMT with silicon. Proc. SPIE, July 1998.
[24] D. Debarre, M. J. Booth, and T. Wilson, “Image based adaptive optics through optimisation
of low spatial frequencies,” Opt. Express, vol. 15, no. 13, pp. 8176–8190, 2007. [Online].
Available: http://www.opticsexpress.org/abstract.cfm?URI=oe-15-13-8176
[25] E. Grisan, F. Frassetto, V. D. Deppo, G. Naletto, and A. Ruggeri, “No wavefront sensor
adaptive optics system for compensation of primary aberrations by software analysis of a
point source image. 1. methods,” Appl. Opt., vol. 46, no. 25, pp. 6434–6441, 2007. [Online].
Available: http://ao.osa.org/abstract.cfm?URI=ao-46-25-6434
[27] C. Roddier and F. Roddier, “Wave-front reconstruction from defocused images and the
testing of ground-based optical telescopes,” J. Opt. Soc. Am. A, vol. 10, no. 11, pp. 2277–2287,
1993. [Online]. Available: http://josaa.osa.org/abstract.cfm?URI=josaa-10-11-2277
[28] F. Roddier, “Wavefront sensing and the irradiance transport equation,” Appl. Opt., vol. 29,
no. 10, pp. 1402–1403, 1990. [Online]. Available: http://ao.osa.org/abstract.cfm?URI=
ao-29-10-1402
[29] A. B. Bhatia and E. Wolf, “The zernike circle polynomials occurring in diffraction theory,”
Proceedings of the Physical Society. Section B, vol. 65, no. 11, p. 909, 1952. [Online].
Available: http://stacks.iop.org/0370-1301/65/i=11/a=112
[30] R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am., vol. 66,
no. 3, pp. 207–211, 1976. [Online]. Available: http://www.opticsinfobase.org/abstract.cfm?
URI=josa-66-3-207
[31] V. N. Mahajan, “Zernike annular polynomials for imaging systems with annular
pupils,” J. Opt. Soc. Am., vol. 71, no. 1, pp. 75–85, 1981. [Online]. Available:
http://www.opticsinfobase.org/abstract.cfm?URI=josa-71-1-75
[33] G. ming Dai and V. N. Mahajan, “Zernike annular polynomials and atmospheric
turbulence,” J. Opt. Soc. Am. A, vol. 24, no. 1, pp. 139–155, 2007. [Online]. Available:
http://josaa.osa.org/abstract.cfm?URI=josaa-24-1-139
[34] M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett., vol. 32,
no. 1, pp. 5–7, 2007. [Online]. Available: http://ol.osa.org/abstract.cfm?URI=ol-32-1-5
[35] J. Maeda and K. Murata, “Retrieval of wave aberration from point spread function or
optical transfer function data,” Appl. Opt., vol. 20, no. 2, pp. 274–279, 1981. [Online].
Available: http://ao.osa.org/abstract.cfm?URI=ao-20-2-274
[36] P. Hawkes, “The end of an era?” Micron, vol. 24, no. 2, pp. 159 – 162, 1993.
[Online]. Available: http://www.sciencedirect.com/science/article/B6T9N-476DYDN-9/2/
7dd8a969a2be368de2a3d0e253079eec
[37] G. zhen Yang, B. zhen Dong, B. yuan Gu, J. yao Zhuang, and O. K. Ersoy,
“Gerchberg-saxton and yang-gu algorithms for phase retrieval in a nonunitary transform
system: a comparison,” Appl. Opt., vol. 33, no. 2, pp. 209–218, 1994. [Online]. Available:
http://ao.osa.org/abstract.cfm?URI=ao-33-2-209
[38] L. Noethe and S. Guisard, “Analytical expressions for field astigmatism in decentered
two mirror telescopes and application to the collimation of the eso vlt,” Astron.
Astrophys. Suppl. Ser., vol. 144, no. 1, pp. 157–167, 2000. [Online]. Available:
http://dx.doi.org/10.1051/aas:2000201
[39] J. I. Krugler and A. N. Witt, “An alignment technique for ritchey-chretien telescopes,” Publications of the Astronomical Society of the Pacific, vol. 81, no. 480, pp. 254–258,
1969. [Online]. Available: http://www.jstor.org/stable/40674724
[41] D. Stewart, “A platform with six degrees of freedom,” Proceedings of the Institution of Me-
chanical Engineers, vol. 180, pp. 371–386, Jun 1965.
[42] V. Gough and S. Whitehall, “Universal tyre test machine,” in Proc. FISITA 9th Int. Technical
Congress, 1962, pp. 117–137.
[43] K. H. Hunt, Kinematic geometry of mechanisms, ser. Oxford engineering science series. Ox-
ford Clarendon Press, 1978.
[44] P. M. Gray, S. C. West, and W. W. Gallieni, “Support and actuation of six secondaries for
the 6.5-m mmt and 8.4-m lbt telescopes,” in Advanced Technology Optical Telescopes V, vol.
2871, 1997, pp. 374–384. [Online]. Available: http://dx.doi.org/10.1117/12.269060
[46] P. Schipani, S. D’Orsi, D. Fierro, and L. Marty, “Active optics control of vst telescope
secondary mirror,” Appl. Opt., vol. 49, no. 16, pp. 3199–3207, Jun 2010. [Online]. Available:
http://ao.osa.org/abstract.cfm?URI=ao-49-16-3199
[48] S. M. Gunnels and D. Carr, “Design of the magellan project 6.5-meter telescope: telescope
structure and mechanical systems,” in Advanced Technology Optical Telescopes V, vol. 2199,
1994, pp. 414–427. [Online]. Available: http://dx.doi.org/10.1117/12.176208
[51] J. Hartmann, “Ueber die correction eines periodischen fehlers in der bewegung des
potsdamer 80 cm refractors,” Astronomische Nachrichten, vol. 158, no. 1, pp. 1–14, 1902.
[Online]. Available: http://dx.doi.org/10.1002/asna.19021580102
[52] G. E. Kron, “Periodic Error in Worm and Gear Telescope Drives,” Publications of the Astro-
nomical Society of the Pacific, vol. 72, p. 505, Dec. 1960.
[53] R. H. Hardie and C. M. Ballard, “On reducing the periodic error in a telescope drive,”
Publications of the Astronomical Society of the Pacific, vol. 74, no. 438, pp. 242–243,
1962. [Online]. Available: http://www.jstor.org/stable/40673854
[54] D. G. S. Groeneveld, “Considerations in the design of primary worm-gear drives for astro-
nomical telescopes,” Proceedings of the Astronomical Society of Australia, vol. 1, p. 245, Mar.
1969.
[55] C. Hannel, “Anti-backlash gear assembly,” May 31 1988, uS Patent 4,747,321. [Online].
Available: https://www.google.com/patents/US4747321
[56] J. F. R. van der Ven, “A New System to Eliminate Gear Backlash in Telescopes,” The
Messenger, vol. 29, p. 23, Sep. 1982.
[57] M. Fisher, “High-resolution incremental tape encoder on the william herschel telescope,”
in 1994 Symposium on Astronomical Telescopes & Instrumentation for the 21st Century.
International Society for Optics and Photonics, 1994, pp. 889–900.
[58] M. Warner, V. Krabbendam, and G. Schumacher, “Adaptive periodic error correction for
heidenhain tape encoders,” SPIE Astronomical Telescopes+ Instrumentation, pp. 70 123N–
70 123N, 2008.
[60] B. Csák, J. Kovács, G. Szabó, L. Kiss, Á. Dózsa, Á. Sódor, and I. Jankovics, “Affordable
spectroscopy for 1m-class telescopes: recent developments and applications,” 2014.
[63] R. McWilliams, “Portable telescope mount with integral locator using magnetic encoders for
facilitating location of objects and positioning of a telescope,” 2003.
[64] M. Ravensbergen, “Main axes servo systems of the vlt,” in 1994 Symposium on Astronomical
Telescopes & Instrumentation for the 21st Century. International Society for Optics and
Photonics, 1994, pp. 997–1005.
[67] C. Ren, J. Xu, Y. Ye, G. Wang, and X. Jiang, “China song telescope tracking system based
on direct drive technology,” SPIE Astronomical Telescopes+ Instrumentation, pp. 84 491N–
84 491N, 2012.
[68] F. Leonard, M. Venturini, and A. Vismara, “Pm motors for direct driving optical telescope,”
Industry Applications Magazine, IEEE, vol. 2, no. 4, pp. 10–16, 1996.
[69] W. Guo-min, “Review of drive style for astronomical optical telescope,” Progress in Astron-
omy, vol. 4, p. 008, 2007.
[72] L. Chen, Z. Zhang, and H. Wang, “The improvement of ccd auto-guiding system for 2.5m
telescope,” SPIE Astronomical Telescopes+ Instrumentation, vol. 8451, p. 84512K, Sep. 2012.
[73] R. Suszynski and K. Wawryn, “Stars’ centroid determination using psf-fitting method,”
Metrology and Measurement Systems, vol. 22, no. 4, pp. 547–558, 2015.
[74] J. S. Lopez, R. J. Tobar, T. Staig, D. A. Bustamante, C. Menay, H. von Brand, and M. Araya,
“A reference architecture specification of a generic telescope control system,” Astronomical
Data Analysis Software and Systems XIX, vol. 434, p. 317, 2010.
[75] R. J. Tobar, H. H. von Brand, M. A. Araya, and J. S. López, “An amateur telescope control
system: toward a generic telescope control model,” SPIE Astronomical Telescopes+ Instru-
mentation, pp. 70 192I–70 192I, 2008.
[78] J. Irwin, D. Charbonneau, P. Nutzman, and E. Falco, “The mearth project: searching for
transiting habitable super-earths around nearby m dwarfs,” in IAU Symp, vol. 253. Cam-
bridge Univ Press, 2009, pp. 37–43.
[81] F. Melsheimer and R. Genet, “A computerized low-cost 0.4-meter research telescope,” In-
ternational Amateur-Professional Photoelectric Photometry Communications, vol. 15, p. 33,
1984.
[83] D. di Cicco, “S&t test report: The telescope drive master,” Sky and telescope, vol. 122, no. 4,
pp. 60–63, 2011.
[84] A. Pál and G. A. Bakos, “Astrometry in wide-field surveys,” Publications of the Astronomical
Society of the Pacific, vol. 118, no. 848, pp. 1474–1483, 2006. [Online]. Available:
http://www.journals.uchicago.edu/doi/abs/10.1086/508573
[85] G. Bakos, C. Afonso, T. Henning, A. Jordan, M. Holman, R. W. Noyes, P. D. Sackett,
D. Sasselov, G. Kovacs, Z. Csubry, and A. Pal, “Hat-south: A global network of southern
hemisphere automated telescopes to detect transiting exoplanets,” in Transiting Planets,
ser. Proceedings of the International Astronomical Union, vol. 4, 5 2008, pp. 354–357.
[Online]. Available: http://journals.cambridge.org/article S174392130802663X
[86] D. Reichart, M. Nysewander, J. Moran, J. Bartelme, M. Bayliss, A. Foster, J. Clemens,
P. Price, C. Evans, J. Salmonson et al., “Prompt: panchromatic robotic optical monitoring
and polarimetry telescopes,” arXiv preprint astro-ph/0502429, 2005.
[87] A. Shporer, T. Brown, T. Lister, R. Street, Y. Tsapras, F. Bianco, B. Fulton,
and A. Howell, “The lcogt network,” in The Astrophysics of Planetary Systems:
Formation, Structure, and Dynamical Evolution, ser. Proceedings of the International
Astronomical Union, vol. 6, 10 2010, pp. 553–555. [Online]. Available: http:
//journals.cambridge.org/article S1743921311021193
[88] G. Christie, “Detecting exoplanets by gravitational microlensing using a small telescope,”
arXiv preprint astro-ph/0609599, 2006.
[89] L. Books, “Free astronomy software: Kstars, celestia, stellarium, starlink project, digital
universe atlas, instrument neutral distributed interface,” 2010.
[90] O. Fors, F. Montojo, J. Nunez, J. Muiños, J. Boloix, R. Baena, R. Morcillo, and M. Merino,
“Status of telescope fabra roa at montsec: Optical observations for space surveillance &
tracking,” arXiv preprint arXiv:1109.5903, 2011.
[91] F. Montojo, O. Fors, J. Muinos, J. Núñez, R. López-Morcillo, R. Baena, J. Boloix, T. López-
Moratalla, and M. Merino, “The fabra-roa telescope at montsec (tfrm): A fully robotic
wide-field telescope for space surveillance and tracking,” arXiv preprint arXiv:1109.5918,
2011.
[92] M. Jelı́nek, P. Kubánek, M. Nekola, and R. Hudec, “Bart: an intelligent grb and sky mon-
itoring telescope (2000–2004),” Astronomische Nachrichten, vol. 325, no. 6-8, pp. 678–678,
2004.
[93] P. Kubánek, M. Jelı́nek, M. Nekola, M. Topinka, J. Strobl, R. Hudec, T. d. J. M. Sanguino,
A. d. U. Postigo, and A. J. Castro-Tirado, “Rts2: remote telescope system, 2nd version,”
Gamma-Ray Bursts: 30 Years of Discovery, vol. 727, pp. 753–756, 2004.
[94] P. Kubánek, M. Jelı́nek, S. Vı́tek, A. de Ugarte Postigo, M. Nekola, and J. French, “Rts2:
a powerful robotic observatory manager,” Astronomical Telescopes and Instrumentation, pp.
62 741V–62 741V, 2006.
[95] P. Kubánek, “Rts2: the remote telescope system,” Advances in Astronomy, vol. 2010, 2010.
[96] P. Kubánek, M. Jelínek, S. Vítek, A. de Ugarte Postigo, M. Nekola, J. French, and
M. Prouza, “Status of robotic telescopes driven by rts2 (bart, bootes, fram and watcher),”
Il Nuovo cimento della Società italiana di fisica. B, vol. 121, no. 12, pp. 1501–1502, 2006.
[97] P. Kubánek, M. Jelı́nek, J. French, M. Prouza, S. Vı́tek, A. J. Castro-Tirado, and V. Reglero,
“The rts2 protocol,” SPIE Astronomical Telescopes+ Instrumentation, pp. 70 192S–70 192S,
2008.
[98] S. Fraser, “Scheduling for robonet-1 homogenous telescope network,” Astronomische
Nachrichten, vol. 327, no. 8, pp. 779–782, 2006.
[100] A. Chavan, G. Giannone, D. Silva, T. Krueger, and G. Miller, “Nightly scheduling of eso’s
very large telescope,” in Astronomical Data Analysis Software and Systems VII, vol. 145,
1998, p. 255.
[101] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic
algorithm: Nsga-ii,” Evolutionary Computation, IEEE Transactions on, vol. 6, no. 2, pp.
182–197, 2002.
[102] F. Förster, N. López, J. Maza, P. Kubánek, and G. Pignata, “Scheduling in targeted transient
surveys and a new telescope for chase,” Advances in Astronomy, vol. 2010, 2009.
[103] P. Kubanek, “Genetic algorithm for robotic telescope scheduling,” arXiv preprint
arXiv:1002.0108, 2010.
[105] Y. Zimmerman, Y. Oshman, and A. Brandes, “Improving the accuracy of analog encoders
via kalman filtering,” Control Engineering Practice, vol. 14, no. 4, pp. 337 – 350, 2006.
[Online]. Available: http://www.sciencedirect.com/science/article/pii/S0967066105000407
[106] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of basic
Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[107] R. E. Kalman and R. S. Bucy, “New results in linear filtering and prediction theory,” Journal
of basic engineering, vol. 83, no. 1, pp. 95–108, 1961.
[111] J. Medkeff, “The ascom revolution,” Sky and Telescope, vol. 99, no. 5, May 2000.
[112] J. Mutlaq. Instrument neutral distributed interface white paper. [Online]. Available:
http://www.clearskyinstitute.com/INDI/INDI.pdf
[113] ——. (2017) Indi open astronomy instrumentation. [Online]. Available: http://www.indilib.
org
[114] J. Rumbaugh, I. Jacobson, and G. Booch, The Unified Modeling Language Reference Manual,
2nd ed. Pearson Higher Education, 2004.
[115] C. Pernechele, F. Bortoletto, and K. Reif, “Hexapod control for an active secondary mirror:
general concept and test results,” Appl. Opt., vol. 37, no. 28, pp. 6816–6821, Oct 1998.
[Online]. Available: http://ao.osa.org/abstract.cfm?URI=ao-37-28-6816
[117] P. Nanua, K. Waldron, and V. Murthy, “Direct kinematic solution of a stewart platform,”
Robotics and Automation, IEEE Transactions on, vol. 6, no. 4, pp. 438–444, Aug 1990.
[118] P. Schipani, “Hexapod kinematics for secondary mirror aberration control,” Mem. SAIt Suppl,
vol. 9, pp. 472–474, 2006.
[119] L.-W. Tsai, Robot analysis: the mechanics of serial and parallel manipulators. John Wiley
& Sons, 1999.
[121] S. Royo and V. Suc, “Método y sistema para compensar aberraciones ópticas en un
telescopio,” WO Patent App. PCT/ES2011/070,541, Mar. 1, 2012. [Online]. Available:
https://www.google.cl/patents/WO2012025648A1?cl=es
[122] F. Gao, W. Li, X. Zhao, Z. Jin, and H. Zhao, “New kinematic structures
for 2-, 3-, 4-, and 5-dof parallel manipulator designs,” Mechanism and Machine
Theory, vol. 37, no. 11, pp. 1395 – 1411, 2002. [Online]. Available: http:
//www.sciencedirect.com/science/article/pii/S0094114X02000447
[125] H. Li, O. Tutunea-Fatan, and H.-Y. Feng, “An improved tool path discretization
method for five-axis sculptured surface machining,” The International Journal of Advanced
Manufacturing Technology, vol. 33, no. 9-10, pp. 994–1000, 2007. [Online]. Available:
http://dx.doi.org/10.1007/s00170-006-0529-z
[126] V. Suc, S. Royo, A. Jordán, G. Bakos, and K. Penev, “One-shot focusing using
the entropy as a merit function,” pp. 844 914–844 914–10, 2012. [Online]. Available:
http://dx.doi.org/10.1117/12.927021
[127] K. Kuehn and R. Hupe, “Real-Time Analysis of Large Astronomical Images,” ArXiv e-prints,
Mar. 2012.
[128] M. Booth, “Wave front sensor-less adaptive optics: a model-based approach using
sphere packings,” Opt. Express, vol. 14, no. 4, pp. 1339–1352, 2006. [Online]. Available:
http://www.opticsexpress.org/abstract.cfm?URI=oe-14-4-1339
[130] E. Bertin and S. Arnouts, “Sextractor: Software for source extraction,” Astron.
Astrophys. Suppl. Ser., vol. 117, no. 2, pp. 393–404, 1996. [Online]. Available:
http://dx.doi.org/10.1051/aas:1996164
[131] T.-C. Shen, R. Soto, J. Reveco, L. Vanzi, J. M. Fernandez, P. Escarate, and V. Suc,
“Development of telescope control system for the 50cm telescope of uc observatory santa
martina,” pp. 8451 – 8451 – 8, 2012. [Online]. Available: http://dx.doi.org/10.1117/12.925567
[132] L. Vanzi, J. A. Chacon, M. Baffico, G. Avila, C. Guirao, T. Rivinus, S. Stefl, and D. Baade,
“Pucheros: a low-cost fiber-fed echelle spectrograph for the visible spectral range,” pp. 7735
– 7735 – 7, 2010. [Online]. Available: http://dx.doi.org/10.1117/12.857020
[133] M. Baffico, G. Avila, D. Baade, E. Bendek, C. Guirao, O. Gonzalez, P. Marchant, V. Salas,
I. Toledo, S. Vasquez et al., “Observatorio uc at santa martina: A small observing facility
operated by puc,” in Ground-based and Airborne Telescopes II, vol. 7012. International
Society for Optics and Photonics, 2008, p. 70122O.
[134] T.-C. Shen, R. Soto, J. Reveco, L. Vanzi, J. M. Fernández, P. Escarate, and V. Suc, “Devel-
opment of telescope control system for the 50cm telescope of uc observatory santa martina,”
in Software and Cyberinfrastructure for Astronomy II, vol. 8451. International Society for
Optics and Photonics, 2012, p. 84511T.
Appendices
A Slice Definitions
#pragma once
module CoolObs
{

    // Define vector types
    sequence<string> StringVector;
    sequence<double> DoubleVector;
    sequence<int>    IntVector;
    sequence<short>  ShortVector;

    // Define the main master Ice class from which every Ice object derives
    interface CoolObsPeripheral
    {
        void GetStatus(out string Status);
        void Shutdown();
    };

    // Define the mount class
    interface GenericMount extends CoolObsPeripheral
    {
        void Home();
        void GotoPosition(double Ra2000, double Dec2000, double RaNonSiderealSpeed, double DecNonSiderealSpeed);
        void GotoAltAz(double Az, double Alt, out bool result);
        void GetObject(string Name, out double ra, out double dec);
        void GetObjectEphem(string Name, out double az, out double alt, out double airmass, out string ha, out int Limits);
        void GetCoordEphem(double ra, double dec, out double az, out double alt, out double airmass, out string ha, out int Limits);
        void Dither(double deltaRa, double DeltaDec);
        void GotoObject(string Name);
        void TrackPosition(double Ra2000, double Dec2000, double RaNonSiderealSpeed, double DecNonSiderealSpeed);
        void Sync(double Ra2000, double Dec2000);
        void SyncObject(string Name);
        void MapOnTarget(string PointingModelId);
        void MapOnPosition(double Ra2000, double Dec2000, string PointingModelId);
        void GetRealPosition(out double Axis1, out double Axis2);
        void GetRealTarget(out double Axis1, out double Axis2);
        void GetRealSpeed(out double SpeedAxis1, out double SpeedAxis2);
        void GetAltAz(out double Alt, out double Az);
        void GetAirmass(out double Airmass);
        void GetJd(out double JulianDate);
        void GetHourAngle(out string HourAngle);
        void GetLST(out string LocalSiderealTime);
        void SetPark();
        void GotoPark();
        void Stop();
        void GetObservatory(out double Lat, out double Lon, out double Alt);
        void GetMountType(out string MountType);
        void MeasurePosition(string PointingModelId, out double Ra2000, out double Dec2000);
        void MeasureTarget(string PointingModelId, out double Ra2000, out double Dec2000);
        void GetCurrentPointingModel(out string PointingModelIdentifier);
        void GetPointingModels(out StringVector PointingModels);
        void SetPointingModel(string PointingModelIdentifier);
        void GetTrackingOffset(out double Axis1, out double Axis2);
        void GetTrackingError(out double Axis1, out double Axis2);
        void GetTcsOffset(out double Axis1, out double Axis2);
        void Guide(double RaGuide, double DecGuide);
        void MountGuide(double Axis1, double Axis2);
        void SetControl(double KpA1, double KiA1, double KdA1, double Kp2A1, double Ki2A1, double Kd2A1,
                        double KpA2, double KiA2, double KdA2, double Kp2A2, double Ki2A2, double Kd2A2,
                        double SamplingTime, double MaxSpeed, double Control12Limit);
    };

    // Define the pointing model class
    interface PointingModel extends CoolObsPeripheral
    {
        void SetModel(string PointingModelFile);
        void StartNewModel();
        void SaveModel();
        void GetObservations(out DoubleVector SkyPositionAxis1, out DoubleVector MountPositionAxis1,
                             out DoubleVector SkyPositionAxis2, out DoubleVector MountPositionAxis2);
        void SetObservations(DoubleVector SkyPositionAxis1, DoubleVector MountPositionAxis1,
                             DoubleVector SkyPositionAxis2, DoubleVector MountPositionAxis2);
        void SetRansacOptimization(int Active);
        void IsRansacOptimizationActive(out int Active);
        void GetParamNames(out StringVector ParamNames);
        void GetParamActive(out IntVector ActiveParams);
        void GetParamValues(out DoubleVector ParamValues);
        void SetParamActive(IntVector ActiveParams);
        void ComputeModel();
        void GetIdentifiersList(out StringVector IdentifiersList);
        void GetPlot(string Axis1Identifier, string Axis2Identifier, out DoubleVector Axis1Values, out DoubleVector Axis2Values);
        void CorrectPosition(double Axis1SetPoint, double Axis2SetPoint, out double CorrectedAxis1, out double CorrectedAxis2);
        void AddObservation(double SkyAxis1, double SkyAxis2, double RealAxis1, double RealAxis2);
    };

    // Define the rotator
    interface Rotator extends CoolObsPeripheral
    {
        void SetPositionAngle(double Position);
        void HomeRotator();
        void GetPositionAngle(out double Position);
        void GotoPositionAngle(double Position);
        void Track(double speed);
    };

    // Define filter wheel objects
    interface CoolObsFilterWheel extends CoolObsPeripheral
    {
        void GetFilters(out StringVector Filters);
        void GetCurrentFilter(out string filter);
        void SetCurrentFilter(string filter);
    };

    // The callback line receiver is a callback which is sent for each line read from a camera object
    interface CallbackLineReceiver
    {
        void callback(ShortVector Line);
        void Abort();
    };

    // Define the generic camera object
    interface CoolObsSimpleCamera extends CoolObsPeripheral
    {
        void GetCameras(out StringVector Cameras);
        void GetCamerasSerials(out StringVector Serials);
        void Connect(string Camera, out short result);
        void GetCameraModel(out string CameraModel);
        void GetCameraSensor(out StringVector SensorsNames);
        void GetCameraSerialNumber(out string SerialNumber);
        void GetSensorSize(out short XSize, out short YSize);
        void GetPixelSize(out double PixelSizeX, out double PixelSizeY);
        void GetFullWellCapacity(out double FullWell);
        void GetMaxAdu(out long MaxAdu);
        void GetBinningModes(out short BinXMax, out short BinYMax);
        void GetBinning(out short BinX, out short BinY);
        void GetGainModes(out StringVector GainModes);
        void GetCurrentGain(out double Gain);
        void GetImagingModes(out StringVector ImagingModes);
        void GetCurrentImagingMode(out string ImagingMode);
        void GetSubraster(out short XMin, out short YMin, out short Width, out short Height, out bool result);

        void GetTemperaturesNames(out StringVector TemperaturesSensors);
        void GetTemperatures(out DoubleVector Temperatures);
        void GetTemperatureSetPoint(out double SetPoint);
        void GetCameraProcessInfo(out double Percent);
        void GetCameraInfo(out string CameraInfo);
        void CanSetFilter(out bool value);
        void CanControlTemp(out bool value);
        void CanSetGain(out bool value);
        void CanSetSubraster(out bool value);
        void LockCameraSensor(bool value);
        void SetCameraSensor(string SensorName, out bool result);
        void SetBinning(short BinX, short BinY, out bool result);
        void SetCurrentGain(string Gain);
        void SetCurrentImagingMode(string ImagingMode);
        void SetSubraster(short XMin, short YMin, short Width, short Height, out bool result);

        void SetTemperatureSetPoint(double SetPoint);
        void SetExpTime(double ExposureTime, out bool result);
        void StartExposureAsync(CallbackLineReceiver* proxy);
        void StopExposure();
        void AbortExposure(string SensorName);
        void StartCooling(double Setpoint);
        void StopCooling();
        void StartWarmUp();
    };

    // Generate a CoolObs camera which is a group of camera and filter wheel in the same server (like a big camera)
    interface CoolObsCamera extends CoolObsFilterWheel, CoolObsSimpleCamera
    {
    };

    // Define the camera manager object
    interface CoolObsCameraManager extends CoolObsCamera
    {
        void CanFocus(out bool value);
        void CanCollim(out bool value);
        void TakeImage(string ImageName);
        void SetImageSave(bool SaveImageActive);
        void SetImagePath(string SaveMode, string ImagePath);
        void SetFlatTargetADU(double TargetADU);
        void SetAutoAstrometry(bool Active, string AstrometryAction);
        void SetDithering(double DitherValue);
        void SetExposureTime(double ExposureTime);
        void GetExposureTime(out double ExposureTime);
        void GetReadoutTime(out double ReadoutTime);
        void SetGuideDelay(double GuideDelay);
        void GetFocus(out double value);
        void SetFocus(double value);
        void AutoFocus();
        void AutoCollim();
        void StartGuiding();
        void StopGuiding();
        void PauseGuiding();
        void ResumeGuiding();
        void GetGuidingDataX(out DoubleVector GuidingDataX);
        void GetGuidingDataY(out DoubleVector GuidingDataY);
        void GetGuidingOffsetsX(out DoubleVector GuidingOffsetsX);
        void GetGuidingOffsetsY(out DoubleVector GuidingOffsetsY);
        void GetGuidingDataTime(out DoubleVector GuidingDataTime);
        void GetLastFlatParams(out double CorrectionFactor);
        void GetPointingModel(out string PointingModel);
        void GetDefaultMount(out string MountId);
        // ...
        void CloseLoopPosition(float Position);
        void SetP(double P);
        void SetI(double I);
        void SetD(double D);
        void SetPID(double P, double I, double D);
        void Stop();
        void GetPosition(out double Position);
        void GetSpeed(out double Speed);
        void GetTrackingData(out DoubleVector Speed, out DoubleVector SetPoint, out IntVector Output,
                             out DoubleVector Time, out DoubleVector Position, out DoubleVector AbsPosition,
                             out DoubleVector SerialPosition, out DoubleVector PosSetPoint,
                             out double Resolution, out double AbsResolution, out double Sampling);
        void StartRecord();
        void SetDoublePwmSpeedLimit(int limit);

    };

    // Define the cloud sensor
    interface AAGCloudSensor extends CoolObsPeripheral
    {
        void GetAmbientTemp(out double temp);
        void GetRainSensorTemp(out double temp);
        void GetSkyTemp(out double temp);
        void GetRainFrequency(out double Freq);
        void GetIllumination();
    };

};
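The Slice file above is compiled into language-specific stubs by the ZeroC Ice tool chain (slice2cpp, slice2py, and so on), and each peripheral defined in the module is then reached by a client through an Ice proxy that implements the corresponding interface. As a minimal sketch of this pattern, the Python fragment below loads the Slice file at run time and drives the mount through the GenericMount interface; the file name CoolObs.ice, the object identity GenericMount, the endpoint string and the coordinate values are placeholders chosen for illustration, not the values used in the deployed system.

# Minimal GenericMount client sketch. The proxy string and the coordinates
# below are illustrative placeholders, not the deployed configuration.
import sys
import Ice

Ice.loadSlice("CoolObs.ice")          # generate the Python stubs from the Slice file at run time
import CoolObs

with Ice.initialize(sys.argv) as communicator:
    base = communicator.stringToProxy("GenericMount:tcp -h 192.168.0.10 -p 10000")
    mount = CoolObs.GenericMountPrx.checkedCast(base)   # narrow the proxy to the Slice interface
    if mount is None:
        raise RuntimeError("remote object does not implement CoolObs::GenericMount")

    mount.Home()                                  # reference the axes
    mount.GotoPosition(83.82, -5.39, 0.0, 0.0)    # J2000 target, zero non-sidereal rates
    alt, az = mount.GetAltAz()                    # void operations with out-parameters return a tuple
    print("Alt/Az:", alt, az)
    print("Status:", mount.GetStatus())           # a single out-parameter is returned directly

The other peripherals (PointingModel, CoolObsCameraManager, Rotator, AAGCloudSensor, ...) are used in the same way, each through its own proxy obtained from the corresponding server.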
B Puc40 Electrical documentation
C Eso50 Electrical documentation
ELECTRICAL CONNECTIONS
ESO50TCS
Yerko Luco
Electronics Engineer
Observatorio Docente
Pontificia Universidad Católica de Chile
yluco@astro.puc.cl
Description
This document describes the internal electrical connections of the TCS control system of the ESO50 telescope.
Figure 1 shows the electronics used by the TCS, the cable connections, and the input and output connectors.
TCS Cables
The internal cables of the enclosure are labelled as follows:
TCS-01
Cable TCS-01 connects the telescope limit switches to the IPEC controller. Table 2 lists the materials used to build the cable. Figure 3 shows the detail of cable TCS-01.
Detail | Supplier
TCS-02
Cable TCS-02 connects the telescope RA and DEC DC motors to the IPEC RA and IPEC DEC controllers. Table 3 lists the materials used to build the cable. Figure 4 shows the detail of cable TCS-02.
Detail | Supplier
TCS-03
Cable TCS-03 connects the network input of the panel to the TCS switch. Table 4 lists the materials used to build the cable. Figure 5 shows the detail of cable TCS-03.
Detail | Supplier
TCS-04
Cable TCS-04 connects the main encoders from the panel input to the PC-104 board. Table 5 lists the materials used to build the cable. Figure 6 shows the detail of cable TCS-04.
Detail | Supplier
TCS-05
Cable TCS-05 connects the secondary encoders from the panel input to the PC-104 board. Table 6 lists the materials used to build the cable. Figure 7 shows the detail of cable TCS-05.
Detail | Supplier
TCS-06
Cable TCS-06 connects the secondary encoders from the PC-104 RA and PC-104 DEC boards to the Raspberry Pi RA and Raspberry Pi DEC units that read them. Table 7 gives the description and supplier of the cable. Figure 8 shows the detail of cable TCS-06.
Detail | Supplier
TCS-07
Cable TCS-07 connects the primary encoders from the PC-104 RA and PC-104 DEC boards to the IPEC RA and IPEC DEC controllers. Table 8 lists the materials used to build the cable. Figure 9 shows the detail of cable TCS-07.
Detail | Supplier
TCS-08
Cable TCS-08 connects the E-STOPs from the panel input to the IPEC RA and IPEC DEC controllers. Table 9 lists the materials used to build the cable. Figure 10 shows the detail of cable TCS-08.
Detail | Supplier