POCS Overview

The PANOPTES Observatory Control System (POCS) is the primary software responsible for running a PANOPTES unit. POCS is implemented as a finite state machine (described below) that has three primary responsibilities:

  • overall control of the unit for taking observations,
  • relaying messages between various components of the system,
  • and determining the operational safety of the unit.

POCS is designed such that under normal operating conditions the software is initialized once and left running from day-to-day, with operation moving to a sleeping state during daylight hours and observations resuming automatically each night when POCS determines conditions are safe.

POCS is implemented as four separate logical layers, where increasing levels of abstraction take place between each of the layers. These layers are the low-level Core Layer, the Hardware Abstraction Layer, the Functional Layer, and the high-level Decision Layer.



Diagram of POCS software layers. Note that the items in yellow (Dome, Guider, and TheSkyX) are not typically used by PANOPTES observatories (PAN006, however, is inside an Astrohaven dome).

The TheSkyX interface was added for the Huntsman Telescope, which also uses POCS for control. It is included in the diagram to show the flexibility of the Functional Layer in interacting with components from the HAL.

POCS Software Design

Core Layer

The Core Layer is the lowest level and is responsible for interacting directly with the hardware. For DSLR cameras this is accomplished by providing wrappers around the existing gphoto2 software package. For PANOPTES, most other attached hardware works via direct RS-232 serial communication through a USB-to-Serial converter. A utility module was written for common read/write operations that automatically handles details associated with buffering, connection, etc. Support for TheSkyX was written into POCS for the Huntsman Telescope. The overall goal of the Core Layer is to provide a consistent interface for modules written at the HAL level.

Hardware Abstraction Layer (HAL)

The use of a HAL is widespread both in computing and robotics. In general, a HAL is meant to hide low-level hardware and device-specific details from higher-level programming [Elkady2012]. Thus, while every camera ultimately needs to support, for instance, a take_exposure(seconds=120) command, the details of how a specific camera model is programmed to achieve that may be very different. From the perspective of higher-level software those details are unimportant; all that matters is that every attached camera reacts appropriately to the take_exposure command.

While the Core Layer consists of one module per feature, the HAL implements a Template Pattern [Gamma1993] wherein a base class provides an interface to be used by higher levels and concrete classes are written for each specific device type. For example, a base Mount class dictates an interface that includes methods such as slew_to_home, set_target_coordinates, slew_to_target, park, etc. The concrete implementation for the iOptron mount then uses the Core Layer level RS-232 commands to issue the specific serial commands needed to perform those functions. Likewise, a Paramount ME II concrete implementation of the Mount class would use the Core Layer interface to TheSkyX to implement those same methods. Thus, higher levels of the software can make a call to mount.slew_to_target() and expect it to work regardless of the particular mount type attached.
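The Template Pattern described above can be sketched roughly as follows. The method names follow the interface named in the text, but the class structure and the serial command strings are invented placeholders, not the actual POCS classes or real iOptron commands.

```python
from abc import ABC, abstractmethod


class AbstractMount(ABC):
    """Interface that all concrete mounts must implement."""

    def __init__(self):
        self.is_parked = True
        self.target = None

    def set_target_coordinates(self, coords):
        self.target = coords

    @abstractmethod
    def slew_to_home(self):
        """Move the mount to its home position."""

    @abstractmethod
    def slew_to_target(self):
        """Move the mount to the current target coordinates."""

    @abstractmethod
    def park(self):
        """Move the mount to the park position."""


class IOptronMount(AbstractMount):
    """Concrete implementation that issues RS-232 serial commands."""

    def __init__(self, serial_port):
        super().__init__()
        self.serial_port = serial_port  # a Core Layer serial wrapper

    def slew_to_home(self):
        self.serial_port.write(":home#")  # placeholder command string

    def slew_to_target(self):
        self.is_parked = False
        self.serial_port.write(":slew#")  # placeholder command string

    def park(self):
        self.is_parked = True
        self.serial_port.write(":park#")  # placeholder command string
```

A Paramount ME II implementation would subclass the same AbstractMount but route each method through the TheSkyX interface instead, so higher layers can call mount.slew_to_target() without caring which concrete class is behind it.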

Another advantage of this type of setup is that a concrete implementation of a hardware simulator can be created to test higher-level software without actually having physical devices attached, which is how much of the PANOPTES testing framework is implemented [1].
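A simulator in this scheme is simply another concrete implementation of the base class. The sketch below is illustrative (the class names and the returned filename are assumptions, not the actual POCS simulator):

```python
from abc import ABC, abstractmethod


class AbstractCamera(ABC):
    @abstractmethod
    def take_exposure(self, seconds):
        """Expose for `seconds` and return a filename for the image."""


class SimulatedCamera(AbstractCamera):
    """Drop-in camera that fabricates exposures for tests."""

    def take_exposure(self, seconds):
        # A real camera would drive the shutter via gphoto2; the
        # simulator just records the request and returns a fake file.
        return f"simulated_{int(seconds)}s.fits"
```

Because the simulator satisfies the same interface as the real device, the higher layers under test cannot tell the difference.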

Functional Layer

The Functional Layer is analogous to a traditional observatory: an Observatory has a location from which it operates, attached hardware which it uses to observe, a scheduler (a modified dispatch scheduler [Denny2004] in the case of PANOPTES) to select from the available target_list to form valid observations, etc.

The Observatory (i.e. the Functional Layer) is thus where most of the operations associated with taking observations actually happen. When the software is used interactively (as opposed to the usual automatic mode), it is primarily the Observatory that an individual interacts with.

The Functional Layer is also responsible for connecting to and initializing the attached hardware, specified by accompanying configuration files. The potential list of targets and the type of scheduler used are also loaded from a configuration file. The particular type of scheduler is agnostic to the Observatory, which simply calls scheduler.get_observation() such that the external scheduler can handle all the logic of choosing a target. In the figure listed above this is represented by the “Scheduler” and “Targets” that are input to the “Observatory.”
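The scheduler-agnostic relationship described above can be illustrated with a minimal sketch. The toy dispatch logic and dictionary fields here are assumptions for illustration; only the get_observation() call mirrors the text.

```python
class DispatchScheduler:
    """Toy dispatch scheduler: pick the highest-priority visible target."""

    def __init__(self, target_list):
        self.target_list = target_list  # in POCS, loaded from a config file

    def get_observation(self):
        visible = [t for t in self.target_list if t["visible"]]
        return max(visible, key=lambda t: t["priority"])


class Observatory:
    """Delegates target selection without inspecting the scheduler type."""

    def __init__(self, scheduler):
        self.scheduler = scheduler

    def get_observation(self):
        # All selection logic lives in the external scheduler.
        return self.scheduler.get_observation()
```

Swapping in a different scheduling strategy only requires providing another object with a get_observation() method; the Observatory itself never changes.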

Decision Layer

The Decision Layer is the highest level of the system and can be viewed as the “intelligence” layer. When using the software in interactive mode, the human user takes on the role of the Decision Layer while in automatic operations this is accomplished via an event-driven finite state machine (FSM).

A state machine is a simple model of a system where that system can only exist in discrete conditions or modes. Those conditions or modes are called states. Typically states determine how the system reacts to input, either from a user or the environment. A state machine can exist solely in the software or the software can be representative of a physical model. For PANOPTES, the physical unit is the system and POCS models the condition of the hardware. The “finite” aspect refers to the fact that there are a limited and known number of states in which the system can exist.

Examples of PANOPTES states include:

  • sleeping: Occurs during daylight hours; the cameras are facing down and the mount is unresponsive to slew commands.
  • observing: The cameras are exposing and the mount is tracking.
  • scheduling: The mount is unparked, not slewing or tracking, it is dark, and the software is running through the scheduler.

PANOPTES states are named with verbs to represent the action the physical unit is currently performing.

POCS is designed to have a configurable state machine, with the highest-level logic written in each state definition file. State definition files are meant to be simple, as most of the detailed logic should exist in the Functional Layer. Students using POCS for educational purposes will most likely start with the state files.

State machines are responsible for mapping inputs (e.g. get_ready, schedule, start_slewing, etc.) to outputs, where the particular mapping depends on the current state [Lee2017]. The mappings of input to output are governed by transition events [2].

State definitions and their transitions are defined external to POCS, allowing for multiple possible state machines that are agnostic to the layers below the Decision Layer. This external definition is similar to the “Scheduler” in the Functional Layer and is represented similarly in the figure above.

POCS is responsible for determining operational safety via a query of the weather station, determination of sun position, etc. Each transition has a set of conditions that must be satisfied for the move to a new state to succeed, and a check for operational safety occurs before every transition. If the system is determined to be unsafe the machine either transitions to the parking state or remains in the sleeping or ready state.
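The safety-gated transitions described above can be sketched with a minimal hand-rolled state machine. POCS itself uses the third-party transitions package; this stdlib-only version is only an illustration, and is_safe() here is a stand-in for the real weather-station and sun-position queries.

```python
class StateMachine:
    """Minimal FSM where every transition is gated by a safety check."""

    # trigger -> (required current state, next state)
    TRANSITIONS = {
        "get_ready": ("sleeping", "ready"),
        "schedule": ("ready", "scheduling"),
        "start_slewing": ("scheduling", "slewing"),
    }

    def __init__(self):
        self.state = "sleeping"

    def is_safe(self):
        """Stand-in for weather-station and sun-position checks."""
        return True

    def trigger(self, event):
        source, dest = self.TRANSITIONS[event]
        if self.state != source:
            return False  # event not valid in the current state
        if not self.is_safe():
            # Unsafe: remain in sleeping/ready, otherwise go park.
            if self.state not in ("sleeping", "ready"):
                self.state = "parking"
            return False
        self.state = dest
        return True
```

The mapping from trigger to output state depends on the current state, and the safety check runs before every transition, mirroring the behavior described in the text.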

POCS Alternatives

A primary software adage is to avoid “reinventing the wheel,” and while automated OCS systems are not unique, an initial review found that none of the available systems were suitable for the PANOPTES goals outlined in the PANOPTES Overview. First, any software that required license fees or was not otherwise free (of cost) and open (to modification) was excluded. Second, software was examined in terms of its ability to handle the hardware and observing requirements of a PANOPTES unit. Third, the ease-of-use of the software was evaluated, both in terms of installation and usage and in its ability to serve as a learning tool. Three popular alternatives to the POCS ecosystem were identified. A brief summary of each is given along with reasons for rejection (in alphabetical order):


INDI (Instrument-Neutral-Distributed-Interface) consists of both a protocol for hardware-agnostic control and a library that implements that protocol in a server/client architecture. INDI is written specifically as an astronomical tool and seems to be used exclusively within astronomical applications. The code base is written almost exclusively in C/C++ and thus requires compilation in order to run. The software is released under a GPLv2 license and undergoes active development and maintenance.

The basic idea behind INDI is that hardware (CCDs, domes, mounts, etc.) is described (via drivers) according to the INDI protocol such that an INDI server can communicate between that hardware and a given front-end client (software used by the astronomer which can either be interactive or automated) using standard inter-process communication (IPC) protocols regardless of the particular details of the hardware.

This is in fact an ideal setup for a project like PANOPTES, and INDI was initially used as a base design, with POCS serving primarily as an INDI client and a thin wrapper around the server. However, because of the lack of suitable drivers for the chosen mount, as well as complications with the camera driver and the implementation of the server software, this approach was eventually abandoned. It should be noted, however, that the server/client architecture and the hardware-agnostic implementation in both POCS and INDI means that the eventual adoption of INDI should be largely straightforward. Should a group choose to implement this approach in the future, much of the hardware specifications contained within POCS could be relegated to INDI, allowing POCS to be a specific implementation of an INDI server/client interaction. The specific details of POCS (state-based operation, scheduling details, data organization and analysis) would remain largely unchanged.


ROS (Robotic Operating System) is a set of software libraries and various other scripts designed to control robotic components. The idea is similar to INDI but ROS is designed to work with robotic hardware in general and has no specific association with astronomy. ROS has a widespread community and significant adoption within the robotics community, specifically concerning industrial automation. In addition to simple hardware control, ROS also implements various robotics-specific algorithms, such as those associated with machine vision, movement, robotic geometry (self-awareness of spatial location of various components of the robot), and more. The specific design goals of ROS relate to its use as a library for “large-scale integrative robotics research” for “complex” systems [73]. The library is designed to be multi-lingual (with respect to programming languages) via the exchange of language-agnostic messages. The entire library consists of a number of packages and modules that require specific management policies (although these can be integrated with the host OS package manager).

ROS is primarily designed to be used in large-scale applications and industrial automation and was thus found to be unsuitable for the design goals of PANOPTES. Specifically, the package management overhead made the system overly complex when compared with the needs of PANOPTES. While there are certainly some examples of small-scale robotics implementations available on the ROS website, the adoption of the software as a basis for PANOPTES would have required significant overhead merely to understand the basic operations of POCS. Working with the system was thus seen as too complex for non-professionals and students.

However, the advantages of the message processing system used by ROS were immediately obvious, and initially the messaging system behind the PANOPTES libraries was based directly on the ROS messaging packages. Unfortunately, because of the complexity of maintaining some of the ROS subpackages without adoption of the overall software suite, this path was eventually abandoned.

The core ideas behind the messaging system (which are actually fairly generic in nature) have nevertheless been retained. More recently, others have pursued the use of ROS specifically within autonomous observatories. While the authors report success, the lack of available code and documentation made the software not worth pursuing, particularly given that POCS had already undergone significant development before the paper was made available.

Details about the code are sparse within the paper, and the corresponding website (accessed 2017-01-24) does not offer additional details.


RTS2 is a fairly mature project that was originally developed for the BART telescope for autonomous gamma ray burst (GRB) followup. The overall system is part of the GLORIA Project, which has some shared goals with the PANOPTES network but is aimed at more professional-level telescopes and observatories. The software implements a client/server system and hardware abstraction layer similar to INDI. The software base is primarily written in C++, released under an LGPL-3.0 license, and under active development. RTS2 further includes logical control over the system, which includes things such as scheduling, plate-solving, metadata tracking, etc.

The primary reason for not pursuing RTS2 as the base for PANOPTES was due to the desire to employ Python as the dominant language. While RTS2 could provide for the operational aspects of PANOPTES it was not seen as suitable for the corresponding educational aspects.

[1]Writing hardware simulators, while helpful for testing purposes, can also add significant overhead to a project. For major projects such as the LSST or TMT this is obviously a requirement. PANOPTES implements basic hardware simulators for the mount and camera but full-scale hardware simulation of specific components has not yet been achieved.

[2]The Python FSM library used by POCS is in fact called transitions.