October 2020 Paranal Service Mode User Satisfaction Survey

Once per year, now in the northern autumn, the User Support Department of ESO launches a Paranal Service Mode User Satisfaction Survey campaign.  This report details the findings of the October 2020 survey campaign, while previous such reports are found here.

We view these reports as an important way to

  • close the loop with the ESO Community,
  • gather information on issues that need to be addressed or reinforced,
  • thank all respondents, and
  • demonstrate clearly that such feedback is important to us! 

To this end, here we provide a summary of the responses received and of the trends in these responses over recent years, predominantly in the form of graphs. It should also be stressed that, in those cases where respondents identified themselves and made specific free-text comments, we have contacted them by e-mail to address their particular comments.

Methodology and General Results

The ESO Service Mode Questionnaire is always available on-line for users to fill in, but the usual rate of return is fewer than two responses per month. However, experience shows that a targeted campaign focused on a single aspect (in this case, Phase 2) results in many more survey completions.

In October 2020, we asked the Principal Investigators (PIs) of Service Mode runs scheduled for Paranal in Periods 105 and/or 106¹, plus their then-active Phase 2 delegates, to complete the survey by a fixed deadline. We thus solicited responses from 414 PIs and their then-active Phase 2 delegates (298 individuals); because of the overlap between these two groups, this amounts to a total of 639 individuals, all of whom were contacted via e-mail. The deadline was set two weeks from the date of contact, and one reminder e-mail was sent a week before the deadline.

A total of 139 responses were received by the deadline (some 33 of which were not fully complete), representing a 21.8% return rate, some 2.8% higher than in 2019. This increase was driven, at least partially, by the 12.5% decrease in the number of PIs.
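
(As a cross-check, the quoted return rate follows directly from the numbers above: 139 responses from the 639 individuals contacted gives 139/639 ≈ 0.218, i.e. 21.8%.)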

Interactive Figure Features

The figures below are all interactive.  By this we mean:

  • Putting the cursor over the plot will display the corresponding data values on the screen.
  • Clicking on the menu icon in the upper right (the three short parallel horizontal lines) will open a menu of print/download options.
  • For those figures with legends to the right of the plot, clicking on any entry in the legend will toggle the display of the corresponding data within the graph.

As a start in detailing the results from the survey, in the figure below we show the number of responses we received per instrument. In spite of the overall good response rate, the large number of instruments offered in Service Mode means that on average this year we received 12.3 responses per instrument (essentially the same as last year's figure of 12.0 responses per instrument).

In the following two stacked histogram plots we present a general overview of user satisfaction (in percentage of responses) with two general items:

  • the Phase 2 web documentation and
  • the overall support provided by the User Support Department.

The plots, which trace the trend in user satisfaction as expressed in survey results since March 2015, clearly show the consistently high satisfaction with these services offered by the User Support Department.

p2 and Other Observation Preparation Tools

As in the past, we also asked about both the p2 tool and other, instrument-specific, observation preparation tools. For comparative purposes we still include the last P2PP results (September 2018) to allow a direct side-by-side comparison of P2PP and p2. In general, the satisfaction levels with these tools are somewhat poorer than with the above-mentioned services provided by the User Support Department. p2 is still well accepted overall, with both its ease of use and the functions it provides consistently rated more favourably than those of its predecessor, P2PP, though there does remain room for improvement, especially in its documentation.

ESO has released a Phase 2 Application Programming Interface (API) that can be used to create, modify, or delete the observation blocks (OBs), containers, and accompanying ReadMe file that define an observing run.  We asked respondents whether or not they had made use of this powerful facility; their replies are shown below. In that plot we see essentially no change in the user uptake of the facility, perhaps signalling an opportunity for better user outreach in this area.
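
For readers who have not yet tried it, the short sketch below illustrates roughly what programmatic OB creation looks like using the p2api Python binding that accompanies the Phase 2 API. It is only an illustrative sketch: the demo-environment credentials are those quoted in ESO's public tutorial material, the choice of container is a placeholder, and the exact method names and signatures should be verified against the current Phase 2 API documentation before use.

    # Illustrative sketch of programmatic OB creation with the p2api Python
    # binding; method names follow ESO's published p2api examples, but please
    # check them against the current Phase 2 API reference.
    import p2api

    # Connect to the demo environment (for real runs, use the production
    # environment with your own User Portal credentials).
    api = p2api.ApiConnection('demo', '52052', 'tutorial')

    # List the observing runs visible to this account and pick a container
    # to work in (placeholder choice: the first run's top-level container).
    runs, _ = api.getRuns()
    container_id = runs[0]['containerId']

    # Create a new OB in that container, set its target, and save it back.
    ob, ob_version = api.createOB(container_id, 'API-created OB')
    ob['target']['name'] = 'NGC 1068'
    ob, ob_version = api.saveOB(ob, ob_version)

The same connection object can likewise be used to modify or delete existing OBs and containers.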

Since, with the exception of ObsPrep, the number of responses per observing preparation tool other than p2 is rather limited (see the table below), any presentation of individual-tool responses on documentation, ease of use, or functionality would suffer from small-number statistics. Thus, in the three figures below the answers for all tools except ObsPrep are combined. The ObsPrep figures can be found further down the page.  As with p2, when one considers the other tools as an ensemble we see room for improvement.

Observing Preparation Tool                           Number of Responses
CalVin                                               4
FIMS                                                 8
FPOSS                                                6
KARMA                                                4
SADT                                                 0
VisCalc                                              2
ObsPrep (p2 built-in, instrument-specific plug-in)   27

As the usage of the ObsPrep tool outweighs that of all the other tools combined (see above), we have separated out the responses to our questions about it for display below.  Starting with the usage statistics in the upper left plot, we see that usage has more than doubled in our most recent survey compared with the previous year's numbers.  This is more than likely a result of the expanded suite of supported instruments and instrument modes for Periods 105 and 106 relative to the previous questionnaire period (e.g. all MUSE modes are now supported, as opposed to only WFM-NoAO for Period 104).

The ObsPrep documentation satisfaction level (top right plot below) appears to show a small drop from the previous year.  Whether this is significant or not remains to be seen, but in any event we will continue to monitor this (through future surveys and incoming tickets) and, naturally, to work to improve the documentation.

ObsPrep ease of use and functionality provided (bottom two plots) both show very strong user satisfaction levels.  (The one anonymous user dissatisfied with the ease of use of ObsPrep was likely referring to the schedule tab of p2 and not ObsPrep itself.)

Related to the above tools is, of course, the suite of Exposure Time Calculators (ETCs).  Thus, we asked survey participants the question, “How satisfied are you with the ETCs you have used?” The responses are shown below. Consistently few respondents express dissatisfaction with the ETCs. The steady ~9% (average) of "No opinion" answers over the years could be interpreted as the fraction of respondents that did not use any ETC.

Again we asked users to tell us how satisfied they are with the p2fc (finding chart generator) app within p2, in terms of both its documentation and its usefulness (as compared to any other alternatives for producing finding charts), with the plots below displaying the results.  Satisfaction with both the documentation and the usability appears to have decreased slightly since last year, but, as with ObsPrep above, whether this is significant or not remains to be seen. Our work to monitor these aspects and to improve the tool continues.  Indeed, the issue raised by the one user dissatisfied with the usability had already been solved by the time we contacted them about their survey comment!

And lastly we asked survey participants, "Which operating system(s) do you use for ESO tools (e.g. for proposal/observation preparation, data reduction), excluding any browser-based tools?"  The breakdown of responses is shown in the figures below.  Here we see a continued dominance of Mac OS X over Linux, with a roughly steady small percentage of Windows usage.

Within the Linux usage it appears that Ubuntu is decreasing while CentOS is strongly growing.

Notes:

¹The total time allocated in Service Mode for Periods 105 and 106 was 15236.3 hours, while the corresponding number for Visitor Mode was 2350.0 hours.  Thus, the October 2020 survey targeted PIs (and their then-active delegates) representing 86.6% of the total Paranal time allocation, including all public surveys.
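
This fraction is simply the Service Mode share of the combined Paranal allocation: 15236.3 / (15236.3 + 2350.0) = 15236.3 / 17586.3 ≈ 0.866, i.e. 86.6%.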