that, to paraphrase the feedback of one early-career reviewer, “Time Allocation Committees [TACs] are better at judging proposals than we are, so only proposals that really need the FT program should go through this route.” Or perhaps there’s a psychological component, something like “my program really needs to be done soon, so why should it compete against something that can wait?” In any case, we have updated the instructions to emphasize that the need for a quick response is not paramount. Time will tell if this works.
Although we don’t yet have a lot of data to work with, in true astronomer fashion we haven’t been able to resist the temptation to start analyzing the numbers we do have. Figure 1, for instance, shows the mean proposal grade vs. proposal rank for the first FT cycle. Clearly, the dispersion in grades is rather large for most of the proposals. However, a couple of things stand out. The top two proposals were uniformly recognized to be very good or excellent, and the three lowest-ranked proposals were not rated as excellent by any of the reviewers. This roughly mirrors what many people say about proposal assessment mechanisms in general: the top and bottom proposals are easy to recognize, while those in the middle elicit much less agreement.
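The summary behind Figure 1 (mean grade per proposal, plus the spread of individual scores) amounts to a few lines of computation. A minimal sketch in Python, where the proposal names and score values are purely hypothetical; only the 0–4 scale and the rank-by-mean-grade ordering come from the article:

```python
# Hypothetical reviewer scores (0-4 scale) for each proposal.
# The real FT grades are not published here; these values are illustrative only.
scores = {
    "P1": [4, 4, 3, 4],
    "P2": [3, 4, 4, 3],
    "P3": [1, 3, 2, 0],
    "P4": [0, 1, 2, 1],
}

def summarize(scores):
    """Return (proposal, mean, min, max) tuples ranked by mean grade, highest first."""
    rows = [
        (pid, sum(s) / len(s), min(s), max(s))
        for pid, s in scores.items()
    ]
    rows.sort(key=lambda r: r[1], reverse=True)
    return rows

for rank, (pid, mean, lo, hi) in enumerate(summarize(scores), start=1):
    print(f"rank {rank}: {pid}  mean={mean:.2f}  range={lo}-{hi}")
```

The min–max range per proposal is what the vertical bars in Figure 1 would show; a large range for a mid-ranked proposal is exactly the “dispersion” discussed above.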
One concern that we have heard from the community is that non-expert reviewers may be too easily swayed by a proposal and unable to recognize its flaws. Figure 2, which shows the number of times each score was awarded, separated by the reviewer’s self-proclaimed expertise, allows us to look for signs that this is the case.
April 2015
If anything, it seems that the opposite may be happening; the lowest scores were given almost exclusively by reviewers who consider themselves not to be particularly familiar with the subject area of a proposal. While data from more FT cycles are certainly needed to show whether this trend persists, prospective PIs may wish to ensure that their proposals are accessible to a broad, non-expert audience.
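The Figure 2 breakdown (how often each score was awarded, split by the reviewer’s self-proclaimed expertise) is a simple two-way tally. A minimal sketch, where the individual review records and group labels are hypothetical; only the 0–4 scale and the expert/non-expert split come from the article:

```python
from collections import Counter

# Hypothetical (expertise, score) records, one per review.
# The real distribution of FT scores is shown in Figure 2, not reproduced here.
reviews = [
    ("expert", 3), ("expert", 2), ("expert", 3),
    ("non-expert", 4), ("non-expert", 0), ("non-expert", 1),
]

def tally(reviews):
    """Count how many times each score was awarded, per expertise group."""
    counts = {}
    for expertise, score in reviews:
        counts.setdefault(expertise, Counter())[score] += 1
    return counts

for group, c in tally(reviews).items():
    print(group, dict(sorted(c.items())))
```

Comparing the two resulting histograms is what lets one ask whether non-experts score systematically higher or lower than experts.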
Figures 1 and 2 also show that most reviewers thought most proposals were “good” or better. Some have wondered whether people will exploit the peer review system to give competing proposals unfairly low grades. We see no evidence of this so far, although we will continue to monitor carefully.
Figure 1. Mean proposal grade vs. proposal rank for the first FT cycle. The vertical bars show the range of scores received by each proposal (on a scale of 0–4), and the dashed horizontal line indicates the cutoff score that any proposal must reach in order to be awarded telescope time.
Others have questioned how the quality of FT proposals will compare to that of “regular” Gemini proposals. Will the people with the best ideas be wary of the peer review system? To gauge this, the feedback survey asks how the FT proposals compare to other proposals the reviewers may have judged in the past. Of the reviewers who have replied so far, all reported that the FT propos-
GeminiFocus