This paper describes the development and validation of an instrument for evaluating classroom response systems (CRS). Although a number of studies evaluating CRS have been published to date, no standardised instrument exists for evaluating the impact of their use. Consequently, comparing different systems, or evaluating the benefits of using a CRS in different ways or settings, remains very difficult despite the volume of published reports, as noted by Kay and LeSage (2009). We developed an instrument, the Classroom Response System Perceptions (CRiSP) questionnaire, which allows a variety of CRS to be evaluated on three scales: usability, impact on student engagement, and impact on student learning. The development of CRiSP was undertaken at three universities using different CRS, and the instrument was evaluated through focus groups, one-on-one interviews and a factor analysis of the survey responses. We found no evidence of differences on the scales by gender or age group. The final CRiSP questionnaire consists of 26 base questions, with additional optional questions available. We propose that the CRiSP questionnaire could, in its current form or with minor changes, also be used to evaluate the impact on learning of other classroom technologies.