This paper asked participants to assess four selected expert-rated Taiwan International Student Design Competition (TISDC) products using four methods: the Consensual Assessment Technique (CAT), the Creative Product Semantic Scale (CPSS), the Product Creativity Measurement Instrument (PCMI), and the revised Creative Solution Diagnosis Scale (rCSDS). The results revealed that expert and non-expert rankings agreed under the CAT and the CPSS, whereas they differed under the rCSDS. The CAT, the CPSS, and the original TISDC expert ratings yielded the same rankings, indicating that raters may reach consistent conclusions about creativity and that the results are not affected by the choice among these methods.
If non-experts must be used to assess creativity and their results are expected to match those of experts, asking non-expert raters to apply the CPSS and then ranking the resulting creativity scores is the more reliable approach. The study contributes to the creativity domain by providing a comparative basis for deciding which assessment methods may be more reliable.
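As an illustration of this comparison perspective, the sketch below shows one way agreement between expert and non-expert rankings could be quantified using Spearman's rank correlation. The ranking values are hypothetical placeholders, not the study's data, and the paper itself does not prescribe this particular statistic.

```python
# Minimal sketch (hypothetical data): comparing expert and non-expert
# rankings of four products with Spearman's rank correlation.
from scipy.stats import spearmanr

# Hypothetical rank orders of products A-D (1 = most creative)
expert_ranks    = [1, 2, 3, 4]  # placeholder for the original TISDC expert ranking
nonexpert_cpss  = [1, 2, 3, 4]  # identical ranking -> perfect agreement
nonexpert_rcsds = [2, 1, 4, 3]  # differing ranking -> lower agreement

for label, ranks in [("CPSS", nonexpert_cpss), ("rCSDS", nonexpert_rcsds)]:
    rho, p = spearmanr(expert_ranks, ranks)
    print(f"{label}: Spearman rho = {rho:.2f} (p = {p:.2f})")
```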