The present study explores the usefulness of the EBB (Empirically-based Boundary and Binary) scale, with which pre-service English teachers collaboratively rate their peers' speaking performance. The study addresses two research questions: (1) whether collaborative peer assessment is reliable in comparison with professor and expert raters, and (2) how pre-service English teachers perceive the use of the EBB scale as a means of developing their rating expertise. For the first research question, 57 students rated their peers' speaking performance on three topics: a major-specific issue, an educational issue, and a general issue. After rating the quality of each performance individually, the students negotiated as a group a final score for the student who played the role of test-taker. The final scores on which each group reached agreement were compared with the professor's and the expert's scores. These data were analyzed with the FACETS program, which makes it possible to examine in fine detail the structure of variance affecting a performance test. The analyses suggested that the collaborative peer raters made reliable decisions, as they were judged to be infit raters. For the second research question, the researcher conducted a survey. Students responded positively to the practicality and controllability of the EBB scale, while questioning its discriminant power. This study reminds readers of the importance of rater training programs tailored to the needs of pre-service English teachers and further suggests that the EBB scale is a practical way to enhance prospective teachers' competence as potential raters of speaking performance.