
高中生運算思維評量工具之發展

Development of a Computational Thinking Test for Senior High School Students

Advisor: 吳正己

Abstract


Existing computational thinking (CT) assessment tools fall short in two ways: some are self-report instruments that mainly gauge students' understanding of and attitudes toward CT, while others are tests that focus on programming or computer science skills and therefore fail to measure students' generic CT abilities. Moreover, most existing tools lack a rigorous item-development process and checks of reliability and validity.

This study aimed to develop an instrument for assessing senior high school students' CT abilities and to explore factors related to those abilities. The test developed in this study covers four constructs: problem decomposition, data representation, algorithms, and pattern generalization. Items were drafted and revised through discussion by eight secondary school computing teachers, then reviewed and edited by four university experts in computer science education; an educational measurement expert assisted with test administration and analysis to ensure reliability and validity. The test comprises 12 items, in multiple-choice and fill-in-the-blank formats, and requires no programming skills or computer science knowledge to answer. Participants were recruited in the Taipei–New Taipei–Keelung area through stratified sampling (strata: Comprehensive Assessment Program scores, grade, and class cluster); in total, 249 senior high school students took the test.

Results show that item difficulty is close to moderate and the items are discriminating. Cronbach's alpha and split-half reliability indicate internal consistency, and every item correlates positively with the total score, so the items are homogeneous and stable. Because the items were developed and revised by experts, the test has expert validity, and the positive correlation between total scores and International Bebras Challenge scores supports criterion-related validity. However, confirmatory factor analysis (CFA) showed that most relative fit indices for the four-construct model fell short of good fit, so construct validity requires further examination. The analyses also found CT ability to be related to gender, class cluster, disposition toward learning programming, experience with elective programming courses, and achievement in related subjects.

The test was rigorously developed and is reliable and valid; its scores reflect students' overall CT ability and can inform students' course selection and their pursuit of computer science-related fields in further study. Future research could improve the items' construct explanatory power, streamline item wording, enlarge the sample, and refine test administration arrangements.
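The item difficulty and discrimination reported above are standard classical-test-theory indices: difficulty P is the proportion of students answering an item correctly, and discrimination D is the difference in P between high- and low-scoring groups (commonly the top and bottom 27% by total score). A minimal sketch of these computations, not the study's actual analysis scripts, assuming a 0/1 students-by-items score matrix:

```python
import numpy as np

def item_analysis(scores, frac=0.27):
    """Classical item difficulty (P) and discrimination (D) indices.

    scores : (n_students, n_items) matrix of 0/1 item scores.
    frac   : fraction of students in the high/low groups (27% is a
             common convention).
    """
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    g = max(1, int(round(n * frac)))
    order = np.argsort(scores.sum(axis=1))     # students sorted by total score
    low, high = scores[order[:g]], scores[order[-g:]]
    difficulty = scores.mean(axis=0)           # proportion correct per item
    discrimination = high.mean(axis=0) - low.mean(axis=0)
    return difficulty, discrimination
```

Difficulty near 0.5 is "moderate"; a positive D means high scorers outperform low scorers on the item, i.e. the item discriminates in the intended direction.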

Keywords

computational thinking; assessment; test

Abstract (English)


Some existing computational thinking (CT) assessment tools are self-report instruments that measure students' understanding of, attitudes toward, or dispositions regarding CT. Others are tests that assess knowledge and skills in programming and computer science but do not effectively assess generic CT skills. In addition, most CT assessments lack a rigorous development procedure and checks of reliability and validity. The purpose of this study was to develop an effective assessment of CT skills for senior high school students and to explore factors associated with those skills. The study accordingly produced a language-, tool-, and knowledge-independent test covering four CT concepts: decomposition, data representation, algorithm, and pattern generalization. Eight high school computing teachers drafted the items, four computer scientists then evaluated and revised them, and a testing specialist assisted with test administration and statistical analysis to ensure the tool's reliability and validity. The final test consists of 12 items. The participants were 249 students from 4 senior high schools in the Taipei metropolitan area in Taiwan, recruited through stratified sampling (schools, grades, major subjects). Item analysis shows that the test is of medium difficulty and that each item is effective in assessing students' CT skills. Cronbach's alpha and the Guttman split-half coefficient indicate acceptable internal consistency, and each item correlates positively with the total score, indicating homogeneity across the 12 items. Content validity was verified by the computer scientists, and the significant positive correlation between this CT test and the Bebras Challenge demonstrates criterion-related validity.
However, the CFA results show that the test's constructs need further examination. The study also finds that CT skills are associated with gender, major, programming disposition, experience taking programming-related courses, and achievement in science and mathematics. Developed through a rigorous procedure, this CT test is a valid and reliable measure of overall CT skills; it can also help students decide whether to take computer science-related courses in high school or to pursue related majors in university. Recommendations are offered for improving the explanatory power of the CT model, simplifying item contexts, increasing the number of participants, and refining test administration.

