As part of Stanford's foreign language requirement, the Language Center administers an online Simulated Oral Proficiency Interview (SOPI) each year as part of its assessment program. Most first- and second-year language students are assessed annually, and the results are reported directly to the Committee on Undergraduate Standards and Policies. This test is one of the highest-stakes assessments that the Language Center implements, and it requires a coordinated effort among Language Center staff and instructors, Language Lab personnel, learning management system support staff, network administrators, and VPTL imaging and hardware specialists.
Stanford's language requirement mandates that all undergraduates complete one year of college-level study of a foreign language. The Language Center is charged with fulfilling this requirement, which is addressed in its Annual Report, available at http://language.stanford.edu. A key part of this reporting is the annual exit assessment.
The Language Center has sought a centralized platform for two tests that are difficult to deliver with standard web technology, especially within a unified system: a simulated interview and a closed-book essay test. A simulated interview means that, as in a real face-to-face interview, users must not be able to pause or repeat item prompts. It also means limiting or sometimes eliminating preparation time before a response, and allowing only one attempt at a response, even if the user stumbles, repeats, or starts over. A closed-book essay test means that no outside resources are available, much like sitting in a classroom writing in a blank blue test book. While a low-tech version of these testing formats meets security needs on the student side, it presents challenges in ensuring security when grading and returning completed work. It also goes without saying that turning the responses into data that can be analyzed would be a significant undertaking.
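The interview constraints above amount to a small state machine: each item plays its prompt exactly once, allows a fixed (possibly zero) preparation window, and accepts a single, non-restartable response. The following is a minimal sketch of that logic only; all class, method, and field names here are illustrative assumptions, not part of the actual system.

```python
import time


class SimulatedInterviewItem:
    """Hypothetical sketch of one interview item enforcing single-play,
    single-attempt delivery. States move strictly forward:
    new -> prompted -> responding -> done."""

    def __init__(self, prompt_id, prep_seconds=0, response_seconds=90):
        self.prompt_id = prompt_id
        self.prep_seconds = prep_seconds          # 0 means no preparation time
        self.response_seconds = response_seconds  # hard cap on response length
        self.state = "new"

    def play_prompt(self):
        # The prompt can be played exactly once; there is no pause or replay.
        if self.state != "new":
            raise RuntimeError("Prompt may be played only once")
        self.state = "prompted"
        return self.prompt_id

    def begin_response(self):
        # Recording may start only after the one allowed prompt playback.
        if self.state != "prompted":
            raise RuntimeError("Response must follow the prompt")
        self.state = "responding"
        self.started_at = time.monotonic()

    def submit_response(self, recording):
        # A single submission finishes the item; the state machine never
        # returns to an earlier state, so there are no retakes.
        if self.state != "responding":
            raise RuntimeError("No active response to submit")
        self.state = "done"
        return {"item": self.prompt_id, "recording": recording}
```

A client built on this sketch would call `play_prompt`, `begin_response`, and `submit_response` in order; any attempt to replay the prompt or re-record raises an error, mirroring the one-attempt rule described above.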
In the spring of 2015, this new High Stakes Testing system was implemented to deliver SOPIs to 735 students in 89 sections. Adding WPAs, the grand totals are 1296 students in 154 sections, with 17 offline tests and 3 recoveries of online tests. There were no cases of data loss in any of these tests. After this initial success, midterm and final exams for one language course were run as a pilot. This required minimal software development and was as successful as the program assessment. Currently, plans are being made to use the system in placement testing in the summer and fall of 2016. There is also interest in the platform for delivering additional question types, such as multiple choice and matching, for use in high-stakes (grade-impact) exams in other programs. Looking forward, it is possible that in many cases this system could replace the ubiquitous blue-book exams that happen at universities everywhere, bringing better information security and massive data analysis possibilities.