
About the SOPIs

Each year, as part of Stanford's foreign language requirement, the Language Center administers an online Simulated Oral Proficiency Interview (SOPI) as part of its assessment program. Most first- and second-year language students are assessed each year, and the results are reported directly to the Committee on Undergraduate Standards and Policies. This test is one of the highest-stakes assessments the Language Center implements, and it requires a coordinated effort among Language Center staff and instructors, Language Lab personnel, learning management system support staff, network administrators, and VPTL imaging and hardware specialists.

Stanford's language requirement mandates that all undergraduates complete one year of college-level study of a foreign language. The Language Center is charged with fulfilling this requirement, and its work is documented in its Annual Report, available at http://language.stanford.edu. A key part of that reporting is the annual exit assessment.

The Language Center has sought a centralized platform for two test formats that are difficult to implement with standard web technology, especially in a unified system: a simulated interview and a closed-book essay test. A simulated interview means that, as in a real face-to-face interview, users should not be able to pause or repeat item prompts. It also means limiting or sometimes eliminating preparation time before a response, and allowing only one attempt at a response, even if the user stumbles, repeats, or starts over. A closed-book essay test means that no outside resources are available, much like sitting in a classroom writing in a blank blue test book. While a low-tech version of these testing formats meets security needs on the student side, it presents challenges when trying to ensure security in grading and returning completed work. Turning the responses into data that can be analyzed would also be a significant undertaking.
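As a rough sketch of the no-replay constraint, a browser client might play each prompt exactly once and refuse to pause or seek. The function names below are illustrative assumptions, not the actual implementation:

    // Hypothetical sketch: play an item prompt exactly once, with no
    // pause, seek, or replay. Names here are illustrative only.
    function playPromptOnce(promptUrl, onPromptFinished) {
      var audio = new Audio(promptUrl);
      audio.addEventListener('pause', function () {
        // If playback is interrupted before the end, resume immediately
        // rather than allowing a pause.
        if (!audio.ended) { audio.play(); }
      });
      audio.addEventListener('ended', function () {
        // Move straight on; the prompt is never offered for replay.
        onPromptFinished();
      });
      audio.play();
    }

No player controls are ever shown, so the only path through an item is a single uninterrupted playback followed by the response window.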

We have been running online speaking tests for about ten years now, usually for about 800 students in each implementation. Over that time, the speaking assessments have moved from an in-person oral interview, to a simulated interview using an overhead projector and a cassette recorder or handheld digital recorder, to, more recently, a desktop computer on which the student listens to a question and responds using a headset with a microphone. As the tests expanded to writing, nearly doubling the number of tests, the weaknesses of the previous system, which used a Java app or a standard LMS assignment on existing desktops and laptops, quickly became apparent. Ideally, the tests would be delivered in multiple classrooms on portable devices, with failure modes that would address loss of power and/or network connectivity. Various tablet computers were evaluated, but Chromebooks were chosen for their price, battery life, and built-in keyboard, which facilitates writing exams. The application had to load the exam questions from the Learning Management System's course sites, deliver them, acquire audio and written responses, and deliver the responses back to the LMS for grading. Using a grant from the Provost, 80 devices and two carts were purchased, and a software application was developed; the development team built it to these requirements in JavaScript, using the AngularJS framework.
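To illustrate the failure-mode handling described above, one approach is to persist each response locally before attempting upload, so a network or power interruption cannot lose work. The endpoint and all function names below are assumptions for the sketch, not the actual code:

    // Hypothetical sketch: persist a written response locally, then upload
    // with retry until the LMS confirms receipt. The '/lms/upload' endpoint
    // and function names are illustrative assumptions. (Audio responses
    // would need IndexedDB or base64 encoding rather than localStorage.)
    function saveResponse(studentId, itemId, responseText) {
      var key = 'response:' + studentId + ':' + itemId;
      // 1. Persist locally first, so the response survives a crash or outage.
      localStorage.setItem(key, JSON.stringify({
        itemId: itemId,
        savedAt: Date.now(),
        text: responseText
      }));
      // 2. Then attempt the upload, retrying until confirmed.
      uploadWithRetry(key);
    }

    function uploadWithRetry(key, delayMs) {
      delayMs = delayMs || 5000;
      var payload = localStorage.getItem(key);
      if (payload === null) { return; }  // already uploaded and cleared
      fetch('/lms/upload', { method: 'POST', body: payload }).then(function (res) {
        if (res.ok) {
          localStorage.removeItem(key);  // clear only on confirmed receipt
        } else {
          setTimeout(function () { uploadWithRetry(key, delayMs); }, delayMs);
        }
      }).catch(function () {
        setTimeout(function () { uploadWithRetry(key, delayMs); }, delayMs);
      });
    }

Because the local copy is cleared only after the server acknowledges the upload, a device that loses power or connectivity mid-exam can resume and finish delivering its responses once restored.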

In the spring of 2015, this new High Stakes Testing system was implemented to deliver SOPIs to 735 students in 89 sections. Adding the WPAs (the writing tests), the grand totals were 1296 students in 154 sections, with 17 offline tests and 3 recoveries of online tests. There were no cases of data loss in any of these tests. After this initial success, midterm and final exams for one language course were run as a pilot; this required minimal software development and was as successful as the program assessment. Currently, plans are being made to use the system for placement testing in the summer and fall of 2016. There is also interest in the platform for delivering additional question types, such as multiple choice and matching, for use in high-stakes (grade-impact) exams in other programs. Looking forward, it is possible that in many cases this system could replace the ubiquitous blue-book exams given at universities everywhere, bringing better information security and opening up possibilities for large-scale data analysis.