Shea Swauger

Taking Back the Narrative of Ed Tech

I remember reading the subject line in my inbox, ‘Request for retraction: "Our Bodies Encoded"’, and slowly blowing out air between my lips. I had written the article in question, which was about algorithmic test proctoring and how it could potentially harm students. The email was from Proctorio, one of the companies I mentioned and also the one my employer, the University of Colorado Denver, contracts with for its online classes. For what it’s worth, out of all the products I looked into, Proctorio was the best when it comes to student privacy, encryption, and openness about their product. They are also the only company to say on Twitter that I was lying, or, for that matter, to respond to the article at all. Researchers critique companies all the time in the course of their scholarship, but companies rarely ask for retractions. After reading through their complaint a few more times, I emailed the editor of the journal to figure out what to do next. They backed me up, sent a polite response to Proctorio asking some questions in return, and that was that.


A few weeks after the retraction email, Inside Higher Ed published a response piece written (somewhat ominously) by “The Proctorio Team” that addressed a blog post discussing my article. Maybe it’s the pandemic-induced sprint towards surveillance technologies in education that’s changing the landscape faster than I can process, but none of this feels normal. For cheating-detection companies, however, it probably feels pretty good. In an interview with the Washington Post published on April 1st, 2020, Proctorio’s CEO Mike Olsen said, “It’s insanity. I shouldn’t be happy. I know a lot of people aren’t doing so well right now, but for us — I can’t even explain it… We’ll probably increase our value by four to five X just this year.” One month later, Vox reported that Proctorio “has administered 2.5 million exams — a 900 percent increase from the same period last year — and [Mike Olsen] anticipates many more as finals season approaches.” For-profit ed tech companies don’t usually get, and shouldn’t get, unmoderated space in news publications meant to inform educators. Regardless, Proctorio and companies like it are profiting enormously off of disaster capitalism, with serious consequences we shouldn’t ignore, and we shouldn’t allow them to control the narrative around products being adopted so rapidly and at such scale. Towards that end, I wanted to respond to the piece they wrote and to the general state of ed tech.


Proctorio opens their response by calling what I wrote an “opinion piece.” That’s a cheap shot. What I wrote went through full peer review and was published in an academic journal. By saying it was an opinion piece, they’re attempting to undermine the credibility of the scholarship before they even respond to the claims I make.


Proctorio states that its software doesn’t dictate what’s considered “normal behavior.” But when you’re taking a proctored test, any behavior that isn’t flagged by the test settings as suspicious is, by default, considered normal. Proctorio designed all the test settings and the parameters for their algorithms, which means they’ve already determined the range of behavior that constitutes ‘normal’ and ‘suspicious’. That’s their whole product: identifying behaviors that might be cheating. You can’t do that unless you’ve bracketed out the behavior that isn’t cheating; in other words, normal.
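To make that concrete, here is a minimal sketch, in Python, of how a rule-based flagging system of this kind works. None of this is Proctorio’s actual code; the setting names and thresholds are hypothetical. The point is structural: every threshold the vendor ships is a definition of ‘normal’ baked into the product before any instructor touches a setting.

```python
# Hypothetical sketch of a rule-based proctoring flagger. Every behavior
# below a vendor-chosen threshold is "normal" by default; everything
# else is flagged as suspicious. Not Proctorio's actual code.

from dataclasses import dataclass

@dataclass
class TestSettings:
    # Each default is a value the vendor chose; together they define
    # the boundary between "normal" and "suspicious" behavior.
    max_seconds_out_of_frame: float = 5.0
    max_gaze_away_seconds: float = 10.0
    allow_multiple_faces: bool = False

def flag_session(events: list, settings: TestSettings) -> list:
    """Return a list of suspicion flags for a recorded exam session."""
    flags = []
    for event in events:
        if event["type"] == "out_of_frame" and event["duration"] > settings.max_seconds_out_of_frame:
            flags.append(f"Left camera frame for {event['duration']:.0f}s")
        if event["type"] == "gaze_away" and event["duration"] > settings.max_gaze_away_seconds:
            flags.append(f"Looked away for {event['duration']:.0f}s")
        if event["type"] == "extra_face" and not settings.allow_multiple_faces:
            flags.append("More than one face detected")
    return flags

# A student whose medical condition requires a 90-second restroom break
# is flagged exactly like a student who walked off to look up answers.
print(flag_session([{"type": "out_of_frame", "duration": 90.0}], TestSettings()))
```

Notice that the student with a medical condition and the student looking up answers produce identical flags; the software cannot tell them apart, which is exactly the problem described next.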


Any software that has an encoded definition of ‘normal’ and ‘not normal’ behavior greatly increases the risk of discriminating against people with disabilities. For example, a common test setting holds that students aren’t supposed to leave the camera frame because they might be sneaking off to look up answers. However, if students have conditions that require frequent restroom breaks, that setting will flag them as suspicious. Proctorio says that their tests don’t discriminate against students with disabilities because an instructor can change the test settings or turn them off. This sidesteps the fact that they built settings that can discriminate against certain students in the first place. It’s like a car company building a feature that lets drivers remove seat belts and then, when drivers complain about injuries, responding with ‘just don’t use that feature’. Companies should anticipate whether their software can be discriminatory (intentionally or unintentionally), proactively prevent it, and educate their users about the dangers certain uses might present. If a test setting has the potential to discriminate against certain students under any circumstance, it should never have been rolled out.


And then there’s this, which I can’t even paraphrase:

“It may be important to note that there is a potential threat to society when students are not expected to meet integrity standards set forth by their institution. The healthcare setting is a prime example of this. When a healthcare provider is not held to the highest standards of academic integrity, people’s lives are at risk. By protecting academic integrity, institutions can rest assured that the value of their degrees, and future patients, are protected.”

So, it’s Proctorio or death now? I’m only half joking, but this is a gross and irresponsible mischaracterization of what’s at stake regarding academic integrity. There are several implicit pedagogical commitments in this framing. The first is that students are untrustworthy and dangerous, a premise I refuse to accept, and also the premise on which they built their entire company. Others are that tests accurately measure learning (they don’t) and that tests ensure the quality of professional expertise (they don’t). Proctorio is glossing over decades of educational scholarship to make it seem like they’re the only thing standing in the way of students cheating their way into being your surgeon. This contradicts what studies of teaching, learning, and student motivation have demonstrated, which means that Proctorio either hasn’t done the reading in the field whose problems they claim to solve, which is negligence, or has done the reading and doesn’t care, which is malpractice.


A point Proctorio is keen on making, both in the IHE piece and a half-dozen times in their retraction request to me, is that they don’t use facial recognition technology. This is true, and it makes them an exception in an industry that is rapidly deploying it. Facial recognition software is demonstrably racist and transphobic, and it erases non-binary identities. Proctorio’s clarification that they use facial detection, not facial recognition, is also true, but facial detection is still a troubling technology: it detects Black faces less consistently than white faces, which carries racist implications of its own.


Proctorio says that they don’t collect biometric information. Again, while this is true, it’s complicated, and it requires us to get a little into the weeds. ‘Biometrics’ is a term for technologies that measure people’s bodies for authentication. Think of retinal or fingerprint scanners that make sure only authorized people can get into bank vaults or read top secret information; that’s how facial recognition works. There’s another usage of ‘biometrics’ for technologies that measure people’s bodies in order to classify them. Think of self-driving cars scanning for pedestrians, or TSA body scanners differentiating bodies from contraband, where the goal is to determine the presence of a body, but not whose body it is; that’s how facial detection works. Authentication versus classification; this particular person versus a person. Facial detection isn’t trying to determine who someone is; it’s just trying to identify that there is a person in front of a camera. When facial detection in algorithmic test proctoring software fails to detect someone’s face because their skin is too dark, it has consequences academically and emotionally. Not only does it create circumstances where students can’t complete online exams, it also sends the message to students that they are literally invisible, that they don’t belong, and that the academy wasn’t built for them. These are messages that vulnerable students already face every day; facial detection technology reifies and amplifies them.
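For readers who want the distinction in concrete terms, here is an illustrative Python sketch using OpenCV’s bundled Haar-cascade face detector. This is not Proctorio’s software, and the enrollment and matching step sketched in the comments is hypothetical. It shows what detection does (classification: is a face present?) and what recognition would additionally require (authentication: stored templates to match against).

```python
# Sketch of classification versus authentication, using OpenCV's
# bundled Haar-cascade face detector. Illustrative only.

import cv2

# Face DETECTION (classification): finds any face, identifies no one.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def faces_present(frame) -> int:
    """Count faces in a video frame; no identity is involved."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Face RECOGNITION (authentication) would require a stored biometric
# template per person, e.g. (hypothetical helpers, not a real API):
#   enrolled = load_template("student_1234")   # stored biometric data
#   match = compare(extract_embedding(face), enrolled)
# Detection needs none of that, but it inherits the same bias problem:
# if the detector was trained mostly on white faces, faces_present()
# returns 0 for some students, and they simply can't take the exam.
```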


Facial detection is a biometric-based classification system, which is how it knows if you’ve left the screen or if there’s more than one face in front of the camera. It’s detecting your face and, probably with a different piece of software, measuring your gaze to see if you’re looking away from the screen. Proctorio doesn’t have to collect and store your biometric information in order to run facial detection or gaze estimation; it’s measuring your body in real time. No collection needed. Biometric-based authentication technologies like facial recognition, fingerprint/retinal ID, voice recognition, or keystroke recognition should never be used in an educational setting. There are too many privacy, security, and equity risks to justify any potential benefits. Biometric-based classification technologies like facial detection, while presenting fewer risks, should also not be used in an educational setting. Proctorio wants to make sure you know that while they use facial detection, it’s not facial recognition, and while that’s technically biometric information, they don’t store it. In other words: yeah, the stuff we’re doing isn’t great, but at least it’s not the really bad stuff our competitors use. It’s ed tech whataboutism intended to deflect legitimate critique.
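Here is what ‘measuring your body in real time, no collection needed’ can look like in practice, again as a hypothetical Python sketch rather than Proctorio’s implementation: each webcam frame is analyzed and immediately discarded, so biometric data is used without ever being stored.

```python
# Hypothetical sketch: real-time biometric measurement with no storage.
# Each frame is analyzed and immediately discarded. Not Proctorio's code.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def proctor_stream(camera_index: int = 0, max_frames: int = 300) -> list:
    """Watch a short stretch of webcam video and return flags; keep no frames."""
    flags = []
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            # The biometric measurement happens on the two lines above;
            # the frame is never written to disk or sent anywhere. The
            # body is measured, the flag is kept, and the underlying
            # biometric data is thrown away.
            if len(faces) == 0:
                flags.append("no face detected")
            elif len(faces) > 1:
                flags.append("multiple faces detected")
    finally:
        cap.release()
    return flags
```

Nothing in that loop is ‘collected’, and yet the student’s body was measured on every frame. That is the gap between the letter of Proctorio’s claim and what the software actually does.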


The process by which we talk about ed tech companies is upside down. Generally, when a new technology is introduced into education, companies and evangelists launch marketing campaigns, go on tour at education conferences and expos, and, if they’re lucky, get mentioned in the Horizon Report. If critics challenge the technology on equity or efficacy grounds, the burden of proof is on the critics to demonstrate the harm beyond question; otherwise they are usually ignored. This is in part due to technological solutionism, the common belief that technology is objective, benevolent, and can eventually solve most problems. It’s also a myth. Technology too often reinforces oppressive social relationships such as white supremacy, sexism, ableism, cis/heteronormativity, and xenophobia. Tech companies launch technologies that ignore historians and call it revolutionary when it’s in fact very old. They ignore sociologists and call it disruption when it’s exploitation. They ignore gender and queer scholars and call it innovation when it’s erasure. They ignore Black scholars and call it optimization when it’s redlining. They ignore indigenous scholars and call it capitalism when it’s colonialism. They ignore disabled scholars and call it efficiency when it’s eugenics. Given this track record, the burden should be on tech companies to prove that their technology isn’t oppressive, not on their critics to prove that it is.


Algorithmic test proctoring is having a heyday now that COVID-19 has forced everyone into remote education and institutions are grasping at straws to figure out how to enforce academic integrity online. For university administrations struggling to make the transition, a product claiming it can protect you from online cheating is very seductive. That’s exactly why it’s so important to take a critical approach to technological ‘solutions’ and to make sure the best available information isn’t controlled by the people who have a financial stake in their adoption.
