This article originally appeared in the EDUCAUSE Industry Insights Column.
Artificial intelligence (AI) is a buzzword across nearly every industry these days. Though its origins date back to the 1950s, we’ve seen a tipping point in its application over the past decade, and progress is accelerating. AI has begun to permeate multiple facets of higher education, but the charge has been led by early-adopter faculty members, departments, and individual universities rather than by the sector as a whole.
To give just a few examples:
- The University of Arizona admissions office is using AI-driven keyword analysis tools to assist in the application review process.
- Georgia State University trained a chatbot built by AdmitHub to help answer common financial aid and enrollment questions from students.
- A Georgia Tech professor used IBM’s Watson technology to create his own virtual teaching assistant.
- A recent article in the Chronicle of Higher Education highlighted a Texas A&M professor using an AI-powered peer-review tool to foster learning engagement through writing and discussion.
- The same Chronicle article discusses the role AI is playing in adaptive courseware used and tested by many universities to achieve a more personalized, scalable learning approach.
Each of these cases in which a faculty member, department, or university has seen positive outcomes gives peers additional confidence to consider how AI can assist or enhance their own systems and processes. The goal of each of these early adopters was not to replace the human element of teaching and learning but to enhance outcomes for students.
So, what actually constitutes artificial intelligence? At its core, AI is a computerized system or machine made to simulate human intelligence. One of the tenets that characterizes the field of machine learning AI is that an element of “learning” is occurring within the system—either “supervised,” in which the training data contains a specific desired outcome, or “unsupervised,” in which the training data does not contain a desired outcome.
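To make the distinction concrete, here is a minimal sketch in Python. The data and function names are hypothetical, not part of any ProctorU system: the supervised example learns a decision threshold from suspicion scores that carry a desired outcome (a label), while the unsupervised example must find structure in unlabeled scores on its own.

```python
# Supervised: each training example carries the desired outcome (a label).
labeled_data = [
    (0.2, "no_cheating"),
    (0.9, "cheating"),
    (0.8, "cheating"),
    (0.1, "no_cheating"),
]

def train_supervised(examples):
    """Learn a decision threshold from labeled suspicion scores."""
    cheat = [x for x, y in examples if y == "cheating"]
    clean = [x for x, y in examples if y == "no_cheating"]
    # Place the boundary halfway between the two class means.
    return (sum(cheat) / len(cheat) + sum(clean) / len(clean)) / 2

# Unsupervised: no labels; the system must discover structure itself.
unlabeled_data = [0.15, 0.22, 0.85, 0.91]

def train_unsupervised(scores, rounds=10):
    """Split scores into two groups (a tiny one-dimensional k-means)."""
    lo, hi = min(scores), max(scores)
    for _ in range(rounds):
        a = [s for s in scores if abs(s - lo) <= abs(s - hi)]
        b = [s for s in scores if abs(s - lo) > abs(s - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)  # recenter each group
    return lo, hi

threshold = train_supervised(labeled_data)
centers = train_unsupervised(unlabeled_data)
```

The supervised model ends up with an explicit boundary between the labeled outcomes; the unsupervised model only discovers that the scores fall into two groups, without knowing what either group means.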
ProctorU first started to investigate whether artificial intelligence could enhance our online proctoring solutions in late 2013. We realized that there was a pattern of related and repeated behaviors across nearly all incidents of cheating. Since we were already training our live proctors to recognize these behaviors, we determined we could teach a machine to recognize the same behaviors. After all, one of the leading applications of artificial intelligence is pattern recognition.
We identified three areas where artificial intelligence will benefit our mission of detecting and preventing breaches of academic integrity:
- Identity fraud
- Cheating behaviors
- Content theft
We determined that investing time, money, and energy in an AI solution targeting these areas would enhance not only outcomes for our clients but also our own business processes. Our goal in adding AI to our proctoring platform was not to replace humans but rather to strengthen the accuracy of live and automated proctoring. Furthermore, AI can help reduce human error, catch things a human will inevitably miss, and help our services scale.
In developing an automated proctoring solution, we implemented the first of our AI events in the form of facial recognition and basic thresholds for audio and visual cues. The technology behind a typical automated proctoring system is nothing new. The algorithms that run these systems have existed for more than four years, but they are static unless a developer changes them system-wide.
The main difference between the old automated and new automated systems is that the AI technology will continually learn, adapting and getting “smarter” with every exam proctored. And we’re not just using AI in our automated service—we also layer it behind our live proctoring. Across both service levels, the AI is being taught using a supervised learning model in which our own human proctors are the teachers of the system.
The basic process of our supervised machine learning has four steps:
- Humans segment and label data.
- Once enough data is segmented around one label, we create an “event” in the algorithm.
- We run all current data through the algorithm to trigger the newly created event.
- Humans go back through that data and confirm whether the event took place or not, making the system more accurate in detecting that specific event.
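The four steps above can be sketched in Python. Everything here is illustrative: the class, method names, and the idea of promoting a label to an “event” once it reaches the 20,000-example minimum described below are assumptions, not ProctorU’s actual implementation.

```python
MIN_EXAMPLES = 20_000  # minimum data points before a label becomes an "event"

class EventTrainer:
    """A toy model of the human-in-the-loop training cycle."""

    def __init__(self):
        self.labeled = {}    # label -> list of confirmed clip ids
        self.events = set()  # labels promoted to detectable events

    def label_segment(self, clip_id, label):
        """Step 1: a human proctor segments and labels a clip."""
        self.labeled.setdefault(label, []).append(clip_id)
        # Step 2: once enough data accumulates, the label becomes an event.
        if len(self.labeled[label]) >= MIN_EXAMPLES:
            self.events.add(label)

    def detect(self, clip_id, suspected_label):
        """Step 3: the algorithm can only flag labels trained as events."""
        return suspected_label in self.events

    def confirm(self, clip_id, label, took_place):
        """Step 4: a human confirms or rejects the flag; confirmations
        feed back in as additional training data for that event."""
        if took_place:
            self.labeled.setdefault(label, []).append(clip_id)
```

The feedback loop in steps 3 and 4 is what makes the system improve: every human confirmation becomes another training example for that event.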
At a macro level, this process sounds relatively simple, but it can get complicated very quickly. Under this supervised learning model, each action or pattern requires a minimum of 20,000 data points to become an “event” in the system. Once it is trained on that event, the model must continue to be fed more and more “training data” in order to increase the accuracy around that one event.
A multitude of behaviors is indicative of cheating, but let’s focus on one. Imagine a test-taker’s eyes and head moving quickly to the right, looking toward something off screen. It would take 20,000 instances of that one quick motion to train the system to flag it as an event. Then many times that number of additional instances must be collected to improve the accuracy with which the system recognizes and flags the behavior. Now, multiply that by the thousands of cheating behaviors we may train the system to flag as “events.” As you can see, this process requires a massive amount of data.
Where does all that data come from? As the largest online proctoring company in the industry, we proctor over a million exams per year. Once anonymized for obvious privacy reasons, all that exam data can be used to train the AI model.
The example above describes only a single cheating behavior. So what about identity fraud and content theft? The process for training in these areas is similar but uses slightly different methodologies. ProctorU is working to integrate a collection of machine learning technologies including advanced facial recognition, object recognition, plane detection, speech-to-text, eye movement detection, and voice detection, to name just a few.
We are still in the training process of our AI model, but as it gets smarter it will be able to do things such as distinguish the difference between an adult speaking, a child speaking, a baby crying, and a dog barking. These are things humans can do very easily, but the system is being taught which of these could pose a threat to academic integrity and which can be tagged as harmless.
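One way to picture that last step is as a simple policy layered on top of the classifier. The sketch below is hypothetical: the label names and the harmless-versus-review decisions are illustrative assumptions, not ProctorU’s actual rules.

```python
# Illustrative policy: once the model can name a sound, a lookup decides
# whether it warrants a proctor's attention.
THREAT_POLICY = {
    "adult_speech": "review",    # could be someone feeding answers
    "child_speech": "harmless",
    "baby_crying": "harmless",
    "dog_barking": "harmless",
}

def triage(detected_label):
    """Tag a detected sound as worth a proctor's attention or not.
    Sounds the policy has never seen default to human review."""
    return THREAT_POLICY.get(detected_label, "review")
```

The hard part, of course, is the classifier that produces the label; the point of the sketch is that the threat judgment can be a separate, easily auditable layer.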
The road to building a truly accurate AI model for online proctoring will continue to evolve as new technology emerges and becomes available to the test-taker population. As described in another recent article, the rate of technological innovation keeps accelerating. As more technology is implemented in computers, mobile devices, and wearables, we will be able to incorporate those technologies into our own solutions and add them to our AI model.
When we look into the future of higher education, we see a place where technology will surpass what we are physically able to process today. We envision a place where adaptive courseware and adaptive testing will be intertwined with online proctoring to such a degree that test-takers will have to go through extreme measures to attempt cheating.
If you would like to continue this discussion or learn more about how we are using technology to enhance our online proctoring solutions, read our position on AI or get in touch.