Artificial intelligence is already changing education in many ways. It brings new ways to teach, learn, and analyze data, and it frees teachers from mundane tasks so they can focus more on interaction with students. The market for AI-assisted education tech is growing fast: according to a UNESCO report, there will be “a 50% growth in the artificial intelligence market between 2017 and 2021.”
However, AI adoption in education also faces numerous challenges. These raise a lot of ethical questions and call for close cooperation among institutions to make the benefits of the technology available to everyone. In this article, SOLVVE examines these challenges to give you a full picture of the impact AI is making in education.
Current Challenges of AI Adoption in Education
Bridging the Digital Gap
Let us start at the beginning. Before you can engage with all the amazing solutions that machine learning has brought into our lives, you need a stable electricity supply, a reliable internet connection, and hardware to run your software. There are many places on our planet where none of that is currently available, or will be available any time soon.
In some countries, starting an online education business became a key opportunity during the pandemic. In others, schools have to rely on traditional teaching methods. Given the cost of setting up hardware and introducing AI-based solutions into education systems, the digital divide will deprive many students around the world of the opportunity to benefit from this technology.
Creating Comprehensive Legislative Regulations
Nevertheless, even regions that are well-equipped to integrate AI into the education process have other pressing issues to deal with. Cybersecurity remains a major concern across many domains as more and more of our daily routines depend on data. The collection, storage, disclosure, processing, and dissemination of personal and system information pose serious problems.
Firstly, the widespread adoption of AI solutions in education and other domains is a very recent phenomenon. Its growth outpaces the legislative adjustments required to properly and effectively address data handling issues. While governments are still at the stage of abstract strategizing about education in the digital age, commercial projects and startups enter the market every day, widening the gap between what can be done with the data and what would be ethically acceptable to do with it.
Secondly, not all processes involving machine learning algorithms are transparent: sometimes decision making happens inside a “black box”. For instance, if an AI algorithm declines your enrollment application, you will not get any feedback or explanation of how that decision was made, since the reasoning cannot be extracted or reverse-engineered.
Thirdly, there is an ongoing debate about copyright attribution. If AI creates something valuable and unique, who should be credited? Developing an efficient teaching approach that yields high performance in students takes the cumulative effort of course designers, methodologists, individual instructors, ML engineers, and the algorithms themselves. And if something goes sideways, who is to blame?
So far, this particular challenge of AI adoption in education, like all of the abovementioned questions, exists in a legislative vacuum. On the one hand, this leaves a lot of room for creativity and innovation. On the other hand, there are no rules or good practices for data handling that could be extrapolated to every country or region.
Overcoming Human Biases
Regardless of location, our human nature is what drives the development of AI technology forward and what, in many ways, sets its limits.
If you are familiar with the history of artificial intelligence and the basic principles of its operation, then you know that machine learning algorithms and the results they produce are only as good as the input data. Unfortunately, this means that, whether we want it or not, even the best AI systems are still a reflection of who we are as a species. Sometimes that reflection shines at its best, showing empathy towards others. Other times, it backfires.
For example, in March 2016, it took less than 24 hours for Tay, a chatbot developed by Microsoft, to go from cheerful excitement about people to spitting out racial slurs. Created to learn how to sustain a conversation, Tay trained itself on Twitter threads and surfaced all sorts of biases that exist in society.
When it comes to AI in education, no one can guarantee that, for example, the admission or grading process will be fair. If there is a bias in the initial dataset, machine learning algorithms will eventually incorporate it and proceed to make biased decisions. The trick is that we are often blind to our own biases, and we inevitably pass them on to our technologies.
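To see how a bias travels from records to decisions, consider a deliberately simplified sketch. The data and the "model" below are entirely hypothetical, and real admission systems are far more complex, but the mechanism is the same: a system that learns from past decisions reproduces the patterns those decisions contain.

```python
# Toy illustration (hypothetical data, not any real admissions system):
# a "model" that learns the historical acceptance rate per applicant
# group will reproduce whatever bias the historical records contain.
from collections import defaultdict

# Hypothetical past decisions: identical scores, different groups.
history = [
    {"group": "A", "score": 85, "accepted": True},
    {"group": "A", "score": 70, "accepted": True},
    {"group": "B", "score": 85, "accepted": False},
    {"group": "B", "score": 70, "accepted": False},
]

def train(records):
    """Learn the per-group acceptance rate from past decisions."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        accepted[r["group"]] += r["accepted"]
    return {g: accepted[g] / totals[g] for g in totals}

def predict(model, group):
    """Accept if the group's historical acceptance rate is at least 0.5."""
    return model[group] >= 0.5

model = train(history)
# Two applicants with identical scores get different outcomes, purely
# because of the group bias baked into the training data.
print(predict(model, "A"))  # True
print(predict(model, "B"))  # False
```

No one wrote "prefer group A" anywhere in the code; the preference lives entirely in the data, which is exactly why such biases are so hard to spot in production systems.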
Let us assume a bias was detected in an AI-based system that makes decisions about enrollment or grading.
Who will take responsibility for the consequences? Are the engineers who developed the program to blame? Probably not: they only fed the system the dataset. Should the data provider answer for data quality? Probably not: they merely collected numerous decisions and records made over the years. Is it a particular person whose judgment clouded the dataset? Probably not: it is nearly impossible to identify such a person and prove why their decisions were wrong. Moreover, there is probably more than a single instance of deeply-rooted bias, and its bearers might not even know they hold it.
The same questions apply to situations where students do not progress through their studies as well as expected. Is it the student, the teacher, the platform, or the algorithm that is lagging? Pinpointing the weak spots in such a complex system is difficult, and the fault can rarely be attributed to a single source.
Thus, when you decide to add an artificial intelligence component to your learning management system or tutoring platform, take the time to dwell on this issue and discuss with your development team or service provider how to deal with it.
One more roadblock with datasets is the type of community they represent. Nowadays it is easy to collect a lot of data about education from the systems that are already up and running. However, because of the digital divide, the data we collect represents only a small group of students.
Consequently, we lack content and methods suited to the diversity of the communities we serve, because AI training is done on data that represents only a fraction of human society. For example, a recommendation system might not address the learning needs of an actual user; instead, it would make recommendations based on the needs of the users whose data was used for training.
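A minimal sketch can make this concrete. The course names and click data below are made up, and a real recommender would use collaborative filtering rather than raw popularity, but the failure mode is the same: a system trained on one community's interactions pushes that community's content to everyone.

```python
# Toy illustration (hypothetical data): a popularity-based recommender
# trained only on one community's interactions will recommend that
# community's content to every new user, whatever the user actually needs.
from collections import Counter

# Training interactions come only from students who already had access
# to online learning, e.g. English-speaking test-prep users.
training_clicks = [
    "calculus_en", "calculus_en", "sat_prep_en",
    "sat_prep_en", "sat_prep_en", "essay_writing_en",
]

def build_recommender(clicks):
    """Rank courses by their global popularity in the training data."""
    popularity = Counter(clicks)
    return [course for course, _ in popularity.most_common()]

ranking = build_recommender(training_clicks)

# A new user interested in, say, vocational or non-English content
# still receives the majority group's top courses.
print(ranking[:2])  # ['sat_prep_en', 'calculus_en']
```

The ranking contains no trace of users who never made it into the training data, which is precisely the digital-divide problem described above.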
Of course, if the system learns from the individual habits of each new student, the problem is easier to avoid. However, training a system anew takes time and money, which are not always available.
Having Reliable Data
It might look like everyone and everything around us generates data that is properly stored and processed, as if we could go online and get information about anything at any time. Nevertheless, the truth is that our data is sparse, unevenly distributed, distorted, and limited in many other ways.
Moreover, data on learning progress is not the only type of information that describes the learning process. Learning results are also tightly connected to the physical and emotional health of students, their socio-economic status, family situation, governmental policies, and many other factors that modify academic performance.
There is no central repository for all of this data. As we have discussed above, legislation varies greatly between regions, and it might not be possible to access, collect, store, or process this vital data. Thus, we build our systems on the bits of what is available, incorporating biases into them with our own hands.
To overcome this issue, software for education should be built with consideration of individual cognitions, immediate learning environment, and large-scale data about the performance of students in a particular category across the region, country, or the whole world.
Having Tech-Savvy Teachers and Students
To make the most of AI-enhanced eLearning, students and teachers need adequate understanding and skills in using such technology. This is not always the case even in countries that are at the forefront of eLearning adoption and deal with the challenges of AI adoption in education rather effectively. For example, in the USA, 8% of students face difficulties with mastering new technologies and software.
Moreover, the introduction of AI-based solutions also requires teachers to readjust their people skills. According to a recent McKinsey report, “That translates into approximately 13 hours per week that teachers could redirect toward activities that lead to higher student outcomes and higher teacher satisfaction.” In other words, teachers may transform into mentors once routine tasks are handled by the system. However, mentoring and teaching require different approaches and sets of skills, not to mention the added emotional load educators will have to deal with while adjusting to the new interaction model.
Dealing with These Challenges
Overcoming the challenges of AI adoption in education requires a collective effort. International organizations have already set a course toward formulating goals and means of dealing with the digital divide and data processing issues.
For example, the United Nations Broadband Commission pursues the goal of making internet access a human right, since many factors defining quality of life depend on it. UNESCO advocates for “ensur[ing] inclusive and equitable quality education and promot[ing] lifelong learning opportunities for all” in its report Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development.
However, when it comes to implementing specific solutions, you have to make many decisions right here and now, keeping in mind the long-lasting effect they will have on your business. Thus, when you decide to start your eLearning business or introduce a new ML-powered solution to the market, be sure to address the ethical and practical issues of your product or service with the business analysts and tech leads of your project.
SOLVVE offers a comprehensive set of services to bring your product to market, including business analysts and machine learning engineers with experience in the eLearning sector. If you have any questions or ideas regarding AI-based technologies for your project, do not hesitate to contact us. Let us make it happen!