At MyEdMaster, a team of professionals and students is working together to create a first-of-its-kind, fully automated online tutoring service that uses artificial intelligence (AI) to teach students.
AI Educational Software Project
According to the US Department of Education’s National Assessment of Educational Progress, roughly 90% of US students perform below grade level in at least one academic subject. This means that more than 50 million students in the US alone (and countless millions more worldwide) need help. However, over the years, the government has spent billions of dollars on school reform, and the US Department of Education’s own assessments show that these efforts have produced little in the way of results.
Fortunately, there is a “cure” for this problem: tutoring. Working with students individually and teaching them based on their individual learning needs has been shown scientifically and commercially (in tutoring centers such as MyEdMaster) to reliably produce large educational gains. The challenge is that tutoring is not scalable. First, many commercial services charge $50-$70 per hour for two-hour tutoring sessions, and, even then, their tutors teach more than one student at a time. High-quality one-on-one tutors often charge more than $100 per hour. These prices are beyond what many families can afford, particularly those with more than one child. Second, even if tutoring were more affordable, there simply are not enough quality tutors available for the more than 50 million US students who need them.
Consequently, even with available government-funded tutoring programs helping low-income families, only about 5% of students who need tutoring actually receive help. This means that 95% of students who need help go unserved. As a result, 40% of US students graduate high school without even basic math or science skills and 30% graduate without basic reading skills. This creates a tremendous need and a tremendous opportunity.
Our solution is to create a fully automated, online tutoring service where AI software does the teaching. By using AI as our tutor, we eliminate the cost of the human instructor. By placing the service online, we make it universally available, anywhere in the world, on any platform. This service will reach any student, anywhere, at any time and teach him or her at a fraction of the cost that human tutors charge. This type of disruptive service has the ability to transform global education, which is the reason Dr. John Leddo started MyEdMaster in the first place.
Our proprietary AI technology is based on years of scientific research conducted by Dr. Leddo on how to teach people to become experts at what they do. Therefore, at the core of our technology is a knowledge model of what it means to master a subject area. We build our AI teaching approach around this knowledge model and design it to teach the way a human would.
A human tutor teaches a student individually, based on what he or she knows and needs to learn. The tutor assesses not only whether a student can give the right answer, but also what he or she knows. The tutor looks at a student’s step-by-step work to understand why the student makes mistakes and talks to the student to see whether the student understands critical concepts. Our software works the same way. After every lesson, the software talks to the student to ask him or her questions about critical concepts to make sure the student understands them. Any misconceptions are immediately corrected. When the student engages in problem solving activities, s/he does his or her work step by step. The software evaluates the step-by-step work and corrects specific errors. It can even give the student hints if s/he gets stuck along the way.
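To make the step-by-step evaluation concrete, here is a minimal sketch of one way such checking might work for linear equations. This is an illustration, not our production code: the function names (`solution_of`, `check_steps`) and the equation encoding are hypothetical. Each line of student work is modeled as an equation `a*x + b = c*x + d`; a step is wrong if its solution no longer matches the original equation's solution, which pinpoints where the error occurred.

```python
from fractions import Fraction

def solution_of(a, b, c, d):
    """Solve a*x + b = c*x + d for x; returns a Fraction, or None if degenerate."""
    if a == c:
        return None
    return Fraction(d - b, a - c)

def check_steps(steps):
    """Each step is a tuple (a, b, c, d) encoding 'a*x + b = c*x + d'.
    Flags the first step whose solution differs from the original equation's."""
    target = solution_of(*steps[0])
    for i, step in enumerate(steps[1:], start=1):
        if solution_of(*step) != target:
            return f"Error at step {i}: this equation no longer has x = {target}"
    return f"All steps correct; x = {target}"

# A student solves 2x + 3 = 11:
work = [
    (2, 3, 0, 11),   # 2x + 3 = 11
    (2, 0, 0, 8),    # 2x = 8    (subtracted 3 from both sides: correct)
    (1, 0, 0, 5),    # x = 5     (mistake: 8 / 2 is 4, not 5)
]
print(check_steps(work))   # flags the division error at step 2
```

Because each step is checked independently, the same mechanism can power hints: when a student is stuck at step *i*, the software knows which transformation should come next.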
A human tutor does more than just direct learning. The tutor also responds to the student. When a student is confused, s/he can ask the tutor questions. So, too, can students ask our software questions. After all, this capability exists in smart phones, so why shouldn’t it exist in educational software? Moreover, a student can also ask a tutor for help with homework assignments or reviewing test study guides. So, too, can a student enter his or her homework or test study guide into our software and receive the same step-by-step help s/he would receive when working one of our problems. The software even adapts its hints and corrective feedback to the specific information and numbers contained in the problem the student entered.
Our technology has been scientifically validated, and the results have been published in professional scientific journals. In one study, students using our software outperformed those using Khan Academy’s by 80%. In another, students using our software outperformed those using Pearson Education’s (the world’s largest educational publisher) electronic textbooks by 300%. Finally, students using our software outperformed those taught by experienced math teachers by 37%. We know of no other educational software that can make this claim.
As impressive as these results are, what is even more impressive is how our software achieved them. For example, when teachers taught students in our study, about one-third scored in the A range, one-third in the B to C range, and one-third in the failing range. This matches the general performance pattern found in the National Assessment of Educational Progress. However, when students used our software, about 60% scored in the A range and the remainder scored in the B to C range. The average score was about 90%. No student scored below 70%. Imagine if schools could report that no student ever scored below 70% and the average score was 90%. This would be seen as a miracle! What’s even more impressive is how well our software holds up as students progress from easier to harder material. In our Khan Academy study, students using Khan Academy’s software averaged 70% on the easiest material, but their performance dropped to 16% on the hardest. On the other hand, students using our software averaged 90% on the easiest material and 88% on the hardest, which was statistically equal to their performance on the easiest material. In other words, our software has been shown to teach every student, and it is as effective with difficult topics as it is with easier topics.
We offer links to our published papers below. Note that in the MyEdMaster vs. Khan Academy paper, we refer to our software as “A-list Empire,” as we were planning to market it through a start-up company we created. However, the software will now be marketed through MyEdMaster, and A-list Empire no longer exists.
You can also see a video demonstration of our technology by clicking on this link:
Future Technology Features
Any technology needs to be continually improved to keep it competitive. In our case, improvements are designed to enhance the power of our technology and increase its educational effectiveness. Current R&D projects we are working on include:
Machine learning to learn how best to answer students’ questions so that students will learn better
One feature of our technology and also of many personal assistants is the capability for a user to verbally ask a question and have the software answer it. Just as there are many ways to ask a question, so, too, are there many ways to answer one. Personal assistants typically have a single way to answer any given question. However, people know that, in practice, different explanations work better for different people.
Our goal is to use machine learning so that our software can learn how best to answer user questions in the way that helps the user learn best. This is a complex problem that raises a number of questions, such as:
- What are the different ways one can answer a question?
- What factors (e.g., type of question asked, characteristics of the user) determine which is the best way to answer a question?
- How does the software determine if the user understood the answer?
At MyEdMaster, we are actively researching each of these questions and are developing a method for achieving adaptive question answering that others on the market do not yet have.
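One standard way to frame the problem of learning which answer format works best is as a multi-armed bandit. The sketch below is purely illustrative and is not MyEdMaster's actual method: the class name `FormatSelector`, the format labels, and the epsilon-greedy strategy are all assumptions. Each (question type, answer format) pair is an arm; the reward is whether a follow-up check shows the student understood.

```python
import random

# Hypothetical answer formats, mirroring the four studied in our research.
FORMATS = ["informational", "real_world_example", "cause_effect", "goal_based"]

class FormatSelector:
    """Epsilon-greedy bandit: tracks a running mean reward per
    (question type, answer format) pair and usually picks the best,
    occasionally exploring another format."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {}   # (qtype, fmt) -> times this format was shown
        self.values = {}   # (qtype, fmt) -> running mean reward

    def choose(self, qtype):
        if random.random() < self.epsilon:
            return random.choice(FORMATS)          # explore
        return max(FORMATS,                        # exploit best so far
                   key=lambda f: self.values.get((qtype, f), 0.0))

    def update(self, qtype, fmt, reward):
        """reward = 1 if the student understood the answer, else 0."""
        key = (qtype, fmt)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        old = self.values.get(key, 0.0)
        self.values[key] = old + (reward - old) / n
```

In use, the software would call `choose("why")` before answering a "why" question, then call `update` once it has assessed understanding; over many students, the selector converges on the format that works best for each question type.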
Our research started with the most basic question: does it even matter how software answers a question, as long as the right information is given? To address this question, middle and high school students were given software that answered questions they needed answered in order to design a vaccine. For half the students, only one answer format was available. For the other half, four answer formats were available: an informational format (the standard that personal assistants use and the one available to the control group), a real-world example, a cause-and-effect explanation, and a goal-based explanation that stated what goal was being served. Students in this group could select the answer type they wanted, but only one. Students who were given the choice of answer formats learned twice as well as those given only one answer format, suggesting that how a question is answered makes a very big difference in learning.
This experiment also helped address the question of what factors determine the best way to answer a question. Our results showed that when students asked questions about facts (“what” questions), they overwhelmingly preferred informational answers. However, when asking questions about procedures (“how” questions) or causes (“why” questions), their preferences for the other types of answer formats rose dramatically. A link to our published paper is shown below:
We are also in the process of examining what factors influence the best ways to answer user questions. Among the factors we are looking at, in addition to the type of question asked (cited above), are the user’s age and existing knowledge of the topic. We are currently carrying out experiments to delineate these factors.
The final question, and one of the most important, is how the software knows whether someone has understood the answers it gave. One might think the easiest thing to do is to ask the person, as we often do with each other, “Did you understand what I said?” Unfortunately, our research shows that people are often unreliable in their self-assessments of whether or not they understood something. We found that middle schoolers are accurate about 2/3 of the time when they say that they understand something and accurate only about 5/8 of the time when they say they don’t understand something. Adults were accurate less than 3/4 of the time when they said they understood something and about 90% of the time when they said they didn’t. These results, which were accepted for publication in the International Journal of Social Science and Economic Research (link to paper below), suggest that more is needed than simply asking people if they understand in order to make sure that they do.
Educational software typically gives students tests to make sure they understand the information they are given. Our software does as well, but we believe that if users are constantly tested every time they ask questions, they will soon stop asking them. Therefore, we need a middle ground. Dr. John Leddo has developed a question-and-answer technique called Cognitive Structure Analysis (CSA) that questions people about what they know to see how well they know it. CSA produces a near-perfect correlation between people’s answers to CSA questions and how well they can solve problems. We believe this can revolutionize educational software and personal assistants by ensuring that the right way to answer people’s questions is used to maximize understanding.
We are also exploring the question of whether there are ways to answer questions that produce an “Aha!” effect, namely a deep understanding that can lead to making connections that normally might not be made. For example, the Wright Brothers, bicycle makers, applied principles of lightweight bicycle construction, rather than heavy train construction, to create the first motorized airplane, even though trains were powered by engines and bicycles weren’t. Albert Einstein imagined the different perspectives someone on a train vs. someone on a platform might have regarding an object in motion to create the theory of relativity. Steve Jobs applied principles of calligraphy in designing Apple’s products. Imagine if we could trigger such insight in the students our software teaches!
Machine learning to allow the software to teach itself new topics by reading websites on the Internet
Current educational software is limited by the knowledge already programmed into it. However, the world is dynamic and knowledge updates rapidly. Current technology does a poor job of keeping up with it. Ask Google to teach you something and you get a long list of websites. It’s up to you to go through them, find the relevant one, read through it to find the relevant material, and then learn it on your own. Ask Siri a question and, if it’s not one in its database, it will respond, “Here’s what I found on the web for…”
This isn’t how a person would behave. If you ask a teacher to teach you something s/he doesn’t already know, the teacher doesn’t give you a list of websites to search through. The teacher reads the websites, learns the material and then teaches it to you. If you ask a human assistant a question about something s/he doesn’t know, the assistant doesn’t give you a list of websites, but does the research and gives you an answer. Shouldn’t software do the same?
At MyEdMaster, we’ve created a program into which you can type a math topic, such as “two-step equations,” and it will go to Google, enter the topic, read through a retrieved website to find the relevant information, and then build its own understanding of the topic. If you ask it to teach you the topic, it will explain it based on its own understanding of what it read. If you ask it to solve a problem, it can. If you ask it to check your step-by-step work for a problem you solve, it will do so and point out and correct any mistakes you make. What’s more, just as a smart human can read something and then infer new knowledge not mentioned in what s/he read, our software reads about two-step equations and then learns about one-step equations, even though that topic is never mentioned on the website, nor was the software asked to learn it. A video of this technology is shown below.
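The idea of representing a learned topic as a procedure, and then deriving a simpler topic from it, can be sketched in a few lines. This is a toy illustration under our own assumptions, not the actual system: the representation (an ordered list of inverse operations) and the names `solve` and `specialize` are hypothetical. A "learned" two-step procedure for `a*x + b = c` undoes the addition and then the multiplication; the one-step case `a*x = c` falls out as the special case `b = 0`.

```python
# Hypothetical "learned" procedure for two-step equations a*x + b = c,
# stored as an ordered list of inverse operations applied to isolate x.
two_step = [
    ("subtract", "b"),   # undo the addition:        a*x = c - b
    ("divide",   "a"),   # undo the multiplication:  x = (c - b) / a
]

def solve(procedure, a, b, c):
    """Apply the learned inverse operations, in order, to the right-hand side."""
    value = c
    for op, operand in procedure:
        amount = {"a": a, "b": b}[operand]
        if op == "subtract":
            value -= amount
        elif op == "divide":
            value /= amount
    return value

def specialize(procedure, b):
    """Infer the simpler one-step procedure: when b = 0 the subtraction
    step is a no-op, so drop it."""
    return [(op, name) for op, name in procedure if not (name == "b" and b == 0)]

print(solve(two_step, 3, 4, 19))                  # 3x + 4 = 19  ->  5.0
print(solve(specialize(two_step, 0), 3, 0, 12))   # 3x = 12      ->  4.0
```

The point of the sketch is the direction of inference: nothing about one-step equations was "taught"; the simpler procedure was derived from the learned two-step one.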
We believe the applications of this technology are limitless. A software product that can teach anyone anything, because it will first go on the Internet and learn the topic itself in a matter of seconds. Robots that program themselves to perform new tasks by first teaching themselves how to do them. An Internet of Things in which all devices can learn from each other, thus increasing their power.
We are currently working on our first commercial application for this technology. Historically, we would build our AI software by having teachers write lessons and then having a knowledge engineer translate the teachers’ lessons into a format our AI technology could read. We are adapting our technology so that it can read the teachers’ lessons directly and convert them into that format itself, thus eliminating the middleman. We are also working on related technology for medical applications for the future Internet of Things. Imagine a person walking into the bathroom to get ready in the morning. S/he asks the mirror, which has sensors and is connected to the Internet, “Am I getting acne?” or “Am I getting fat?” Ideally, if the mirror of the future wasn’t already programmed to make those assessments, it would teach itself how to do so and then answer the person’s questions. We are currently working with a medical doctor to create such technology (although not in mirror form yet). We expect to have this completed by the end of summer 2021.