
Original Research
doi:10.4102/ijmla.v2i1.6 http://www.ijmla.net

Agent-based adaptive e-learning model for any learning management system

Authors:
George W. Musumba (1), Robert O. Oboko (2), Henry O. Nyongesa (3)

Affiliations:
(1) Department of Computer Science, Dedan Kimathi University of Technology, Kenya
(2) School of Computing and Informatics, University of Nairobi
(3) Department of Computer Science, University of the Western Cape

Correspondence to: George Musumba
Email: musumbaggw@gmail.com
Postal address: P.O. Box 657-10100, Nyeri, Kenya

Dates:
Received: 29 Jan. 2013
Accepted: 28 Apr. 2013
Published: 06 June 2013

How to cite this article:
Musumba GW, Oboko RO, Nyongesa HO. Agent-based adaptive e-learning model for any learning management system. Int J Machine Learn Appl. 2013;2(1), Art. #6, 9 pages. http://dx.doi.org/10.4102/ijmla.v2i1.6

Copyright:
© 2013. The Authors. Licensee: AOSIS OpenJournals. This work is licensed under the Creative Commons Attribution License.
Many scholars are interested in improving e-learning in order to provide easy access to educational materials. There is, however, a need to incorporate the ability to classify learners into these learning systems. Learner classification is used adaptively to provide relevant information for the various categories of learners. There is also a need for learning to continue whether learners are on- or off-line: in many parts of the world, especially the developing world, most people do not have reliable continuous internet connections. We tested an Adaptive e-Learning Model prototype that implements an adaptive presentation of course content under conditions of intermittent internet connections. The prototype was tested in February 2011 on undergraduate students studying a database systems course. This study found that models can adapt to characteristics such as the learner's level of knowledge and that, through adaptation, learners are able to study in both on- and off-line modes.
Introduction
Problem statement
The e-learning paradigm should capitalise on two aspects:[1] (1) the elimination of the barriers of time and distance and (2) the personalisation of the learner's experience. Current trends in education and training should put emphasis on identifying methods and tools for delivering just-in-time, on-demand knowledge experiences tailored to individual learners, taking into consideration their differences in skill level, perspective, culture and other educational contexts. However, most of these systems are still static and inflexible, being developed on the basis of 'one size fits all'.[2] As a result, they do not present only relevant information to learners, but instead present all information (learning materials), from basic to advanced concepts, at the same time. This gamut of information, given to learners without considering their specific requirements, makes it difficult for them to perceive and apply the information when answering quizzes and solving problems. Learners get all the information they are supposed to get, but in bits and pieces across different learning stages. Each stage has specific information, and quizzes are based on the information for that stage; accessing all the information at once therefore makes some of it irrelevant.
These systems are also developed to work under constant internet connections (web applications
that must be accessed online in order to function). In these instances, the learners can only
learn when online. These concepts are found in nearly all e-learning systems which have been
deployed by learning institutions and even in organisations that train their staff and clients using
Learning Management Systems (LMS). For example, nearly all university e-learning portals are
only functional online, meaning that learners must be connected to the internet in order to be
able to carry out learning activities. The same case applies to most online training systems for
organisations.
In the modern world, although internet accessibility has increased significantly, many areas still have no access. This is especially so in rural areas of the developing world, where most Internet Service Providers (ISPs) have not yet invested in the infrastructure needed for connectivity. This limitation locks out many people who would otherwise want to participate in the learning process; this disadvantaged group can learn only by being present at a learning centre. Moreover, even where internet is available, the connection does not always have 100% uptime. During internet downtime, learning does not take place unless a physical instructor and learner are at the same venue, or unless the learner waits for the connection to be re-established.
Furthermore, the LMSs that have been developed and distributed to date are installed as complete applications that must be in place before the target groups can use them.
It is necessary to have modules which can be plugged into a system to provide additional functionality as and when necessary, such as synchronisation of course content and profiles from the online model to the off-line models after loss of connection.

The service developed in this research can integrate with any LMS and has the advantage of being deployable for both on- and off-line learning (under intermittent internet connection conditions). A framework is designed in the form of an Application Programming Interface (API)[3] which can be integrated into any LMS and can be used to classify learners dynamically into various categories as defined by information in the learner model.
The K-Nearest Neighbour (KNN) algorithm was used to classify new learners.[4] KNN is a supervised learning method that has been used in many applications in the field of data mining. It classifies objects based on the closest training examples in the feature space: an object is classified by a majority vote of its K nearest neighbours, where K is an integer. The neighbours are taken from a set of pre-existing training examples for which the correct classification is known.[5]
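The majority-vote rule just described can be sketched in code. This is an illustrative implementation only, not the classifier DLL developed in this research; the feature values and class labels below are invented for the example:

```python
import math
from collections import Counter

def knn_classify(query, training_data, k=3):
    """Classify `query` by majority vote of its k nearest training examples.

    training_data: list of (feature_vector, class_label) pairs for which
    the correct classification is already known.
    """
    # Euclidean distance between the query and one training example
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    neighbours = sorted(training_data, key=lambda ex: distance(query, ex[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy example: two features, two classes
examples = [((1, 1), "basic"), ((1, 2), "basic"),
            ((8, 9), "expert"), ((9, 8), "expert")]
print(knn_classify((2, 1), examples, k=3))  # → basic (2 of the 3 nearest are basic)
```

With k=3, the two "basic" examples plus one "expert" example are the nearest neighbours of (2, 1), so the majority vote yields "basic".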
Key focus

This article outlines how to overcome the above challenges by developing an agent-based personalised adaptive learning model. This model is deployed as a service using agent technology, and not just as an application as is the case with other available LMSs. The service, or agent, is packaged as a Dynamic-Link Library (DLL).
In summary, this research is intended to meet the following objectives:
- To develop an adaptive learning model to support learning under conditions of intermittent internet connections.
- To classify learners correctly using the KNN classification algorithm, which considers learners' features as values.
- To update learners' profiles depending on their acquired knowledge, performance in quizzes and classification, so that they can avail themselves of the relevant learning materials.
Related work

E-learning has long been recognised as the new wave in education. It allows learners to study without the limitations of time and space. Despite the advantages of e-learning, most systems have not been designed well enough to respond adaptively to individual learners' characteristics and needs. Granular information is essential for the delivery of the right information, to the right learner, in the right amounts.[6] The development of such systems enables just-in-time learning and the convergence of e-learning with Knowledge Management.
The ideal system should classify learners and provide appropriate learning materials customised for the individual. The 'one size fits all' philosophy results in too much information for users and lacks personalisation.[2] Today, many vendors offer products called Learning Management Systems (LMS), which they claim provide a complete e-learning solution.[6] However, products in this category do not address the need to develop and manage increasing volumes of content in smaller chunks by a larger group of content providers such as learning institutions. Nor do they provide adequate mechanisms for maintaining consistent instructional presentation or for adapting that content to the needs of learners. It is thus important for organisations embarking on an e-learning track as a mode of training to require that the vendor have in place a framework that allows for personalised training.
A survey conducted by Sun et al.,[7] which set out to investigate the critical factors affecting learners' satisfaction in e-learning, revealed that learner computer anxiety, instructor attitude toward e-learning, e-learning course flexibility, e-learning course quality, perceived usefulness, perceived ease of use and diversity in assessments are the critical factors affecting learners' perceived satisfaction. We focus here on e-learning course flexibility and e-learning course quality in order to develop the Adaptive e-Learning Model (AEM).
Adaptive e-learning systems

Traditional Technology-Enhanced Learning (TEL) systems offer very few strategies for the personalisation of educational offerings, which limits the scope for providing tailored, effective TEL experiences to learners.[8] However, adaptive educational hypermedia systems (AEHS) have been developed to address learner dissatisfaction by attempting to personalise the learning experience. Recent research in TEL has focused on the provision of adaptive educational experiences tailored to the particular needs of a learner. This adaptivity can be based on various characteristics of the learner, including knowledge level, goals or motivation. The purpose of such adaptive educational offerings is to maximise learner satisfaction, learning speed (efficiency) and educational effectiveness.[9]
Blochl et al.[2] proposed an adaptive e-learning system that applies a user-centric approach to improve its usability and acceptance by users. E-learner requirements, including user skills, learning styles, learning strategy and other user profile information, were introduced into the system.[2] The user's learning activities are observed and used to update the user profile, and the e-learning system is adjusted according to this dynamic user profile.
Saleh et al.[10] presented an adaptive active e-learning framework consisting of self-learning material, engaging visualisation and self-testing. The framework was implemented using simple tools to support adaptive e-learning for numbering educational material, and it could also be used in other courses such as logic design, image processing, computational models, information theory, information engineering and digital communications. The framework finds a better way to engage learners in the learning process. Their experimental results showed that the model improved the learning process and affected the students positively.
Component technologies and artificial intelligence are used to deliver e-learning. These components include pedagogy agents, interactivity level, quality of feedback, control strategies, tutorial remediation and student models. Pedagogy agents are used for integrating the behaviour of users and the e-learning components of the system. They can be used to check student participation, track student progress through task procedures and address students' errors. Other agents can be used as tools for feedback. User performance during instruction should be analysed in order to monitor learning. Control strategies, planning for content and delivery strategies should be based on learner knowledge and concept structures such as curricula. Tutorial remediation is the component responsible for selecting appropriate actions to be performed by the learner in order to accomplish a pedagogy task. Student models can be used to render individualised instruction in the system: students' instructional activities can be filtered, analysed and sorted based on individual profiles. This kind of system adapts to the changing knowledge requirements of the learner, is interactive and provides regular access to resource materials.[11]

Web-mining techniques have been used to build recommender agent-based e-learning systems. An agent recommends activities to a learner based on his or her access history. The recommendation could be an online activity such as doing an exercise, posting messages on conferencing systems, running an online simulation, or consulting web resources. This agent is claimed to improve course material navigation and to assist the online learning process.[12] By observing user typing events, behaviour when studying lessons in a web browser, tasks and examples, errors made by users and debugging events in the editor, the agent learns to understand user behaviour.[13]
Agent technology

A software agent is a computer programme that is capable of autonomous (or at least semi-autonomous) actions in pursuit of a specific goal.[14] The autonomy characteristic of a software agent distinguishes it from general software programmes. Autonomy implies that the software agent can perform its tasks without direct control, or at least with minimum supervision, in which case it is a semi-autonomous software agent. Software agents can be grouped, according to specific characteristics, into different software agent classes.[15] The available literature does not agree on the different types or classes of software agents. As software agents are classified according to a common set of characteristics, different classes often overlap, implying that a software agent might belong to more than one class at a time. For the purposes of this research, we distinguished between two simple classes of software agents, namely stationary agents and mobile agents. Agents in both these classes might, or might not, have any or a combination of the following characteristics: a user interface, intelligence, adaptivity, flexibility and collaborative properties.[16]
Whether or not an agent has a user interface depends on whether it collaborates with humans, other agents or hosts. User interfaces are commonly found only where agents interact with humans. According to Wooldridge,[17] intelligence implies the inclusion of at least three distinct properties, namely reactivity, proactiveness and social ability. Reactivity refers to the agent's ability to perceive its environment and respond to changes that occur in order to achieve its design goals; proactiveness is the agent's ability to take the initiative in its environment in order to achieve its design goals; and social ability alludes to the collaborative nature of the agent.
There are different ways to define the collaborative nature of software agents. For the purposes of this paper we use Nienaber's[18] definition, in which the collaborative nature of a software agent refers to the agent's ability to share information or barter for specialised services so as to cause a deliberate synergism amongst agents. It is expected that most agents should have a strong collaborative nature without necessarily implying other intelligence properties. Adaptivity can also be regarded as an intelligence property, although it is not counted as a prerequisite for identifying an agent as intelligent. Adaptivity refers to an agent's ability to customise itself on the basis of previous experiences. An agent is considered flexible when it can choose dynamically which actions to invoke, and in what sequence, in response to the state of its external environment.[3]
A stationary agent can be seen as a piece of autonomous (or semi-autonomous) software that resides permanently on a particular host. An example is an agent that performs tasks on its host machine such as accepting mobile agents, allocating resources, performing specific computing tasks, enforcing security policies and so forth.

A mobile agent is a software agent that has the ability to transport itself from one host to another in a network. This ability allows a mobile agent to move to a host that contains an object with which the agent wants to interact, and then to take advantage of the computing resources of the object's host in order to interact with that object. An example of a mobile agent is provided by a flight-booking system, where a logged request is transferred to a mobile agent that then traverses the web seeking suitable flight-information quotations as well as itineraries.

We considered only stationary agents in this research. Agents reside on host devices and interact with each other only through the implemented functions.
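As an illustration only (the agents in this research are packaged as DLLs; the class and method names below are invented), a stationary agent can be modelled as an object that resides on one host and exposes its behaviour through explicit functions:

```python
class StationaryAgent:
    """Minimal sketch of a stationary agent: it lives on one host and
    reacts to environment events without direct supervision."""

    def __init__(self, host):
        self.host = host

    def perceive(self, event):
        # Reactivity: respond autonomously to a change in the environment.
        return self.handle(event)

    def handle(self, event):
        raise NotImplementedError


class ResourceAgent(StationaryAgent):
    """Example stationary agent that allocates resources on its host machine."""

    def __init__(self, host):
        super().__init__(host)
        self.allocated = {}

    def handle(self, event):
        kind, client = event
        if kind == "allocate":
            self.allocated[client] = True
            return f"resource granted to {client} on {self.host}"


agent = ResourceAgent("lab-server")
print(agent.perceive(("allocate", "learner-01")))
```

The agent never migrates: all interaction happens through the functions it implements, which is the distinction from the mobile agents described above.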
Research method and design

The model was set up in a computer laboratory and 30 students were allowed to use it for six hours a day for five days. An instructor introduced the system to the students, explaining the objectives of the model, how it works and the expected outcomes of the experiment.

The instructor then demonstrated the use of the system for the learners: how to register, how to answer questions, the user interface changes that result from adaptation, subsequent processes and, finally, filling out a questionnaire about the system. The students were then shown how to learn both online and off-line; they were given an explanation of the similarities of individual profile status in both remote and local models, how to make a change on a local model whilst off-line, how to connect to the remote model and how to check the similarities of the profiles both locally and remotely.
Two sets of learners were used. One group connected to both the intranet and the internet, downloaded the information to the database in the local module and used it off-line to learn. The other group learned online only. The two groups were swapped halfway through the course and the process was repeated. The internet was disconnected and reconnected five times for the online group. The test scores were examined to find out whether there was a correlation between the treatments (i.e. learning online versus learning under intermittent conditions supported by the DLL).
For the second part of the evaluation, the learners were required to assess the system. A questionnaire was provided for the students to answer after the learning process; the system presented it online once all the requirements had been satisfied. The questions were designed to capture data related to the research objectives. Aspects considered included model usability, challenges in using the system and recommendations for improving the system.
The learning process
The following is a detailed description of the learning process
as designed in this research.
Step 1: Registration of new learners

This was the first step, where the details of a new learner were captured and a user name and password were created for subsequent logins and use of the system. An existing learner could also log in and continue with the learning process. The learner's updated profile would determine the information available to him or her.
Step 2: Prerequisite questions

These questions tested whether the new learner met the prerequisite conditions to be allowed to study the course. A combination of a number of the questions showed whether the new learner qualified to proceed with the course or not.
Step 3: Initial classification questions

At this stage, questions were designed to cover all sections of the course, from the basic level through to the expert level. The basic level contained the introductory concepts of the course and the expert level contained its most advanced concepts. Questions were designed so that those presented at the beginning tested the basics of the course whilst those presented at the end tested the complex concepts. Each question was given a weight. The weights also reflected the level of the course being tested, hence weights increased from the first question to the last. If a learner failed the first questions and subsequent ones, he or she would be classified as a basic learner. Depending on how the learner performed in each section, together with other learning attributes, the learner was classified into an appropriate class level.
Step 4: Pointer to the appropriate level of notes and questions

Once a learner's class level was determined, the relevant learning information and subsequent section questions were highlighted. Reading time for the section notes was calculated. The learner had the option either to read the section notes or to answer questions only. In the former case, the learner was provided with one section quiz and classification attributes such as quiz time and scores were determined. In the latter case, two sets of section quizzes were provided, with the second quiz being more detailed.
Step 5: Determine new class level

Subsequent classification was carried out to determine the learner's new class level, and the relevant information was relayed to the learner.

Step 6

Steps 4 and 5 were repeated until the expert level was reached.

Step 7: Course evaluation

Upon fulfilling all the requirements for the expert level, both soft and hard copies of the evaluation questionnaires were provided and the learner's assessment of the system was captured.
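Steps 1 to 7 above amount to a classify-serve-assess loop. The sketch below is our own paraphrase of that control flow with invented function names (the article does not publish its code); the classifier, notes index and quiz engine are passed in as stand-in callbacks:

```python
LEVELS = ["beginner", "intermediate", "advanced", "expert"]

def learning_loop(learner, classify, serve_notes, give_quiz):
    """Drive a learner through the adaptive loop until the expert level is reached.

    learner: mutable dict of learner attributes (profile).
    classify, serve_notes, give_quiz: callbacks standing in for the
    classifier agent, the notes index and the quiz engine respectively.
    """
    level = classify(learner)                  # Step 3: initial classification
    while level != "expert":
        serve_notes(learner, level)            # Step 4: point to relevant notes
        learner["score"] = give_quiz(learner, level)
        level = classify(learner)              # Step 5: reclassify on updated attributes
    return level                               # Step 6 exit: expert level reached
```

A usage example with trivial stubs: a classifier that promotes any learner scoring 80 or more straight to expert will terminate the loop after one quiz.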
System architecture

The model had four modules, namely the (1) learner module, (2) classifier agent module, (3) synchroniser agent module and (4) data storage module. All the modules were linked to work as one, as shown in Figure 1. The modules could also function independently as long as the database (data store) was available.

The learner agent module

The main function of the learner module was to facilitate the learning process both on- and off-line. This module made possible the interaction between the learner and the system and was where the learner could register or log in, access his or her profile details, get learning materials, read the notes, answer section quizzes and view all changes as they occurred. This module was connected to the data store and displayed information from the data store to both the learner and the instructor.

There were two versions of the learner model, namely (1) the client model (off-line or stand-alone model) and (2) the server model (client-server or server-centric model). The client model was installed on the client (local) machine and was used whilst learning off-line. The server model was installed on the (remote) server machine and was used for online learning. It also facilitated profile updates by updating the off-line model whenever the internet connection was re-established.
The classifier agent module

The classifier

The classifier module, or classifier DLL, was a stationary agent that used the KNN algorithm to classify new learners for the first time and to carry out all subsequent classifications of existing learners. The parameter K is an integer representing the number of nearest neighbours to a new learner, whose most common class becomes the new learner's class. The default value of K can be fixed to an odd number such as 3. However, a low value of K restricts the classification of the new learner to the classes of only a few neighbours. The best choice of K depends on the data: in general, larger values of K reduce the effect of noise on classification but make the boundaries between classes less distinct. A good K is chosen using heuristic techniques such as cross-validation.

The parameter K should also be odd so that a majority vote is always attained; an even K can result in a tied vote that hampers correct classification. For this research, K was set to 9. This figure was arrived at after considering that the number of learners would increase with time, and to avoid restricting classification to only a few training examples.
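The cross-validation heuristic mentioned above can be sketched as follows. This is an illustrative leave-one-out procedure of our own, not the method the authors used to settle on K = 9; the helper names are invented:

```python
import math
from collections import Counter

def knn_predict(query, examples, k):
    """Majority-vote prediction among the k nearest examples (Euclidean distance)."""
    near = sorted(examples, key=lambda ex: math.dist(query, ex[0]))[:k]
    return Counter(label for _, label in near).most_common(1)[0][0]

def choose_k(examples, candidates=(3, 5, 7, 9)):
    """Pick K by leave-one-out cross-validation over odd candidates:
    for each K, predict every example from the remaining ones and
    count correct predictions; return the K with the most hits."""
    def accuracy(k):
        hits = 0
        for i, (features, label) in enumerate(examples):
            rest = examples[:i] + examples[i + 1:]
            if knn_predict(features, rest, k) == label:
                hits += 1
        return hits
    return max(candidates, key=accuracy)
```

Restricting the candidates to odd numbers preserves the tie-breaking property argued for in the text.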
For the feature vector representing a learner, the KNN training data had features whose values could be combined to determine the class of the learner. If there are too many attributes, say more than 20, of which only two are relevant to the class of a particular query instance, the problem commonly referred to as the curse of dimensionality is experienced:[4] the distance between training examples and the query instance becomes dominated by the large number of irrelevant attributes. To avoid this scenario, this research used five attributes, namely (1) score, (2) quiz time, (3) reading time, (4) prerequisite score and (5) weight of questions.
Determining factors in settling for these attributes

1. Score: Normally, students' performance is determined by their scores in examinations. All learning institutions use scores as a major factor in grading their students: the higher the score, the higher the grade awarded. In this research, we designed score ranges for all the classes, guided by the grading system used for undergraduate computing courses in the university where the study took place. The ranges and their classes are shown in Table 1.
2. Quiz time: This is the time the learner uses whilst taking a test. Normal practice is that examinations have a specific time allocated for their completion: when the allocated time is over, the learner stops or, in the case of an online examination, is timed out. In this research, the system did not time out a learner who exceeded the set time. Instead, the more time spent beyond the set time, the lower the performance recorded and the lower the class assigned. Conversely, if the learner took less than the set time, a higher classification level was assigned. The combination of performance and the time or other resources spent in achieving the learning is called learning efficiency[19] and is a measure of a learner's expertise. Expertise is higher for students who achieve a score with minimum effort than for those who achieve the same performance after expending more resources.
3. Reading time: This is the time taken by the learner to read the learning material for the level or section. Normally, there is no time limit for reading notes in preparation for an examination: a learner can take as long as necessary. In this research, a reasonable time threshold was set for reading the notes, to enable both slow and fast readers to complete a topic. It was adjusted during testing of the course to make it appropriate.
[Figure 1 appears here. Recoverable content: the classifier agent module takes the input feature space <pre-requisite, quiz-time, read-time, quiz-score, weight-of-question> and outputs a class; the synchroniser agent module checks the connection status, compares the status of records in the local and remote databases and updates the out-of-date one; the learner agent module is deployed as a client/local version when learning takes place off-line and as a server/remote version when learning takes place online. Learner attributes, profile updates and learning content flow between the modules, with the synchroniser keeping the client and server models in step.]

Source: Authors' own construction
FIGURE 1: System architecture.
TABLE 1: Student classes and their corresponding score ranges.

Score range    Class
0–39           Beginner
40–59          Intermediate
60–79          Advanced
80–100         Expert

Source: Authors' own construction
In considering this attribute, it was assumed that fast readers also perform better than slow readers, so the less time a learner took to read the notes, the better the performance achieved and the higher the class assigned. A learner who took too much time beyond the threshold was assumed to be a slow learner and was assigned a basic class level.

4. Prerequisites: These were the conditions that a learner had to satisfy before being allowed to proceed with the learning process. They included interest in studying the course. A learner may either meet or fail to meet the conditions. This attribute cannot be used independently to classify a learner, but must be applied in combination with the other defined attributes for proper classification.
5. Weight of questions: The questions were weighted in increasing order from the first question to the last. In addition, the questions were designed such that basic questions come first and complex ones come toward the end. Given this design, basic questions were assigned less weight than complex questions. If a learner failed to answer the basic questions correctly, it was assumed that he or she was still a beginner, and the learner was assigned to the beginner-level class.
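The score ranges of Table 1 define a simple mapping from a percentage score to a class label. A minimal sketch of that mapping (the function name is ours, not the article's):

```python
def class_for_score(score):
    """Map a percentage score to a learner class per Table 1's ranges:
    0-39 Beginner, 40-59 Intermediate, 60-79 Advanced, 80-100 Expert."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 40:
        return "Beginner"
    if score < 60:
        return "Intermediate"
    if score < 80:
        return "Advanced"
    return "Expert"

print(class_for_score(45))  # → Intermediate
```

Note that in the actual model this score is only one of five attributes fed to the KNN classifier, not a classification rule on its own.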
Course level

The class level was based on the experience of the learner. For simplicity's sake, it was assumed that learners can be categorised into four levels, with novices in the course being studied referred to as 'Beginner'. The second category, we thought, should have somewhat more experience, hence the label 'Intermediate'. 'Advanced' learners were those with extensive knowledge of the subject matter, and 'Expert' learners were those able to apply the knowledge from the study. No particular criteria were used in coming up with these learner classes.
Choice of the model training data

The training data had two sections: the feature values and the target function values (i.e. the associated class), represented as a vector in the form <a1, a2, a3, ..., an> <Ta>, where <a1, a2, a3, ..., an> was the feature vector and <Ta> represented the target function value.

In this research, the training example vectors were defined as <prerequisite, score, readTime, quizTime, weight> <course level>. Table 2 has training examples demonstrating how the two vectors were populated.
Given the training data, when a new learner joins with
feature vector values such as <1, 45, 43, 10, 9> <?>, the KNN
algorithm takes the new instance and compares it with
the training data. The distances between the new instance
attributes and the training data attributes are calculated.
The total distance of each training example from the
query instance is determined by summing all the attribute
distances for that example. The closest nine neighbours
were then identified and the most popular class amongst
them was assigned to the new instance.
The assigned class was used to point to the relevant notes
in the notes index and the notes were then displayed to the
learner.
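The distance computation and majority vote described above can be sketched as a minimal KNN classifier. This is an illustrative reconstruction, not the paper's actual code: the class and method names are assumptions, and plain (unweighted) Euclidean distance over the five attributes is assumed.

```java
import java.util.*;

// Minimal KNN sketch over the <prerequisite, score, readTime, quizTime,
// weight> feature vectors; names are illustrative, not the system's code.
public class KnnClassifier {

    public static String classify(double[][] train, String[] labels,
                                  double[] query, int k) {
        // Sort the indices of the training examples by distance to the query.
        Integer[] idx = new Integer[train.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> distance(train[i], query)));

        // Majority vote amongst the k nearest neighbours.
        Map<String, Integer> votes = new LinkedHashMap<>();
        for (int i = 0; i < Math.min(k, idx.length); i++)
            votes.merge(labels[idx[i]], 1, Integer::sum);
        return Collections.max(votes.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    // Euclidean distance between two feature vectors.
    static double distance(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
}
```

With the study's full training set, k = 9 would be used as described above; with only the four Table 2 examples available, a smaller k (e.g. k = 1) is needed.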
It is important to note that after classification of the new
instance is carried out, the instance becomes part of the training
data. The classifier agent receives data from the environment
and, after applying the KNN algorithm, classifies the learner
and updates his or her profile dynamically. This agent is
autonomous, as it does not require any supervision and
makes decisions depending on the prevailing information.
The agent trains the model so that, based on its experience
with the existing training data, it can classify new instances
correctly.
The synchroniser agent module
The synchroniser agent, or synchroniser DLL, was also a
static autonomous agent that synchronised the learner
model contents between the local and remote databases. It
collaborated with the classifier agent and learner module
so that, after the classifier agent had made changes to
the learner status, the learner's profile matched both locally
and remotely.
The agent checked the connection status of the models
(client and server) by trying to establish a connection to
the URL of the online application, using its public Internet
Protocol address. Depending on whether the application
was accessible, connection establishment was confirmed
or failed. If it was confirmed, the remote version was used;
if not, the local version was used. The model's connections
to the local and remote databases were also checked. After
establishing the connection status, the status of the contents
was compared, determined by examining which database
had more records and/or the latest records. If the local
copy was the latest, then the remote copy was updated, and
vice versa.
The module also displayed a message to a learner if there
was no connection to the remote server, but allowed the
learner to continue learning with a local copy which was later
synchronised with the remote copy when the connection
was reestablished. For synchronisation of both databases
to take place, the synchroniser agent in the client machine
located the domain address for the remote server and then
connected to the database in the remote server. All records
were compared, and the records of the side with more or the
latest records were copied to the side with the missing data.
This update was made per profile, so that only the affected
profile(s) were updated.

TABLE 2: Sample training data based on the feature vector format.
Prerequisite  Score  ReadTime  QuizTime  Weight  Course level
1             91     40        15        8       Expert
1             75     47        10        9       Advanced
1             30     65        35        4       Beginner
1             50     50        20        7       Intermediate
Source: Authors' own construction
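The per-profile comparison rule the synchroniser applies can be sketched as follows. The class, method and parameter names are assumptions for illustration, not the paper's published code; the rule itself (the side with more records, or with the later records, is authoritative and is copied to the other side) is taken from the description above.

```java
// Hedged sketch of the synchroniser's per-profile comparison rule;
// names are illustrative. The side with more records, or with the
// later last-modified timestamp, is copied to the other side.
public class SyncDecision {

    public enum Direction { LOCAL_TO_REMOTE, REMOTE_TO_LOCAL, IN_SYNC }

    public static Direction decide(int localCount, long localLastModified,
                                   int remoteCount, long remoteLastModified) {
        // More records on one side: that side is authoritative.
        if (localCount != remoteCount)
            return localCount > remoteCount
                    ? Direction.LOCAL_TO_REMOTE : Direction.REMOTE_TO_LOCAL;
        // Same count: the side updated more recently wins.
        if (localLastModified != remoteLastModified)
            return localLastModified > remoteLastModified
                    ? Direction.LOCAL_TO_REMOTE : Direction.REMOTE_TO_LOCAL;
        return Direction.IN_SYNC;   // nothing to copy
    }
}
```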
The following is the connection-status testing algorithm:

Start
  Client computer pings the address of the remote computer
  If there is a reply from the remote computer then
    connection status := true
    Learning is done using the remote computer/server (online version)
    Update the obsolete version of the database
  Else
    connection status := false
    Learning is done using the client computer (off-line version)
End
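In Java, the language the model was implemented in, the ping step can be approximated by attempting an HTTP connection to the online application's URL, as the synchroniser agent does. The sketch below is an assumption for illustration (the class name, the use of a HEAD request and the timeout handling are not from the paper):

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Hedged sketch of the synchroniser's connection test; names are
// illustrative. Returns true when the remote application answers,
// so the online version can be used; false means off-line mode.
public class ConnectionChecker {

    public static boolean isRemoteReachable(String urlString, int timeoutMs) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(urlString).openConnection();
            conn.setConnectTimeout(timeoutMs);
            conn.setReadTimeout(timeoutMs);
            conn.setRequestMethod("HEAD");      // no response body needed
            int code = conn.getResponseCode();
            return code >= 200 && code < 400;   // reachable and serving
        } catch (Exception e) {
            return false;                       // fall back to off-line mode
        }
    }
}
```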
Results
The results of the research are explained in terms of the
research objectives.
The first objective was to design a learner model that would
use the KNN learning algorithm to be trained and to classify
new learners.
Table 3 shows 10 of the 30 new query data items that were
classified using KNN. The experiment was carried out with
30 students and the results were captured as shown in Table 3.
A value of 0 indicates query data that was classified incorrectly,
whilst 1 indicates query data that was classified correctly.
Of the 30 learners studied, only 5 were classified incorrectly.
The percentage accuracy was therefore:

(25 / 30) x 100 = 83.3% [Eqn 1]
In this model, attributes were defined and used to train the
classifier so that, when presented with a new learner
represented by a vector of attribute values, it classified
him intelligently. In most existing LMSs, learners simply
read the learning materials at their own pace; there are no
checks in place to determine whether reading is actually
taking place. In this model, however, the reading time for
notes and the time taken by the learner to do the quiz are
recorded, and these times contribute to both classification
and subsequent profile updates.
The second objective was to ensure that learner profiles
were updated as the learners continued with the learning
process, and that relevant learning information was displayed
to them based on their profiles.
After the experiment, the learners were given hard-copy
questionnaires. Table 4 summarises their answers. The survey
collected information related to the first and second objectives
in order to double-check results from the logging of learner
activities.
The results show that an overwhelming majority of the
students indicated that they were able to learn both on-
and off-line. A great majority also indicated that it was
easy to learn with the system, that they could recommend
the system to others, that they were able to get appropriate
notes and that they were classified fairly. The questions and
prerequisite questions were also judged to be well designed.
The students further indicated that their profiles were updated
and that the timings for the course were appropriate. The
highest percentage of agreement was 100% and the lowest
affirmative percentage was 60%.
Discussions and Conclusions
In considering the research objectives and other issues of
interest, together with the results from the study, a number
of conclusions were drawn.

The first objective concerned developing the classifier
module to classify learners appropriately. From the
percentage of learners classified correctly, it can be concluded
that the model was accurate in classifying learners, with an
accuracy of 83.3%. Likewise, from the survey results, 27 out
of 30 (90%) learners said that they were classified as they
expected.
The second objective was to ensure that the learners' profiles
were updated. From the questionnaire results, this objective
was achieved, since 25 out of 30 (83%) learners stated that
their profiles were updated. This is also seen from the log of
updates of the learner class from 100 (beginner) through 200
(intermediate) to 300 (advanced) and 400 (expert). A sample
of the logs is shown in Table 5.
TABLE 3: Sample results showing how learners were classified.
Student ID  Prerequisite  Score  Read Time  Quiz Time  Weight  Output (Class)  Correctly classified
10 01       1             10     50         31         3       Beginner        1
10 02       0             40     60         30         4       Beginner        1
10 03       1             90     43         19         9       Expert          1
10 04       0             47     65         28         5       Beginner        0
10 05       1             39     54         26         7       Intermediate    1
10 06       1             65     45         15         9       Advanced        1
10 07       1             100    30         8          8       Expert          1
10 08       1             90     32         10         10      Advanced        0
10 09       1             95     40         20         8       Expert          1
10 10       0             100    29         6          9       Expert          1
Source: Authors' own construction
Note: The column 'Correctly classified' has value 0 for NO and 1 for YES.
The third objective of this research was to enable the learning
process to take place both on- and off-line. From the work of
other scholars, it is evident that learning systems and/or
models are typically developed as applications deployed for
online use: the end users must always be online in order to
learn.
In this research, a model was developed as an API, packaged
as a DLL, which can be used as a service by other LMS
developers and users. It can be used either independently or
integrated with other LMSs. A master API was developed
and installed on a server, to be accessed by client APIs located
remotely. The client APIs came in two versions: DLLs for
off-line learning and the HTTP protocol for online learning.
The model used for this research had functionality that
could detect an internet connection and then connect to the
server version of the model. The user first logged into the
model on the client machine and was then able to continue
learning even without an internet connection. Replication
of the database took place whenever an internet connection
was available: whichever database held the more up-to-date
information was replicated into the one holding the older
version. This property enabled learners to continue their
learning seamlessly whether or not an internet connection
was established, as shown in the summary of their responses
to the post-experiment survey.
User profiles were also updated, so that when the internet
connection timed out, both databases were on a par in terms
of learning and learner information. The learners could thus
continue learning off-line with the most up-to-date profile
and learning information. This supported the students'
continued learning even under conditions of intermittent
internet connections.
Limitations and Challenges
Classification was based only on the training data attributes
and how close they were to those of already classified
learners; other learner characteristics were not considered.
Bearing in mind that the KNN algorithm is usually applied
with up to 20 attributes, the five attributes used in this
research were possibly not enough to give conclusive results.
The study was conducted over six hours a day for five
days. This was not a long time, but was dictated by other
factors, such as the duration of time available for the
research. The amount of data available for use during
classification was therefore also limited. It would also have
been ideal to observe changes in learner knowledge levels
over an entire semester instead of over just five days of
intensive work.
Only the KNN algorithm was used in the experiment. It
would have been better to compare its performance against
that of other algorithms.
During the testing stage of the model, not all stakeholders
were involved, due to time constraints. It would possibly
have been a good idea to let them participate later so that
the model could gain wider acceptance.
Future work
The authors would like to note the following for intended
users. As it stands, this model is installed on the server and
the client machine as separate entities, after which the client
version accesses the server version for updates. If the learner
does not have the application installed, he will only be able
to use the online version. An online downloadable version
should be made available for installation by any interested
learner anywhere in the world. That way, the learner can
download the system and continue learning from any
machine, especially when travelling from one place to
another, because whenever there is an internet connection,
updates to the learner profile and learning activities are
logged to the online server, which will therefore always
contain the most up-to-date information about the student's
learning.
This application was developed in the Java language, which
is resource-intensive and requires higher-specification
computers to run efficiently. Such computers, however, may
not be available to everyone who might be interested in
learning with this model. It is therefore recommended that
research be carried out into developing the application in a
lighter-weight programming language.
TABLE 4: Summary of responses to questionnaires after the learning process.
Question  Question content                                                   YES  NO
1         Would you recommend this learning model to someone else?          27   3
2         Is this learning model easy to use?                               28   2
3         Were you able to learn online?                                    30   0
4         Were you able to learn off-line?                                  22   8
5         Do you think you were classified fairly?                          27   3
6         Were you able to get appropriate notes?                           28   2
7         Were the questions well designed?                                 26   4
8         Were the prerequisite questions appropriate?                      30   0
9         Were the timings (time allocated) appropriate for all sections?   18   12
10        Did your profile get updated?                                     25   5
11        Did you face any challenges that relate to the research
          objectives?                                                       3    27
12        Do you have recommendations that relate to the research
          objectives?                                                       6    24
Source: Authors' own construction
TABLE 5: Learners' history report for one course unit.
Student ID  Classified to  Date
8888        100            Wed Jan 19 13:28:52 EAT 2011
8899        400            Wed Jan 19 13:28:52 EAT 2011
8877        400            Wed Jan 19 13:28:52 EAT 2011
8866        100            Wed Jan 19 13:28:52 EAT 2011
8855        300            Wed Jan 19 13:28:53 EAT 2011
8833        300            Wed Jan 19 13:28:53 EAT 2011
8822        100            Wed Jan 19 13:28:53 EAT 2011
8811        200            Wed Jan 19 13:28:53 EAT 2011
Source: Authors' own construction
Acknowledgements
The authors thank the editors and reviewers of the
International Journal of Machine Learning and Applications
for their constructive comments on this article.
Competing interests
The authors declare that they have no financial or personal
relationship(s) that may have inappropriately influenced
them in writing this article.
Authors' contributions
M.G.W. (Dedan Kimathi University of Technology) was
the main researcher; O.O.R. (University of Nairobi) and
N.O.H. (University of the Western Cape) provided guidance
during the research process.