How Ubiquitous Computing can Support Language Learning
Hiroaki Ogata and Yoneo Yano
Faculty of Engineering, University of Tokushima, Japan
{ogata, yano}@is.tokushima-u.ac.jp
Abstract
This paper describes a computer supported ubiquitous
learning (CSUL) environment called CLUE, in which
learners contribute their own knowledge about language
learning gathered in everyday life, share it within the
community, and discuss it. The paper focuses on a
knowledge awareness map and its design, implementation
and evaluation. The map visualizes the relationships
between the shared knowledge and the current and past
interactions of learners, and plays a very important role
in finding peer helpers and inducing collaboration.
Keywords: ubiquitous learning, computer assisted
language learning, collaborative learning, knowledge
awareness, authentic learning.
1. Introduction
Ubiquitous computing [1] will help organize and
mediate social interactions wherever and whenever these
situations might occur [8]. Its evolution has recently been
accelerated by improved wireless telecommunications
capabilities, open networks, continued increases in
computing power, improved battery technology, and the
emergence of flexible software architectures. With these
technologies, an individual learning environment can be
embedded in real everyday life. The fundamental issue
is how computer software can appropriately support
ubiquitous learning.
The challenge in an information-rich world is not
only to make information available to people at any time,
at any place, and in any form, but specifically to say the
right thing at the right time in the right way [6]. A
ubiquitous computing environment enables people to learn
at any time and in any place, but the fundamental issue is
how to provide learners with the right information at the
right time in the right way. This paper tackles the issue of
right-time, right-place learning (RTRPL) in a ubiquitous
computing environment.
In particular, we focus on language learning as the
application domain of this research, because language use
is strongly influenced by situations. One type of user of
this system is an overseas student at a university in Japan
who wants to learn the Japanese language. The other is a
Japanese student who is interested in English as a second
language and plays the important role of helper for the
overseas student. Learners with PDAs (Personal Digital
Assistants) store and share useful expressions that are
linked to places in everyday life, and the system then
provides each learner with the right expressions in the
right place. For example, if a learner enters a hospital,
the expressions appropriate to that place are provided at
that moment (RTRPL). It is very important to encourage not
only individual learning but also collaborative learning,
in order to augment practical communication among learners
and the accumulation of expressions.
Knowledge Awareness (KA) is defined as awareness of
the use of knowledge [9,10]. In a distance-learning
environment, it is very difficult for a learner to be aware
of other learners' use of knowledge, because the learner
cannot observe their actions at remote sites across the
Internet. KA messages inform a learner about other
learners' real-time or past actions (look-at, change,
and discuss) that have something to do with knowledge
the learner was or is presently engaged with. Example
KA messages are "someone is changing the same
knowledge that you are looking at" and "someone discussed
the knowledge which you have inputted." These messages
make the learner aware of someone:
(1) who has the same problem or knowledge as the
learner;
(2) who has a different view about the problem or
knowledge; and
(3) who has the potential to assist in solving the problem.
Therefore, these domain-independent messages can
enhance collaboration opportunities in a shared knowledge
space, and make it possible to shift from solitary learning
to collaborative learning in a distributed learning space.
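The generation of such domain-independent KA messages can be sketched as follows. This is a minimal, hypothetical sketch: the class and method names, the event model, and the message wording are our assumptions for illustration, not the actual Sharlok/CLUE implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative sketch: whenever another learner acts on a knowledge item
// the current learner has looked at or inputted, an awareness message is
// produced. The item set and event list stand in for the shared space.
public class KnowledgeAwareness {

    public enum Action { LOOK_AT, CHANGE, DISCUSS }

    public static class Event {
        public final String learner;
        public final Action action;
        public final String knowledgeId;

        public Event(String learner, Action action, String knowledgeId) {
            this.learner = learner;
            this.action = action;
            this.knowledgeId = knowledgeId;
        }
    }

    // engagedItems: knowledge items the current user is or was engaged with.
    public static List<String> messagesFor(String user, Set<String> engagedItems,
                                           List<Event> events) {
        List<String> messages = new ArrayList<>();
        for (Event e : events) {
            // Skip the user's own actions and unrelated knowledge items.
            if (!e.learner.equals(user) && engagedItems.contains(e.knowledgeId)) {
                messages.add(e.learner + " performed " + e.action
                        + " on knowledge \"" + e.knowledgeId
                        + "\" that you are engaged with");
            }
        }
        return messages;
    }
}
```

Because the check only compares learner identity and knowledge identity, the same logic applies regardless of the learning domain, which is the point made above.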
In order to induce collaborative learning, this paper
proposes a KA map that visualizes KA information for
ubiquitous learning environments. The map helps learners
recognize and connect with collaborators in the shared
knowledge space; on this map, the system identifies
learning companions who can help solve a problem.
The characteristics of the map are:
(1) Visualization of the objects on the map and of
expressions as educational materials;
(2) Visualization of the links between expressions and
learners, to induce collaboration;
(3) Recommendation of appropriate collaborators on the KA
map, to help find suitable partners.
We are developing an open-ended collaborative learning
support system called CLUE (Collaborative Learning
support-system with a Ubiquitous Environment). CLUE is a
prototype system implementing the KA map, with facilities
for sharing individual knowledge and learning through
collaboration.
2. Computer Supported Ubiquitous
Learning
CSUL is defined as a ubiquitous learning
environment supported by embedded and invisible
computers in everyday life. Figure 1 compares four
learning environments, following [9]. CAL (computer
assisted learning) systems using desktop computers are not
embedded in the real world and are difficult to move;
such systems can therefore hardly support learning at any
time and in any place.
Compared with desktop computer assisted learning,
mobile learning is fundamentally about increasing
learners’ capability to physically move their own learning
environment with them. Mobile learning is implemented
with lightweight devices such as PDAs and cellular
phones. These mobile devices connect to the Internet
through wireless communication technologies and enable
learning at any time and in any place. In this situation,
however, computers are not embedded in the learner's
surrounding environment and cannot seamlessly and
flexibly obtain information about the context of his/her
learning.
In pervasive learning, computers can obtain
information about the context of learning from the
learning environment where the small devices such as
sensors, pads, badges, and so on, are embedded and
communicate mutually. A pervasive learning environment
can be built either by embedding models of a specific
environment into dedicated computers, or by building
generic capabilities into computers to inquire, detect,
explore, and dynamically build models of their
environments. The former approach, however, makes the
availability and usefulness of pervasive learning limited
and highly localized.
Finally, ubiquitous learning integrates high mobility
with a pervasive learning environment: while the learner
moves with his/her mobile device, the system dynamically
supports his/her learning by communicating with
computers embedded in the environment. Under a broad
definition of ubiquitous learning, however, both pervasive
learning and mobile learning fall into the category of
ubiquitous learning.
[Figure 1 arranges the four environments on two axes: the
level of embeddedness (vertical) and the level of mobility
(horizontal). Desktop-computer assisted learning is low on
both axes; pervasive learning is high in embeddedness but
low in mobility; mobile learning is high in mobility but
low in embeddedness; ubiquitous learning is high on both.]
Figure 1: Comparison of learning environments
(based on [9]).
2.1 Learning Theories for CSUL
CSUL is supported by pedagogical theories
such as on-demand learning, hands-on or minds-on
learning, and authentic learning. A CSUL system provides
learners with on-demand information, such as advice from
teachers or experts, on the spot at the precise moment they
want to know something. Brown, Collins, and Duguid [2]
define authentic learning as coherent, meaningful, and
purposeful activities. When classroom activities are
related to the real world, students take great delight in
their studies. Three types of learning help ensure
authentic learning: action, situated, and incidental
learning [7]. Action learning is a practical process in
which students learn by doing, by observing and imitating
the expert, and by getting feedback from teachers and
their fellow pupils. Usually, learning is promoted by
connecting knowledge with workplace activities.
Situated learning is similar to action learning
in that trainees are sent to school-like settings to learn
and understand new concepts and theories to be applied
later in practice. Knowledge is developed through
authentic activities and important social interactions.
“Cognitive apprenticeship methods try to enculturate
students into authentic practices through activity and
social interaction in a way similar to that evident in craft
apprenticeship” ([2], p.37).
Incidental learning includes unintentional and
unexamined learning from mistakes, unexpected incidents,
and so on. For example, a child may obtain an unexpected
result in the science lab by mistakenly adding the wrong
liquid to an experiment, and this may lead to a great
discovery. Learners discover something while they are
doing something else; such learning is therefore
considered a surprise by-product. Knowledge from
incidental learning develops self-confidence and increases
self-knowledge in learning.
There are three forms of experiential learning:
action learning, future search, and outdoor education.
Action learning is a social process of solving
difficulties in which learners do things and think about
what they are doing. In the classroom, action learning is
a problem-solving process of developing knowledge and
understanding at the appropriate time. The future search
process develops thinking and understanding; it is not
about problem solving, but is rather an exercise in
developing insights and understanding, learning from one
another, and reducing misunderstandings. Outdoor
education is a program in which team members apply their
new learning during an outdoor experience in order to
gain more insights through challenge activities before
returning to the job. Learners integrate thoughts and
actions with reflection on the outdoor experiences.
2.2 Characteristics of CSUL
The main characteristics of ubiquitous learning are
shown as follows [3,4]:
(1) Permanency: Learners can never lose their work
unless it is purposefully deleted. In addition, all the
learning processes are recorded continuously every
day.
(2) Accessibility: Learners have access to their
documents, data, or videos from anywhere. That
information is provided based on their requests.
Therefore, the learning involved is self-directed.
(3) Immediacy: Wherever learners are, they can get any
information immediately. Therefore learners can
solve problems quickly. Otherwise, the learner may
record the questions and look for the answer later.
(4) Interactivity: Learners can interact with experts,
teachers, or peers through synchronous or
asynchronous communication. Hence, experts are
more reachable and knowledge is more available.
(5) Situating of instructional activities: Learning can
be embedded in our daily life. The problems
encountered, as well as the knowledge required, are
presented in their natural and authentic forms. This
helps learners notice the features of problem situations
that make particular actions relevant.
2.3 Information Model
Based on [1], CSUL deals with 5W1H information
as follows:
Who: Current systems focus their interaction on the
identity of one particular user, rarely incorporating
identity information about other people in the
environment. As human beings, we tailor our
activities and recall events from the past based on the
presence of other people. CSUL identifies not only
the current user but also the other users around
him/her, and provides the right information after
interpreting their user models. Other people
particularly influence Japanese language use; for
example, Japanese speakers use different levels of
polite expressions according to the ages of the other
people present.
What: The interaction in current systems either assumes
what the user is doing or leaves the question open.
Perceiving and interpreting human activity is a
difficult problem. Nevertheless, interaction with
continuously worn, context-driven devices will likely
need to incorporate interpretations of human activity
to be able to provide useful information.
When: With the exception of using time as an index into a
captured record or summarizing how long a person
has been at a particular location, most context-driven
applications are unaware of the passage of time. For
example, the learner might get the right expressions
at a certain time of day, e.g., in the morning.
Where: In many ways, the “where” component of context
has been explored more than the others. Of particular
interest is coupling notions of “where” with other
contextual information, such as “when.”
Why: Even more challenging than perceiving “what” a
person is doing is understanding “why” that person is
doing it. Using “why” information, the right
information could be provided to the learner.
How: CSUL has to provide the right information in the
right form in the right way. For example, if the learner
has a PDA with little memory, the system should
provide lightweight information such as text or
pictures; but if there is a desktop computer near the
learner, the system can provide video clips.
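The device-adaptive “How” rule above can be sketched as follows. The class name, memory threshold, and media categories are our illustrative assumptions, not part of CLUE:

```java
// Hypothetical sketch of the "How" rule: adapt the media form to the
// capability of the device near the learner. A PDA with little memory
// gets lightweight text or pictures; a nearby desktop gets video.
public class MediaSelector {

    public enum Media { TEXT, PICTURE, VIDEO }

    // freeMemoryMB and the 32 MB threshold are purely illustrative.
    public static Media select(boolean nearbyDesktop, int freeMemoryMB) {
        if (nearbyDesktop) {
            return Media.VIDEO;               // a desktop can play video clips
        }
        return freeMemoryMB >= 32 ? Media.PICTURE : Media.TEXT;
    }
}
```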
3. Systems of Computer Supported
Ubiquitous Language-Learning
Environment
We have developed a system called CLUE that
consists of three subsystems supporting ubiquitous
language learning: learning sentences, polite expressions,
and vocabulary.
3.1 Learning Sentences
Learners (overseas students) store useful
expressions in the database of CLUE, or ask questions
with CLUE when they have some problem in everyday
life. Japanese students refine the expressions or answer
the questions. When a learner is walking around, CLUE
provides adequate expressions and/or questions at the
right time and right place. In the initial state, some
fundamental sentences were stored in the database with
reference to [5]. The order of presented expressions is
determined based on the following conditions:
(1) The expression is frequently used at the learner’s
location.
(2) The learner has never learned the expression.
(3) Most of the other learners have already learned the
expression.
(4) The level of the expression, which is given by a
teacher, is appropriate for the learner's level.
Conditions (1), (2) and (3) are derived from the
learner's information. Condition (4) is derived from the
learning materials and the learner's level, which is
detected from the learner's rate of correct answers at that
moment. The more conditions an expression meets, the
higher the expression is ranked. In this way, CLUE
presents the right expression at the specific place.
The interface of the collaborative learning environment
of CLUE is shown in Figure 2. The map window (A)
shows the current location of each learner. The face icon
on the map indicates the learning status of each learner;
for example, if a learner has a problem or question, the
face turns into a sad one. By clicking a face icon, the
learner can send a message to the learner corresponding to
that icon. In addition, a rectangle icon on the map marks a
landmark where a teacher or a learner has entered
expressions, or where they communicate with each other.
If a learner enters an expression at a place for the first
time, a new landmark is created on the map. By clicking
the rectangle icon, the user can see the web page of the
place (e.g., the hospital), the expressions used in the
place, and the communication logs about the expressions
or the place. Users can also register their positions
manually at any time if GPS does not work, for example
when big buildings surround them or when they are inside
a building.
[Figure 2 shows the interface of CLUE: (A) the map
window, (B) the question window, and (C) the KA map
for expressions.]
Figure 2: Interface of CLUE.
When the learner approaches a certain place, window
(B) appears, showing a useful expression for the place in
English. If the current user has already learned all the
expressions for the place, no expression appears. If the
learner correctly answers with the Japanese expression
corresponding to the English one, the next expression
appears; otherwise, the learner is given the same
expression the next time he/she comes to the place.
If the learner has a question about an expression,
window (C) shows the relations between expressions and
other learners. The color of an oval icon shows the level
of difficulty of the expression, which is assigned by a
teacher. The color of a rectangle icon shows the level of
proficiency of a learner: the higher the level, the more
correct answers the learner has given. From this KA map,
the learner can find a suitable person to ask the question.
3.2 Learning Polite Expressions
Japanese polite expressions are divided into two types:
honorific words and modest words. The former are used
to express a speaker's respect for a conversational
companion; the latter are used to express the speaker's
humble attitude. For example, for the word "hanasu" (to
speak), the honorific word is "ossharu" and the modest
word is "mousu." The alteration of Japanese polite
expressions usually occurs in verbs, nouns, adjectives,
and adverbs. Moreover, there are three levels of polite
expressions (LPE): casual, basic, and formal. There are
two kinds of changing patterns: irregular change to a
different word, and regular change incorporating a prefix
and/or postfix word. The former follows no pattern, like
an irregular verb, which makes Japanese polite
expressions difficult for overseas learners. Therefore, we
have developed a subsystem of CLUE whose main aim is
to provide the learner with the appropriate polite
expression in the specific context.
Figure 3 shows a scene of learning polite expressions
with CLUE. Every user has a PDA and enters his/her
information, e.g., name, grade, and age, into the database.
When Mr. X talks to Mr. Z, the system tells Mr. X a casual
expression. On the other hand, when Mr. X turns to talk to
Mr. Y, the system tells Mr. X a formal expression.
[Figure 3 shows Mr. X (grade M1, age 25) entering the
word 待つ (to wait); the system suggests the formal
待って下さい ("Please wait") toward Mr. Y (grade M2,
age 24) and the casual 待って! ("Wait!") toward Mr. Z
(grade UG, age 22).]
Figure 3: Scene of polite expressions learning.
The factors of change in Japanese polite expressions
There are three factors that change Japanese polite
expressions.
(i) Hyponymy: People generally use respectful terms
toward elder or superior people. Social status depends on
affiliation, length of career, age, and so on.
(ii) Social distance: Japanese polite expressions often
express a sense of familiarity; however, this sense differs
from country to country. For example, the Japanese sense
of familiarity is narrower than the American one, and it
depends on social relationships, which are classified into
inside groups and outside groups. If the relation is family
or colleague, people consider themselves inside a group
and use the casual level of polite expressions; if the
relation is not so close, people use formal expressions.
(iii) Formality: The situation of a conversation influences
polite expressions. For example, Japanese people use
more formal expressions in formal situations (giving a
talk at a ceremony, writing a letter, and so on).
According to the above factors, rules for producing
appropriate polite expressions are built into CLUE.
As shown in Figure 4, the users input their
personal data, e.g., name, gender, work, age,
relationships, etc. When a user talks to another user,
CLUE obtains the other's information via the PDA's
infrared data communication and then suggests a suitable
polite expression to the user.
Figure 4: Interface for polite expressions learning.
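As an illustration only, the rule base sketched below combines the three factors into a level of polite expression. The class, method, and parameter names are our assumptions rather than CLUE's actual rule base, and seniority is modeled as a numeric rank (e.g., undergraduate 0, M1 1, M2 2) so the example matches the Figure 3 scenario:

```java
// Hypothetical LPE rule sketch combining the three factors described
// above: (iii) formality of the situation, (i) hyponymy (seniority of
// the listener), and (ii) social distance (inside/outside group).
public class PoliteLevelSelector {

    public enum Level { CASUAL, BASIC, FORMAL }

    // speakerRank/listenerRank model seniority, e.g. UG=0, M1=1, M2=2.
    public static Level select(int speakerRank, int listenerRank,
                               boolean insideGroup, boolean formalSituation) {
        if (formalSituation) {
            return Level.FORMAL;              // (iii) formal situation
        }
        if (listenerRank > speakerRank) {
            return Level.FORMAL;              // (i) senior listener
        }
        if (!insideGroup) {
            return Level.BASIC;               // (ii) outside group
        }
        return Level.CASUAL;                  // close relation, not senior
    }
}
```

Under these assumed rules, Mr. X (M1) addressing Mr. Y (M2) is told a formal expression, while addressing Mr. Z (UG) he is told a casual one, as in Figure 3.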
3.3 Learning Vocabularies
In a beginner's language class, a label on which the
name of an object is written is stuck on the corresponding
object in a room, in order to remind learners of the word.
The idea of this subsystem is that the learner sticks
RFID (Radio Frequency Identification) tags on real
objects instead of sticky labels, annotates them with, e.g.,
questions and answers, and shares the annotations with
others (see Figure 5). The tags bridge authentic objects
and their information in the virtual world.
As shown in the left window in Figure 5, the
system provides the right words to the learner by scanning
the RFID tags around the learner. For example, when the
learner enters a meeting room, the system asks him/her
where the “entaku” is, which is a round table in
Japanese. The learner can hear the question again if s/he
wants. If the learner scans the tag attached to the table,
the answer is correct, and the system will not ask the same
question next time. Otherwise, the system gives a hint.
[Figure 5 shows vocabulary cards attached to objects, for
example: つくえ(机): desk, example sentence
机の上のリモコンをとって下さい ("Please take the remote
control on the desk"), counters 脚/台; まど(窓): window,
example sentence 窓をあけて下さい ("Please open the
window"), counters 枚/個; いす(椅子): chair, example
sentence 椅子に座って下さい ("Please sit on the chair"),
counters 脚/個.]
Figure 5: Scene of vocabulary learning.
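The quiz loop of this subsystem can be sketched as follows. The class and method names, and the tag-to-word map, are our illustrative assumptions, not the actual CLUE code:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the RFID vocabulary quiz: the system picks an
// unmastered word from the tags around the learner, and a correct scan
// marks the word so the same question is not asked again.
public class VocabularyQuiz {

    private final Map<String, String> tagToWord = new HashMap<>(); // tag id -> word
    private final Set<String> mastered = new HashSet<>();          // answered correctly

    public void registerTag(String tagId, String word) {
        tagToWord.put(tagId, word);
    }

    // Returns the next word to ask about, or null if every nearby word is mastered.
    public String nextQuestion(List<String> nearbyTags) {
        for (String tag : nearbyTags) {
            String word = tagToWord.get(tag);
            if (word != null && !mastered.contains(word)) {
                return word;
            }
        }
        return null;
    }

    // True if the learner scanned the tag attached to the asked object.
    public boolean answer(String askedWord, String scannedTag) {
        boolean correct = askedWord.equals(tagToWord.get(scannedTag));
        if (correct) {
            mastered.add(askedWord); // do not ask the same question next time
        }
        return correct;
    }
}
```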
3.4 System Configuration
We have developed a prototype of
CLUE, which consists of a server and clients. Each
learner's client is a Toshiba Genio-e, a PDA
with Pocket PC 2002, Personal Java, GPS (Global
Positioning System), and wireless LAN (IEEE 802.11b).
We selected this device in particular because it can use
GPS and wireless LAN at the same time. The server
program is implemented as a Java servlet running on
Tomcat. CLUE has the following modules (see Figure 6):
Learner model: This module holds the learner's profile,
such as name, age, gender, occupation, and interests,
together with the learner's comprehension level for
each expression. Before using CLUE, the learner
enters these data. In addition to this explicit method,
CLUE implicitly detects the learner's interests from
his/her visiting history. Moreover, the system records
whether the learner understands each expression.
Environmental model: This module has the data of objects,
rooms and buildings in the map, and the link
between objects and expressions. For example,
(Post office, location (x, y), “I’d like to buy a
stamp.”) means the post office is located at (x, y) on
the map and the expression is often used.
Educational model: This module manages expressions as
learning materials and dictionaries. A teacher enters
the basic expressions for each place, and both learners
and the teacher can then add or modify expressions
while the system is in use.
Communication support: This module manages a BBS
(bulletin board system) and a chat tool similar to an
instant messenger, and stores their logs in a
database.
Location manager: This module stores each learner's
location in the database.
Adaptation engine: This module recommends suitable
expressions and the KA map to the learner.
Communication client: This is a client of BBS and chat.
Location sensor: This module sends the learner’s location
from GPS to the server automatically.
Information visualization: This module shows KA map to
the learner.
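As a purely illustrative sketch of the environmental model's tuples described above, the class below stores (place, location, expression) entries and answers nearby-expression queries as the adaptation engine might; the class, field, and method names and the distance query are our assumptions, not CLUE's actual schema:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the environmental model: each entry links a
// place and its map coordinates to an expression often used there,
// e.g. ("Post office", (x, y), "I'd like to buy a stamp.").
public class EnvironmentalModel {

    public static class Entry {
        public final String place;
        public final double x, y;
        public final String expression;

        public Entry(String place, double x, double y, String expression) {
            this.place = place;
            this.x = x;
            this.y = y;
            this.expression = expression;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    public void add(Entry e) {
        entries.add(e);
    }

    // Expressions linked to places within `radius` of the learner's
    // position, as reported by the location manager.
    public List<String> expressionsNear(double x, double y, double radius) {
        List<String> result = new ArrayList<>();
        for (Entry e : entries) {
            double dx = e.x - x, dy = e.y - y;
            if (Math.sqrt(dx * dx + dy * dy) <= radius) {
                result.add(e.expression);
            }
        }
        return result;
    }
}
```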
[Figure 6 shows the client-server configuration. The
server hosts the learner model, environmental model,
educational model, communication server, sensing data
manager, and adaptation engine, backed by databases of
learner information, communication logs, real-world data,
and learning materials. The PDA client in the learning
environment runs the KA map, the Q&A chat/BBS
communication client, and the sensors (GPS, RFID, etc.).]
Figure 6: System configuration of CLUE.
4. Conclusion
This paper has described computer supported
collaborative learning (CSCL) [11,12] in a ubiquitous
computing environment. In this environment, called CLUE,
learners provide and share individual knowledge on the
WWW and discuss it. The paper has focused on the
knowledge awareness map and its design and
implementation. The map visualizes the relationships
between the shared knowledge and the current and past
interactions of learners, and plays a very important role
in finding peer helpers and inducing collaboration. In
future work, we will evaluate CLUE.
Acknowledgement
This work was partly supported by the Grant-in-Aid for
Scientific Research No.15700516 from the Ministry of
Education, Science, Sports and Culture in Japan. Mr. Ushida,
Mr.
References
[1] Abowd, G.D., and Mynatt, E.D.: Charting Past, Present, and
Future Research in Ubiquitous Computing, ACM Transaction on
Computer-Human Interaction, Vol.7, No.1, pp.29-58, 2000.
[2] Brown, J. S., Collins, A., and Duguid, P.: Situated
Cognition and the Culture of Learning. Educational Researcher,
( Jan.-Feb.), pp.32-42, 1989.
[3] Chen, Y.S., Kao, T.C., Sheu, J.P., and Chiang, C.Y.: A
Mobile Scaffolding-Aid-Based Bird-Watching Learning System,
Proceedings of IEEE International Workshop on Wireless and
Mobile Technologies in Education (WMTE'02), pp.15-22, IEEE
Computer Society Press, 2002.
[4] Curtis, M., Luchini, K., Bobrowsky, W., Quintana, C., and
Soloway, E.: Handheld Use in K-12: A Descriptive Account,
Proceedings of IEEE International Workshop on Wireless and
Mobile Technologies in Education (WMTE'02), pp.23-30, IEEE
Computer Society Press, 2002.
[5] Eijiro: English-Japanese Dictionaries, http://member.nifty.
ne.jp/eijiro/
[6] Fischer, G.: User Modeling in Human-Computer Interaction,
Journal of User Modeling and User-Adapted Interaction
(UMUAI), Vol. 11, No. 1/2, pp 65-86, 2001.
[7] Hwang, K.S.: Authentic Tasks in Second Language
Learning, http://tiger.coe.missouri.edu/~vlib/Sang's.html
[8] Lyytinen, K. and Yoo, Y.: Issues and Challenges in
Ubiquitous Computing, Communications of ACM, Vol. 45, No.
12, pp.63-65, 2002.
[9] Ogata, H., Matsuura, K., and Yano, Y.: Knowledge
Awareness: Bridging Between Shared Knowledge Space and
Collaboration in Sharlok, Proceedings of Educational
Telecommunications '96, pp.232-237, 1996.
[10] Ogata, H. and Yano, Y.: Combining Knowledge Awareness
and Information Filtering in an Open-ended Collaborative
Learning Environment, International Journal of Artificial
Intelligence in Education, Vol.11, pp.33-46, 2000.
[11] O'Malley, C.: Computer Supported Collaborative
Learning, NATO ASI Series F: Computer & Systems Sciences,
Vol.128, 1994.
[12] Wasson, B., Ludvigsen, S., and Hoppe, U. (eds.) Design for
Change in Networked Learning Environments - Proceedings of
International Conference on Computer Support for Collaborative
Learning 2003, Kluwer Academic Publishers, 2003.