Internet Learning Volume 3, Number 2, Fall 2014 | Page 101
ticipants? What strategies might an instructor
employ to bring others into a discussion
that centers around a participant’s particular
area of expertise? What more might
an instructor be able to do with tools that
support the ability to navigate, understand,
and participate effectively in an unfolding
discussion? We hope future research in this
area will begin to address these and other
questions, in service of improving effectiveness,
efficiency, and engagement around
social and cooperative learning activity in
online environments.
Recall from our discussion of the corpus data that instructor responses were notably consistent. The timeline data provides
some insight into the impact of this
consistent behavior. We used the binary attribute
onTargetPost, for example, to search
for instances where an instructor response
to an off-target post led to a subsequent
on-target post. In the admittedly small data set we queried, this event occurred only twice over three weeks of
discussion. This points to a need for more
effective instructor responses—assuming
that onTargetPost is a valued attribute for a
given context. Assessing the best response
type for given post characteristics is another
layer of future research that could emerge
from this approach.
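As a minimal sketch of the kind of query described above, the following scans a chronologically ordered post timeline for cases where an instructor response to an off-target post is followed by an on-target post from the same student. The record layout and field names are hypothetical illustrations; only the onTargetPost attribute comes from our schema.

```python
# Sketch: find instructor responses to off-target posts that are
# followed by an on-target post from the same student.
# Assumes posts are dicts in chronological order (hypothetical layout).

def count_redirections(posts):
    by_id = {p["id"]: p for p in posts}
    count = 0
    for i, post in enumerate(posts):
        if post["role"] != "instructor":
            continue
        parent = by_id.get(post["replies_to"])
        if parent is None or parent["onTargetPost"]:
            continue  # not a response to an off-target post
        # Did the off-target poster's next post come back on target?
        for later in posts[i + 1:]:
            if later["author"] == parent["author"]:
                if later["onTargetPost"]:
                    count += 1
                break
    return count
```

In practice this logic would be expressed as a graph traversal over response nodes rather than a list scan, but the list form makes the temporal pattern explicit.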
The timeline visualizations also
helped us to recognize flaws in the structure
of discussion activities. For example, a typical
assignment asks students to respond to
an initial prompt and then to post responses
to a set number of other students. Yet the
data suggest this type of activity structure
leads to sprawl. For the week visualized and discussed in our RQ2 findings (Section VII), a single prompt leads to 24 unique endpoints. This
highlights the fact that ‘social’ learning assignments
should be clear about the goals
of conversation—converging, diverging,
problem-solving, etc.—and specify writing
activities that guide students towards these
behaviors. We might even come to recognize
particular data fingerprints associated
with different social and cooperative activities,
and distinguish between their more
and less successful forms.
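One simple fingerprint of sprawl is the count of unique endpoints (leaf posts) descending from a prompt, as in the 24-endpoint week above. A minimal sketch, assuming reply edges are available as child-to-parent pairs (the data here is hypothetical):

```python
# Sketch: quantify discussion "sprawl" as the number of unique
# endpoints (leaf posts) reachable from a prompt in the reply tree.

def count_endpoints(reply_edges, prompt_id):
    """reply_edges: iterable of (child_id, parent_id) pairs."""
    children = {}
    for child, parent in reply_edges:
        children.setdefault(parent, []).append(child)

    def leaves(node):
        kids = children.get(node, [])
        if not kids:
            return 1  # no replies: this post is an endpoint
        return sum(leaves(k) for k in kids)

    return leaves(prompt_id)
```

A converging activity would show few endpoints relative to total posts; a diverging one, many.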
VIII - RQ3 Findings: Can we identify
and visualize content focus over time in an
online discussion or course?
A. RQ3 Conceptual Overview
We felt it was critical for our model to surface the important concepts in a conversation, show how those concepts relate to each other, and track how they change over time. The topicSpread score
provides one method of tracking changes in
content over time: a rising or falling trend
in the topicSpread scores for successive discussion
responses can provide a sense of the
degree of topical expansion or stasis in the
discussion. However, topicSpread remains
a numerical score, yielding no information
about the actual topics under discussion. It
is also a subjective, manually applied score at present, and could be difficult or computationally expensive to replicate automatically.
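The trend reading described above can be sketched as a least-squares slope over the successive topicSpread scores. The slope test and the threshold below are our own illustrative choices, not part of the model:

```python
# Sketch: label a discussion as topically expanding, converging, or
# static from the trend of its successive topicSpread scores.

def topic_trend(scores, threshold=0.05):
    """Return 'expanding', 'converging', or 'static' based on the
    least-squares slope of the topicSpread series."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope > threshold:
        return "expanding"
    if slope < -threshold:
        return "converging"
    return "static"
```

Note that, as the surrounding text observes, this captures only the degree of topical movement, not which topics are actually in play.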
Below, we describe our initial efforts
to understand the topical evolution of a
conversation over time, including an examination
of the discussion concepts themselves,
as extracted using NLP and situated
in our graph schema.
B. RQ3 Exploratory Visualizations
We began our investigation of
topical focus using exploratory
visualizations. We used our
Gremlin DSL to extract discussion graphs
that contained response nodes and concept
nodes, and their connecting edges (re-