• Question: Does a discussant ask a question?
• Personal Story: Does a discussant tell a story from personal experience?
• Citation: Does a discussant cite a book, article, or other work?
• Challenge: Does a discussant challenge another discussant?

D. Task Target and On-Targetness

To understand conversations in our formal learning environment, we also felt it important to consider the targeted behavior of the collaborative activity or discussion prompt. Activities (discussion prompts) were coded using the knowledgeActivity and topicSpread categories. For example, a task might ask students to Transfer and Elaborate (knowledgeActivity=2/Transfer, topicSpread=3/Elaborate). General topical alignment was also considered: each discussant's comment, as well as the entire thread, was coded for whether or not it was on target in relation to the original task prompt. These binary attributes are called onTargetPost and onTargetThread.

E. Metadata Attributes

Finally, we identified a set of quantitative attributes that provide more information about individual participants as well as the shape and structure of the conversations themselves. These included:

• word count (of participants, conversations, and individual responses)
• number of posts (for each participant and conversation)
• number of unique participants (in each conversation)
• time stamp (of each participant's posts and of the conversation as a whole)
• proximity of posts in time (for each participant's posts and for the conversation as a whole)
• level of the response tree at which a response is posted (responseLevel)

F. Intersectionality

We believed that our richest insights from this type of exploratory study would spring from our ability to identify and visualize the intersection of individual, conversational, and content characteristics. For example, do certain combinations of individual students generate more ‘productive’ or ‘successful’ conversations? Are student and instructor questions treated differently? What kinds of instructor strategies might be effective in various kinds of conversations? How does the introduction of certain concepts or resources affect the depth of a conversation or the number of its participants? See Figure 2 for examples of these intersections.

With this emergent framework as our guide, we manually coded a data set of 948 threaded discussion posts for the targeted attributes; designed a graph schema and graph database to aid in describing and analyzing the problem space (a schematic sketch appears at the end of this section); and began designing queries and visualizations to facilitate analysis of the threaded discussion data from graph-computing and natural language processing (NLP) perspectives.

G. Tools Development and Scalability

We decided to employ or build technology solutions where feasible, but not to limit our questions to what was possible with current technologies. We favored a data design that would speak well to our questions, even if at first it would require significant labor to op-
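As a concrete illustration of the data design described above, the following minimal sketch shows one way the coded posts, participants, and threads could be represented as a property graph, and how responseLevel could be derived from the reply structure. It uses Python with networkx as a stand-in for a graph database; the node identifiers, edge types, and the response_level helper are hypothetical names chosen for this example, while the property names mirror the attributes defined in the framework.

import networkx as nx

# Illustrative sketch only: networkx stands in for a graph database,
# and node/edge names are assumptions, not the study's actual schema.
G = nx.DiGraph()

# Participant and thread nodes.
G.add_node("student_1", label="Participant", role="student")
G.add_node("instructor_1", label="Participant", role="instructor")
G.add_node("thread_42", label="Thread", onTargetThread=True,
           knowledgeActivity=2,  # e.g., Transfer
           topicSpread=3)        # e.g., Elaborate

# Post nodes carry the manually coded attributes described above.
G.add_node("post_7", label="Post",
           question=True, personalStory=False, citation=False, challenge=False,
           onTargetPost=True, wordCount=118, timestamp="2014-02-03T14:05:00")
G.add_node("post_8", label="Post",
           question=False, personalStory=True, citation=True, challenge=False,
           onTargetPost=True, wordCount=204, timestamp="2014-02-03T16:40:00")

# Edges encode authorship, thread membership, and the reply structure.
G.add_edge("student_1", "post_7", type="AUTHORED")
G.add_edge("instructor_1", "post_8", type="AUTHORED")
G.add_edge("post_7", "thread_42", type="IN_THREAD")
G.add_edge("post_8", "thread_42", type="IN_THREAD")
G.add_edge("post_8", "post_7", type="REPLIES_TO")  # post_8 responds to post_7

def response_level(g, post):
    """Depth of a post in the reply tree (responseLevel); 0 for a top-level post."""
    level = 0
    while True:
        parents = [v for _, v, d in g.out_edges(post, data=True)
                   if d.get("type") == "REPLIES_TO"]
        if not parents:
            return level
        post, level = parents[0], level + 1

print(response_level(G, "post_8"))  # -> 1

Because the coded binary attributes live as properties on post, participant, and thread nodes, queries against such a graph can intersect individual, conversational, and content characteristics along the lines of the questions raised under Intersectionality.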