2016 3rd International Conference On Computer And Information Sciences (ICCOINS)

The Impact of Electronic Collaboration on Learning Outcomes

Alimatu-Saadia Yussiff(1), Wan Fatimah Wan Ahmad(2) and Emy Elyanee Mustapha(3)
Department of Computer and Information Sciences
Universiti Teknologi PETRONAS
32610 Bandar Seri Iskandar, Perak Darul Ridzuan, Malaysia
(1) alimasaf@yahoo.co.uk, {(2) fatimhd, (3) emy.elyanee}@petronas.com.my

Abstract—Even though the effectiveness of e-collaboration has been empirically confirmed, some researchers and educators still find it a challenge to lead it to meaningful learning outcomes. The main goal of this study is to explore the relationship between e-collaborative learning experience and students' learning outcomes, with the moderating and mediating effects of social, teaching and cognitive presences. The study intends to investigate whether e-collaborative learning experience, with social, teaching and cognitive presences as the mediator and moderator constructs, is a predictor of students' learning outcomes. The Community of Inquiry (CoI) survey instrument, a collaborative learning questionnaire, and pre-test/post-test questions were used to collect data through an experimental research design involving 60 students. The results show that the constructs of the hypothesized model are reliable and valid. The results from structural equation modeling also demonstrate that e-collaborative learning experience strongly predicts learning outcomes indirectly, through the mediating and moderating effects of the three presences.

Keywords—learning outcome; collaborative learning; reliability; validation; deep and meaningful learning; Community of Inquiry

I. INTRODUCTION

E-collaboration is defined as the collaboration among groups of students engaged in a common task using electronic technologies [1-3].
As opposed to e-learning solutions, which are designed to provide an environment for personal or individual learning, e-collaboration aims at supporting group interaction, where collaborators become more engaged in knowledge creation and sharing; it also improves the quality of online pedagogy [4, 5]. Carefully structured collaborative learning can be an important step in changing the passive and impersonal character of many Higher Educational Institutions (HEIs). More importantly, e-collaboration can help in knowledge construction and information sharing, and allows groups to work on a project anytime, anywhere [6]. Despite the popularity of the Community of Inquiry (CoI) Model in leading to successful educational experiences, researchers over the years have started questioning the lack of empirical evidence to prove that the CoI constructs of social presence, teaching presence and cognitive presence will result in deep and meaningful learning outcomes [7-10]. More importantly, "the reliance of prior CoI studies on students' self-reports of learning may suggest a potential and important research limitation" [10]. The goal of this study is therefore to investigate the impact of e-collaborative learning experience on learning outcomes with the mediating and moderating effects of teaching, cognitive and social presences. The study thus intends to investigate whether collaborative learning experience, with the interdependences of the three presences as the mediator and moderator constructs, is a predictor of students' learning outcomes.

978-1-5090-2549-7/16/$31.00 ©2016 IEEE

II. LITERATURE REVIEW

A. Deep and Meaningful Learning

Meaningful and deep learning are related concepts. Deep learning refers to "the critical examination of new facts and the effort to make numerous connections with existing knowledge structures" [7]. Meaningful learning is the conception that new knowledge to be acquired is related to previous knowledge.
It emphasizes relating new information to information already known by the learner. Meaningful learning is associated with problem-based and discovery learning approaches, where learners are expected to formulate relationships between new and existing concepts. According to Fyrenius, et al. [11], there are three related prerequisites to meaningful learning: pre-understanding, relevant context, and activities. This study involved the use of problem-based, discovery and brainstorming approaches, after which learning was measured using a post-test and students' perceptions of e-collaborative teaching and learning.

B. Social Presence

Social presence is the ability of participants in the CoI to identify with the community of learners or study partners, and to project and present their personal characteristics into the online community as real persons rather than faceless contributors. It also includes the degree to which a sense of belonging is felt among those participants, and their ability to trust the environment, communicate purposefully and develop interpersonal relationships [12]. The three main indicators of social presence are affective expression, open communication, and group cohesion. Research has shown that social presence cannot by itself lead to the development of critical discourse; likewise, "it is difficult for such discourse to develop without it" [13, 14]. On the other hand, some researchers see "social presence as a mediating variable between teaching presence and cognitive presence" [12, 13].

C. Teaching Presence

Teaching presence consists of two main activities: the design of the course content and the facilitation of learning processes [15, 16]. Teaching presence can be carried out by any participant in a CoI; nevertheless, in an educational environment, this can be the sole responsibility of the teachers or instructors.
The first of these activities, the design of the course content, involves the selection, design, organization and development of teaching and learning materials and assessment criteria. The second activity, the facilitation of learning processes, can be shared by both the teacher and the students in a CoI. This involves some elements of student-teacher or student-student interaction. Teaching presence is believed to be a means to an end, supporting and enhancing social and cognitive presence for the purpose of realizing educational outcomes. Thus, the roles of the instructor in an online learning environment are collectively referred to as teaching presence [7].

D. Cognitive Presence

Cognitive presence is the extent to which the participants or students within a community of inquiry are able to construct meaning and confirm it through sustained communication. It consists of four elements: the triggering event, exploration, integration and resolution [15-17]. It has been envisaged that learning environments exhibiting high degrees of all three presences will lead to higher-order learning for students. Will the findings from this study support this argument or not?

E. E-collaborative Learning Experience

An e-collaboration environment should be effective enough to support knowledge construction within the community of inquiry. In this study it is characterized by the following six elements: efficiency, attractiveness, simple navigation, consistency, visibility and controllability.

F. Learning Outcomes

The main purpose of learning is to acquire meaningful knowledge and skills so as to be able to apply what has been learnt. However, knowledge cannot be measured directly; only the performance and action resulting from learning can be observed and measured [18]. [19] categorized learning outcomes into three categories: 1) psychomotor learning outcomes (e.g.
accuracy, efficiency, and response magnitude); 2) cognitive learning outcomes (e.g. knowledge, performance achievement, comprehension, analysis and application); and 3) affective learning outcomes (e.g. students' perceived attitude, satisfaction, and appreciation for the learning environment). This study focuses on the cognitive measure of learning outcomes by utilizing performance achievement, a direct measure of learning, to measure students' learning.

G. Application of the Above Constructs in the Study

The constructs described in subsections A-F form the key elements upon which data were collected for this study. Thus, important variables were deduced for each element to test the relationships among the observed and latent variables.

III. METHODOLOGY

A. Participants

The participants were sixty (N=60) undergraduate students enrolled in the Introduction to Business Information Systems (IBIS) course for the May-August 2014 semester at Universiti Teknologi PETRONAS (UTP).

B. Sampling Method

For the purpose of adopting a particular course on a per-semester basis and of having an adequate number of participants, the sample for this study was not randomly selected; rather, participants were drawn from students enrolled in a course after the researcher sought permission from the lecturer to conduct the study in the class. It was therefore a convenience sampling method.

C. Instrumentation

The four instruments used to collect data for this study were the CoI survey instrument, the collaborative learning experience questionnaire, and the pre-test and post-test questions. The collaborative learning experience construct was adapted from [20-22]; it consisted of six variables (i.e. attractiveness, simple navigation, consistency, visibility, controllability and efficiency) measured with 24 questions. The CoI survey instrument was adapted from [23] and consisted of 34 questions categorized under three main elements (i.e.
perceived teaching, social and cognitive presences). The two questionnaires were scored using a five-point Likert scale ranging from '1 = strongly disagree' to '5 = strongly agree'. Finally, the learning outcome construct was measured using one variable (i.e. the post-test score) consisting of 20 questions.

D. Experimentation and Data Collection

Fig. 1 illustrates the experimental research design and data collection processes for this study. After course selection and approval from the lecturer of the course to use the class for experimenting with the TELERECS e-collaboration environment [24], the instructor randomly assigned the classes into two major groups: the control and the experimental groups of participants. The control group used the conventional method of in-class collaboration, while the experimental group used the TELERECS e-collaboration environment. Both groups were involved in the pre-test and post-test activities; however, only the experimental group was involved in the two surveys.

Figure 1: Experimentation and Data Collection Method

E. Data Analysis

Descriptive analysis was conducted on the data to find the mean and standard deviation scores of the constructs. Secondly, factor loadings, construct reliability and construct validity analyses were conducted. The hypothesized model in Fig. 2 was analyzed using the SmartPLS structural equation modeling tool. Factor loadings were computed in SmartPLS for all constructs in the model to assess how the indicators load on their respective latent constructs. Indicators with factor loadings less than 0.5, as recommended by Hulland [25], were excluded from the model. This ensures that the constituents of the model load sufficiently on their respective factors. The outputs of the factor analysis are presented in Table II.

Finally, structural equation modeling analyses were conducted to investigate how collaborative learning experience influences learning outcomes, directly or indirectly, through the mediation and moderation of the interdependent presences (teaching presence, cognitive presence and social presence). The rationale is that if collaborative learning experience is to be used to support meaningful learning, then the relevant constructs and their relationships need to be examined. The hypothesized relationship model among the constructs is shown in Fig. 2. The SmartPLS statistical tool was used to conduct the path analysis, and the data were interpreted with an alpha level of 0.05 for all significance tests in the study. The path analysis was used to test hypothesis H01.

H01: The interdependencies of teaching, social and cognitive presences will negatively mediate the relationship between e-collaborative learning experience and learning outcomes.

Figure 2: Hypothesized Relationship Model

IV. RESULTS

A. Descriptive Statistics of Sample

The descriptive analysis of the data, as illustrated in Table I, shows that 60 students participated in the surveys. Table I also shows that the means of all the constructs were greater than 4 out of a maximum of 5, while the standard deviations of the constructs range from 0.28 for teaching presence to 0.49 for collaborative learning experience. The closer the standard deviation is to 0, the more reliable the mean is. Therefore, the values of the standard deviations in this study imply that most of the values are positioned very close to the mean, indicating very little volatility in the sample.

TABLE I. DESCRIPTIVE STATISTICS OF STUDIED CONSTRUCTS

Construct                               Mean   Std. Dev.   N
Post-Test Scores                        4.09   0.358       60
Teaching Presence                       4.11   0.280       60
Social Presence                         4.11   0.380       60
Cognitive Presence                      4.10   0.335       60
Collaborative Learning Experience       4.07   0.487       60

B. Constructs Reliability and Factor Loadings

The results of the reliability coefficients were highly significant, as shown in Table II. Both the Cronbach's alpha and composite reliability values exceed the minimum threshold of 0.6, which suggests a high level of internal consistency reliability among all the latent constructs. The main reason for performing factor analysis on data is "to summarize data so that relationships and patterns can be easily interpreted and understood. It is normally used to regroup variables into a limited set of clusters based on shared variance. Hence, it helps to isolate constructs and concepts" [26]. Loadings can range from -1 to 1. According to Hulland [25], the higher the loading, the higher the shared variance between the indicator and its construct. The factor loadings in Table II range from 0.5 to 1.0, which demonstrates that the indicators strongly reflect their corresponding latent constructs.

TABLE II. FACTOR LOADINGS, CRONBACH'S ALPHA, AND COMPOSITE RELIABILITY

Latent Variable                              Indicators (Factor Loading)                               Cronbach's Alpha   Composite Reliability
Teaching Presence (TP)                       A1 (0.5), A10 (0.8), A11 (0.6), A12 (0.6), A13 (0.7),     0.7                0.8
                                             A5 (0.5), A6 (0.6), A9 (0.5)
Social Presence (SP)                         B2 (0.6), B4 (0.6), B5 (0.7), B6 (0.7), B7 (0.8),         0.8                0.8
                                             B8 (0.6), B9 (0.6)
Cognitive Presence (CP)                      C1 (0.6), C11 (0.5), C2 (0.5), C3 (0.5), C4 (0.7),        0.8                0.8
                                             C5 (0.6), C6 (0.6), C7 (0.7), C8 (0.7)
Collaborative Learning Experience (Collab)   BU2 (0.7), BU3 (0.8), BU4 (0.7), CU1 (0.6), CU2 (0.6),    0.9                0.9
                                             CU3 (0.6), CU4 (0.6), EU1 (0.6), EU2 (0.5), EU3 (0.5),
                                             EU4 (0.7), FU1 (0.7), FU2 (0.8), FU3 (0.7), FU4 (0.8)
Learning Outcomes (LO)                       POSTTEST (1.0)                                            1.0                1.0

C. Constructs Validity

According to Golafshani [27], validity is used to determine whether the means of measurement are accurate, whether the research truly measures what it is intended to measure, and how truthful the research results are. Therefore, this research employed construct validity to investigate whether the two sets of research instruments measure the right constructs. Both the convergent and discriminant validities of the constructs in the model were assessed.

1) Convergent Validity Result

Convergent validity, also known as composite reliability, is the "degree of agreement in two or more measures of the same construct" [28]. Two measures of convergent reliability are composite reliability (CR) and Cronbach's alpha (α). In PLS, the recommended benchmark is CR ≥ 0.7, and the recommended threshold for α is ≥ 0.6. In addition, convergent validity was also confirmed using Fornell and Larcker's [29] recommendation, which suggests that convergent validity is established if the average variance extracted (AVE) is greater than or equal to 0.5. The results of this study, as shown in Table III, demonstrate that the AVE values are greater than or equal to the threshold of 0.5. This implies that the scales of the constructs possess convergent validity.

TABLE III. LATENT VARIABLE CORRELATIONS AND AVE

Construct                                 1     2     3     AVE (≥0.5)   R2 (≥0.19)   Q2 (≥0)
1. E-collaborative learning experience    1.0               1.00         NA           NA
2. Learning Outcomes                      0.3   1.0         1.00         0.50         0.42
3. Presences                              0.5   0.7   0.9   0.79         0.21         0.14

2) Discriminant Validity Result

In a PLS context, discriminant validity is confirmed if "the diagonal elements are significantly higher than the off-diagonal values in the corresponding rows and columns. The diagonal elements are the square root of the AVE score for each construct" [25, 28, 30]. The results of this study, as shown in Table III, satisfy these conditions, demonstrating that discriminant validity is well established.

D. Structural Equation Model Results

Fig. 3 illustrates the results of the structural equation model for the hypothesized relationship model in Fig. 2.
The results are discussed below.

Figure 3: Structural Equation Modeling Results

The results in Fig. 3 show the standardized path coefficients/regression weights (β) as the numbers on the arrows, which indicate whether the relationships between the constructs are positive or negative and whether they are statistically significant. The results also show the values of the endogenous latent variables' squared multiple correlations (R2) in the blue circles, which illustrate "the amount of variance of the dependent constructs that can be explained by the independent constructs" [18]. Furthermore, Fig. 3 shows the factor loadings of the computed constructs on the arrows to the yellow rectangles. These values range from 0.84 for TP, 0.90 for CP and 0.93 for SP to 1.00 for both Collab and Learning Outcomes (LO). The factor loadings provide evidence of convergent validity, since the constructs' loadings were greater than the benchmark of 0.5 [25]. Both R2 and the path coefficients can be used to determine the effects among the constructs: the numbers on the arrows to the blue circles indicate the path coefficients (β), while the blue circles themselves show the coefficient of determination, R2, whose main purpose is to help determine the overall impact of the effect. The rule of thumb suggested by Chin [31] and Chin, et al. [32] is that an R2 value of 0.67 indicates substantial model fit, 0.33 moderate model fit, and 0.19 weak model fit. In Fig. 3, the R2 value for the Learning Outcomes endogenous latent variable is 0.50, implying that the two latent constructs (Collab and Presences) together moderately explain 50% of the variance in Learning Outcomes. Collab also explains 21% of the variance of Presences.
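To make Chin's rule of thumb concrete, the sketch below shows what an R2 value measures: the share of variance in a dependent construct explained by its predictor. The data and variable names are illustrative (synthetic construct scores, not the study's dataset), and a single-predictor least-squares fit stands in for the full PLS model.

```python
# Illustrative sketch with synthetic data (not the study's dataset):
# R-squared as the share of variance in a dependent construct that is
# explained by a predictor, via one-predictor least squares.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    pred = [my + beta * (a - mx) for a in x]
    ss_res = sum((b - p) ** 2 for b, p in zip(y, pred))  # residual variance
    ss_tot = sum((b - my) ** 2 for b in y)               # total variance
    return 1 - ss_res / ss_tot

presences = [3.8, 4.0, 4.1, 4.2, 4.4]  # hypothetical construct scores
outcomes = [3.6, 4.1, 4.0, 4.3, 4.5]
r2 = r_squared(presences, outcomes)
# Chin's rule of thumb: >= 0.67 substantial, >= 0.33 moderate, >= 0.19 weak
print(f"R2 = {r2:.2f}")
```

An R2 of 0.50, as reported for Learning Outcomes, would fall between Chin's moderate and substantial bands.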
The main purpose of a path coefficient is to help determine the direction of the effect (i.e. positive or negative). Path coefficients also "explain how strong the effect of one variable is on another variable" [33]. Fig. 3 shows that Presences has the strongest effect on Learning Outcomes (0.69), followed by Collab (0.02). The hypothesized paths between Collab and Presences, and between Presences and Learning Outcomes, are statistically significant. On the other hand, the hypothesized path between Collab and Learning Outcomes is not statistically significant, because the standardized path coefficient (0.02) is below the usual threshold of 0.1. These results imply that Presences is a moderately strong predictor of Learning Outcomes, but that the collaboration environment alone does not strongly predict Learning Outcomes directly; a strong predictive result can only be achieved through the mediating and moderating effects of the presences. A bootstrapping procedure with 1,000 sub-samples was run to assess the statistical significance of each path coefficient. "Using a two-tailed t-test with a significance level of 5%, the path coefficient will be significant if the T-statistics is larger than 1.96" [33]. The bootstrapping results shown in Table IV demonstrate that the relationship between e-collaborative learning experience and Learning Outcomes is positive, with β = 0.02, t = 0.18, and p = 0.86, indicating that e-collaborative learning experience has a direct positive but insignificant relationship with Learning Outcomes, since the t-statistic is below the 1.96 threshold. This result implies that learning outcome is directly proportional to collaborative learning experience with a coefficient of 0.02: a 100-point change in e-collaborative learning experience would produce only a 2-point change in learning outcomes.
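The bootstrap test described above can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the SmartPLS implementation: the standardized path coefficient for a single predictor is computed as a Pearson correlation, resampled over 1,000 bootstrap sub-samples, and the t-statistic is the original coefficient divided by the bootstrap standard deviation.

```python
# Hedged sketch (synthetic data): bootstrap t-statistic for a
# standardized path coefficient, as in the significance test above.
import random
import statistics

def beta_std(x, y):
    # Standardized path coefficient for a single predictor = Pearson r.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(7)
n = 60  # same sample size as the study
presences = [random.gauss(4.1, 0.3) for _ in range(n)]
outcomes = [0.7 * p + random.gauss(1.2, 0.2) for p in presences]

beta = beta_std(presences, outcomes)
boot = []
for _ in range(1000):  # 1,000 bootstrap sub-samples, as in the study
    idx = [random.randrange(n) for _ in range(n)]  # resample with replacement
    boot.append(beta_std([presences[i] for i in idx],
                         [outcomes[i] for i in idx]))
t_stat = beta / statistics.stdev(boot)
# Two-tailed test at the 5% level: significant when |t| > 1.96
print(f"beta = {beta:.2f}, t = {t_stat:.2f}, significant = {abs(t_stat) > 1.96}")
```

With the near-zero Collab-to-LO coefficient (β = 0.02), the same procedure yields a t-statistic well below 1.96, matching the insignificant result reported above.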
In contrast, the relationship between e-collaborative learning experience and Presences was significant, with β = 0.45, t = 4.12, and p = 0.00, indicating that e-collaborative learning experience has a direct, positive and significant relationship with Presences. This means that a 100-point change in e-collaborative learning experience would produce a 45-point change in Presences. Finally, there is also a significant positive relationship between Presences and Learning Outcomes, with β = 0.69, t = 8.66, and p = 0.00, indicating that Presences has a direct, positive and significant influence on Learning Outcomes: a 100-point change in Presences would produce a 69-point change in Learning Outcomes.

TABLE IV. PATH COEFFICIENT (β) AND BOOTSTRAPPING RESULTS

Path in the Model      Path Coefficient (β)   T Statistics   f2     P Value
Collab -> LO           0.02                   0.18           0.00   0.86
Collab -> Presences    0.45                   4.12           0.26   0.00
Presences -> LO        0.69                   8.66           0.65   0.00

In addition, the effect size (f2), which assesses the magnitude or strength of the relationship between latent constructs, was examined, as illustrated in Table IV. The value of f2 illustrates "how much an exogenous latent variable contributes to an endogenous latent variable's R2 value" [33]. The f2 helps to assess the overall contribution of a research study. According to Cohen [34], an f2 value of 0.02 indicates a small effect, 0.15 a medium effect, and 0.35 a large effect. The results in Table IV show that the f2 between Collab and Learning Outcomes is 0.00, indicating little effect; between Collab and Presences it is 0.26, indicating a medium effect; and between Presences and Learning Outcomes it is 0.65, indicating a large effect.
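Two of the derived statistics used in the results above can be sketched numerically: composite reliability (CR) and average variance extracted (AVE) from standardized factor loadings (Table II), and Cohen's f2 from R2 values with and without a predictor (Table IV). This is a hedged illustration: it assumes standardized indicators with error variances of 1 - loading^2 (a common convention, not stated in the paper), and the R2 values passed to the f2 formula are hypothetical.

```python
# Hedged sketch: CR and AVE from standardized factor loadings, and
# Cohen's f2 effect size from R2 with vs. without a predictor.
def cr_and_ave(loadings):
    # Assumes standardized indicators with error variance 1 - loading**2.
    sum_l = sum(loadings)
    errors = sum(1 - l * l for l in loadings)
    cr = sum_l ** 2 / (sum_l ** 2 + errors)           # composite reliability
    ave = sum(l * l for l in loadings) / len(loadings)  # avg. variance extracted
    return cr, ave

def effect_size_f2(r2_included, r2_excluded):
    # Cohen's thresholds: 0.02 small, 0.15 medium, 0.35 large.
    return (r2_included - r2_excluded) / (1 - r2_included)

# Teaching-presence loadings from Table II
tp_loadings = [0.5, 0.8, 0.6, 0.6, 0.7, 0.5, 0.6, 0.5]
cr, ave = cr_and_ave(tp_loadings)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}")

# Hypothetical R2 values for illustration (not taken from the study)
print(f"f2 = {effect_size_f2(0.50, 0.40):.2f}")
```

The computed CR of about 0.82 is consistent with the 0.8 reported for teaching presence in Table II under this assumption.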
V. CONCLUSION

The results of the study show that there is a high level of internal consistency reliability among all the studied constructs. Secondly, the results demonstrate that both the convergent and discriminant validities of the constructs that constitute the hypothesized model are well established. Finally, the structural equation results demonstrate that collaborative learning experience strongly predicts learning outcomes indirectly, through the mediating and moderating effects of the three presences. The results of the structural equation model therefore lead to the rejection of the null hypothesis H01, which stated that teaching, social and cognitive presences will negatively mediate the relationship between e-collaborative learning experience and learning outcomes.

REFERENCES

[1] R. Chebil, W. Lejouad-Chaari, and S. A. Cerri, "An E-Collaboration New Vision and Its Effects on Performance Evaluation," International Journal of Computer Information Systems and Industrial Management Applications, vol. 3, pp. 560-567, 2011.
[2] N. Kock and J. Nosek, "Expanding the boundaries of e-collaboration," IEEE Transactions on Professional Communication, vol. 48, pp. 1-9, 2005.
[3] L. Razmerita and K. Kirchner, "Social Media Collaboration in the Classroom: A Study of Group Collaboration," in Collaboration and Technology, Springer, 2014, pp. 279-286.
[4] E. G. Oh and T. C. Reeves, "Collaborating Online: A Logic Model of Online Collaborative Group Work for Adult Learners," International Journal of Online Pedagogy and Course Design (IJOPCD), vol. 5, pp. 47-61, 2015.
[5] R. James, "ICT's participatory potential in higher education collaborations: Reality or just talk," British Journal of Educational Technology, vol. 45, pp. 557-570, 2014.
[6] E. W. Cheng and S. K. Chu, "Students' online collaborative intention for group projects: Evidence from an extended version of the theory of planned behaviour," International Journal of Psychology, 2015.
[7] L. Rourke and H. Kanuka, "Learning in communities of inquiry: A review of the literature (Winner 2009 Best Research Article Award)," International Journal of E-Learning & Distance Education, vol. 23, pp. 19-48, 2009.
[8] M. D. van der Merwe, "Community of inquiry framework: employing instructor-driven measures in search of a relationship among presences and student learning outcomes," International Journal of Learning Technology, vol. 9, pp. 304-320, 2014.
[9] H. Pollard, M. M., and A. Swanson, "Instructor Social Presence within the Community of Inquiry Framework and its Impact on Classroom Community and the Learning Environment," Online Journal of Distance Learning Administration, vol. 17, 2014.
[10] J. A. Maddrell, G. R. Morrison, and G. S. Watson, "Community of inquiry framework and learner achievement," presented at the annual meeting of the Association for Educational Communications & Technology, Jacksonville, FL, 2011. Retrieved from http://www.jennifermaddrell.com/papers
[11] A. Fyrenius, B. Bergdahl, and C. Silén, "Lectures in problem-based learning - why, when and how? An example of interactive lecturing that stimulates meaningful learning," Medical Teacher, vol. 27, pp. 61-65, 2005.
[12] D. R. Garrison, T. Anderson, and W. Archer, "The first decade of the community of inquiry framework: A retrospective," The Internet and Higher Education, vol. 13, no. 1, pp. 5-9, 2010.
[13] D. R. Garrison and M. Cleveland-Innes, "Facilitating cognitive presence in online learning: Interaction is not enough," The American Journal of Distance Education, vol. 19, pp. 133-148, 2005.
[14] J. B. Arbaugh, M. Cleveland-Innes, S. R. Diaz, D. R. Garrison, P. Ice, J. C. Richardson, et al., "Developing a community of inquiry instrument: Testing a measure of the community of inquiry framework using a multi-institutional sample," The Internet and Higher Education, vol. 11, pp. 133-136, 2008.
[15] D. R. Garrison, T. Anderson, and W. Archer, "Critical inquiry in a text-based environment: Computer conferencing in higher education," The Internet and Higher Education, vol. 2, pp. 87-105, 2000.
[16] D. R. Garrison and J. B. Arbaugh, "Researching the community of inquiry framework: Review, issues, and future directions," The Internet and Higher Education, vol. 10, pp. 157-172, 2007.
[17] A. E. Traver, E. Volchok, T. Bidjerano, and P. Shea, "Correlating community college students' perceptions of community of inquiry presences with their completion of blended courses," The Internet and Higher Education, vol. 20, pp. 1-9, 2014.
[18] E. A.-L. Lee, K. W. Wong, and C. C. Fung, "How does desktop virtual reality enhance learning outcomes? A structural equation modeling approach," Computers & Education, vol. 55, pp. 1424-1442, 2010.
[19] R. Sharda, N. C. Romano Jr, J. A. Lucca, M. Weiser, G. Scheets, J.-M. Chung, et al., "Foundation for the study of computer-supported collaborative learning requiring immersive presence," Journal of Management Information Systems, vol. 20, pp. 31-64, 2004.
[20] B. Fetaji, M. Ebibi, and M. Fetaji, "Assessing Effectiveness in Mobile Learning by Devising MLUAT (Mobile Learning Usability Attribute Testing) Methodology," International Journal of Computers and Communications, vol. 5, pp. 178-187, 2011.
[21] J. R. Lewis and J. Sauro, "The factor structure of the system usability scale," in Human Centered Design, Springer, 2009, pp. 94-103.
[22] J. Nielsen, "Usability 101: Introduction to Usability," 2012. [Online]. Available: http://www.nngroup.com/articles/usability-101-introduction-to-usability/ (accessed 29/01/2015).
[23] T. Anderson, "Community of inquiry model," Educational (Instructional) Design Models, p. 71.
[24] W. F. W. Ahmad, A.-S. Yussiff, and E. E. Mustapha, "The evaluation of the usability and effectiveness of TELERECS e-collaboration system," in Proceedings of the International HCI and UX Conference in Indonesia, 2015, pp. 18-25.
[25] J. Hulland, "Use of partial least squares (PLS) in strategic management research: A review of four recent studies," Strategic Management Journal, vol. 20, pp. 195-204, 1999.
[26] A. G. Yong and S. Pearce, "A beginner's guide to factor analysis: Focusing on exploratory factor analysis," Tutorials in Quantitative Methods for Psychology, vol. 9, pp. 79-94, 2013.
[27] N. Golafshani, "Understanding reliability and validity in qualitative research," The Qualitative Report, vol. 8, pp. 597-606, 2003.
[28] S. Bhakar, S. Bhakar, S. Bhakar, and G. Sharma, "The impact of co-branding on customer evaluation of brand extension," Prestige International Journal of Management and Information Technology, vol. 1, pp. 21-53, 2012.
[29] C. Fornell and D. F. Larcker, "Evaluating structural equation models with unobservable variables and measurement error," Journal of Marketing Research, pp. 39-50, 1981.
[30] C. Schalles, J. Creagh, and M. Rebstock, "Usability of Modelling Languages for Model Interpretation: An Empirical Research Report," in Wirtschaftsinformatik, 2011, p. 36.
[31] W. W. Chin, "Commentary: Issues and opinion on structural equation modeling," JSTOR, 1998.
[32] W. W. Chin, B. L. Marcolin, and P. R. Newsted, "A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study," Information Systems Research, vol. 14, pp. 189-217, 2003.
[33] K. K.-K. Wong, "Partial least squares structural equation modeling (PLS-SEM) techniques using SmartPLS," Marketing Bulletin, vol. 24, pp. 1-32, 2013.
[34] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates, 1988.