Global Education without Walls: A Multidisciplinary Investigation of University Learning in Online Environments across Disciplines
Increasingly, students use the internet for self-directed learning in higher education, requiring them to develop skills to determine which information is reliable and accurate. Although the need to understand, evaluate, and promote such skills is crucial, little is known about students’ search for and use of online sources (and the key influences of those sources) in higher education. Current research indicates both that students need specific skills to successfully engage with and learn from online materials, and that university practitioners need to rethink their curricula and instructional approaches for online teaching in the age of ChatGPT and other AI-supported tools. Interdisciplinary theoretical and empirical methods can help us gain a deeper understanding of how students develop the various skills required for successful online learning, and of how we can support them across domains.
In large-scale national and international longitudinal assessments of student learning in various academic domains, higher-education students showed partly negative learning trajectories. In other words, they demonstrated less correct content knowledge at the end of their studies, as outlined in this volume of Dædalus. The international and interdisciplinary PLATO (Positive Learning in the Age of InformaTiOn) research program, with a hub in empirical educational research in Germany, was conceived in the wake of this major insight.1 Currently, PLATO involves over twenty collaborating universities located in several countries, including the United States, Canada, Japan, the Netherlands, and Switzerland.
In PLATO, we found that these results were not adequately explained by the factors typically surveyed in education research, such as demographics, prior education, or courses attended. In search of fuller explanations, we expanded our scope to include expertise from various disciplines, like linguistics, media studies, and communication sciences. Early jointly developed surveys focused on students’ learning input (that is, the frequency of use of various media, sources, and information) for acquiring knowledge and preparing for exams. Students reported using a broad range of sources for learning, acquiring domain knowledge, and preparing for exams; most frequently, they stated using online media and sources, for example, through a Google search or, nowadays, AI-supported tools like ChatGPT. Through our consultations with faculty and subsequent reviews of teaching methodologies across disciplines and countries, as well as a systematic literature review, we gained two persistent impressions.
First, higher-education practitioners realized that students use the internet as their main source for acquiring study-related information, and faculty suspected a potential negative influence on student learning (though they were not aware of its exact extent beyond anecdotal evidence).
Second, skills relating to the use of the internet for successful learning were not specifically fostered in most of their courses. In part, faculty members considered this the responsibility of secondary education and of university library courses (for beginners) that presumably offer training in academic research skills. Lecturers also relied on acting as authorities in their fields and providing students with a selection of sources they considered scholarly. Notably, sources outside those selections, particularly internet sources, were frowned upon or considered less relevant by faculty.
To be sure, there were exceptions. Some lecturers used illustrative examples taken from online media and discussed them in class, or offered lessons on specific tools (such as database use). However, even here, for technology to work effectively during class, faculty members usually had students prepare (download) all necessary software and materials beforehand, typically from a curated, university-hosted repository or course platform (e-learning). Sometimes, e-learning spaces were also used for jointly preparing course deliverables (for example, creating a team wiki, holding asynchronous course discussions, and uploading files).
We considered these strategies to be didactic “safe spots” on the internet, where lecturers can monitor and steer students’ development of media literacy skills and moderate their collective learning process. The use of electronic materials, such as digitized (scanned) books, research studies, and databases, was not discussed but implicitly accepted as long as the local university library offered access to them. Apparently, there was a sizable gap between faculty members’ and students’ expectations and practices regarding the use of internet sources, including the acquisition of literacy skills for conducting research online.
In our pursuit of a broad multidisciplinary research agenda with the PLATO program, we are currently focusing on three areas. In our studies, we directly assess students’ actual internet use for solving typical generic and/or domain-specific tasks (for example, preparing a lesson plan in teacher education or a diagnostic plan in medical education) in a realistic (online) environment. We also analyze in greater detail students’ internet skills based on collected data in real time in major study domains, including economics, teacher education, medicine, and law. And we pay particular attention to the skill set of Critical Online Reasoning (COR) in general (GEN-COR) and domain-specific (DOM-COR) tasks, with three main facets:
searching for and selecting information (online information acquisition, OIA),
evaluating sources for credibility cues (critical information evaluation, CIE),
reasoning with evidence from multiple sources and synthesizing it into an evidence-based argument (reasoning based on evidence, argumentation, and synthesis, REAS).
We strive to control for a suitably challenging range of sources and information-problem requirements (such as the types of information needed and the complexity of the tasks, including the presence or absence of biased sources, controversial topics, and ready-made judgments by authors and users), ensuring that not all sources and information are trustworthy. In other words, we recreate the authentic conditions of self-directed studying on the internet (beyond curated e-learning spaces).
While tasks can be designed to assess sub-skills, like selection and specific judgments, we found that some of these skills, such as searching, could only be validly assessed in a real online environment. For the assessments and training, we continuously vetted search prompts and preselected real online sources (for evaluation) to provide up-to-date, realistic, and challenging tasks, based on a set of joint design criteria and scoring rubrics. Given the large variance of online sources, our focus was on whether and when students take certain actions (for example, leaving a suspicious website) and on the consistency between their claims of trusting sources, the stability of the reasons for those claims, the types of sources cited, and the appropriateness of their confidence level, safeguarding against errors of both gullibility and undue incredulity.
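To make the structure of these assessments concrete, the following sketch shows one way a task-level scoring record could be represented in Python, with the three COR facets alongside behavioral indicators from the log data. The field names and the zero-to-two point scale are illustrative assumptions, not the actual PLATO rubric.

```python
# Illustrative only: a hypothetical record combining facet scores with
# behavioral indicators. The 0-2 scale and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class CORScore:
    student_id: str
    task_type: str                       # "GEN-COR" or "DOM-COR"
    oia: int                             # online information acquisition (0-2)
    cie: int                             # critical information evaluation (0-2)
    reas: int                            # reasoning, argumentation, synthesis (0-2)
    left_suspicious_site: bool = False   # behavioral flag from the log data
    cited_sources: list[str] = field(default_factory=list)

    def total(self) -> int:
        return self.oia + self.cie + self.reas

record = CORScore("s01", "DOM-COR", oia=2, cie=1, reas=1,
                  left_suspicious_site=True,
                  cited_sources=["pubmed.ncbi.nlm.nih.gov"])
print(record.total())  # 4
```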
In the PLATO setup, researchers from education, media, and computer sciences collaborate for two primary purposes. First, they keep assessments and training materials up to date regarding specific affordances and challenges online (for example, how and when students choose to use AI-supported tools like ChatGPT). Second, they support educational technology software development. A dedicated IT project links the resulting big educational data in meaningful ways while following the highest privacy standards. For instance, when solving generic and domain-specific performance tasks, whether for research or study purposes, and whether required or voluntary, students log onto our prepped virtual computers. This way, we can track students’ real behavior on the internet without constraining their study habits. This setup has given us unique insights into the websites students visited for research and the time spent on each, with an opportunity to map students’ navigation routes and document their preparation for different tasks, alongside their troubleshooting processes.
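As an illustration of the kind of analysis this setup enables, the sketch below derives navigation routes and approximate per-site dwell times from simple browsing logs. The event format (student, timestamp, URL) and the sample entries are assumptions for exposition; the actual PLATO logging infrastructure is more elaborate.

```python
# A minimal sketch, assuming log events of the form (student_id, timestamp, url).
from collections import defaultdict
from datetime import datetime
from urllib.parse import urlparse

# hypothetical log events, not real PLATO data
events = [
    ("s01", "2023-05-04T10:00:00", "https://www.google.com/search?q=tort+law"),
    ("s01", "2023-05-04T10:02:30", "https://www.gesetze-im-internet.de/bgb/"),
    ("s01", "2023-05-04T10:10:00", "https://beck-online.beck.de/"),
]

def navigation_route(student_events):
    """Ordered sequence of domains a student visited."""
    return [urlparse(url).netloc for _, _, url in student_events]

def dwell_times(student_events):
    """Approximate seconds per domain (time until the next page load)."""
    per_domain = defaultdict(float)
    for (_, t1, url), (_, t2, _) in zip(student_events, student_events[1:]):
        delta = datetime.fromisoformat(t2) - datetime.fromisoformat(t1)
        per_domain[urlparse(url).netloc] += delta.total_seconds()
    return dict(per_domain)

by_student = defaultdict(list)
for event in sorted(events, key=lambda e: (e[0], e[1])):
    by_student[event[0]].append(event)

for sid, evs in by_student.items():
    print(sid, navigation_route(evs), dwell_times(evs))
```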
Another integral part of PLATO is connecting learning material to students’ demonstrated comprehension. Analyzing the content of accessed websites and comparing it with student responses requires collaboration among several disciplines (such as education, linguistics, and computer science). We seek to obtain indicators from students working on challenging tasks: where do they go to gather information, and how do they attempt to use that information to complete the required tasks? With these questions in mind, we also aim to identify promising cues and patterns in the source information (such as metaphorical framing) that may have led (or misled) students to provisionally trust, reference, cite, or ignore certain sources and pieces of information. While most students performed in the upper half of scores on generic tasks, which were focused on solving everyday online information problems, students performed considerably worse when solving domain-specific tasks in their own disciplines.
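One simple computational entry point to the content-response comparison described above is lexical overlap between a student’s response and the pages they visited. The sketch below uses TF-IDF cosine similarity from scikit-learn as a rough proxy; the texts are hypothetical, and the actual PLATO analyses combine richer linguistic and computational methods.

```python
# A minimal sketch: scoring how similar a student's response is to the
# scraped text of each page they visited. Texts here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

page_texts = [
    "Overview of diagnostic criteria for iron-deficiency anemia ...",
    "Sponsored article: miracle supplement cures fatigue fast ...",
]
student_response = "The diagnostic plan starts from the anemia criteria ..."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(page_texts + [student_response])

# similarity of the response (last row) to each visited page
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for text, score in zip(page_texts, scores):
    print(f"{score:.2f}  {text[:50]}")
```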
The results are clear: we find a need to promote DOM-COR skills among students within and across academic domains. Because students did not succeed in transferring their often highly developed generic skills to domain-specific tasks, we conclude and recommend that support in DOM-COR be specifically integrated into regular academic studies. While students were typically able to research, evaluate, and process suitable sources and content on everyday issues on the internet, they had more difficulty successfully applying these skills to the research and argumentation processes required for domain-specific tasks, such as legal opinions in law or diagnostic plans in medicine.
So far, the log data of student-accessed websites showed that subjects accessed many more established specialized databases when solving tasks in their own field. We have one possible but as yet unexamined explanation for the discrepancies found between performance on GEN-COR versus DOM-COR tasks, which differed across domains: students may be more versed in using general search engines like Google and less proficient in using the domain-specific databases established in disciplines like medicine and law. Compared with economics and education students, students from medicine and law generally made more use of their domain-specific literature and databases, which are, in turn, more strongly promoted in their studies. The latter students also perceived greater support for information evaluation and argumentation skills in their studies, enabling them to evaluate the quality of online research more effectively and to incorporate it into their academic and/or professional work.
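A minimal version of the log aggregation behind such comparisons might classify each visited site as a domain-specific database, a general search engine, or something else, as sketched below. The domain lists are illustrative assumptions, not PLATO’s actual coding scheme.

```python
# A minimal sketch of classifying visited URLs by source type.
# The domain lists are illustrative assumptions.
from urllib.parse import urlparse

DOMAIN_DATABASES = {"beck-online.beck.de", "pubmed.ncbi.nlm.nih.gov"}
GENERAL_SEARCH = {"www.google.com", "www.bing.com", "duckduckgo.com"}

def classify_visits(urls):
    counts = {"domain_database": 0, "general_search": 0, "other": 0}
    for url in urls:
        host = urlparse(url).netloc
        if host in DOMAIN_DATABASES:
            counts["domain_database"] += 1
        elif host in GENERAL_SEARCH:
            counts["general_search"] += 1
        else:
            counts["other"] += 1
    return counts

print(classify_visits([
    "https://pubmed.ncbi.nlm.nih.gov/?term=anemia",
    "https://www.google.com/search?q=anemia+treatment",
]))
```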
Further, we discovered that the initial situation varies with the major sources of challenge in different domains. Namely, is there a set of online resources (for example, databases and references) used as a base source in the discipline, topic area, information type, and language; and are students aware of it and able to use it? If so, the skill focus is on competent use. Otherwise, students will need to find and piece together information from diverse, possibly unfamiliar web platforms, and the skill focus is on searching and quality evaluation.
To provide a couple of examples, consider medical and law students’ practices when researching online. Medical students in Germany currently have a go-to didactic database to answer most of their questions. They may occasionally carry out their research elsewhere, for instance in the case of a newly discovered variant or a new treatment, or to verify a rarely used relation between symptoms and treatment. By contrast, law students need to reference specific laws and guidelines, most of which are available in print but are often more conveniently accessible online; sometimes, they also need to read up on specific court decisions and interpretations.
In both medicine and law, various client-facing internet resources are useful for beginners, but less so for professionals. What’s more, numerous advocacy and business (or sponsored) websites may advance partially biased interests. For teacher trainees, there is a wealth of open educational resources available, and some educational science databases, but no unified one-stop resource.
All of the professions that have been the focus of PLATO thus far deal with similar situations. They involve patients, clients, or students who can search the internet but who usually rely on sources that require less specialized knowledge and are easier to understand than databases geared toward professionals. Consequently, misconceptions among higher-education students and even graduates, or conflicts between professionals and their misinformed, distrusting clients (due to unskilled internet use), are predictable problem areas that could be addressed in problem-based teaching approaches to foster students’ COR skills.
In our assessments of the promotion of COR skills in academic studies, the results point to specific deficits in critical research, as well as in the evaluation and integration of online sources into one’s own argument. These deficits manifest in lower performance in subjects’ research and processing of sources in domain-specific tasks. To be sure, the students were often able to research, evaluate, and process appropriate sources and content to check everyday facts on the internet, but they had difficulty successfully applying the respective skills to their domain-specific tasks (such as legal opinions). In terms of professional practice, such deficits among graduates could mean that the content found and used in information research (for example, in medical or legal databases) is incomplete, partially incorrect (or wrongly interpreted), or out of date. Needless to say, any or all of these factors can significantly impair the quality of the resulting professional decisions and actions (for instance, legal opinions or diagnostic plans).
Vast individual differences in students’ performance on domain-specific tasks also suggest that stronger individual support is needed here. At the same time, the findings indicated overall that COR skills should be promoted more strongly in curricular training so that students learn to apply them within their respective domains. This requires integrating targeted courses into standard curricula to promote these specific skills effectively. The discrepancies between generic and domain-specific skills suggest that COR skills should be promoted not only in general but also in connection with domain-specific training.
DOM-COR training is still in development and needs to be tailored to each discipline (and possibly to each course focus as well). However, we were easily able to elicit examples from faculty and students in class through questions framed for group discussion. Some examples: “How did you find this type of information? Why did you think it was reliable? Who found ‘the best’ source? Did you come across less reliable or misleading sources? Whose interests might have influenced that source? Why might people believe it?” Even before more training becomes available, faculty can encourage free and wide-ranging discussion as a means of catalyzing reflection on relevant DOM-COR skills.
One current PLATO research program examines possible trade-offs between the quality and the comprehensibility of online information. We suspect that particularly low-performing students, that is, those who have difficulty understanding academic sources and research studies, will more often turn to diverse internet sources and will therefore require personalized training. From our studies and experiences so far, we have found that both students and lecturers consider GEN-COR and DOM-COR skills to be important. They have voiced interest in learning about credibility cues, research training, epistemically grounded discussions of information quality online (which they currently find lacking), and internet “tricks of the trade.” Without such training, however, students will likely continue to study on their own, unsupported, with whatever information they find online.
We have delivered an initial dedicated GEN-COR and DOM-COR training program in three domains (law, medicine, and teaching), in which we demonstrated the extant databases, how to search them using specific keywords and operators, and selected quality cues to check on websites. This newly developed training, in combination with the domain-specific assessments, represents an approach to systematically offering targeted support based on students’ identified strengths and weaknesses in using online information.2 At the same time, our findings indicate that promoting COR skills should be addressed more strongly in the regular curriculum, thereby teaching students these fundamental skills for dealing with a continuously updating digital media landscape.
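To give a flavor of the keyword-and-operator searching covered in such training, the sketch below constructs a Boolean query against PubMed’s public E-utilities interface. The query itself is a hypothetical example; the PLATO training also covers legal and educational databases, which have their own query syntaxes.

```python
# A minimal sketch of a Boolean keyword search against PubMed's esearch endpoint.
# The query is an invented example for illustration.
from urllib.parse import urlencode
from urllib.request import urlopen

query = "anemia[Title/Abstract] AND iron deficiency NOT pediatric"

# esearch returns matching PubMed IDs as XML
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urlencode({"db": "pubmed", "term": query, "retmax": 5}))
print(url)                          # inspect the constructed request
with urlopen(url) as response:      # network call; requires internet access
    print(response.read()[:300])    # first bytes of the XML result
```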
Looking ahead, recent developments in AI-based search software, such as ChatGPT, are expected to change the affordances of internet searches and, consequently, task requirements: both for search sub-skills and for reasoning in terms of synthesizing information. These challenges can be addressed empirically, by logging search efforts and by expressly allowing or forbidding the use of selected platforms in accomplishing or researching tasks. Still, adjustments need to be made to the tracking of navigation and the scraping of sources.3
In a similar vein, the effectiveness of critical thinking training has been examined in a few studies and meta-analyses, highlighting different opportunities to develop critical thinking skills both within domain courses and through general training.4 Those studies also point to challenges in applying generic skills to domain-specific tasks. Research on misinformation indicates that various perceptual and analytic routes can contain important cues and reveal source reliability or one-sidedness. These studies also suggest a wealth of corresponding training approaches, such as logic training, rhetorical analysis, debiasing, emotional introspection, empathy, and perspective-taking, in addition to specific lenses for larger systemic biases from sociology, history, media studies, and critical literacies in the humanities.5
Most approaches, however, still largely lack integration with online sources. In terms of opportunities for future work, educational researchers should collaborate across disciplines to narrow these gaps. Our review highlighted many formidable challenges that need to be addressed to assess and teach the critical use of online information sources fairly (and compassionately, given prior misconceptions). Complexity seems to have increased by orders of magnitude: “rationality” is claimed even by monocriterion advocacy groups, and calls to “think critically” and “do your own research” are used even by demagogues, who encourage closed groups of followers, one-sided agendas and interpretations, cherry-picked data, loaded delivery, and persuasion by (algorithmic) repetition with a science-like look. Applying critical thinking skills to available data can leave inexperienced reasoners with the impression of having uncovered revelations that are wrongfully ignored because they are inconvenient rather than because they are incompatible with the evidence. In the online environment itself, many platforms are suspected of inviting and even training students to become cognitive misers, aggravating tendencies that invite poor reasoning (for example, sensationalism turned into clickbait) and embedding hard-to-detect bias (such as bias that surfaces through algorithms). The task for assessment developers and educators is to tease these variants apart, increase awareness of online challenges without ostracizing or overpowering students, and reinforce common standards for thorough thinking and evidence that are demonstrably beneficial for students in our digital age.6
Looking back at the data collected and analyzed through PLATO over the last seven years, we can conclude that interdisciplinary and cross-domain analyses allow for significant progress in theoretical modeling and empirical explanation; no single discipline or domain could have achieved these advances alone. For instance, through multi- and mixed-methods analyses of the same data corpus, using different analytical perspectives and approaches with multiple data triangulation and validation, we gain a considerable amount of new knowledge as well as high-quality results. At the same time, such work requires elaborate research workshops and discussions with multiple steps (such as the joint interpretation of results). Overall, such interdisciplinary collaborations are associated with relatively high transaction costs and challenges, and they require much more time in the research process. Additionally, multiyear communication processes are needed to develop a “common” language for the project, in which researchers from very different disciplines (currently over fifteen in PLATO) can work effectively and publish their findings successfully. Joint interdisciplinary publications are a great challenge in and of themselves and require much more time than a conventional publication in one’s own discipline. There is also a permanent trade-off between meeting the standards of one’s own discipline and remaining comprehensible to a broader interdisciplinary research community.
To conclude, complex phenomena such as positive or negative learning in different higher-education settings (such as print versus online materials) can only be explored comprehensively by consolidating different areas of expertise in international interdisciplinary research teams such as those presented here. Such collaborations require a group of discerning researchers who are willing to transcend the boundaries of their own disciplines and bear the relatively high costs in the long run. In this way, we can gain a deeper understanding of challenging issues like engaging students in learning, studying with the internet in the digital age, and finding valid solutions to problems that arise from new technologies.
AUTHOR’S NOTE
I would like to thank Howard Gardner for his very valuable input and feedback during the preparation of the manuscript, and Key Bird and Phyllis Bendell for their excellent stylistic revisions of the final version.
Endnotes
1. Olga Zlatkin-Troitschanskaia, ed., Frontiers and Advances in Positive Learning in the Age of InformaTiOn (PLATO) (Berlin: Springer, 2020).
2. We note that training by the Stanford History Education Group was found effective in fostering civic online reasoning performance. See Sarah McGrew, Mark Smith, Joel Breakstone, et al., “Improving University Students’ Web Savvy: An Intervention Study,” British Journal of Educational Psychology 89 (3) (2019): 485–500.
3. Hendrik Drachsler and Frank Goldhammer, “Learning Analytics and eAssessment—Towards Computational Psychometrics by Combining Psychometrics with Learning Analytics,” in Radical Solutions and Learning Analytics, ed. Daniel Burgos (Berlin: Springer, 2020), 67–80.
4. Hannes Weber, Steffen Hillmert, and Karin Julia Rott, “Can Digital Information Literacy among Undergraduates Be Improved? Evidence from an Experimental Study,” Teaching in Higher Education 23 (8) (2018): 909–926.
5. Barbara Blummer and Jeffrey M. Kenton, Improving Student Information Search: A Meta-cognitive Approach (Amsterdam: Elsevier, 2015).
6. Jonathan Osborne, Daniel Pimentel, Bruce Alberts, et al., Science Education in an Age of Misinformation (Stanford, Calif.: Stanford University, 2022).