Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/277972 
Year of Publication: 
2023
Series/Report no.: 
32nd European Conference of the International Telecommunications Society (ITS): "Realising the digital decade in the European Union – Easier said than done?", Madrid, Spain, 19th - 20th June 2023
Publisher: 
International Telecommunications Society (ITS), Calgary
Abstract: 
Artificial intelligence (AI) tools such as ChatGPT and GPT-3 have shot to prominence recently (Lin 2023), as dramatic advances have shown them to be capable of writing plausible output that is difficult to distinguish from human-authored content. Unsurprisingly, this has led to concerns about their use by students in tertiary education contexts (Swiecki et al. 2022), and they have been banned in some school districts in the United States (e.g. Rosenblatt 2023; Clarridge 2023) and from at least one top-ranking international university (e.g. Reuters 2023). There are legitimate reasons for such fears, as it is difficult to differentiate students' own written work presented for assessment from that produced by the AI tools. Successfully embedding these tools into educational contexts requires an understanding of what they are and what they can and cannot do. Despite their powerful modelling and description capabilities, the tools have (at least currently) significant issues and limitations (Zhang & Li 2021). As telecommunications policy academics charged with research-led teaching and with supervising both undergraduate and research students, we need to be certain that our graduates are capable of understanding the complexities of current issues in this incredibly dynamic field and of applying their learnings appropriately in industry and policy environments. We must also be reasonably certain that the grades we assign are based on the students' own work and understanding. To this end, we conducted an experiment with the current (Q1 2023) version of the AI tool to assess how well it coped with questions on a core and current topic in telecommunications policy education: the effects of access regulation (local loop unbundling) on broadband investment and uptake. We found that while the outputs were well written and appeared plausible, they contained significant systematic errors which, once academics are aware of them, can be exploited to prevent AI use from severely undermining the credibility of our assessments of students' written work, at least for the time being and for the version of the chatbot software we used.
Subjects: 
Artificial Intelligence (AI)
ChatGPT
GPT-3
Academia
Creative Commons License: 
cc-by-nc-nd
Document Type: 
Conference Paper

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.