Research Group Meeting Schedule

Zoom link

Deadlines

Upcoming meeting topics 2023

03 March — Stephen Piccolo — Update on LLM performance in introductory programming for Biology majors, and ongoing research.

10 March – Andrew Meads and Yu-Cheng Tu — SIGCSE Practice talk


2022

The following list should always be considered a draft, and topics can be rearranged to fit new ideas and discussions.

4 Nov – Diana Kirk — Practice talk for Koli paper “Refining a Risk Framework for Student Group Projects”

28 Oct — Lauri Malmi guest speaker

21 Oct — Capstone group presentations

7 Oct – Xiaohan Liu: Honours project — “Visualizing Software Measurements for Programming Assignments to Support the Marking Process”

30 Sept – Nalin: Cybersecurity Education

18 Mar – Andrew Meads: A framework for web-based student educational projects

11 Mar

04 Mar — Diana Kirk: Risk management framework for student projects

25 Feb

18 Feb

14-17 Feb – ACE Conference (Program)

11 Feb

04 Feb

28 Jan

23 Jan – ITiCSE Full Papers Due

21 Jan

16 Jan – ITiCSE Abstracts Due

14 Jan

2021

26 Nov – Danielle, Ken, Udayangi — Passive Sensing and AI-Enabled Adaptive Intelligent Tutoring Systems for Computer Science: A Systematic Review

Adaptable, intelligent tutoring systems are starting to be integrated into tertiary teaching due to their potential for personalised support of student performance and engagement. Biometric-enabled tutoring systems are emerging within this domain, where the system adapts based on students’ measurable psychological and behavioural characteristics. To better understand the state of the art of these systems, we conducted a systematic literature survey based on PRISMA guidelines across six databases. Our query yielded 21,928 results. We included 13 articles with functional, biometric-enabled tutoring systems with artificial intelligence (AI) adaptability that both a) were within computer science education and b) included empirical results with students. We review the range of AI and biometric characteristics, the justifications for these features, and how they are reported to support student learning. We discuss the implications for the design of adaptive intelligent tutoring systems.

19 Nov — Thomas Practice Talk

11 Nov — Practitioner / Experience report template

5 Nov — Discussion around upcoming COMPSCI 101 changes and research opportunities

29 Oct – Discussion – Exemplar Experience report

22 Oct – Discussion – Experience reports

15 Oct – Devi – PhD overview

8 Oct – Discussion about Debugging

1 Oct – Cancelled

24 Sept – Managing risks in student group projects — Diana Kirk

17 Sept – Git Utilities (SIGCSE Special Project) – Nasser

3 Sept – Planning ACE papers

27 August – Cancelled

20 August – Reviewing papers

6 August – Ideas discussion and data collection

30 July – Introduction to the Computing Education Research Ecosystem

23 July – Cancelled

16 July – ICER Practice Talk – A Semblance of Similarity: Student Categorisation of Simple Algorithmic Problem Statements (James Finnie-Ansley)

9 July – Scratch and Sense: Using Real-Time Sensor Data to Motivate Students Learning Scratch (Hussel Suriyaarachchi)

Abstract: Block-based programming environments are popular in pre-tertiary classrooms as they present learners with an easy-to-use graphical interface in which programs are constructed by dragging blocks into a scripting area and snapping them together. This approach eliminates some of the challenges of text-based languages, such as syntax errors, allowing students to focus on being creative. Input to programs in block-based environments is typically limited to the keyboard and mouse, meaning programs usually react only to the direct actions of the user. Allowing programs to respond to changes in the physical environment may influence the types of programs students create and their perceptions of learning to program. We explore this idea by integrating real-time sensor data into Scratch, one of the most popular block-based environments. Students simply connect a sensor to their computer via USB and use custom blocks, alongside conventional Scratch code, to read and respond to the sensor data in real time. We conducted a field study in a classroom of 30 students with limited Scratch experience to understand how they would use the sensors and to what extent they would be motivated by this approach. In this talk, we provide some background to the project and present preliminary findings from the field study.
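
A rough sketch of the kind of read-and-respond loop such an integration relies on is shown below. This is an illustration only, not the project’s actual Scratch extension (which exposes readings through custom blocks); the serial port name, baud rate, threshold, and one-reading-per-line message format are all assumptions, and it uses the third-party pyserial library.

```python
# Illustrative sketch only: a generic loop that reads newline-terminated
# sensor readings over a USB serial connection and reacts to them.
# The port name, baud rate, threshold, and message format are hypothetical;
# the actual project exposes sensor data through custom Scratch blocks.
import serial  # third-party pyserial package

PORT = "/dev/ttyUSB0"  # assumed device path; varies by machine and OS
BAUD = 9600            # assumed baud rate

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        raw = ser.readline().decode("ascii", errors="ignore").strip()
        if not raw:
            continue  # read timed out with no data
        try:
            value = float(raw)  # assume one numeric reading per line
        except ValueError:
            continue  # skip malformed lines
        # React to the reading, much as a Scratch script might move a sprite.
        if value > 100:
            print("Sensor high:", value)
        else:
            print("Sensor reading:", value)
```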

2 July – Cancelled due to ITiCSE

25 June – Cancelled due to illness

18 June –

11 June – Casual discussion and catchup

04 June – Repertory Grid Technique for Research (Diana Kirk)

28 May – qChecker – a static analysis tool for applying constraints and providing feedback for student-submitted code (James Finnie-Ansley)

Abstract: I am currently writing a Python package that checks that student code satisfies given constraints and provides feedback on possible mistakes students have made. I am looking for feedback on the tool, the possible directions it could take, and how it might be used.
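
As a rough illustration of the kind of check such a tool might perform (this is not qChecker’s actual API, just a minimal sketch using Python’s standard ast module on a hypothetical student submission), the snippet below flags a common novice pattern where an if/else merely returns True or False:

```python
# Minimal sketch of the idea behind a constraint/feedback checker
# (not qChecker's actual API): walk the AST of a student submission and
# flag an if/else that merely returns True or False.
import ast

STUDENT_CODE = """
def is_adult(age):
    if age >= 18:
        return True
    else:
        return False
"""


def returns_bool(stmts, value):
    """Check whether the statement list is exactly `return <value>` for the given bool."""
    return (
        len(stmts) == 1
        and isinstance(stmts[0], ast.Return)
        and isinstance(stmts[0].value, ast.Constant)
        and stmts[0].value.value is value
    )


def check_redundant_if_return(tree):
    """Collect feedback messages for redundant if/else-return patterns."""
    feedback = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.If)
                and returns_bool(node.body, True)
                and returns_bool(node.orelse, False)):
            feedback.append(
                f"Line {node.lineno}: this if/else just returns the truth "
                "value of the condition; consider 'return <condition>'."
            )
    return feedback


for message in check_redundant_if_return(ast.parse(STUDENT_CODE)):
    print(message)
```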

21 May – Cancelled

14 May – Programming Language as Natural Language: Mode and Genre (Robert Biddle)

Continued

07 May – Programming Language as Natural Language: Mode and Genre (Robert Biddle)

Programming languages are not natural languages, because we design them explicitly to program computers. But programs are written and read by humans, are supported by human communities, and form those communities in turn. If we consider how natural languages are used, we might gain new insight into programming languages, and into programs. In this talk I will focus principally on modes and genres of language use.

30 Apr – Changes to NCEA

23 Apr

16 Apr – Cancelled

09 Apr – Cancelled

08 Apr – Australasian ACM SIGCSE Research Talks (1pm-3pm NZST)

  • https://auckland.zoom.us/j/6552359249
  • Simon (University of Newcastle)
  • Claudia Szabo (The University of Adelaide)
  • Anthony Robins (University of Otago)
  • Margaret Hamilton (RMIT University)
  • Tim Bell (University of Canterbury)
  • Bryn Jeffries (Grok Learning)
  • Andrew Luxton-Reilly (The University of Auckland)

02 Apr – SE Project Update

26 Mar – SIGCSE Debrief / Highlights

19 Mar – ACE summary / ITiCSE preview

12 Mar – Conventicle planning – Andrew

05 Mar – How do programmers think while debugging?  – Songyan Teng (Summer Scholarship Student), Benny Chung (Research Assistant), Danielle Lottridge, Ewan Tempero

Abstract: In order to teach students to do something (well), we need to understand what it means to do that thing (well). One such thing is debugging. We don’t teach debugging well, perhaps because we don’t understand what it means to do debugging well. Part of the problem with understanding how programmers debug is that we can’t see what they’re really thinking.

Danielle and I have been running a study for the last couple of years in which we gather biometric data, specifically EEG, eye tracking, and skin conductance. We hope to use this data to better understand what’s going on while programmers debug. In this presentation we (including two students who worked with us over the summer) will describe the study and the results we have so far.


26 Feb – Spatial Intelligence Training – Burkhard

Abstract: New Zealand students’ achievements in mathematics and science have fallen over the last decade, with the percentage of low performers increasing. Previous research shows a clear correlation between spatial intelligence and success in STEM subjects. Spatial intelligence refers to the capacity to think about objects in three dimensions and to draw conclusions about those objects. Spatial reasoning skills are important in many applications such as medicine, engineering, and architecture. Improving these skills can help produce better outcomes across STEM education broadly. In this talk we classify different types of spatial intelligence, discuss neuroscientific foundations, and review how spatial intelligence can be assessed and trained. We conclude with a discussion of current and future research in this field.

19 Feb – Decomposition in Early School Level Programs – Tyne

Abstract: The NCEA computer programming assessment standards in New Zealand will often be implemented so that students complete relatively open tasks that have runtime user input and output. The assessment standards also outline that aspects of code quality such as flexibility and ‘logical structure’ should be assessed. At each of the three levels of NCEA, students will normally only know a limited range of programming constructs. This combination of limited constructs, open tasks, and runtime user input and output can have tradeoffs relating to the qualitative aspects of programs. This session will look at issues relating to balancing all of these aspects and the implications for the decomposition of student-created programs.

12 Feb – Student Recognition of Salient Features in Computer Science Problems – James Finnie-Ansley

Abstract: When a student reads a programming problem statement, something has to happen; that something could be abject confusion, the beginnings of a search for a solution, or a well-formed understanding of what the problem is asking and how to solve it. Barring abject confusion, several theories explain the differences between these responses, all revolving around the existence or non-existence of a problem schema: some mental concept or knowledge structure which encodes what it is to be a particular type of problem that gets solved in a particular type of way.

Schemata represent our concepts about the world; they allow us to recognise, categorise, and explain the things around us. Problem schemata represent our concepts about different problems, what they are, and how to solve them. Unfortunately, novice programmers are said not to have sets of schemata they can call on when solving problems and must resort to generic problem-solving techniques. Not only is this an inefficient method of solving problems, it can even inhibit the development of schemata. But novice programmers still have to have some concepts about problems. Schemata develop slowly, so we might assume that even novice learners have some developing schemata which inform their perceptions and approaches to problems.

In line with the constructivist theory of learning, it is commonly accepted that we need to build on the existing knowledge of learners; however, much of the research on novice learners’ knowledge seems to focus on what they don’t know or what they get wrong. Little has been done to address the nature of what novice learners do know: what do their schemata, as undeveloped as they may be, ‘look like’, and what concepts do they have about problems? Understanding this might mean we can focus on what learners do know and we can better understand how we might nurture learners’ preexisting knowledge.

To examine the nature of novice programmers’ knowledge, this thesis examines the features students recognise as salient in simple computing problems and how they use those features when categorising problems. An interpretivist study involving semi-structured interviews, card sorting exercises, and rewording exercises was conducted and the results were analysed using mixed qualitative and quantitative methods drawing from schema and category theory.

5 Feb – The Hidden Curriculum in High School Digital Technology Teaching — Elliot Varoy


2020

23 October: Jacqui – Creativity, or something else 🙂

8 October:

2 October: Burkhard — CodeRunner Sandbox

25 September: Planning

18 September: Diana — Qualitative feedback from teachers

11 September: Paramvir et al. – Industry skills

4 September: Jacqui — Creativity

28 August:  Tyne — Phenomenography and research questions

21 August: Paul — Time management

14 August: Elliot — Timing of Curriculum for Teaching Computing at Schools

7 August:  Planning

31 July: Robert Sheehan – Programming language for novices

24 July: Cancelled for all-day research workshop

17 July: Yannick Ring, Blockly OOP extensions

10 July: Cancelled – Māori Engagement pre-workshop meeting scheduled

3 July: Selecting Papers for the COMPSCI 289 course

26 June: Debugging Study

19 June: Max & Matt and Gargi & Salma

12 June: P4P – Ewan + Nathan & Allen

5 June: James – Problem Solving Schema and Recognition of Problem Features

29 May: Timo & Sukhans and Casey & James

22 May: Planning session

15 May: Nasser — Visual Analogies for Parallel Programming

8 May: Elliot on Computing Curricula

1 May: Analysis of CS students’ learning journals

24 Apr: Guest speaker: James Prather (Abilene Christian University)

17 Apr: Diana – TLRI recap

10 Apr: Easter break.  No meeting

3 Apr: Guest speaker: Brett Becker (University College Dublin)

27 Mar:

20 Mar:

13 Mar: SIGCSE 2020 — possible cancellation

6 Mar:

28 Feb:

21 Feb: Keystroke Data in Programming Courses – Juho Leinonen

14 Feb:

7 Feb: ACE 2020 — Research Meeting CANCELLED

31 Jan:

24 Jan:

17 Jan:

10 Jan: Holiday — Research Meeting CANCELLED

Enjoy the summer!


2019 Archive

22 Nov:  Teaching/Scaffolding Decomposition of problems — Tyne, Andrew, Ewan

8 Nov:  Blockly — Nasser

1 Nov:  Visual analogy of OOP concepts — Nasser / Victor

25 Oct: Using control-flow-graphs to provide feedback to students on code quality (Lucy Jiang and Robert Rewcastle)

Writing quality code is an extremely important skill that all good developers should have. However, software engineering students have not been receiving enough feedback on code quality, which has led to a shortcoming in their ability to write quality code. There are many tools that can provide feedback on functionality, but support for feedback on higher-level concepts such as quality is limited. In this paper, we propose the use of control flow graphs (CFG) and the cyclomatic complexity number (CCN) to generate personalised feedback for student code automatically. We have developed a tool for generating three types of feedback using CFGs and CCN and analysed its usefulness by applying it to a dataset of 542 method implementations written by students. We also developed a web client for displaying the feedback and performed a pilot study that involved students completing a coding exercise, interacting with the web client, and refactoring their code. In the dataset, the largest range for an exercise was 21, and all the students in the pilot study either had the same or a lower CCN after viewing feedback. Furthermore, students indicated that the feedback helped them find ways to reduce their CCN and gain a better understanding of code quality. This demonstrates that there is potential in the use of CFGs and CCN to provide students with personalised feedback on their code.
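
As a rough illustration of the metric involved (not the authors’ tool), the sketch below approximates a cyclomatic complexity number for each function in a Python source string by counting decision points in the AST; the set of counted node types, the threshold of 10, and the feedback wording are assumptions.

```python
# Simplified illustration (not the authors' tool): approximate the
# cyclomatic complexity number (CCN) of each function in a Python source
# string by counting decision points in the AST, then attach a feedback
# hint when the number is high. The node types counted, the threshold of
# 10, and the hint wording are assumptions for illustration.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)


def cyclomatic_complexity(func):
    """Count decision points below `func` and add 1 for the entry path."""
    complexity = 1  # a straight-line function has CCN 1
    for node in ast.walk(func):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each additional and/or operand adds a branch
            complexity += len(node.values) - 1
    return complexity


def feedback(source):
    """Yield one feedback line per function definition in `source`."""
    tree = ast.parse(source)
    for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        ccn = cyclomatic_complexity(func)
        hint = "consider extracting helper functions" if ccn > 10 else "looks fine"
        yield f"{func.name}: CCN {ccn} ({hint})"


SAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    else:
        return "positive"
"""

for line in feedback(SAMPLE):
    print(line)
```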

18 Oct: Debugging Patterns

Jacqui will discuss the idea of debugging patterns.

11 Oct: Cancelled for Software Engineering Exhibition Day

4 Oct: Debugging

Danielle and Ewan will discuss their work using biometrics to investigate student arousal and attention when engaged in debugging tasks.

27 Sept: Cancelled for MBIE workshop

20 Sept: Journals for developing student self-regulation

A discussion and possible direction for research on journals. — Andrew

13 Sept:  Journals for learning

In this week’s CELT session, Paramvir will discuss the development of a research idea: the impact of note-taking and journal writing on students’ learning in undergraduate computer science courses. Points of discussion will include the most applicable research methods, data collection approaches, and analysis techniques. He will also present some preliminary results from early work.

6 Sept: Pass Rates

Although the CS education community generally accepts the view that pass rates for CS1 are low, there is little evidence to support that view. Three notable studies have explored pass rates of CS1 courses internationally. In this seminar, we present a preview of the results of an ITiCSE working group that compared pass rates of CS1 courses with those of other STEM disciplines within institutions. We found that pass rates for CS1 courses are generally very similar to the pass rates of other STEM disciplines, but that larger courses have slightly lower pass rates on average.

Programming “patterns” – Andrew

Debugging – Andrew

Drill and practice effectiveness – Andrew

CS students and engagement with learning technologies — Jacqui