IntelliGame in Action: Gamifying JavaScript Unit Tests - Results

Written by gamifications | Published 2024/04/03

TLDR: The chosen target class was well-received: all groups agreed it was easy to comprehend and implement. Just under half of the treatment group and fewer than 40% of the control group felt they had sufficient time to complete the implementation, and only about 25% in both groups felt they had enough time to conduct thorough testing.

Authors:

(1) Philipp Straubinger, University of Passau, Passau, Germany and this author contributed equally to this research;

(2) Tommaso Fulcini, Politecnico di Torino, Torino, Italy and this author contributed equally to this research;

(3) Gordon Fraser, University of Passau, Passau, Germany;

(4) Marco Torchiano, Politecnico di Torino, Torino, Italy.

Table of Links

Abstract and Introduction

Background and Related Work

Implementation

Experiment

Results

Conclusions, Acknowledgement, and References

5 RESULTS

While the analysis of results is ongoing, we provide a preliminary overview by addressing the challenges encountered, our problem-solving approaches, and insights from the exit survey completed by students regarding their perceived user experience.

5.1 Survey Answers

Based on the responses to the exit survey (Fig. 2), the chosen target class was well-received by all groups. All groups unanimously agreed that the class was easy to comprehend and implement. However, it is noteworthy that just under half of the participants in the treatment group and fewer than 40% in the control group indicated they had sufficient time to complete the implementation. Consequently, it is not surprising that only approximately 25% in both groups felt they had enough time to conduct thorough testing of their implementations.

Interestingly, around two-thirds of all participants actively tested their code. Notably, a smaller percentage of participants wrote tests during the development phase, with 52% in the treatment group compared to 44% in the control group. Roughly 50% of participants in both groups were confident in the correctness of their implementations. Conversely, only 21% in the treatment group and 14% in the control group were certain about the quality of their test suites. No significant differences were observed between the two groups.

The treatment group’s responses regarding IntelliGame are predominantly positive or, at worst, undecided (Fig. 3). Participants demonstrated a clear understanding of the tool’s descriptions and of how to make progress on the presented achievements. They also appreciated the frequency of the notifications. About 40% of the participants reported that the achievements positively influenced their testing behavior, and an equal share mentioned being motivated by both the notifications and the plugin itself. Encouragingly, 42% of participants expressed a desire to use IntelliGame in their own projects.

5.2 Problems Faced and Lessons Learned

In the initial adaptation phase, we faced challenges while transitioning the original TypeScript project to JavaScript. Although automatic transpilation produced correct JavaScript code, the hurdle lay in converting the TypeScript configuration to a JS-compatible one. Since the original project used a different testing framework, namely Karma, we also had to modify the configuration to fit Jest.

Another challenge arose when configuring the main.js file for manual function assessment. Unlike Java, where a Main class defines the program's entry point, JavaScript, especially under Node, has no such convention. So, in order to reproduce the same setting as the original validation experiment [8], we had to create a custom file for this purpose.
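A minimal sketch of such an entry file is shown below; slugify is a hypothetical stand-in for the exercise functions, which in the real project would be imported from the implementation file rather than defined inline:

```javascript
// main.js — hypothetical sketch of a manual entry point for Node.
// Node has no mandatory Main class, so the file simply calls the code
// under development directly. `slugify` is an illustrative stand-in,
// not one of the study's actual exercise functions.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")  // collapse non-alphanumeric runs into dashes
    .replace(/^-+|-+$/g, "");     // strip leading/trailing dashes
}

// Running `node main.js` exercises the function manually,
// mirroring what a Java `main` method would do.
console.log(slugify("Hello, World!")); // → hello-world
```

With this file in place, the IDE can offer a "run main.js" configuration alongside the Jest test configuration, reproducing the two execution modes of the original Java setup.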

Throughout the experiment, additional issues surfaced and required adapting the plugin. In the first session, the provided run configurations only allowed participants to either run the tests or execute the main file. Despite being shown how to switch between the two configurations, most participants stuck with one method for the entire session.

Upon reflection, we observed that the number of tasks overwhelmed participants, leaving them demotivated despite knowing that their performance would not be evaluated. While they found the functions appropriately challenging, the abundance of tasks influenced their survey responses negatively (e.g., rating their implementations and test suites as not good enough and expressing dissatisfaction with the tool).

We attribute the superior performance of students in the pilot study to their greater experience and a less pressured environment. To address this, future experiments will involve students with experience levels comparable to the overall sample. We hypothesize that incorporating gamification elements that promote a TDD approach into the plugin may yield even more encouraging results.

Minor issues included students incorrectly selecting a parent directory when opening the project, which interrupted the script that committed their progress to the repository. We promptly identified and resolved these problems. General challenges with the committing process stemmed from variations in participants’ laptop configurations; in some cases we could not resolve them and instead collected the final project state at the designated time.

This paper is available on arxiv under CC BY-SA 4.0 DEED license.


Published by HackerNoon on 2024/04/03