Based on the paper by Winter, Hinderks, Schrepp, and Thomaschewski (2017) ([link to paper](https://doi.org/10.18420/muc2017-up-0002)),
we conducted a survey following these steps:
1. Create categories based on different software used for similar tasks.
2. Present these categories in the questionnaire with well-known examples.
3. Explain each scale with a brief sentence.
4. Evaluate importance on a 7-point Likert scale ranging from -3 (very unimportant) to +3 (very important).
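The importance coding in step 4 can be sketched as a simple mapping from verbal responses to numeric values. This is a minimal illustration: only the endpoints (-3 "very unimportant", +3 "very important") come from the text above; the wording of the intermediate labels is an assumption.

```python
# Hypothetical coding of the 7-point importance scale (step 4).
# Only the -3/+3 endpoints are stated in the text; the intermediate
# label wording is an illustrative assumption.
IMPORTANCE_SCALE = {
    "very unimportant": -3,
    "unimportant": -2,
    "somewhat unimportant": -1,
    "neutral": 0,
    "somewhat important": 1,
    "important": 2,
    "very important": 3,
}

def code_response(label: str) -> int:
    """Map a verbal Likert response to its numeric value."""
    return IMPORTANCE_SCALE[label.lower()]
```

With this coding, a participant answering "very important" for a category contributes +3 to that category's importance score.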
To compare our results with those of the original paper, we also included existing scales similar to our new categories.
For 'Dashboard', the most similar scale is 'Learning Platforms', and for 'VR', it is 'Games'.
A total of 69 people participated in our survey, out of which 42 completed it.
The majority of participants were researchers, technical staff, or students.
The correlation analysis showed a significant relationship between 'Dashboard' and 'Learning Platforms', and a strong correlation between 'Games' and 'VR'.
For more details, see our [rejected paper](muc23_ueq_plus_dashboard_vr_REJECTED.pdf).
## Criticism from peer review
The paper was submitted to MuC 2023 and was rejected.
The most critical points are:
* It was not evident why VR and Dashboard were proposed together.
* It was unclear why new Dashboard and VR UEQ+ modules are needed when the existing Games and Learning modules already overlap with them substantially.
* The sample size and the method used to construct the scales were criticised.