Commit 2a1cfe6e authored by Jan Bernoth

added Landing Page

parent 5538f39f
stages:
  - lint
  - test
  - publish

pylint:
  image: "python:latest"

@@ -14,4 +15,20 @@ tests:
   stage: test
   script:
     - pip install -r requirements.txt
-    - pytest
\ No newline at end of file
+    - pytest
+    - python src/main.py
+  artifacts:
+    paths:
+      - results/
+
+publish:
+  image:
+    name: pandoc/core
+    entrypoint: [ "/bin/sh", "-c" ]
+  stage: publish
+  script:
+    - pandoc doc/overview.md -o public/index.html
+    - cp -r results/ public/
+  artifacts:
+    paths:
+      - public
# Identifying UEQ+ Scales for the categories Dashboard and VR in a quantitative study
This repository introduces a set of scales for the product categories Dashboard and Virtual Reality to the User
Experience Questionnaire ([UEQ+](https://ueqplus.ueq-research.org/)).
The UEQ+ is a standardized questionnaire that provides a set of scales considered to be most important for evaluating
the user experience of a product family.
We have conducted a quantitative study to determine the most important scales.
For this purpose, we reproduced a procedure that had been used in previous studies to determine the most important
User Experience scales for the categories Games and Learning Platforms.
We included these existing categories in our questionnaire so that the conditions of our study remained comparable.
The results are presented, critically discussed, and compared with previous studies.
The derived procedure is assessed for its validity and presented in an open repository.
In this repository, we present, based on our study, the most important scales for UEQ+ evaluations for Dashboard and
Virtual Reality products.
## Most important scales
The following graphics visualize the top-rated scales for Virtual Reality and Dashboard.
These box plots show the highest-ranked scales sorted by their mean (light triangle).
<p float="left">
<img src="../results/box_Dashboard.png" width="49%" />
<img src="../results/box_Virtual_Reality.png" width="49%" />
</p>
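The repository's own plotting code lives in the `Plotter` class (see the excerpt further below); purely as an illustration of the idea, a minimal pandas/matplotlib sketch could look as follows. The scale names and ratings here are invented for the example and are not the survey data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented importance ratings (-3 .. 3) for a few UEQ+ scales;
# the real data are produced by src/main.py into results/.
ratings = pd.DataFrame({
    "Quality of Content": [3, 2, 3, 1, 2],
    "Trust": [2, 3, 1, 2, 2],
    "Perspicuity": [1, 2, 0, 2, 1],
})

# Order the scales by their mean rating, highest first, as in the figures above.
order = ratings.mean().sort_values(ascending=False).index

fig, axes = plt.subplots(figsize=(6, 3))
ratings[order].boxplot(ax=axes, showmeans=True)  # the mean is drawn as a triangle marker
axes.set_ylabel("importance (-3 .. 3)")
plt.tight_layout()
plt.savefig("box_sketch.png")
```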
### Top Scales Dashboard
* Quality of Content (Qua)
* Trustworthiness of Content (ToC)
* Trust (Tru)
* Perspicuity (Per)
* Usefulness (Use)
* Clarity (Cla)
### Top Scales VR
* Trust (Tru)
* Clarity (Cla)
* Dependability (Dep)
* Trustworthiness of Content (ToC)
* Efficiency (Eff)
* Stimulation (Sti)
## Methodology & Results
Based on the paper by Winter, Hinderks, Schrepp, and Thomaschewski (2017) ([link to paper](https://doi.org/10.18420/muc2017-up-0002)),
we conducted a survey following these steps:
1. Create categories based on different software used for similar tasks.
2. Present these categories in the questionnaire with well-known examples.
3. Explain each scale with a brief sentence.
4. Evaluate the importance of each scale on a 7-point Likert scale ranging from -3 (very unimportant) to 3 (very important).
To compare our results with the initial paper, we also included the existing categories most similar to our new ones:
for 'Dashboard' this is 'Learning Platforms', and for 'VR' it is 'Games'.
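As a minimal sketch of how such Likert ratings can be turned into a ranking (the data frame below is invented for illustration and is not our survey data):

```python
import pandas as pd

# Invented Likert ratings for one category (-3 = very unimportant .. 3 = very important);
# one row per participant, one column per UEQ+ scale.
dashboard = pd.DataFrame({
    "Quality of Content": [3, 2, 3, 2],
    "Trust": [2, 3, 2, 1],
    "Stimulation": [0, 1, -1, 1],
})

# Rank the scales by mean importance and keep the six highest
# (cf. the 'Top Scales' lists above, which are sorted by mean).
top_scales = dashboard.mean().sort_values(ascending=False).head(6)
print(top_scales)
```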
A total of 69 people participated in our survey, of whom 42 completed it.
The majority of participants were researchers, technical staff, or students.
The correlation analysis showed a significant relationship between 'Dashboard' and 'Learning Platforms' and a strong correlation between 'Games' and 'VR'.
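The README does not state which correlation coefficient was used; purely as an illustration of such an analysis, a Pearson correlation over invented per-participant mean ratings could be computed like this:

```python
from scipy import stats

# Invented per-participant mean importance ratings for two related categories;
# the actual analysis is described in the paper linked below.
dashboard = [2.1, 1.8, 2.5, 1.2, 2.0]
learning_platforms = [1.9, 1.7, 2.4, 1.5, 2.2]

r, p = stats.pearsonr(dashboard, learning_platforms)
print(f"r = {r:.2f}, p = {p:.3f}")
```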
For more details, see our [rejected paper](muc23_ueq_plus_dashboard_vr_REJECTED.pdf).
## Criticism from peer review
The paper was submitted to MuC 2023 and was rejected.
The most critical points are:
* It was not evident why VR and Dashboard were proposed together.
* It was unclear why new UEQ+ modules for Dashboard and VR are needed, given the considerable overlap with the existing Games and Learning Platforms modules.
* The sample size and the method used to construct the scales were criticised.
@@ -543,7 +543,7 @@ class Plotter:
                            labels=translate_scales(df_category_area.columns, True), rotation=0)
         axes.xaxis.set_label_position('top')
         axes.xaxis.tick_top()
-        save_plot(plt, 'box_' + category_title.value)
+        save_plot(plt, 'box_' + category_title.value.replace(' ', '_'))
         self.plotted = self.plotted + 1
         return plt