diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 1947959fac7a8b1d9952d1950bfac582f06cfda7..578c5a86a9a4a0a0e93ec737f767beb7258fef04 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -1,6 +1,7 @@
 stages:
   - lint
   - test
+  - publish
 
 pylint:
   image: "python:latest"
@@ -14,4 +15,20 @@ tests:
   stage: test
   script:
     - pip install -r requirements.txt
-    - pytest
\ No newline at end of file
+    - pytest
+    - python src/main.py
+  artifacts:
+    paths:
+      - results/
+
+publish:
+  image:
+    name: pandoc/core
+    entrypoint: [ "/bin/sh", "-c" ]
+  stage: publish
+  script:
+    - mkdir -p public
+    - pandoc --standalone doc/overview.md -o public/index.html
+    - cp -r results/ public/
+  artifacts:
+    paths:
+      - public
diff --git a/doc/muc23_ueq_plus_dashboard_vr_REJECTED.pdf b/doc/muc23_ueq_plus_dashboard_vr_REJECTED.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0935f8a5faaa3d2a5b47362dc102c6ae49256d84
Binary files /dev/null and b/doc/muc23_ueq_plus_dashboard_vr_REJECTED.pdf differ
diff --git a/doc/overview.md b/doc/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..f5ccd3da21b4ae499339d4c5adce0e9a2019c99a
--- /dev/null
+++ b/doc/overview.md
@@ -0,0 +1,73 @@
+# Identifying UEQ+ Scales for the Categories Dashboard and VR in a Quantitative Study
+
+This repository introduces a set of scales for the product categories Dashboard and Virtual Reality to the User
+Experience Questionnaire ([UEQ+](https://ueqplus.ueq-research.org/)).
+The UEQ+ is a standardized questionnaire that provides a set of scales considered to be most important for evaluating
+the user experience of a product family.
+We have conducted a quantitative study to determine the most important scales.
+For this purpose, we reproduced a procedure used in previous studies to determine the most important User Experience
+scales for the categories Games and Learning Platforms.
+These two categories were included in our study so that its conditions were comparable to those of the earlier studies.
+The results are presented, critically discussed, and compared with previous studies.
+The derived procedure is assessed for its validity and presented in an open repository.
+Based on our study, this repository presents the most important scales for UEQ+ evaluations of Dashboard and
+Virtual Reality products.
+
+## Most important scales
+
+The following graphics visualize the top-rated scales for Virtual Reality and Dashboard.
+These box plots show the highest-ranked scales, sorted by their mean (light triangle).
+
+<p float="left">
+  <img src="../results/box_Dashboard.png" width="49%" />
+  <img src="../results/box_Virtual_Reality.png" width="49%" /> 
+</p>
+
+### Top Scales Dashboard
+
+* Quality of Content (Qua)
+* Trustworthiness of Content (ToC)
+* Trust (Tru)
+* Perspicuity (Per)
+* Usefulness (Use)
+* Clarity (Cla)
+
+### Top Scales VR
+
+* Trust (Tru)
+* Clarity (Cla) 
+* Dependability (Dep)
+* Trustworthiness of Content (ToC)
+* Efficiency (Eff)
+* Stimulation (Sti)
+
+## Methodology & Results
+
+Based on the paper by Winter, Hinderks, Schrepp, and Thomaschewski (2017) ([link to paper](https://doi.org/10.18420/muc2017-up-0002)),
+we conducted a survey following these steps:
+
+1. Create categories based on different software used for similar tasks.
+2. Present these categories in the questionnaire with well-known examples.
+3. Explain each scale with a brief sentence.
+4. Evaluate the importance of each scale on a 7-point Likert scale ranging from -3 (very unimportant) to 3 (very important).
+
+To compare our results with the initial paper, we also included the categories most similar to our new ones.
+For 'Dashboard', the most similar category is 'Learning Platforms', and for 'VR', it is 'Games'.
+
+A total of 69 people participated in our survey, out of which 42 completed it. 
+The majority of participants were researchers, technical staff, or students.
+
+The correlation analysis showed a significant relationship between 'Dashboard' and 'Learning Platforms'; the correlation between 'Games' and 'VR' was strong.
+
+For more details, see our [rejected paper](muc23_ueq_plus_dashboard_vr_REJECTED.pdf).
+
+
+## Criticism from peer review
+
+The paper was submitted to MuC 2023 and was rejected.
+The most critical points were:
+
+* It was not evident why VR and Dashboard were proposed together.
+* The need for dedicated Dashboard and VR UEQ+ modules was questioned, given the overlap with the existing modules on Games and Learning Platforms.
+* The sample size and the method used to construct the scales were criticised.
+
diff --git a/src/main.py b/src/main.py
index 721985a6b2dafa0ebeea60ed3314150946e5e74f..d25a7367af97249917b0f8fdc574fa84822b6dc2 100644
--- a/src/main.py
+++ b/src/main.py
@@ -543,7 +543,7 @@ class Plotter:
                         labels=translate_scales(df_category_area.columns, True), rotation=0)
         axes.xaxis.set_label_position('top')
         axes.xaxis.tick_top()
-        save_plot(plt, 'box_' + category_title.value)
+        save_plot(plt, 'box_' + category_title.value.replace(' ', '_'))
         self.plotted = self.plotted + 1
         return plt