Research projects

For more information about the projects, please also refer to our scientific publications.

VR Language Learning

This project was developed in collaboration with the University of Augsburg. The goal of VR Language Learning is to let two participants in different locations join a shared room in a virtual environment and interact with each other. Via the integrated voice chat, a participant can converse with a native speaker of another language and thus quickly gain new skills in a foreign language. The interaction between the participants is framed by specific scenarios in the VR Language Learning application, which are also intended to raise the participants' awareness of global issues. So far, two scenarios have been implemented in the VR application:

Sustainability:

The first scenario revolves around sustainability. The participants meet in a fashion store, where they can look at facts about different items of clothing, such as the manufacturing process, production costs, and environmental impact. Each participant then selects the garments they are interested in. In the next step, both participants talk about the garments they have selected and discuss any special features.

Equity:

This scenario takes place in an office room. Both participants witness a predefined business meeting. During the meeting, the participants have the opportunity to mark situations that they perceive as unfair or discriminatory. Afterwards, the meeting sequence is replayed, and the participants can see which situations the other marked and discuss them.

Motivation

Current research shows that many prospective teachers are not sufficiently prepared for practical classroom management after completing the second phase of their training. This one-year project aims to develop a virtual classroom that can be used to gather initial insights into teaching methodological skills to prospective teachers via virtual reality.

Perspectives

After the conception and implementation of the basic VR system, a training scenario with disruptive scenes will be used to investigate whether teachers recognize them correctly and initiate the necessary intervention measures.

VR Classroom Cyberball Paradigm

Motivation
The motivation for this study arises from the persistent pursuit within the field of psychology to enhance the precision and consistency of experimental environments. While traditional methodologies are foundational, they often struggle to achieve the necessary level of control and ecological validity. Consequently, there has been a growing interest in integrating innovative technologies, such as virtual reality (VR), to address these challenges. VR offers unprecedented opportunities to create immersive and controlled environments that closely mimic real-world scenarios, providing researchers with a powerful tool to study human behavior and social dynamics.
The study is based on Cyberball, a well-known paradigm for studying interpersonal ostracism and acceptance. However, by transitioning this paradigm into a VR chat room setting, we aim to push the boundaries of experimental control and realism even further. By immersing participants in a virtual environment where they engage in conversational interactions with virtual avatars, we can explore interpersonal dynamics in more controlled and consistent ways.
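As a minimal sketch of the underlying manipulation, the following Python snippet shows how Cyberball-style scheduling is typically implemented: in the inclusion condition the participant receives a fair share of the turns, while in the exclusion condition the virtual agents stop addressing the participant after a short warm-up. All names, probabilities, and turn counts here are illustrative assumptions, not the exact values of our implementation.

    import random

    def next_addressee(turn, condition, agents=("agent_a", "agent_b"), warmup=3):
        # Pick who receives the next conversational turn (or ball toss).
        # Inclusion: the participant is addressed at a fair rate.
        # Exclusion: the participant is ignored after the warm-up turns.
        players = list(agents) + ["participant"]
        if condition == "inclusion" or turn < warmup:
            p_participant = 1.0 / len(players)  # fair share of turns
        else:
            p_participant = 0.0                 # ostracism: never addressed
        if random.random() < p_participant:
            return "participant"
        return random.choice(list(agents))      # agents interact among themselves

    # Example: simulate twelve turns of the exclusion condition
    for t in range(12):
        print(t, next_addressee(t, "exclusion"))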


Objective
The primary objective of this study is to develop and evaluate a controlled virtual reality environment tailored for psychology and social experiments, with a specific focus on adapting the Cyberball paradigm into an immersive VR setting. We designed, implemented, and evaluated this VR environment to assess its effectiveness in simulating peer exclusion experiences and eliciting corresponding psychological responses from participants. Our initial experiments yielded positive participant feedback regarding the psychological effects. We therefore aim to develop this project further, contributing to the advancement of controlled environments in psychology experiments and leveraging VR technology to foster a deeper understanding of social dynamics and psychological phenomena.

The main goal of DIGIMAN is to develop a master's degree programme in digital manufacturing and to meet the challenges of Industry 4.0. The project addresses the gaps in terms of learning outcomes, competences, and skills in the educational technologies of the fourth industrial revolution.

The project's declared objectives are:

  • Inventory of relevant digital technologies in order to elaborate a guideline for aligning Industry 4.0 with the digital devices currently used in the fabrication process
  • Elaboration of the minimum requirements for introducing a new, innovative master's programme in digital manufacturing
  • Elaboration of a new innovative curriculum on digital manufacturing
  • Elaboration of digital lesson materials in order to improve technical and digital competences
  • Elaboration of a learning management system for open education in the digital era
  • Promotion of the new, innovative programme to industry stakeholders, HEIs, and students.


Regarding innovation, this project will:

  • Create an innovative master's programme in digital manufacturing that addresses relevant needs, whereas current programmes cover only a small part of digital manufacturing
  • Provide case studies that allow the transfer of knowledge between all relevant stakeholders
  • Take a step towards the digital transformation of the teaching, learning, and evaluation process
  • Digitize all lesson materials and share them through a learning management system

The BMBF-funded project Start_V-AR focuses on the establishment of a European network in the area of virtual and augmented reality training of workers.

Motivation

The training of skilled workers is a central component of every industrial work process. In many cases, however, it is very expensive, since special tools or entire sections of, for example, production plants have to be reserved exclusively for training purposes. This can be avoided by using novel technologies to (partially) virtualize such environments.

Objective

Virtual Reality offers the advantage that a user can learn the basics of a task in a completely virtual environment, making the task easier to carry out at the real workplace. Augmented Reality, in turn, superimposes digital objects on reality and can thus display additional information to the user; during training, trainees can be shown hints for task execution. So far, augmented and virtual reality have almost exclusively been researched and applied prototypically on their own, independently of each other. However, there is untapped potential in combining these technologies into a seamless training concept that optimizes work processes.

The main goals of the project are:

  • Combine VR and AR technologies to create a seamless framework for the training of workers
  • Set up a European network for joint consideration and research of augmented and virtual reality
  • Submit a research proposal for a European funding program out of this network


The long-term aim of the EU ERASMUS+ LOGMASTER project is to establish a solid alliance among higher education institutions, education centres, certification bodies, and NGOs working together to boost innovation in the higher education system focused on supply chain management and logistics for top management positions.

The main objective of the LOGMASTER project is to develop a joint master study programme framework for Supply Chain Management and Logistics that meets the competency requirements of the ELA qualification standards.

Other specific objectives, derived from this one, are:

  • to provide a new, modern educational programme using the most advanced digital education tools, designed for top management positions in supply chain management and logistics;
  • to develop a cutting-edge supply chain management and logistics curriculum, applying modern teaching/learning methods involving digitalisation;
  • to develop student-centred approaches and flexible educational pathways fostering the competency-based education system;
  • to enhance cooperation among HEIs, logistics training associations, certification bodies, and enterprises in the fields of logistics higher education and digital education, supporting the flow and exchange of knowledge from academia to the supply chain and logistics sector.


The UniTyLab participates in this project with its knowledge in the area of Virtual and Augmented Reality for training scenarios.

ARIDLL

The EU ERASMUS+ ARIDLL project aims to develop a cooperation partnership and a professional community in Augmented Reality (AR) instructional design for language learning.

Motivation

The project is motivated by the need for digital innovation in language teaching and directly responds to the need of language teachers to have the appropriate skills for applying digital technologies, such as Augmented Reality (AR), in their practice, as both users and creators of educational materials. AR has become a popular technology and educational mobile AR applications are available, especially for science. Additionally, research documents various positive effects of using educational apps in teaching. However, there are numerous challenges regarding the use of AR in educational settings in general and in foreign language (FL) teaching.

Challenges

The use of AR in educational settings in general, and in foreign language teaching in particular, is connected to numerous challenges, such as:
  • new technology with few best practices existing in the teaching community
  • lack of applications and foreign-language authoring tools that allow non-technical users to create materials
  • foreign language teachers are generally not familiar with AR-specific instructional and learning design mechanisms
  • AR applications usually do not consider learning theories in their implementation 
  • lack of freely available authoring tools that allow full access to resources and tools needed for building AR environments, especially in the field of foreign language learning and teaching
  • lack of availability of AR materials in a range of languages, including less commonly taught languages


Objectives

ARIDLL aspires to fill the identified gaps by providing pre-service and in-service language teachers with support and materials to facilitate the use of AR in their practice. The project will develop these materials and evaluate them in different contexts, from schools to universities, focusing on different languages.

Therefore, ARIDLL focuses on:

  • enabling effective language teaching with AR
  • facilitating capacity building among language teachers to become professional AR users and creators of AR educational materials
  • improving the quality of language education across Europe


Motivation

More than 15 percent of the adult population in Germany develops an anxiety disorder requiring treatment at some point in their lives, be it fear of heights, arachnophobia, or the discomfort of having to speak in front of a large crowd. Anxiety can be treated by means of so-called confrontation (exposure) therapy, but this approach is costly for both patients and therapists and too rarely achieves the hoped-for treatment success. EVElyn is a joint project that uses virtual reality to simulate anxiety-inducing situations in a realistic way. Patients wear virtual reality goggles and can interact with the system using natural movements. This technology considerably reduces the therapy effort and lowers inhibition thresholds.

Objective

In the meantime, the joint project has become a real success story. At the time of the project's launch, the goal of EVElyn was to research and implement a user-centered concept for outpatient psychotherapy. The poor accessibility of modern treatments and the high organizational effort of individual therapy sessions were to become a thing of the past, replaced by virtual confrontation therapy. The goal was to make this innovative treatment approach available to clinical practices in the field. After three years of funding, EVElyn is on its way to realizing these goals. The team is working with collaborative partners on new funding opportunities and further development.



Motivation

Mobility is a crucial factor in our everyday life. Therefore, new systems of human-technology interaction are to be developed that contribute to more safety, comfort and reliability in this area. One promising approach may be partially automated driving, where the vehicle cooperates as much as possible with the person behind the wheel, especially in monotonous or dangerous driving situations.

Objectives

In the KoFFI project, research is being conducted into how a partially automated vehicle can become a cooperative partner. The goal is to enable the driver and the vehicle to recognize critical traffic situations at an early stage and to react together accordingly. To this end, theoretical models are being developed that take into account traffic situations as well as driver and vehicle states. These models enable innovative approaches for intuitive interaction between vehicle and driver. For this purpose, natural language dialogs and intuitive graphical elements are developed and transferred into practice. The system is validated both in a driving simulator and in public road traffic. Furthermore, all development steps are accompanied constructively and permanently from an ethical and legal perspective.

Perspectives

The cooperative and intelligent assistance system KoFFI provides new interaction concepts and technologies that meet the special requirements of partially and highly automated driving, especially with regard to acceptance and reliability of automated vehicles.



Motivation

Service applications in the mobile domain can be placed on the market quickly through minimalist development of the feature set, but this requires an approach to quickly collect, evaluate, and process user feedback based on user behavior and experience. Opti4Apps strives for a quality assurance approach that enables and extends the benefits of minimalist development through semi-automated information extraction from user feedback and a combined inspection and testing methodology.



VR Experimenta Project

This project was built for and in collaboration with Experimenta, Germany’s largest science center. The goal was to build an application for a design exhibition. This included cooperating with a construction team to plan and build a wooden booth with mounting points and inlets for the hardware.

The finished booth featured an outward-facing screen showing the current player's vision or, when idle, a preview screen of the application to draw people in. A small tablet set into the entrance wall continually played an instruction video on how to use the application. Once inside the booth and instructed on how to wear the headset, the player was placed inside a virtual clothing store with customers approaching them. Each customer asked for clothing recommendations for the player to pick up, while the player had a selection of clothes to choose from. Different nudging effects tried to guide the player toward certain clothing articles over others; these effects were observed, and every decision the player made was saved to a database.

After serving three customers, the player was asked to put down the headset and head for the exit. Near the exit, a feedback tablet let players complete a questionnaire, rate the application, and select a different nudging effect for the people coming after them. This selection was transmitted to the computer so the application could display it for the next person choosing clothes.

The data collected from the participants can then be used in studies exploring the impact of these nudging effects.

Motivation

INVEST-PRO³ (INternationally Networked through the Development of Joint STudy Programs with a High PRaxis Orientation) is divided into three subprojects. The overarching goals of the project are the expansion of international study formats, the strengthening of networks with international (partner) universities and companies, and the intensification of international cooperation in the areas of research and teaching.

To this end, further practice-oriented double-degree programs are being developed (subproject 1), career support for international students is being expanded (subproject 2), and research collaborations are being strengthened (subproject 3).

Subproject 3: Strengthening international research cooperation

The topic "Augmented and Virtual Reality Training of Workers in Industrial Environments" directly addresses the challenges that companies face in the face of increasing digitalization both in Germany and in Vietnam and thus trains project participants in a promising field of work.

In the HHN-INVEST-PRO³ project, an application-oriented, interdisciplinary research project is being conducted with the Vietnamese-German University. Together they are working on the further development of a virtual reality vehicle simulator based on Unity 3D.

In addition to scientific staff, master's and doctoral students from VGU and HHN are also supported by scholarships to carry out research work, project work and final theses.

Goal - Subproject 3

The main goal of subproject 3 is to further develop the research collaboration between HHN and VGU in the field of innovative interactive technologies.

Both project sites in Germany and in Vietnam are working together on the development of a virtual reality driving simulator. In addition, the focus is on the conception and implementation of joint research experiments in the areas of "Virtual Reality Driving Simulation" and "Human-Machine Interaction".

Motivation

The current generation of dedicated Mixed Reality (MR) devices can be considered the first generation that is truly mobile while also being capable of sufficient tracking and rendering. These improvements offer new opportunities for the on-set use of MR devices, enabling new ways of using MR. However, these new use cases raise challenges for the design and orchestration of MR applications, as well as questions about how these new technologies influence their fields of application. Here we present MR On-SeT, an MR occupational health and safety training application, which is based on the experiences of an operational division of a world-wide operating German company.

The intended purpose of MR On-SeT is to increase employees’ awareness of potential hazards at industrial workplaces by using it in occupational health and safety training sessions. Since the application is used at various locations throughout the company’s world-wide subsidiaries, we were able to evaluate it through an expert survey with the occupational health and safety managers of seven plants in France, Germany, Japan, and Romania, who reported the condensed experience of around 540 training sessions collected within three months. The purpose of the evaluation was twofold: first, to understand their perceived attitudes towards the application in use, and second, to collect the feedback they received from participants in the training sessions. The results suggest that MR On-SeT can be used to extend current, predominantly theoretical methods of teaching occupational health and safety at work, and that it also motivates experienced employees to actively engage in the training sessions. Based on the findings, several design implications are proposed.

"Prospering Life" was the motto of the Bundesgartenschau (BUGA), which opened on April 17, 2019, in Heilbronn. Within its grounds, the BUGA offers many exciting projects, and promotions are taking place throughout the entire region.

For the garden show, UniTyLab, together with iPOL GmbH, developed the mobile application BuGAR, which uses augmented reality to bring parts of the garden show grounds onto a smartphone. More precisely, the app is a kind of knowledge rally in which the user receives information about the BUGA through small game applications or experiences this knowledge through tangible exhibits.

Minigame features can only be activated at specific locations on the garden show grounds. Via a push notification, the user receives a message that, for example, upon entering the area around the wooden skyscraper, the memory minigame can now be started.
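As a minimal sketch of how such location-based activation can work, the following Python snippet compares the device position against predefined trigger zones and reports which minigame may be started. The coordinates, radius, and names are illustrative assumptions, not the app's actual values.

    import math

    # Illustrative trigger zones on the BUGA grounds (coordinates are made up)
    ZONES = {
        "memory_minigame": {"lat": 49.1419, "lon": 9.2110, "radius_m": 50},
    }

    def distance_m(lat1, lon1, lat2, lon2):
        # Haversine distance between two WGS84 coordinates, in meters
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def zones_entered(lat, lon):
        # Return the minigames whose trigger zone contains the given position
        return [name for name, zone in ZONES.items()
                if distance_m(lat, lon, zone["lat"], zone["lon"]) <= zone["radius_m"]]

    print(zones_entered(49.1420, 9.2111))  # ['memory_minigame'] when inside the zone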

The app has been developed together with our partner SevenD. More information can be found here: https://sevend.de/blog/portfolio/impact-karl/

Development of a Software Tool for automatically checking user Interfaces against usability-related standards and guidelines (DeSTIny)

The development of user interfaces is an expensive and time-consuming task. To nevertheless make software usable, various standards, guidelines, and style guides exist. However, many developers are not familiar with these documents. Since fixing bugs in late project phases is more expensive than in early ones, it is useful to check the user interface for possible defects at an early stage. The goal of the project is to develop a tool that checks user interfaces against the given relevant documents.

Nowadays, many people suffer from allergies. Many substances in cosmetic products (e.g. in shampoos, body lotions, sun creams, etc.) can trigger allergic reactions. Although all ingredients are printed on the packaging, it is still difficult for consumers to identify the substances that trigger their allergic reaction.

In the GlassAllergy project, the Google Glass camera is used to scan the EAN barcode on a product and validate the substances it contains against an allergy profile the consumer has stored online. The translation of the substances into their ingredients is derived from medical semantic networks, and a web service is used to check the substances against the consumer's possible allergies.
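The core matching step can be pictured as a set intersection between the product's resolved ingredients and the consumer's allergy profile. The Python sketch below uses hypothetical stand-ins (lookup_ingredients_by_ean, load_allergy_profile) for the semantic-network lookup and the online profile; it illustrates the idea and is not the project's actual code.

    def lookup_ingredients_by_ean(ean):
        # Hypothetical stand-in for the semantic-network / web-service lookup
        catalog = {"4012345678901": {"sodium laureth sulfate", "limonene", "parabens"}}
        return catalog.get(ean, set())

    def load_allergy_profile(user_id):
        # Hypothetical stand-in for the consumer's online allergy profile
        return {"limonene", "linalool"}

    def check_product(ean, user_id):
        # Return the substances in the scanned product that match the profile
        ingredients = lookup_ingredients_by_ean(ean)
        allergens = load_allergy_profile(user_id)
        return sorted(ingredients & allergens)

    print(check_product("4012345678901", "user42"))  # ['limonene']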

Introduction

The aim of the LeapPhysio project is to support the physiotherapist during the treatment of hand injuries. Patients can practice the preconfigured and individually adjusted exercises independently, e.g., at home, which efficiently supports the healing of the patient’s injuries.

Leap Motion

The Leap Motion works with two cameras (CMOS sensors) and three LEDs emitting infrared light. The CMOS sensors translate the reflected light falling on them into voltage, so that the position of the hand can be determined with the aid of stereoscopic images. This is done with an accuracy of up to 1/100 mm, provided that the hand is within the coordinate system of the controller (25-600 mm above it).
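The stereoscopic principle behind this position estimation can be reduced to the textbook triangulation formula Z = f * b / d: two cameras a baseline b apart see the same point at a pixel offset (disparity) d, from which the depth Z follows. The Python sketch below shows this general principle with illustrative numbers; it is not the Leap Motion's actual calibration or algorithm.

    def depth_from_disparity(f_px, baseline_m, disparity_px):
        # Classic stereo triangulation: Z = f * b / d
        #   f_px         focal length in pixels
        #   baseline_m   distance between the two cameras in meters
        #   disparity_px horizontal offset of the same point in both images
        if disparity_px <= 0:
            raise ValueError("point must be visible in both images")
        return f_px * baseline_m / disparity_px

    # Illustrative values only (not the Leap Motion's real calibration):
    print(depth_from_disparity(f_px=700, baseline_m=0.04, disparity_px=140))  # 0.2 m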

Leap Physio

The project was carried out in close collaboration with a physiotherapist (Praxis Lebenswert, Christiane Pasker, Schwaigern). Five hand exercises were defined and implemented with the Leap Motion. These exercises aim to make everyday movements possible again after hand injuries. Since the patient's range of motion may be very limited initially, it was important to be able to individually set a timeout after which the exercise is terminated, even if the patient did not complete the whole exercise. To guide the patient as much as possible during the exercise, the graphical user interface was kept as simple as possible (see Figure 2). The patient finds information about the progress of his exercises, such as the number of repetitions he (still) has to do, the training set itself, and the countdown until the timeout of the exercise. On the right side, he has both a textual description with a picture and a narrated video tutorial. Of course, the countdown timer stops during the video tutorial.
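A minimal sketch of the repetition-and-timeout logic described above, assuming a hypothetical detect_repetition callback that the hand-tracking layer would provide (the real application's structure may differ):

    import time

    def run_exercise(target_reps, timeout_s, detect_repetition):
        # Count repetitions until the target or the configured timeout is reached.
        # detect_repetition is a hypothetical callable from the tracking layer;
        # it blocks briefly and returns True when one repetition has completed.
        reps = 0
        deadline = time.monotonic() + timeout_s
        while reps < target_reps and time.monotonic() < deadline:
            if detect_repetition():
                reps += 1
                print(f"repetition {reps}/{target_reps}")
        return {"completed": reps >= target_reps, "repetitions": reps}

    # Example with a dummy detector that "finds" a repetition every 0.5 seconds
    result = run_exercise(5, timeout_s=10, detect_repetition=lambda: time.sleep(0.5) or True)
    print(result)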

Introduction 

A common approach to treating acrophobia is exposure therapy. Such therapy can be supported and improved by new technologies; head-mounted displays (HMDs) are particularly suitable. Therefore, a scene was developed that enables therapy in virtual reality.

Technical implementation 

The first step was a thorough evaluation of the hardware. The Oculus Rift in its first developer version was used as the HMD; it has a field of view (FoV) of 110° diagonal and an integrated head-tracking system. The development phase started with the preparation of a realistic scene. Several game engines were evaluated against defined criteria, with the result that Valve’s Source Engine fit the needs best. Furthermore, it is possible to install Garry’s Mod, which adds a sandbox mode that makes it convenient to place items in the game scene. A level of Half-Life 2 was then modified into a realistic scene for the therapy: a street was chosen, all gaming elements were removed, and a lattice balcony was modeled onto one of the houses. Viewed through the Oculus Rift, the scene was quite realistic. The figure could be moved by keyboard, and the Oculus Rift's head tracking was mapped accurately to the field of view. At that point, the idea of body tracking came up, which would make it possible to show the patient's arms and legs. Since the native version of Half-Life 2 does not provide this, the Microsoft Kinect 1 was used; its integration was realized with Lua scripts. The inclusion of the Microsoft Kinect noticeably improved the immersion. The patient now has the opportunity to hold his legs over a bottomless pit (figure 1).

Outlook 


The results have not yet been evaluated with real patients. For further work, it would be necessary to evaluate the system with real patients; to this end, the University of Regensburg was contacted. It would also be interesting to integrate the Virtuix Omni, so that the patient could walk as in real life.

In this project, together with Harman Becker Automotive Systems GmbH, we show how the consumer market, e.g. the smartphone sector, influences the automotive area and what the smartphone-oriented generation expects from today’s infotainment solutions. According to recent studies from J.D. Power, smartphone usage increased from approximately 15 percent to 70 percent between 2007 and 2014. Nearly everybody is therefore used to having a mobile device at all times and in all places.

Automakers are aware of this trend and offer users the possibility to connect their smartphone with their car. Users can then access their contacts, music, and other functions provided by the smartphone. However, the interface often differs greatly from what people know from their smartphone usage. Thus, there are approaches to integrate existing solutions from the smartphone market into the car to meet the needs of the smartphone-oriented generation. Automakers can also profit from smartphone integration, since smartphones are always up to date and the HMI therefore keeps pace in the field of infotainment.

To better understand what is really important for the “connected” generation, we conducted an online survey with questions regarding the usage of smartphones and vehicle-integrated functions. The results of this survey were compared with the current HMI solutions of four selected cars to review existing solutions and to find a new way of designing the HMI of a car, which we implemented as a proof of concept.

The proof of concept, which aims to meet the needs of the smartphone-oriented generation, focuses on reducing redundant hardware and software as well as on a driver-centered design with smooth smartphone integration and a fresh style.

Introduction 

The aim of this work is to rebuild the Meta1 for eLearning purposes. Based on the Epson Moverio BT-100 video glasses released in 2011, which run Android 2.2, and a camera capable of detecting objects, an exemplary implementation of the ”Physical Exam” screen of the CAMPUS Card Player, a medical eLearning system, was realised. Furthermore, the whole system follows the hardware selection of the Meta1 but uses open-source frameworks where possible. The setup is highly adaptable to other use cases.

Graphical User Interface 

The GUI is geared to the “Physical Exam” card of the CAMPUS card player. The mock-up of the GUI depicts the intended functionality of the system. On the left side, the patient puppet and the interaction chooser (blue frame) are displayed; on the right side, the results are listed (yellow frame) and, to make optimal use of the space, the tool bar (red frame) is also located on this side of the screen.

3D to 2D Transformation 

As the selected SoftKinetic DS325 3D camera returns three-dimensional coordinates of the detected parts of the hand, a mapping into two-dimensional space has to be performed. The underlying idea is that a reference plane E_R is introduced and each three-dimensional point P is mapped onto it. Once every detected point has been mapped, the third dimension can be neglected.
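A minimal sketch of such a mapping, assuming an orthogonal projection onto a plane given by a point and a unit normal (the project's actual mapping may differ in detail):

    def project_to_plane(p, plane_point, normal):
        # Orthogonally project the 3D point p onto the plane (plane_point, normal).
        # normal must be a unit vector; the component of (p - plane_point) along
        # the normal is removed, so the result lies on the plane and the two
        # remaining in-plane coordinates can serve as 2D coordinates.
        d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, normal))
        return tuple(pi - d * ni for pi, ni in zip(p, normal))

    # Example: project onto the z = 0 plane, then drop the third dimension
    p2d = project_to_plane((0.12, 0.30, 0.25), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))[:2]
    print(p2d)  # (0.12, 0.3)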

Introduction

To treat a patient, surgeons today have to work intraoperatively with an increasing number of computer-based medical systems. The main challenge here is that many peripheral devices are difficult to sterilize. As part of the OsiriX in Motion (OiM) project, a plug-in was developed for contact-free interaction in the operating room. OiM enables one-handed interaction to control OsiriX, a Mac OS X-based software for displaying and processing radiological DICOM image data (including CT, MRI, and X-ray).

Methods

The Leap Motion Controller (LMC), which transmits finger movements at a rate of approximately 200 frames per second and with a precision of 0.01 mm, was used to capture the gestures. The data is evaluated on the connected PC. Movements are recognized within the interaction box, an inverted pyramid spanning the range of 2.5-60 cm above the controller.

Previously published implementations for gesture control of the DICOM viewer OsiriX use stereo-camera-based methods or time-of-flight cameras such as Microsoft's Kinect to recognize gestures. The solution presented here, based on the LMC, recognizes gestures with the help of infrared technology, through which a significantly higher precision can be achieved. The icons of the respective functions were designed according to the recommendations of the Integrating the Healthcare Enterprise (IHE) profile "Basic Image Review". The functions are arranged in a circular menu that opens in the center of the screen, allowing simple and quick selection of the desired interaction mode. The selection is made depending on the direction of the extended index finger. Owing to the higher aiming accuracy required, the selection of a function is confirmed by "wait to click" (dwell) instead of a tap. OsiriX in Motion offers five functions in the current version: layer selection, translation, contrast adjustment, rotation, and magnification.
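A minimal sketch of this sector-plus-dwell selection in Python; the angles, dwell time, polling rate, and function order are illustrative assumptions, not the plug-in's exact parameters:

    import math
    import time

    FUNCTIONS = ["layer selection", "translation", "contrast", "rotation", "magnification"]

    def sector_for_direction(dx, dy, n=len(FUNCTIONS)):
        # Map the 2D pointing direction of the index finger to one of n menu sectors
        angle = math.atan2(dy, dx) % (2 * math.pi)
        return int(angle / (2 * math.pi / n))

    def dwell_select(read_direction, dwell_s=1.0, poll_s=0.05):
        # Confirm a function once the finger stays in one sector for dwell_s seconds
        current, since = None, time.monotonic()
        while True:
            sector = sector_for_direction(*read_direction())
            if sector != current:
                current, since = sector, time.monotonic()  # sector changed: restart dwell
            elif time.monotonic() - since >= dwell_s:
                return FUNCTIONS[current]                   # "wait to click" confirmed
            time.sleep(poll_s)

    # Example with a fixed pointing direction (up and to the right)
    print(dwell_select(lambda: (1.0, 1.0)))  # -> 'layer selection'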

Discussion and outlook

In contrast to previously published solutions based on two-handed interactions, OiM was designed exclusively for operation with one-handed gestures, since surgeons need both hands for the operation whenever possible. In addition, complex gestures (e.g., the tilt angle of the hand) are also supported. The functional scope of solutions published to date is limited; nevertheless, these systems use a large number of different gestures, which makes their learning phase very time-consuming. With OiM, the gesture vocabulary is relatively small, since the gestures used follow a uniform concept and are thus easier to learn. In order to test OiM in practical use, further evaluation under real conditions, for example in the operating room, is necessary. Since the plug-in currently only supports the 2D viewer, it is conceivable to further develop OsiriX in Motion for 3D and 4D support.

For further development, the code will be made freely available on GitHub.