The traditional route to knowledge is to read a book from a library. We’re investigating how we can go beyond this and embed knowledge directly into the perception of the user, right where action happens and performance is required.
Wearables thereby act as gateways, mediating between objective reality and its enhancement with visual, auditory, haptic, and other overlays. When done well, they help turn sensorimotor perception into experience.
This requires two types of world knowledge: data about the workplace and data about the activity pursued. While the former is rather stable, the latter is dynamic and changes much more rapidly. We’re researching both representation and implementation, working on standards as well as development toolkits and frameworks.
Last week in Tallinn, Alla Vovk presented the paper “Affordances for Capturing and Re-enacting Expert Performance with Wearables” at the 12th European Conference on Technology Enhanced Learning (EC-TEL 2017), written by Will Guest, Fridolin Wild, Alla Vovk, Mikhail Fominykh, Bibeg Limbu, Roland Klemke, Puneet Sharma, Carl H Smith, Jazz Rasool, Soyeb Aswat, Kaj Helin, Daniele Di Mitri, and Jan Schneider. You can watch the presentation in 360° with a Q&A session.
The WEKIT.one prototype is a platform for immersive procedural training with wearable sensors and Augmented Reality. Focusing on capture and re-enactment of human expertise, this work looks at the unique affordances of suitable hard- and software technologies. The practical challenges of interpreting expertise, using suitable sensors for its capture, and specifying the means to describe it and display it to the novice are of central significance here. We link affordances with hardware devices, discussing their alternatives, including the Microsoft HoloLens, Thalmic Labs MYO, Alex posture sensor, MyndPlay EEG headband, and a heart-rate sensor. Following the selection of sensors, we describe integration and communication requirements for the prototype. We close with thoughts on the wider possibilities for implementation and next steps.
Interactive designer, motion designer, musical composer. After studying the history of cinema at La Sorbonne (Paris) and editing and audio-visual post-production in Cannes, he settled in Brussels in 2006, where he developed his activities as a motion designer and VFX artist. He works for the film industry (Mood Indigo by Michel Gondry, The Brand New Testament by Jaco Van Dormael), music videos (Puggy, Sacha Toorop), documentaries for Arte, and TV shows for France Télévisions. He occasionally trains professionals and students at digital creation workshops (School Arts2 Mons, EMMD Motion Design Brussels). Between 2012 and 2017, he co-directed the virtual reality performance IMMERSIO, a mix of live music and digital arts played at a rich set of venues (SAT Montreal, ADAF Athens, SignalOFF Prague, Wisp Festival Leipzig, Bozar and Halles de Schaerbeek Brussels).
Scenographer, visual art designer. She studied scenography at La Cambre (Brussels) and visual arts at ISPG Brussels. Her work spans scenography, visual installations, and the organisation of workshops with children and adults.
Industry 4.0 is on the rise, and this coordinated push for automation, big data, and the internet of things in the smart factory is already causing disruption in the job market (and will continue to do so). New skills for ‘new collar’ jobs are needed, and intelligent assistance systems with Augmented Reality, smart glasses, and other forms of wearable computing may help to deliver them.
In this talk, Dr. Wild introduced the concept of Performance Augmentation and illustrated how future challenges can be met, drawing on several examples of intelligent training and live-guidance applications in aircraft maintenance, space assembly, and medical diagnostics.
In this master project, Sophie Kirkham is developing a proof of concept prototype for providing real-time auditory feedback to end-users on their arm movement during simple motor tasks, an application that could be used, for example, in rehabilitation of stroke patients who have to regain motor control of their limbs. This is how she explains the work:
“Research using augmented reality and virtual reality as a source of visual feedback in rehabilitation and motor learning has been yielding positive outcomes across the computing and medical fields. AR and VR can increase motivation and stimulation whilst performing the repetitive tasks required for motor development. Little research, however, has been done so far on integrating auditory feedback within these systems, despite its capacity to represent spatio-temporal information and to motivate, encourage, and reduce stress. In this project, I am designing, developing and evaluating a proof-of-concept real-time auditory biofeedback system that maps EMG and IMU data from arm movements during simple motor tasks to auditory parameters.” (Sophie Kirkham)
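The mapping from movement data to auditory parameters can be sketched as a simple sonification function. The following is an illustrative sketch only, not the project’s actual code: the sensor names, value ranges, and pitch span are assumptions chosen for demonstration.

```python
# Hypothetical sonification mapping for auditory biofeedback:
# an IMU joint angle drives pitch, a normalized EMG amplitude drives volume.

def map_to_audio(elbow_angle_deg: float, emg_amplitude: float) -> dict:
    """Map an elbow angle (0-180 degrees, from an IMU) and a normalized
    EMG amplitude (0-1) to pitch (Hz) and volume (0-1) for sonification."""
    # Clamp inputs to their expected ranges to tolerate sensor noise.
    angle = max(0.0, min(180.0, elbow_angle_deg))
    emg = max(0.0, min(1.0, emg_amplitude))
    # Greater extension -> higher pitch, spanning roughly 220-880 Hz (A3-A5).
    pitch_hz = 220.0 + (angle / 180.0) * (880.0 - 220.0)
    # Stronger muscle activation -> louder tone.
    return {"pitch_hz": pitch_hz, "volume": emg}
```

In a real-time system, a function like this would be called once per sensor frame and its output fed to a synthesizer, so that the patient hears their movement quality continuously as they perform the task.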
Dr Fridolin Wild won the best paper award for his first-authored paper “Technology Acceptance of Augmented Reality and Wearable Technologies” at the Immersive Learning Research Network annual conference, which this year took place in Coimbra, Portugal, from June 26 to 29:
Wild F., Klemke R., Lefrere P., Fominykh M., Kuula T. (2017): Technology Acceptance of Augmented Reality and Wearable Technologies, In: Beck D. et al. (eds): Immersive Learning Research Network (iLRN 2017), Communications in Computer and Information Science, Vol 725. Springer, Cham
The paper develops a new method to measure technology acceptance of AR and wearable technology in a workplace context. From the abstract:
“Augmented Reality and Wearables are the recent media and computing technologies, similar, but different from established technologies, even mobile computing and virtual reality. Numerous proposals for measuring technology acceptance exist, but have not been applied, nor fine-tuned to such new technology so far. Within this contribution, we enhance these existing instruments with the special needs required for measuring technology acceptance of Augmented Reality and Wearable Technologies and we validate the new instrument with participants from three pilot areas in industry, namely aviation, medicine, and space. Findings of such baseline indicate that respondents in these pilot areas generally enjoy and look forward to using these technologies, for being intuitive and easy to learn to use. The respondents currently do not receive much support, but like working with them without feeling addicted. The technologies are still seen as forerunner tools, with some fear of problems of integration with existing systems or vendor-lock. Privacy and security aspects surprisingly seem not to matter, possibly overshadowed by expected productivity increase, increase in precision, and better feedback on task completion. More participants have experience with AR than not, but only few on a regular basis.”
In the picture: co-author Mikhail Fominykh receiving the award from conference chair Jonathan Richter.
A trans-European team of researchers is transforming industrial learning and training with the use of innovative Augmented Reality and Wearable Technology (AR/WT).
Wearable Experience for Knowledge Intensive Training (WEKIT) is a EUR 2.7 million research and innovation project funded by Horizon 2020 to develop and test, within three years, a novel way of training using smart wearable technology.
Dr Fridolin Wild, Senior Research Fellow at Oxford Brookes University and Principal Investigator and Scientific Director of the WEKIT Project said: “In the modern world, there tends to be a concern that technology is developing so rapidly that it will replace humans in the workplace.”
“WEKIT demonstrates that cutting-edge technology can actually help humans become better at work, quicker, less error-prone, more engaged and healthier.”
Using AR, WEKIT effectively brings textbooks to life, overlaying digital visual and audio information on the physical environment, for example in the form of animations. The WEKIT.one soft- and hardware system shows the trainee what to do through the eyes of the expert, allowing the trainee to learn by experience rather than simply reading about it or watching a video tutorial. It also allows an expert to create instructions easily – by capturing performance using WT.
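A captured expert performance of this kind can be thought of as an ordered list of annotated steps, each anchored in the workplace. The sketch below is a hypothetical illustration of such a structure; the field names are assumptions for demonstration, not WEKIT’s actual data model.

```python
# Illustrative representation of captured expert steps for AR re-enactment.
from dataclasses import dataclass, field


@dataclass
class CapturedStep:
    index: int                    # order of the step within the procedure
    instruction: str              # expert's annotation or think-aloud transcript
    anchor_xyz: list              # where the hologram is placed in workplace space
    media: list = field(default_factory=list)  # e.g. video clips, snapshots


# A two-step procedure as the trainee's application might replay it:
steps = [
    CapturedStep(1, "Open the inspection panel", [0.4, 1.2, 0.9]),
    CapturedStep(2, "Check the hydraulic line", [0.6, 1.1, 0.7], ["clip_02.mp4"]),
]
```

During re-enactment, the trainee’s headset would iterate over such a list, placing each instruction hologram at its anchor and advancing once the step is confirmed complete.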
Dr Wild continued: “Using augmented reality as a medium for learning and work is a powerful tool, particularly in high-skill settings which require the teaching or re-teaching of complex manufacturing and engineering tasks after ‘Industry 4.0’.”
“Crucially, it also has the potential to have a positive impact on the time and costs of training large numbers of people.”
The new WEKIT.one AR system, involving the HoloLens and other wearable devices, was recently put into action for the first time when it was tested with 142 experts and trainees at three separate organisations: in Tromsø, halfway to the North Pole inside the Arctic Circle, and in Turin and Genoa, Italy.
Throughout the trials, specially developed applications for both the experts and novices were used and feedback was collected to assess the suitability and acceptance of the system in three distinct scenarios:
- Medics and engineers in the Arctic town of Tromsø performed equipment checks on aircraft used for emergency response in the region. More than 50 students donned the HoloLens and used the WEKIT training application to carry out the checks. The system walked them through the air ambulance with the aid of holograms and audio instructions and gave them real-time feedback on their progress.
- At ALTEC (a service provider for the Italian Space Agency) WEKIT tested a procedure for setting up stowage racks for use by astronauts on the International Space Station. Trainees were tracked as they installed the equipment, monitoring their efficiency on every step as well as their heart-rate variability.
- With the help of radiologists in Genoa and EBIT, a medical software company, a number of medical students were trained to assess the blood flow in the carotid artery on an unfamiliar ultrasound machine. This tricky procedure involves following instructions (laid out in 3D) whilst maintaining control of both an ultrasound probe and a patient (in this case, an actor). A holographic tutor delivers the recorded think-aloud explanation of the expert, while instructional holograms, floating videos, and snapshots of the to-be state guide the trainee step by step through the procedure. Tested by medical and engineering students, this trial provided in-depth feedback on the subtleties of using AR for complicated, interactive procedures.
The trainees and experts involved in the trial evaluated specific features of the prototype and the training approach, technology acceptance, system usability, user satisfaction of training with AR glasses and human-computer interface, and simulation sickness.
More information about WEKIT: http://www.wekit.eu
For more information, please contact: Natalie Gidley, Communications Officer (Media Relations) at Oxford Brookes University on 01865 484452 or email@example.com.
Notes to Editors
- Images and video footage taken during the WEKIT trials are available. Please contact the Oxford Brookes University press office.
- The scientific coordination of WEKIT is at Oxford Brookes University in the UK; the administrative coordination lies with the Italian IT company GFT. WEKIT brings together four further academic partners: Ravensbourne (UK), the University of Tromsø (Norway), the Open University (The Netherlands), and RWTH Aachen (Germany).
- The research centres at Oxford Brookes University (UK), Open University of the Netherlands (NL), VTT (Finland), and the high-tech SME MyndPlay (UK) are leading the development of the key components of the platform.
- Three industry partners – Norway-based Lufttransport as well as EBIT and ALTEC from Italy – are leading evaluation cases to test the WEKIT training methodology and technological platform in real practical settings.
- Set in a historic student city, Oxford Brookes is one of the UK’s leading universities and enjoys an international reputation for teaching excellence and innovation as well as strong links with business and industry. More information is available on the Oxford Brookes website at www.brookes.ac.uk
Perspectives on Wearable Enhanced Learning: Current Trends, Research and Practice
An edited volume by Ilona Buchem, Ralf Klamma, Fridolin Wild
to be published by Springer, New York
Springer website: http://www.springer.com
Dedicated website: http://ea-tel.eu/special-interest-groups/well
EasyChair submission: https://easychair.org/conferences/?conf=wellspringer2018
Wearable technologies – such as smart glasses, smart watches, smart objects, or smart garments – are potential game-changers, breaking ground, and offering new opportunities for learning. These devices are body-worn, equipped with sensors, and integrate ergonomically into everyday activities. With wearable technologies forging new human-computer relations, it is essential to look beyond the current perspective of how technologies may be used to enhance learning.
This edited volume, “Perspectives on Wearable Enhanced Learning”, aims to take a multidisciplinary view on wearable enhanced learning and provide a comprehensive overview of current trends, research, and practice in diverse learning contexts, including school and work-based learning, higher education, professional development, vocational training, health and healthy aging programs, smart and open learning, and work. The volume will feature the current state of the art in wearable enhanced learning and explore how wearable technologies begin to mark the transition from the desktop through the mobile to the age of wearable, ubiquitous technology-enhanced learning.
The edited volume is divided into seven parts:
Part I The Evolution and Ecology of Wearable Enhanced Learning
This part includes chapters describing the evolution of technology-enhanced learning from the desktop to the wearable era and the different phases in the evolution of technologies for learning, introducing the technological and conceptual shifts from e-learning through m-learning to ubiquitous learning. This part introduces the reader to the topic and provides both a historical perspective and a conceptual framework for a socio-cultural ecology of learning with wearables.
Part II The Topography of Wearable Enhanced Learning
This part includes chapters giving an overview of current trends and uses of wearable enhanced learning, including examples of projects, use cases, and case studies. This part provides an overview of real-life examples and aims at illustrating the breadth of uses of wearable technologies for learning in different application contexts, such as education, work, health, and open learning.
Part III Technological Frameworks, Development and Implementation
This part includes chapters providing insight into different technological aspects of wearable enhanced learning focusing both on the hardware and the software. This part also gives an overview of different development and implementation methodologies applied in wearable enhanced learning.
Part IV Pedagogical Frameworks and Didactic Considerations
This part includes chapters providing insight into different pedagogical frameworks and didactic/instructional design approaches applied in wearable enhanced learning. This part also discusses pedagogical affordances of wearables as technologies for learning and the consequences for a didactically sound design and integration of wearables in learning settings/environments.
Part V Design of User Experience
This part includes chapters providing insight into different aspects of user experience design including approaches for enhancing user engagement such as gamification and information visualisation as well as human-computer interaction and interface design. This part also discusses how current insights from research and development in wearable computing, which represents the forefront of HCI innovation, may be applied to designing user experience in learning settings.
Part VI Research and Data
This part includes chapters providing an overview of current empirical research results in wearable enhanced learning, touching upon different dimensions of learning, including cognitive, social, and embodied dimensions. This part also discusses how data can be gathered and exploited in wearable enhanced learning, covering topics such as wearable learning analytics, turning data into information, and data-driven approaches to enhancing learning.
Part VII Synopsis and Prognosis
The final part includes a chapter providing a synopsis and a prognosis for the future development in the field of wearable enhanced learning.
Call for Chapters
Prospective authors (co-authors are welcome) are invited to submit a chapter proposal via EasyChair (https://easychair.org/conferences/?conf=wellspringer2018) in the form of an abstract (max. 300 words) with the title, names of authors, five keywords, and the part of the book for the contribution, no later than 30 September 2017. Chapter proposals should describe previously unpublished work.
Upon acceptance of the chapter proposal and notification of authors by 20 October 2017, the final chapter should be completed not later than 01 February 2018.
Contributions will be double-blind reviewed and returned with comments by 31 March 2018. Finalised chapters are due no later than 30 April 2018. The final contributions should not exceed 20 manuscript pages. Guidelines for preparing your chapter will be sent to you upon acceptance of your proposal.
The following represents a timeline for completing this volume:
- 20 June 2017: Call for Chapters open
- 30 September 2017: Abstracts due (title, authors, abstract, keywords & book part)
- 20 October 2017: Notification and additional information for authors and templates
- 01 February 2018: Chapters due (according to the template)
- 31 March 2018: Chapters returned with reviewers’ comments
- 30 April 2018: Final chapters due (ready for publication)
- 31 May 2018: Book manuscript delivered to Springer
Inquiries and Submissions
Please forward your inquiries to:
The Editors: Ilona Buchem, Ralf Klamma and Fridolin Wild
Twitter: @mediendidaktik @klamma @fwild
Please submit your proposal via EasyChair: https://easychair.org/conferences/?conf=wellspringer2018
Dr Fridolin Wild from the Performance Augmentation Lab gave a ring lecture at the postgraduate school ‘Work 4.0’ of North Rhine-Westphalia, Germany, focusing on how Augmented Reality and wearable solutions can be used to support workers in manufacturing in up-skilling on the job.
The postgraduate school is jointly organised by the universities of Bielefeld and Paderborn and funded by the German Research Foundation (DFG) as an interdisciplinary PhD-level research training group. The school focuses on the development of flexible work environments relevant to the human-centred use of cyber-physical systems, bringing together researchers from various disciplines, including computer science, psychology, business administration, sociology, mechatronics, and engineering.
The lecture introduced the concept of human ‘performance augmentation’, moving from perception and experience to means of implementing gateway technologies, and then to demonstrations and evaluation results from pilot trials in WEKIT and related projects.
1st International Workshop on
Mixed and Augmented Reality Experience Capture (XCAP’17)
at the 16th IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
When the user of a Mixed and Augmented Reality (MAR) device is in an experience, they perceive the real world together with synchronized digital assets. The user’s senses continuously receive visual and/or audio signals from the combination of the real world and digital assets in real time.
As Mixed and Augmented Reality experiences become more common, there will be a natural need on the part of developers, users, employers, and sponsors to capture and, in some circumstances, to share or publish accurate archives of MAR experiences. For example:
- Workplace-based inspections could be captured, particularly those needed to certify that a process was performed in compliance with corporate or regulatory policies.
- A public safety officer using MAR to aid in posing questions when interviewing a suspect or witness may need to review the interview archive at a later time.
- Those seeking to capture tribal knowledge or best practices may also use an MAR experience capture system for sharing with novice users or students in the context of skills training or performance enhancement.
- The players of a MAR game may want to record, review or share their best moves.
The captured experience file will show the real world and digital content from the user’s point of view.
To support these capabilities, systems connected to the MAR display and sensors, as well as a variety of recording and file-storage technologies, are needed. Today, some preliminary tools and architectures have been proposed.
The first tools available are designed to work with the HoloLens, based on prior work published by Microsoft Research (see resources for developers). The field as a whole is very young and requires a great deal of further research.
This workshop focuses on approaches and system architectures for content creation and management. Unlike the digital content perceived by the user within MAR display devices during experience presentation, the content of concern to the participants of this workshop is linear, time-stamped content that can be captured and then played back, reviewed, and otherwise treated as “normal” digital video content at any time in the future. This includes the possibility of multiple points of view (multiple cameras) and multiple microphones. It can also include other metadata about the user’s geospatial position and orientation, hand positions, gaze direction, audio provided to the user, photos taken, or video viewed and captured. It encompasses the media files in any/all formats and associated metadata.
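One way to picture such a time-stamped capture record is as a per-frame metadata entry serialized alongside the media files, for instance one JSON object per line in an archive. The sketch below is purely illustrative; the key names and values are assumptions, not a proposed standard.

```python
# Hypothetical time-stamped metadata record for one frame of a captured
# MAR experience, serialized as JSON for archiving and later review.
import json

record = {
    "timestamp_ms": 1504000000123,          # capture time of this frame
    "geopose": {                            # user's geospatial position/orientation
        "lat": 50.72, "lon": -1.88, "alt": 12.0,
        "orientation_quat": [0.0, 0.0, 0.0, 1.0],
    },
    "gaze_direction": [0.1, -0.2, 0.97],    # unit vector in head space
    "hand_positions": {"left": None, "right": [0.35, 1.05, 0.6]},
    "media": {"video": "cam0.mp4", "audio": "mic0.wav"},  # linked media files
}

line = json.dumps(record)    # one line per frame in a JSONL archive
restored = json.loads(line)  # a review tool can replay the archive frame by frame
```

Keeping the metadata in a plain, line-oriented text format like this would let archives be indexed, searched, and synchronized against the recorded video without a specialised player, which is one of the design questions the workshop topics below touch on.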
The topics and questions on which this workshop will focus include:
- Components and/or systems, and architectures for MAR Experience streaming and capture
- Design, selection and integration of sensors for MAR experience capture
- Local power and processor management during MAR experience capture
- Compression during or following MAR experience capture
- MAR experience capture metadata (e.g., session dates, times, duration, geospatial position and orientation, hand positions, gaze direction, audio provided to the user, photos taken, or video viewed and captured)
- Novel visual interactions with archives of captured MAR experiences
- Network architectures for MAR experience capture and transport
- Components and/or systems for MAR Experience archive storage, replication, management and access
- Benefits and drawbacks of distributed architectures for MAR experience capture and management
- Policies and guidelines for MAR experience capture and management
- Christine Perey, PEREY Research & Consulting, Switzerland
- Fridolin Wild, Oxford Brookes University, UK
- Patrick La Collet, Polytech Nantes/Université de Nantes, France
- Mikhail Fominykh, Independent, Norway
- Kaj Helin, VTT, Finland
- Ralf Klamma, RWTH Aachen, Germany
- Roland Klemke, Open University, Netherlands
- Carl Smith, Ravensbourne University, UK
- Carlo Vizzi, Altec, Italy
- Alla Vovk, Oxford Brookes University, UK
This workshop will feature presentations describing current or past research, design, practice or lessons learned about Mixed and Augmented Reality experience capture and the topics of this workshop.
Authors are invited to submit papers by way of EasyChair (conference ID is XCAP17). Contributions must include at least one paper and can be accompanied by links to downloadable files containing supplementary materials.
Workshop papers should be 2-4 pages in length, submitted in PDF following the ISMAR 2017 guidelines (these may change so check back shortly before submitting your paper) and formatted using the ISMAR 2017 paper templates provided on the conference submissions guideline page.
Submissions should not be anonymized and the author names and affiliations should be displayed on the first page. At least one author of each accepted paper must attend the workshop and register for at least one day of the conference.
All accepted papers will be published in the ISMAR 2017 Adjunct Proceedings and IEEE Xplore.
- Draft paper submission to program committee:
July 3, 2017
- Notification of acceptance and feedback from committee reviewers:
August 7, 2017
- Camera-ready version
August 28, 2017
For our WEKIT augmented reality project, we’re looking for an artist in residence, funded via the VERTIGO project with residency grants of up to EUR 30k. The application deadline is 22 May 2017 (10:00 CET).
With the advent of mass-produced holographic, wearable displays and projection systems, reality has indeed become a medium, enhancing human perception with additional, artificially generated sensory input to create a new experience including, but not restricted to, enhancing human vision by combining natural with digital offers.
While there are plenty of successful examples of putting AR on smart glasses to use, it is far from clear what space of opportunity the new aesthetics offer, nor which design principles drive a satisfying user experience.
An artist in residence would help explore and research this new aesthetic and the design and interaction principles for ‘reality 2.0’.
More specifically, there are two opportunities:
- Using AR on smart glasses as a medium of expression, creating holographic exhibits or performances.
- Reflecting on the holographic experience in other media, making the experience accessible to a wider audience (beyond the ‘audience of one’ of the wearer). Exploring what constitutes the essence of a holographic experience (through translation to other media) would help determine success factors and design principles.