
Augmented Reality Job Performance Aids

The traditional route to knowledge is to read a book from a library. We’re investigating how we can go beyond this and embed knowledge directly into the perception of the user, right where action happens and performance is required.


Wearables act as gateways, mediating between objective reality and its enhancements with visual, auditory, haptic, and other overlays. When done well, they help turn sensorimotor perception into experience.

This requires two types of world knowledge: data about the workplace and data about the activity pursued. While the former is rather stable, the latter is dynamic and changes much more rapidly. We’re researching both representation and implementation, working on standards as well as development toolkits and frameworks.
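As a rough illustration of what these two representations might look like in a development toolkit, here is a minimal Python sketch; the class and field names are our own illustrative choices, not taken from any published standard:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model: the 'workplace' holds the stable spatial
# anchors, while the 'activity' holds the fast-changing task steps.

@dataclass
class Place:
    name: str
    position: tuple          # (x, y, z) anchor in workplace coordinates

@dataclass
class ActionStep:
    instruction: str
    place: Place             # where the overlay should be rendered

@dataclass
class Activity:
    title: str
    steps: list = field(default_factory=list)

bench = Place("workbench", (0.0, 0.9, 1.2))
activity = Activity("Replace filter")
activity.steps.append(ActionStep("Unscrew the housing", bench))

for i, step in enumerate(activity.steps, 1):
    print(f"Step {i} @ {step.place.name}: {step.instruction}")
```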

Towards a ‘Museum of the Invisible’

Yu Huang : 11th December 2018 1:32 pm : Augmented Reality

With Alison Kahn from Stanford University and the Pitt Rivers Museum, we have started engaging with the culture of the mysterious and little-known Naga communities of North East India. If the vision becomes reality, visitors will be able to follow a trail of holograms exhibited among the museum’s gallery collections, witnessing ‘pop-up’ holograms of Naga natives who, through storytelling, convey the meaning of the historical artefacts. Dr Kahn is now going on a field trip to the Naga communities in India to record interviews and artefacts in support of this project.

We aim to apply multiple methods for the 3D reconstruction, including photogrammetry and 3D scanning, in order to create realistic 3D reconstructions of the witnesses and their chosen cultural artefacts for the HoloLens. Photogrammetry is the process in which 3D models are created with the textures of the existing objects by taking many overlapping images from different angles, from which a point cloud is generated. For a purpose like ours, camera angles of 30-60 degrees are usually deemed suitable, with about 50-80 photos per object keeping capture time to a minimum.
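To make such a photo budget concrete, here is a minimal sketch of the arithmetic behind a turntable-style capture plan. The function name, step angle, and ring count are our own illustrative assumptions, not fixed parameters of our workflow:

```python
import math

def photo_budget(step_deg: float, elevation_rings: int, top_shots: int = 0) -> int:
    """Rough photo count for a turntable-style capture: one full
    360-degree orbit per elevation ring, plus optional top-down shots."""
    per_ring = math.ceil(360 / step_deg)
    return per_ring * elevation_rings + top_shots

# For example, a 20-degree step at three elevations plus a few top shots
# lands in the 50-80 photo range mentioned above.
print(photo_budget(step_deg=20, elevation_rings=3, top_shots=5))  # -> 59
```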

Additionally, we intend to use 3D scanning (with an Occipital Structure Sensor) to recreate 3D models directly on an iPad. Compared with optical photogrammetry, this is faster and makes it easier to reconstruct high-resolution meshes.

Considering the final quality, however, each mesh and texture is at risk of stretching and distortion, especially for thin and shiny objects with reflective surfaces. Where this poses a problem, we will have to build the affected parts manually from photographs instead of relying on 3D scans and photogrammetry.


Impact accelerator: Augmented Reality for schools

Fridolin Wild : 30th November 2018 4:32 pm : Augmented Reality

The UK is facing a STEAM skills crisis. These skills are important for a range of industries, from manufacturing to the arts, yet the supply of candidates into engineering (and, more broadly, STEM) occupations is not keeping pace with demand. The use of AR in education can transform the school environment into a more technology-friendly one, with a greater variety of learning opportunities for students. According to researchers, the use of AR can increase participation, understanding, and learning, three key elements of all educational systems’ targets. Since other information technology tools have already been implemented in classrooms, incorporating AR into education can be accomplished comparatively easily, as students are already familiar with handling devices.

Within the internally funded Augmented Reality for STEAM education (AR-STEAM) project, we seek to organize a series of events creating a common space for teachers, policy makers, and technology providers to meet and exchange knowledge and experience in AR for STEAM education, in order to help stimulate wider uptake of this technology. The project runs from 1 January to 31 December 2019.


AGAST Project Reading Machine Workshops at MakerSpace

Eric White : 30th November 2018 3:29 pm : Augmented Reality, User Interface

In 1931, the American Surrealist writer Bob Brown invented a reading machine. The device produced a form of machine-assisted speed reading, in which micrographically-printed text would scroll under a magnification screen in a single, streaming line. Where cinema gave the world the ‘talkies’, Brown offered ‘the readies’, and important writers like Gertrude Stein, Ezra Pound and William Carlos Williams helped him demonstrate its potential. However, Brown’s reading machine never went into production, and even though he is gaining popularity as the ‘godfather of the e-reader’, some people still doubt whether Brown’s machine actually worked.
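For a feel of the mechanism, the short sketch below simulates the ‘readies’ in a terminal: text streams through a fixed-width, single-line window, loosely imitating how Brown’s micro-printed tape moved under the magnifying lens. Window width and speed are arbitrary choices of ours, not Brown’s specifications:

```python
import sys
import time

def readies(text: str, window: int = 40, delay: float = 0.05) -> None:
    """Stream text through a single-line viewing window, loosely
    imitating Bob Brown's reading machine."""
    tape = " " * window + text + " " * window
    for i in range(len(tape) - window + 1):
        sys.stdout.write("\r" + tape[i:i + window])
        sys.stdout.flush()
        time.sleep(delay)
    print()

readies("The readies: machine-assisted speed reading in a single streaming line.")
```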

Recently, the Avant-Gardes and Speculative Technology (AGAST) Project at Oxford Brookes University has teamed up with MakerSpace, the Ashmolean Museum, and EOF Hackspace to reconstruct his prototype!

AGAST is a cross-disciplinary research group associated with PAL and Co-Creation that re-imagines the inventions of 20th-century avant-gardes using AR technology. In a series of community workshops at the new MakerSpace at Oxfordshire County Libraries, we invited local teens to help us retro-engineer and then build reading machines and write about the ‘future of reading’ in Oxford.

The prototypes, built with vintage and 3D-printed parts and complete with micro-printed texts, were exhibited alongside other AGAST reading machine projects at a lecture by Eric White at the Ashmolean Museum. They are now on permanent display at MakerSpace.


Special Track on Wearable Technology Enhanced Learning (@iLRN’19)

Fridolin Wild : 27th November 2018 6:57 pm : Augmented Reality, Wearable Computing

Wearable technologies – such as smart watches, smart glasses, smart objects, smart earbuds, or smart garments – are just starting to bring immersive user experiences into formal education and learning at the workplace. These devices are body-worn and equipped with sensors, integrating conveniently into their users’ leisure and work-related activities, including physical movement.

Wearable Enhanced Learning (WELL) is beginning to emerge as a new discipline in technology-enhanced learning, in combination with other relevant trends such as the transformation of classrooms, new mobility concepts, multi-modal learning analytics, and cyber-physical systems. Wearable devices play an integral role in the digital transformation of industrial and logistics processes in Industry 4.0, and thus demand new learning and training concepts such as experience capturing, re-enactment, and smart human-computer interaction.

This special track proposal is an offspring of the SIG WELL (http://ea-tel.eu/special-interest-groups/well/) within the European Association for Technology Enhanced Learning (EATEL). It is a follow-up to the inaugural sessions we held at iLRN 2015 in Prague and at iLRN 2017 in Coimbra.

In the meantime, the SIG has successfully organized a number of similar events at major research conferences and business-oriented fairs such as EC-TEL, I-KNOW, and Online Educa Berlin (OEB). Moreover, the SIG has been involved in securing substantial research funds through the H2020 project WEKIT (www.wekit.eu). The SIG would like to use this opportunity to present itself as a platform for scientific and industrial knowledge exchange; it is supported by EATEL and by major EU research projects and networks in the field. We will also seek to attach a community meeting of the IEEE Standards Association working group on Augmented Reality Learning Experience Models (IEEE ARLEM).

List of Topics

  • Industry 4.0 and wearable enhanced learning
  • Immersive Learning Analytics for wearable technologies
  • Wearable technologies for health and fitness
  • Wearable technologies and affective computing
  • Technology-Enhanced Learning applications of smart glasses, watches, armbands
  • Learning context and activity recognition for wearable enhanced learning
  • Body-area learning networks with wearable technologies
  • Data collection from wearables
  • Feedback from wearables, biofeedback
  • Learning designs with wearable technologies
  • Learning designs with Augmented Reality
  • Ad hoc learning with wearables
  • Micro learning with wearables
  • Security and privacy for wearable enhanced learning
  • Collaborative wearable enhanced learning
  • Development methods for wearable enhanced learning

Author Info

Submitted papers must follow the same guidelines as main conference submissions. Please visit https://immersivelrn.org/ilrn2019/authors-info/ for guidelines and templates. To submit a paper to this special track, please use the submission system at https://www.easychair.org/conferences/?conf=ilrn2019, log in with an account or register, and select the track “ST6: Wearable Technology Enhanced Learning” to add your submission.

Special Track Chairs

  • Ilona Buchem, Beuth University of Applied Sciences Berlin, Germany
  • Ralf Klamma, RWTH Aachen University, Germany
  • Fridolin Wild, Oxford Brookes University, UK
  • Mikhail Fominykh, Norwegian University of Science and Technology, Norway

Tentative Program Committee (t.b.c.)

  • Mario Aehnelt, Fraunhofer IGD Rostock, Germany
  • Davinia Hernández-Leo, Universitat Pompeu Fabra, Spain
  • Carlos Delgado Kloos, UC3M, Spain
  • Elisabetta Parodi, Lattanzio Learning Spa, Italy
  • Carlo Vizzi, Altec, Italy
  • Mar Pérez-Sanagustín, Pontificia Universidad Católica de Chile, Chile
  • Isa Jahnke, University of Missouri-Columbia, USA
  • Jos Flores, MIT, USA
  • Puneet Sharma, Norwegian University of Science and Technology, Norway
  • Yishay Mor, Levinsky College of Education, Israel
  • Tobias Ley, Tallinn University, Estonia
  • Peter Scott, University of Technology Sydney, Australia
  • Victor Alvarez, University of Oviedo, Spain
  • Agnes Kukulska-Hulme, The Open University, UK
  • Carl Smith, Ravensbourne University, UK
  • Victoria Pammer-Schindler, Graz University of Technology & Know-Center Graz, Austria
  • Christoph Igel, CeLTech, Germany
  • Peter Mörtel, Virtual Vehicle, Austria
  • Brenda Bannan, George Mason University, USA
  • Christine Perey, Perey Consulting, Switzerland
  • Kaj Helin, VTT, Finland
  • Jana Pejoska, Aalto, Finland
  • Jaakko Karjalainen, VTT, Finland
  • Joris Klerkx, KU Leuven, Belgium
  • Marcus Specht, Open University, Netherlands
  • Roland Klemke, Open University, Netherlands
  • Will Guest, Oxford Brookes University, UK

Contact

For more information, please contact Ralf Klamma (klamma@dbis.rwth-aachen.de).


Open Textbook for Augmented Reality

Fridolin Wild : 27th November 2018 1:03 pm : Augmented Reality, Projects

Call for Contributors

Teaching how to create and code Augmented Reality (AR) is an emerging topic in Higher Education. This should not be confused with the interest of various other subjects in using AR applications and content. Only a few top-tier universities worldwide currently offer courses that teach how to code AR. Few have related content, and none have a full curriculum on AR. The goal of this book project is to create the first comprehensive Open Educational Resource (OER) as a foundation for AR curricula in Higher Education. Every book about high tech risks being outdated by the time it goes into print, so we are planning a continuously developed and updated online book, working with an open community of contributors, Open Source style.

The book will be available as an Open Educational Resource (OER) under the Creative Commons licence CC BY-SA (Attribution-ShareAlike 4.0 International), which allows figures and texts to be reused in one’s own presentations. The book is planned as a living resource, where chapters can be reworked or added as needed.

The book production is supported by the ERASMUS+ Strategic Alliances Project for Higher Education called “Augmented Reality in Formal European University Education” (AR-FOR-EU). The project establishes and deepens a strategic partnership for teaching Augmented Reality in Higher Education at scale, at undergraduate and graduate levels.

Scope

The book will cover the necessary prerequisites to understand and follow the core concepts of teaching Augmented Reality. It will have a section on advanced topics that can optionally be covered in the supporting curricula, and a section dedicated to a collection of good practices in teaching AR coding. Overall, the book offers a comprehensive, introductory perspective on Augmented Reality.

Contents

Foundational Chapters

  • History of AR
  • Future of AR
  • Perceptual Foundations of AR
  • Sensors and Signal Processing
  • Computer Graphics
  • Programming
  • Algorithms and Data Structures
  • Linear and Geometric Algebra

Core Chapters

Core Technologies

  • Display Technologies
  • Tracking Technologies
  • Interaction Technologies
  • Hardware Abstraction Layer

AR Development Skills

  • AR SDKs
  • Unity / Unreal Engines
  • 3D Modeling & Rendering
  • Spatial Audio
  • Interactive Graphics and Sound Design
  • Gesture and Speech Recognition and Interaction
  • Human Computer Interaction and User Centered Design

Computer Vision

  • Image Analysis
  • Image Processing
  • Object Detection and Recognition
  • 3D Sensing
  • Tracking
  • Depth Sensing

Artificial Intelligence

  • Data Mining
  • Machine Learning, Deep Learning
  • Sensor Fusion

Advanced Topics

  • AR Agile Project Management
  • AR Game Development, Gamification and Serious Games
  • AR Applications
  • AR for the Web
  • Mobile AR
  • Hardware Accelerated Computing
  • Internet of Things, Robots and Wearables
  • Hardware and Optical Design
  • 2D/3D Printing

Good Practices and Examples

  • Maker Communities
  • Workflows and Company Practices
  • Privacy, Ownership and Intellectual Property
  • Applications, Employments and Careers in AR

Contribution Model and Infrastructure

The book project follows an agile approach, differing from the classic development process typical of printed content. Contributors can play several different roles in the production process: we are looking for authors, reviewers, agile editors, designers, software developers, visual artists, and testers. Agile teams are responsible for the generation of chapters and act as product owners. Reviewers will review chapters and communicate with the author teams. Team champions drive forward the agile development of chapters. Designers lay out the online book and printed versions. Software developers are responsible for interactive Web graphics, application examples, and other dynamic code. Visual artists are responsible for appealing visualizations. Testers will thoroughly try out the final versions of the book.

To guarantee agile development, the book uses Git for version management and a GitHub organization for the creation, hosting, and delivery of the book contents. We use the GitHub-based issue tracking system for communication between community members, such as the authors and reviewers. On top of this content sharing and version management platform, we use the static site generator Jekyll to render the content of the Git repository into a Web site: with every commit, a new version of the Web site is built automatically. Content is formatted in simple Markdown; programming and layout use JavaScript and CSS.
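As a minimal illustration of that rendering step (with Python’s third-party markdown package standing in for Jekyll’s Ruby pipeline), the sketch below converts every chapter file into an HTML fragment; the chapters/ folder name is our assumption:

```python
# pip install markdown
import markdown
from pathlib import Path

# Stand-in for the Jekyll build step: render every Markdown chapter
# in the repository into an HTML fragment next to it.
for chapter in Path("chapters").glob("*.md"):
    html = markdown.markdown(chapter.read_text(encoding="utf-8"))
    chapter.with_suffix(".html").write_text(html, encoding="utf-8")
    print(f"rendered {chapter.name}")
```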


Dr Wild to join editorial board of Frontiers in AI

Fridolin Wild : 8th November 2018 1:27 pm : Augmented Reality

Dr Fridolin Wild, the director of the lab, has joined the editorial board of Frontiers in Artificial Intelligence as associate editor. He will be overseeing in particular the area of AI for Human Learning and Behavior Change.

Frontiers, the publisher, was founded in 2007 by EPFL neuroscientists and is headquartered in Lausanne, Switzerland.

The new section, AI for Human Learning and Behavior Change, welcomes article submissions across the full spectrum of applying AI theories, concepts, and techniques to support people in their learning and voluntary behavior change.

Research in the areas of AI in Education (AIED), Collaborative Learning, and, more recently, Data Mining and Learning Analytics is helping create better learning tools and support environments to democratize education and make it more effective. Behavior change technologies, traditionally rooted in the health sciences, can help people avoid addictions and engage in healthy behaviors. With the increasing availability of powerful mobile and ubiquitous computing technologies, as well as inexpensive sensors, behavior change technologies have become commonplace and have expanded towards human behaviors in relation to environment, social engagement, safety, productivity, and learning (for example, avoiding procrastination). Just like intelligent tutors, behavior change systems require understanding and modeling user activities, personalization, recommending new activities and sequences, and supporting decision-making. They also often engage the user’s friends to provide social support for the activity through collaboration or competition. These topics are particularly the focus of research in the areas of the Quantified Self, Persuasive Technology, Recommender Systems, Decision-Support Systems, and Learning Technologies.

Changing human behavior is a learning process – in other words, Human Learning and Behavior Change are interconnected. On the one hand, researchers in Persuasive Technology and Behavior Change can learn from advances in AIED, Learning Analytics, Educational Data Mining, and Recommender Systems; on the other hand, researchers in AIED can learn from current work in Behavior Change, Persuasive Technologies, and the Quantified Self regarding the use of context, motivation strategies, cognitive biases, etc.

Bridging these currently distinct areas in our journal section is key to enabling cross-fertilization and to providing an innovative approach that fosters work “in-between”. AI for Human Learning and Behavior Change publishes review articles, communications, and original research papers describing applications of AI technologies to learning and behavior change.

Editor-in-Chief

Professor Julita Vassileva, University of Saskatchewan, faculty member of the ARIES lab

Associate Editors

Susanne Lajoie, McGill University
Sabine Graf, Athabasca University
Vania Dimitrova, University of Leeds
Seiji Isotani, University of Sao Paulo
Chad Lane, University of Illinois Urbana Champaign
Esma Aimeur, University of Montreal
Judith Masthoff, Utrecht University
Harri Oinas-Kukkonen, University of Oulu
Alexandra Cristea, University of Durham
Sergey Sosnovsky, Utrecht University
Ralf Klamma, RWTH Aachen University
Roger Nkambou, University of Quebec at Montreal
Fridolin Wild, Oxford Brookes
Fabrice Popineau, CentraleSupélec
Marcus Specht, TU Delft
Milos Kravcik, DFKI

Topics include but are not limited to:

• Intelligent tutoring systems
• AI or data-driven methods for modeling pedagogical knowledge and instructional planning
• AI methods and techniques for modeling collaborative learning processes, cohorts of learners and learning social networks
• Learning goals, learning sequences, recommendation of learning activities
• Affective and motivational aspects in AIED systems
• Data-driven design or adaptation of learning environments
• Data-powered collaborative learning environments
• Peer-help, peer-mentoring, peer-review systems
• Pedagogical / persuasive agents
• Adaptive / personalized Incentives and motivations for participation in collaborative (learning or not) environments
• Gamified (learning or not) environments, games with a purpose
• Adaptive / personalized persuasive strategies and persuasive systems design in different domains (environment, learning, social engagement, etc.)
• Self-monitoring and Persuasive systems for behavior change in health and medicine
• Transparency and accountability – open learner / user models, self-monitoring, generating explanations of pedagogical and persuasive strategies and recommendations
• Ethical issues of persuasive and behavior change systems


Visit to Audiomotion Studios

Yu Huang : 7th November 2018 5:07 pm : Augmented Reality, Wearable Computing

On entering Audiomotion Studios, visitors encounter an enormous motion capture (MoCap) space with over 160 Vicon cameras mounted on rigs. It is the largest performance capture stage in Europe: the movements of a large number of actors and animals can be recorded simultaneously to produce accurate animations. Behind the MoCap space are several rooms with green screens and a motion control crane for filming. PAL’s PhD candidate Yu Huang, Dr Fridolin Wild, and John Twycross met Brian Mitchell, the Managing Director, to explore possibilities for further collaboration on volumetric video capture.


Research Excellence Award 2018/19

Fridolin Wild : 3rd November 2018 10:58 am : Augmented Reality, User Interface

The work of the Performance Augmentation Lab was recognised by the University with the Research Excellence Award for the academic year 2018/19. Fridolin Wild and John Twycross received the award from the Pro-Vice-Chancellor for Research and Global Partnerships, Prof Dr Linda King, in the category Interdisciplinary/Collaborative/Pump-prime. The award comes with a grant of £10,000, which the lab will use to build a ‘holodeck’, a capture studio for volumetric video, to research more efficient and effective ways of holographic projection.

This will help advance our capacity for creating and simulating digital twins of real-life characters and their action sequences for training, media, and entertainment purposes. Gaps in current research have been identified in the areas of body and face scanning (the process of converting a human actor and their expressions into a virtual model ready for animation). While video game and Hollywood film producers have completed a significant body of work, little of that commercial knowledge has been disseminated; this project aims to address the gap. Through experimentation and practice-based production, we will refine and disseminate a robust workflow enabling photo-real full-body capture with movement and facial expression digitisation and reproduction. We seek to collaborate with industry on this.


WEKIT.one Halloween Release Candidate

Will Guest : 31st October 2018 8:00 pm : Augmented Reality, Performance Analytics, Wearable Computing

Just in time for Halloween, we have finalised work on a major release of WEKIT.one, our next-generation app for wearable experiences in knowledge-intensive training.

The development of the experience capturing software has been led by members of the Performance Augmentation Lab. It is one of the first tools of its kind that allows content generation to happen completely within AR. Using a HoloLens and other wearable sensors, the software guides experts through recording immersive training procedures using all available AR content. Blending 2D and 3D instruction into the workplace creates a far richer and more interactive training experience.

The expert works through the procedure, capturing their actions, thoughts, and guiding instructions step by step. We are able to capture their movement in and around the workplace, their hand positions, and even additional biophysical signals, such as heart rate variability or galvanic skin response. With just the technology at hand, trainees can then visualise the expert, listen to live guidance, and access on-demand knowledge about the task at hand.
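As an example of what can be derived from such signals, here is a minimal sketch that computes RMSSD, a standard time-domain measure of heart rate variability, from a series of inter-beat intervals; the sample values are invented for illustration:

```python
import math

def rmssd(ibi_ms: list) -> float:
    """Root mean square of successive differences between inter-beat
    intervals (milliseconds), a standard time-domain HRV measure."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented sample of inter-beat intervals in milliseconds.
print(round(rmssd([812, 790, 825, 803, 841, 818]), 1))  # -> 28.9
```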

So far, we have seen experts in the fields of aircraft maintenance, radiology, and astronaut training use this software, and in 2019 we aim to establish new collaborations within the university, within Oxford, and abroad – most imminently with the European Space Agency.


Augmented World Expo

Fridolin Wild : 21st October 2018 4:41 pm : Augmented Reality, Wearable Computing

Last week at the Augmented World Expo in Munich, we exhibited the WEKIT project, showcasing the breakthrough achievements of our augmented reality and wearables solution for the space industry, aviation, and medicine. On stand #217, we showed the different versions of the e-textile garment (and its underlying sensor harness) as well as the WEKIT.one software solution. The director of our lab, Dr Fridolin Wild, gave a keynote presentation in the enterprise track on AR experience capturing and sharing for Training 4.0, explaining the technologies of the project and the findings of the pilot trials reported so far in a series of articles and papers.

