The traditional route to knowledge is to read a book from a library. We’re investigating how we can go beyond this and embed knowledge directly into the perception of the user, right where action happens and performance is required.
Wearables thereby act as gateways, mediating between objective reality and its enhancement with visual, auditory, haptic, and other overlays. When done well, they help turn sensorimotor perception into experience.
This requires two types of world knowledge: data about the workplace and data about the activity pursued. While the former is rather stable, the latter is dynamic and changes much more rapidly. We are researching both representation and implementation, working on standards as well as development toolkits and frameworks.
Our PhD candidate Alla Vovk created a reading list of textbooks for those interested in learning more about Augmented Reality and related topics.
Finally did something that I had been planning for a very long time: I created an Augmented Reality Reading List. As you may know, I’m researching Spatial Interfaces and interaction in AR, and here is what I’ve been reading throughout my PhD years. The list has some titles that would be quite a good choice for an absolute beginner, and a selection of books about cognition, perception and spatial memory for a deeper understanding of the topic (my favorite part) — Alla Vovk
1. Handbook of Virtual Environments: Design, Implementation, and Applications. Kelly S. Hale, Kay M. Stanney.
2. Augmented Reality: Principles and Practice. Tobias Höllerer, Dieter Schmalstieg.
3. Handbook of Augmented Reality. Borko Furht.
4. Spatial Augmented Reality: Merging Real and Virtual Worlds. Oliver Bimber, Ramesh Raskar.
5. Understanding Augmented Reality. Concepts and Applications. Alan Craig.
6. Envisioning Holograms. Design Breakthrough Experiences for Mixed Reality. Mike Pell.
7. 3D User Interfaces. Theory and Practice. Joseph LaViola, Ernst Kruijff, Ryan McMahan.
8. Things That Make Us Smart: Defending Human Attributes In The Age Of The Machine. Donald A. Norman.
9. Computational Interaction. Antti Oulasvirta, Per Ola Kristensson, Xiaojun Bi.
10. Being Digital. Nicholas Negroponte.
11. The Mind is Flat. Nick Chater.
12. Designing with Blends: Conceptual Foundations of Human-Computer Interaction and Software Engineering. Manuel Imaz, David Benyon.
13. Action in Perception. Alva Noë.
14. Human Factors in Augmented Reality Environments. Weidong Huang, Leila Alem, Mark A. Livingston.
15. Immersive Analytics. Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, Bruce H. Thomas.
16. How to Design Programs. An Introduction to Programming and Computing. Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, Shriram Krishnamurthi.
17. Where the Action Is. The Foundations of Embodied Interaction. Paul Dourish.
18. Visual Intelligence: How We Create What We See. Donald D. Hoffman.
19. Human Spatial Memory: Remembering Where. Gary L. Allen.
20. Metaphors We Live By. George Lakoff and Mark Johnson.
21. Cognitive Mapping: Past, Present and Future. Rob Kitchin, Scott Freundschuh.
22. Things and Places. How the Mind Connects with the World. Zenon W. Pylyshyn.
23. The Joy of UX: User Experience and Interactive Design for Developers. David Platt.
24. Storytelling for User Experience: Crafting Stories for Better Design. Whitney Quesenbery and Kevin Brooks.
25. The Age of Insight: The Quest to Understand the Unconscious in Art, Mind, and Brain, from Vienna 1900 to the Present. Eric Kandel.
26. Human Error. James Reason.
27. Is the Visual World a Grand Illusion? Alva Noë.
28. Human Space. O. F. Bollnow.
29. Cognition in the Wild. Edwin Hutchins.
30. Eye, Brain and Vision. David H. Hubel.
31. The Oxford Companion to the Mind. Richard L. Gregory.
32. Envisioning Information. Edward R. Tufte.
33. The Embodied Mind, Revised Edition. Cognitive Science and Human Experience. Francisco J. Varela, Evan Thompson and Eleanor Rosch.
34. Cognition Beyond the Brain. Computation, Interactivity and Human Artifice. Stephen J. Cowley, Frederic Vallee-Tourangeau.
35. Emotional Engineering. Shuichi Fukuda.
36. Toward a Theory of Instruction. Jerome Bruner.
Learning Analytics in Augmented Reality (or LAAR for short), the EU-funded project I’ve been working on as part of my position in PAL, looks at how technology can support learning. In particular, it looks at work-based training, where augmented environments will soon be training new employees, teaching them and evaluating their performance. In order to perform the learning analytics, we must gather data on users while they use the prototype applications. The Experience API (or xAPI) was chosen to do this.
TinCan is Rustici Software’s implementation of the xAPI standard. It has been ported to a number of languages, but for HoloLens / Unity development we only need the C# version, which can be found here.
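To give a flavour of what this looks like, here is a minimal sketch of recording a single xAPI statement with the TinCan C# library, loosely based on its published examples; the endpoint, credentials, verb, and activity ID are placeholders rather than the actual LAAR configuration:

```csharp
using System;
using TinCan;
using TinCan.LRSResponses;

public class XapiExample
{
    public static void SendExampleStatement()
    {
        // Connect to a Learning Record Store (placeholder endpoint and credentials).
        var lrs = new RemoteLRS("https://my-lrs.example.com/xapi/", "key", "secret");

        // Actor: the learner using the prototype application.
        var actor = new Agent();
        actor.mbox = "mailto:learner@example.com";

        // Verb: what the learner did.
        var verb = new Verb();
        verb.id = new Uri("http://adlnet.gov/expapi/verbs/completed");
        verb.display = new LanguageMap();
        verb.display.Add("en-US", "completed");

        // Object: the activity the statement is about (placeholder activity ID;
        // in some library versions this property is a Uri rather than a string).
        var activity = new Activity();
        activity.id = "http://example.com/activities/assembly-task-1";

        // Assemble and send "learner completed assembly-task-1".
        var statement = new Statement();
        statement.actor = actor;
        statement.verb = verb;
        statement.target = activity;

        StatementLRSResponse response = lrs.SaveStatement(statement);
        Console.WriteLine(response.success ? "Statement stored" : "Statement failed");
    }
}
```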
Now, I had a few problems making this work in Unity… When you drop TinCan into your project, it’s not going to compile. There are a few dependencies that do not work, something to do with System.Net not being fully available in Mono or UWP builds… I’ve spent more time than I’d like to admit on this. Let’s skip the diagnosis and instead look at the solutions; here is how you can fix all those red lines:
Fix 1: Use UnityWebRequest
This works and has been tested on Unity 2017.2.1f1 (and should work for all newer versions too). I’ve used this to build HoloLens applications and have also seen it deployed in an iOS build of a Unity project.
This solution replaces the System.Net calls with Unity’s native UnityWebRequest (in the UnityEngine.Networking namespace). It’s embarrassingly simple; a sketch of the replacement follows the list below.
- Scripting Runtime Version: Experimental (.NET 4.6 Equivalent)
- Api Compatibility Level: .NET 4.6
- Replace all references to HttpWebRequest with UnityWebRequest (download it here)
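For illustration, the change boils down to sending the statement JSON with a coroutine along these lines; the endpoint, credentials, and version header are placeholder values rather than the actual project configuration:

```csharp
using System;
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class XapiSender : MonoBehaviour
{
    // Placeholder LRS endpoint and credentials.
    const string Endpoint = "https://my-lrs.example.com/xapi/statements";
    const string Key = "key";
    const string Secret = "secret";

    // Post one pre-serialised xAPI statement to the LRS.
    public IEnumerator PostStatement(string statementJson)
    {
        var request = new UnityWebRequest(Endpoint, "POST");
        request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(statementJson));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        request.SetRequestHeader("X-Experience-API-Version", "1.0.3");
        request.SetRequestHeader("Authorization",
            "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes(Key + ":" + Secret)));

        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
            Debug.LogError("xAPI statement failed: " + request.error);
        else
            Debug.Log("xAPI statement stored: " + request.downloadHandler.text);
    }
}
```

In practice you would make this swap inside TinCan’s HTTP layer (the RemoteLRS class) rather than re-implementing statement handling yourself.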
Fix 2: Use IL2CPP
This works and has been tested on Unity 2018.2.2f1.
Another solution, which works for the latest Unity release, is to:
- Go to File > Build Settings… > Player Settings… > Other Settings > Configuration > Scripting Backend
- Set the “Scripting Backend” to “IL2CPP”
- Replace the reference “System.Web.WebUtility” with “System.Net.WebUtility”
Even then, the code will still not compile, throwing up a number of errors. When you hit build, the project runs into the same dependency errors that occurred in 2017, to do with only a subset of .NET being available…
The solution I found that seemed to work for Unity 2018.2.2f1 involved making a few basic changes to the RemoteLRS class and setting the Scripting Backend to IL2CPP, as mentioned here. (The Mixed Reality Toolkit sets the backend to .NET by default, but this version of Unity pops up a warning that the .NET backend is being deprecated, and UWP is said to be fully supported under IL2CPP… so we will all probably be compiling to IL2CPP in the future no matter what.)
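If you prefer to set this from code rather than clicking through the menus, a small editor script along these lines should do it (the class and menu names are my own, illustrative choices):

```csharp
// Editor/UseIl2cppBackend.cs -- illustrative editor utility, not part of TinCan or the MRTK.
using UnityEditor;

public static class UseIl2cppBackend
{
    [MenuItem("Tools/Use IL2CPP for UWP")]
    public static void Apply()
    {
        // Equivalent to Player Settings > Other Settings > Configuration > Scripting Backend.
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.WSA, ScriptingImplementation.IL2CPP);

        // Keep the .NET 4.6 API level, matching the settings used in Fix 1 above.
        PlayerSettings.SetApiCompatibilityLevel(BuildTargetGroup.WSA, ApiCompatibilityLevel.NET_4_6);
    }
}
```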
Thank you for reading, I hope this post has been helpful. Any thoughts, questions, comments, criticisms? I’ll happily answer all.
With Alison Kahn from Stanford University and the Pitt Rivers Museum, we started engaging with the culture of the mysterious and little-known Naga people of North East India. If the vision becomes reality, visitors could see holograms on a trail, exhibited among the museum’s gallery collections. They could witness a ‘pop-up’ hologram of Naga people explaining, through storytelling, the meaning of the historical artefacts, helping visitors understand them better. Dr Alison Kahn, Stanford University, is now going on a field trip to the Naga communities in India to record interviews and artefacts in support of this project.
We aim to apply multiple methods for the 3D reconstruction, including photogrammetry and 3D scanning, in order to create realistic 3D reconstructions of the witnesses and their chosen cultural artefacts in a HoloLens. Photogrammetry is the process in which 3D models with the textures of the existing objects are created by taking many overlapping images from different angles in order to generate a point cloud. For a purpose like the one we have in mind, angular steps of 30–60 degrees between shots are usually deemed suitable, requiring about 50–80 photos per object, which keeps capturing time to a minimum.
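As a rough sanity check on those numbers, a back-of-the-envelope calculation lands in the same ballpark; the ring count and the number of extra detail shots are my own illustrative assumptions:

```csharp
// Back-of-the-envelope estimate of photos per object for a photogrammetry capture.
using System;

class CaptureEstimate
{
    static void Main()
    {
        double stepDegrees = 30;   // angular step between shots (30-60 degrees, as above)
        int elevationRings = 4;    // assumed: rings at different heights around the object
        int extraDetailShots = 10; // assumed: close-ups of handles, inscriptions, etc.

        int shotsPerRing = (int)Math.Ceiling(360.0 / stepDegrees); // 12 shots at a 30-degree step
        int total = shotsPerRing * elevationRings + extraDetailShots;

        Console.WriteLine($"~{total} photos per object"); // ~58 here, within the 50-80 range above
    }
}
```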
Additionally, we intend to use 3D scanning (with an Occipital Structure Sensor) to recreate 3D models directly on an iPad. Compared to optical photogrammetry, this makes it faster and easier to reconstruct high-resolution meshes.
Considering the final quality, however, each mesh and texture is at risk of stretching and distortion, especially for thin and shiny objects with reflective surfaces. Where this poses a problem, we will have to model the affected parts manually from photographs instead of relying on 3D scans and photogrammetry.
The beginning of September saw me attend the fourth (technically my second) of the partner meetings for the LAAR project. This meeting was hosted by the wise Chris Van Goethem, our project’s connection to the weird and wonderful world of theatre. The project’s use case is training stage-hands using augmented reality combined with learning analytics. Chris teaches and advises the team, comprised mainly of proud computer geeks, about this ancient craft. The event was hosted in the beautiful and busy Kaaitheater, Brussels. I’d never been to Brussels, how exciting!
Prior to the trip, tensions for me were high. This was our last meeting before the big deadline: a conference in Frankfurt in the first week of April. It would be great to see everyone again; I just hoped everything would be coming together. I returned to the UK after the two-day meeting relieved, with a bounce in my step. It was great to see everyone’s progress and (most importantly for me) we have a plan for Frankfurt, a good one. In hindsight, I didn’t need to worry. The team’s managers had already looked far into the future and had been planning the conference for months.
On the last day we were given a tour of the theatre. This was probably my favourite part of the trip, and I found it surprisingly helpful and relevant to the research I have been doing. The theatre has a rich history, explained to us by our lovely tour guide (whose name I’ve forgotten, sorry). I would try and explain the history, but I would not do it justice, so maybe just visit? We were shown around the backstage and even allowed to climb to the top of the stage, a scary experience for me – I seem to be developing a fear of heights with age, which I believe is totally unfair (a year ago I would sit, intoxicated, on the edge of cliffs, my feet dangling over the drop). The main realisation I had from the tour was in regard to safety. Working in the theatre industry can be dangerous! On-stage there are weights suspended high above people’s heads, ropes under tension, it’s busy, and there are always new people coming and going as shows change… Chris had mentioned the discipline required by the professionals who work in this industry, but this really made me understand why. I hope this is now reflected in the final iteration of research for this project.
The UK is facing a STEAM skills crisis. These skills are important for a range of industries, from manufacturing to the arts, and the supply of candidates into engineering (and more broadly STEM) occupations is not keeping pace with demand. The use of AR in education can transform the school environment into a more technology-friendly one, with a greater variety of learning opportunities for students. According to researchers, the use of AR can increase participation, understanding and learning: three key elements of all educational systems’ targets. Since earlier information technology tools have already been implemented in classrooms and students are familiar with handling devices, incorporating AR into education can be accomplished relatively easily.
Within the internally funded Augmented Reality for STEAM Education (AR-STEAM) project, we seek to organize a series of events that create a common space for teachers, policy makers, and technology providers to meet and exchange knowledge and experience in AR for STEAM education, in order to help stimulate wider uptake of this technology. The project runs from 1 January to 31 December 2019.
In 1931, the American Surrealist writer Bob Brown invented a reading machine. The device produced a form of machine-assisted speed reading, in which micrographically-printed text would scroll under a magnification screen in a single, streaming line. Where cinema gave the world the ‘talkies’, Brown offered ‘the readies’, and important writers like Gertrude Stein, Ezra Pound and William Carlos Williams helped him demonstrate its potential. However, Brown’s reading machine never went into production, and even though he is gaining popularity as the ‘godfather of the e-reader’, some people still doubt whether Brown’s machine actually worked.
Recently, however, the Avant-Gardes and Speculative Technology (AGAST) Project at Oxford Brookes University has teamed up with MakerSpace, the Ashmolean Museum, and EOF Hackspace to reconstruct his prototype!
AGAST is a cross-disciplinary research group associated with PAL and Co-Creation that re-imagines the inventions of 20th-century avant-gardes using AR technology. In a series of community workshops at the new MakerSpace at Oxfordshire County Libraries, we invited local teens to help us retro-engineer and then build reading machines and write about the ‘future of reading’ in Oxford.
The prototypes, built with vintage and 3D printed parts, complete with micro-printed texts, were exhibited with other AGAST reading machine projects at a lecture by Eric White at the Ashmolean Museum. They are now on permanent display at MakerSpace.
Wearable technologies – such as smart watches, smart glasses, smart objects, smart earbuds, or smart garments – are just starting to bring immersive user experiences into formal education and learning at the workplace. These devices are body-worn, equipped with sensors, and integrate conveniently into leisure and work-related activities, including the physical movements of their users.
Wearable Enhanced Learning (WELL) is beginning to emerge as a new discipline in technology-enhanced learning, in combination with other relevant trends like the transformation of classrooms, new mobility concepts, multi-modal learning analytics, and cyber-physical systems. Wearable devices play an integral role in the digital transformation of industrial and logistics processes in Industry 4.0 and thus demand new learning and training concepts such as experience capturing, re-enactment, and smart human-computer interaction.
This special track proposal is an offspring of the SIG WELL (http://ea-tel.eu/special-interest-groups/well/) in the context of the European Association for Technology Enhanced Learning (EATEL). It is a follow-up to the inaugural sessions we held at iLRN 2015 in Prague and iLRN 2017 in Coimbra.
In the meantime, the SIG has been successful in organizing a number of similar events at major research conferences and business-oriented fairs such as EC-TEL, I-KNOW, and Online Educa Berlin (OEB). Moreover, the SIG has been involved in securing substantial research funds through the H2020 project WEKIT (www.wekit.eu). The SIG would like to use the opportunity to present itself as a platform for scientific and industrial knowledge exchange; it is supported by EATEL and by major EU research projects and networks in the field. Moreover, we will seek to attach an IEEE Standards Association community meeting of the working group on Augmented Reality Learning Experience Models (IEEE ARLEM).
List of Topics
- Industry 4.0 and wearable enhanced learning
- Immersive Learning Analytics for wearable technologies
- Wearable technologies for health and fitness
- Wearable technologies and affective computing
- Technology-Enhanced Learning applications of smart glasses, watches, armbands
- Learning context and activity recognition for wearable enhanced learning
- Body-area learning networks with wearable technologies
- Data collection from wearables
- Feedback from wearables, biofeedback
- Learning designs with wearable technologies
- Learning designs with Augmented Reality
- Ad hoc learning with wearables
- Micro learning with wearables
- Security and privacy for wearable enhanced learning
- Collaborative wearable enhanced learning
- Development methods for wearable enhanced learning
Submitted papers must follow the same guidelines as the main conference submissions. Please visit https://immersivelrn.org/ilrn2019/authors-info/ for guidelines and templates. To submit a paper to this special track, please use the submission system at https://www.easychair.org/conferences/?conf=ilrn2019, log in with an account or register, and select the track “ST6: Wearable Technology Enhanced Learning” to add your submission.
Special Track Chairs
- Ilona Buchem, Beuth University of Applied Sciences Berlin, Germany
- Ralf Klamma, RWTH Aachen University, Germany
- Fridolin Wild, Oxford Brookes University, UK
- Mikhail Fominykh, Norwegian University of Science and Technology, Norway
Tentative Program Committee (t.b.c.)
- Mario Aehnelt, Fraunhofer IGD Rostock, Germany
- Davinia Hernández-Leo, Universitat Pompeu Fabra, Spain
- Carlos Delgado Kloos, UC3M, Spain
- Elisabetta Parodi, Lattanzio Learning Spa, Italy
- Carlo Vizzi, Altec, Italy
- Mar Pérez-Sanagustín, Pontificia Universidad Católica de Chile, Chile
- Isa Jahnke, University of Missouri-Columbia, USA
- Jos Flores, MIT, USA
- Puneet Sharma, Norwegian University of Science and Technology, Norway
- Yishay Mor, Levinsky College of Education, Israel
- Tobias Ley, Tallinn University, Estonia
- Peter Scott, University of Technology Sydney, Australia
- Victor Alvarez, University of Oviedo, Spain
- Agnes Kukulska-Hulme, The Open University, UK
- Carl Smith, Ravensbourne University, UK
- Victoria Pammer-Schindler, Graz University of Technology & Know-Center Graz, Austria
- Christoph Igel, CeLTech, Germany
- Peter Mörtel, Virtual Vehicle, Austria
- Brenda Bannan, George Mason University, USA
- Christine Perey, Perey Consulting, Switzerland
- Kaj Helin, VTT, Finland
- Jana Pejoska, Aalto, Finland
- Jaakko Karjalainen, VTT, Finland
- Joris Klerkx, KU Leuven, Belgium
- Marcus Specht, Open University, Netherlands
- Roland Klemke, Open University, Netherlands
- Will Guest, Oxford Brookes University, UK
For more information, please contact Ralf Klamma (email@example.com).
Call for Contributors
Teaching how to create and code Augmented Reality (AR) is an emerging topic in Higher Education. This should not be confused with the interest of various other subjects in using AR applications and content. Only a few top-tier universities worldwide currently offer courses that teach how to code AR; few have related content, and none have a full curriculum on AR. The goal of this book project is to create the first comprehensive Open Educational Resource (OER) as a foundation for AR curricula in Higher Education. Every book about high tech risks being outdated by the time it goes to print, so we are planning a continuously developed and updated online book, working with an open community of contributors, Open Source style.
The book will be available as an Open Educational Resource (OER) under the Creative Commons licence CC BY-SA. This allows figures and texts to be reused in your own presentations under the terms of Attribution-ShareAlike 4.0 International. The book is planned as a living resource, where chapters can be reworked or added as needed.
The book production is supported by the ERASMUS+ Strategic Alliances Project for Higher Education called “Augmented Reality in Formal European University Education” (AR-FOR-EU). The project AR-FOR-EU establishes and deepens a strategic partnership for teaching Augmented Reality in Higher Education at scale, at undergraduate and graduate levels.
The book will cover the necessary prerequisites to understand and follow the core concepts of teaching Augmented Reality. It will have a section for advanced topics that can be covered optionally in the supporting curricula. A section of the book will also be dedicated to a collection of good practices in teaching AR coding. Overall, the book offers a comprehensive, introductory perspective on the topic of Augmented Reality. Foundational topics include:
- History of AR
- Future of AR
- Perceptual Foundations of AR
- Sensors and Signal Processing
- Computer Graphics
- Algorithms and Data Structures
- Linear and Geometric Algebra
- Display Technologies
- Tracking Technologies
- Interaction Technologies
- Hardware Abstraction Layer
AR Development Skills
- AR SDKs
- Unity / Unreal Engines
- 3D Modeling & Rendering
- Spatial Audio
- Interactive Graphics and Sound Design
- Gesture and Speech Recognition and Interaction
- Human Computer Interaction and User Centered Design
- Image Analysis
- Image Processing
- Object Detection and Recognition
- 3D Sensing
- Depth Sensing
- Data Mining
- Machine Learning, Deep Learning
- Sensor Fusion
- AR Agile Project Management
- AR Game Development, Gamification and Serious Games
- AR Applications
- AR for the Web
- Mobile AR
- Hardware Accelerated Computing
- Internet of Things, Robots and Wearables
- Hardware and Optical Design
- 2D/3D Printing
Good Practices and Examples
- Maker Communities
- Workflows and Company Practices
- Privacy, Ownership and Intellectual Property
- Applications, Employments and Careers in AR
Contribution Model and Infrastructure
The book project follows an agile approach, differing from the classic development process typical for printed content. Contributors can play several different roles in the production process. We are looking for authors, reviewers, agile editors, designers, software developers, visual artists, and testers. Agile teams are responsible for the generation of chapters and act as product owners. Reviewers will review chapters and communicate with the author teams. Team champions drive forward the agile development of chapters. Designers lay out the online book and printed versions. Software developers are responsible for interactive web graphics, application examples, and other dynamic code. Visual artists are responsible for appealing visualizations. Testers will thoroughly try out the final versions of the book.
Dr Fridolin Wild, the director of the lab, has joined the editorial board of Frontiers in Artificial Intelligence as associate editor. He will be overseeing in particular the area of AI for Human Learning and Behavior Change.
Frontiers, the publisher, was founded in 2007 by EPFL neuroscientists and is headquartered in Lausanne, Switzerland.
The new section on AI for Human Learning and Behavior Change in Frontiers in Artificial Intelligence welcomes article submissions across the full spectrum of applying AI theories, concepts, and techniques to support people in their learning and voluntary behavior change.
Research in the areas of AI in Education (AIED), Collaborative Learning, and more recently Data Mining and Learning Analytics is helping create better learning tools and support environments to democratize education and make it more effective. Behavior change technologies, traditionally belonging to the health science domain, can help people avoid addictions and engage in healthy behaviors. With the increasing availability of powerful mobile and ubiquitous computing technologies, as well as inexpensive sensors, behavior change technologies have become commonplace and have expanded towards human behaviors in relation to the environment, social engagement, safety, productivity, and learning (for example, avoiding procrastination). Just like intelligent tutors, behavior change systems require understanding and modeling user activities, personalization, recommending new activities and sequences, and supporting decision-making. They also often engage the user’s friends to provide social support of the activity through collaboration or competition. These topics are particularly the focus of research in the areas of Quantified Self, Persuasive Technology, Recommender Systems, Decision-Support Systems, and Learning Technologies.
Changing human behavior is a learning process – in other words, Human Learning and Behavior Change are interconnected. On the one hand, researchers in Persuasive Technology and Behavior Change can learn from advances in AIED, Learning Analytics, Educational Data Mining, and Recommender Systems; on the other hand, researchers in AIED can learn from current work in Behavior Change and Persuasive Technologies and the Quantified Self regarding the use of context, motivation strategies, cognitive biases, etc.
Bridging these currently distinct areas in our journal section is key to enable cross-fertilization, and to provide an innovative approach to foster the work “in-between”. AI for Human Learning and Behavior Change publishes review articles, communications, and original research papers describing applications of AI technologies to learning and behavior change.
Professor Julita Vassileva, University of Saskatchewan, Faculty member ARIES lab
Susanne Lajoie, McGill University
Sabine Graf, Athabasca University
Vania Dimitrova, University of Leeds
Seiji Isotani, University of Sao Paulo
Chad Lane, University of Illinois Urbana Champaign
Esma Aimeur, University of Montreal
Judith Masthoff, Utrecht University
Harri Oinas-Kukkonen, University of Oulu
Alexandra Cristea, University of Durham
Sergey Sosnovsky, Utrecht University
Ralf Klamma, RWTH Aachen University
Roger Nkambou , University of Quebec at Montreal
Fridolin Wild, Oxford Brookes
Fabrice Popineau, CentraleSupélec
Marcus Specht, TU Delft
Milos Kravcik, DFKI
Topics include but are not limited to:
• Intelligent tutoring systems
• AI or data-driven methods for modeling pedagogical knowledge and instructional planning
• AI methods and techniques for modeling collaborative learning processes, cohorts of learners and learning social networks
• Learning goals, learning sequences, recommendation of learning activities
• Affective and motivational aspects in AIED systems
• Data-driven design or adaptation of learning environments
• Data-powered collaborative learning environments
• Peer-help, peer-mentoring, peer-review systems
• Pedagogical / persuasive agents
• Adaptive / personalized Incentives and motivations for participation in collaborative (learning or not) environments
• Gamified (learning or not) environments, games with a purpose
• Adaptive / personalized persuasive strategies and persuasive systems design in different domains (environment, learning, social engagement, etc.)
• Self-monitoring and Persuasive systems for behavior change in health and medicine
• Transparency and accountability – open learner / user models, self-monitoring, generating explanations of pedagogical and persuasive strategies and recommendations
• Ethical issues of persuasive and behavior change systems
On entering Audiomotion Studios, visitors encounter an enormous motion capture (MoCap) space with over 160 Vicon cameras mounted on the rigs (see picture). The Audiomotion studio is the largest performance capture stage in Europe. A large number of actor and animal movements can be recorded at the same time to produce accurate animations. Behind the MoCap space there are several rooms with green screens and a motion control crane for filming. PAL’s PhD candidate Yu Huang, Dr Fridolin Wild, and John Twycross went to see Brian Mitchell, the Managing Director, to explore possibilities for further collaboration on volumetric video capture.