The traditional route to knowledge is to read a book from a library. We’re investigating how we can go beyond this and embed knowledge directly into the perception of the user, right where action happens and performance is required.
Wearables thereby act as gateways, mediating between objective reality and its enhancement with visual, auditory, haptic, and other overlays. When done well, they help turn sensorimotor perception into experience.
This requires two types of world knowledge: data about the workplace and data about the activity pursued. While the first is rather stable, the latter is dynamic and changes much more rapidly. We’re researching both representation and implementation, working on standards as well as development toolkits and frameworks.
Our artists in residence, Yann Deval and Marie G. Losseau, presented a preview of ATLAS, their work in progress, at the invitation of the European Commission at SXSW in Austin, Texas, from March 10-12. ATLAS is an experience that mixes augmented reality and virtual reality: spectators were invited to engage with it and build virtual cities using the ATLAS seed launcher. Each seed sprouts a house that follows a few urbanistic rules and adapts to its environment. There are houses in the clouds, uprooted houses, houses on stilts, flying houses, and more. The cities assembled by the audience take on a life of their own, with and without user interaction, just like any living organism. The work lets you assemble huge cities in which you can wander and lose yourself. It provokes reflection on urbanism, architecture, and their influence on our lifestyle. It gives life to the inanimate.
The big question ATLAS helps to explore is the relationship between physical space and virtual space. ATLAS transforms the user’s physical space into a map of the environment, swapping perspectives.
The map serves as a launch pad for sprouting houses, establishing a link between the physical space of the user’s surrounding and the holographic houses.
Technically, this was realised with custom shaders written in Cg, NVIDIA’s shader programming language. Holographic houses are hidden when they pass behind a physical object. This occlusion was easy to create with the HoloLens tools, but it adds real value and credibility to the experience.
The HoloLens, by Microsoft, is a ‘developer kit’, a prototype. The space in which holograms are displayed is very limited, which can often disrupt the user’s feeling of immersion. ATLAS uses this limitation as a creative constraint: the user’s gaze unveils the virtual overlays organically, making the limitation appear deliberate.
Though not an essential technical change, from the user’s perspective it was a significant one. Frustration decreased, and the number of participants mentioning the limited field of view dropped markedly.
ATLAS will offer different chapters, each immersing the user in another layer of augmented reality. Dividing the experience into chapters is perhaps an influence of literature on our work.
To give the houses life, an animation was created with Houdini, 3D software for procedural animation and simulation. Houdini’s procedural approach helped speed up development of the organic animation compared with tools such as MeshLab or Cinema 4D.
Several different houses were 3D-scanned and integrated, some of them built by artist Marie G. Losseau, others by 9-year-old kids during the MCCS series of workshops in Molenbeek, Brussels.
To build bridges between realities, windows were created that let the spectator look into a VR world.
The outcome of the residency at this point is a functional prototype, a five-minute MR (Mixed Reality) experience for a single user.
The prototype was also presented at MCCS Molenbeek, where 100 kids participated in the project Classes Urbaines. Many ideas for the next chapters of the work are coming together in discussions with the WEKIT project team and our Oxford Brookes University students. These will be integrated in September during the next residency in Oxford!
ATLAS (prototype): An experience by Yann Deval & Marie G. Losseau
In collaboration with WEKIT / Dr. Fridolin Wild (Oxford Brookes University) for the Mixed Reality part.
Kids from Molenbeek’s schools 1, 5 and 10 for contributing to the building of the cities.
With the support of VERTIGO STARTS program of the European Commission with IRCAM-Centre Pompidou and EPFL
Maison des Cultures et de la Cohésion Sociale de Molenbeek Saint-Jean
Programming, graphic design and music by
Model making and scenography by
Marie G. Losseau
WEKIT (Oxford Brookes University / Performance Augmentation Laboratory)
Dr. Fridolin Wild
External contributions (Mixed Reality)
TriplanarWorld Shader by Robert Yang
SpatialMappingRendererWithNormals by Matt@DeckTwelve
Professor Anu Ojha from the National Space Academy guided us through the day of an astronaut, explaining how pressure works and what effects pressure differences have on the body. With him, our team experienced the Sokol space suit, a type of Russian spacesuit worn by all who fly on the Soyuz spacecraft! We tried to understand the effect of a mechanical pressure suit on an astronaut’s body and to think about how we could integrate different sensors to measure physiological parameters alongside augmented reality glasses. The suit consists of an inner pressure layer of rubberised polycaprolactam and an outer layer of green nylon canvas. Boots are integrated with the suit, but the gloves are removable and attach by means of blue anodised aluminium wrist couplings. We must say the full suit is quite heavy to wear, especially when you think about having it on your body for more than 10 hours at a stretch, but it’s a unique feeling when your body is squeezed by the air.
The R package ‘mpia’ provides a computational model to understand, analyse, and advise on human learning.
Meaningful Purposive Interaction Analysis (MPIA) combines the ‘best of’ social network analysis (SNA) with latent semantic analysis (LSA) to help create and analyse meaningful learning spaces from the digital traces left by a learning community in the co-construction of knowledge.
The hybrid algorithm is implemented in the statistical programming language and environment R, capturing – through matrix algebra – elements of learners’ work with more knowledgeable others and with resourceful content artefacts. This building-block application allows users to build and apply analytics that guide students and support decision-making in learning.
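The combination of the two analyses can be sketched in outline. The following Python sketch is not the mpia implementation (which is in R); the term-document counts and the similarity threshold are invented for illustration. It shows the LSA step (truncated SVD projecting learner contributions into a latent semantic space) feeding the SNA step (a network linking learners whose contributions land close together):

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are four learners'
# contributions (invented counts, purely for illustration).
X = np.array([
    [2, 1, 0, 0],   # "learning"
    [1, 1, 0, 0],   # "analytics"
    [0, 0, 2, 1],   # "network"
    [0, 0, 1, 2],   # "graph"
], dtype=float)

# LSA step: truncated SVD projects the documents into a low-rank
# latent semantic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T   # document coordinates, one row each

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_12 = cos(docs[0], docs[1])   # shared vocabulary -> close in space
sim_13 = cos(docs[0], docs[2])   # disjoint vocabulary -> far apart

# SNA step: threshold semantic proximity into an adjacency matrix,
# i.e. a network of learners connected through what they wrote about.
adj = [[i != j and cos(docs[i], docs[j]) > 0.9 for j in range(4)]
       for i in range(4)]
```

Thresholding the cosine similarities turns the semantic space into a graph on which standard SNA measures can then be computed.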
The package can be downloaded here:
Photo: Sonia Bernac
The Performance Augmentation Lab (PAL) seeks to close the dissociative gap between abstract knowledge and its practical application, researching radically new forms of linking directly from knowing something ‘in principle’ to applying that knowledge ‘in practice’ and speeding its refinement and integration into polished performance.
Rooted in Computer Science, the lab sets strategic focus on three interconnected areas of research. Within Human-Computer Interaction, we focus on Augmented Reality and Wearable Technologies. Within Information Systems, we design, develop, and validate Technology-Enhanced Learning, Knowledge-Based Systems, and Job Performance Aids. Within Data Science, we concentrate on Performance Analytics and Optimisation.
Over the course of the year 2017, we have grown the lab and the network around it, inside and outside of the University.
On top of consolidating the EU-funded WEKIT and TCBL projects, we have attracted more funding.
WEKIT, short for Wearable Experience for Knowledge Intensive Training, builds and validates the next generation of training technology, capturing expert experience where it emerges and sharing it with trainees intuitively, immersively, in situ.
In TCBL, short for Textile Clothing and Business Labs, we develop novel knowledge-sharing technology for the sector to help innovation spread more efficiently and effectively.
The Learning Analytics for Augmented Reality (LAAR) project will complement our endeavours on analytics and assessment.
The Augmented Reality for Formal European University Education (AR-FOR-EU) will help establish a curriculum for Performance Augmentation.
We won the University’s Research Excellence Award and a Best Paper Award.
We received an industry grant from Daqri and from Pearson Education.
And finally, just as the year came to an end, we welcomed an artist in residence, sponsored by the Vertigo project.
2017 was a great year for the lab, but I am sure that 2018 will be equally, if not even more exciting — because Performance Augmentation is our calling!
Download the report:
This year has seen the first fusion of the Myo armband with visualisation in the HoloLens. Using high-speed Bluetooth communication, arm movements can be used to manipulate virtual objects in Augmented Reality. By attaching a virtual object (such as a model of a hand or a lightsabre) to the gyroscope and accelerometer sensors on the armband, it can be made to move around the user in a natural and believable way. The position of the centre of this rotation is based on the positional tracking of the HoloLens, effectively giving this wearable device a new appendage.
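In outline, the attachment anchors a rotation centre from the HoloLens’ positional tracking and rotates an offset vector by the armband’s orientation quaternion. A minimal Python sketch of this idea (the coordinate frame, offset lengths, and quaternion values are illustrative assumptions, not values from the actual Unity system):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Hypothetical frame: the armband streams an orientation quaternion,
# while the HoloLens' head tracking supplies the shoulder anchor point.
shoulder = np.array([0.0, 1.4, 0.0])   # anchor in world space (metres)
arm = np.array([0.0, 0.0, 0.6])        # forearm offset in the arm's frame

# A 90-degree rotation about the y axis, as the armband might report
# mid-swing.
angle = np.pi / 2
q = np.array([np.cos(angle / 2), 0.0, np.sin(angle / 2), 0.0])

# The hologram (e.g. a lightsabre hilt) follows the rotated offset.
hilt = shoulder + quat_rotate(q, arm)
```

Updating `q` from each IMU sample and `shoulder` from the HoloLens pose every frame yields the appendage-like motion described above.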
An integrated peripheral such as this is thought to have significant potential as a platform for interaction design in AR. The possible applications for interaction and use for this are as numerous and varied as the uses of our hands in everyday life. We will investigate applications that allow interaction with the AR spatial mapping as well as those that record, recognise and report on the use of a person’s hands, using multi-stream mining and classification-driven machine learning tools.
Since the armband is not equipped with a magnetometer, it suffers from yaw drift to such an extent that a hologram ends up no longer aligned with the person’s arm. The HoloLens’ hand-tracking ability was used to re-align and re-position the object before giving control back to the armband, whose resolution and sensitivity far exceed those of the hand-tracking software tools in the HoloLens.
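The re-alignment logic amounts to keeping a yaw offset that is re-anchored whenever a hand-tracking fix is available. A hypothetical Python sketch of this scheme (not the Unity code; the drift rate and angles are invented):

```python
import math

class YawCorrector:
    """Fuses a drifting IMU yaw with occasional absolute fixes."""

    def __init__(self):
        self.offset = 0.0  # radians added to the raw IMU yaw

    def on_hand_tracking_fix(self, imu_yaw, tracked_yaw):
        # Re-anchor: afterwards, corrected(imu_yaw) == tracked_yaw.
        self.offset = tracked_yaw - imu_yaw

    def corrected(self, imu_yaw):
        # Wrap into (-pi, pi] for stable hologram placement.
        y = imu_yaw + self.offset
        return math.atan2(math.sin(y), math.cos(y))

# Simulate a gyro accumulating 0.01 rad of drift per step.
true_yaw = 0.5
imu_yaw = true_yaw
for _ in range(100):
    imu_yaw += 0.01

drift = abs(imu_yaw - true_yaw)   # about 1 rad of misalignment

# One hand-tracking fix re-anchors the yaw; the armband then takes over
# again with its higher resolution and sensitivity.
corr = YawCorrector()
corr.on_hand_tracking_fix(imu_yaw, true_yaw)
residual = abs(corr.corrected(imu_yaw) - true_yaw)
```

Between fixes the armband drives the hologram alone, so the correction only needs to run when the hand is briefly visible to the HoloLens.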
The fusion was explored by Will Guest, of the Performance Augmentation Lab, using Unity 3D and the Windows 10 APIs. A major hurdle overcome was the use of asynchronous tasks (built on Microsoft’s SDK) within a non-thread-safe Unity environment. The result is a natural feel and immersive representation; the lightsabre shows intuitive movement and, of course, features hyper-realistic sound effects.
Yesterday we kicked off the artist residency funded by the Vertigo project. Yann Deval is visiting Oxford Brookes now and over the course of the next year to create a new arts installation and performance, ATLAS. Vertigo is teaming up artists with H2020-funded ICT projects, to foster dialogue and support public engagement in science. We are truly excited and look forward to working together on the fusion of AR and arts.
Just in time for our AR-FOR-EU kick-off meeting in Molde, Norway, November 14-16, 2017, we launched http://CodeReality.net as the public outlet of the project. CodeReality.net will be the gateway for developer education, helping to meet the growing demand of the creative and digital industries of today and tomorrow.
AR-FOR-EU is an Erasmus+ strategic partnership of five Higher Education institutions, funded by the European Union, in which we team up with universities in Germany, Norway, Russia, and the UK to support capacity building of new and emerging digital skills for Augmented Reality, produce two innovative courses on this topic, promote excellence in teaching accordingly, and design OERs and MOOCs.
Dr. Fridolin Wild gave a TEDx talk on ‘reality as a medium’, speaking about truth, reality, and perception, and how we can hack into perception to actually ‘make’ reality. The talk will be available online soon.
PAL kicked off a new project on effective learning analytics for Augmented Reality learning apps, project LAAR, at the beginning of October, with partners from Belgium, Denmark, Germany, and Liechtenstein.
The goal of LAAR is to develop, pilot, and validate an exhaustive set of formative assessment exercises for AR-based vocational training, involving interactive, sequential learning exercises linked up with a directory of competencies (such as ESCO or ETTE). The reusable formative exercises provide direct, smart feedback to the learner, while at the same time enabling the development of summative analytics.
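The link between formative exercises and a competency directory can be pictured with a small Python sketch (learner records and the ESCO-style identifier are invented for illustration): the same attempt records feed both the direct feedback to the learner and the summative analytics.

```python
from collections import defaultdict

# Each attempt at a formative exercise is tagged with a competency
# identifier (here an invented ESCO-style ID).
attempts = [
    # (learner, competency_id, passed)
    ("ana", "esco:weld-basic", True),
    ("ana", "esco:weld-basic", False),
    ("ben", "esco:weld-basic", True),
]

def formative_feedback(passed):
    # Direct, smart feedback to the individual learner.
    return "Well done!" if passed else "Check the torch angle and retry."

# Summative view: pass rate per competency across all learners.
totals = defaultdict(lambda: [0, 0])  # competency -> [passes, attempts]
for _, comp, passed in attempts:
    totals[comp][0] += int(passed)
    totals[comp][1] += 1
rate = {c: p / n for c, (p, n) in totals.items()}
```

Because every exercise carries a competency ID, the formative and summative layers stay aligned without duplicating any bookkeeping.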
Alla Vovk from the Performance Augmentation Lab participated in the ‘Reality, Virtually’ Hackathon, organised by the MIT Media Lab for the second year in a row.
Participants from all over the world came to Boston, Massachusetts, to hack reality and develop innovative software solutions in areas such as VR/AR for Good, Film & Journalism, Health & Medicine, Learning & Education, Industry, Art, Productivity, Advertising & Monetisation, Social Networking, and more.
Visiting the MIT Media Lab was an amazing experience! You get to work on every stage of the product life-cycle in a team, which is something that you can be pretty isolated from during a regular job — for example, I did both development and UX design work and got to work really closely with another UI designer and an artist. I do believe that hackathons are a great way of getting things done in the fastest way possible. They help you to polish your creativity by forcing you to think in different ways. Hackathons are also very valuable for making friends and meeting new people. — Alla Vovk
Alla led a team developing a smart-glasses Augmented Reality app for building Memory Palaces to enhance memorability, deployed on the Microsoft HoloLens. Find below a demo video and a more detailed project description.
AR Memory Palaces: Inspiration
We all have moments where we need to recall long lists of ideas, whether it is studying for a course, remembering words, or simply memorising the items on a shopping list. The Ancient Greeks used a mnemonic device called the Memory Palace for this, creating an imaginary place in the mind that acts as storage for knowledge.
The Memory Palace is a technique that lets a user walk through a familiar place in their mind (for example, a living room), attaching the objects to remember to distinct elements of the surroundings. Along the way, the user can use these elements to better recall the list of concepts attached. The idea is that people generally have better memory for objects in a place they know than for abstract ideas or words in isolation.
What it does
We introduce learning into the space you live in. In our application, you become Sherlock Holmes and map out a concept trail in your environment, building up your own memory palace and removing the cognitive burden of creating this map in your head. We use AR and the Microsoft HoloLens to build this knowledge map.
Using spatial mapping — one of the key features of the Microsoft HoloLens — we let people use the surrounding physical environment as a memory palace (your living room, office, etc.). Imagine your room as a whiteboard where all the things you want to remember are augmented objects (3D models, voice recordings, text annotations, images) that you can place in your own space. The application lets you build a cognitive map by taking objects from the library and attaching them to the locations in the physical world you want to associate them with. This method of loci allows you to connect a location with a specific concept you are trying to remember.
For example, imagine we are learning the periodic table of elements. You need to remember facts about potassium, so you select the object associated with that element, i.e., a banana, and place it in a chosen spot (such as on a pillow). Once the object is placed, you can also add further annotations to it. In the end, you can take a quiz and try to recall all the objects (and the connected information).
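The underlying data model is simple. As a hypothetical Python sketch (not the app’s actual code; the facts and locations are invented, reusing the potassium example above), a palace maps anchor locations to cue objects and the facts attached to them:

```python
class MemoryPalace:
    """Method of loci: anchor locations hold a cue object and a fact."""

    def __init__(self):
        self.loci = {}   # location -> (cue_object, fact)

    def place(self, location, cue_object, fact):
        self.loci[location] = (cue_object, fact)

    def quiz(self, location, answer):
        """Check a recall attempt against the fact stored at a location."""
        _, fact = self.loci[location]
        return answer == fact

palace = MemoryPalace()
# The banana-on-a-pillow example from the text.
palace.place("pillow", "banana", "potassium: symbol K, atomic number 19")
palace.place("bookshelf", "balloon", "helium: symbol He, atomic number 2")

ok = palace.quiz("pillow", "potassium: symbol K, atomic number 19")
```

In the HoloLens app the location keys correspond to spatial anchors from the room’s spatial mapping rather than strings, but the mapping from place to concept is the same.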
Try it out: https://github.com/Reality-Virtually-Hackathon/Within