Competence is a human potential for action; when put into action, it leads to performance. With tracking, processing, and analysis technology, traces of performance allow us to reason about and predict the underlying competence.
We deploy data science techniques and technologies to systematically develop a rich toolkit for human performance extraction, analysis, and prediction. In a sense, in this area we do data science with a specialisation in knowledge media.
Learning Analytics in Augmented Reality (or LAAR for short), the EU-funded project I’ve been working on as part of my position in PAL, looks at how technology can support learning. In particular, it looks at work-based training, where augmented environments will soon be training new employees, teaching them and evaluating their performance. In order to perform the learning analytics, we must gather data on the users while they use the prototype applications. The Experience API (or xAPI) was chosen to do this.
TinCan is an implementation of the xAPI standard, provided by Rustici Software. It has been ported to a number of languages, but for HoloLens / Unity development we are only concerned with the C# version, which can be found here.
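To give a flavour of what the library does, here is a minimal sketch of recording a single xAPI statement with TinCan.NET. The endpoint, credentials, agent, and activity ID are illustrative placeholders, not values from our project:

```csharp
using System;
using TinCan;
using TinCan.LRSResponses;

public class XapiExample
{
    public static void Main()
    {
        // Connect to a Learning Record Store (endpoint and credentials are placeholders)
        var lrs = new RemoteLRS("https://lrs.example.com/xapi/", "key", "secret");

        // An xAPI statement is essentially an actor-verb-object triple
        var statement = new Statement
        {
            actor = new Agent { name = "Trainee", mbox = "mailto:trainee@example.com" },
            verb = new Verb(new Uri("http://adlnet.gov/expapi/verbs/completed"))
            {
                display = new LanguageMap()
            },
            target = new Activity { id = "http://example.com/activities/engine-inspection" }
        };
        statement.verb.display.Add("en-US", "completed");

        // Send the statement to the LRS and check the response
        StatementLRSResponse response = lrs.SaveStatement(statement);
        Console.WriteLine(response.success ? "Statement stored" : "Failed: " + response.errMsg);
    }
}
```

It is the `RemoteLRS` class making these HTTP calls that runs into the compilation problems described below.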
Now, I had a few problems making this work in Unity… When you drop TinCan into your project, it’s not going to compile. A few of its dependencies fail because parts of System.Net are not included in the Mono/UWP subset of .NET that Unity builds against. I’ve spent more time than I’d like to admit on this, so let’s skip straight to the solutions. Here is how you can fix all those red lines:
Fix 1: Use UnityWebRequest
This works and has been tested on Unity 2017.2.1f1 (and should work for all newer versions too). I’ve used this to build HoloLens applications and have also seen it deployed in an iOS build of a Unity project.
This solution replaces the “System.Net” calls with Unity’s native “UnityWebRequest”. It’s embarrassingly simple.
- Scripting Runtime Version: Experimental (.NET 4.6 Equivalent)
- Api Compatibility Level: .NET 4.6
- Replace all references to HttpWebRequest with UnityWebRequest (Download it here)
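As a rough sketch of what the replacement looks like, here is a hypothetical POST of an xAPI statement using the Unity-native class (the endpoint, headers, and class name `XapiSender` are illustrative placeholders). Note that, unlike the blocking HttpWebRequest calls, UnityWebRequest is used from a coroutine:

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class XapiSender : MonoBehaviour
{
    // Placeholder endpoint; in practice this comes from your LRS configuration
    private const string Endpoint = "https://lrs.example.com/xapi/statements";

    public void Send(string statementJson, string authHeader)
    {
        StartCoroutine(PostStatement(statementJson, authHeader));
    }

    private IEnumerator PostStatement(string json, string auth)
    {
        byte[] body = Encoding.UTF8.GetBytes(json);
        using (var request = new UnityWebRequest(Endpoint, UnityWebRequest.kHttpVerbPOST))
        {
            request.uploadHandler = new UploadHandlerRaw(body);
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            request.SetRequestHeader("X-Experience-API-Version", "1.0.3");
            request.SetRequestHeader("Authorization", auth);

            // Yields until the request completes, without blocking the main thread
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
                Debug.LogError("xAPI POST failed: " + request.error);
            else
                Debug.Log("xAPI POST succeeded: " + request.responseCode);
        }
    }
}
```

The coroutine shape is the main structural change when porting the library’s synchronous HTTP calls.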
Fix 2: Use IL2CPP
This works and has been tested on Unity 2018.2.2f1.
Another solution, which works with the latest Unity release, is to:
- Go to File > Build Settings… > Player Settings… > Other Settings > Configuration > Scripting Backend
- Set the “Scripting Backend” to “IL2CPP”
- Replace the reference “System.Web.WebUtility” with “System.Net.WebUtility”
Without these changes the code will still not compile, throwing up a number of errors: when you hit compile, the project runs into the same dependency problems that occurred in 2017, because Unity only exposes a subset of .NET. The solution I found that seemed to work for Unity 2018.2.2f1 involved making those few basic changes to the RemoteLRS class and setting the Scripting Backend to IL2CPP. (The Mixed Reality Toolkit sets this to .NET by default, but this version of Unity pops up a warning that the .NET backend is being deprecated. This is mentioned here, which also notes that UWP is fully supported in IL2CPP, so we will probably all be compiling to IL2CPP in the future no matter what.)
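For clarity, the namespace change above is a one-line edit wherever the library URL-encodes values (the variable names here are illustrative; `UrlEncode` is the kind of call affected):

```csharp
// Before: fails to resolve, since System.Web is not available in Unity's .NET subset
// string encoded = System.Web.WebUtility.UrlEncode(value);

// After: the equivalent helper lives in System.Net
string encoded = System.Net.WebUtility.UrlEncode(value);
```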
Thank you for reading, I hope this post has been helpful. Any thoughts, questions, comments, criticisms? I’ll happily answer all.
Just in time for Halloween, we have finalised work on a major release of ‘WEKIT.one’, our next-generation app for wearable experiences in knowledge-intensive training.
The development of the experience-capturing software has been led by members of the Performance Augmentation Lab. This is one of the first tools of its kind that allows content creation to be done completely within AR. Using a HoloLens and other wearable sensors, the software guides experts to record immersive training procedures using all available AR content. Blending 2D and 3D instruction into the workplace creates a far richer and more interactive training experience.
The expert works through the procedure, capturing their actions, thoughts, and guiding instruction step by step. We are able to capture their movement in and around the workplace, their hand positions, and even some additional biophysical signals, such as heart rate variability or galvanic skin resistance. With just the technology at hand, trainees can now visualise the expert, listen to live guidance, and have access to on-demand knowledge about the task at hand.
So far we have seen experts in the fields of aircraft maintenance, radiology and astronaut training use this software and, in 2019, we aim to establish new collaborations within the university, within Oxford and abroad – most imminently with the European Space Agency.
In a presentation for the Oxford Brookes Computer Science department, Carla Marcolin talked about the work-in-progress “Business Analytics for Unstructured Data”. The main objective of the work is to help bridge the gap between developed and applied Machine Learning and NLP (Natural Language Processing) techniques, helping businesses to make better decisions. In the talk, she presented the structure of a classifier based on TripAdvisor comments, along with some next steps. The subsequent high-quality discussion helped to shape and improve the work’s aspirations. The slides can be found here.
PAL kicked off a new project on effective learning analytics for Augmented Reality learning apps, project LAAR, at the beginning of October, with partners from Belgium, Denmark, Germany, and Liechtenstein.
The goal of LAAR is to develop, pilot, and validate an exhaustive set of formative assessment exercises for AR-based vocational training, involving interactive, sequential learning exercises linked up with a directory of competencies (such as ESCO or ETTE). The reusable formative exercises provide direct, smart feedback to the learner, while at the same time enabling the development of summative analytics.
Last week in Tallinn, Alla Vovk presented the paper “Affordances for Capturing and Re-enacting Expert Performance with Wearables” at the 12th European Conference on Technology Enhanced Learning (EC-TEL 2017), written by Will Guest, Fridolin Wild, Alla Vovk, Mikhail Fominykh, Bibeg Limbu, Roland Klemke, Puneet Sharma, Carl H Smith, Jazz Rasool, Soyeb Aswat, Kaj Helin, Daniele Di Mitri, and Jan Schneider. You can watch the presentation in 360°, including the Q&A session.
The WEKIT.one prototype is a platform for immersive procedural training with wearable sensors and Augmented Reality. Focusing on capture and re-enactment of human expertise, this work looks at the unique affordances of suitable hard- and software technologies. The practical challenges of interpreting expertise, using suitable sensors for its capture, and specifying the means to describe and display it to the novice are of central significance here. We link affordances with hardware devices, discussing their alternatives, including the Microsoft HoloLens, Thalmic Labs MYO, Alex Posture sensor, MyndPlay EEG headband, and a heart rate sensor. Following the selection of sensors, we describe integration and communication requirements for the prototype. We close with thoughts on the wider possibilities for implementation and next steps.
Industry 4.0 is on the rise, and this coordinated push for automation, big data, and the internet-of-things in the smart factory is already causing (and will continue to cause) disruption in the job market. New skills for ‘new collar’ jobs are needed, and intelligent assistance systems with Augmented Reality, Smart Glasses, and other forms of wearable computing may help to deliver them.
In this talk, Dr. Wild introduced the concept of Performance Augmentation and illustrated how the challenges of the future can be met, with the help of several examples of intelligent training and live guidance applications in aircraft maintenance, space assembly, and medical diagnostics.
The 12th European Conference on Technology-Enhanced Learning
12-15 September 2017, Tallinn University, Tallinn, Estonia
The European Conference on Technology-Enhanced Learning (EC-TEL) engages researchers, practitioners, educational developers, entrepreneurs and policy makers to address current challenges and advances in the field. This year’s theme of ‘Data Driven Approaches in Digital Education’ focuses on the new possibilities and challenges brought by the digital transformation of the education systems. The increasing amount of data that can be collected from learning environments but also various wearable devices and new hardware sensors provides plenty of opportunities to rethink educational practices and provide new innovative approaches to learning and teaching. This kind of data can provide new insights about learning, inform individual and group-based learning processes and contribute to a new kind of data-driven education for the 21st century.
The conference will explore how data can be used to change and enhance learning in different ways and to collect evidence for technological innovations in learning: for instance multimodal data, personal data stores, data visualisations for learner and teacher awareness, feedback processes, predictions of learning progress, personalisation and adaptation, as well as data-driven learning designs, or ethics and privacy policies for the data-driven future.
Papers should consider data at different scales (individual, group, class, massive) and different dimensions (cognitive, emotional, behavioural) of learner engagement with the technology. We look forward to receiving papers that address the conference themes and are informed by theories of pedagogy and evidence of effective practice. Qualitative papers, robust meta-analyses, and visionary new educational designs are also welcome.
The venue for this year’s conference is Estonia’s capital Tallinn, the best-preserved medieval city in Northern Europe, directly on the Baltic Sea.
Full Papers, Short Papers, Posters & Demonstrations:
- 3 April 2017 – Mandatory submission of an abstract
- 10 April 2017 – Submission of full version
- 29 May 2017 – Notification of acceptance
- 26 June 2017 – Camera-ready versions

Workshops:
- 10 April 2017 – Submission of workshop proposal (abstract not needed)
- 5 May 2017 – Workshop notification
- 12 and 13 September 2017 – Workshops

Project Meetings:
- 20 June 2017 – Room reservation for project meetings
- 11, 12 and 13 September 2017 – Project meetings

Main Conference:
- 24 July 2017 – Early-bird registration ends
- 14 and 15 September 2017 – Main conference, Tallinn University, Estonia

Doctoral Consortium:
- 22 May 2017 – Doctoral Consortium application submission
- 19 June 2017 – Doctoral Consortium application notification
- 31 July 2017 – Doctoral Consortium reviews
- 28 August 2017 – Doctoral Consortium camera-ready versions
- 13 September 2017 – Doctoral Consortium
Submissions will be handled through EasyChair (https://easychair.org/conferences/?conf=ectel2017). All papers will be reviewed in a single-blind review process. Accepted papers will be published in the conference proceedings. As every year, the proceedings will be published in the Springer “Lecture Notes in Computer Science” (LNCS) series. Use of the supplied template is mandatory: http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
- Full Papers: max. 14 pages, published in proceedings
- Short Papers: max. 6 pages, published in proceedings
- Demonstration Papers: max. 4 pages (published in proceedings) plus max. 2 additional pages describing the demo (not published in proceedings)
- Poster Papers: max. 4 pages (published in proceedings)
- Workshop proposals: use the provided form (not published in proceedings)
Katrien Verbert, KU Leuven, Belgium
Élise Lavoué, Jean Moulin Lyon 3 University, France
Hendrik Drachsler, Open University & ZUYD University of Applied Sciences, Netherlands
Olga C. Santos, UNED, Spain
Luis P. Prieto, Tallinn University, Estonia
Poster and Demonstration Chairs:
Mar Pérez-Sanagustín, PUC Chile
Julien Broisin, University of Toulouse, France
Sharon Hsiao, Arizona State University, USA
Doctoral Consortium Chairs:
Katherine Maillet, Institut Mines-Télécom, Télécom Ecole de Management, France
Lone Dirckinck-Holmfeld, Aalborg University, Denmark
Ellen Rusman, Open University of the Netherlands, Netherlands
Local Organization Chairs:
Tobias Ley, Tallinn University, Estonia
Kairit Tammets, Tallinn University, Estonia
Steering Committee Representative:
Ralf Klamma, RWTH Aachen University, Germany
Kadri-Liis Kusmin, Proekspert and Tallinn University, Estonia
We’ve given the CRAN task view on Natural Language Processing an overhaul and added the following packages to the list:
- gutenbergr allows downloading and processing public domain works in the Project Gutenberg collection. Includes metadata for all Project Gutenberg works, so that they can be searched and retrieved.
- hunspell is a stemmer and spell-checker library designed for languages with rich morphology and complex word compounding or character encoding. The package can check and analyze individual words as well as search for incorrect words within text, LaTeX, or (R package) manual documents.
- monkeylearn provides a wrapper interface to machine learning services on Monkeylearn for text analysis, i.e., classification and extraction.
- mscstexta4r provides an interface to the Microsoft Cognitive Services Text Analytics API and can be used to perform sentiment analysis, topic detection, language detection, and key phrase extraction.
- mscsweblm4r provides an interface to the Microsoft Cognitive Services Web Language Model API and can be used to calculate the probability for a sequence of words to appear together, the conditional probability that a specific word will follow an existing sequence of words, get the list of words (completions) most likely to follow a given sequence of words, and insert spaces into a string of words adjoined together without any spaces (hashtags, URLs, etc.).
- PGRdup supports fuzzy, phonetic and semantic matching of words. In particular, the DoubleMetaphone function converts strings to double metaphone phonetic codes.
- phonics provides a collection of phonetic algorithms including Soundex, Metaphone, NYSIIS, Caverphone, and others.
- quanteda supports quantitative analysis of textual data.
- tesseract is an OCR engine with unicode (UTF-8) support that can recognize over 100 languages out of the box.
- text2vec provides tools for text vectorization, topic modeling (LDA, LSA), word embeddings (GloVe), and similarities.
- tidytext provides means for text mining for word processing and sentiment analysis using dplyr, ggplot2, and other tidy tools.
Dr Fridolin Wild will be giving a keynote at the Orphee Rendezvous in Font Romeu, France, to help shape the future research agenda of the French technology-enhanced learning R&D community. Orphee is a network of networks with over 30 partner organisations. The retreat takes place January 31 and February 1, 2017, in the Pyrenees and will bring together experts in the field. Dr Wild will speak about Performance Augmentation.
Here’s the abstract: Augmented Reality (AR) has gained momentum in recent years, branching out beyond mere object superimposition in marketing to more complex use cases. Unlike Virtual Reality, AR refers to enhancing regular human perception with additional, artificially generated sensory inputs, merging natural and digital content into a combined experience. Obviously, such novel technology is relevant to education and training. AR offers potential especially for human performance augmentation: to improve the efficiency and effectiveness of learners through extended live guidance. In this talk, Dr. Wild will introduce the concept of Performance Augmentation and report on the latest findings from the R&D projects ARPASS, WEKIT, and TCBL.
The Lecture Notes in Computer Science volume (LNCS 6964) of the 6th European Conference on Technology-Enhanced Learning, co-edited by Dr. Fridolin Wild, reached a total of 41,166 chapter downloads (more than 11k of these in 2015) for the eBook on SpringerLink. This means the book was among the top 25% most downloaded eBooks in the relevant Springer eBook Collection in 2015.