
Welcome to the public release of the ARIA-VALUSPA Platform (AVP). The platform was developed as part of ARIA-VALUSPA, an EU Horizon 2020 project running from 1 January 2015 to 31 December 2017. The work was funded by the European Union's Horizon 2020 research and innovation programme, grant agreement No 645378. The AVP has been adopted by the Nottingham Biomedical Research Centre, which will maintain it until April 2022.

The current version (3.0) was released on 23 February 2018.

With AVP, you can build your own Virtual Humans. They can be run in Ogre3D or in Unity3D; the latter also supports Augmented and Virtual Reality settings. Two example videos can be found here:

Our Virtual Human technology is provided in a modular fashion, with distinct Behaviour Analysis, Dialogue Management, and Behaviour Generation modules. See the figure below for a schematic of the framework.

[Figure: ARIA Framework Architecture]

The Framework comes with the following features:

Behaviour Analysis

  • Automatic Speech Recognition (ASR, based on Kaldi)
  • Paralinguistic analysis (openSMILE)

Dialogue Management

  • Written in JSON/JavaScript-compliant Flipper 2.0
  • Outputs FML to the Behaviour Generation block

Behaviour Generation

  • GRETA for visual behaviour generation
  • CereProc's CereVoice for Text to Speech (high quality voices available separately from CereProc)
  • Unity3D port to put an AVP character in your own Game/AR/VR environment
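
To make the modular data flow concrete, the sketch below shows, in plain Java, the shape of a single pass through the three blocks: analysis results go into Dialogue Management, which produces FML, which the Behaviour Generation step then realises. All type and method names in this sketch are hypothetical and are not part of the AVP codebase; they only illustrate the architecture described above.

```java
// Hypothetical types only; not part of the AVP API.

/** What the Behaviour Analysis block might produce: a transcript plus paralinguistic cues. */
record AnalysisResult(String transcript, double arousal, double valence) {}

/** Dialogue Management: decides how to respond and expresses it as an FML string. */
interface DialogueManager {
    String toFml(AnalysisResult input);
}

/** Behaviour Generation: realises FML as synthesised speech and animation. */
interface BehaviourGenerator {
    void realise(String fml);
}

public class PipelineSketch {
    public static void main(String[] args) {
        // Stub implementations so the example runs; in the AVP these are separate modules.
        DialogueManager dm = input ->
                "<fml><!-- placeholder reply to: " + input.transcript() + " --></fml>";
        BehaviourGenerator generator = fml ->
                System.out.println("Realise behaviour for: " + fml);

        // One pass through the pipeline for a single user utterance.
        AnalysisResult analysed = new AnalysisResult("hello there", 0.4, 0.7);
        generator.realise(dm.toFml(analysed));
    }
}
```

In the actual platform these blocks are distinct modules rather than objects in one program, but the overall flow, from analysed user behaviour to FML to generated behaviour, has the same shape.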

This wiki contains information on how to install the Framework, the detailed documentation you will need when adapting it to your own needs, and some tutorials to get you started.