Basic Configuration
D0natell4 edited this page Oct 31, 2018 · 7 revisions
We can explain the role of each module by following the workflow that FML and BML files go through as they are processed and translated into actual verbal and non-verbal behaviours. The workflow can start at a very high level, with an FML file, or at a lower level, with a BML file. FML represents what the agent wants to achieve: its intentions, goals and plans. BML lets us specify the signals to be expressed through the agent's communication modalities (head, torso, face, gaze, body, legs, gesture, speech, lips).
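As an illustration, a minimal BML fragment might combine a speech signal with a head movement synchronized to a point in the utterance. This is a sketch following the general BML 1.0 convention; the exact element names and schema accepted by Greta may differ.

```xml
<!-- Hypothetical BML sketch: element names follow the BML 1.0 convention,
     not necessarily the exact schema Greta expects. -->
<bml id="bml1">
  <!-- A speech signal with a synchronization point inside the text -->
  <speech id="speech1">
    <text>Hello, <sync id="tm1"/> nice to meet you.</text>
  </speech>
  <!-- A head nod starting at the sync point tm1 of speech1 -->
  <head id="head1" lemma="NOD" start="speech1:tm1"/>
</bml>
```

An FML file would sit one level above this, describing the communicative intention (e.g. a greeting) and leaving it to the behaviour planner to choose which such signals realize it.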