I have just started exploring Blender.
I am looking for the following:
If I build a human character (for a virtual-assistant kind of application), can I have lip syncing and voice for dynamic sentences, with support for markup such as EmotionML or SSML? These sentences would be sent from an external server.
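To make the lip-sync part concrete, here is a rough sketch of what I imagine: a TTS engine speaks the SSML text and emits phoneme timing metadata, which gets mapped onto shape keys on the character mesh. All names here (the viseme table, the shape-key names, the frame rate) are placeholders, not an actual Blender or TTS API:

```python
# Hypothetical viseme -> shape-key mapping; the shape-key names are
# whatever the character mesh actually defines, these are examples.
VISEME_SHAPE_KEYS = {
    "AA": "mouth_open",
    "M":  "lips_closed",
    "F":  "lower_lip_bite",
}

def keyframes_for(phonemes):
    """Turn (phoneme, start_seconds) pairs into (shape_key, frame) pairs.

    In Blender, each resulting pair would become a keyframe on
    mesh.shape_keys.key_blocks[name].value via keyframe_insert().
    """
    fps = 24  # assumed scene frame rate
    return [(VISEME_SHAPE_KEYS[p], round(t * fps))
            for p, t in phonemes
            if p in VISEME_SHAPE_KEYS]

print(keyframes_for([("M", 0.0), ("AA", 0.1)]))
# -> [('lips_closed', 0), ('mouth_open', 2)]
```

Is something along these lines feasible, or is there an existing add-on that already does this?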
Is there any way to control the animation from an external server using a message broker (such as ActiveMQ) or a similar communication channel?
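For the server side, this is the kind of flow I have in mind: the external server sends a small JSON command (over ActiveMQ/STOMP, or even a plain TCP socket), and a Python handler running inside Blender parses it and drives the animation. The message format and function names below are made up for illustration; the bpy calls are only sketched in comments since they cannot run outside Blender:

```python
import json

def parse_command(payload: str) -> dict:
    """Validate an incoming animation command from the external server."""
    msg = json.loads(payload)
    if msg.get("type") not in {"play_action", "speak"}:
        raise ValueError("unknown command type: %r" % msg.get("type"))
    return msg

def apply_command(msg: dict) -> str:
    # Sketch only: inside Blender this would run from a timer or modal
    # operator and apply the command with bpy, e.g.:
    #   import bpy
    #   obj = bpy.data.objects["Character"]
    #   obj.animation_data.action = bpy.data.actions[msg["action"]]
    if msg["type"] == "play_action":
        return "playing action %s" % msg["action"]
    return "speaking: %s" % msg["text"]

cmd = parse_command('{"type": "play_action", "action": "Wave"}')
print(apply_command(cmd))
# -> playing action Wave
```

Would running a listener like this inside Blender's Python environment be a reasonable approach, or is there a better-supported way to drive animation from outside?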