Using Blender to teach children with autism sign language

Ok, so we have add-ons and scripts to play shape keys based on the vowels in text (a sketch of this follows below),

we have the ability to rig and animate actors,

a system could be made that converts words into sign language and plays them in real time…

A “sign language interpreter”
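To make that first piece concrete, here is a minimal sketch of the kind of script that keyframes shape keys from the vowels in a line of text. The object name “Face” and the vowel-named shape keys are assumptions for illustration, not a finished add-on:

```python
import bpy

def key_vowels(text, obj_name="Face", frames_per_vowel=5):
    # assumes a mesh object named "Face" with shape keys "A", "E", "I", "O", "U"
    obj = bpy.data.objects[obj_name]
    key_blocks = obj.data.shape_keys.key_blocks
    frame = bpy.context.scene.frame_current
    for ch in text.upper():
        if ch not in "AEIOU":
            continue
        kb = key_blocks[ch]
        # rest just before, peak on the vowel, rest just after
        kb.value = 0.0
        kb.keyframe_insert("value", frame=frame - 2)
        kb.value = 1.0
        kb.keyframe_insert("value", frame=frame)
        kb.value = 0.0
        kb.keyframe_insert("value", frame=frame + 2)
        frame += frames_per_vowel

key_vowels("hello world")
```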

This would help many children learn sign language through passive absorption (look up Gemiini).

A basic program would just be something that interacted with something like Gemiini to produce a “key” to help teach the signs and what they mean; after that they could watch YouTube, TV, or DVDs and learn by observation… (this is how my daughter seems to learn best… she learned to read from captions)

Another avenue: converting Leap Motion data into sign language for speech synthesis.
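For the Leap idea, a minimal sketch of reading raw hand data, assuming the classic Leap Motion v2 Python SDK (module `Leap`); mapping those positions onto actual signs would be the hard, unsolved part:

```python
import Leap  # classic Leap Motion v2 SDK; assumed installed

controller = Leap.Controller()
# NOTE: a freshly created controller may need a moment to connect;
# a real script would poll repeatedly or register a Leap.Listener.
frame = controller.frame()
for hand in frame.hands:
    # palm and fingertip positions are the raw material a classifier
    # would have to map onto signs
    print("palm:", hand.palm_position)
    for finger in hand.fingers:
        print("  tip:", finger.tip_position)
```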

Thanks for the read.

Happy Blending!

Is there any advantage to doing it via a copy of Blender on a PC, as opposed to more intimate methods such as mobile apps and the various ‘analog’ methods?

Just because one could perhaps use Blender for that type of thing doesn’t mean it’s the solution that should be used. There are dozens upon dozens of apps designed for autistic users on Android, for instance.

I am familiar with all of them,

there is nothing at present that can process a video and create a sign language caption.

my daughter already uses a PC keyboard and mouse,

I am talking about another avenue into a child who is otherwise sometimes impossible to reach.

Some of these children skip the language part of the brain and process words using the visual parts of the brain; this is why the approach is so promising.

I have seen a few autistic children who can’t speak, but can type, then read the text.

But is Blender the best way, though?

What I mean is that it would be naive to think that the BF has the only technology available to make this type of thing possible, and that assumes Blender’s tools and Python API are even capable of this (Blender has element tracking tools for compositing purposes but I don’t think it has speech recognition).

Python is powerful,

the idea is to do it for free, taking the path of least effort for the most output.

BTW, I have PECS etc.; this is something to use along with other methods.

Do you have any proof of an existing module that can do speech recognition? Speech recognition technology that can actually interpret your voice correctly has only recently come out.

Python isn’t doing that - it’s simply passing it on to Google’s servers, where the powerful C/C++ services convert it into text. That’s like saying that HTML can be used to analyse images because TinEye’s front-end is in HTML for the web-browser. :rolleyes:
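For what it’s worth, that is exactly the pattern the third-party SpeechRecognition package (`pip install SpeechRecognition`) follows; a minimal sketch, with “clip.wav” as a placeholder file:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("clip.wav") as source:  # placeholder audio file
    audio = recognizer.record(source)     # read the whole clip
# recognize_google() ships the audio to Google's service and returns text;
# Python itself does no recognition here
print(recognizer.recognize_google(audio))
```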

With that in mind, I think that the idea you have in regards to sign language interpretation / translation is interesting… but I do not see Blender as the best tool for the job. Square peg, round hole.

@BPR
To be honest I’m not sure what you are asking, but I’m reminded of this lightning talk. Have you seen it?

Just getting the input to output in order: you need audio capture converted to a visual display of sign language rather than normal text?
As for whatever captures the audio and converts it to text, you have many options there. And as someone who is neuro-atypical and in the range of hearing impaired (50% hearing in one ear still, and one hell of a lip reader, btw), this is a type of project I have given thought to from time to time.

As tedious as my solution could be: when I considered something like this in the past, the option my brother and I kicked around was formatting the hand gestures into a pictogram language format (as in many Asian languages), and when there was no suitable hand gesture to be had, it would spell out the words. But our intention at the time was to have something that could be copy/pasted on a computer.

As a stopgap solution, I think there is a text font made of the sign language hand gestures, although I am not sure whether it would suit your needs or not. http://www.lifeprint.com/asl101/pages-layout/gallaudettruetypefont.htm

Converting speech to text, then playing animations that are the words in sign language:

“Sign language captioning”

Import YouTube video -> export animated actor video to be overlaid in the corner of the video as a new video

Play that video for the kid instead of the normal version of a video they love anyway.

Profit = communication from a non-verbal child.
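A minimal sketch of the playback step, assuming an armature named “Signer” and one pre-made Blender action per word (named “sign_hello” and so on; all names are hypothetical), laid end to end on an NLA track:

```python
import bpy

def sequence_signs(words, armature_name="Signer"):
    arm = bpy.data.objects[armature_name]
    arm.animation_data_create()
    track = arm.animation_data.nla_tracks.new()
    frame = 1
    for word in words:
        action = bpy.data.actions.get("sign_" + word)
        if action is None:
            continue  # no clip for this word; fingerspelling could go here
        strip = track.strips.new(word, int(frame), action)
        frame = strip.frame_end + 1  # next sign starts after this one ends

sequence_signs("hello world".split())
```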

I can only imagine that a lot of machines out there will not have the processing power to extract language from a video, parse through the vocals (no matter how grungy they sound) to get the words, and then control a character’s hands in real time. (This is more advanced than the existing technology where you speak into something and it gets processed.)

Plus, what happens if the voice in the video has a thick accent or is partially obscured by noise? If the algorithm is not perfect, you might end up giving the wrong hand signals to the child (which would be worse than not learning at all).

I was talking about running videos through one at a time, polishing each, and then using it (not real time).

Just using Blender in the workflow, automating as much as possible, and then correcting where needed.

Like captioning a movie.
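For that offline pass, ordinary .srt caption files make an easy intermediate; a minimal sketch (plain Python, no Blender needed) that pulls out (start-seconds, text) pairs ready to be converted to frame numbers:

```python
import re

TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+) --> ")

def read_srt(path):
    """Return [(start_seconds, text), ...] from a .srt caption file."""
    cues = []
    for block in open(path, encoding="utf-8").read().split("\n\n"):
        lines = [l for l in block.strip().splitlines() if l]
        if len(lines) < 3:
            continue  # need an index, a timestamp, and at least one text line
        m = TIME.match(lines[1])
        if not m:
            continue
        h, mnt, s, ms = map(int, m.groups())
        cues.append((h * 3600 + mnt * 60 + s + ms / 1000.0,
                     " ".join(lines[2:])))
    return cues
```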

@blueprintrandom

If you had to do this fast, you could use a model from MakeHuman and import that into Blender. That would give you both the hand signs and the ability to adjust facial expressions as needed.

And ace, most people who use sight to compensate for a hearing loss also learn how to read lips to an extent. As a 3D artist you should know about lip syncing to match the lip shape to the phonemes in use. Part of the learning process of using sight to compensate for a lack of audio is learning to spot your own errors in interpretation, so I fail to see how it would be worse for the child. Learning to deal when things go foul is an essential life skill. It is not child abuse to let your kid learn through mild adversity; it is what gets them ready for the real world and helps them move out of their parents’ house and engage the real world rather than video-game land.

A Python API that uses Google is a smart way to do it, as voice-to-text depends on training data, and they have plenty of such data thanks to Android phones’ Google voice search…

Another option might work too, since YouTube already adds captions to some content. This might be more directly usable; see here: https://www.youtube.com/watch?v=drTyNDRnyxs
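Pulling those captions down can be scripted too; a sketch assuming the third-party youtube_transcript_api package (`pip install youtube-transcript-api`), which returns each caption line with its start time:

```python
from youtube_transcript_api import YouTubeTranscriptApi

VIDEO_ID = "drTyNDRnyxs"  # the video linked above, as an example
for cue in YouTubeTranscriptApi.get_transcript(VIDEO_ID):
    # each cue is a dict with the caption text, start time and duration
    print(cue["start"], cue["text"])
```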

But the large part, making a word encyclopedia for use in animation… that would still be huge to generate.
Maybe with some luck some university has already done this in some BVH format?

Next you would need to combine them.
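If such a library existed as one BVH file per word, importing it could be automated; a sketch (the folder layout and “sign_” naming are assumptions) using Blender’s stock BVH importer:

```python
import os
import bpy

def import_sign_clips(folder):
    for fname in sorted(os.listdir(folder)):
        if not fname.lower().endswith(".bvh"):
            continue
        word = os.path.splitext(fname)[0]  # e.g. hello.bvh -> "hello"
        bpy.ops.import_anim.bvh(filepath=os.path.join(folder, fname))
        # the importer leaves the new armature active; rename its action
        # so a sequencer script can find it later as "sign_<word>"
        obj = bpy.context.object
        if obj.animation_data and obj.animation_data.action:
            obj.animation_data.action.name = "sign_" + word

import_sign_clips("/path/to/sign_clips")  # hypothetical folder
```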

I once created this for someone who had some ideas about sign language too; I just made a model that only had the hands, which was what he was interested in. Here it is: Hands3.blend (1.44 MB)