Computer learns sign language

19 Responses to “Computer learns sign language”

  1. bazzargh says:

    @Brainspore #2: actually they still do that (here in the UK). The BBC even has signed programmes online in iPlayer:

    http://www.bbc.co.uk/iplayer/categories/signed/

  2. mdh says:

    Uh oh, now Steve Jobs is going to know what I think of his OS.

  3. Church says:

    Lords, this was done (non-computer) for ASL back in the eighties or nineties.

    Signs have three vectors: hand shape, position, and movement. Once you get those down, only close homonyms (as it were) can be confused.

    So, cool that it was done by computer, but I’m curious as to why it was only 65% successful.

  4. Brainspore says:

    @ bazzargh #3:

    That’s interesting. Is there some advantage to BSL over closed captioning? Are there many people who are fluent in sign language but text-illiterate?

  5. Jerril says:

    #6: Sounds like a basic courtesy to present it in the signer’s native language, instead of in written English.

    Sign languages are, with very few exceptions, only vaguely related or even completely unrelated to spoken languages. ASL is not “American English, transcribed to gestures”, and as far as I know, BSL isn’t either. They are their own languages, with their own grammatical structures and distinct vocabularies.

    Written English is a second language for deaf children – and a very difficult one, because the child often has to learn the written form before learning the spoken form through lip reading.

    To my knowledge, there is no written form of any sign language yet – short of video recordings. A tad inconvenient for filling out forms and writing books…

  6. Church says:

    @8 Jerril

    There was, or is, but it came about in the late eighties/early nineties, just in time to be eclipsed by the internet. That’s what I was referencing in my previous post.

    Unfortunately, my google-fu is weak with this one…

  7. Cheqyr says:

    (@2 … from old SNL:)

    Chevy Chase: And now, as a public service to those of our viewers who have difficulty with their hearing, I will repeat the top story of the day, aided by the Headmaster of the New York School for the Hard of Hearing, Garrett Morris.

    [Garrett's face appears in a circle to Chevy's right]

    Chevy Chase: Our top story tonight…

    Garrett Morris: [screaming] OUR TOP STORY TONIGHT…

  8. Church says:

    Ah, here we go. It’s called Stokoe notation.
    http://en.wikipedia.org/wiki/Stokoe_notation

    It’s apparently been abandoned by the Deaf in favor of the newer ‘sign writing’ (which IMHO looks cartoonish in comparison; I’ll have to study it to see if it offers any advantages).

  9. Anonymous says:

    I believe that this will be a useful tool in advancing the rights and culture of the deaf. However I find it insulting that I cannot share this newscast with my deaf friends because there are no subtitles!

  10. Anonymous says:

    #8: “To my knowledge, there is no written form of any sign language yet – short of video recordings. A tad inconvenient for filling out forms and writing books…”

    Actually, there is: Sutton SignWriting http://www.omniglot.com/writing/signwriting.htm

  11. Anonymous says:

    I believe the narrator must have it wrong at 0:40.

    They want to use this software in order to generate “a very realistic signer”? Wouldn’t this technology be better applied to translate FROM sign language to text or spoken word rather than the opposite? No optical recognition is necessary to do what she says at 0:40; I don’t see how it applies, and it’s pretty unimaginative.

  12. Anonymous says:

    Nah, it isn’t that amazing.

    The most amazing thing was that monitor mind reader; if you’re not familiar with it, search the net and think how frequencies can read your mind.

  13. Anonymous says:

    Facial expression and lip pattern are a crucial component of British Sign Language syntax. Until this software analyses the face and lips along with the arms and hands, it’s never going to be very accurate.

    That’s only scratching the surface of the subtleties and complexities of the language that software designed solely to identify vocabulary would miss. You could literally write a book on what this software misses…

  14. Anonymous says:

    @6 “Lords, this was done (non-computer) for ASL back in the eighties or nineties.

    Signs have three vectors. Hand shape, position, and movement. Once you get those down, only close homonyms (as it were) can be confused. ”

    There’s more now. If I remember correctly, there’s now accepted to be about 6 orthogonal vectors in sign language, and they don’t think they’ve found all of them yet. IIRC the list includes handshape, orientation, direction, repetition, positioning, and another one I can’t remember.

    Also don’t forget that hands only convey about 50% of the meaning in sign language – the other 50% is facial expressions, eye gaze, head position, etc etc.

    And that’s without going into grammatical elements like indexing, referencing, topic marking, time lines, etc.

    Basically, computer sign translation is always gonna be crap. Computers have a hard time translating spoken English into written English, or from one language into another, and that’s with the billions pumped into those use-cases.

    The quality of those translations is so dire they’re accepted only with severe reluctance. Computerised sign translation is even worse, with far less money going into it. To say deaf people aren’t keen on it is a massive understatement.

  15. Ian70 says:

    Congratulations! With enough work a production company can have their very own computer-generated signer – no need for those pesky human beings for that job anymore.

  16. 54N71460 says:

    To me, it’s not surprising at all. Everything I know I learned from watching television.

  17. Brainspore says:

    @ Ian70 #1:

    I’m guessing that’s a joke, but I still remember when news broadcasts sometimes included a little picture-in-picture view of a person signing whatever the anchor was saying for the benefit of deaf viewers. They must have stopped doing that around the same time that somebody invented closed-captioning.

  18. Anonymous says:

    Learning signs is one thing; learning sign language is another thing entirely.

    Sign language has a very complex grammar and inflectional system (there are a number of regional and national sign languages, but I feel fairly sure that most if not all of them fit that description; I know American Sign Language does). What the computer has done is akin to learning individual lexemes in a spoken language without knowing how to correctly put them together.