SignAll is pioneering the first automated sign language translation solution, based on computer vision and natural language processing (NLP), to enable everyday communication between hearing individuals who use spoken English and deaf or hard-of-hearing individuals who use ASL. The SignAll prototype can currently translate part of an ASL user’s vocabulary. Researchers and leading tech companies have described SignAll as the most advanced automatic sign language translation system available worldwide. We believe that technology can remove the barriers between hearing and hearing-impaired people. We are working for a world where Deaf/HoH people can communicate with others spontaneously and effectively – anytime, anywhere.
Over 100 million people - more than 1% of the world’s population - are unable to hear. Being deaf from birth or childhood, many of these people use sign language as their primary form of communication.
There are several hundred sign languages around the world and these also have their own dialects. One of the most common of these is American Sign Language (ASL). More than 500,000 people use ASL in the US alone, and millions more use it worldwide.
Most hearing people don’t know that written English is only a second language for people who are born deaf. Although they can handle most matters in writing, there are official situations in which the cooperation of a sign language interpreter is needed, as they prefer communicating in their first language – sign language.
Background map: Areas where ASL or its dialect/derivative is the national sign language or is used alongside another sign language. (Background Pic Source: Wikipedia)
Stop by wherever you need to go, whether it’s a bank or a doctor’s office!
Sign what you need at the front desk, using SignAll!
The translation appears on the computer screen.
The prototype uses three ordinary web cameras, one depth sensor, and an average PC.
The depth sensor is placed in front of the sign language user at chest height and the cameras are placed around them. This allows the shape and the path of the hands and gestures to be tracked continuously. The PC syncs up and processes the images taken from different angles in real-time.
Having identified the signs from the images, a natural language processing module transforms the signs into grammatically correct, fully formed sentences. This enables communication by making sign language understandable to everyone.
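The capture–recognize–translate pipeline described above can be sketched as follows. This is a minimal illustration of the architecture only: every class, function, and gloss mapping here is a hypothetical placeholder, not SignAll’s actual API or recognition logic.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int
    timestamp: float  # capture time in seconds; used to align streams
    pixels: bytes     # stand-in for real image data

def sync_frames(streams):
    """Take the latest frame from each camera/sensor stream.

    Naive sync: assumes near-simultaneous capture; a real system
    would align frames by timestamp across all devices.
    """
    return [s[0] for s in streams if s]

def recognize_signs(frames):
    """Computer-vision stage: map synchronized frames to an ASL gloss sequence.

    Placeholder: a real recognizer tracks handshape, movement,
    orientation, and location across frames from multiple angles.
    """
    return ["STORE", "GO", "ME"]

def translate(glosses):
    """NLP stage: turn a gloss sequence into a grammatical English sentence.

    Placeholder grammar: a real module reorders and inflects words.
    """
    lookup = {("STORE", "GO", "ME"): "I am going to the store."}
    return lookup.get(tuple(glosses), " ".join(glosses).capitalize() + ".")

streams = [[Frame(i, 0.0, b"")] for i in range(4)]  # 3 web cameras + 1 depth sensor
frames = sync_frames(streams)
print(translate(recognize_signs(frames)))  # I am going to the store.
```

The modular split mirrors the prototype’s design: each stage can be swapped out independently as the underlying technology improves.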
The prototype has a modular construction, which allows components to be replaced as technology improves. This means that SignAll will become faster, smaller, and even more accurate over time.
Most people think that sign language is just about different hand movements. Sign language is actually much more complex than that.
Research has shown that fully automated sign language recognition requires a solution that combines all of these aspects. That is why, according to computer vision experts, the automated interpretation of sign language is one of the biggest challenges in the field.
Sign language has many different aspects that are combined to convey the intended meaning. These are:
Manual components (also called parameters) are the bases of signs. Their significance is so obvious that most (hearing) people think they make up the entirety of sign language. Signs have four (manual) parameters:
- Handshape: the arrangement of the fingers to form a specific shape
- Movement: the characteristic motion of the hands
- Orientation: the direction the palm faces
- Location: the place of articulation; can refer to the position of the hands themselves (relative to each other) or the position of markers (e.g. fingertips) relative to other places (chin, other wrist, etc.)
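The four manual parameters above can be modeled as a simple record, which makes the point that changing any single parameter can produce a different sign. The field values below are illustrative labels of our own, not entries from a real ASL lexicon.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ManualSign:
    handshape: str    # finger arrangement, e.g. "open-5"
    movement: str     # characteristic motion of the hands
    orientation: str  # direction the palm faces
    location: str     # place of articulation relative to the body

# Hypothetical entries: MOTHER and FATHER in ASL differ only in location
# (chin vs. forehead) while sharing the other three parameters.
mother = ManualSign("open-5", "tap", "palm-left", "chin")
father = ManualSign("open-5", "tap", "palm-left", "forehead")

assert mother != father  # one changed parameter yields a different sign
```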
Facial expressions, also referred to as non-manual components, convey important grammatical meaning too. They are formed from two components: the upper part (eyes and eyebrows: NMS – non-manual markers/signals) and the lower part (mouth and cheeks: MM – mouth morphemes).
Just like spoken languages, sign languages can be used at different levels and in different manners of communication. These are referred to as registers: intimate, consultative, casual, formal, and cold/distant – though the relevance of the last one is still debated. Signing also has different levels of politeness: not all signed phrases are polite or proper in all situations.
Prosody is the elusive component of language that subtly shapes the way we say what we say. It incorporates, among other things, the rhythmic and intonational features that allow us to perceive how linguistic units are combined. In ASL, this is realized in a visual-spatial manner, involving head and body movements, eye squints, eyebrow and mouth movements, the speed and formation of signs, pacing, and pausing. Interpreters often find such subtle components difficult to grasp.
Using space is a powerful tool in sign languages. It can be used to visually represent measures and arrangements of the objects and concepts in a dialogue.
SignAll spun off from the research lab of Dolphio Technologies, one of the most successful technology companies in Central and Eastern Europe. Our unique dream team of fifteen experienced researchers has gained more than 100 years of combined experience in computer vision and natural language processing. After closing a seed investment round of EUR 1.5 million from an international consortium, SignAll’s enthusiastic team is on its way to providing a technology that can improve the quality of life of the Deaf community.
A mathematician specialized in computer vision, Zsolt Robotka is SignAll’s co-founder and CEO. Mr. Robotka also founded Dolphio Technologies, together with János Rovnyai.
An economist, passionate entrepreneur, and co-founder of SignAll and Dolphio Technologies, Mr. Rovnyai is the CEO of Dolphio and supports SignAll’s financial management.
With an MBA from Purdue University (US) and qualifications as an economist, Ms Szeles manages the international expansion of the business. She has worked in business development for 15 years, and joined the team in 2013.
An applied mathematician specialized in data mining. He is a senior data scientist at SignAll.
A mathematician with 10+ years’ experience in computer vision, an ASL user, and a fanatical problem solver.
A leader in the Deaf community and a subject matter expert on the ADA, Sean provides SignAll with invaluable insights and advice regarding business operations and product management.
An economist, he gained his professional experience as a senior financial manager of several R&D projects. Laszlo Nagy is the CFO of SignAll Technologies.
We are proud to have Milad, our deaf developer, on our team. He motivates us every day to fulfill our mission.
Dawn Croasmun has a BA in American Sign Language (ASL) and Deaf Studies, with a minor in Linguistics. She completed her master’s in Sign Language Education at Gallaudet University in 2016. She teaches ASL at SignAll and is part of the Grammar team.
The project manager and SCRUM master of SignAll. He has advanced skills in agile software development and people-centric management.
A linguist and postdoctoral researcher, she has been in love with sign languages and engaged in sign language research for more than 20 years.
Come and join us to work on one of the most exciting, groundbreaking projects around! If you are interested in R&D or marketing, drop us a line! We also have a great internship program for ASL users – check it out below!
Mr Plaszkó, a well-known member of the Hungarian Deaf community, gave us an insight into his life.
As the team behind one of the biggest and most complex tech projects ever undertaken for automatic sign language recognition and translation, we receive many questions and media inquiries.
The M-Enabling Summit is a highly anticipated global conference where the leading technologies and innovations that promote accessibility for people of all abilities are presented annually.