Touchpad Figure Recognition
Our project implements a touchpad input system that converts user input into printed characters. Currently, the device recognizes only the 26 letters of the alphabet, but our training system could easily be generalized to figures of arbitrary shape, including alphanumerics, punctuation, and other symbols. The user draws a figure or character on the touchpad with a stylus, and the recognized character is shown on an LCD display. Pushbutton controls allow the user to format the text on the display.
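As a rough illustration of how a trained system of this kind might map a drawn stroke to a letter, the sketch below shows a simple nearest-template classifier over resampled stroke points. The feature format, template storage, and distance metric here are assumptions for illustration only, not the project's actual training or recognition algorithm.

```c
/*
 * Illustrative sketch only: a nearest-template classifier over
 * fixed-length stroke feature vectors.  The template format, feature
 * extraction, and distance metric are assumptions, not the project's
 * actual method.
 */
#include <float.h>

#define N_POINTS   16          /* points the raw stroke is resampled to */
#define N_CLASSES  26          /* letters A-Z                           */

typedef struct {
    float x[N_POINTS];
    float y[N_POINTS];
} Stroke;

/* One stored template per class, filled in during a training pass. */
static Stroke templates[N_CLASSES];

/* Sum of squared point-to-point distances between two strokes. */
static float stroke_distance(const Stroke *a, const Stroke *b)
{
    float d = 0.0f;
    for (int i = 0; i < N_POINTS; i++) {
        float dx = a->x[i] - b->x[i];
        float dy = a->y[i] - b->y[i];
        d += dx * dx + dy * dy;
    }
    return d;
}

/* Return the index (0 = 'A') of the closest stored template. */
int classify(const Stroke *input)
{
    int best = 0;
    float best_d = FLT_MAX;
    for (int c = 0; c < N_CLASSES; c++) {
        float d = stroke_distance(input, &templates[c]);
        if (d < best_d) {
            best_d = d;
            best = c;
        }
    }
    return best;
}
```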
We chose this project because touchscreens and touchpads are prevalent today in many new technologies, especially with the recent popularity of smartphones and tablet PCs. We wanted to explore the capabilities of such a system and were further intrigued by our research into different letter-recognition methods. Finally, we have had previous course experience in signal processing, computer vision, and artificial intelligence; we feel that this project was an excellent way to synthesize all of this knowledge.
Upon completion, we decided to extend our project by interfacing it with a project created by another group. Jun Ma and David Bjanes created a persistence-of-vision display; we use a wireless transmitter to send text, which is then shown on their display. The same pushbutton controls may be used to format the text on both the LCD display and the POV display.
Conclusions
Analysis
Our design far exceeded our initial expectations. We achieved near-perfect character recognition, and users who tested our board reported that it was accurate, quick, and easy to use. Though not as fast, our touchpad would serve as an excellent and novel alternative input system to a traditional keyboard.

With more time, we would implement more advanced classification algorithms to detect similarity between single figures. Such a feature could be used to perform tasks like signature verification. Furthermore, we would expand the classification space to include all alphanumerics as well as punctuation marks and symbols, increasing the user's freedom in what he or she can write to the LCD.
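One candidate measure of similarity between single figures is dynamic time warping (DTW) over sampled stroke coordinates; the sketch below is a hypothetical direction for such an extension, not an algorithm used in the project.

```c
/*
 * Illustrative sketch of one possible figure-similarity measure:
 * dynamic time warping (DTW) over 1-D stroke samples.  Hypothetical
 * future-work direction only, not part of the project's classifier.
 */
#include <float.h>
#include <math.h>

#define MAX_LEN 32

/* DTW distance between sequences a[0..na-1] and b[0..nb-1]. */
float dtw_distance(const float *a, int na, const float *b, int nb)
{
    static float cost[MAX_LEN + 1][MAX_LEN + 1];

    for (int i = 0; i <= na; i++)
        for (int j = 0; j <= nb; j++)
            cost[i][j] = FLT_MAX;
    cost[0][0] = 0.0f;

    for (int i = 1; i <= na; i++) {
        for (int j = 1; j <= nb; j++) {
            float d = fabsf(a[i - 1] - b[j - 1]);
            /* Cheapest of the three ways to reach cell (i, j). */
            float best = cost[i - 1][j];
            if (cost[i][j - 1] < best)     best = cost[i][j - 1];
            if (cost[i - 1][j - 1] < best) best = cost[i - 1][j - 1];
            cost[i][j] = d + best;
        }
    }
    return cost[na][nb];   /* smaller means more similar figures */
}
```

Two drawings of the same signature would produce a small DTW distance even if drawn at different speeds, so a threshold on this distance could serve as a basic verification test.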
We conformed to the SPI standard by using the correct wire interface and by monitoring the bit-level transmission and reception of data to check for accuracy.
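As a concrete reference for that per-bit check, the sketch below bit-bangs a mode-0, MSB-first SPI byte transfer so the wire-level behavior is explicit. The pin-control helpers are hypothetical placeholders, and the project's actual SPI configuration (mode, bit order, hardware vs. software) may differ.

```c
/*
 * Minimal sketch of a mode-0 SPI master byte transfer, bit-banged so
 * the per-bit behavior on the wires is visible.  SCK_HIGH, SCK_LOW,
 * MOSI_WRITE, and MISO_READ are hypothetical pin-control helpers, not
 * the project's actual pin definitions.
 */
#include <stdint.h>

extern void SCK_HIGH(void);
extern void SCK_LOW(void);
extern void MOSI_WRITE(int bit);
extern int  MISO_READ(void);

/* Shift one byte out MSB-first while clocking one byte in.  SPI mode 0:
 * data is set up while SCK is low and sampled on the rising edge. */
uint8_t spi_transfer(uint8_t out)
{
    uint8_t in = 0;
    for (int i = 7; i >= 0; i--) {
        MOSI_WRITE((out >> i) & 1);                      /* present next output bit */
        SCK_HIGH();                                      /* slave samples MOSI here */
        in = (uint8_t)((in << 1) | (MISO_READ() & 1));   /* sample MISO on same edge */
        SCK_LOW();                                       /* slave shifts its next bit */
    }
    return in;
}
```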