You open your wallet and take out the right amount of cash. You may think this is an easy task, but it is not for everyone. The visually impaired struggle to perform daily tasks that many of us consider simple. We experienced this firsthand when Nidhi visited her grandfather over the summer. Due to macular degeneration, he had completely lost vision in his left eye, and the vision in his right eye was quickly deteriorating. After seeing his condition, we were determined to help the visually impaired lead easier lives.
We decided to create a device that assists the visually impaired by reading dollar bills, improving the shopping experience and, in turn, boosting social confidence with peers and friends. To gain more insight into the situation, we visited the Shree Ramana Maharishi Academy for the Blind in India, where we interacted with blind students ranging in age from 6 to 12. In a survey, we gave them two options: a smartphone money-reader app or a device that clips onto their sunglasses and reads out the dollar bill denomination. The choice was clear: the clip-on device, because of its ease of use and affordability. Using these responses, we started to create our device.
Creating the device was split into three main parts: (1) creating a machine learning model that can accurately identify the denominations of different dollar bills, (2) using the Raspberry Pi to take the photo, and (3) mounting the Raspberry Pi on top of sunglasses and testing the prototype. To create the model, there were two approaches: (1) using Google Cloud's AutoML Vision to train and test a model on a set of images we provide, or (2) using the Keras and TensorFlow libraries to code a model ourselves. We started with AutoML, which gave very promising results. Using this service, we created two models: the first was trained on a total of 400 images of one-, five-, ten-, and twenty-dollar bills and reached an accuracy of 81.8%, while the second was trained on over 1,000 images and reached an accuracy of 97.8%.
Although these models gave very accurate results, AutoML works by connecting the model to the cloud, so the device only works when it has an Internet connection. To operate offline, we therefore built models using the Keras and TensorFlow libraries. Using these libraries and other resources, we created four models, the last of which reached an accuracy of about 92%. The device then uses the Pi Camera to take an image, runs the model on it, and identifies the denomination.
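The offline pipeline above, a small Keras/TensorFlow classifier that predicts a denomination from a captured image, can be sketched roughly as follows. The layer stack, 224×224 input size, and helper names are illustrative assumptions, not the exact model described in this post:

```python
# Sketch of a four-class dollar-bill classifier in Keras/TensorFlow.
# The architecture and input size are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

DENOMINATIONS = ["one", "five", "ten", "twenty"]

def build_model(input_shape=(224, 224, 3), num_classes=len(DENOMINATIONS)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),              # normalize pixels to [0, 1]
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def predict_denomination(model, image_path):
    """Load a photo taken by the Pi Camera and return the predicted bill name."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    arr = tf.keras.utils.img_to_array(img)[tf.newaxis, ...]  # add batch dim
    probs = model.predict(arr, verbose=0)
    return DENOMINATIONS[int(probs.argmax())]
```

On the device, `predict_denomination` would be called on the image file the Pi Camera just saved; training on the labeled bill photos happens beforehand with `model.fit`.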
Overall, the final model has an accuracy of 92%. Currently, the device communicates the denomination to the user through earphones, and we are working on making the device smaller by using camera modules such as the ESP32-CAM. We are also working with our school to 3D print these devices so they can be manufactured and delivered as soon as possible.
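The announcement step, turning the model's predicted class into audio over the earphones, might look like the sketch below. `espeak` is one common text-to-speech tool on the Raspberry Pi; the exact audio pipeline is not specified above, so treat that choice as an assumption:

```python
# Sketch of the announce step: map a predicted class index to a phrase
# and speak it aloud. Using `espeak` for text-to-speech is an assumption.
import shutil
import subprocess

DENOMINATIONS = ["one", "five", "ten", "twenty"]

def denomination_phrase(class_index):
    """Turn a class index (0-3) into the phrase read to the user."""
    return f"{DENOMINATIONS[class_index]} dollar bill"

def announce(class_index):
    phrase = denomination_phrase(class_index)
    if shutil.which("espeak"):  # speak only if espeak is installed
        subprocess.run(["espeak", phrase], check=False)
    return phrase
```

Audio is routed to the earphones by the Pi's default audio output, so the script itself only needs to hand the phrase to the speech tool.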
With this project, we have decided to work with the Shree Ramana Maharishi Academy for the Blind in an effort to get these devices to its students. We have also modified the device so that it can recognize rupees.
Shown is a prototype of our wearable device in its early stages. We have since created CAD models for the device and changed its design.
Code for this project can be found here: https://github.com/nidhimath/moneyReader. Feel free to contact us if you would like to contribute to this product or work with us to get devices into the hands of the visually impaired who need them. A video showing how our device works can be viewed here: https://youtu.be/d0geFpAe79I.
This device has been recognized at the California Synopsys Science Fair and by the American Association of Engineers of Indian Origin.