JoWalk is an app designed to help visually impaired people detect objects or irregularities on the floor.
It uses AI to determine whether the pattern in the upper area of the camera image differs from the pattern in the center. The device is meant to be held horizontally, or slightly tilted, in the hand, with the back camera pointing down and forward; a third mode checks whether the camera is pointing at a previously registered image.
JoWalk has three modes of use, cycled through by a single tap on the screen. In the first mode, you start from a horizontal position and tilt the device; at the end of the tilt you receive two haptic signals whose interval is proportional to the distance at which a pattern variation is detected.
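The distance-to-interval relation could be sketched as follows. This is a toy Python illustration, not the app's actual code: the function name, the linear mapping, and the interval bounds are all made up for the example.

```python
def haptic_interval(change_row, frame_height, min_interval=0.1, max_interval=1.0):
    """Map the image row where a pattern change is detected to the pause
    between the two haptic pulses: a change near the top of the frame
    (farther away) yields a longer interval, a change near the center
    (closer) a shorter one.  Purely illustrative."""
    # Normalize: row 0 (top of frame, farthest) -> 1.0, center row -> 0.0
    distance = max(0.0, 1.0 - 2.0 * change_row / frame_height)
    return min_interval + distance * (max_interval - min_interval)
```

With a 480-pixel-high frame, a change detected at the very top would give the maximum interval (1.0 s here), while a change at the center row would give the minimum (0.1 s).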
The second mode compares the upper area of the rear camera image (which corresponds to the area ahead when the device is held horizontally, screen up) with the center area. It gives one haptic pulse every time a detection runs (the detection interval can be changed by swiping up or down on the screen, and is shortened automatically when the device moves) and two pulses if the pattern detected ahead differs from the one in the center. You can tilt the device to explore areas farther away.
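The idea of comparing the upper patch with the center patch can be sketched like this. This is a deliberately simple stand-in (a normalized intensity histogram compared by total variation distance) for the app's AI texture matching; the patch coordinates and threshold are invented for the example.

```python
import numpy as np

def patterns_differ(frame, threshold=0.25):
    """Compare a grayscale frame's upper patch with its center patch.
    Uses normalized 16-bin intensity histograms as a toy stand-in for
    the app's texture features.  Returns True when the patterns differ."""
    h, w = frame.shape
    upper  = frame[: h // 4,              w // 4 : 3 * w // 4]   # area "ahead"
    center = frame[3 * h // 8 : 5 * h // 8, w // 4 : 3 * w // 4]  # reference area
    hist_u, _ = np.histogram(upper,  bins=16, range=(0, 256))
    hist_c, _ = np.histogram(center, bins=16, range=(0, 256))
    hu = hist_u / hist_u.sum()
    hc = hist_c / hist_c.sum()
    # Total variation distance between the two histograms, in [0, 1]
    return 0.5 * np.abs(hu - hc).sum() > threshold
```

A uniform floor gives a distance near zero (one pulse, no warning); a dark strip appearing in the upper patch pushes the distance toward 1 and would trigger the two-pulse warning.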
The third mode compares the current camera image with a reference image captured by swiping down, and gives haptic feedback whenever you are pointing at a similar scene. After swiping down, you will feel a smooth haptic pulse about one second after the swipe ends, confirming that the reference image has been captured.
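The reference-matching step can be sketched in the same toy style: capture a descriptor once, then compare each incoming frame against it. Again, the histogram descriptor, function names, and threshold are illustrative assumptions, not the app's DTD-based features.

```python
import numpy as np

def texture_descriptor(frame, bins=16):
    """Toy descriptor: the normalized intensity histogram of a
    grayscale frame (stand-in for the app's learned texture features)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def matches_reference(reference_desc, frame, threshold=0.25):
    """True when the current frame's descriptor is close enough to the
    stored reference descriptor (total variation distance)."""
    return 0.5 * np.abs(texture_descriptor(frame) - reference_desc).sum() < threshold
```

In the app, the swipe-down gesture would play the role of calling `texture_descriptor` on the current frame to store the reference; the matching check then runs on each analyzed frame.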
To allow the app to use haptics, go to Settings -> Sounds & Haptics -> System Haptics and toggle the option on. If your device does not support haptics you will receive acoustic feedback instead, but the app is designed around haptic feedback. If you want to use audio feedback, shake the device while in mode 1; shake again in mode 1 to return to silent mode.
NOTE: JoWalk is intended to be used in a safe environment. JoWalk does not detect every obstacle or subsidence, and its accuracy depends on lighting conditions. Because JoWalk uses AI to match areas, every analyzed frame consumes significant battery power; in mode 2 you can reduce this consumption by lowering the frame analysis rate, while mode 3 analyzes approximately two frames per second.
The matching of images is based on the DTD (Describable Textures Dataset) by M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi.