
FurniFuture

FurniFuture is an intelligent home assistant for the blind. It helps locate furniture in a space and can also find objects for the user.
The system works with a common camera instead of special-purpose cameras, which keeps costs low. All actions are performed through audio commands, making the system very user-friendly for the blind. The project can also be extended to help the blind in public spaces such as schools or stations.
This project won second place in the System Integration and Implementation group of the 經濟部搶鮮大賽 (a competition held by Taiwan's Ministry of Economic Affairs).

Project Structure

Server Side

The server-side code receives real-time images from the camera installed in the user's house, then performs object detection and identification. It also converts the object's position into first-person directions for the blind user, e.g. "The chair is three steps from you, on your right-hand side."
For object detection, we use tf-faster-rcnn. The data and models are too large to upload here; check out that repo for details.
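The conversion from a detected object's position to a first-person hint can be sketched as follows. This is a minimal illustration, not the project's actual code: the function name `describe_position`, the assumed step length, and the input convention (offsets already expressed in the user's coordinate frame, in meters) are all assumptions made for the example.

```python
import math

STEP_LENGTH_M = 0.7  # assumed average step length in meters (illustrative)

def describe_position(name, dx, dz):
    """Turn an object's offset from the user into a spoken hint.

    dx: meters to the user's right (negative = left)
    dz: meters straight ahead of the user
    """
    # Distance in steps, never reported as zero.
    steps = max(1, round(math.hypot(dx, dz) / STEP_LENGTH_M))
    # Bearing relative to the user's facing direction: 0 degrees = straight ahead.
    angle = math.degrees(math.atan2(dx, dz))
    if abs(angle) < 20:
        side = "straight ahead"
    elif angle > 0:
        side = "on your right-hand side"
    else:
        side = "on your left-hand side"
    plural = "step" if steps == 1 else "steps"
    return f"The {name} is {steps} {plural} from you, {side}."

print(describe_position("chair", 1.5, 1.5))
```

In a real deployment the (dx, dz) offsets would come from mapping the detection's position in the camera frame into the user's frame, which requires knowing the camera pose and the user's location and orientation.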

User Side

The user side consists of a hand-held device and the camera. The hand-held device receives audio commands from the user, and we use Speech-to-Text technology from ITRI (工研院, the Industrial Technology Research Institute) to transcribe the commands into text. The text command is sent to the camera, which takes pictures and forwards them to the server for object detection. After receiving the result from the server, the device converts the result string into an audio hint and speaks it to the user.
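The user-side flow above (audio command → STT → camera/server → TTS) can be sketched as a simple loop. Everything here is a hypothetical placeholder: the ITRI STT call, the camera trigger, and the server round-trip are stubbed out, and none of these function names come from the actual project.

```python
def speech_to_text(audio):
    # Placeholder for the ITRI Speech-to-Text service: transcribes the
    # user's audio command into a text command.
    return "find chair"

def query_server(command):
    # Placeholder for the camera + server round-trip: the camera takes
    # pictures, the server runs object detection, and a first-person
    # hint string comes back.
    return "The chair is three steps from you, on your right-hand side."

def text_to_speech(text):
    # Placeholder for the device's audio output; here we just print.
    print(text)

def handle_command(audio):
    command = speech_to_text(audio)   # 1. transcribe the audio command
    hint = query_server(command)      # 2. camera -> server -> detection result
    text_to_speech(hint)              # 3. speak the hint back to the user
    return hint

handle_command(b"\x00\x01")  # dummy audio bytes for the sketch
```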

Snapshots