Here is an article from IKALOGIC describing how to use CMOS cameras as sensors with 8-bit microcontrollers. Processing color video in real time requires powerful DSPs or MCUs, but by downgrading the image to gray-scale and using a lower resolution, processing becomes much easier to handle, so even small 8-bit MCUs can cope with it.
Many robotics competitions call for line sensors to follow guidance lines traced on the ground. Most competitors build some kind of optical sensor array for this purpose. This solution tends to cause many “bugs,” because variations in ambient light lead to faulty sensor readings. Trust me on this; I was in that exact same place back when I was a student. Other competitors get rid of the lines traced on the ground entirely and use very complex camera-based navigation, pointing the camera at the area in front of the robot. They then have to analyze all the other moving objects and elements in the scene and implement algorithms like shape recognition. So, what I mean is: why not use the camera as a line sensor array?
Via Embedded Lab.