Extracting information about impending collisions from the visual scene, viewed from the ego-perspective of a fast-moving agent, is a challenging task that locusts flying in swarms of millions of individuals solve efficiently. Rapid luminance changes in adjacent receptors excite motion-sensitive layers that are connected to neurons responding selectively to impending collisions. This was simulated in a bionic computer model in which collision risk is extracted from traffic movies of low spatial resolution. The method relies on relative object expansion and is therefore independent of distance measurements and object recognition. Additionally, directional motion information can be used to compute the direction and force of evasive steering. Camera shake and rapidly approaching ground shadows are partially compensated for. After parameter tuning, simulation results with various car-crash movies as input show that the method reliably indicates impending collisions and, where possible, an evasive steering direction.
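The relative-expansion principle can be illustrated with a minimal sketch: the apparent size of an approaching object grows faster and faster, and its growth rate divided by the size itself rises as collision nears, without any distance measurement. The function names, the threshold, and the synthetic size trace below are illustrative assumptions, not the tuned model from the talk.

```python
import numpy as np

def relative_expansion(prev_size, curr_size, dt):
    """Relative expansion rate (1/s): growth of the apparent size
    divided by the size itself; independent of absolute distance."""
    return (curr_size - prev_size) / (dt * curr_size)

def collision_alarm(sizes, dt=0.04, threshold=2.0):
    """Flag frames whose relative expansion exceeds a threshold.
    `sizes` is the apparent width (in pixels) of the object
    in successive frames taken dt seconds apart."""
    alarms = []
    for prev, curr in zip(sizes, sizes[1:]):
        alarms.append(relative_expansion(prev, curr, dt) > threshold)
    return alarms

# Object approaching at constant speed: apparent size grows as
# 1 / (t_c - t), where t_c is the time of collision.
t_c, dt = 1.0, 0.04
times = np.arange(0.0, 0.96, dt)
sizes = list(10.0 / (t_c - times))
alarms = collision_alarm(sizes, dt)
# Early frames stay quiet; the alarm switches on as t approaches t_c.
```

The relative expansion rate for this trace is roughly 1/(t_c - t), so with the threshold of 2.0 the alarm fires in the second half of the approach, which is the qualitative behaviour the abstract describes.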
Camera sensors used in dark conditions suffer from noise that is amplified by common image-enhancement procedures. The second part of my talk therefore addresses the night-vision capabilities of the solitary bee Megalopta genalis (Greiner et al. 2004). The way this bee's eyes accumulate photons while maintaining spatial and temporal resolution has already inspired the development of innovative night-vision cameras. Recently, I developed an iterative procedure that increases grey-value saturation while simultaneously reducing sensor noise. Image blur is prevented by 'adaptive spatial averaging'. Additionally, modelling saccadic eye movements made temporal summation of grey values possible even with static images as input. In both computer models most processing steps can be performed on the sensor, so computational demands are minimized.
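'Adaptive spatial averaging' is only named above; its details are not given here. A generic variance-gated local averaging along these lines (a Wiener-style filter: strong averaging in flat regions, little averaging at edges) conveys the idea — the function name and the noise parameter are hypothetical, not the actual procedure from the talk.

```python
import numpy as np

def adaptive_spatial_average(img, noise_var=25.0):
    """Edge-preserving denoising sketch: each pixel is blended with
    its 3x3 neighbourhood mean; the blend weight grows with local
    variance, so flat (noisy) areas are averaged strongly while
    edges are largely left intact, preventing image blur."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode='edge')
    # Collect the 3x3 neighbourhood of every pixel as a stack.
    stack = np.stack([pad[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    local_mean = stack.mean(axis=0)
    local_var = stack.var(axis=0)
    # Wiener-style weight: high local variance (an edge) keeps the
    # original pixel; low variance (flat noise) uses the mean.
    weight = local_var / (local_var + noise_var)
    return local_mean + weight * (img - local_mean)
```

On a noisy flat patch this reduces the grey-value spread, while a sharp luminance step survives almost unchanged; such per-pixel neighbourhood operations are also the kind of step that could plausibly run directly on the sensor.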