NZSM Online

Quick Dips

Vision Research Looks to the Sky

Space exploration, robotics and aeronautics may seem fairly unrelated to road safety, but each of these fields stands to benefit from research currently under way at the University of Waikato Psychology Department, where human visual navigation specialist Dr John Perrone is further developing a computer model of human self-motion perception.

Built up from networks of biologically-based, motion-sensitive cells, the model attempts to simulate the complicated process by which the two-dimensional images presented to the retina provide the brain with the three-dimensional information required to avoid obstacles and determine one's correct heading.
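
To make that concrete, here is a minimal sketch, in the spirit of template models of heading perception rather than a rendering of Perrone's actual model, of how a radial pattern of image motion can reveal where an observer is heading. It assumes pure forward translation through a static scene, uses only numpy, and all function names and parameter values are illustrative.

```python
import numpy as np

def synth_flow(points, heading, depth=10.0, speed=1.0):
    """Ideal image flow for pure forward translation: vectors
    radiate outward from the focus of expansion (the heading point)."""
    return (speed / depth) * (points - heading)

def estimate_heading(points, flow, candidates):
    """Score each candidate heading by how well the observed flow
    directions align with the radial template centred on it."""
    unit_flow = flow / (np.linalg.norm(flow, axis=1, keepdims=True) + 1e-9)
    best, best_score = None, -np.inf
    for h in candidates:
        radial = points - h
        radial /= np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9
        score = float(np.sum(unit_flow * radial))  # summed cosine similarity
        if score > best_score:
            best, best_score = h, score
    return best

# Toy demonstration: recover a known heading from a synthetic flow field.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))        # image positions
true_heading = np.array([0.2, -0.1])               # focus of expansion
flow = synth_flow(pts, true_heading)
grid = [np.array([x, y])
        for x in np.linspace(-0.5, 0.5, 11)
        for y in np.linspace(-0.5, 0.5, 11)]
print(estimate_heading(pts, flow, grid))           # close to [0.2, -0.1]
```

A biologically based model would derive the flow vectors from banks of motion-sensitive cells rather than take them as given, but the template-matching step is similar in spirit.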

Perrone says such a model, once fully operational, has the potential to minimise the disorientation of astronauts in space, aid the development of better cockpit displays in spacecraft and aeroplanes, help reduce pilot error, prevent road accidents, and eventually provide a means for creating visual aids for automotive navigation technology, robotics and maybe even space probes.

"Whether you're driving a car, flying a plane or controlling a space craft of some sort, in the end it comes down to the same basic navigation principles, so understanding how the brain works in the general case will help us learn a lot more about these other situations."

To assist in this project, he has funding from the New Zealand Lotteries Science Board for the road safety side of the work, and funding from NASA's Ames Research Centre in California, where he collaborates with an American neuroscientist, Dr Leland Stone.

The model operates only on artificial image sequences at the moment, but testing with video input should start soon.

"Once the model is fully functional we can run it on particular test cases and, in a sense, anticipate potential errors in navigation. This makes it a useful tool for analysing certain driving situations," says Perrone. For instance, he adds, it will help identify situations where drivers could be mislead into incorrectly negotiating bends or wrongly estimating speed, safe braking and overtaking distances.

It is expected to help identify the minimum visual motion cues that need to be present to facilitate correct and safe navigation. This is especially important for the likes of cockpit displays, which are limited by on-board computer power and cost, but cannot afford to have pilots confused about their heading or altitude.

"There's a lot of factors you can play with," he points out. "For instance, how much detail and fidelity do you need to put in a flight simulator to make sure you're training your pilots property?"

In space, astronauts find themselves in visually impoverished situations with no ground plane to focus on and no gravity to help them with their spatial orientation. Perrone says the model will help here by anticipating some of the navigation errors that could arise in this environment, and eventually providing solutions to overcome any potential disorientation.

Another spin-off expected from the understanding of the human visual navigation process is its application to robotics.

"Robots often use infra red and laser scanning techniques to find out what's out there in the environment. These are called active systems, whereas the approach we take is a passive system where you just use the video images coming in from a camera instead of projecting something out to the world. It's more difficult to develop but once we get it working it could be a lot simpler and cheaper than other methods," Perrone says. "People in the field of robotics are realising you can learn a lot from biological systems."

Julie Hannam