This project investigates adaptive, natural human pointing and gestures for controlling an interface on a pseudo-3D display. Highly interconnected data is hard to visualize on conventional screens; examples include networks of academic citations, or the named entities in an investigation. Most current tools rely on point/click/drag metaphors on 2D screens. The physical technology to capture the relevant human behaviours already exists, but not the adaptive learning of the syntax and semantics of individual gestures and actions, nor the multi-gesture information fusion required for 'understanding'. Most human beings do all of this naturally, using biological neural networks.
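One common baseline for the multi-gesture information fusion mentioned above is late fusion: each sensing modality (for example hand shape, arm trajectory, gaze) produces its own probability distribution over candidate gestures, and the distributions are combined into a single decision. The sketch below is a minimal illustration of a weighted-average fusion rule; the gesture labels, modality names, and weights are hypothetical, not part of the project description.

```python
import numpy as np

# Illustrative gesture vocabulary (hypothetical, for the sketch only).
GESTURES = ["point", "grab", "rotate", "dismiss"]

def fuse(probabilities, weights=None):
    """Combine per-modality probability vectors by weighted average.

    probabilities: sequence of per-modality distributions over GESTURES.
    weights: optional per-modality reliability weights.
    Returns the winning gesture label and the fused distribution.
    """
    probs = np.asarray(probabilities, dtype=float)
    if weights is None:
        weights = np.ones(len(probs))
    weights = np.asarray(weights, dtype=float)
    fused = weights @ probs / weights.sum()  # convex combination
    return GESTURES[int(fused.argmax())], fused

# Example: a hand-shape model is confident in "grab", while a
# trajectory model leans toward "point"; fusion arbitrates between them,
# here trusting hand shape twice as much as trajectory.
hand_shape = [0.1, 0.7, 0.1, 0.1]
trajectory = [0.5, 0.3, 0.1, 0.1]
label, fused = fuse([hand_shape, trajectory], weights=[2.0, 1.0])
print(label)  # → grab
```

A real system would additionally need to learn these weights (and the gesture vocabulary itself) adaptively per user, which is precisely the open problem the project targets.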