Tuesday, January 21, 2014
Bio-Inspired Artificial Intelligence
I am still reading the three books above, but have already begun to wonder at the marvel that is Nature. Like all scientific endeavors, Artificial Intelligence is constantly evolving, and I can see that the direction for the future is towards 'softer' systems that emulate biological neural systems: less of the machine-based algorithms that are faster but more brittle, and more of the systems that closely emulate Nature. Nature's ways are sometimes strange: many redundancies (e.g. junk DNA) and meandering, seemingly aimless moves, but eventually a more robust solution.

The Genetic Algorithms currently used in AI for Optimization are more of a Mendelian process of selective breeding than natural evolution. Nature's evolution is an open process with no predetermined goal. Each generation adapts to whatever its environment throws up, and the fitness of a generation is specific only to the environment of that moment. Genetic Algorithms also do not take co-evolution into account: as an organism evolves, it affects the players in its environment, who in turn evolve (adapt) in response, which invites yet another response from the organism, ad infinitum. Thus Nature wanders with no final destination in mind, choosing whichever way is the 'best' way at each moment in time.
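To make the contrast concrete, here is a toy genetic algorithm of the "selective breeding" kind described above (my own illustrative sketch, not from any particular library): the fitness function is fixed in advance, so the whole run chases a single predetermined target, unlike natural evolution, where fitness shifts with the environment and with co-evolving neighbors.

```python
import random

# Toy GA with a fixed, predetermined objective: maximize the number of
# 1-bits in a bitstring ("onemax"). The fitness function never changes,
# which is exactly the Mendelian, selective-breeding flavor discussed above.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    return sum(genome)  # fixed goal: all ones

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]  # truncation selection: breed only the "best"
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

A co-evolutionary variant would replace the fixed `fitness` with one that scores each genome against the current population itself, so the target moves as the population moves.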
Another not-so-realistic aspect of current GAs is that Optimization is deemed to have a single objective. In the real world, most optimization problems are multi-objective, multi-constraint problems.
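With several objectives there is usually no single "best" solution, only a set of trade-offs. A standard way to express this (my illustration, not something from the post) is Pareto dominance: solution a dominates b if a is no worse on every objective and strictly better on at least one.

```python
# Pareto dominance for multi-objective optimization. All objectives are
# minimized; a solution survives if no other solution dominates it.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    # Keep only solutions not dominated by any other solution.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical example: (risk, cost) pairs for candidate portfolios.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (4.0, 4.0)]
front = pareto_front(candidates)
print(front)  # (2.5, 3.5) and (4.0, 4.0) are dominated by (2.0, 3.0)
```

A multi-objective GA such as NSGA-II builds its selection step around exactly this dominance test instead of a single fitness number.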
In the field of Neural Networks, recent advances will result in better machine-vision and natural-language, context-sensitive translation algorithms (as well as speech and text recognition). In Robotics, this will make for robots which are more human-like.
The basis for this current big leap in AI is a better understanding of how the human brain processes information (auditory, visual, textual, etc.). It seems that we process information in layers, each cortical layer progressively dealing with more complex and abstract information, with the output from each layer fed to the next layer on top. In visual object recognition, for example, the lowest layer may do just edge-detection while successively higher layers deal with shape, color and spatial characteristics. In an attempt to emulate this, researchers have created Deep Belief Networks with many layers. Unlike previous multi-layered supervised neural networks (which failed miserably), the bottom layers are trained without supervision via something like a Self-Organizing Map; in this case a Restricted Boltzmann Machine. Studies by Bengio et al. have shown that unsupervised training of the lower layers, followed by a final tune-up at the top via supervised learning, gives good results. Why this is so is still the subject of academic study, though it appears to closely emulate how our brain works.
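The unsupervised building block mentioned above, the Restricted Boltzmann Machine, is typically trained with contrastive divergence. Below is a minimal toy sketch of one CD-1 update (my own illustration with made-up sizes and data, not a production trainer): infer hidden units from the data, reconstruct the visible units, and nudge the weights toward the data and away from the reconstruction.

```python
import numpy as np

# Toy RBM trained with one-step contrastive divergence (CD-1) on a single
# binary pattern. In a Deep Belief Network, layers like this are stacked
# and trained one at a time, bottom-up, without labels.

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.1):
    global W, b_v, b_h
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Negative phase: one Gibbs step -- reconstruct visibles, re-infer hiddens.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: move toward the data statistics, away from the model's.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)

data = np.array([1, 1, 0, 0, 1, 0], dtype=float)
for _ in range(100):
    cd1_step(data)
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print(np.round(recon, 2))  # reconstruction should resemble the data
```

After pre-training a stack of such layers, a supervised layer is added on top and the whole network is fine-tuned with labels, as in the Bengio et al. recipe.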
I am attempting to emulate some aspects of this bio-inspired Artificial Intelligence in my own field of applying AI technology to financial-market time series. For example, I can create networks within networks, with the inner network being an unsupervised network. I am trying this out with the unsupervised Recurrent net from Neuroshell, cloaked within a net with out-of-sample output, because it learns by walking forward 10 days at a time (and thus on unseen data). The Advance Turboprop2 neural network, also from Neuroshell, is novel because, starting from 1 neuron, it progressively adds neurons until the optimal number for pattern recognition is attained, with some controls to prevent over-fitting. Preliminary results seem to confirm the writings above: the AI models for pattern recognition of financial-market time series seem more robust, in the sense that they don't get out of tune as often as before and thus need less re-calibration.
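Neuroshell's Turboprop2 is proprietary, so the following is only a toy sketch of the general constructive idea, not its actual algorithm: start with one hidden unit, propose new units one at a time, and keep an addition only if error on held-out validation data improves, which is one simple guard against over-fitting. Random tanh features with least-squares output weights stand in for the real training procedure, and the data is hypothetical.

```python
import numpy as np

# Constructive (growing) network sketch: hidden units are added one by one
# and retained only while validation error keeps improving.

rng = np.random.default_rng(1)

def features(X, hidden):
    # hidden: list of (weight_vector, bias) pairs defining tanh units.
    return np.column_stack([np.tanh(X @ w + b) for w, b in hidden])

def fit_constructive(X_tr, y_tr, X_va, y_va, max_hidden=20):
    hidden, best_err, out = [], np.inf, None
    for _ in range(max_hidden):
        # Propose one new random hidden unit.
        w = rng.normal(size=X_tr.shape[1])
        b = rng.normal()
        trial = hidden + [(w, b)]
        H = features(X_tr, trial)
        coef, *_ = np.linalg.lstsq(H, y_tr, rcond=None)
        err = np.mean((features(X_va, trial) @ coef - y_va) ** 2)
        if err < best_err:  # keep the unit only if validation error drops
            hidden, best_err, out = trial, err, coef
    return hidden, out, best_err

# Hypothetical data: a noisy sine wave, split into train and validation.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
hidden, out, err = fit_constructive(X[:150], y[:150], X[150:], y[150:])
print(len(hidden), round(float(err), 3))
```

The validation split plays the same role here as walk-forward testing on unseen data does in the market models: the network's size is chosen by out-of-sample performance, not by in-sample fit.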
One final aspect of natural systems is that they are not only adaptive, but self-healing. This is something which is very difficult to emulate.