Most neural networks are used as passive learners, i.e. they learn from a fixed training set. In natural learning problems, however, the learner can gather new information from its environment to improve the learning process. With active learning, the network is allowed to select a new training input at each time step.
To achieve a real improvement of the learning process, the new input is not chosen at random. The goal is to choose an input that minimizes the expectation of the learner's mean squared error. The methods in this file therefore estimate the network's output variance that results from adding a new example to the training set.
For more details about active learning in neural networks and the formulas used here for the variance estimations, please refer to David A. Cohn: "Neural Network Exploration Using Optimal Experimental Design."
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this library; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
Definition in file VarianceEstimator.cpp.