100 data points can define, at most, a 99-dimensional subspace. Therefore, a very important step is to reduce the dimensionality of the 1000-dimensional inputs to a more practical value, I. A common, but non-ideal, method is to use the I dominant principal components as the orthogonal basis of a new I-dimensional input space (non-ideal because the components are chosen from the inputs alone, without regard to the targets).
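A minimal sketch of that reduction, assuming the raw inputs are in a 1000-by-100 matrix x (variables in rows, cases in columns); the choice I = 25 is just a placeholder:

```
% Dominant principal components via SVD of the zero-mean data
xm = mean(x,2);                        % 1000-by-1 mean input vector
x0 = x - repmat(xm,1,size(x,2));       % zero-mean data
[U,S,V] = svd(x0,'econ');              % columns of U = principal directions
I  = 25;                               % reduced dimension (assumption)
p  = U(:,1:I)' * x0;                   % I-by-100 reduced input matrix
```

The singular values diag(S) can be inspected to choose I, e.g., by keeping enough components to account for most of the input variance.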
The input and target matrices then have the dimensions
[ I N ] = size(p)   % input,  I <= 99, N = 100
[ O N ] = size(t)   % target, O = 1
where the components of t are ones and zeros corresponding to the good and bad categories.
It is sufficient to use one hidden layer with H nodes to form a feedforward multilayer perceptron (MLP) with an I-H-O node topology. Using the function NEWFF, the rest of the problem is standard and is adequately covered in the documentation, demos and archived Newsgroup postings.
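A bare-bones sketch with NEWFF's older calling syntax; the value of H and the transfer functions are assumptions, not recommendations:

```
H   = 10;                                       % hidden nodes (assumption)
net = newff(minmax(p), [H 1], {'tansig' 'logsig'});
net = train(net, p, t);                         % batch training, default trainlm
y   = sim(net, p);                              % outputs in (0,1)
class = y >= 0.5;                               % threshold into good/bad
```

With a logsig output and 0/1 targets, the outputs can be read as posterior probability estimates and thresholded at 0.5; with only 100 cases, performance should be estimated by cross-validation rather than on the training set.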
Hope this helps.
Greg