The rule computes a total squared distance (i.e., separation) between the inputs and the weights on the connections to each cluster (output) unit. It then restricts the weight update to the "winner" (the unit with minimum distance) and to any neighboring units if R > 0. Note that the Kohonen approach also calls for a dynamic rather than a fixed learning rate, in contrast to other learning algorithms. The process tends to match the weights to the inputs in a way that groups similar input patterns together. Note also that, unlike the Perceptron, this algorithm does not depend on target values being specified.

B. Self-Organizing Maps Applied to Pattern Recognition

An example taken from a text by L. Fausett [3] features seven letters (A, B, C, D, E, J, and K), each defined by a 9x7 array pattern and each rendered in three font styles. The net therefore has 63 inputs, and 7x3 = 21 different patterns are to be grouped. The number of clusters chosen is arbitrary and may be fewer or greater than the number of patterns; for this example, 25 clusters were specified.

A MATLAB routine was written to generate the Kohonen map for this requirement, assuming a linear (one-dimensional) array of cluster units. Randomizing the weight matrix causes differences in clustering between runs, but the results converge rapidly for the case R = 0 (i.e., no topological structure) and more slowly for R = 1. The character patterns and the results of two sample runs are shown below.
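The update procedure described above can be sketched as follows. This is a minimal Python illustration, not the authors' MATLAB routine: the function names, the geometric learning-rate decay factor, and the small demonstration patterns are assumptions chosen for clarity; the squared-distance winner selection, the linear neighborhood of radius R, and the decaying learning rate follow the text.

```python
import numpy as np

def train_kohonen(patterns, n_clusters, radius=0, lr0=0.6, epochs=100, seed=0):
    """Winner-take-all Kohonen clustering over a linear array of cluster units.

    patterns   : (n_patterns, n_inputs) array of input vectors
    n_clusters : number of cluster (output) units
    radius     : neighborhood radius R; 0 updates only the winning unit
    lr0        : initial learning rate, decayed each epoch (dynamic rate)
    """
    rng = np.random.default_rng(seed)
    # Randomized initial weight matrix, one row per cluster unit.
    weights = rng.uniform(0.0, 1.0, size=(n_clusters, patterns.shape[1]))
    lr = lr0
    for _ in range(epochs):
        for x in patterns:
            # Total squared distance from the input to each unit's weights.
            d = np.sum((weights - x) ** 2, axis=1)
            winner = int(np.argmin(d))
            # Move the winner (and neighbors within R) toward the input.
            lo = max(0, winner - radius)
            hi = min(n_clusters, winner + radius + 1)
            weights[lo:hi] += lr * (x - weights[lo:hi])
        lr *= 0.95  # dynamic learning rate: decay each epoch
    return weights

def cluster_of(weights, x):
    """Index of the cluster unit with minimum squared distance to x."""
    return int(np.argmin(np.sum((weights - x) ** 2, axis=1)))
```

For the letter example in the text, `patterns` would be the 21 binary vectors of length 63 with `n_clusters=25`; no target values are needed, since the grouping emerges from the distance competition alone.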