The Connectionist Paradigm

Over the years, artificial neural networks have attracted researchers who try to explain the intellectual abilities of the human brain using models. Neural networks are simplified models of the brain, made up of many neuron analogs connected by weights that quantify the strength of the connections between them; these weights model the synapses that join neurons together. Artificial neural network models have demonstrated the ability to learn skills such as grammatical structure detection, reading, and face recognition[1]. Connectionism has sparked heated debate because it promises an alternative to the classical theory of the mind. But how exactly, and to what extent, does the connectionist paradigm challenge classicism? This paper therefore explains what artificial neural networks are, examines the effectiveness and the weaknesses of their models, and considers how such models offer a novel way of answering some traditional questions in epistemology.


A neural network consists of numerous units joined together in a pattern of connections. These units are usually segregated into input, hidden, and output units. The input units receive the information to be processed, the hidden units join the input to the output, and the output units report the processed results[2]. If an artificial neural network modeled an animal's nervous system, the input units would be analogous to the sensory neurons, the output units would correspond to the motor neurons, and the hidden units would correspond to all the other neurons.


Each input unit carries an activation value that represents some feature external to the net. An input unit conveys its activation value to the hidden units, each of which calculates its own activation value from its inputs. Each hidden unit then passes its signal on to output units or to other hidden units, which compute their activation values in turn and send them to their neighbors. In this way, the signals at the input units are propagated through the net to determine the activation values at the output units.
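As a concrete illustration of this propagation, here is a minimal sketch in Python; the layer sizes, weights, and input values are invented for the example, and the sigmoid is just one common choice of activation function:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 3 input units, 4 hidden units, 2 output units (illustrative sizes)
w_in_hidden = rng.normal(size=(3, 4))    # weights: input -> hidden
w_hidden_out = rng.normal(size=(4, 2))   # weights: hidden -> output

inputs = np.array([0.2, 0.9, 0.5])       # activation values set from outside

hidden = sigmoid(inputs @ w_in_hidden)   # hidden units compute their activations
outputs = sigmoid(hidden @ w_hidden_out) # which are propagated to the output units
```

The input activations flow forward layer by layer; nothing else determines the output but the inputs and the weights.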


The activation pattern of the net is determined by the weights, or connection strengths, between the units. A negative weight means that the activity of the sending unit inhibits the receiving unit. Each receiving unit calculates its activation with an activation function. Although activation functions vary, they conform to the same basic plan: the function sums the contributions of all sending units, where a unit's contribution is its activation value multiplied by the weight of the connection[3]. The sum is then usually modified further, for example by scaling it, or by setting the activation to 0 unless the sum exceeds a threshold.
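The basic plan just described can be sketched as a single function (this is the thresholded variant; the activation values and weights in the usage note are illustrative):

```python
def unit_activation(sending_activations, weights, threshold=0.0):
    """Each sending unit contributes its activation value times the
    connection weight (a negative weight inhibits); the sum is then
    thresholded: the unit stays at 0 unless the sum exceeds the threshold."""
    total = sum(a * w for a, w in zip(sending_activations, weights))
    return total if total > threshold else 0.0
```

For example, `unit_activation([1.0, 0.5], [0.8, -0.6])` sums an excitatory contribution of 0.8 and an inhibitory one of -0.3, yielding roughly 0.5, while a purely inhibitory input leaves the unit at 0.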


Connectionists believe that such collections of units can give insight into cognitive functioning, on the assumptions that all the units calculate the same simple activation function and that human intellectual accomplishment depends primarily on the set of weights between the units. A more realistic brain model would comprise many layers of hidden units and recurrent connections that send signals from higher to lower levels. Recurrence is needed to explain cognitive features such as short-term memory. Without it, repeated presentation of the same input yields the same output every time; yet even very simple organisms learn to ignore repeated presentations of the same stimulus. Most connectionists avoid recurrent connections because little is understood about the general problem of training recurrent nets. Elman (1991), among others, has nevertheless made notable strides with simple recurrent nets in which the recurrence is tightly constrained[4].
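A hedged sketch of the kind of recurrent step an Elman-style simple recurrent net uses (the weights here are random placeholders, not values from any published model): feeding the previous hidden state back in gives the net a short memory, so the same input need not yield the same response twice.

```python
import numpy as np

def elman_step(x, h_prev, w_xh, w_hh):
    # New hidden state depends on the current input AND the previous state.
    return np.tanh(x @ w_xh + h_prev @ w_hh)

rng = np.random.default_rng(1)
w_xh = rng.normal(size=(2, 3))   # input -> hidden weights (illustrative)
w_hh = rng.normal(size=(3, 3))   # hidden -> hidden (recurrent) weights

h = np.zeros(3)
x = np.array([1.0, 0.0])
h1 = elman_step(x, h, w_xh, w_hh)
h2 = elman_step(x, h1, w_xh, w_hh)  # same input, but a different internal state
```

Because `h2` is computed from `h1` rather than from the zero state, the net's response to the second presentation of `x` differs from its response to the first, which is the minimal memory the paragraph above appeals to.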


In the quest for the right set of weights to accomplish a given task, connectionists have developed learning algorithms, both supervised and unsupervised. In Hebbian learning, a well-known unsupervised form, the weights between units that are active together are increased, while the weights of inactive connections are reduced; this form of training is useful for building nets that categorize their inputs. In backpropagation, by contrast, a user needs a training set of input samples paired with the desired outputs for the task. This externally provided set of samples supervises the training process. For instance, if the task is to distinguish faces by sex, the training set might contain numerous pictures of faces together with an indication of the sex of each pictured individual. A net for such a task might have one male and one female output unit, and numerous input units, with one input devoted to the brightness of each pixel of the picture. The weights of the net are initially set to random values, and each member of the training set is then exposed to the net repeatedly.
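The Hebbian rule described above can be sketched in a few lines (the learning rate is an illustrative choice, not a prescribed value):

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen the connection between units that are active together
    ("cells that fire together wire together"); pairs in which either
    unit is inactive contribute nothing to the update."""
    return weights + lr * np.outer(pre, post)
```

Note that no desired output appears anywhere in the rule: the weights change purely as a function of the co-activity of the connected units, which is why this form of learning needs no external teacher.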


The input values of each member are placed on the input units, and the resulting output is compared to the desired output. The weights are then adjusted in the direction that brings the output values of the net closer to the desired values. For instance, when a male face is presented to the input units, the weights are adjusted so that the value of the male output unit is increased while the value of the female output unit is decreased[5]. After many repetitions of this process, the net may learn to produce the desired output for each input in the training set. If the training goes well, the net may also generalize the desired behavior to inputs beyond the training set[6]: it may do a creditable job of distinguishing males from females in pictures that were never presented to it as inputs.
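The supervised loop just described can be sketched with the delta rule, a single-layer relative of backpropagation; the data, targets, and learning rate below are all made up to stand in for the face-picture example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "pixel" inputs (invented): first two rows belong to class 1,
# last two rows to class 0; t holds the desired outputs.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
t = np.array([1.0, 1.0, 0.0, 0.0])

w = rng.normal(size=2)                   # weights start at random values
for _ in range(200):                     # repeated exposure to the training set
    for x, target in zip(X, t):
        y = 1.0 / (1.0 + np.exp(-(x @ w)))   # actual output
        w += 0.5 * (target - y) * x          # nudge weights toward the target
```

Each pass compares the actual output with the desired one and moves the weights a small step in the direction that reduces the discrepancy, which is exactly the adjustment loop the paragraph above describes.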


For backpropagation and other connectionist learning approaches to approach human intelligence, success may hinge on subtle adjustments of the training set and the algorithm. Training typically entails many rounds of weight adjustment, and given the computing resources accessible to most connectionist researchers, teaching a net to execute an interesting task can consume a great deal of time. Some of these limitations may be eased when parallel circuits are built to execute neural network models. Even so, connectionist theories of learning face limitations in accounting for learning behavior. Human beings, and even much less intelligent animals, can learn from a single event: an animal that has eaten a food that caused gastric distress will never eat it again. Backpropagation and other connectionist learning techniques cannot explain this kind of one-shot learning.

Strengths and Weaknesses of Neural Network Models

From a neurological perspective, the brain is indeed a neural net, with neurons as units and synapses as connections. Neural networks therefore offer a novel framework for understanding the nature of the mind and how it relates to the brain. Several properties of neural network models suggest that connectionism offers a faithful picture of cognitive processing. The models exhibit robust flexibility in the face of real-world challenges: destroying units or adding noise degrades function gracefully, and the net's responses remain appropriate, though somewhat less accurate. Neural networks are also well suited to problems that require resolving many conflicting constraints in parallel. Evidence from AI research affirms that cognitive activities such as object recognition, planning, and coordinated motion present exactly such problems. Although classical systems can also perform multiple constraint satisfaction, neural network models provide a natural mechanism for it.


Characterizing ordinary concepts with necessary and sufficient conditions seems doomed to failure: there is always a counterexample waiting in the wings. For instance, one may suggest that a tiger is a large black-and-orange feline. But what of albino tigers? Cognitive psychologists and philosophers affirm that categories are delimited in more flexible ways, for instance through similarity to a prototype or through family resemblance. Connectionist models are well suited to such graded notions of category membership, since nets can exploit subtle statistical patterns that are hard to express as hard-and-fast rules. Connectionism thus promises to explain the flexibility of human intelligence by means that are difficult to capture in the form of exception-free principles, and in the process it avoids the brittleness of standard symbolic representation.


Despite these intriguing strengths, connectionist models have weaknesses that bear mentioning. Most neural network models are highly abstract and capture, at best, specific features of the brain. Most neither model the variety of brain neurons nor explain the effects of hormones and neurotransmitters. Furthermore, the brain lacks the reverse connections that backpropagation learning would require, and the many repetitions such training methods demand are not realistic either. Considerable care and attention will therefore be needed before convincing connectionist models of human cognitive processing can be built, and a thoughtful objection must be addressed: it is widely held among classicists that neural networks are poor at the rule-based processing thought to undergird language, reasoning, and higher forms of thought.

Neural Network and Epistemology

Neural networks share features with the behavioristic learning theory of psychology, and so can answer some of the conventional questions of epistemology. The first epistemological question, what is learned, addresses the content or substance of learning: it defines the basic units of learning, including the stimuli and inputs reaching the subject from the external surroundings, and emphasizes what is formed internally. The second question, how it is learned, addresses the mechanisms of learning, that is, how the units become connected internally. The third question, nature or nurture, identifies the innate and environmental factors responsible for learning.


Connectionist models depend on massive, nonlinear interaction among the elements of a network; their knowledge therefore cannot be understood or explained through any single connection. Neural network models are neither hardwired nor programmed: they learn by having patterns presented to them repeatedly until the network adapts. Patterns such as images, features, sounds, characters, and waves serve as the inputs and outputs of a neural network. Connectionist models are learning systems in that they change their behavior, through experience, by changing their inner representations. The learning processes in a neural network resemble those of the human brain, although the networks themselves are not alive. Behavioral changes such as innate responses, maturation, and growth do not appear in connectionist models and are not treated as learning.


Supervised learning associatively relates input patterns to output patterns and can be likened to the empiricist strand of connectionist theory. It emphasizes finding a set of connections such that, whenever a particular pattern appears on one set of neurons, the desired pattern appears on the second set; it pays little attention to the model's inner representation. The contiguity of patterns in supervised learning is analogous to Pavlovian conditioning: in both cases learning is continuous, requiring repeated trial-and-error sequences to establish an appropriate connection between stimulus and response. Behavioristic learning theory is a realistic comparison, since learning in the Pavlovian scheme is likewise stimulus-dependent: learning is the product of an S-R bond built by rote repetition and by the contingency of stimulus and response. A standard backpropagation neural network thus corresponds to the behavioristic learning paradigm.


Unsupervised learning does not require an external teacher to guide the learning process. It is therefore worth asking whether we really receive the correct answers from outside when we learn, and what psychological or biological mechanism could calculate and convey an error term, the difference between the actual and the desired output. In some learning, human beings do correct themselves to accommodate inappropriate conditions through trial and error; we can self-organize even while learning. Unsupervised learning is thus a far more plausible paradigm when compared to how we actually learn: it can detect features in data sets and discriminate among them without reference to an external corrector. Although unsupervised learning exhibits traits of the empiricist doctrine of learning, such as the contiguity principle of clustering inputs to produce a relevant response, it also has aspects of the rationalistic doctrine[7]. For instance, some unsupervised learning develops internal topological representations of the inputs that resemble the isomorphism of Gestalt theory. Harmony theory and adaptive resonance theory can be used to examine the rationalistic claims. The neural network model assumes that knowledge is represented as connections among neurons and that no high-level symbolic knowledge or explicit rules are present[8]. The conventional artificial intelligence method, by contrast, represents knowledge as a large rule-based conceptual structure. Learning thus occurs at both the symbolic and the subsymbolic level.
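Self-organization without an external corrector can be illustrated with a small competitive-learning sketch; the data, prototype starting points, and learning rate below are all invented for the example. Each input is claimed by the nearest prototype unit, which then moves toward it, so clusters emerge with no teacher and no error signal from outside.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two made-up clusters of points, around (0, 0) and (1, 1); the net is
# never told which point belongs to which cluster.
data = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
                  rng.normal(1.0, 0.1, size=(20, 2))])

# Two prototype units with illustrative starting positions.
prototypes = np.array([[0.2, 0.2], [0.8, 0.8]])

for _ in range(50):
    for x in rng.permutation(data):
        # The closest prototype "wins" the input...
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        # ...and moves a small step toward it.
        prototypes[winner] += 0.1 * (x - prototypes[winner])
```

After training, each prototype has drifted to the center of one cluster: the structure in the data was discovered rather than taught, which is the sense of self-organization appealed to above.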


Although the supervised and unsupervised learning paradigms cut across the epistemological doctrines of learning, their properties are consistent with those doctrines. Learning is an integrated part of a system and cannot simply be imposed by external mechanisms; real learning adapts directly to the environment in which it operates. In the same way, learning in connectionism proceeds by repetitive processes until knowledge emerges.

 

Works Cited


Ellingsen, Barry Kristian, and Harald Grimen. "The Epistemology of Learning in Artificial Neural Networks." (1994).


Karaminis, Themis N., and Michael SC Thomas. "Connectionism." Encyclopedia of the Sciences of Learning. Springer, Boston, MA, 2012. 767-771.


"Connectionism (Stanford Encyclopedia of Philosophy)". Plato.Stanford.Edu, 2015, https://plato.stanford.edu/entries/connectionism/. Accessed 6 Dec 2018.


[1]


"Connectionism (Stanford Encyclopedia of Philosophy)", Plato.Stanford.Edu, 2015, https://plato.stanford.edu/entries/connectionism/. Accessed 6 Dec 2018.


[2]


Ibid.


[3]


"Connectionism (Stanford Encyclopedia of Philosophy)", Plato.Stanford.Edu, 2015, https://plato.stanford.edu/entries/connectionism/. Accessed 6 Dec 2018.


[4]


"Connectionism (Stanford Encyclopedia of Philosophy)", Plato.Stanford.Edu, 2015, https://plato.stanford.edu/entries/connectionism/. Accessed 6 Dec 2018.


[5]


Karaminis, Themis and Michael SC Thomas, "Connectionism." Encyclopedia of the Sciences of Learning. Springer, Boston, MA, 2012. 767-771.


[6]


Ibid., 30


[7]


Ellingsen, Barry Kristian, and Harald Grimen. "The Epistemology of Learning in Artificial Neural Networks." (1994).


[8]


Ibid.
