What Is a Neural Network?

A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. In this sense, neural networks refer to systems of neurons, either organic or artificial in nature.

Neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria. The concept of neural networks, which has its roots in artificial intelligence, is swiftly gaining popularity in the development of trading systems.

Key Takeaways

  • Neural networks are a series of algorithms that mimic the operations of an animal brain to recognize relationships between vast amounts of data.
  • As such, they tend to resemble the connections of neurons and synapses found in the brain.
  • They are used in a variety of applications in financial services, from forecasting and marketing research to fraud detection and risk assessment.
  • Neural networks with several process layers are known as "deep" networks and are used for deep learning algorithms.
  • The success of neural networks for stock market price prediction varies.

Understanding Neural Networks

Neural networks, in the world of finance, assist in the development of such processes as time-series forecasting, algorithmic trading, securities classification, credit risk modeling, and constructing proprietary indicators and price derivatives.

A neural network works similarly to the human brain’s neural network. A “neuron” in a neural network is a mathematical function that collects and classifies information according to a specific architecture. The network bears a strong resemblance to statistical methods such as curve fitting and regression analysis.

A neural network contains layers of interconnected nodes. Each node is known as a perceptron and is similar to a multiple linear regression. The perceptron feeds the signal produced by a multiple linear regression into an activation function that may be nonlinear.
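
To make that structure concrete, here is a minimal sketch of a single perceptron in Python, assuming NumPy; the inputs and weights are hypothetical values chosen only to illustrate the weighted-sum-plus-activation idea described above.

```python
import numpy as np

def perceptron(x, w, b):
    """One node: a weighted sum, as in multiple linear regression,
    passed through a (possibly nonlinear) activation function."""
    z = np.dot(w, x) + b              # the linear-regression part
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation

# Illustrative inputs and weights (hypothetical, not from the article)
x = np.array([0.5, -1.2, 3.0])   # e.g., three technical indicators
w = np.array([0.4, 0.1, -0.2])
print(perceptron(x, w, b=0.1))   # a single activation between 0 and 1
```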

History of Neural Networks

Though the concept of machines that can think has existed for centuries, the largest strides in neural networks have come in the past 100 years. In 1943, Warren McCulloch and Walter Pitts, from the University of Illinois and the University of Chicago, published "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper analyzed how the brain could produce complex patterns and how those patterns could be simplified down to a binary logic structure with only true/false connections.

Frank Rosenblatt of the Cornell Aeronautical Laboratory is credited with developing the perceptron in 1958. His research introduced weights to McCulloch and Pitts's work, and he leveraged that work to demonstrate how a computer could use neural networks to detect images and make inferences.

Even though there was a dry spell in research during the 1970s (largely due to a dry spell in funding), Paul Werbos is often credited with the primary contribution of that period in his 1974 PhD thesis. Then, in 1982, John Hopfield presented a paper on recurrent neural networks that introduced what is now known as the Hopfield network. In addition, the concept of backpropagation resurfaced, and many researchers began to understand its potential for neural nets.

Most recently, more specific neural network projects have been developed for direct purposes. For example, Deep Blue, developed by IBM, conquered the chess world by pushing the limits of computers' ability to handle complex calculations. Though publicly known for beating the world chess champion, such machines are also leveraged to discover new medicines, identify trends in financial markets, and perform massive scientific calculations.

Recent analysis from Los Alamos National Laboratory allows analysts to compare different neural networks. The paper is considered an important step toward characterizing the behavior of robust neural networks.

Multi-Layered Perceptron

In a multi-layered perceptron (MLP), perceptrons are arranged in interconnected layers. The input layer collects input patterns. The output layer has classifications or output signals to which input patterns may map. For instance, the patterns may comprise a list of quantities for technical indicators about a security; potential outputs could be “buy,” “hold” or “sell.”

Hidden layers fine-tune the input weightings until the neural network’s margin of error is minimal. It is hypothesized that hidden layers extract salient features in the input data that have predictive power regarding the outputs. This describes feature extraction, which serves a purpose similar to statistical techniques such as principal component analysis.
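
The sketch below shows that layered structure in Python with NumPy. The layer sizes and random weights are illustrative assumptions; a real network would learn its weights from data rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer MLP: the hidden layer extracts features from the
    inputs; the output layer maps those features to class scores."""
    h = relu(W1 @ x + b1)    # hidden layer: extracted features
    return W2 @ h + b2       # output layer: one score per class

# Hypothetical shapes: 4 indicator inputs, 8 hidden units, 3 classes
x = rng.normal(size=4)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
scores = mlp_forward(x, W1, b1, W2, b2)
print(["buy", "hold", "sell"][int(np.argmax(scores))])
```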

Types of Neural Networks

Feed-Forward Neural Networks

Feed-forward neural networks are one of the simpler types of neural networks. They convey information in one direction through input nodes, and the information continues to be processed in this single direction until it reaches the output node. Feed-forward neural networks may have hidden layers for added functionality, and this type is most often used for facial recognition technologies.

Recurrent Neural Networks

A more complex type of neural network, recurrent neural networks take the output of a processing node and transmit the information back into the network. This results in theoretical "learning" and improvement of the network. Each node stores its historical states, and these are reused during future processing.

This becomes especially critical for networks in which the prediction is incorrect; the system will attempt to learn why the outcome was incorrect and adjust accordingly. This type of neural network is often used in text-to-speech applications.
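
A minimal sketch of the recurrence, assuming NumPy and illustrative random weights: each step's hidden state feeds back into the next step, which is the "output transmitted back into the network" described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One recurrent step: the new hidden state combines the current
    input with the state carried over from earlier steps."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

# Hypothetical sizes: 2 input features per step, 4 hidden units
Wx, Wh, b = rng.normal(size=(4, 2)), rng.normal(size=(4, 4)), np.zeros(4)
h = np.zeros(4)                        # initial state: no history yet
for x_t in rng.normal(size=(5, 2)):    # a short five-step sequence
    h = rnn_step(x_t, h, Wx, Wh, b)    # state is fed back each step
print(h)                               # final state summarizes the sequence
```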

Convolutional Neural Networks

Convolutional neural networks, also called ConvNets or CNNs, have several layers in which data are sorted into categories. These networks have an input layer, an output layer, and a multitude of hidden convolutional layers in between. The layers create feature maps that record areas of an image, which are broken down further until they generate valuable outputs. These layers can be pooled or fully connected, and such networks are especially beneficial for image recognition applications.
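
The sketch below implements the core operation of one convolutional layer in plain NumPy (as in most deep learning libraries, it is technically a cross-correlation); the toy image and kernel are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and
    record its response at each position, producing a feature map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
kernel = np.array([[1.0, -1.0]])                  # responds to horizontal change
print(conv2d(image, kernel))                      # 5x4 feature map
```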

Deconvolutional Neural Networks

Deconvolutional neural networks work in reverse of convolutional neural networks. The network's purpose is to detect items that might have been recognized as important by a convolutional neural network but were discarded during its execution. This type of neural network is also widely used for image analysis and processing.
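
As a rough sketch of the reverse operation, often called a transposed convolution, the NumPy code below spreads a small feature map back out toward image size; the feature-map and kernel values are hypothetical.

```python
import numpy as np

def transposed_conv2d(feature_map, kernel):
    """Transposed ("de-") convolution: spread each feature-map value
    back over the kernel's footprint, expanding a small map toward
    the original image size."""
    fh, fw = feature_map.shape
    kh, kw = kernel.shape
    out = np.zeros((fh + kh - 1, fw + kw - 1))
    for i in range(fh):
        for j in range(fw):
            out[i:i+kh, j:j+kw] += feature_map[i, j] * kernel
    return out

feature_map = np.array([[1.0, 2.0], [3.0, 4.0]])  # small 2x2 map
kernel = np.ones((2, 2))                          # hypothetical 2x2 kernel
print(transposed_conv2d(feature_map, kernel))     # expanded 3x3 output
```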

Modular Neural Networks

Modular neural networks contain several networks that work independently of one another and do not interact during the analysis process. Instead, the work is divided so that complex, elaborate computations can be carried out more efficiently. As in other modular industries, such as modular real estate, the goal of network independence is to make each module responsible for a particular part of a bigger overall picture.
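
As a rough illustration, the sketch below (NumPy, with hypothetical price and volume features) runs two modules that never exchange information and merges their outputs only at the end.

```python
import numpy as np

rng = np.random.default_rng(2)

def module(x, W):
    """One independent module: a small network responsible for its
    own slice of the overall problem."""
    return np.tanh(W @ x)

# Hypothetical split: one module sees price features, the other volume
# features; the two never communicate while computing.
x_prices, x_volumes = rng.normal(size=3), rng.normal(size=3)
W_prices, W_volumes = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))

combined = np.concatenate([module(x_prices, W_prices),
                           module(x_volumes, W_volumes)])
print(combined)   # module outputs are only merged after both finish
```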

Application of Neural Networks

Neural networks are broadly used, with applications for financial operations, enterprise planning, trading, business analytics, and product maintenance. Neural networks have also gained widespread adoption in business applications such as forecasting and marketing research solutions, fraud detection, and risk assessment.

A neural network evaluates price data and unearths opportunities for making trade decisions based on the data analysis. The networks can distinguish subtle nonlinear interdependencies and patterns other methods of technical analysis cannot. According to research, the accuracy of neural networks in making price predictions for stocks differs. Some models predict the correct stock prices 50 to 60% of the time. Still, others have posited that a 10% improvement in efficiency is all an investor can ask for from a neural network.

Specific to finance, neural networks can process hundreds of thousands of bits of transaction data. This can translate to a better understanding of trading volume, trading range, and correlation between assets, or to setting volatility expectations for certain investments. Since a human may not be able to efficiently pore over years of data (sometimes collected down to one-second intervals), neural networks can be designed to spot trends, analyze outcomes, and predict future asset class value movements.
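
As a toy illustration of this workflow, the sketch below fits a single linear neuron to windows of past returns. The random-walk price series is an assumption standing in for real market data, so the near-coin-flip accuracy it reports is consistent with the mixed prediction results noted above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic random-walk "prices" stand in for real market data
# (an assumption; a real system would use actual price history).
prices = 100 + np.cumsum(rng.normal(0, 1, 300))
returns = np.diff(prices) / prices[:-1]

# Supervised pairs: five past returns -> the next return
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = returns[window:]

# A single linear neuron fitted by gradient descent on squared error
w, b, lr = np.zeros(window), 0.0, 0.1
for _ in range(500):
    err = X @ w + b - y
    w -= lr * X.T @ err / len(y)
    b -= lr * err.mean()

# On a pure random walk, directional accuracy hovers near 50%
print("directional accuracy:", np.mean(np.sign(X @ w + b) == np.sign(y)))
```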

There will always be data sets and task classes that are better analyzed using previously developed algorithms. It is not so much the algorithm that matters; it is the well-prepared input data on the targeted indicator that ultimately determines a neural network's level of success.

Advantages and Disadvantages of Neural Networks

Advantages of Neural Networks

Neural networks can work continuously and are more efficient than humans or simpler analytical models. They can also be programmed to learn from prior outputs and determine future outcomes based on the similarity of new inputs to prior inputs.

Neural networks that leverage cloud or online services also have the benefit of risk mitigation compared with systems that rely on local technology hardware. In addition, neural networks can often perform multiple tasks simultaneously (or at least distribute tasks to be performed by modular networks at the same time).

Last, neural networks are continually being expanded into new applications. While early, theoretical neural networks were very limited in their applicability to different fields, neural networks today are leveraged in medicine, science, finance, agriculture, and security.

Disadvantages of Neural Networks

Though neural networks may rely on online platforms, there is still a hardware component required to create them. This creates a physical risk for a network that relies on complex systems, set-up requirements, and potential physical maintenance.

Though the complexity of neural networks is a strength, this may mean it takes months (if not longer) to develop a specific algorithm for a specific task. In addition, it may be difficult to spot any errors or deficiencies in the process, especially if the results are estimates or theoretical ranges.

Neural networks may also be difficult to audit. Some neural network processes may feel "like a black box": input is entered, the network performs a complicated process, and an output is reported. It may also be difficult for individuals to analyze weaknesses within the calculation or learning process of the network if the network lacks transparency on how the model learns from prior activity.

Neural Networks

Pros
  • Can often work more efficiently and for longer than humans

  • Can be programmed to learn from prior outcomes to strive to make smarter future calculations

  • Often leverage online services that reduce (but do not eliminate) systematic risk

  • Are continually being expanded in new fields with more difficult problems

Cons
  • Still rely on hardware that may require labor and expertise to maintain

  • May take long periods of time to develop the code and algorithms

  • May be difficult to assess errors or adaptations in the assumptions if the system is self-learning but lacks transparency

  • Usually report an estimated range or estimated amount that may not materialize

What Are the Components of a Neural Network?

There are three main components: an input layer, a processing layer, and an output layer. The inputs may be weighted based on various criteria. Within the processing layer, which is hidden from view, there are nodes and connections between these nodes, meant to be analogous to the neurons and synapses in an animal brain.

What Is a Deep Neural Network?

Also known as a deep learning network, a deep neural network, at its most basic, is one that involves two or more processing layers. Deep neural networks rely on machine learning networks that continually evolve by comparing estimated outcomes to actual results, then modifying future projections.
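
A minimal sketch of that compare-and-adjust loop, assuming NumPy and the toy XOR task (chosen because it genuinely requires a hidden layer): the forward pass produces an estimate, the backward pass measures the gap to the actual result, and the weight update modifies future projections.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR, which needs at least one hidden layer to learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(5000):
    # Forward pass: produce an estimated outcome
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: compare the estimate to the actual result ...
    dp = p - y
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # ... then adjust the weights to improve future projections
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(p.round(2))   # predictions typically approach [0, 1, 1, 0]
```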

What Are the 3 Components of a Neural Network?

All neural networks have three main components. First, the input is the data entered into the network to be analyzed. Second, the processing layer uses the data (and prior knowledge of similar data sets) to formulate an expected outcome. That outcome is the third component: the desired end product of the analysis.

The Bottom Line

Neural networks are complex, integrated systems that can perform analytics far deeper and faster than humans can. There are different types of neural networks, often best suited for different purposes and target outputs. In finance, neural networks are used to analyze transaction history, understand asset movement, and predict financial market outcomes.

Article Sources
  1. IBM. "What Are Neural Networks?"

  2. McCulloch, Warren S., and Walter Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bulletin of Mathematical Biophysics, vol. 5, 1943, pp. 115-133.

  3. Rosenblatt, Frank. "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain." Psychological Review, vol. 65, no. 6, 1958, pp. 386-408.

  4. Werbos, Paul. "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences." PhD Thesis, Harvard University, January 1974.

  5. Yi, Zhang and Tan, K.K. "Hopfield Recurrent Neural Networks." Convergence Analysis of Recurrent Neural Networks, vol. 13, 2004, pp. 15–32.

  6. IBM. "Deep Blue."

  7. Jones, Haydn, et al. "If You've Trained One, You've Trained Them All: Inter-Architecture Similarity Increases With Robustness." 38th Conference on Uncertainty in Artificial Intelligence, 2022.

  8. ScienceDirect. "Multilayer Perceptron."

  9. University of Toronto, Department of Computer Science. "Roger Grosse; Lecture 5: Multilayer Perceptrons." Pages 2-3.

  10. ScienceDirect. "Feedforward Neural Network."

  11. Lu, Jing, et al. "Extended Feed Forward Neural Networks with Random Weights for Face Recognition." Neurocomputing, vol. 136, July 2014, pp. 96-102.

  12. IBM. "What Are Recurrent Neural Networks?"

  13. IBM. "What are Convolutional Neural Networks?"

  14. ScienceDirect. "Deconvolution."

  15. Shukla, Anupam, Ritu Tiwari, and Rahul Kala. "Towards Hybrid and Adaptive Computing, A Perspective; Chapter 14, Modular Neural Networks," Pages 307-335. Springer Berlin Heidelberg, 2010.

  16. Pang, Xiongwen, et al. "An Innovative Neural Network Approach for Stock Market Prediction." The Journal of Supercomputing, vol. 76, no. 1, March 2020, pp. 2098-2118.

  17. IBM. "What Is Deep Learning?"
