
What are some information processing models besides multilayer perceptrons (MLPs)?

Summary

Feedforward neural networks, or multilayer perceptrons (MLPs), like the one pictured above, are usually characterized by the fact that every weighted connection can be represented by a continuous real number. Moreover, each node in a layer is connected to every node in the previous and following layers. Are there other information processing models besides FFNNs or MLPs? For example, are there systems in which the topology of the neural network is variable? Or systems in which the connections between nodes are not real numbers?
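The two defining properties above (full connectivity between adjacent layers, and one real-valued weight per connection) can be made concrete with a minimal forward pass. The layer sizes and weight values below are hand-picked purely for illustration:

```python
import math

def dense(x, W, b):
    # Fully connected layer: every output unit receives every input unit,
    # and each connection carries a single real-valued weight.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-z)) for z in v]

# A tiny MLP: 3 inputs -> 2 hidden units -> 1 output.
W1 = [[0.5, -0.2, 0.1],
      [0.3,  0.8, -0.5]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.0]

x = [1.0, 2.0, 3.0]
h = sigmoid(dense(x, W1, b1))  # hidden activations
y = sigmoid(dense(h, W2, b2))  # network output
```

The question below asks what happens when either property is relaxed: when the connection pattern itself can change, or when the weights are not real numbers.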

Full text

What are some information processing models besides MLPs?
Asked Mar 7, 2017 by user2896614; edited May 20, 2022 by nbro. Viewed 717 times. Score: 7.
Tags: neural-networks, deep-learning, reference-request, model-request

Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. Furthermore, each node in a layer is connected to every node in the previous and successive layers.

Are there any other information processing models besides FFNNs or MLPs? For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers?

3 Answers

Answer 1 (score 3, answered Mar 28, 2018 by Andreas Storvik Strauman):

Neural network equivalents that are not (vanilla) feedforward neural nets: structures such as recurrent neural nets (RNNs) and convolutional neural nets (CNNs), and the different architectures within them, are good examples. Within RNNs, notable architectures include Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU); both are well described in Colah's blog post "Understanding LSTMs".

What are some alternative information processing systems besides neural networks? There are many such structures.
Off the top of my head: (restricted) Boltzmann machines, autoencoders, Monte Carlo methods, and radial basis function networks, to name a few. Goodfellow's "Deep Learning" book, which is free online, gives the gist of all the structures mentioned here (most parts require some mathematical background, but the book also explains them quite intuitively). For recurrent neural nets, I again recommend Colah's "Understanding LSTMs".

Is there any system in which the topology of a neural network is variable? It depends on what you mean by the topology of a neural network. When talking about neural networks, topology usually means the way neurons are connected to form a network, which can vary in structure as the network runs and learns. If that is what you mean, then the answer, in short, is yes, in several ways. (If you mean topology in the mathematical sense, the answer would fill a book, so I will assume the former.)

We often apply regularization, both to vanilla neural networks and to other structures. One such technique is called dropout, which randomly removes connections from the network during training (to prevent overfitting, which I will not go into in this post). Another example is the recurrent neural network: RNNs deal with time series and are equipped to handle sequences of different lengths (thus, a "varying structure").

Do neural network systems exist in which complex numbers are used? Yes, there are many papers on complex-valued machine learning structures, and a quick search will give plenty of results. For example, DeepMind's paper "Associative Long Short-Term Memory" explores the use of complex values for an "associative memory".
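To make the complex-valued case concrete, here is a minimal sketch of a dense layer whose weights, biases, and inputs are complex numbers rather than reals. The shapes, values, and the final modulus step are illustrative assumptions, not taken from the Associative LSTM paper:

```python
def complex_dense(x, W, b):
    # Same weighted-sum structure as a real-valued dense layer,
    # but every multiplication and addition is over complex numbers.
    return [sum(w * x_j for w, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

W = [[1 + 1j, 0 - 2j],
     [0.5j,   2 + 0j]]
b = [0j, 1 + 0j]
x = [1 + 0j, 0 + 1j]

z = complex_dense(x, W, b)      # complex pre-activations
y = [abs(v) for v in z]         # one common trick: take the modulus,
                                # which yields a real-valued output again
```

The point is only that nothing in the weighted-sum machinery requires the weights to be real; the design questions are in how to define nonlinearities and losses over complex values.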
Links:
- Goodfellow's "Deep Learning" book: deeplearningbook.org
- Colah's blog post on RNNs: colah.github.io
- DeepMind's Associative LSTM paper: arXiv:1602.03032

Answer 2 (score 0, answered Mar 27, 2018 by Andrew Butler):

To answer the title question: there are many other machine learning models, but neural networks work particularly well on some difficult problems (image classification, speech recognition), which is one reason they have gained popularity. Two particularly simple models are the decision tree and the perceptron; both have redeeming qualities. A decision tree is useful because it produces a model that is easy to understand, while a perceptron is fast and works well on linearly separable data. Another, more advanced, model is the support vector machine.

Is there any system in which the topology of a neural network is variable? Yes, there are many systems in which the network topology is dynamic throughout training. An entire class of methods labeled TWEANNs (topology and weight evolving artificial neural networks) is designed to evolve the topology of the network; one such algorithm is NeuroEvolution of Augmenting Topologies (NEAT) and its descendants (rtNEAT, HyperNEAT, ...).

Answer 3 (score -1, answered May 28, 2018 by pcko1):

A very popular choice is Hidden Markov Models.
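A hidden Markov model is a good example of an information processing model whose "connections" are probabilities rather than learned real-valued weights. Below is a minimal forward-algorithm sketch that computes the likelihood of an observation sequence; the two-state model and its numbers are made up for illustration:

```python
# Two hidden states, two observation symbols (0 and 1).
STATES = 2
pi = [0.6, 0.4]                  # initial state distribution
A = [[0.7, 0.3],                 # A[i][j]: P(next state j | current state i)
     [0.4, 0.6]]
B = [[0.9, 0.1],                 # B[s][o]: P(observe o | state s)
     [0.2, 0.8]]

def forward(obs):
    """Likelihood P(obs | model) via the forward algorithm."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(STATES)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(STATES)) * B[j][o]
                 for j in range(STATES)]
    return sum(alpha)
```

Because `forward` sums over all hidden-state paths, the likelihoods of all possible observation sequences of a given length sum to 1, which is a handy sanity check on the model's parameters.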
7$\begingroup$Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. Furthermore, each node in a layer is connected to every other node in the previous and successive layers.Are there any other information processing models other than FFNNs or MLPs?For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers?neural-networksdeep-learningreference-requestmodel-requestShareImprove this questionFolloweditedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badgesaskedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges$\endgroup$0Add a comment| 7$\begingroup$Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. Furthermore, each node in a layer is connected to every other node in the previous and successive layers.Are there any other information processing models other than FFNNs or MLPs?For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers?neural-networksdeep-learningreference-requestmodel-requestShareImprove this questionFolloweditedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badgesaskedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges$\endgroup$0Add a comment| $\begingroup$Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. 
Furthermore, each node in a layer is connected to every other node in the previous and successive layers.Are there any other information processing models other than FFNNs or MLPs?For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers?neural-networksdeep-learningreference-requestmodel-requestShareImprove this questionFolloweditedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badgesaskedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges$\endgroup$ Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. Furthermore, each node in a layer is connected to every other node in the previous and successive layers.Are there any other information processing models other than FFNNs or MLPs?For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers? Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. Furthermore, each node in a layer is connected to every other node in the previous and successive layers. Are there any other information processing models other than FFNNs or MLPs?For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers? 
neural-networksdeep-learningreference-requestmodel-request neural-networksdeep-learningreference-requestmodel-request neural-networksdeep-learningreference-requestmodel-request ShareImprove this questionFolloweditedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badgesaskedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges ShareImprove this questionFolloweditedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badgesaskedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges ShareImprove this questionFollow ShareImprove this questionFollow ShareImprove this questionFollow Improve this question editedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badges editedMay 20, 2022 at 13:40nbro43k1414 gold badges121121 silver badges221221 bronze badges editedMay 20, 2022 at 13:40 editedMay 20, 2022 at 13:40 nbro43k1414 gold badges121121 silver badges221221 bronze badges 43k1414 gold badges121121 silver badges221221 bronze badges askedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges askedMar 7, 2017 at 20:08user28966141944 silver badges1111 bronze badges askedMar 7, 2017 at 20:08 askedMar 7, 2017 at 20:08 user28966141944 silver badges1111 bronze badges 41944 silver badges1111 bronze badges 3 Answers3Sorted by:Reset to defaultHighest score (default)Date modified (newest first)Date created (oldest first)3$\begingroup$Neural Network equivalents that is not (vanilla) feed forward Neural Nets:Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples.Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post onUnderstanding LSTMsWhat are some alternative information processing system beside neural networkThere are sooo many structures. 
From the top of my head: (Restricted) Boltzmann machine, auto encoders, monte carlo method and radial basis networks to name afew.You can check out Goodfellow'sDeep learning-book that is free online and get the gist of all the structures I mentioned here (most parts requires a bit of math knowledge, but he also writes about them quite intuitively).For Recurrent Neural Nets I recommend Colah's blog post onUnderstanding LSTMsIs there any system in which the topology of a neural network is variable?Depends on what you mean with thetopologyof a neural network:I think in the common meaning of topology when talking about Neural Networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you men then the answer, in short, is yes. In multiple ways actually. On the other hand, if you mean in the mathematical sense, this answer would become a book that I wouldn't feel confortable writing. So I'll assume you mean the first.We often do "regularization", both on vanilla NN and other structures. One of these regularization techniques are calleddropout, which would randomly remove connections from the network as it is training (to prevent something calledoverfitting, which I'm not gonna go into in this post).Another example for another way would be on the Recurrent Neural Network. They deal with time series, and are equipped for dealing with timeseries of different lengths (thus, "varying structure").Does it exist neural net systems where complex numbers are used?Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper onAssociative Long Short-Term Memorywhich explores the use of complex values for an "associative memory". 
Links:Goodfellow's Deep Learning-book:deeplearningbook.orgColah's blogpost on RNN's:colah.github.ioPaper on DeepMinds Associative LSTM:arxiv:1602.03032ShareImprove this answerFolloweditedNov 6 at 8:34CommunityBot1answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges$\endgroup$Add a comment|0$\begingroup$To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeemable qualities. A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.For example, is there any system in which the topology of a neural network is variable?Yes, there are many such systems where the topology of the neural network is dynamic throughout training. An entire class of methods labeled TWEANNs are designed to evolve the topology of the networks, one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and it's descendants rtNEAT, hyperNEAT, ...).ShareImprove this answerFollowansweredMar 27, 2018 at 22:14Andrew Butler58733 silver badges1010 bronze badges$\endgroup$0Add a comment|-1$\begingroup$A very popular choice areHidden Markov Models.ShareImprove this answerFollowansweredMay 28, 2018 at 13:41pcko125122 silver badges77 bronze badges$\endgroup$0Add a comment|You mustlog into answer this question.Start asking to get answersFind the answer to your question by asking.Ask questionExplore related questionsneural-networksdeep-learningreference-requestmodel-requestSee similar questions with these tags. 
3 Answers3Sorted by:Reset to defaultHighest score (default)Date modified (newest first)Date created (oldest first) 3 Answers3Sorted by:Reset to defaultHighest score (default)Date modified (newest first)Date created (oldest first) Sorted by:Reset to defaultHighest score (default)Date modified (newest first)Date created (oldest first) Sorted by:Reset to defaultHighest score (default)Date modified (newest first)Date created (oldest first) Sorted by:Reset to default Highest score (default)Date modified (newest first)Date created (oldest first) 3$\begingroup$Neural Network equivalents that is not (vanilla) feed forward Neural Nets:Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples.Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post onUnderstanding LSTMsWhat are some alternative information processing system beside neural networkThere are sooo many structures. From the top of my head: (Restricted) Boltzmann machine, auto encoders, monte carlo method and radial basis networks to name afew.You can check out Goodfellow'sDeep learning-book that is free online and get the gist of all the structures I mentioned here (most parts requires a bit of math knowledge, but he also writes about them quite intuitively).For Recurrent Neural Nets I recommend Colah's blog post onUnderstanding LSTMsIs there any system in which the topology of a neural network is variable?Depends on what you mean with thetopologyof a neural network:I think in the common meaning of topology when talking about Neural Networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you men then the answer, in short, is yes. In multiple ways actually. 
On the other hand, if you mean in the mathematical sense, this answer would become a book that I wouldn't feel confortable writing. So I'll assume you mean the first.We often do "regularization", both on vanilla NN and other structures. One of these regularization techniques are calleddropout, which would randomly remove connections from the network as it is training (to prevent something calledoverfitting, which I'm not gonna go into in this post).Another example for another way would be on the Recurrent Neural Network. They deal with time series, and are equipped for dealing with timeseries of different lengths (thus, "varying structure").Does it exist neural net systems where complex numbers are used?Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper onAssociative Long Short-Term Memorywhich explores the use of complex values for an "associative memory". Links:Goodfellow's Deep Learning-book:deeplearningbook.orgColah's blogpost on RNN's:colah.github.ioPaper on DeepMinds Associative LSTM:arxiv:1602.03032ShareImprove this answerFolloweditedNov 6 at 8:34CommunityBot1answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges$\endgroup$Add a comment| 3$\begingroup$Neural Network equivalents that is not (vanilla) feed forward Neural Nets:Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples.Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post onUnderstanding LSTMsWhat are some alternative information processing system beside neural networkThere are sooo many structures. 
From the top of my head: (Restricted) Boltzmann machine, auto encoders, monte carlo method and radial basis networks to name afew.You can check out Goodfellow'sDeep learning-book that is free online and get the gist of all the structures I mentioned here (most parts requires a bit of math knowledge, but he also writes about them quite intuitively).For Recurrent Neural Nets I recommend Colah's blog post onUnderstanding LSTMsIs there any system in which the topology of a neural network is variable?Depends on what you mean with thetopologyof a neural network:I think in the common meaning of topology when talking about Neural Networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you men then the answer, in short, is yes. In multiple ways actually. On the other hand, if you mean in the mathematical sense, this answer would become a book that I wouldn't feel confortable writing. So I'll assume you mean the first.We often do "regularization", both on vanilla NN and other structures. One of these regularization techniques are calleddropout, which would randomly remove connections from the network as it is training (to prevent something calledoverfitting, which I'm not gonna go into in this post).Another example for another way would be on the Recurrent Neural Network. They deal with time series, and are equipped for dealing with timeseries of different lengths (thus, "varying structure").Does it exist neural net systems where complex numbers are used?Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper onAssociative Long Short-Term Memorywhich explores the use of complex values for an "associative memory". 
Links:Goodfellow's Deep Learning-book:deeplearningbook.orgColah's blogpost on RNN's:colah.github.ioPaper on DeepMinds Associative LSTM:arxiv:1602.03032ShareImprove this answerFolloweditedNov 6 at 8:34CommunityBot1answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges$\endgroup$Add a comment| $\begingroup$Neural Network equivalents that is not (vanilla) feed forward Neural Nets:Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples.Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post onUnderstanding LSTMsWhat are some alternative information processing system beside neural networkThere are sooo many structures. From the top of my head: (Restricted) Boltzmann machine, auto encoders, monte carlo method and radial basis networks to name afew.You can check out Goodfellow'sDeep learning-book that is free online and get the gist of all the structures I mentioned here (most parts requires a bit of math knowledge, but he also writes about them quite intuitively).For Recurrent Neural Nets I recommend Colah's blog post onUnderstanding LSTMsIs there any system in which the topology of a neural network is variable?Depends on what you mean with thetopologyof a neural network:I think in the common meaning of topology when talking about Neural Networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you men then the answer, in short, is yes. In multiple ways actually. On the other hand, if you mean in the mathematical sense, this answer would become a book that I wouldn't feel confortable writing. So I'll assume you mean the first.We often do "regularization", both on vanilla NN and other structures. 
One of these regularization techniques are calleddropout, which would randomly remove connections from the network as it is training (to prevent something calledoverfitting, which I'm not gonna go into in this post).Another example for another way would be on the Recurrent Neural Network. They deal with time series, and are equipped for dealing with timeseries of different lengths (thus, "varying structure").Does it exist neural net systems where complex numbers are used?Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper onAssociative Long Short-Term Memorywhich explores the use of complex values for an "associative memory". Links:Goodfellow's Deep Learning-book:deeplearningbook.orgColah's blogpost on RNN's:colah.github.ioPaper on DeepMinds Associative LSTM:arxiv:1602.03032ShareImprove this answerFolloweditedNov 6 at 8:34CommunityBot1answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges$\endgroup$ Neural Network equivalents that is not (vanilla) feed forward Neural Nets:Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples.Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post onUnderstanding LSTMsWhat are some alternative information processing system beside neural networkThere are sooo many structures. 
From the top of my head: (Restricted) Boltzmann machine, auto encoders, monte carlo method and radial basis networks to name afew.You can check out Goodfellow'sDeep learning-book that is free online and get the gist of all the structures I mentioned here (most parts requires a bit of math knowledge, but he also writes about them quite intuitively).For Recurrent Neural Nets I recommend Colah's blog post onUnderstanding LSTMsIs there any system in which the topology of a neural network is variable?Depends on what you mean with thetopologyof a neural network:I think in the common meaning of topology when talking about Neural Networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you men then the answer, in short, is yes. In multiple ways actually. On the other hand, if you mean in the mathematical sense, this answer would become a book that I wouldn't feel confortable writing. So I'll assume you mean the first.We often do "regularization", both on vanilla NN and other structures. One of these regularization techniques are calleddropout, which would randomly remove connections from the network as it is training (to prevent something calledoverfitting, which I'm not gonna go into in this post).Another example for another way would be on the Recurrent Neural Network. They deal with time series, and are equipped for dealing with timeseries of different lengths (thus, "varying structure").Does it exist neural net systems where complex numbers are used?Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper onAssociative Long Short-Term Memorywhich explores the use of complex values for an "associative memory". 
Links:Goodfellow's Deep Learning-book:deeplearningbook.orgColah's blogpost on RNN's:colah.github.ioPaper on DeepMinds Associative LSTM:arxiv:1602.03032 Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples. Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post onUnderstanding LSTMs There are sooo many structures. From the top of my head: (Restricted) Boltzmann machine, auto encoders, monte carlo method and radial basis networks to name afew. You can check out Goodfellow'sDeep learning-book that is free online and get the gist of all the structures I mentioned here (most parts requires a bit of math knowledge, but he also writes about them quite intuitively). For Recurrent Neural Nets I recommend Colah's blog post onUnderstanding LSTMs Depends on what you mean with thetopologyof a neural network: I think in the common meaning of topology when talking about Neural Networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you men then the answer, in short, is yes. In multiple ways actually. On the other hand, if you mean in the mathematical sense, this answer would become a book that I wouldn't feel confortable writing. So I'll assume you mean the first. We often do "regularization", both on vanilla NN and other structures. One of these regularization techniques are calleddropout, which would randomly remove connections from the network as it is training (to prevent something calledoverfitting, which I'm not gonna go into in this post). Another example for another way would be on the Recurrent Neural Network. They deal with time series, and are equipped for dealing with timeseries of different lengths (thus, "varying structure"). 
Goodfellow's Deep Learning-book:deeplearningbook.org Colah's blogpost on RNN's:colah.github.io Paper on DeepMinds Associative LSTM:arxiv:1602.03032 ShareImprove this answerFolloweditedNov 6 at 8:34CommunityBot1answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges ShareImprove this answerFolloweditedNov 6 at 8:34CommunityBot1answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges ShareImprove this answerFollow ShareImprove this answerFollow ShareImprove this answerFollow editedNov 6 at 8:34CommunityBot1 editedNov 6 at 8:34CommunityBot1 answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges answeredMar 28, 2018 at 9:33Andreas Storvik Strauman49133 silver badges1515 bronze badges answeredMar 28, 2018 at 9:33 answeredMar 28, 2018 at 9:33 Andreas Storvik Strauman49133 silver badges1515 bronze badges 49133 silver badges1515 bronze badges 0$\begingroup$To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeemable qualities. A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.For example, is there any system in which the topology of a neural network is variable?Yes, there are many such systems where the topology of the neural network is dynamic throughout training. 
An entire class of methods labeled TWEANNs are designed to evolve the topology of the networks, one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and it's descendants rtNEAT, hyperNEAT, ...).ShareImprove this answerFollowansweredMar 27, 2018 at 22:14Andrew Butler58733 silver badges1010 bronze badges$\endgroup$0Add a comment| 0$\begingroup$To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeemable qualities. A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.For example, is there any system in which the topology of a neural network is variable?Yes, there are many such systems where the topology of the neural network is dynamic throughout training. An entire class of methods labeled TWEANNs are designed to evolve the topology of the networks, one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and it's descendants rtNEAT, hyperNEAT, ...).ShareImprove this answerFollowansweredMar 27, 2018 at 22:14Andrew Butler58733 silver badges1010 bronze badges$\endgroup$0Add a comment| $\begingroup$To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeemable qualities. 
A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.For example, is there any system in which the topology of a neural network is variable?Yes, there are many such systems where the topology of the neural network is dynamic throughout training. An entire class of methods labeled TWEANNs are designed to evolve the topology of the networks, one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and it's descendants rtNEAT, hyperNEAT, ...).ShareImprove this answerFollowansweredMar 27, 2018 at 22:14Andrew Butler58733 silver badges1010 bronze badges$\endgroup$ To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeemable qualities. A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.For example, is there any system in which the topology of a neural network is variable?Yes, there are many such systems where the topology of the neural network is dynamic throughout training. An entire class of methods labeled TWEANNs are designed to evolve the topology of the networks, one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and it's descendants rtNEAT, hyperNEAT, ...). To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity. 
Score: -1

Hidden Markov Models are a very popular choice.

answered May 28, 2018 at 13:41 by pcko1
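An HMM fits the question well because its "connections" are transition and emission probabilities rather than learned real-valued weights between layers. A minimal sketch of the forward algorithm, using a toy weather model of my own choosing, shows how it scores an observation sequence:

```python
# Forward algorithm for a toy HMM: hidden weather states emit observed activities.
states = ["Rainy", "Sunny"]
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(observations):
    """Return P(observation sequence) by dynamic programming over hidden states."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["walk", "shop", "clean"]))  # ~0.0336
```

The same dynamic-programming table, with sums replaced by maxima, gives the Viterbi algorithm for recovering the most likely hidden-state sequence.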