Is knowledge of the class of the probability density function required for explicit density estimation?
Summary
In deep learning, models may learn the probability distribution that generated a dataset. Observe the following paragraph from Chapter 5: Machine Learning Basics of the book Deep Learning (by Aaron Courville et al.): unsupervised learning algorithms experience a dataset containing many features, then learn useful properties of the structure of this dataset. In the context of deep learning, we usually want to learn the entire probability distribution that generated the dataset, whether explicitly, as in the density estimation problem, or implicitly, for tasks such as synthesis or denoising. Other unsupervised learning algorithms perform other roles, such as clustering, which consists of dividing the dataset into clusters of similar examples. I read about density estimation in the same chapter, as given below: in the density estimation problem, the machine learning algorithm is asked to learn a function $p_{model} : \mathbb{R}^n \rightarrow \mathbb{R}$, where $p_{model}(x)$ can be interpreted as a probability density function.
Full text
Asked 4 years, 4 months ago Modified today Viewed 76 times
In deep learning, models may learn the probability distribution that generated the dataset. Observe the following paragraph from Chapter 5: Machine Learning Basics of the book Deep Learning (by Aaron Courville et al.):
Unsupervised learning algorithms experience a dataset containing many features, then learn useful properties of the structure of this dataset. In the context of deep learning, we usually want to learn the entire probability distribution that generated a dataset, whether explicitly, as in density estimation , or implicitly, for tasks like synthesis or denoising. Some other unsupervised learning algorithms perform other roles, like clustering, which consists of dividing the dataset into clusters of similar examples.
I read about density estimation in the same chapter, as given below:
In the density estimation problem, the machine learning algorithm is asked to learn a function $p_{model} : \mathbb{R}^n \rightarrow \mathbb{R}$, where $p_{model}(x)$ can be interpreted as a probability density function (if $x$ is continuous) or a probability mass function (if $x$ is discrete) on the space that the examples were drawn from.
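To make the definition concrete, here is a minimal sketch of what such a function looks like for $n = 1$. The choice of a standard normal as the hypothetical $p_{model}$ is purely for illustration; nothing in the definition requires any particular class of density:

```python
import numpy as np

# Hypothetical p_model: the standard normal density on R (the n = 1 case).
# This particular choice is an illustration only.
def p_model(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# p_model(x) is a density, not a probability: individual values can exceed 1
# for other densities, and only integrals over regions yield probabilities.
grid = np.linspace(-8.0, 8.0, 1601)
total_mass = np.trapz(p_model(grid), grid)  # should be close to 1
```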
This question is focused on explicit probability density estimation in the continuous case, i.e., learning the density function $p_{model}$ directly.
Suppose I have a dataset $D$ with $n$ continuous random variables (features) $X_1, X_2, X_3, \cdots, X_n$, and I don't know anything about the probability density functions of the individual random variables. That is, I have no information about any $X_i$, such as whether $X_i$ follows a normal distribution or any other distribution. Is it then possible to learn the density function explicitly? Or do I need to provide some necessary information, such as the class of the probability density function to be learned?
I am thinking as follows: if I have some information about $X_i$, such as that $X_i$ follows a well-known distribution, then I can learn the parameters of the underlying density function from $D$. So, is it mandatory to know something about the underlying probability density function?
asked Aug 28, 2021 at 12:46 by hanugm
1 Answer
Neural networks can approximate any function. Quoting the essence in case the article is removed in the future:
The key to neural networks’ ability to approximate any function is that they incorporate non-linearity into their architecture. Each layer is associated with an activation function that applies a non-linear transformation to the output of that layer.
So no, knowing the class of the probability density function is not required to approximate it via Deep Learning. With a large enough number of samples you could construct an approximation.
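The answer's point has a classical nonparametric analogue that is easy to demonstrate: a kernel density estimate builds an explicit density from samples alone, with no assumption about the class of the generating distribution (a neural density estimator, such as a normalizing flow, plays the analogous role at scale). The bimodal generator below is made up for illustration; the estimator never sees it:

```python
import numpy as np

def kde(x, samples, bandwidth):
    # Gaussian-kernel density estimate: the average of Gaussian bumps
    # centred on the observed samples. No distributional class is assumed.
    z = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
# Draws from an "unknown" generator (here a bimodal mixture, but the
# estimator is never told that).
samples = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

grid = np.linspace(-6.0, 6.0, 1201)
density = kde(grid, samples, bandwidth=0.3)
# The estimate behaves like a pdf: non-negative, integrating to ~1, and it
# recovers both modes without being told the distribution class.
```

Estimators like this pay for their generality with sample efficiency: the amount of data needed grows rapidly with dimension, which is one motivation for learned neural density estimators in high-dimensional settings.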
answered Aug 28, 2021 at 15:14 by tnfru