leads for me to read further:
https://en.wikipedia.org/wiki/Independent_component_analysis#Defining_component_independence (ICA)
https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction (e.g. SOM)
https://en.wikipedia.org/wiki/Factor_analysis (e.g. PCA)
To clean up my attic, and to pick up the current nomenclature so I don't share my ramblings in vain.
I am probably least likely to be wrong about the second one, but even that must be bigger than one small brain can fully capture.
just saw this:
https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Nonlinear_PCA
It uses a neural network as a basis of functions to transform the input space into another one in which to seek better "separation" and "congruence" at the same time... I wonder whether that is also guided by dispersion. My words are in quotes because they might not be terms of art.
Quoting the small paragraph there:
> Nonlinear PCA (NLPCA) uses backpropagation to train a multi-layer perceptron (MLP) to fit to a manifold.[37] Unlike typical MLP training, which only updates the weights, NLPCA updates both the weights and the inputs. That is, both the weights and inputs are treated as latent values. After training, the latent inputs are a low-dimensional representation of the observed vectors, and the MLP maps from that low-dimensional representation to the high-dimensional observation space.
That paragraph takes some work to understand the "latent" thing as applied to changing the inputs. In linear PCA, changing the inputs would correspond to considering new combinations of the raw dimensions of the original input space, while the NN weights would correspond to the linear-combination weights explored in PCA.
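To make that linear analogy concrete for myself, here is a minimal numpy sketch (toy data and variable names are my own): PCA finds the "weights" (principal directions) and the low-dimensional "latent" scores that best reconstruct the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 3-D lying near a 1-D line, plus a little noise.
t = rng.normal(size=(200, 1))
X = t @ np.array([[2.0, -1.0, 0.5]]) + 0.05 * rng.normal(size=(200, 3))
X -= X.mean(axis=0)  # PCA assumes centered data

# PCA via SVD: rows of Vt are the principal directions (the "weights"),
# and the scores Z are the low-dimensional "latent" inputs.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 1
Z = X @ Vt[:k].T        # latent representation, shape (200, 1)
X_hat = Z @ Vt[:k]      # best rank-k linear reconstruction back in 3-D

explained = S[0] ** 2 / (S ** 2).sum()  # fraction of variance captured
```

In the NLPCA framing, `Vt[:k]` plays the role of the decoder's weights and `Z` the role of the latent inputs; the nonlinear version replaces the linear map `Z @ Vt[:k]` with an MLP.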
I am wondering what task or objective function is being optimized and back-propagated from (though, come to think of it, I wonder the same about PCA; part of my self-critical looping tendencies: am I BS-ing myself and others?).
But the lingo here is about optimizing that objective functional by exploring a bigger space... I have no experience there, but I am curious.
"Latent" might just mean, here, "searched for during optimization". Maybe the weights and the inputs are not searched in the same phases; there might be some alternation, given the nature of the problem, or the full hypercubic set of latent variables might be searched jointly. Would it matter?
Latent might be in the eye of the beholder: it depends on what we feed the monster algorithm and what we target the NN to spit out and, in turn, optimize. But clearly we won't just use either of the non-"hidden" variable sets (the inputs and that output); we seek whichever new "latent" input variables end up optimizing the objective function.
In more typical MLP training, "latent" just refers to the hidden layers between the input layer and the last layer (the decision layer, for a classification task).
It seems, though, that they call the MLP weights latent "values". That might be my misreading, but in my understanding the weights do determine the transformed variables, so maybe the slight slip is warranted; otherwise we end up like me, rambling for hours on end (and steam may come out of the reader's ears). The latent things are the input variables as transformed through the NN's entrails, and so are the weights of each unit that determine those variables (as each unit's output). "Values searched for during the process" seems a good enough interpretation of "latent".
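A minimal sketch of what I understand the NLPCA setup to be (my own toy construction, not the referenced paper's code): a small decoder MLP maps a per-sample latent code z to the observed x, and gradient descent updates both the weights and the latent inputs Z.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 points on a noisy parabola embedded in 2-D.
t = rng.uniform(-1.0, 1.0, size=(100, 1))
X = np.hstack([t, t ** 2]) + 0.01 * rng.normal(size=(100, 2))

n, d, k, h = 100, 2, 1, 8  # samples, observed dims, latent dims, hidden units

# Decoder MLP: x_hat = tanh(z @ W1 + b1) @ W2 + b2.
# NLPCA treats BOTH the weights AND the per-sample latent inputs Z
# as free parameters of the optimization ("latent values").
Z = rng.normal(scale=0.1, size=(n, k))
W1 = rng.normal(scale=0.5, size=(k, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, d)); b2 = np.zeros(d)

lr = 0.05
for _ in range(5000):
    H = np.tanh(Z @ W1 + b1)   # hidden activations
    X_hat = H @ W2 + b2        # reconstruction
    R = X_hat - X              # residuals of the squared-error loss

    # Backpropagate the reconstruction error through the decoder...
    dH = (R @ W2.T) * (1.0 - H ** 2)
    gW2 = (H.T @ R) / n; gb2 = R.mean(axis=0)
    gW1 = (Z.T @ dH) / n; gb1 = dH.mean(axis=0)
    # ...and, unlike ordinary MLP training, also into the inputs:
    # each z_i only affects its own reconstruction, so this is a
    # per-sample gradient step on the latent codes.
    gZ = dH @ W1.T

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    Z -= lr * gZ

# After training, Z is a 1-D code for each point and the MLP maps
# that code back to the 2-D observation space.
X_hat = np.tanh(Z @ W1 + b1) @ W2 + b2
mse = np.mean((X_hat - X) ** 2)
```

With a linear decoder and orthogonality constraints this would collapse back to PCA; the tanh layer is what lets the fitted "curve" bend to follow the manifold.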