
Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data.[1] Other frameworks in the spectrum of supervisions include weak- or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.[2]

Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive text corpus obtained by web crawling, with only minor filtering (such as Common Crawl). This compares favorably to supervised learning, where the dataset (such as the ImageNet1000) is typically constructed manually, which is much more expensive.

Some algorithms were designed specifically for unsupervised learning, such as clustering algorithms like k-means, dimensionality reduction techniques like principal component analysis (PCA), Boltzmann machine learning, and autoencoders. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by gradient descent, adapted to performing unsupervised learning by designing an appropriate training procedure.

Sometimes a trained model can be used as-is, but more often it is modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset, before finetuning it for other applications, such as text classification.[3][4] As another example, autoencoders are trained to produce good features, which can then be used as a module for other models, such as in a latent diffusion model.

Tasks

Figure: Tendency for a task to employ supervised vs. unsupervised methods. Task names straddling the circle boundaries are intentional: the classical division, in which imaginative tasks (left) employ unsupervised methods, is blurred in today's learning schemes.

Tasks are often categorized as discriminative (recognition) or generative (imagination). Often but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones (see the Venn diagram above); however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward, some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of dropout, ReLU, and adaptive learning rates.

A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of it is removed, and the model must infer the removed part. This is particularly clear for denoising autoencoders and BERT.
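A minimal sketch of this fill-in-the-blank objective, using BERT-style random token masking on a toy sentence (the 15% mask fraction and the [MASK] symbol are illustrative assumptions, not a prescription):

    import random

    def mask_tokens(tokens, mask_fraction=0.15, mask_symbol="[MASK]"):
        """Hide a random subset of tokens; the training target is the hidden originals."""
        corrupted, targets = [], {}
        for i, tok in enumerate(tokens):
            if random.random() < mask_fraction:
                corrupted.append(mask_symbol)
                targets[i] = tok          # the model must reconstruct these positions
            else:
                corrupted.append(tok)
        return corrupted, targets

    sentence = "unsupervised learning finds structure in unlabeled data".split()
    corrupted, targets = mask_tokens(sentence)
    print(corrupted)   # input shown to the model
    print(targets)     # positions the model is trained to predict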

Neural network architectures


Training


During the learning phase, an unsupervised network tries to mimic the data it's given and uses the error in its mimicked output to correct itself (i.e. correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high energy state in the network.

In contrast to supervised methods' dominant use of backpropagation, unsupervised learning also employs other methods including: Hopfield learning rule, Boltzmann learning rule, Contrastive Divergence, Wake Sleep, Variational Inference, Maximum Likelihood, Maximum A Posteriori, Gibbs Sampling, and backpropagating reconstruction errors or hidden state reparameterizations. See the table below for more details.

Energy


An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas' macroscopic energy from the microscopic probabilities of particle motion, p ∝ e^(−E/kT), where k is the Boltzmann constant and T is temperature. In the RBM network the relation is p = e^(−E) / Z,[5] where p and E vary over every possible activation pattern and Z = Σ_(all patterns) e^(−E(pattern)). To be more precise, p(a) = e^(−E(a)) / Z, where a is an activation pattern of all neurons (visible and hidden). Hence, some early neural networks bear the name Boltzmann machine. Paul Smolensky calls −E the Harmony; a network seeks low energy, which is high Harmony.
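As a small worked example of these formulas, the sketch below enumerates every activation pattern of a tiny RBM with randomly chosen weights (an assumption for illustration), computes the standard RBM energy E(v, h) = −a·v − b·h − vᵀWh, and normalizes e^(−E) into probabilities, so that low energy corresponds to high probability:

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 3, 2
    W = rng.normal(scale=0.5, size=(n_visible, n_hidden))  # visible-hidden weights
    a = rng.normal(scale=0.5, size=n_visible)               # visible biases
    b = rng.normal(scale=0.5, size=n_hidden)                # hidden biases

    def energy(v, h):
        # Standard RBM energy: E(v, h) = -a.v - b.h - v^T W h
        return -a @ v - b @ h - v @ W @ h

    patterns = [(np.array(v), np.array(h))
                for v in itertools.product([0, 1], repeat=n_visible)
                for h in itertools.product([0, 1], repeat=n_hidden)]
    Z = sum(np.exp(-energy(v, h)) for v, h in patterns)      # partition function
    probs = [np.exp(-energy(v, h)) / Z for v, h in patterns]
    print(f"sum of probabilities = {sum(probs):.6f}")        # ~1.0: low energy <-> high probability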

Networks


This table shows connection diagrams of various unsupervised networks, the details of which will be given in the section Comparison of Networks. Circles are neurons and edges between them are connection weights. As network design changes, features are added on to enable new capabilities or removed to make learning faster. For instance, neurons change between deterministic (Hopfield) and stochastic (Boltzmann) to allow robust output, weights are removed within a layer (RBM) to hasten learning, or connections are allowed to become asymmetric (Helmholtz).

Hopfield: a network based on magnetic domains in iron with a single self-connected layer. It can be used as a content-addressable memory.
Boltzmann: the network is separated into 2 layers (hidden vs. visible), but still uses symmetric 2-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.
RBM: Restricted Boltzmann Machine. This is a Boltzmann machine where lateral connections within a layer are prohibited to make analysis tractable.
Stacked Boltzmann: this network has multiple RBMs to encode a hierarchy of hidden features. After a single RBM is trained, another hidden layer is added, and the top two layers are trained as a new RBM. Thus the middle layers of an RBM act as hidden or visible, depending on the training phase it is in.
Helmholtz: instead of the bidirectional symmetric connections of the stacked Boltzmann machines, we have separate one-way connections that form a loop. It does both generation and discrimination.
Autoencoder: a feed-forward network that aims to find a good middle-layer representation of its input world. This network is deterministic, so it is not as robust as its successor, the VAE.
VAE: applies variational inference to the autoencoder. The middle layer is a set of means & variances for Gaussian distributions. The stochastic nature allows for more robust imagination than the deterministic autoencoder.

Of the networks bearing people's names, only Hopfield worked directly with neural networks. Boltzmann and Helmholtz came before artificial neural networks, but their work in physics and physiology inspired the analytical methods that were used.

History

1974 Ising magnetic model proposed by W. A. Little for cognition.
1980 Kunihiko Fukushima introduces the neocognitron, which is later called a convolutional neural network. It is mostly used in supervised learning, but deserves a mention here.
1982 Ising variant Hopfield net described as CAMs and classifiers by John Hopfield.
1983 Ising variant Boltzmann machine with probabilistic neurons described by Hinton & Sejnowski following Sherrington & Kirkpatrick's 1975 work.
1986 Paul Smolensky publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
1995 Schmidhuber introduces the LSTM neuron for languages.
1995 Dayan & Hinton introduce the Helmholtz machine.
2013 Kingma, Rezende, & co. introduce variational autoencoders as a Bayesian graphical probability network, with neural nets as components.

Specific Networks


Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.

Hopfield Network
Ferromagnetism inspired Hopfield networks. A neuron corresponds to an iron domain with binary magnetic moments Up and Down, and neural connections correspond to the domains' influence on each other. Symmetric connections enable a global energy formulation. During inference the network updates each state using the standard activation step function. Symmetric weights and the right energy function guarantee convergence to a stable activation pattern; asymmetric weights are difficult to analyze. Hopfield nets are used as content-addressable memories (CAM).
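A minimal sketch of a Hopfield net used as a content-addressable memory, assuming the classical one-shot Hebbian storage rule and ±1 states (the stored patterns and sizes are illustrative):

    import numpy as np

    def store(patterns):
        """Hebbian storage: sum of outer products, zero diagonal (no self-connections)."""
        n = patterns.shape[1]
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0.0)
        return W / n

    def recall(W, state, steps=10):
        """Asynchronous updates with the sign step function; converges for symmetric W."""
        state = state.copy()
        for _ in range(steps):
            for i in np.random.permutation(len(state)):
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
    W = store(patterns)
    noisy = np.array([-1, -1, 1, -1, 1, -1])   # corrupted copy of the first pattern (first bit flipped)
    print(recall(W, noisy))                    # content-addressable recall of the stored pattern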
Boltzmann Machine
These are stochastic Hopfield nets. Each state value is sampled from a probability density function as follows: suppose a binary neuron fires with the Bernoulli probability p(1) = 1/3 and rests with p(0) = 2/3. One samples from it by taking a uniformly distributed random number y and plugging it into the inverted cumulative distribution function, which in this case is the step function thresholded at 2/3. The inverse function is f(y) = { 0 if y <= 2/3, 1 if y > 2/3 }.
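The sampling recipe above can be transcribed directly; the firing probability of 1/3 is the example's own:

    import random

    def sample_binary(p_fire, rng=random):
        """Inverse-CDF sampling: draw y ~ Uniform(0,1) and threshold at P(rest) = 1 - p_fire."""
        y = rng.random()
        return 1 if y > 1 - p_fire else 0   # with p_fire = 1/3, the threshold sits at 2/3

    samples = [sample_binary(1/3) for _ in range(10_000)]
    print(sum(samples) / len(samples))      # empirical firing rate, approximately 1/3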
Sigmoid Belief Net
Introduced by Radford Neal in 1992, this network applies ideas from probabilistic graphical models to neural networks. A key difference is that nodes in graphical models have pre-assigned meanings, whereas Belief Net neurons' features are determined after training. The network is a sparsely connected directed acyclic graph composed of binary stochastic neurons. The learning rule comes from maximum likelihood on p(X): Δwij ∝ sj * (si − pi), where pi = 1 / (1 + e^(−weighted inputs into neuron i)). The sj are activations from an unbiased sample of the posterior distribution, which is problematic due to the Explaining Away problem raised by Judea Pearl. Variational Bayesian methods use a surrogate posterior and blatantly disregard this complexity.
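A sketch of a single weight update under this rule, assuming the s values are sampled binary states, pi is the logistic function of neuron i's summed input, and the sizes and learning rate are arbitrary illustrative choices:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(1)
    n_parents, lr = 4, 0.1
    w_i = rng.normal(size=n_parents)                  # weights from parent neurons j into neuron i

    s_parents = rng.integers(0, 2, size=n_parents)    # sampled parent states s_j
    p_i = sigmoid(w_i @ s_parents)                    # p_i = logistic of the weighted input into i
    s_i = int(rng.random() < p_i)                     # sampled state of neuron i

    delta_w = lr * s_parents * (s_i - p_i)            # delta w_ij ~ s_j * (s_i - p_i)
    w_i += delta_w
    print(delta_w)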
Deep Belief Network
Introduced by Hinton, this network is a hybrid of the RBM and the Sigmoid Belief Network. The top two layers are an RBM and the layers below form a sigmoid belief network. One trains it by the stacked-RBM method and then throws away the recognition weights below the top RBM. As of 2009, 3–4 layers seemed to be the optimal depth.[6]
Helmholtz machine
These are early inspirations for variational autoencoders. The Helmholtz machine's two networks are combined into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input".[7] The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine's generation mode, the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has 3 layers.
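A simplified sketch of the wake-sleep idea behind this two-network design, with separate recognition (bottom-up) and generative (top-down) weight matrices; the layer sizes, the random stand-in data, and the omission of bias terms are all simplifying assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1 / (1 + np.exp(-x))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)   # stochastic binary neurons

    n_data, n_mid, n_top, lr = 6, 4, 2, 0.05
    R1 = rng.normal(scale=0.1, size=(n_data, n_mid))   # recognition weights (bottom-up)
    R2 = rng.normal(scale=0.1, size=(n_mid, n_top))
    G2 = rng.normal(scale=0.1, size=(n_top, n_mid))    # generative weights (top-down)
    G1 = rng.normal(scale=0.1, size=(n_mid, n_data))

    for _ in range(200):
        x = rng.integers(0, 2, size=n_data).astype(float)        # stand-in for a data vector
        # Wake phase: recognize bottom-up, train generative weights to reproduce each layer below.
        h1 = sample(sigmoid(x @ R1))
        h2 = sample(sigmoid(h1 @ R2))
        G2 += lr * np.outer(h2, h1 - sigmoid(h2 @ G2))
        G1 += lr * np.outer(h1, x - sigmoid(h1 @ G1))
        # Sleep phase: dream top-down, train recognition weights to recover the dream's causes.
        d2 = sample(np.full(n_top, 0.5))
        d1 = sample(sigmoid(d2 @ G2))
        d0 = sample(sigmoid(d1 @ G1))
        R2 += lr * np.outer(d1, d2 - sigmoid(d1 @ R2))
        R1 += lr * np.outer(d0, d1 - sigmoid(d0 @ R1))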
Variational autoencoder
These are inspired by Helmholtz machines and combine probabilistic networks with neural networks. An autoencoder is a 3-layer CAM network, where the middle layer is supposed to be some internal representation of the input patterns. The encoder neural network is a probability distribution qφ(z given x) and the decoder network is pθ(x given z). The weights are named phi & theta rather than W and V as in Helmholtz, a cosmetic difference. These two networks can be fully connected, or use another NN scheme.
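A minimal forward-pass sketch of the encoder / sampler / decoder split and the loss it optimizes; the single linear layers, the Bernoulli decoder, and the sizes are simplifying assumptions, and a real VAE would train both networks by backpropagation:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda a: 1 / (1 + np.exp(-a))

    n_x, n_z = 8, 2
    W_enc = rng.normal(scale=0.3, size=(n_x, 2 * n_z))   # encoder q_phi(z|x): outputs means and log-variances
    W_dec = rng.normal(scale=0.3, size=(n_z, n_x))       # decoder p_theta(x|z)

    def forward(x):
        stats = x @ W_enc
        mu, log_var = stats[:n_z], stats[n_z:]            # middle layer = Gaussian means & (log) variances
        eps = rng.normal(size=n_z)
        z = mu + np.exp(0.5 * log_var) * eps              # reparameterization trick: sample, yet keep gradients
        x_hat = sigmoid(z @ W_dec)                        # decoded reconstruction
        recon = -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))   # Bernoulli reconstruction error
        kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))          # KL(q(z|x) || N(0, I))
        return x_hat, recon + kl                          # negative ELBO = reconstruction error + KL term

    x = rng.integers(0, 2, size=n_x).astype(float)
    x_hat, loss = forward(x)
    print(loss)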

Comparison of networks

Usage & notables. Hopfield: CAM, traveling salesman problem. Boltzmann: CAM; the freedom of connections makes this network difficult to analyze. RBM: pattern recognition; used on MNIST digits and speech. Stacked RBM: recognition & imagination; trained with unsupervised pre-training and/or supervised fine-tuning. Helmholtz: imagination, mimicry. Autoencoder: language (creative writing, translation) and vision (enhancing blurry images). VAE: generate realistic data.
Neuron. Hopfield: deterministic binary state; activation = { 0 (or −1) if x is negative, 1 otherwise }. Boltzmann: stochastic binary Hopfield neuron. RBM: same (extended to real-valued in the mid-2000s). Stacked RBM: same. Helmholtz: same. Autoencoder: LSTM for language, local receptive fields for vision; usually real-valued ReLU activation. VAE: middle-layer neurons encode means & variances for Gaussians; in run mode (inference), the outputs of the middle layer are sampled values from the Gaussians.
Connections. Hopfield: 1 layer with symmetric weights; no self-connections. Boltzmann: 2 layers (1 hidden & 1 visible); symmetric weights. RBM: same, but with no lateral connections within a layer. Stacked RBM: top layer is undirected and symmetric; the other layers are 2-way and asymmetric. Helmholtz: 3 layers with asymmetric weights; 2 networks combined into 1. Autoencoder: 3 layers (the input is considered a layer even though it has no inbound weights); recurrent layers for NLP, feedforward convolutions for vision; input & output have the same neuron counts. VAE: 3 layers (input, encoder, distribution sampler, decoder); the sampler is not considered a layer.
Inference & energy. Hopfield, Boltzmann, RBM: energy is given by the Gibbs probability measure, with E = −1/2 Σij wij si sj + Σi θi si. Helmholtz: minimize KL divergence. Autoencoder: inference is only feed-forward; previous UL networks ran forwards AND backwards. VAE: minimize error = reconstruction error + KL divergence.
Training. Hopfield: Δwij = si*sj, for +1/−1 neurons. Boltzmann: Δwij = e*(pij − p'ij), derived from minimizing KLD; e = learning rate, p' = predicted and p = actual distribution. RBM: Δwij = e*( <vi hj>data − <vi hj>equilibrium ), a form of contrastive divergence with Gibbs sampling; "<>" are expectations (a CD-1 sketch follows this table). Stacked RBM: similar; train one layer at a time, approximate the equilibrium state with a 3-segment pass, no back-propagation. Helmholtz: wake-sleep 2-phase training. Autoencoder: back-propagate the reconstruction error. VAE: reparameterize the hidden state for backprop.
Strength. Hopfield: resembles physical systems, so it inherits their equations. Boltzmann: same; hidden neurons act as an internal representation of the external world. RBM: faster, more practical training scheme than Boltzmann machines. Stacked RBM: trains quickly; gives a hierarchical layer of features. Helmholtz: mildly anatomical; analyzable with information theory & statistical mechanics.
Weakness. Boltzmann: hard to train due to lateral connections. RBM: equilibrium requires too many iterations. Stacked RBM: integer- and real-valued neurons are more complicated.
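A minimal sketch of the CD-1 training rule from the table above, applied to a bias-free RBM on random stand-in data (both simplifications); one Gibbs reconstruction step stands in for the equilibrium statistics:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda a: 1 / (1 + np.exp(-a))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    n_v, n_h, lr = 6, 3, 0.1
    W = rng.normal(scale=0.1, size=(n_v, n_h))
    data = rng.integers(0, 2, size=(50, n_v)).astype(float)   # stand-in for a real dataset

    for epoch in range(20):
        for v0 in data:
            ph0 = sigmoid(v0 @ W)            # hidden probabilities driven by the data
            h0 = sample(ph0)
            v1 = sample(sigmoid(W @ h0))     # one Gibbs step back to the visible layer
            ph1 = sigmoid(v1 @ W)            # hidden probabilities driven by the reconstruction
            # Contrastive divergence: <vi hj>_data - <vi hj>_reconstruction (a 1-step stand-in for equilibrium)
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))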

Hebbian Learning, ART, SOM


The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together.[8] In Hebbian learning, the connection is reinforced irrespective of an error; it is exclusively a function of the coincidence of action potentials between the two neurons.[9] A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
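A minimal sketch of a Hebbian weight update, Δw = η · post · pre; the sizes and activity patterns are illustrative, and practical models usually add decay or normalization to keep the weights bounded:

    import numpy as np

    def hebbian_update(w, pre, post, lr=0.01):
        """Strengthen weights in proportion to coincident activity; no error signal is involved."""
        return w + lr * np.outer(post, pre)

    rng = np.random.default_rng(0)
    pre = rng.integers(0, 2, size=5).astype(float)    # presynaptic firing pattern
    post = rng.integers(0, 2, size=3).astype(float)   # postsynaptic firing pattern
    w = np.zeros((3, 5))
    w = hebbian_update(w, pre, post)
    print(w)   # only synapses between co-active neurons were reinforced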

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.[10]
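A sketch of one self-organizing-map update, assuming a small square map, Euclidean matching, and a Gaussian neighbourhood; real SOMs usually also shrink the learning rate and radius over time:

    import numpy as np

    rng = np.random.default_rng(0)
    grid, dim = 10, 3                                   # 10x10 map of 3-dimensional prototype vectors
    weights = rng.random((grid, grid, dim))
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

    def som_step(x, lr=0.5, radius=2.0):
        # Best-matching unit: the map location whose prototype is closest to the input.
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (grid, grid))
        # Neighbourhood function: nearby map locations are pulled toward the input too,
        # which is what produces the topographic organization.
        dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * radius**2))[..., None]
        weights[...] = weights + lr * h * (x - weights)

    for x in rng.random((500, dim)):
        som_step(x)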

Probabilistic methods


Two of the main methods used in unsupervised learning are principal component analysis and cluster analysis. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships.[11] Cluster analysis is a branch of machine learning that groups data that has not been labelled, classified or categorized. Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into any group.
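A minimal k-means sketch of this grouping-by-shared-attributes idea on synthetic two-blob data; the data, the choice of k, and the omission of empty-cluster handling are simplifying assumptions:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]     # start from k random points
        for _ in range(iters):
            labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)  # assign to nearest center
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])       # recompute centers
        return labels, centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])   # two synthetic blobs
    labels, centers = kmeans(X, k=2)
    print(centers)   # approximately the two blob centers, near (0, 0) and (3, 3)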

A central application of unsupervised learning is in the field of density estimation in statistics,[12] though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution p(x | y) conditioned on the label y of input data, unsupervised learning intends to infer an a priori probability distribution p(x).

Approaches


Some of the most common algorithms used in unsupervised learning include: (1) Clustering, (2) Anomaly detection, (3) Approaches for learning latent variable models. Each approach uses several methods as follows:

Method of moments


One of the statistical approaches for unsupervised learning is the method of moments. In the method of moments, the unknown parameters (of interest) in the model are related to the moments of one or more random variables, and thus, these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first and second order moments. For a random vector, the first order moment is the mean vector, and the second order moment is the covariance matrix (when the mean is zero). Higher order moments are usually represented using tensors which are the generalization of matrices to higher orders as multi-dimensional arrays.
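A small numerical sketch of estimating first-, second-, and third-order moments from samples; the Gaussian data and its dimensions are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(loc=[1.0, -2.0], scale=[0.5, 2.0], size=(10_000, 2))   # samples of a 2-d random vector

    mean = X.mean(axis=0)                       # first-order moment: the mean vector
    Xc = X - mean
    cov = (Xc.T @ Xc) / len(X)                  # second-order (central) moment: the covariance matrix
    third = np.einsum("ni,nj,nk->ijk", Xc, Xc, Xc) / len(X)   # third-order moment as a 2x2x2 tensor

    print(mean)    # close to the true mean [1, -2]
    print(cov)     # close to diag(0.25, 4.0)
    print(third.shape)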

In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where, in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is topic modeling, which is a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document. In topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed. It is shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.[15]

The Expectation–maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed that the algorithm will converge to the true unknown parameters of the model. In contrast, for the method of moments, global convergence is guaranteed under some conditions.

See also


References

  1. ^ Wu, Wei. "Unsupervised Learning" (PDF). Archived (PDF) from the original on 14 April 2024. Retrieved 26 April 2024.
  2. ^ Liu, Xiao; Zhang, Fanjin; Hou, Zhenyu; Mian, Li; Wang, Zhaoyu; Zhang, Jing; Tang, Jie (2021). "Self-supervised Learning: Generative or Contrastive". IEEE Transactions on Knowledge and Data Engineering: 1. arXiv:2006.08218. doi:10.1109/TKDE.2021.3090866. ISSN 1041-4347.
  3. ^ Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
  4. ^ Li, Zhuohan; Wallace, Eric; Shen, Sheng; Lin, Kevin; Keutzer, Kurt; Klein, Dan; Gonzalez, Joey (2025-08-14). "Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers". Proceedings of the 37th International Conference on Machine Learning. PMLR: 5958–5968.
  5. ^ Hinton, G. (2012). "A Practical Guide to Training Restricted Boltzmann Machines" (PDF). Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science. Vol. 7700. Springer. pp. 599–619. doi:10.1007/978-3-642-35289-8_32. ISBN 978-3-642-35289-8. Archived (PDF) from the original on 2025-08-14. Retrieved 2025-08-14.
  6. ^ "Deep Belief Nets" (video). September 2009. Archived from the original on 2025-08-14. Retrieved 2025-08-14.
  7. ^ Dayan, Peter; Hinton, Geoffrey E.; Neal, Radford M.; Zemel, Richard S. (1995). "The Helmholtz machine". Neural Computation. 7 (5): 889–904. doi:10.1162/neco.1995.7.5.889. hdl:21.11116/0000-0002-D6D3-E. PMID 7584891. S2CID 1890561.
  8. ^ Buhmann, J.; Kuhnel, H. (1992). "Unsupervised and supervised data clustering with competitive neural networks". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. Vol. 4. IEEE. pp. 796–801. doi:10.1109/ijcnn.1992.227220. ISBN 0780305590. S2CID 62651220.
  9. ^ Comesaña-Campos, Alberto; Bouza-Rodríguez, José Benito (June 2016). "An application of Hebbian learning in the design process decision-making". Journal of Intelligent Manufacturing. 27 (3): 487–506. doi:10.1007/s10845-014-0881-z. ISSN 0956-5515. S2CID 207171436.
  10. ^ Carpenter, G.A. & Grossberg, S. (1988). "The ART of adaptive pattern recognition by a self-organizing neural network" (PDF). Computer. 21 (3): 77–88. doi:10.1109/2.33. S2CID 14625094. Archived from the original (PDF) on 2025-08-14. Retrieved 2025-08-14.
  11. ^ Roman, Victor (2025-08-14). "Unsupervised Machine Learning: Clustering Analysis". Medium. Archived from the original on 2025-08-14. Retrieved 2025-08-14.
  12. ^ Jordan, Michael I.; Bishop, Christopher M. (2004). "7. Intelligent Systems §Neural Networks". In Tucker, Allen B. (ed.). Computer Science Handbook (2nd ed.). Chapman & Hall/CRC Press. doi:10.1201/9780203494455. ISBN 1-58488-360-X. Archived from the original on 2025-08-14. Retrieved 2025-08-14.
  13. ^ Hastie, Tibshirani & Friedman 2009, pp. 485–586
  14. ^ Garbade, Dr Michael J. (2025-08-14). "Understanding K-means Clustering in Machine Learning". Medium. Archived from the original on 2025-08-14. Retrieved 2025-08-14.
  15. ^ Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham; Telgarsky, Matus (2014). "Tensor Decompositions for Learning Latent Variable Models" (PDF). Journal of Machine Learning Research. 15: 2773–2832. arXiv:1210.7559. Bibcode:2012arXiv1210.7559A. Archived (PDF) from the original on 2025-08-14. Retrieved 2025-08-14.
