
[Python deep learning] Python full stack system (27)


Deep Learning

Chapter 1: Overview of Deep Learning

I. Introduction

1. An Epoch-Making Event in Artificial Intelligence

  • In March 2016, AlphaGo, developed by Google, defeated Lee Sedol, one of the world's top Go players, 4:1. The following year, AlphaGo 2.0 beat Ke Jie, the youngest four-time Go world champion, 3:0. The technology that gives AlphaGo such formidable strength is "deep learning" (Deep Learning).
  • Almost overnight, "deep learning", a term once confined to computer science, became a buzzword in academia, industry, venture capital, and many other fields.

2. The Enormous Impact of Deep Learning

  • Beyond game playing, deep learning performs as well as humans in computer vision, speech recognition, autonomous driving, and other fields, and in some areas even surpasses them. As early as 2013, MIT Technology Review named deep learning one of the world's 10 breakthrough technologies.
  • Deep learning is not merely an algorithmic upgrade but an entirely new way of thinking. It is disruptive: problems that people used to attack with hand-crafted algorithms are turning into problems of data and computation, and "algorithms as the core competitive advantage" is giving way to "data as the core competitive advantage".

II. The Definition of Deep Learning

1. What Is Deep Learning?

  • Simply put, deep learning is a multilayer perceptron containing multiple hidden layers (the more layers, the deeper the network). By combining low-level features, it forms more abstract high-level representations that describe high-level categories or attributes of the object being recognized. Its ability to generate intermediate representations of the data on its own (even though those representations cannot be understood by humans) is the unique skill that distinguishes deep learning from other machine learning algorithms.
  • Deep learning can therefore be summarized as: extracting deep-level features from data by deepening the network.

2. Deep Neural Networks

3. The Relationship Between Deep Learning and Machine Learning

  • The discipline structure of artificial intelligence
  • As for the relationship among artificial intelligence, machine learning, and deep learning: deep learning can be regarded as the "advanced stage" of machine learning.

III. Characteristics of Deep Learning

1. Characteristics of Deep Learning

  • Advantages:
    • Superior performance
    • No feature engineering required
    • Better performance on large datasets
    • Can solve some problems that traditional machine learning cannot
  • Disadvantages:
    • Worse performance than machine learning on small datasets
    • Complex models
    • An uninterpretable process

2. Advantages of Deep Learning

2.1 Superior Performance

2.2 No Feature Engineering Required
  • Traditional machine learning requires humans to extract features (feature engineering), and model performance depends heavily on the quality of that feature engineering. When the features are very complex, humans are simply out of their depth. Deep learning requires no such feature engineering: the data is passed directly to the deep learning network, and the machine performs the feature extraction itself.
2.3 Deep Learning Has Better Performance and Scalability on Large Datasets

2.4 Deep Learning Can Solve Problems Traditional Machine Learning Cannot (such as deep-level feature extraction)

3. Disadvantages of Deep Learning

  • Deep learning performs worse than traditional machine learning on small datasets
  • Deep learning networks are structurally complex and costly to build
  • Traditional machine learning offers better interpretability than deep learning

4. Deep Learning vs. Traditional Machine Learning

5. Why Learn Deep Learning?

  • Deep learning has stronger problem-solving power (for example, its image recognition accuracy significantly exceeds that of machine learning, and even surpasses humans)
  • Mastering deep learning gives you a stronger professional edge
  • Deep learning is more widely applied in industry

IV. Applications of Deep Learning

  • Image classification, face recognition, image style transfer, speech processing, autonomous driving, game playing, robotics, natural language processing

V. Deep Learning Summary

Chapter 2: Perceptrons and Neural Networks

I. Perceptron Overview

1. What Is a Perceptron?

  • The perceptron (Perceptron), also called a neuron (Neuron), is the origin algorithm of neural networks (deep learning). Proposed in 1958 by Frank Rosenblatt, a psychology professor at Cornell University, it receives multiple input signals and produces a single output signal.

2. Perceptron Capabilities

  • Implementing logical operations, including logical AND and logical OR
  • Learning on its own
  • Serving as the building block of neural networks

3. Implementing Logical AND

4. Implementing Logical OR

5. Limitations of the Perceptron

  • The limitation of the perceptron is that it cannot handle the "XOR" problem

6. Multilayer Perceptrons

  • In 1975, the perceptron's "XOR" problem was completely resolved by theorists: the problem is solved by combining multiple perceptrons, and the resulting model is called a multilayer perceptron (Multi-Layer Perceptron, MLP). In the figure below, the neuron thresholds are all set to 0.5.

7. Code

# Custom perceptrons

# Implement logical AND
def AND(x1, x2):
    w1, w2 = 0.5, 0.5  # two weights
    theta = 0.7        # threshold
    tmp = x1 * w1 + x2 * w2
    if tmp <= theta:
        return 0
    else:
        return 1

print(AND(1, 1))  # 1
print(AND(1, 0))  # 0
print(AND(0, 0))  # 0

# Implement logical OR
def OR(x1, x2):
    w1, w2 = 0.5, 0.5  # two weights
    theta = 0.2        # threshold
    tmp = x1 * w1 + x2 * w2
    if tmp <= theta:
        return 0
    else:
        return 1

print(OR(1, 1))  # 1
print(OR(1, 0))  # 1
print(OR(0, 0))  # 0

# Implement logical XOR by combining perceptrons
def XOR(x1, x2):
    s1 = 1 - AND(x1, x2)  # AND of x1 and x2, then negated (NAND)
    s2 = OR(x1, x2)       # OR of x1 and x2
    y = AND(s1, s2)       # AND of the two intermediate results
    return y

print(XOR(1, 1))  # 0
print(XOR(1, 0))  # 1
print(XOR(0, 0))  # 0

II. Neural Networks

1. What Is a Neural Network?

  • A single perceptron has a simple structure and can accomplish only very limited functions. Several perceptrons can be linked together to form a cascaded network structure, called a "multilayer feedforward neural network" (Multi-layer Feedforward Neural Network). "Feedforward" refers to the logical structure in which the output of each layer serves as the input of the next. Each layer of neurons is fully connected only to the neurons of the next layer; neurons within the same layer are not connected to one another, and neurons in non-adjacent layers are not connected either.
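The cascaded structure described above can be sketched in a few lines of NumPy. The layer sizes, weight names (`W1`, `b1`, etc.), and 3-4-2 layout here are illustrative assumptions, not values from the text; the point is only that each layer's output feeds forward as the next layer's input.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 3-input, 4-hidden, 2-output network with random weights
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden biases
W2 = rng.normal(size=(4, 2))   # hidden -> output weights
b2 = np.zeros(2)               # output biases

def forward(x):
    h = sigmoid(x @ W1 + b1)   # hidden layer output feeds forward...
    return sigmoid(h @ W2 + b2)  # ...as the input of the output layer

y = forward(np.array([1.0, 0.5, -0.5]))
print(y.shape)  # (2,)
```

Note that no connections exist within a layer or across non-adjacent layers: each matrix multiplication only links one layer to the next.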

2. Capabilities of Neural Networks

  • In 1989, a paper by the Austrian scholar Kurt Hornik and colleagues showed that for any continuous Borel measurable function (Borel Measurable Function) f of arbitrary complexity, only one hidden layer is needed: as long as that hidden layer contains enough neurons, a feedforward neural network using a squashing function (Squashing Function) as its activation can approximate f to arbitrary precision. To increase the accuracy of the approximation to f, one simply increases the number of neurons.
  • This result is known as the universal approximation theorem (Universal Approximation Theorem). It shows that, in theory, feedforward neural networks can solve almost any problem.
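As a tiny concrete illustration of how one hidden layer composes simple units into a target function: a single hidden layer with just two ReLU units reproduces f(x) = |x| exactly, since |x| = relu(x) + relu(-x). (The theorem as stated uses squashing activations; ReLU is substituted here only because it makes the example short and exact, not because the text prescribes it.)

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def net(x):
    # Hidden layer: two neurons with input weights +1 and -1
    hidden = relu(np.array([x, -x]))
    # Output layer: both output weights equal 1
    return hidden.sum()

# The network matches f(x) = |x| at every point
for x in [-2.0, -0.5, 0.0, 1.5]:
    assert net(x) == abs(x)
```

For a general target function the match is only approximate, and the theorem says the error can be driven arbitrarily low by adding hidden neurons.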

3. The Universal Approximation Theorem

4. Advantages of Deep Networks

  • In fact, neural network structure has another direction of "evolution": growing in "depth". That is, reduce the number of neurons in each layer while increasing the number of layers, producing a "deep" and "thin" network model.
  • Researchers at Microsoft Research ran experiments comparing the performance of these two kinds of networks. The results show that increasing the number of layers significantly improves the learning performance of a neural network system.

III. Activation Functions

1. What Is an Activation Function?

  • In a neural network, the function that converts the weighted sum of the input signals into the output signal is called the activation function (activation function).

2. Why Use Activation Functions?

  • Activation functions make the output of a multilayer perceptron nonlinear, so that the neural network can approximate any nonlinear function and can therefore be applied to a wide range of nonlinear models.
  • A multilayer network that uses continuous functions as its activation functions is called a "neural network"; otherwise it is called a "multilayer perceptron". The activation function is thus the basis for distinguishing multilayer perceptrons from neural networks.
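Why the nonlinearity matters can be checked directly: without an activation function, stacking two linear layers collapses into a single linear map, so extra layers add no expressive power. This is a minimal sketch with random matrices (all names and sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 3))   # first layer weights
W2 = rng.normal(size=(3, 3))   # second layer weights
x = rng.normal(size=3)         # an arbitrary input

# Without an activation, two layers equal the single layer W2 @ W1
no_activation = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
assert np.allclose(no_activation, collapsed)

# With a nonlinearity (tanh) in between, the result differs:
# no single matrix can reproduce the two-layer computation
with_activation = W2 @ np.tanh(W1 @ x)
assert not np.allclose(no_activation, with_activation)
```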

3. Common Activation Functions - The Step Function

  • The step function (Step Function) is a function that jumps abruptly from 0 to 1. Its formula and graph are as follows:
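A minimal NumPy version of the step function might look like this. The jump is placed at 0 here, with step(0) = 0; the exact convention at the jump point varies between texts, so treat that detail as an assumption.

```python
import numpy as np

def step(x):
    # Returns 1 where x > 0, else 0; works on scalars and arrays
    return (np.asarray(x) > 0).astype(int)

print(step(np.array([-1.0, 0.0, 2.5])))  # [0 0 1]
```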

4. Common Activation Functions - sigmoid

5. Common Activation Functions - tanh (Hyperbolic Tangent)

6. Common Activation Functions - ReLU (Rectified Linear Unit)

7. Common Activation Functions - Softmax
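The four activations named in sections 4 through 7 can be sketched together in NumPy. Subtracting the maximum inside softmax is a standard numerical-stability trick, not something the text specifies.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes any real number into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Passes positive values through, zeros out the rest
    return np.maximum(0.0, x)

def softmax(x):
    # Converts a vector into a probability distribution (sums to 1)
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

x = np.array([-1.0, 0.0, 1.0])
print(sigmoid(0.0))       # 0.5
print(relu(x))            # [0. 0. 1.]
print(softmax(x).sum())   # 1.0
```

Sigmoid and tanh are "squashing" functions in the sense of the universal approximation theorem above; ReLU dominates in modern deep networks, and softmax is typically reserved for the output layer of a classifier.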

IV. Summary

  • Multilayer feedforward networks: several perceptrons are combined into a network of several layers, with the output of each layer serving as the input of the next.
  • Activation functions: they convert the computed result into the output value. Common choices include the step function, sigmoid, tanh, and ReLU.
