A perceptron is a classifier that makes a binary decision based on a weighted combination of its input features.
The classifier consists of:

* An input vector, <math>x</math>
* A weight vector, <math>w</math>
* Threshold, <math>\theta</math>

The input vector is made up of binary features, such as:
<math>
h_{\mathrm{estacio'}}(t,c) = \begin{cases}1 & \text{if } t = \mathrm{season}~\text{and}~\mathit{sec}~\text{follows}~\mathit{estacio'} \\ 0 & \text{otherwise}\end{cases}
</math>
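As an illustration, such a feature could be implemented as a small function over the source context. This is a minimal sketch: the function name <code>h_estacio_sec</code> and the lemmatised token-list representation of the context are assumptions made for the example, and "follows" is read here as the bigram context "_ sec" used later on.

<source lang="python">
def h_estacio_sec(t, c):
    # t: candidate translation; c: the source sentence as a list of (lemmatised) tokens.
    # Fires when the candidate is "season" and "sec" immediately follows "estació".
    if t != "season":
        return 0
    for i in range(len(c) - 1):
        if c[i] == "estació" and c[i + 1] == "sec":
            return 1
    return 0

print(h_estacio_sec("season", ["durant", "el", "estació", "sec"]))  # prints 1
</source>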
==Training algorithm==

The objective of the training algorithm is to find the most adequate set of weights, <math>w</math>, and a threshold, <math>\theta</math>.
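For reference, this corresponds to the standard perceptron decision rule, stated here to match the training loop sketched in the code below:

<math>
y = \begin{cases}1 & \text{if } w \cdot x > \theta \\ 0 & \text{otherwise}\end{cases}
</math>

Whenever the output <math>y</math> differs from the desired output <math>d</math>, each weight is nudged towards the desired output and the threshold is adjusted in the opposite direction:

<math>
w_i \leftarrow w_i + (d - y)\,x_i, \qquad \theta \leftarrow \theta - (d - y)
</math>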
<div style="padding: 1em;border: 1px dashed #2f6fab;color: black;background-color: #f9f9f9;line-height: 1.1em">
<source lang="python">
def decision(input, weights, threshold):
    # Return 1 if the weighted sum of the input features exceeds the threshold.
    activation = sum(w * x for (w, x) in zip(weights, input))
    return 1 if activation > threshold else 0

# Initialise the weights and threshold.
# training_data is assumed to be a list of (input vector, desired output) pairs,
# e.g. the rows of the feature vector table in the example below.
weights = [0.0] * len(training_data[0][0])
threshold = 0.0

while True:
    errors = 0
    for (input, desired) in training_data:
        output = decision(input, weights, threshold)
        error = desired - output
        if error != 0:
            errors = errors + 1
            # Nudge the weights and threshold towards the desired output.
            weights = [w + error * x for (w, x) in zip(weights, input)]
            threshold = threshold - error
    # If there are no errors, training has converged.
    if errors == 0:
        break
</source>
</div>
==Example==

Here is a worked example of a perceptron applied to the task of lexical selection. Lexical selection is the task of choosing a target translation <math>t^*</math> for a given source word <math>s</math> in a context <math>c</math>, out of a set of possible translations <math>T</math>. A perceptron makes a classification decision for a single class, so we need to train a separate perceptron for each possible target word (a sketch of how the trained perceptrons might then be combined is given after the list below).

In the example,

* <math>s</math> = estació
* <math>T</math> = {season, station}
* <math>t^*</math> = season
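A single perceptron only answers "is this candidate the right translation or not?", so at selection time the per-candidate perceptrons have to be combined somehow. One simple policy, assumed here purely for illustration, is to pick the candidate whose perceptron has the highest activation; the <code>perceptrons</code> dictionary layout below is likewise an assumption for the sketch.

<source lang="python">
def select_translation(x, perceptrons):
    # perceptrons: dict mapping each candidate translation t in T to its
    # trained (weights, threshold) pair -- an assumed layout for this sketch.
    best, best_score = None, None
    for (t, (weights, threshold)) in perceptrons.items():
        # Activation relative to the threshold, as in decision() above.
        score = sum(w * xi for (w, xi) in zip(weights, x)) - threshold
        if best_score is None or score > best_score:
            best, best_score = t, score
    return best
</source>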
===Features===

The features we will be working with are ngram contexts around the "problem word". These can be extracted from the word alignments calculated from a parallel corpus; a sketch of collecting the source-side contexts is given after the table below.

{|class=wikitable
! Catalan !! English
|-
|Durant l' estació seca les pluges són escasses. || During the dry season it rains infrequently.
|-
|L' estiu és una estació de l' any. || Summer is one of the seasons of the year.
|-
|Barcelona-Sants és una estació de tren a Barcelona. || Barcelona-Sants is a train station in Barcelona.
|-
|colspan=2 align="center"|...
|}
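As an illustration of the source-language side of this, the contexts around the problem word could be collected from a tokenised sentence with something like the following sketch. The helper name <code>ngram_contexts</code>, the token list and the choice of context windows are assumptions for the example; the alignment step that determines which translation was actually used is not covered here.

<source lang="python">
def ngram_contexts(tokens, i):
    # Contexts around the problem word at position i, with "_" standing in
    # for the problem word itself, e.g. "_ sec" or "_ de tren".
    contexts = []
    if i + 1 < len(tokens):
        contexts.append("_ " + tokens[i + 1])                        # _ + next word
    if i + 2 < len(tokens):
        contexts.append("_ " + tokens[i + 1] + " " + tokens[i + 2])  # _ + next two words
    if i > 0 and i + 1 < len(tokens):
        contexts.append(tokens[i - 1] + " _ " + tokens[i + 1])       # previous word _ next word
    return contexts

# Third corpus sentence, tokenised, with the problem word at position 3:
print(ngram_contexts(["Barcelona-Sants", "és", "una", "estació", "de", "tren", "a", "Barcelona"], 3))
# ['_ de', '_ de tren', 'una _ de']
</source>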
===Training data===

;s = estació, t* = season

{|class=wikitable
! <math>c_i</math> !! <math>d</math>
|-
| _ sec || 1
|-
| _ de el any || 1
|-
| _ de tren || 0
|-
| _ de el línia || 0
|-
| _ humit || 1
|-
| _ plujós || 1
|-
| un _ a || 0
|}
===Feature vector===

This is the above training data expressed as an input vector <math>x_j</math> to the perceptron.

{|class=wikitable
! _ sec !! _ de el any !! _ de tren !! _ de la línia !! _ humit !! _ plujós !! un _ a !! <math>d</math>
|-
| 1 || 0 || 0 || 0 || 0 || 0 || 0 || 1
|-
| 0 || 1 || 0 || 0 || 0 || 0 || 0 || 1
|-
| 0 || 0 || 1 || 0 || 0 || 0 || 0 || 0
|-
| 0 || 0 || 0 || 1 || 0 || 0 || 0 || 0
|-
| 0 || 0 || 0 || 0 || 1 || 0 || 0 || 1
|-
| 0 || 0 || 0 || 0 || 0 || 1 || 0 || 1
|-
| 0 || 0 || 0 || 0 || 0 || 0 || 1 || 0
|}
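The same table can be written directly as the <code>training_data</code> list assumed by the training code above, with each input vector paired with its desired output <math>d</math>:

<source lang="python">
training_data = [
    ([1, 0, 0, 0, 0, 0, 0], 1),  # _ sec
    ([0, 1, 0, 0, 0, 0, 0], 1),  # _ de el any
    ([0, 0, 1, 0, 0, 0, 0], 0),  # _ de tren
    ([0, 0, 0, 1, 0, 0, 0], 0),  # _ de la línia
    ([0, 0, 0, 0, 1, 0, 0], 1),  # _ humit
    ([0, 0, 0, 0, 0, 1, 0], 1),  # _ plujós
    ([0, 0, 0, 0, 0, 0, 1], 0),  # un _ a
]
</source>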
===Weight vector===

{|class=wikitable
! <math>w_0</math> !! <math>w_1</math> !! <math>w_2</math> !! <math>w_3</math> !! <math>w_4</math> !! <math>w_5</math> !! <math>w_6</math>
|-
| _ sec || _ de el any || _ de tren || _ de la línia || _ humit || _ plujós || un _ a
|-
| 0.0 || 0.0 || 0.0 || 0.0 || 0.0 || 0.0 || 0.0
|}
===Trace===
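As a sketch of how such a trace would begin, assuming the examples are presented in the order of the feature vector table and using the strict comparison <math>w \cdot x > \theta</math> from the code above:

* For the first example, <math>x = (1,0,0,0,0,0,0)</math> and <math>d = 1</math>: the activation is <math>w \cdot x = 0</math>, which is not greater than <math>\theta = 0</math>, so <math>y = 0</math>. The error is <math>d - y = 1</math>, so the weights are updated to <math>(1,0,0,0,0,0,0)</math> and the threshold to <math>-1</math>.
* The remaining examples are processed in the same way, and the loop repeats over the data until a full pass produces no errors.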