Correlation Power Analysis

Correlation Power Analysis (CPA) is an attack that allows us to find a secret encryption key that is stored on a victim device. There are 4 steps to a CPA attack:

  1. Write down a model for the victim's power consumption. This model will look at one specific point in the encryption algorithm. (For example: after step 2 of the encryption process, the intermediate result is x, so the power consumption is f(x).)
  2. Get the victim to encrypt several different plaintexts. Record a trace of the victim's power consumption during each of these encryptions.
  3. Attack small parts (subkeys) of the secret key:
    1. Consider every possible option for the subkey. For each guess and each trace, use the known plaintext and the guessed subkey to calculate the power consumption according to our model.
    2. Calculate the Pearson correlation coefficient between the modeled and actual power consumption. Do this for every data point in the traces.
    3. Decide which subkey guess correlates best to the measured traces.
  4. Put together the best subkey guesses to obtain the full secret key.

This page discusses some of the basics of these steps, describing what models are typically used in CPA attacks and how the Pearson correlation coefficient is calculated.
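
To make these four steps concrete, here is a minimal Python sketch that runs the whole attack against simulated measurements. Everything in it (the leakage model, the "device" that just leaks a Hamming weight plus noise, and all variable names) is made up for illustration; a real attack captures traces from hardware and, for AES, would normally target an S-box output as described later on this page.

    import numpy as np

    # Step 1: power model - assume the leakage is the Hamming weight of an
    # intermediate value (here simply plaintext XOR subkey).
    def model(plaintext, key_guess):
        return bin(plaintext ^ key_guess).count("1")

    # Step 2: "measure" traces. We simulate a device whose power at a single
    # sample follows the model for the true subkey, plus Gaussian noise.
    rng = np.random.default_rng(0)
    true_subkey = 0x3C
    D = 200                                      # number of traces
    plaintexts = rng.integers(0, 256, D)
    traces = np.array([model(p, true_subkey) for p in plaintexts]) \
             + rng.normal(0, 0.5, D)             # one sample per trace, for brevity

    # Step 3: for every possible subkey, correlate modelled and measured power.
    correlations = []
    for guess in range(256):
        h = np.array([model(p, guess) for p in plaintexts])
        correlations.append(abs(np.corrcoef(h, traces)[0, 1]))

    # Step 4: the guess with the strongest correlation is our best subkey.
    best = int(np.argmax(correlations))
    print(f"best guess: {best:#04x}   true subkey: {true_subkey:#04x}")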

Modeling Power Consumption

Electronic computers (microcontrollers, FPGAs, etc) have two components to their power consumption. First, static power consumption is the power required to keep the device running. This static power depends on things like the number of transistors inside the device. Secondly, and more importantly, dynamic power consumption depends on the data moving around inside the device. Every time a bit is changed from a 0 to a 1 (or vice versa), some current is required to (dis)charge the data lines. The dynamic power is the part that we're interested in - it can tell us what's happening inside.

One of the simplest models for power consumption is the Hamming Distance model. The Hamming Distance between two binary numbers is the number of different bits in the numbers. For example,

    HammingDistance(00110000, 00100011) = 3

because there are 3 unequal bits in these two numbers. An easy way to calculate the Hamming Distance is

    HammingDistance(x, y) = HammingWeight(x ^ y)

where ^ is the XOR operator, and the Hamming Weight is the number of 1s in a binary number. Using the example above,

    HammingDistance(00110000, 00100011) 
    = HammingWeight(00010011)
    = 3

because 00010011 has three bits set.

We can use the Hamming Distance model in our CPA attacks. If we can find a point in the encryption algorithm where the victim changes a variable from x to y, then we can estimate that the power consumption is proportional to HammingDistance(x, y). Often, our model can be even simpler: if we assume that the new value y replaces a previous value of x = 0, then our power model is

    l(y) = HammingWeight(y)
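
These definitions are only a few lines of Python; the helper names below are our own, used purely for illustration:

    def hamming_weight(x):
        """Number of bits set to 1 in the binary representation of x."""
        return bin(x).count("1")

    def hamming_distance(x, y):
        """Number of bit positions in which x and y differ."""
        return hamming_weight(x ^ y)

    # The worked example from above:
    print(hamming_distance(0b00110000, 0b00100011))   # prints 3

    # The simplified power model l(y) = HammingWeight(y), i.e. the special case
    # where the previous value x is assumed to be 0:
    def l(y):
        return hamming_weight(y)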

Pearson's Correlation Coefficient

Once we have a way to model our power consumption, we need a way to compare our power estimate to our measured traces. A helpful tool for finding this relationship is through Pearson's correlation coefficient, which is

 
\rho_{X,Y}
= \frac{\text{cov}(X, Y)}{\sigma_X \sigma_Y}
= \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sqrt{E[(X - \mu_X)^2] E[(Y - \mu_Y)^2]}}

This correlation coefficient will always be in the range [-1, 1]. It describes how closely the random variables X and Y are related:

  • If Y always increases when X increases, it will be 1;
  • If Y always decreases when X increases, it will be -1;
  • If Y is totally independent of X, it will be 0.
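
If you want to play with this, numpy computes the coefficient directly; the data below is made up purely to show the three cases:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=1000)

    print(np.corrcoef(x, 2 * x + 3)[0, 1])               # ~ +1: Y increases with X
    print(np.corrcoef(x, -5 * x)[0, 1])                  # ~ -1: Y decreases with X
    print(np.corrcoef(x, rng.normal(size=1000))[0, 1])   # ~  0: independent

    # The same value, computed straight from the definition above:
    y = 2 * x + 3
    print(np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std()))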

Attacking with Correlation

After taking our measurements, we'll have D power traces t, and each of these traces will have T data points. Using subscript notation, t_{d, j} will refer to point j in trace d (1 \le d \le D, 1 \le j \le T).

We'll also estimate the power consumption in each trace using our model. We'll say that there are I different subkeys that we want to try. Then, h_{d, i} will refer to our power estimate in trace d, assuming that the subkey is i (1 \le d \le D, 1 \le i \le I).

With this data, we can see how well our model and measurements match for each guess i and time j. We'll do this by finding how t and h correlate over the D traces. One way of calculating this is:


r_{i,j}
= \frac{\sum_{d=1}^D \left[ \left( h_{d,i} - \overline{h_i} \right) \left( t_{d,j} - \overline{t_j} \right) \right]}{\sqrt{\sum_{d=1}^D \left( h_{d,i} - \overline{h_i} \right)^2 \sum_{d=1}^D \left( t_{d,j} - \overline{t_j} \right)^2}}
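
One way of turning this equation into code is shown below. It assumes you already have a D x T array traces of measurements and a D x I array hyp of modelled values (for example from the Hamming-weight model earlier); the random data at the bottom is only there so the snippet runs on its own.

    import numpy as np

    def cpa_correlation(hyp, traces):
        """Return the I x T matrix r[i, j], correlating the modelled power
        hyp (shape D x I) with the measurements traces (shape D x T) over
        the D traces, exactly as in the equation above."""
        h = hyp - hyp.mean(axis=0)          # h_{d,i} minus the mean of h_i
        t = traces - traces.mean(axis=0)    # t_{d,j} minus the mean of t_j
        num = h.T @ t                                          # sums of products
        den = np.sqrt(np.outer((h ** 2).sum(axis=0),
                               (t ** 2).sum(axis=0)))          # sums of squares
        return num / den

    # Placeholder demo: D = 50 traces, I = 16 guesses, T = 100 samples.
    rng = np.random.default_rng(2)
    r = cpa_correlation(rng.normal(size=(50, 16)), rng.normal(size=(50, 100)))
    print(r.shape)   # (16, 100)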

There is an alternative form of the correlation equation that we can use for online calculations - it allows us to add one trace at a time without re-summing all of the past data. This form is:


r_{i,j} 
= \frac{D \sum_{d=1}^D h_{d,i}t_{d,j} - \sum_{d=1}^D h_{d,i} \sum_{d=1}^D t_{d,j}}{\sqrt{\Big(\big(\sum_{d=1}^D h_{d,i}\big)^2 - D\sum_{d=1}^D h_{d,i}^2\Big)\Big(\big(\sum_{d=1}^D t_{d,j}\big)^2 - D\sum_{d=1}^D t_{d,j}^2\Big)}}

Note that these two expressions for r_{i,j} are equivalent.
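
A sketch of how the online form can be used is shown below: five running sums are updated one trace at a time, and the correlation can be read out at any point without keeping the traces around. The class and its method names are our own illustration, not any particular library's API.

    import numpy as np

    class OnlineCPA:
        """Accumulate the sums in the online formula for I guesses and
        T samples per trace, one trace at a time."""

        def __init__(self, num_guesses, num_samples):
            self.D = 0
            self.sum_h  = np.zeros(num_guesses)
            self.sum_h2 = np.zeros(num_guesses)
            self.sum_t  = np.zeros(num_samples)
            self.sum_t2 = np.zeros(num_samples)
            self.sum_ht = np.zeros((num_guesses, num_samples))

        def add_trace(self, h, t):
            """h: length-I vector of modelled values for this trace,
            t: length-T measured trace."""
            self.D += 1
            self.sum_h  += h
            self.sum_h2 += h ** 2
            self.sum_t  += t
            self.sum_t2 += t ** 2
            self.sum_ht += np.outer(h, t)

        def correlation(self):
            """Evaluate r[i, j] from the running sums (the two sign flips in
            the denominator of the equation above cancel out)."""
            num = self.D * self.sum_ht - np.outer(self.sum_h, self.sum_t)
            den = np.sqrt(np.outer(self.D * self.sum_h2 - self.sum_h ** 2,
                                   self.D * self.sum_t2 - self.sum_t ** 2))
            return num / den

    # Placeholder usage: 100 random "traces", 16 guesses, 50 samples each.
    rng = np.random.default_rng(3)
    cpa = OnlineCPA(16, 50)
    for _ in range(100):
        cpa.add_trace(rng.normal(size=16), rng.normal(size=50))
    print(cpa.correlation().shape)   # (16, 50)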

Picking a Subkey

The last step is to use the values of r_{i,j} to decide which subkey matches our traces most closely. There are two steps to this:

  • For each subkey i, find the highest value of |r_{i,j}|. This will discard the time information - we want to know how good our guess was, but we don't care where our guess matched the trace.
  • Looking at the maximum values for each subkey, find the highest value of |r_i|. The location i of this maximum is our best guess: it correlated more closely with the traces than any other guess.

Note that we're only working with absolute values here because we don't care about the sign of the relationship. All we need to know is that a linear correlation exists.
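
Given a matrix of r_{i,j} values (from either form of the calculation above), both steps are just maximums over absolute values:

    import numpy as np

    def pick_subkey(r):
        """r: I x T array of correlations r[i, j].
        Returns the best subkey guess and the per-guess peak |r| values."""
        peaks = np.max(np.abs(r), axis=1)     # step 1: best match per guess, any time
        best_guess = int(np.argmax(peaks))    # step 2: guess with the strongest match
        return best_guess, peaks

    # Placeholder example with 16 guesses and 100 sample points of random data.
    rng = np.random.default_rng(4)
    guess, peaks = pick_subkey(rng.normal(size=(16, 100)))
    print(guess, peaks[guess])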

Example: AES-128

Here we will assume the attacker has a power trace t_{d,j}, where j = 1,2,\cdots,T is the time index in the trace, and d = 1,2,\cdots,D is the trace number. Thus the attacker makes D measurements, each one T points long. If the attacker knew exactly where a cryptographic operation occurred, they would only need to measure a single point, such that T=1. For each trace d, the attacker also knows the plaintext or ciphertext corresponding to that power trace, defined as p_d.

Assume also that the attacker has a model of how the power consumption of the device depends on some intermediate value. For example, the attacker could assume the power consumption of a microcontroller depends on the Hamming weight of the intermediate value. We will define h_{d,i} = l( w( p_d, i )), where l(x) is the leakage model for a given intermediate value, and w(p, i) generates an intermediate value given the input plaintext and the guess number i = 1,2,\cdots,I.

This intermediate value will be selected to depend on the input plaintext and a small portion of the secret key. For example, in AES each byte of the plaintext is XOR'd with the corresponding byte (subkey) of the secret key. In this example we would have:

l(x) = HammingWeight(x)
w(p, i) = p \oplus i

This implies that the input plaintext p is being attacked a single byte at a time, which means we are attacking a single byte of the AES key at a time. While we still need to enumerate all possibilities for this subkey, we now only have 16 \times 2^8 instead of 2^{128} possibilities for AES-128.

We will next use the correlation coefficient to look for a linear relationship between the predicted power consumption l(x) and the measured power consumption t_{d,j}. For this reason it is desirable to have a non-linear relationship between w(p, i) and either p or i, as we would otherwise see a linear relationship for all values of i. In this case we take advantage of the non-linear substitution boxes (S-boxes) in the algorithm, which are simply lookup tables that have been selected to have the minimum possible correlation between input and output; this property was required to resist certain linear cryptographic attacks.
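
Putting the model together for this example, the hypothetical power value is h_{d,i} = HammingWeight(SBox(p_d \oplus i)). The snippet below builds the AES S-box from its mathematical definition (multiplicative inverse in GF(2^8) followed by the affine transformation) purely so the example is self-contained; in practice you would simply use the standard 256-entry table.

    def _build_aes_sbox():
        """Generate the AES S-box from its definition; done here only to keep
        the snippet self-contained."""
        def gf_mul(a, b):                     # multiplication in GF(2^8)
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a & 0x100:
                    a ^= 0x11B                # the AES irreducible polynomial
                b >>= 1
            return r

        def gf_inv(a):                        # brute force is fine for a one-off table
            return 0 if a == 0 else next(x for x in range(1, 256) if gf_mul(a, x) == 1)

        def affine(b):
            rotl = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
            return b ^ rotl(b, 1) ^ rotl(b, 2) ^ rotl(b, 3) ^ rotl(b, 4) ^ 0x63

        return [affine(gf_inv(x)) for x in range(256)]

    SBOX = _build_aes_sbox()

    def hamming_weight(x):
        return bin(x).count("1")

    def hypothetical_power(p, i):
        """h = l(w(p, i)) with w(p, i) = SBox(p XOR i) and l the Hamming weight."""
        return hamming_weight(SBOX[p ^ i])

    print(hex(SBOX[0x00]))                    # 0x63, the first S-box entry
    print(hypothetical_power(0x3A, 0x4F))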

Finally, we can calculate the correlation coefficient for each point j over all traces D, for each of the possible subkey values I, as in the following:

r_{i,j} = \frac{\sum_{d=1}^D \left[ \left( h_{d,i} - \overline{h_i} \right) \left( t_{d,j} - \overline{t_j} \right) \right]}{\sqrt{\sum_{d=1}^D \left( h_{d,i} - \overline{h_i} \right)^2 \sum_{d=1}^D \left( t_{d,j} - \overline{t_j} \right)^2}}

This is simply Pearson's correlation coefficient, given below, where X = h_i (the modelled power) and Y = t_j (the measured power):

\rho_{X,Y} = \frac{\text{cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sqrt{E[(X - \mu_X)^2] E[(Y - \mu_Y)^2]}}

The problem of finding a known signal in a noisy measurement exists in many other fields beyond side-channel analysis. These two equations are referred to as the normalized cross-correlation, and are frequently used in digital imaging for matching a known "template" to an image, e.g. finding the location of some specific item in a photo of a room.
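
To tie the AES-128 example together, here is a self-contained simulation: D traces of T samples each, where one (secret) sample index leaks the Hamming weight of the targeted intermediate plus noise. For brevity it uses the plain p \oplus i intermediate rather than the S-box output (see the previous sketch for that); all data, key values and names are made up for illustration.

    import numpy as np

    def hamming_weight(x):
        return bin(int(x)).count("1")

    # Hypothetical intermediate; a real AES attack would normally use
    # HammingWeight(SBOX[p ^ i]) as in the previous sketch.
    def model(p, i):
        return hamming_weight(p ^ i)

    rng = np.random.default_rng(5)
    D, T = 500, 40                        # number of traces, samples per trace
    true_subkey = 0xA7
    leak_sample = 23                      # unknown to the attacker

    plaintexts = rng.integers(0, 256, D)
    traces = rng.normal(0, 1, (D, T))     # background noise everywhere
    traces[:, leak_sample] += [model(p, true_subkey) for p in plaintexts]

    # D x 256 matrix of hypothetical power values, one column per subkey guess.
    hyp = np.array([[model(p, i) for i in range(256)] for p in plaintexts], float)

    # Correlate every guess against every sample point (same equation as above).
    h = hyp - hyp.mean(axis=0)
    t = traces - traces.mean(axis=0)
    r = (h.T @ t) / np.sqrt(np.outer((h ** 2).sum(axis=0), (t ** 2).sum(axis=0)))

    best_guess = int(np.argmax(np.max(np.abs(r), axis=1)))
    best_sample = int(np.argmax(np.abs(r[best_guess])))
    print(f"recovered subkey {best_guess:#04x} (true value {true_subkey:#04x}) "
          f"at sample {best_sample} (true index {leak_sample})")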