
Fixed-fraction betting - the Kelly criterion

Let $ X$ be a real random variable with support $ [-1,\infty[$ . Following Kelly [1], consider the reinvested growth of a fixed-fraction investment in an instrument with payoff $ X$ .

$\displaystyle W_n = W_{n-1}(1+\alpha\,X_n), \quad X_n \sim X, \alpha\in[0,1]\quad.$    

(If the support of $ X$ is bounded the restriction on $ \alpha$ can be relaxed.) The long run growth rate is

$\displaystyle G_\alpha = \lim_{N\rightarrow\infty} \frac{1}{N} \log\frac{W_N}{W_0}\quad .$    

By the law of large numbers

$\displaystyle G_\alpha = {\mathsf{E}}\log(1+\alpha\,X), \quad\mathrm{a.s.}\quad .$ (1)
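As a quick numerical check of eq. (1), a minimal Python/NumPy sketch; the 60/40 $ \pm1$ coin-flip payoff is an illustrative choice, not taken from the text:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative payoff: +1 with probability 0.6, -1 with probability 0.4
p_win, alpha, N = 0.6, 0.2, 100_000
X = np.where(rng.random(N) < p_win, 1.0, -1.0)

# (1/N) log(W_N/W_0) telescopes to the sample mean of log(1 + alpha X_n)
empirical = np.mean(np.log1p(alpha * X))
exact = p_win * np.log1p(alpha) + (1 - p_win) * np.log1p(-alpha)
print(empirical, exact)  # both close to 0.020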

For a given $ X$ the function $ G_\alpha$ is concave in $ \alpha$ and, writing $ G'(\alpha)=dG/d\alpha$ , we have $ G'(0)={\mathsf{E}}X$ . Define the maximal growth rate as

$\displaystyle G[X] = \max_\alpha\,{\mathsf{E}}\log(1+\alpha\,X)\quad.$ (2)
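Eq. (2) is a one-dimensional concave maximization and is easy to evaluate numerically. A sketch assuming SciPy is available; G and kelly are illustrative helper names:

import numpy as np
from scipy.optimize import minimize_scalar

def G(alpha, outcomes, probs):
    # E log(1 + alpha X) for a discrete payoff distribution
    return np.dot(probs, np.log1p(alpha * np.asarray(outcomes)))

def kelly(outcomes, probs):
    # Maximize eq. (2); stay inside [0, 1) since X may equal -1
    res = minimize_scalar(lambda a: -G(a, outcomes, probs),
                          bounds=(0.0, 0.999), method='bounded')
    return res.x, -res.fun

# 60/40 coin: the classic Kelly fraction alpha_opt = 0.6 - 0.4 = 0.2
print(kelly([1.0, -1.0], [0.6, 0.4]))  # ~(0.2, 0.0201)
# Zero edge: the maximum sits at alpha = 0 with G[X] = 0, cf. eq. (3)
print(kelly([1.0, -1.0], [0.5, 0.5]))  # ~(0.0, 0.0)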



$ G[X]$ and $ {\mathsf{E}}X$ :
Since $ G(\alpha)$ is concave with $ G(0)=0$ , the sign of the initial slope $ G'(0)={\mathsf{E}}X$ determines the location of the maximum over $ \alpha\in[0,1]$ :
$\displaystyle {\mathsf{E}}X\leq0$ $\displaystyle \Leftrightarrow$ $\displaystyle G[X]=0, \alpha_\mathrm{opt}=0$ (3)
$\displaystyle {\mathsf{E}}X>0$ $\displaystyle \Leftrightarrow$ $\displaystyle G[X]>0, \alpha_\mathrm{opt}>0$ (4)




Upper bound for $ G[X]$ :
Jensen's inequality yields $ G_\alpha \leq \log(1+\alpha {\mathsf{E}}X)$ ; for $ {\mathsf{E}}X\geq0$ the right hand side is nondecreasing in $ \alpha$ , so with $ \alpha\leq1$ and $ \log(1+x) \leq x$ we obtain

$\displaystyle G[X] \leq \log(1+{\mathsf{E}}X) \leq {\mathsf{E}}X \quad .$ (5)
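For a $ \pm1$ coin with win probability $ p$ one has the closed form $ G[X]=\log 2+p\log p+(1-p)\log(1-p)$ , so the chain (5) can be checked directly (Python sketch, using the illustrative 60/40 coin):

import numpy as np

p, q = 0.6, 0.4
G_X = np.log(2) + p * np.log(p) + q * np.log(q)  # exact G[X] for the coin
EX = p - q
print(G_X, np.log1p(EX), EX)  # 0.0201 <= 0.1823 <= 0.2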



Mixtures:
Consider a mixture $ p(X)=\int dY p(X\vert Y)p(Y)$ . We have
$\displaystyle G[X] = \max_\alpha{\mathsf{E}}_X\log(1+\alpha X) = \max_\alpha{\mathsf{E}}_Y\,{\mathsf{E}}_{X\vert Y}\log(1+\alpha X) \leq {\mathsf{E}}_Y\underbrace{\max_\alpha{\mathsf{E}}_{X\vert Y}\log(1+\alpha X)}_{=\,G[X\vert Y]} \leq \max_Y G[X\vert Y],$

where the inequality comes from interchanging $ \max_\alpha$ and $ {\mathsf{E}}_Y$ ,

hence

$\displaystyle G[X] \leq \max_Y G[X\vert Y] \quad ,$ (6)

i.e. the gain of a mixture cannot be larger than the gain of the best variable in the mixture.
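A small illustration of eq. (6), assuming a 50/50 mixture of two $ \pm1$ coins with win probabilities 0.6 and 0.55 (our choice of numbers):

import numpy as np

def G_coin(p):
    # Closed-form G[X] for a +/-1 coin with win probability p
    if p <= 0.5:  # nonpositive edge: alpha_opt = 0, cf. eq. (3)
        return 0.0
    return np.log(2) + p * np.log(p) + (1 - p) * np.log(1 - p)

# Mixing the two coins with probability 1/2 each yields a coin
# with win probability (0.6 + 0.55)/2 = 0.575
print(G_coin(0.575), max(G_coin(0.6), G_coin(0.55)))  # 0.0113 <= 0.0201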

Gambles offered with lower frequency $ \rho$ :
Suppose a gamble is offered at a reduced rate: at every step the payoff $ X$ is available with probability $ \rho$ , and with probability $ 1-\rho$ no investment is offered, i.e. the payoff is 0. The pdf for the payoff $ x$ is $ \widetilde p(x)=\rho\,p(x)+(1-\rho)\,\delta(x)$ . Then $ \max_\alpha\widetilde{\mathsf{E}}\log(1+\alpha X) = \max_\alpha\left[\rho\,{\mathsf{E}}\log(1+\alpha X) + (1-\rho)\log(1)\right] = \rho\, G[X]$ . Denoting the reduced-rate gain, i.e. the maximal gain under $ \widetilde{\mathsf{E}}$ , by $ G^\rho[X]$ , we have

$\displaystyle G^\rho[X] = \rho\, G[X] \quad .$ (7)

When several gambles $ \{X_i\}_{i=1}^{N}$ are offered with occurrence probabilities $ \{\rho_i\}_{i=1}^{N}$ , the individual gains are $ \rho_iG[X_i]$ . Among the offered gambles, the one with the largest $ \rho_iG[X_i]$ is preferable.
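A numerical check of eq. (7) (sketch; G_max is an illustrative helper, and $ \rho=0.3$ with the 60/40 coin is our choice):

import numpy as np
from scipy.optimize import minimize_scalar

def G_max(outcomes, probs):
    # max over alpha in [0, 1) of E log(1 + alpha X)
    f = lambda a: -np.dot(probs, np.log1p(a * np.asarray(outcomes)))
    return -minimize_scalar(f, bounds=(0.0, 0.999), method='bounded').fun

# Coin offered only with probability rho: add a payoff-0 outcome
rho = 0.3
full = G_max([1.0, -1.0], [0.6, 0.4])
reduced = G_max([1.0, -1.0, 0.0], [rho * 0.6, rho * 0.4, 1 - rho])
print(reduced, rho * full)  # both ~0.0060, as in eq. (7)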

Small edge $ {\mathsf{E}}X\rightarrow 0$ :
For small $ {\mathsf{E}}X$ the optimal fraction is itself small, so we expand around $ \alpha=0$

$\displaystyle \log(1+\alpha X) = \alpha X - \frac{1}{2}\alpha^2 X^2 + O(\alpha^3)$    

and maximizing the quadratic approximation gives $ \alpha_\mathrm{opt} = \frac{{\mathsf{E}}X}{{\mathsf{E}}X^2}$ , so that

$\displaystyle G[X] = \frac{1}{2} \frac{({\mathsf{E}}X)^2}{{\mathsf{E}}X^2},\quad \mathrm{for\ } {\mathsf{E}}X\rightarrow 0$ (8)
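For a coin with a 1% edge the quadratic approximation (8) is already very accurate (sketch, assuming SciPy):

import numpy as np
from scipy.optimize import minimize_scalar

p = 0.51                   # +1 w.p. 0.51, -1 w.p. 0.49
EX, EX2 = 2 * p - 1, 1.0   # E X = 0.02, E X^2 = 1
f = lambda a: -(p * np.log1p(a) + (1 - p) * np.log1p(-a))
G_exact = -minimize_scalar(f, bounds=(0.0, 0.999), method='bounded').fun
print(EX / EX2, G_exact, 0.5 * EX**2 / EX2)
# alpha_opt = 0.02; exact and approximate G agree to ~2.0e-4 each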



Gain of averages $ \sum_i w_i X_i$ :
A lower bound to $ G[\sum_i w_i X_i]$ with weights $ w_i\geq0$ , $ \sum_i w_i=1$ can easily be found. By concavity of the logarithm,

$\displaystyle G\left[\sum_i w_i X_i\right] = \max_\alpha{\mathsf{E}}\log\left(1+\alpha\sum_i w_i X_i\right) \geq \max_\alpha{\mathsf{E}}\sum_i w_i\log(1+\alpha X_i) = \max_\alpha\sum_i w_i\,G_i(\alpha)\quad.$

For any fixed $ \alpha_0\in[0,1]$ it is $ \max_\alpha\sum_i w_i\,G_i(\alpha) \geq \sum_i w_i\,G_i(\alpha_0)$ . When the individual optima coincide (e.g. for identically distributed $ X_i$ ), choosing $ \alpha_0=\alpha_\mathrm{opt}$ gives $ \sum_i w_i\,G_i(\alpha_0)=\sum_i w_i\,G[X_i]$ , so that

$\displaystyle G\left[\sum_i w_i X_i\right] \geq \sum_i w_i G[X_i] \geq \min_i G[X_i] \quad$ (9)

i.e. under this condition the gain of an average of payoffs is not smaller than the smallest individual gain. (In general $ \max_\alpha\sum_i w_i\,G_i(\alpha) \leq \sum_i w_i\,\max_\alpha G_i(\alpha)$ , so the condition on the optima cannot simply be dropped.)
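For two independent, identically distributed 60/40 coins with equal weights, where eq. (9) applies, the averaged payoff strictly improves on the common individual gain (sketch; the distribution of $ (X_1+X_2)/2$ is tabulated by hand):

import numpy as np
from scipy.optimize import minimize_scalar

def G_max(outcomes, probs):
    # max over alpha in [0, 1) of E log(1 + alpha X)
    f = lambda a: -np.dot(probs, np.log1p(a * np.asarray(outcomes)))
    return -minimize_scalar(f, bounds=(0.0, 0.999), method='bounded').fun

# (X1 + X2)/2 for independent 60/40 coins: +1, 0, -1 w.p. 0.36, 0.48, 0.16
G_avg = G_max([1.0, 0.0, -1.0], [0.36, 0.48, 0.16])
G_single = G_max([1.0, -1.0], [0.6, 0.4])
print(G_avg, G_single)  # 0.0395 >= 0.0201 = sum_i w_i G[X_i]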

Representation in terms of the cumulative distribution function:
Partial integration of the integral $ \int_{-1}^\infty p(x)\log(1+\alpha x)\,d x$ results in a useful representation for $ G_\alpha$ . Denote the cdf by $ \Phi(x)=\int_{-1}^x p(t)\, dt$ . Starting with the truncated integral $ G_\alpha\leftarrow G_\alpha^M=\int_{-1}^M\frac{d \Phi(x)}{d x}\,\log(1+\alpha x)\, dx$ , partial integration gives (using $ \Phi(-1)=0$ )

$\displaystyle \int_{-1}^M p(x) \log\left(1+\alpha x\right) dx = \Phi(M)\log\left(1+\alpha M\right) - \int_{-1}^M \frac{\alpha\,\Phi(x)} {1+\alpha x} dx$    

Observe that both terms separately diverge as $ M\rightarrow\infty$ . To remove the divergence use $ \int_{-1}^M\frac{\alpha}{1+\alpha x}dx=\log(1+\alpha M)-\log(1-\alpha)$ , which gives

$\displaystyle G_\alpha^M = \Phi(M)\left[\int_{-1}^M\frac{\alpha}{1+\alpha x} dx+\log(1-\alpha)\right] - \int_{-1}^M \frac{\alpha\,\Phi(x)} {1+\alpha x}\, dx$    

Rearranging, $ G_\alpha^M = \Phi(M)\log(1-\alpha) + \alpha\int_{-1}^M\frac{\Phi(M)-\Phi(x)}{1+\alpha x}\,dx$ , and letting $ M\rightarrow\infty$ (so that $ \Phi(M)\rightarrow1$ ) we obtain the desired result

$\displaystyle G_\alpha = \log(1-\alpha) + \alpha\int_{-1}^\infty\frac{1-\Phi(x)}{1+\alpha x}\, dx\quad.$ (10)
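Eq. (10) can be verified numerically at a fixed $ \alpha$ ; a sketch with $ X\sim\mathrm{Uniform}(-0.5,1.5)$ (our choice), where the tail $ 1-\Phi$ vanishes above the upper endpoint so the integral may be truncated there:

import numpy as np
from scipy.integrate import quad

alpha, lo, hi = 0.4, -0.5, 1.5
pdf = lambda x: 1.0 / (hi - lo)
cdf = lambda x: np.clip((x - lo) / (hi - lo), 0.0, 1.0)

direct = quad(lambda x: pdf(x) * np.log1p(alpha * x), lo, hi)[0]
via_cdf = np.log1p(-alpha) + alpha * quad(
    lambda x: (1 - cdf(x)) / (1 + alpha * x), -1, hi)[0]
print(direct, via_cdf)  # both ~0.1632, agreeing to quadrature accuracy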



Representation in terms of the Laplace transform:
We will need a result for the Laplace transform of a positive random variable $ X$ , $ (\mathcal{L}X)(s) = {\mathsf{E}}e^{-s X}$ . Partial integration of $ (\mathcal{L}X)(s) = \int_0^\infty p(x) e^{-s x}dx$ , analogous to the partial integration leading to eq. (10), gives the following result, which expresses the Laplace transform in terms of the cdf:

$\displaystyle (\mathcal{L}X)(s) = 1-s \int_0^\infty e^{-s x}\left(1-\Phi(x)\right)\, dx\quad.$ (11)

Starting with the cdf representation eq. (10) written for the shifted, positive variable $ X\geq0$ , i.e. with payoff $ x-1$ ,

$\displaystyle G_\alpha = \log(1-\alpha) + \alpha\int_0^\infty\frac{1-\Phi(x)}{1+\alpha (x-1)}\, dx$    

and using

$\displaystyle \frac{1}{1+\alpha (x-1)} = \frac{1}{\alpha}\int_0^\infty e^{-s x} \exp\left(-s\frac{1-\alpha}{\alpha}\right)d s$    

we get

$\displaystyle G_\alpha = \log(1-\alpha)+\int_0^\infty ds\, e^{-s\frac{1-\alpha}{\alpha}} \int_0^\infty\, dx\, e^{-s x} (1-\Phi(x))$    

and we can express the inner integral $ \int_0^\infty\dots dx$ via eq. (11) in terms of the Laplace transform $ (\mathcal{L}X)(s)$ to obtain

$\displaystyle G_\alpha = \log(1-\alpha) + \int_0^\infty \frac{1-(\mathcal{L}X)(s)}{s} \, \exp\left(-s\frac{1-\alpha}{\alpha}\right)\, ds, \quad X\ge 0\quad .$ (12)

The restriction $ X\geq0$ in the equation above is a reminder that $ \mathcal{L}X$ refers to the shifted, positive random variable $ X$ , while $ G_\alpha$ is the growth rate of the payoff $ X-1$ .
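Eq. (12) can be checked against direct integration for a shifted exponential payoff, i.e. the positive variable $ X\sim\mathrm{Exp}(1)$ with $ (\mathcal{L}X)(s)=1/(1+s)$ (sketch; the choice $ \alpha=0.5$ is ours):

import numpy as np
from scipy.integrate import quad

alpha = 0.5
L = lambda s: 1.0 / (1.0 + s)   # Laplace transform of Exp(1)

# Direct: E log(1 + alpha (X - 1)) for X ~ Exp(1)
direct = quad(lambda x: np.exp(-x) * np.log(1 - alpha + alpha * x),
              0, np.inf)[0]
# Via eq. (12)
via_laplace = np.log1p(-alpha) + quad(
    lambda s: (1 - L(s)) / s * np.exp(-s * (1 - alpha) / alpha),
    0, np.inf)[0]
print(direct, via_laplace)  # both ~ -0.0968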

It is tempting to use $ \log x = \int_0^\infty\!d\zeta\,\frac{1}{\zeta}\left(e^{-\zeta}-e^{-x\zeta}\right), \Re(x)>0$ to derive a relation, but issues with uniform continuity arise.


Markus Mayer 2010-06-04