# General Information

**Documentation:** http://docs.dit.io

**Downloads:** https://pypi.org/project/dit/

**Dependencies:**

## Optional Dependencies

- colorama: colored column heads in PID tables indicating failure modes
- cython: faster sampling from distributions
- hypothesis: random sampling of distributions
- matplotlib, python-ternary: plotting of various information-theoretic expansions
- numdifftools: numerical evaluation of gradients and Hessians during optimization
- pint: add units to informational values
- scikit-learn: faster nearest-neighbor lookups during entropy/mutual information estimation from samples
**Mailing list:** None

**Code and bug tracker:** https://github.com/dit/dit

**License:** BSD 3-Clause; see LICENSE.txt for details.

## Quickstart

Basic usage of dit consists of creating distributions, modifying them as needed, and then computing properties of those distributions. First, we import:

In : import dit


Suppose we have a really thick coin, one so thick that there is a reasonable chance of it landing on its edge. Here is how we might represent the coin in dit.

In : d = dit.Distribution(['H', 'T', 'E'], [.4, .4, .2])

In : print(d)
Class:          Distribution
Alphabet:       ('E', 'H', 'T') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 1
RV Names:       None

x   p(x)
E   1/5
H   2/5
T   2/5


Calculate the probability of $$H$$ and also of the combination: $$H~\mathbf{or}~T$$.

In : d['H']
Out: 0.4

In : d.event_probability(['H','T'])
Out: 0.8


Calculate the Shannon entropy and extropy of the joint distribution.

In : dit.shannon.entropy(d)
Out: 1.5219280948873621

In : dit.other.extropy(d)
Out: 1.1419011889093373
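Both values can be verified by hand, independently of dit: the Shannon entropy is $$H = -\sum_x p(x) \log_2 p(x)$$, and the extropy is the same sum with each $$p(x)$$ replaced by $$1 - p(x)$$. A stdlib sketch:

```python
import math

# Probabilities of the thick coin: H, T, E.
pmf = [0.4, 0.4, 0.2]

# Shannon entropy: H = -sum_x p(x) * log2(p(x))
entropy = -sum(p * math.log2(p) for p in pmf)

# Extropy: J = -sum_x (1 - p(x)) * log2(1 - p(x))
extropy = -sum((1 - p) * math.log2(1 - p) for p in pmf)

print(entropy)  # ≈ 1.5219280948873621
print(extropy)  # ≈ 1.1419011889093373
```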


Create a distribution representing the $$\mathbf{xor}$$ logic function. Here, we have two inputs, $$X$$ and $$Y$$, and then an output $$Z = \mathbf{xor}(X,Y)$$.

In : import dit.example_dists

In : d = dit.example_dists.Xor()

In : d.set_rv_names(['X', 'Y', 'Z'])

In : print(d)
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       ('X', 'Y', 'Z')

x     p(x)
000   1/4
011   1/4
101   1/4
110   1/4
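
For reference, the same joint distribution can be built by hand: enumerate the two input bits and append their exclusive-or, with each input pair equally likely. A stdlib sketch (not dit's own construction):

```python
from itertools import product

# Enumerate inputs X, Y in {0, 1} and compute Z = X xor Y.
outcomes = [f"{x}{y}{x ^ y}" for x, y in product((0, 1), repeat=2)]

# The four input pairs are equiprobable, so each outcome has probability 1/4.
pmf = {o: 1 / 4 for o in outcomes}

print(sorted(pmf))  # ['000', '011', '101', '110']
```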


Calculate the Shannon mutual informations $$\mathrm{I}[X:Z]$$, $$\mathrm{I}[Y:Z]$$, and $$\mathrm{I}[X,Y:Z]$$.

In : dit.shannon.mutual_information(d, ['X'], ['Z'])
Out: 0.0

In : dit.shannon.mutual_information(d, ['Y'], ['Z'])
Out: 0.0

In : dit.shannon.mutual_information(d, ['X', 'Y'], ['Z'])
Out: 1.0
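
These three values follow from the identity $$\mathrm{I}[A:B] = H[A] + H[B] - H[A,B]$$. The stdlib sketch below recomputes them directly from the joint pmf; the helper names are illustrative, not part of dit's API:

```python
import math
from collections import Counter

# Joint pmf of the xor distribution: outcome string "xyz" -> probability.
joint = {"000": 0.25, "011": 0.25, "101": 0.25, "110": 0.25}

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idxs):
    """Marginalize the joint pmf onto the given outcome positions."""
    out = Counter()
    for outcome, p in joint.items():
        out["".join(outcome[i] for i in idxs)] += p
    return out

def mutual_information(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B), with A and B as index tuples."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

print(mutual_information(joint, (0,), (2,)))    # I[X:Z]   = 0.0
print(mutual_information(joint, (1,), (2,)))    # I[Y:Z]   = 0.0
print(mutual_information(joint, (0, 1), (2,)))  # I[X,Y:Z] = 1.0
```

Each input alone tells us nothing about the output, yet together they determine it completely; this is why xor is the canonical example of purely synergistic information.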


Calculate the marginal distribution $$P(X,Z)$$. Then print its probabilities as fractions, showing the mask.

In : d2 = d.marginal(['X', 'Z'])

In : print(d2.to_string(show_mask=True, exact=True))
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 2 (mask: 3)
RV Names:       ('X', 'Z')

x     p(x)
0*0   1/4
0*1   1/4
1*0   1/4
1*1   1/4
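
Conceptually, marginalizing onto $$(X, Z)$$ just sums the joint pmf over the masked variable $$Y$$. A stdlib sketch of that step (this is an illustration, not dit's internal implementation):

```python
from collections import Counter

# Joint xor pmf; sum out Y (position 1) to obtain P(X, Z).
joint = {"000": 0.25, "011": 0.25, "101": 0.25, "110": 0.25}

pxz = Counter()
for outcome, p in joint.items():
    pxz[outcome[0] + outcome[2]] += p

print(dict(pxz))  # uniform over {'00', '01', '10', '11'}
```

The result is uniform over all four $$(X, Z)$$ pairs, i.e. $$X$$ and $$Z$$ are independent, consistent with the $$\mathrm{I}[X:Z] = 0$$ computed above.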


Convert the distribution's probabilities to log (base 3.5) probabilities, and access its probability mass function.

In : d2.set_base(3.5)

In : d2.pmf
Out: array([-1.10658951, -1.10658951, -1.10658951, -1.10658951])
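
The repeated value is simply the base-3.5 logarithm of 1/4; converting a linear pmf to log probabilities is just a change of base. A stdlib check:

```python
import math

# Linear pmf of the uniform (X, Z) marginal.
pmf = [0.25, 0.25, 0.25, 0.25]

# Base-b log probabilities: log_b(p) = ln(p) / ln(b)
log_pmf = [math.log(p, 3.5) for p in pmf]

print(log_pmf[0])  # ≈ -1.10658951
```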


Draw 5 random samples from this distribution.
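The dit call for this step is not shown above. Conceptually, it is weighted sampling from the (linear) pmf, which the stdlib can stand in for with `random.choices` (assuming the distribution has been reset to linear base):

```python
import random

# Outcomes and linear probabilities of the P(X, Z) marginal.
outcomes = ["00", "01", "10", "11"]
pmf = [0.25, 0.25, 0.25, 0.25]

random.seed(0)  # for reproducibility
samples = random.choices(outcomes, weights=pmf, k=5)
print(samples)  # five outcomes drawn i.i.d. from the pmf
```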

Enjoy!