Intuitively, an individually fair Machine Learning (ML) model treats similar inputs similarly. Formally, the leading notion of individual fairness is metric fairness (Dwork et al., 2011); it requires:

$$ d_y\left(h(x_1), h(x_2)\right) \leq L \, d_x(x_1, x_2) \quad \forall \; x_1, x_2 \in X $$

Here, $h : X \rightarrow Y$ is an ML model, where $X$ and $Y$ are the input and output spaces; $d_x$ and $d_y$ are metrics on the input and output spaces; and $L \geq 0$ is the Lipschitz constant.
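As a rough, library-agnostic illustration of this condition (not part of the inFairness API; the model, metrics, and the constant `L` below are placeholders), one can check the inequality for a single pair of inputs as follows:

```python
import torch

# Illustration of the metric-fairness (Lipschitz) condition:
# d_y(h(x1), h(x2)) <= L * d_x(x1, x2) for all input pairs x1, x2.
def satisfies_metric_fairness(h, d_x, d_y, x1, x2, L=1.0):
    return d_y(h(x1), h(x2)) <= L * d_x(x1, x2)

# Example with plain Euclidean metrics and an untrained linear model (placeholders)
def euclidean(a, b):
    return torch.norm(a - b)

h = torch.nn.Linear(3, 1)
x1, x2 = torch.rand(3), torch.rand(3)
print(satisfies_metric_fairness(h, euclidean, euclidean, x1, x2, L=10.0))
```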
inFairness is a PyTorch package that supports auditing, training, and post-processing ML models for individual fairness. At its core, the library implements the key components of the individual fairness pipeline.
For an in-depth tutorial on Individual Fairness and the inFairness package, please watch this tutorial. Also, take a look at the examples folder for illustrative use cases and try the Fairness Playground demo. For more group fairness examples, see AIF360.
inFairness can be installed using pip:

```bash
pip install inFairness
```
Alternatively, if you wish to install the latest development version, you can install it directly by cloning this repository:

```bash
git clone <git repo url>
cd inFairness
pip install -e .
```
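To quickly confirm that the installation succeeded, you can try importing the package (a simple sanity check, not an official verification step):

```bash
python -c "import inFairness; print('inFairness imported successfully')"
```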
inFairness currently supports:
- Learning individually fair metrics: [Docs]
- Training of individually fair models: [Docs]
- Auditing pre-trained ML models for individual fairness: [Docs]
- Post-processing for Individual Fairness: [Docs]
- Individually fair ranking: [Docs]
We welcome contributions from the community in any form, whether it is a new fair algorithm, a new fair metric, a new use case, or simply reporting an issue or enhancement in the package. To contribute code to the package, please follow these steps:
- Clone this git repository to your local system.
- Set up your system by installing the dependencies: `pip3 install -r requirements.txt` and `pip3 install -r build_requirements.txt`.
- Add your code contribution to the package. Please refer to the `inFairness` folder for an overview of the directory structure.
- Add appropriate unit tests in the `tests` folder (a minimal example is sketched after these steps).
- Once you are ready to commit code, check for the following:
  - Coding style compliance using `flake8 inFairness/`. This command lists all stylistic violations found in the code; please fix as many of them as you can.
  - Ensure all the test cases pass using `coverage run --source inFairness -m pytest tests/`. All unit tests need to pass before code can be merged into the package.
- Finally, commit your code and raise a Pull Request.
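As a rough, hypothetical sketch of what a new unit test file under `tests/` might contain (this example only checks that the distance classes used in the quick-start example below can be instantiated; adapt the assertions to the component you are contributing):

```python
from inFairness import distances

def test_distance_classes_can_be_instantiated():
    # Assumption: instantiation with default arguments should not raise;
    # both classes appear in the quick-start example of this README.
    assert distances.EuclideanDistance() is not None
    assert distances.SVDSensitiveSubspaceDistance() is not None
```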
The `examples` folder contains tutorials from different fields illustrating how to use the package.
First, you need to import the relevant packages:

```python
from inFairness import distances
from inFairness.fairalgo import SenSeI
```
The `inFairness.distances` module implements various distance metrics on the input and output spaces, and the `inFairness.fairalgo` module implements various individually fair learning algorithms, with `SenSeI` being one particular algorithm.
Thereafter, we instantiate and fit the distance metrics on the training data, and then instantiate the fair algorithm:
```python
# Distance metrics on the input (x) and output (y) spaces
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_y = distances.EuclideanDistance()

# Fit the input-space metric on the training data
distance_x.fit(X_train=data, n_components=50)

# Finally instantiate the fair algorithm
fairalgo = SenSeI(network, distance_x, distance_y, lossfn, rho=1.0, eps=1e-3, lr=0.01, auditor_nsteps=100, auditor_lr=0.1)
```
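The snippet above (and the training loop below) assumes that `data`, `network`, `lossfn`, `optimizer`, and `train_dl` are defined elsewhere. A minimal, hypothetical sketch of such definitions might look like this (the architecture, shapes, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical setup: training data, a simple feedforward network,
# a standard loss function, an optimizer, and a data loader.
data = torch.rand(1000, 100)        # N x num_features training inputs
targets = torch.rand(1000, 1)       # N x output_dim training targets
network = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 1))
lossfn = nn.MSELoss()
optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)
train_dl = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(data, targets), batch_size=64)
```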
Finally, you can train the `fairalgo` as you would train your standard PyTorch deep neural network:
```python
fairalgo.train()

for epoch in range(EPOCHS):
    for x, y in train_dl:
        optimizer.zero_grad()
        result = fairalgo(x, y)
        result.loss.backward()
        optimizer.step()
```
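After training, the wrapped `network` can be used for predictions like any other PyTorch model (a brief sketch; `X_test` here is an assumed placeholder for your test inputs):

```python
# Switch to evaluation mode and predict on held-out data
network.eval()
with torch.no_grad():
    X_test = torch.rand(10, 100)   # placeholder test inputs
    predictions = network(X_test)
```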
Mikhail Yurochkin | Mayank Agarwal | Aldo Pareja | Onkar Bhardwaj