My Project

Classes
    class Augmented_model
    class Data_augV5
    class Data_augV7
    class Higher_model
    class RandAug

Functions
    def __init__(self, TF_dict=TF.TF_dict, N_TF=1, mix_dist=0.0, fixed_prob=False, fixed_mag=True, shared_mag=True)
    def forward(self, x)
    def apply_TF(self, x, sampled_TF)
    def adjust_param(self, soft=False)
    def loss_weight(self)
    def reg_loss(self, reg_factor=0.005)
    def train(self, mode=True)
    def eval(self)
    def augment(self, mode=True)
    def __getitem__(self, key)
    def __str__(self)
    def TF_prob(self)
    def __init__(self, TF_dict=TF.TF_dict, N_TF=1, mag=TF.PARAMETER_MAX)

Variables
    mag
Data augmentation modules. Features a custom implementation of RandAugment (RandAug), as well as data augmentation modules allowing gradient propagation. Typical usage: aug_model = Augmented_model(Data_augV5, model)
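The wrapper pattern in the typical-usage line above can be sketched with minimal stand-ins. FakeAug and FakeModel below are hypothetical placeholders for illustration only; the real classes are PyTorch modules.

```python
# Hypothetical stand-ins sketching the call chain of Augmented_model.
class FakeAug:
    """Pretend augmentation: shifts every value by 1."""
    def __call__(self, x):
        return [v + 1 for v in x]

class FakeModel:
    """Pretend model: sums its input."""
    def __call__(self, x):
        return sum(x)

class Augmented_model:
    """Chains a data-augmentation module in front of a model."""
    def __init__(self, aug, model):
        self._aug, self._model = aug, model
    def __call__(self, x):
        return self._model(self._aug(x))

aug_model = Augmented_model(FakeAug(), FakeModel())
print(aug_model([1, 2, 3]))  # -> 9 (each element shifted by 1, then summed)
```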
def dataug.__getitem__(self, key)

Access to the learnable parameters.

Args:
    key (string): Name of the learnable parameter to access.

Returns:
    nn.Parameter.
def dataug.__init__(self, TF_dict=TF.TF_dict, N_TF=1, mix_dist=0.0, fixed_prob=False, fixed_mag=True, shared_mag=True)
Data augmentation module with learnable parameters. Applies transformations (TF) to a batch of data. Each TF is defined by a (name, probability of application, magnitude of distortion) tuple, which can be learned. For the full definition of the TF, see transformations.py. The TF probabilities define a distribution from which the applied TF are sampled. Be wary that the order of sequential application of the TF is not taken into account; see Data_augV7.

Attributes:
    _data_augmentation (bool): Whether TF will be applied during the forward pass.
    _TF_dict (dict): A dictionary containing the data transformations (TF) to be applied.
    _TF (list): List of TF names.
    _nb_tf (int): Number of TF used.
    _N_seqTF (int): Number of TF applied sequentially to each input.
    _shared_mag (bool): Whether to share a single magnitude parameter for all TF.
    _fixed_mag (bool): Whether to lock the TF magnitudes.
    _fixed_prob (bool): Whether to lock the TF probabilities.
    _samples (list): Sampled TF indexes during the last forward pass.
    _mix_dist (bool): Whether to use a mix of a uniform distribution and the real distribution (TF probabilities). If False, only a uniform distribution is used.
    _fixed_mix (bool): Whether to lock the mix distribution factor.
    _params (nn.ParameterDict): Learnable parameters.
    _reg_tgt (Tensor): Target for the magnitude regularisation. Only used when _fixed_mag is set to False (i.e. when the magnitudes are learned).
    _reg_mask (list): Mask selecting the TF considered for the regularisation.
Init Data_augV5.

Args:
    TF_dict (dict): A dictionary containing the data transformations (TF) to be applied. (default: use all available TF from transformations.py)
    N_TF (int): Number of TF applied sequentially to each input. (default: 1)
    mix_dist (float): Proportion [0.0, 1.0] of the real distribution used for sampling/selection of the TF. Distribution = (1-mix_dist)*Uniform_distribution + mix_dist*Real_distribution. If None is given, try to learn this parameter. (default: 0)
    fixed_prob (bool): Whether to lock the TF probabilities. (default: False)
    fixed_mag (bool): Whether to lock the TF magnitudes. (default: True)
    shared_mag (bool): Whether to share a single magnitude parameter for all TF. (default: True)
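The mix_dist formula above can be sketched in plain Python. This is an illustrative re-derivation of the stated formula, not the module's implementation (which operates on tensors):

```python
import random

def mixed_tf_distribution(probs, mix_dist):
    """Distribution = (1 - mix_dist) * Uniform + mix_dist * Real,
    where Real is the normalised learned TF probabilities."""
    n = len(probs)
    total = sum(probs)
    real = [p / total for p in probs]      # normalised learned probabilities
    uniform = [1.0 / n] * n
    return [(1 - mix_dist) * u + mix_dist * r for u, r in zip(uniform, real)]

def sample_tf(probs, mix_dist, rng=random):
    """Draw one TF index from the mixed distribution."""
    dist = mixed_tf_distribution(probs, mix_dist)
    return rng.choices(range(len(dist)), weights=dist, k=1)[0]

# mix_dist = 0 -> purely uniform sampling, whatever the learned probabilities
print(mixed_tf_distribution([0.9, 0.1], 0.0))  # -> [0.5, 0.5]
```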
Data augmentation module with learnable parameters. Applies transformations (TF) to a batch of data. Each TF is defined by a (name, probability of application, magnitude of distortion) tuple, which can be learned. For the full definition of the TF, see transformations.py. The TF probabilities define a distribution from which the applied TF are sampled. Replaces the use of individual TF with TF sets, which are combinations of classic TF.

Attributes:
    _data_augmentation (bool): Whether TF will be applied during the forward pass.
    _TF_dict (dict): A dictionary containing the data transformations (TF) to be applied.
    _TF (list): List of TF names.
    _nb_tf (int): Number of TF used.
    _N_seqTF (int): Number of TF applied sequentially to each input.
    _shared_mag (bool): Whether to share a single magnitude parameter for all TF.
    _fixed_mag (bool): Whether to lock the TF magnitudes.
    _fixed_prob (bool): Whether to lock the TF probabilities.
    _samples (list): Sampled TF indexes during the last forward pass.
    _mix_dist (bool): Whether to use a mix of a uniform distribution and the real distribution (TF probabilities). If False, only a uniform distribution is used.
    _fixed_mix (bool): Whether to lock the mix distribution factor.
    _params (nn.ParameterDict): Learnable parameters.
    _reg_tgt (Tensor): Target for the magnitude regularisation. Only used when _fixed_mag is set to False (i.e. when the magnitudes are learned).
    _reg_mask (list): Mask selecting the TF considered for the regularisation.
Init Data_augV7.

Args:
    TF_dict (dict): A dictionary containing the data transformations (TF) to be applied. (default: use all available TF from transformations.py)
    N_TF (int): Number of TF applied sequentially to each input. Minimum 2; otherwise prefer using Data_augV5. (default: 2)
    mix_dist (float): Proportion [0.0, 1.0] of the real distribution used for sampling/selection of the TF. Distribution = (1-mix_dist)*Uniform_distribution + mix_dist*Real_distribution. If None is given, try to learn this parameter. (default: 0)
    fixed_prob (bool): Whether to lock the TF probabilities. (default: False)
    fixed_mag (bool): Whether to lock the TF magnitudes. (default: True)
    shared_mag (bool): Whether to share a single magnitude parameter for all TF. (default: True)
def dataug.__init__(self, TF_dict=TF.TF_dict, N_TF=1, mag=TF.PARAMETER_MAX)
RandAugment implementation. Applies transformations (TF) to a batch of data. Each TF is defined by a (name, probability of application, magnitude of distortion) tuple. For the full definition of the TF, see transformations.py. The TF probabilities are ignored; the TF are instead selected randomly.

Attributes:
    _data_augmentation (bool): Whether TF will be applied during the forward pass.
    _TF_dict (dict): A dictionary containing the data transformations (TF) to be applied.
    _TF (list): List of TF names.
    _nb_tf (int): Number of TF used.
    _N_seqTF (int): Number of TF applied sequentially to each input.
    _shared_mag (bool): Whether to share a single magnitude parameter for all TF. Should be True.
    _fixed_mag (bool): Whether to lock the TF magnitudes. Should be True.
    _params (nn.ParameterDict): Data augmentation parameters.
Init RandAug.

Args:
    TF_dict (dict): A dictionary containing the data transformations (TF) to be applied. (default: use all available TF from transformations.py)
    N_TF (int): Number of TF applied sequentially to each input. (default: 1)
    mag (float): Magnitude of the TF. Should be within [PARAMETER_MIN, PARAMETER_MAX] defined in transformations.py. (default: PARAMETER_MAX)
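The RandAug selection rule described above (probabilities ignored, each sequential TF drawn uniformly) can be sketched as follows; the function name and signature are illustrative, not the module's API:

```python
import random

def randaug_sample(tf_names, n_tf, rng=random):
    """RandAug-style selection: draw each of the n_tf sequential
    transformations uniformly at random, ignoring any learned
    probabilities."""
    return [rng.choice(tf_names) for _ in range(n_tf)]

rng = random.Random(0)  # seeded for reproducibility of the sketch
chain = randaug_sample(["rotate", "shear", "solarize"], 2, rng)
print(chain)  # e.g. two names drawn uniformly from the list
```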
def dataug.__str__(self)

Name of the module.

Returns:
    String containing the name of the module as well as its higher-level parameters.
def dataug.adjust_param(self, soft=False)
Enforce limitations on the learned parameters. Ensures that the parameter values stay within the right intervals. This should be called after each update of those parameters.

Args:
    soft (bool): Whether to use a softmax function for the TF probabilities. Not recommended, as it tends to lock the probabilities, preventing them from being learned. (default: False)
Not used
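The interval enforcement described for adjust_param can be sketched on plain lists. The exact bounds and renormalisation are assumptions for illustration; the real method works on the module's nn.ParameterDict:

```python
def adjust_param(probs, mags, mag_min=0.0, mag_max=1.0):
    """Clamp learned parameters back into valid intervals after an
    optimiser step (sketch; bounds assumed to be [0,1] for probabilities
    and [mag_min, mag_max] for magnitudes)."""
    probs = [min(max(p, 0.0), 1.0) for p in probs]   # keep probabilities in [0, 1]
    total = sum(probs) or 1.0
    probs = [p / total for p in probs]               # renormalise to a distribution
    mags = [min(max(m, mag_min), mag_max) for m in mags]
    return probs, mags

print(adjust_param([2.0, -1.0, 1.0], [1.5, -0.2]))
# -> ([0.5, 0.0, 0.5], [1.0, 0.0])
```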
def dataug.apply_TF(self, x, sampled_TF)
Applies the sampled transformations.

Args:
    x (Tensor): Batch of data.
    sampled_TF (Tensor): Indexes of the TF to be applied to each element of data.

Returns:
    Tensor: Batch of transformed data.
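The per-element dispatch can be sketched without tensors. This maps each element through the function whose index was sampled for it; the real implementation is batched, so this is only a readable approximation:

```python
def apply_TF(x, sampled_TF, tf_funcs):
    """Apply, per element of x, the transformation whose index appears in
    sampled_TF (sketch on plain lists; tf_funcs is a hypothetical list of
    callables standing in for the TF dictionary)."""
    return [tf_funcs[idx](v) for v, idx in zip(x, sampled_TF)]

double = lambda v: 2 * v
negate = lambda v: -v
print(apply_TF([1, 2, 3], [0, 1, 0], [double, negate]))  # -> [2, -2, 6]
```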
def dataug.augment(self, mode=True)
Set the augmentation mode.

Args:
    mode (bool): Whether to perform data augmentation on the forward pass. (default: True)
def dataug.eval(self)
Set the module to evaluation mode.
def dataug.forward(self, x)
Main method of the data augmentation module.

Args:
    x (Tensor): Batch of data.

Returns:
    Tensor: Batch of transformed data.
def dataug.loss_weight(self)
Weights for the loss. Computes the weight of the loss of each input depending on which TF was applied to it. Should be applied to the loss before reduction. Does not take into account the order of application of the TF; see Data_augV7.

Returns:
    Tensor: Loss weights.
Weights for the loss. Computes the weight of the loss of each input depending on which TF was applied to it. Should be applied to the loss before reduction.

Returns:
    Tensor: Loss weights.
Not used
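One plausible weighting scheme, sketched for illustration: weight each input's loss by the learned probability of the TF applied to it, so gradients can reach the TF probabilities. This is an assumption about the mechanism, not the module's exact formula:

```python
def loss_weight(sampled_TF, tf_probs):
    """Hypothetical loss weighting: the weight of each input is the
    learned probability of the TF that was applied to it (assumption,
    for illustration only)."""
    return [tf_probs[idx] for idx in sampled_TF]

weights = loss_weight([0, 1, 1], [0.7, 0.3])
# Applied to per-sample losses before reduction:
losses = [1.0, 2.0, 4.0]
weighted = sum(w * l for w, l in zip(weights, losses))
print(weighted)  # -> 2.5
```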
def dataug.reg_loss(self, reg_factor=0.005)
Regularisation term used to learn the magnitudes. Uses an L2 loss to encourage high-magnitude TF.

Args:
    reg_factor (float): Factor by which the regularisation loss is multiplied. (default: 0.005)

Returns:
    Tensor containing the regularisation loss value.
Not used
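An L2 penalty that encourages high magnitudes can be sketched by penalising the distance to a maximal target (the target value and interval are assumptions; the module stores its own target in _reg_tgt):

```python
def reg_loss(mags, mag_max=1.0, reg_factor=0.005):
    """L2 penalty pulling magnitudes towards an assumed maximum mag_max,
    which encourages high-magnitude TF (illustrative sketch)."""
    return reg_factor * sum((mag_max - m) ** 2 for m in mags)

print(reg_loss([1.0, 1.0]))  # -> 0.0: maximal magnitudes incur no penalty
print(reg_loss([0.0], reg_factor=1.0))  # -> 1.0
```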
def dataug.TF_prob(self)
Gives an estimation of the individual TF probabilities. Be wary that the probabilities returned are not exact: the TF distribution is not fully represented by them. Each probability should be taken individually; it only represents the chance for a specific TF to be picked at least once.

Returns:
    Tensor containing the single TF probabilities of application.
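The "picked at least once" chance described above follows the standard complement rule for independent draws; this sketch assumes the N_seqTF sequential draws are independent, which may not hold exactly in the module:

```python
def tf_prob_at_least_once(p, n_seq_tf):
    """Chance for a TF with per-draw probability p to be picked at least
    once across n_seq_tf independent sequential draws:
    1 - (1 - p) ** n_seq_tf."""
    return 1 - (1 - p) ** n_seq_tf

print(tf_prob_at_least_once(0.5, 2))  # -> 0.75
```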
def dataug.train(self, mode=True)
Set the module training mode.

Args:
    mode (bool): Whether to learn the parameters of the module. None does not change the mode. (default: True)