discrete_optimization.generic_tools.ea package

Submodules

discrete_optimization.generic_tools.ea.alternating_ga module

class discrete_optimization.generic_tools.ea.alternating_ga.AlternatingGa(problem: Problem, objectives: str | List[str], encodings: List[str] | List[Dict[str, Any]] | None = None, mutations: List[Mutation] | List[DeapMutation] | None = None, crossovers: List[DeapCrossover] | None = None, selections: List[DeapSelection] | None = None, objective_handling: ObjectiveHandling | None = None, objective_weights: List[float] | None = None, pop_size: int | None = None, max_evals: int = 10000, sub_evals: List[int] | None = None, mut_rate: float | None = None, crossover_rate: float | None = None, tournament_size: float | None = None, deap_verbose: bool = False, params_objective_function: ParamsObjectiveFunction | None = None)[source]

Bases: SolverDO

Multi-encoding, single-objective GA

Parameters:
  • problem – the problem to solve

  • encoding – name (str) of an encoding registered in the solution register of Problem, or a dictionary of the form {'type': TypeAttribute, 'n': int}, where type refers to a TypeAttribute and n to the dimension of the problem in this encoding (e.g. the length of the vector). By default, the first encoding in the problem's register_solution is used.

solve(**kwargs: Any) ResultStorage[source]

Generic solving function.

Parameters:
  • callbacks – list of callbacks used to hook into the various stages of the solve

  • **kwargs – any argument specific to the solver

Solvers deriving from SolverDO should call the callback methods (.on_step_end(), …) during solve(). Some solvers are not yet updated and simply ignore them.

Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem.

discrete_optimization.generic_tools.ea.deap_wrappers module

discrete_optimization.generic_tools.ea.deap_wrappers.generic_mutate_wrapper(individual: MutableSequence[T], problem: Problem, encoding_name: str, indpb: Any, solution_fn: Type[Solution], custom_mutation: Mutation) Tuple[MutableSequence[T]][source]
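The wrapper's role, sketched in plain Python (illustrative only, not the library's code): DEAP expects a mutation operator that mutates an individual and returns it in a one-element tuple, so the wrapper adapts a domain-level Mutation object to that convention. The names below (mutate_wrapper, flip_first) are hypothetical.

```python
from typing import Callable, List, MutableSequence, Tuple


def mutate_wrapper(
    individual: MutableSequence[int],
    custom_mutation: Callable[[List[int]], List[int]],
) -> Tuple[MutableSequence[int]]:
    """Adapt a domain-level mutation to DEAP's convention:
    mutate the individual in place and return it in a 1-tuple."""
    mutated = custom_mutation(list(individual))
    individual[:] = mutated  # write the result back into the DEAP individual
    return (individual,)


# A toy "mutation": flip the first gene of a bit vector.
def flip_first(v: List[int]) -> List[int]:
    return [1 - v[0]] + v[1:]


ind = [0, 1, 1]
(result,) = mutate_wrapper(ind, flip_first)
# result is the same (now mutated) list object: [1, 1, 1]
```

The in-place write-back matters because DEAP individuals carry fitness metadata on the object itself; replacing the object would lose it.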

discrete_optimization.generic_tools.ea.ga module

class discrete_optimization.generic_tools.ea.ga.DeapCrossover(value)[source]

Bases: Enum

Enumeration of the available DEAP crossover operators.

CX_ONE_POINT = 3
CX_ORDERED = 2
CX_PARTIALY_MATCHED = 5
CX_TWO_POINT = 4
CX_UNIFORM = 0
CX_UNIFORM_PARTIALY_MATCHED = 1
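For intuition, CX_ONE_POINT corresponds to the classic one-point crossover (DEAP's cxOnePoint): cut both parents at the same random index and swap the tails. A self-contained sketch of the idea, not the library's implementation:

```python
import random
from typing import List, Tuple


def one_point_crossover(
    p1: List[int], p2: List[int], rng: random.Random
) -> Tuple[List[int], List[int]]:
    """Cut both parents at the same point and exchange the tails."""
    cut = rng.randrange(1, len(p1))  # cut strictly inside the vector
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]


rng = random.Random(0)
c1, c2 = one_point_crossover([0, 0, 0, 0], [1, 1, 1, 1], rng)
# each child starts like one parent and ends like the other
```

The ordered and partially matched variants (CX_ORDERED, CX_PARTIALY_MATCHED) exist because plain tail-swapping would break permutation encodings by duplicating genes.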
class discrete_optimization.generic_tools.ea.ga.DeapMutation(value)[source]

Bases: Enum

Enumeration of the available DEAP mutation operators.

MUT_FLIP_BIT = 0
MUT_SHUFFLE_INDEXES = 1
MUT_UNIFORM_INT = 2
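MUT_FLIP_BIT follows the idea of DEAP's mutFlipBit: each gene of a binary vector is flipped independently with probability indpb. A minimal illustrative sketch:

```python
import random
from typing import List


def mut_flip_bit(individual: List[int], indpb: float, rng: random.Random) -> List[int]:
    """Independently flip each bit with probability indpb."""
    return [1 - bit if rng.random() < indpb else bit for bit in individual]


rng = random.Random(42)
before = [0, 1, 0, 1, 0, 1]
after = mut_flip_bit(before, indpb=0.5, rng=rng)
```

With indpb=0.0 the individual is returned unchanged; with indpb=1.0 every bit is flipped.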
class discrete_optimization.generic_tools.ea.ga.DeapSelection(value)[source]

Bases: Enum

Enumeration of the available DEAP selection operators.

SEL_BEST = 2
SEL_RANDOM = 1
SEL_ROULETTE = 4
SEL_STOCHASTIC_UNIVERSAL_SAMPLING = 6
SEL_TOURNAMENT = 0
SEL_WORST = 5
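SEL_TOURNAMENT (the default for Ga below) works like DEAP's selTournament: each of k draws picks the fittest of a small random tournament. An illustrative sketch; note that the solvers here take tournament_size as a float, presumably a fraction of pop_size, whereas this sketch takes an absolute tournament size:

```python
import random
from typing import Callable, List, Sequence


def sel_tournament(
    population: Sequence[List[int]],
    k: int,
    tournsize: int,
    fitness: Callable[[List[int]], float],
    rng: random.Random,
) -> List[List[int]]:
    """Select k individuals; each is the fittest of a random tournament."""
    chosen = []
    for _ in range(k):
        aspirants = [population[rng.randrange(len(population))] for _ in range(tournsize)]
        chosen.append(max(aspirants, key=fitness))
    return chosen


pop = [[0, 0], [0, 1], [1, 0], [1, 1]]
winners = sel_tournament(pop, k=2, tournsize=3, fitness=sum, rng=random.Random(0))
```

Larger tournaments increase selection pressure; tournsize=1 degenerates to uniform random selection (SEL_RANDOM).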
class discrete_optimization.generic_tools.ea.ga.Ga(problem: Problem, objectives: str | List[str], mutation: Mutation | DeapMutation | None = None, crossover: DeapCrossover | None = None, selection: DeapSelection = DeapSelection.SEL_TOURNAMENT, encoding: str | Dict[str, Any] | None = None, objective_handling: ObjectiveHandling = ObjectiveHandling.SINGLE, objective_weights: List[float] | None = None, pop_size: int = 100, max_evals: int | None = None, mut_rate: float = 0.1, crossover_rate: float = 0.9, tournament_size: float = 0.2, deap_verbose: bool = True, initial_population: List[List[Any]] | None = None, params_objective_function: ParamsObjectiveFunction | None = None)[source]

Bases: SolverDO

Single-objective GA

Parameters:
  • problem – the problem to solve

  • encoding – name (str) of an encoding registered in the solution register of Problem, or a dictionary of the form {'type': TypeAttribute, 'n': int}, where type refers to a TypeAttribute and n to the dimension of the problem in this encoding (e.g. the length of the vector). By default, the first encoding in the problem's register_solution is used.

evaluate_problem(int_vector: List[int]) Tuple[float][source]
generate_custom_population() List[Any][source]
hyperparameters: List[Hyperparameter] = [EnumHyperparameter(name='crossover', default=None, choices=[<DeapCrossover.CX_UNIFORM: 0>, <DeapCrossover.CX_UNIFORM_PARTIALY_MATCHED: 1>, <DeapCrossover.CX_ORDERED: 2>, <DeapCrossover.CX_ONE_POINT: 3>, <DeapCrossover.CX_TWO_POINT: 4>, <DeapCrossover.CX_PARTIALY_MATCHED: 5>]), EnumHyperparameter(name='selection', default=<DeapSelection.SEL_TOURNAMENT: 0>, choices=[<DeapSelection.SEL_TOURNAMENT: 0>, <DeapSelection.SEL_RANDOM: 1>, <DeapSelection.SEL_BEST: 2>, <DeapSelection.SEL_ROULETTE: 4>, <DeapSelection.SEL_WORST: 5>, <DeapSelection.SEL_STOCHASTIC_UNIVERSAL_SAMPLING: 6>]), IntegerHyperparameter(name='pop_size', default=100, low=1, high=1000), FloatHyperparameter(name='mut_rate', default=0.1, low=0, high=0.9), FloatHyperparameter(name='crossover_rate', default=0.9, low=0, high=1), FloatHyperparameter(name='tournament_size', default=0.2, low=0, high=1)]

Hyperparameters available for this solver.

These hyperparameters are to be fed to the **kwargs of
  • __init__()

  • init_model() (when available)

  • solve()
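In other words, each declared hyperparameter supplies a default that a caller-provided keyword argument can override. A hedged, pure-Python sketch of that merge pattern (names and defaults here mirror the list above, but the function is illustrative, not the library's API):

```python
# Defaults taken from the hyperparameters declared above.
defaults = {
    "pop_size": 100,
    "mut_rate": 0.1,
    "crossover_rate": 0.9,
    "tournament_size": 0.2,
}


def build_solver_kwargs(**overrides):
    """Merge user overrides onto the hyperparameter defaults;
    reject names that are not declared hyperparameters."""
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise TypeError(f"unknown hyperparameters: {sorted(unknown)}")
    return {**defaults, **overrides}


kwargs = build_solver_kwargs(pop_size=200, mut_rate=0.2)
# kwargs: pop_size and mut_rate overridden, the rest kept at their defaults
```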

solve(**kwargs: Any) ResultStorage[source]

Generic solving function.

Parameters:
  • callbacks – list of callbacks used to hook into the various stages of the solve

  • **kwargs – any argument specific to the solver

Solvers deriving from SolverDO should call the callback methods (.on_step_end(), …) during solve(). Some solvers are not yet updated and simply ignore them.

Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem.

discrete_optimization.generic_tools.ea.ga_tools module

class discrete_optimization.generic_tools.ea.ga_tools.ParametersAltGa(mutations: List[Mutation | DeapMutation], crossovers: List[DeapCrossover], selections: List[DeapSelection], encodings: List[str], objective_handling: ObjectiveHandling, objectives: str | List[str], objective_weights: List[float], pop_size: int, max_evals: int, mut_rate: float, crossover_rate: float, tournament_size: float, deap_verbose: bool, sub_evals: List[int])[source]

Bases: object

static default_mrcpsp() ParametersAltGa[source]
static default_msrcpsp() ParametersAltGa[source]
class discrete_optimization.generic_tools.ea.ga_tools.ParametersGa(mutation: Mutation | DeapMutation, crossover: DeapCrossover, selection: DeapSelection, encoding: str, objective_handling: ObjectiveHandling, objectives: str | List[str], objective_weights: List[float], pop_size: int, max_evals: int, mut_rate: float, crossover_rate: float, tournament_size: float, deap_verbose: bool)[source]

Bases: object

static default_rcpsp() ParametersGa[source]
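ParametersGa and ParametersAltGa are plain parameter bags whose static methods return named presets (e.g. default_rcpsp()). A toy analogue of that pattern, with illustrative field values that are not the library's actual defaults:

```python
from dataclasses import dataclass


@dataclass
class ParametersGaSketch:
    """Toy analogue of ParametersGa: a bag of GA settings with named presets."""
    encoding: str
    pop_size: int
    max_evals: int
    mut_rate: float
    crossover_rate: float
    tournament_size: float

    @staticmethod
    def default_rcpsp() -> "ParametersGaSketch":
        # Values are illustrative placeholders, not the library's defaults.
        return ParametersGaSketch(
            encoding="rcpsp_permutation",
            pop_size=100,
            max_evals=10_000,
            mut_rate=0.1,
            crossover_rate=0.9,
            tournament_size=0.2,
        )


params = ParametersGaSketch.default_rcpsp()
```

Static factory methods keep the problem-specific tuning (RCPSP, MS-RCPSP) in one discoverable place instead of scattering magic numbers at call sites.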

discrete_optimization.generic_tools.ea.nsga module

class discrete_optimization.generic_tools.ea.nsga.Nsga(problem: Problem, objectives: str | List[str], mutation: Mutation | DeapMutation | None = None, crossover: DeapCrossover | None = None, encoding: str | Dict[str, Any] | None = None, objective_weights: List[float] | None = None, pop_size: int = 100, max_evals: int | None = None, mut_rate: float = 0.1, crossover_rate: float = 0.9, deap_verbose: bool = True)[source]

Bases: SolverDO

NSGA

Parameters:
  • problem – the problem to solve

  • encoding – name (str) of an encoding registered in the solution register of Problem, or a dictionary of the form {'type': TypeAttribute, 'n': int}, where type refers to a TypeAttribute and n to the dimension of the problem in this encoding (e.g. the length of the vector). By default, the first encoding in the problem's register_solution is used.

evaluate_problem(int_vector: List[int]) Tuple[float, ...][source]
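Unlike the single-objective Ga, evaluate_problem here returns one float per objective, and NSGA ranks individuals by Pareto dominance over those tuples. A minimal dominance check for intuition (illustrative; assumes all objectives are minimized):

```python
from typing import Tuple


def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if a is at least as good as b on every objective and strictly
    better on at least one (minimization assumed on all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


# Hypothetical (makespan, cost) fitness tuples:
assert dominates((3.0, 5.0), (4.0, 6.0))      # better on both objectives
assert not dominates((3.0, 7.0), (4.0, 6.0))  # trade-off: neither dominates
assert not dominates((3.0, 5.0), (3.0, 5.0))  # equal vectors do not dominate
```

Solutions that no other solution dominates form the Pareto front that the solver's ResultStorage can expose.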
solve(**kwargs: Any) ResultStorage[source]

Generic solving function.

Parameters:
  • callbacks – list of callbacks used to hook into the various stages of the solve

  • **kwargs – any argument specific to the solver

Solvers deriving from SolverDO should call the callback methods (.on_step_end(), …) during solve(). Some solvers are not yet updated and simply ignore them.

Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem.

Module contents