discrete_optimization.generic_tools.ls package
Submodules
discrete_optimization.generic_tools.ls.hill_climber module
- class discrete_optimization.generic_tools.ls.hill_climber.HillClimber(problem: Problem, mutator: Mutation, restart_handler: RestartHandler, mode_mutation: ModeMutation, params_objective_function: ParamsObjectiveFunction | None = None, store_solution: bool = False)[source]
Bases:
SolverDO
- solve(initial_variable: Solution, nb_iteration_max: int, callbacks: List[Callback] | None = None, **kwargs: Any) ResultStorage [source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stages of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods .on_step_end(), … during solve(). Some solvers are not yet updated and simply ignore them.
Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem
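The iteration structure behind solve() can be illustrated with a plain hill-climbing loop. The sketch below is not the library's implementation; the onemax objective and one-bit-flip mutation are hypothetical stand-ins for a Problem and a Mutation:

```python
import random

def hill_climb(initial, evaluate, mutate, nb_iteration_max, rng):
    """Minimal hill-climbing sketch: keep a mutant only if it improves."""
    current, current_fit = initial, evaluate(initial)
    best, best_fit = current, current_fit
    for _ in range(nb_iteration_max):
        candidate = mutate(current, rng)
        fit = evaluate(candidate)
        if fit >= current_fit:  # maximization: accept improving moves
            current, current_fit = candidate, fit
            if fit > best_fit:
                best, best_fit = candidate, fit
    return best, best_fit

# Hypothetical toy problem: maximize the number of ones in a bit list.
def evaluate(bits):
    return sum(bits)

def flip_one_bit(bits, rng):
    out = list(bits)
    i = rng.randrange(len(out))
    out[i] ^= 1
    return out

rng = random.Random(0)
best, fit = hill_climb([0] * 10, evaluate, flip_one_bit, 200, rng)
```

The real solver additionally consults its RestartHandler and fires callbacks each step; this sketch shows only the accept-if-better core.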
- class discrete_optimization.generic_tools.ls.hill_climber.HillClimberPareto(problem: Problem, mutator: Mutation, restart_handler: RestartHandler, mode_mutation: ModeMutation, params_objective_function: ParamsObjectiveFunction | None = None, store_solution: bool = False)[source]
Bases:
HillClimber
- solve(initial_variable: Solution, nb_iteration_max: int, update_iteration_pareto: int = 1000, callbacks: List[Callback] | None = None, **kwargs: Any) ParetoFront [source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stages of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods .on_step_end(), … during solve(). Some solvers are not yet updated and simply ignore them.
Returns (ParetoFront): a result object containing the non-dominated solutions found for a discrete-optimization problem
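Unlike the base HillClimber, this variant returns a ParetoFront of non-dominated solutions, re-filtered every update_iteration_pareto iterations. A minimal sketch of such a non-dominance filter (for maximization; illustrative, not the library's implementation):

```python
def dominates(a, b):
    """a dominates b (maximization): >= on every objective, > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the fitness vectors dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (2, 2) is dominated by (3, 3); the other three are mutually incomparable.
front = pareto_front([(1, 5), (3, 3), (5, 1), (2, 2)])
```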
discrete_optimization.generic_tools.ls.local_search module
- class discrete_optimization.generic_tools.ls.local_search.ModeMutation(value)[source]
Bases:
Enum
An enumeration.
- MUTATE = 0
- MUTATE_AND_EVALUATE = 1
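The two modes differ in whether the mutation operator returns a fitness along with the mutant. A standalone sketch of how a local-search step might branch on the mode (the mutate/mutate_and_evaluate stand-ins below are hypothetical, not the library's Mutation API):

```python
from enum import Enum

class ModeMutation(Enum):
    MUTATE = 0
    MUTATE_AND_EVALUATE = 1

# Illustrative stand-ins for a mutation operator and an objective.
def mutate(x):
    return x + 1

def mutate_and_evaluate(x):
    y = x + 1
    return y, float(y * y)

def step(x, mode, evaluate=lambda v: float(v * v)):
    """One local-search step: either mutate then evaluate separately,
    or let the operator do both at once."""
    if mode == ModeMutation.MUTATE:
        y = mutate(x)
        return y, evaluate(y)
    return mutate_and_evaluate(x)
```

MUTATE_AND_EVALUATE is useful when the operator can compute the new fitness incrementally, cheaper than a full re-evaluation.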
- class discrete_optimization.generic_tools.ls.local_search.RestartHandler[source]
Bases:
object
- best_fitness: float | TupleFitness
- restart(cur_solution: Solution, cur_objective: float | TupleFitness) Tuple[Solution, float | TupleFitness] [source]
- update(nv: Solution, fitness: float | TupleFitness, improved_global: bool, improved_local: bool) None [source]
- class discrete_optimization.generic_tools.ls.local_search.RestartHandlerLimit(nb_iteration_no_improvement: int)[source]
Bases:
RestartHandler
- restart(cur_solution: Solution, cur_objective: float | TupleFitness) Tuple[Solution, float | TupleFitness] [source]
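RestartHandlerLimit restarts the search from the best-known solution once nb_iteration_no_improvement consecutive updates bring no improvement. A standalone sketch of that counter logic (class and attribute names here are illustrative; only the update/restart interface mirrors the signatures above):

```python
class RestartAfterNoImprovement:
    """Sketch: restart from the best solution after N non-improving updates."""

    def __init__(self, nb_iteration_no_improvement):
        self.limit = nb_iteration_no_improvement
        self.counter = 0
        self.best_solution = None
        self.best_fitness = float("-inf")

    def update(self, solution, fitness, improved_global):
        if improved_global:
            self.best_solution, self.best_fitness = solution, fitness
            self.counter = 0  # reset the stagnation counter
        else:
            self.counter += 1

    def restart(self, cur_solution, cur_fitness):
        if self.counter >= self.limit:
            self.counter = 0
            return self.best_solution, self.best_fitness  # jump back to best
        return cur_solution, cur_fitness  # keep going from where we are
```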
- class discrete_optimization.generic_tools.ls.local_search.ResultLS(result_storage: ResultStorage, best_solution: Solution, best_objective: float | TupleFitness)[source]
Bases:
ResultStorage
discrete_optimization.generic_tools.ls.simulated_annealing module
- class discrete_optimization.generic_tools.ls.simulated_annealing.SimulatedAnnealing(problem: Problem, mutator: Mutation, restart_handler: RestartHandler, temperature_handler: TemperatureScheduling, mode_mutation: ModeMutation, params_objective_function: ParamsObjectiveFunction | None = None, store_solution: bool = False)[source]
Bases:
SolverDO
- aggreg_from_dict: Callable[[Dict[str, float]], float]
- solve(initial_variable: Solution, nb_iteration_max: int, callbacks: List[Callback] | None = None, **kwargs: Any) ResultStorage [source]
Generic solving function.
- Parameters:
callbacks – list of callbacks used to hook into the various stages of the solve
**kwargs – any argument specific to the solver
Solvers deriving from SolverDO should call the callback methods .on_step_end(), … during solve(). Some solvers are not yet updated and simply ignore them.
Returns (ResultStorage): a result object potentially containing a pool of solutions to a discrete-optimization problem
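What distinguishes simulated annealing from hill climbing is its acceptance rule: worsening moves are accepted with probability exp(-Δ/T), so the search can escape local optima while the temperature is high. A minimal sketch of that Metropolis rule for a minimization objective (illustrative, not the library's implementation):

```python
import math
import random

def accept(delta, temperature, rng):
    """Metropolis rule (minimization): always accept improvements;
    accept a worsening move of size delta with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

At very high temperature nearly every move is accepted (random walk); as the TemperatureScheduling object cools T toward zero, the rule degenerates into pure hill climbing.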
- class discrete_optimization.generic_tools.ls.simulated_annealing.TemperatureScheduling[source]
Bases:
object
- nb_iteration: int
- restart_handler: RestartHandler
- temperature: float
- class discrete_optimization.generic_tools.ls.simulated_annealing.TemperatureSchedulingFactor(temperature: float, restart_handler: RestartHandler, coefficient: float = 0.99)[source]
Bases:
TemperatureScheduling
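Judging by its coefficient parameter (default 0.99), TemperatureSchedulingFactor presumably implements a geometric cooling schedule: the temperature is multiplied by a constant factor at each iteration. A sketch of that schedule:

```python
def geometric_schedule(t0, coefficient, nb_steps):
    """Yield the temperature at each step: t_k = t0 * coefficient ** k."""
    t = t0
    for _ in range(nb_steps):
        yield t
        t *= coefficient

temps = list(geometric_schedule(100.0, 0.99, 3))
```

Geometric cooling never actually reaches zero; in practice the loop stops after nb_iteration_max steps or when a RestartHandler intervenes.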