Search space for different types of optimizers and schedulers#

Different optimizers have different update rules and behavior, and they may perform better or worse depending on the specific dataset and model architecture. Hence, trying out different optimizers and learning rate schedulers can be a good technique for HPO.

  • To work with different optimizers in Ablator, we need to create a custom optimizer config that can construct either torch-defined or custom optimizers and hand them to Ablator.

  • The same applies to schedulers.

For different optimizers#

The make_optimizer function creates an optimizer object based on inputs from the custom config.

This example supports three optimizers: Adam, AdamW, and SGD; however, we can also pass our own custom-defined optimizers.

  • Creates a list of model parameters called parameter_groups.

  • Defines dictionaries with specific parameters for each optimizer.

  • Sets optimizer parameters using the parameter groups, learning rate, and defined dictionaries.

  • Returns the optimizer object.

import torch
import torch.nn as nn
import torch.optim as optim

def make_optimizer(optimizer_name: str, model: nn.Module, lr: float):
    # Collect the model parameters to optimize.
    parameter_groups = [v for k, v in model.named_parameters()]

    # Optimizer-specific hyperparameters.
    adamw_parameters = {
        "betas": (0.0, 0.1),
        "eps": 0.001,
        "weight_decay": 0.1,
    }
    adam_parameters = {
        "betas": (0.0, 0.1),
        "weight_decay": 0.0,
    }
    sgd_parameters = {
        "momentum": 0.9,
        "weight_decay": 0.1,
    }

    if optimizer_name == "adam":
        optimizer = optim.Adam(parameter_groups, lr=lr, **adam_parameters)
    elif optimizer_name == "adamw":
        optimizer = optim.AdamW(parameter_groups, lr=lr, **adamw_parameters)
    elif optimizer_name == "sgd":
        optimizer = optim.SGD(parameter_groups, lr=lr, **sgd_parameters)
    else:
        raise ValueError(f"Unknown optimizer: {optimizer_name}")

    return optimizer
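As a quick sanity check, we can call make_optimizer directly. The toy nn.Linear model below is purely illustrative and not part of the tutorial's model.

# Hypothetical toy model, used only to verify that make_optimizer runs.
toy_model = nn.Linear(10, 2)

toy_optimizer = make_optimizer("sgd", toy_model, lr=0.01)
print(type(toy_optimizer).__name__)  # SGD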

Finally, we can create our own CustomOptimizerConfig.

Since we are creating an optimizer configuration, Ablator requires it to have a make_optimizer method that takes the model as input. We therefore define this method and return the torch optimizer from our previous function.

@configclass
class CustomOptimizerConfig(ConfigBase):
    name: Literal["adam", "adamw", "sgd"] = "adam"
    lr: float = 0.001

    def make_optimizer(self, model: torch.nn.Module):
        return make_optimizer(self.name, model, self.lr)

optimizer_config = CustomOptimizerConfig(name="adam", lr=0.001)
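Since the config's make_optimizer simply forwards to the function above, the name field alone decides which optimizer class is built. A minimal sketch, again using a hypothetical toy model and assuming the config instance behaves like a regular Python object for method calls:

# Hypothetical toy model; Ablator passes the actual model during training.
toy_model = nn.Linear(10, 2)

adam_config = CustomOptimizerConfig(name="adam", lr=0.001)
sgd_config = CustomOptimizerConfig(name="sgd", lr=0.001)

print(type(adam_config.make_optimizer(toy_model)).__name__)  # Adam
print(type(sgd_config.make_optimizer(toy_model)).__name__)   # SGD

This is exactly the property the search space below relies on: changing name swaps the optimizer without touching anything else.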

For different schedulers#

  • The make_scheduler function takes all the parameters and passes them to the respective learning rate scheduler.

  • Returns the torch scheduler object.

import torch.nn as nn
from torch.optim import Optimizer
from torch.optim.lr_scheduler import OneCycleLR, ReduceLROnPlateau, StepLR

def make_scheduler(scheduler_name: str, model: nn.Module, optimizer: Optimizer):
    # `model` is unused here but kept so the signature matches the config's
    # make_scheduler, which Ablator calls with both the model and the optimizer.

    # Scheduler-specific hyperparameters.
    step_parameters = {
        "step_size": 1,
        "gamma": 0.99,
    }
    plateau_parameters = {
        "patience": 10,
        "min_lr": 1e-5,
        "mode": "min",
        "factor": 0.0,
        "threshold": 1e-4,
    }
    cycle_parameters = {
        "max_lr": 1e-3,
        "total_steps": 10,
    }

    if scheduler_name == "step":
        scheduler = StepLR(optimizer, **step_parameters)
    elif scheduler_name == "cycle":
        scheduler = OneCycleLR(optimizer, **cycle_parameters)
    elif scheduler_name == "plateau":
        scheduler = ReduceLROnPlateau(optimizer, **plateau_parameters)
    else:
        raise ValueError(f"Unknown scheduler: {scheduler_name}")

    return scheduler
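A quick sketch of how the function is used, with a hypothetical toy model and optimizer; note that ReduceLROnPlateau expects a monitored metric value when stepping.

# Hypothetical toy model and optimizer, used only to illustrate the call.
toy_model = nn.Linear(10, 2)
toy_optimizer = make_optimizer("sgd", toy_model, lr=0.01)

step_scheduler = make_scheduler("step", toy_model, toy_optimizer)
plateau_scheduler = make_scheduler("plateau", toy_model, toy_optimizer)

# StepLR steps unconditionally; ReduceLROnPlateau steps on a monitored metric:
# step_scheduler.step()
# plateau_scheduler.step(validation_loss)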

Similarly, we create a custom CustomSchedulerConfig and define the required make_scheduler method, which takes the model and optimizer as inputs.

@configclass
class CustomSchedulerConfig(ConfigBase):
    name: Literal["step", "cycle", "plateau"] = "step"

    def make_scheduler(self, model: torch.nn.Module, optimizer: torch.optim.Optimizer):
        return make_scheduler(self.name, model, optimizer)

scheduler_config = CustomSchedulerConfig(name="step")

CustomTrainConfig takes both config objects to define the training configuration.

@configclass
class CustomTrainConfig(TrainConfig):
    optimizer_config: CustomOptimizerConfig
    scheduler_config: CustomSchedulerConfig


train_config = CustomTrainConfig(
    dataset="[Your Dataset]",
    batch_size=32,
    epochs=10,
    optimizer_config=optimizer_config,
    scheduler_config=scheduler_config,
    rand_weights_init=True,
)

Now, we can try out different optimizers and schedulers by providing a search space to Ablator.

search_space_for_optimizers = {
    ...
    "train_config.optimizer_config.name": SearchSpace(categorical_values = ["adam", "sgd", "adamw"]),
    ...
}

search_space_for_schedulers = {
    ...
    "train_config.scheduler_config.name": SearchSpace(categorical_values = ["step", "cycle", "plateau"]),
    ...
}
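The two dictionaries can also be merged so that Ablator searches over optimizers and schedulers jointly; the keys elided with ... above (for example, a learning-rate range) would go into the same dictionary.

search_space = {
    "train_config.optimizer_config.name": SearchSpace(
        categorical_values=["adam", "sgd", "adamw"]
    ),
    "train_config.scheduler_config.name": SearchSpace(
        categorical_values=["step", "cycle", "plateau"]
    ),
}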

Note:

In the default optimizer config, providing the optimizer's name creates an object of the associated optimizer class at configuration time. Simply changing the name in the search space would then produce a mismatch with that class type, causing an error. Hence, we have to define custom configs in this way.

One benefit of this approach is that we can define our own custom optimizers or schedulers as classes and pass them through their respective configs for Ablator to manage during training, as sketched below.
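For example, a custom optimizer only has to follow the torch.optim.Optimizer interface. The SignSGD class below is a hypothetical sketch (not part of Ablator or torch); wiring it in would mean adding a "sign_sgd" branch to make_optimizer and the corresponding value to the config's Literal.

class SignSGD(optim.Optimizer):
    """Hypothetical custom optimizer: steps each weight by the sign of its gradient."""

    def __init__(self, params, lr: float = 1e-3):
        super().__init__(params, defaults={"lr": lr})

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    p.add_(p.grad.sign(), alpha=-group["lr"])


# Hypothetical extra branch inside make_optimizer:
#     elif optimizer_name == "sign_sgd":
#         optimizer = SignSGD(parameter_groups, lr=lr)
# and the config's name field would become
#     Literal["adam", "adamw", "sgd", "sign_sgd"]
# so that "sign_sgd" can appear in the search space like any other optimizer.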

Conclusion#

With this setup, we can now test different optimizers and schedulers for our model.