Learner
__training_step(engine, batch, model, optimizer, criterion, parameters)
Performs the actual training step for a single batch; it is called by the training engine. Without PyTorch Ignite, this code would have to be wrapped in an explicit training loop over epochs and batches; with Ignite, the engine handles that loop. A minimal sketch of such a step function is shown below the tables.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
engine | ignite.engine.Engine | The engine that is calling this method. | required |
batch | NamedTuple | The batch that is passed to the engine for training. | required |
model | Autoembedder | The model to be trained. | required |
optimizer | torch.optim.Optimizer | The optimizer to be used for training. | required |
parameters | Dict[str, Any] | The parameters of the training process. | required |
criterion | torch.nn.MSELoss | The loss function to be used for training. | required |
Returns:

Type | Description |
---|---|
Union[np.float32, np.float64] | The loss of the current batch. |
Source code in src/autoembedder/learner.py
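
Below is a minimal sketch of what such an Ignite step function typically looks like. It is illustrative only: the helper name make_trainer, the assumption that the batch is a (features, ...) tuple, and the reconstruction-style loss against the input are not taken from the package's actual implementation.

```python
import torch
from ignite.engine import Engine


def make_trainer(model, optimizer, criterion, device="cpu"):
    """Build an Ignite Engine around a per-batch training step (illustrative)."""

    def training_step(engine, batch):
        model.train()
        inputs = batch[0].to(device)       # assumption: batch is a (features, ...) tuple
        optimizer.zero_grad()
        outputs = model(inputs)            # forward pass
        loss = criterion(outputs, inputs)  # reconstruction loss against the input
        loss.backward()                    # backpropagation
        optimizer.step()                   # parameter update
        return loss.item()                 # stored by Ignite as engine.state.output

    return Engine(training_step)
```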
__validation_step(engine, batch, model, criterion, parameters)
Performs the validation step for a single batch; it is called by the validation engine.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
engine | ignite.engine.Engine | The engine that is calling this method. | required |
batch | NamedTuple | The batch that is passed to the engine for validation. | required |
model | Autoembedder | The model used for validation. | required |
criterion | torch.nn.MSELoss | The loss function to be used for validation. | required |
parameters | Dict[str, Any] | The parameters of the validation process. | required |
Returns:

Type | Description |
---|---|
Union[np.float32, np.float64] | The loss of the current batch. |
Source code in src/autoembedder/learner.py
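
For comparison, a hedged sketch of the corresponding validation step: the same wiring as the training step, but in eval mode and without gradient tracking. Again, make_validator and the batch layout are assumptions, not the package's actual code.

```python
import torch
from ignite.engine import Engine


def make_validator(model, criterion, device="cpu"):
    """Build an Ignite Engine around a per-batch validation step (illustrative)."""

    def validation_step(engine, batch):
        model.eval()
        with torch.no_grad():              # no gradients during validation
            inputs = batch[0].to(device)   # assumption: batch is a (features, ...) tuple
            outputs = model(inputs)
            loss = criterion(outputs, inputs)
        return loss.item()

    return Engine(validation_step)
```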
fit(parameters, model, train_dataloader, test_dataloader, eval_df=None)
This method is the general wrapper around the fitting process. It prepares the optimizer, the loss function, the trainer, the validator, and the evaluator, attaches everything to the corresponding engines, and runs the training. A rough sketch of this wiring is shown after the tables below.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
parameters | Dict[str, Any] | The parameters of the training process. All possible parameters are listed in the documentation. | required |
model | Autoembedder | The model to be trained. | required |
train_dataloader | torch.utils.data.DataLoader | The dataloader for the training data. | required |
test_dataloader | torch.utils.data.DataLoader | The dataloader for the test data. | required |
eval_df | Optional[Union[dd.DataFrame, pd.DataFrame]] | Dask or Pandas DataFrame for the evaluation step. If the path to the evaluation data is given in the parameters (…) | None |
Returns:

Name | Type | Description |
---|---|---|
Autoembedder | Autoembedder | Trained Autoembedder model. |
Source code in src/autoembedder/learner.py
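
The wiring described above could look roughly like the following sketch. It reuses the illustrative make_trainer/make_validator helpers from the sketches above; the parameter keys ("lr", "epochs") and the choice of Adam are assumptions, not necessarily what the package uses.

```python
import torch
from ignite.engine import Events


def fit_sketch(parameters, model, train_dataloader, test_dataloader):
    """Illustrative wiring of optimizer, criterion, trainer, and validator."""
    optimizer = torch.optim.Adam(model.parameters(), lr=parameters.get("lr", 1e-3))
    criterion = torch.nn.MSELoss()

    trainer = make_trainer(model, optimizer, criterion)   # sketch from __training_step
    validator = make_validator(model, criterion)          # sketch from __validation_step

    @trainer.on(Events.EPOCH_COMPLETED)
    def _run_validation(engine):
        validator.run(test_dataloader)                    # validate after every epoch

    trainer.run(train_dataloader, max_epochs=parameters.get("epochs", 10))
    return model
```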