The Skyrun SDK streamlines the setup and execution of model training jobs after data preparation. With just a data source identifier and a model name, you can initiate a training run. The SDK caters to both beginners and advanced users: initial setup is simple, and extensive customization options are available for optimizing model performance.

Architecture Selection

Select your model's architecture through the recommender.train methods; for collaborative filtering, the SDK offers Variational Autoencoders (VAEs). Collaborative filtering, a core technique in recommendation systems, predicts a user's preferences from the historical behavior of many users. VAEs are particularly effective for this purpose because they capture deep latent representations of the data. To train a VAE, use recommender.vae.train.

Configuring Training Parameters

Training a model with the Skyrun SDK involves specifying various parameters that control the training process. While the only mandatory parameters are the model name and data source identifier, users have the option to customize further aspects of the model's architecture, learning rate, training epochs, and more to optimize performance.

Defining Training Parameters

Below is an example of how to configure and initiate a training job, specifying the required parameters alongside optional advanced configurations:

res = skyrun.recommender.vae.train(
    custom_model_name='my-recommender',  # Required; placeholder value
    data_source_pri='interactions-v1',   # Required; placeholder value
    n_epochs=1,            # Optional
    batch_size=100,        # Optional
    learn_rate=0.001,      # Optional
    beta=1,                # Optional
    verbose=1,             # Optional
    train_prop=0.8,        # Optional
    random_seed=42,        # Optional
    latent_dims=10,        # Optional
    hidden_dims=120,       # Optional
    recall_at_k=100,       # Optional
    eval_iterations=1,     # Optional
    act_fn='tanh',         # Optional
    likelihood='mult',     # Optional
    data_subset_percent=1,  # Optional
)

Below are the parameters you can configure for your training run. Key parameters are marked as important for tuning your model's performance effectively:

custom_model_name (Required)

  • A unique identifier for your custom model. This name is used within your project or system for referencing and managing the model.

data_source_pri (Required)

  • The identifier for the primary data source, corresponding to the dataset or data stream used for training.


n_epochs (Important)

  • Number of complete passes through the entire dataset. Increasing epochs can enhance model accuracy but risks overfitting. Reduce epochs if validation loss increases or stops decreasing while training loss keeps decreasing.
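The "reduce epochs" guidance above can be expressed as a simple check on the validation-loss history. This is an illustrative helper, not part of the SDK (the function name and `patience` parameter are assumptions):

```python
def should_reduce_epochs(val_losses, patience=2):
    """Return True if validation loss has stopped improving for `patience` epochs."""
    if len(val_losses) <= patience:
        return False  # not enough history to judge
    best_earlier = min(val_losses[:-patience])
    # If none of the last `patience` epochs beat the earlier best,
    # consider reducing n_epochs on the next run.
    return min(val_losses[-patience:]) >= best_earlier
```

With `val_losses = [1.0, 0.8, 0.7, 0.75, 0.9]`, for example, the last two epochs never beat the earlier best of 0.7, so the check fires.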


learn_rate (Important)

  • Controls how quickly the model learns. A higher rate leads to faster changes but risks overshooting the best solution. A lower rate results in slower learning but may find better results. Finding the right learning rate often requires some experimentation.
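For intuition, the learning rate directly scales the size of each parameter update in gradient descent. A plain-Python sketch (not SDK code; the names are illustrative):

```python
def gradient_step(weights, grads, learn_rate=0.001):
    # Each weight moves against its gradient; the learning rate scales the move.
    return [w - learn_rate * g for w, g in zip(weights, grads)]

# A rate of 0.1 moves a weight 100x further than a rate of 0.001
# for the same gradient, which is why high rates can overshoot.
```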


recall_at_k (Important)

  • An evaluation metric that measures how accurately the model identifies relevant items within its top K suggestions. For instance, if you recommend 50 items in a user's feed, setting recall_at_k to 50 tells you how many of the top 50 recommended items are actually relevant to the user. Adjust this number to match how many items you typically recommend in your application, so the metric reflects the model's effectiveness in that scenario.
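To make the metric concrete, here is a plain-Python sketch of one common way recall@k is computed, normalizing by min(k, number of relevant items); the function and its inputs are illustrative, not part of the SDK API:

```python
def recall_at_k(recommended, relevant, k):
    """Fraction of the user's relevant items retrieved in the top-k recommendations."""
    if not relevant:
        return 0.0
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / min(k, len(relevant))

# The model ranked items [3, 1, 7, 9]; the user actually engaged with {1, 9, 5}.
# Two of the three relevant items appear in the top 4, giving recall 2/3.
```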


latent_dims

  • Size of the compressed data representation. Larger dimensions capture more detail but may cause overfitting. Adjust based on model performance and the complexity of your data.


batch_size

  • Number of training examples used in one iteration. A smaller batch size can lead to more detailed learning but increases computation time. Increase the batch size to stabilize training if the loss is highly variable.


beta

  • Controls the balance between how faithfully the model reconstructs input data and how well it captures the overall patterns in the data. Increase for more general representations; decrease for greater focus on detail. (You likely don't need to adjust this.)
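The role of beta is easiest to see in the (simplified) VAE training objective: the loss adds a reconstruction term to a KL regularizer, and beta scales the regularizer. This is a sketch of the idea, not the SDK's internal code:

```python
def vae_loss(reconstruction_error, kl_divergence, beta=1.0):
    # beta > 1 weights the KL term more heavily, favoring smoother, more
    # general latent representations; beta < 1 favors faithful reconstruction.
    return reconstruction_error + beta * kl_divergence
```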


hidden_dims

  • Dimensions of the hidden layers in the network. Increase to capture more complex patterns at the cost of more data and computation; decrease if the model overfits.


verbose

  • Controls the amount of logging information shown during training, useful for monitoring progress.


train_prop

  • Proportion of the data used for training, with the remainder used for validation. This split affects model validation and generalization.
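For intuition, train_prop=0.8 corresponds to a split like the following. This is an illustrative sketch, not the SDK's internal implementation:

```python
import random

def train_val_split(examples, train_prop=0.8, random_seed=42):
    rng = random.Random(random_seed)  # seeding mirrors the random_seed parameter
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_prop)
    return shuffled[:cut], shuffled[cut:]  # (training set, validation set)
```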


random_seed

  • Seed for random number generation, ensuring reproducibility of your training runs. (You likely don't need to adjust this.)


eval_iterations

  • Number of iterations used for model performance evaluation, affecting the reliability of the evaluation metrics.

act_fn (Important to Experiment)

  • Activation function for the model's layers. Options include relu (recommended for most cases), tanh (useful for outputs ranging between -1 and 1), and sigmoid (good for probabilities or when the output is between 0 and 1).


likelihood

  • Defines how the data is assumed to be generated from the latent variables, impacting model assumptions and learning accuracy.


data_subset_percent

  • Allows training on a subset of the available data, useful for quick experiments or when computational resources are limited.

Expected Output:

{'data': {'message': 'Training model. Model endpoint will be accessible once its ready. Please visit PigeonsAI web app for the status.'}}

This message confirms that model training has started and that the model endpoint will become accessible once training completes. Visit the PigeonsAI web app for status updates.
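Since the response is a plain dictionary, the status message can be pulled out directly. A sketch, assuming `res` holds the return value shown above:

```python
# The structure below mirrors the expected output shown above.
res = {'data': {'message': 'Training model. Model endpoint will be accessible '
                           'once its ready. Please visit PigeonsAI web app for the status.'}}

status = res.get('data', {}).get('message', '')
print(status)
```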

Reach out to us if you have any questions

You can reach us directly on LinkedIn:
