This page displays Orion scaling group information.
The Scaling Groups subpage details the compute scaling groups, i.e. EC2 Auto Scaling Groups (ASGs), available to Orion. The two tables below cover spot and on-demand (non-spot) EC2 instances, respectively.
| Column | Description |
|---|---|
| Type | EC2 instance type and specification. |
| Status | Indicates the state of an ASG. |
| Size | Instance counts the ASG policy adjusts: Min, Desired, Healthy, and Max. |
| State | Shows whether the ASG is Active or Deactivated. |
| Usage | Amount of resources currently provided by the ASG. |
| Cost/Hour | Hourly instance cost; for spot instances this updates regularly. |
| Pool | Scheduler feature used to segregate tasks into separate scaling groups. |
| Edit | Allows an Orion Stack admin to manage an ASG. Available options are Min. Size, Max. Size, Min Reserve, Affinity, and State. |
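The Edit options map naturally onto a small configuration record. The sketch below only illustrates those fields together with some plausible sanity checks; it is not the Orion API, and the class name, field semantics, and validation rules are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ScalingGroupConfig:
    """Hypothetical record mirroring the Edit options above; not the Orion API."""
    min_size: int     # smallest number of instances the ASG may hold
    max_size: int     # largest number of instances the ASG may hold
    min_reserve: int  # instances assumed to be kept warm even when idle
    affinity: str     # label assumed to steer matching tasks to this group
    state: str        # "Active" or "Deactivated"

    def validate(self) -> None:
        if not (0 <= self.min_size <= self.max_size):
            raise ValueError("require 0 <= min_size <= max_size")
        if not (0 <= self.min_reserve <= self.max_size):
            raise ValueError("min_reserve must not exceed max_size")
        if self.state not in ("Active", "Deactivated"):
            raise ValueError("state must be 'Active' or 'Deactivated'")

# Example: a small GPU-affine group capped at ten instances.
ScalingGroupConfig(min_size=0, max_size=10, min_reserve=1,
                   affinity="gpu", state="Active").validate()
```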
As tasks are submitted to Orion, the scheduler decides where to place the work based on the hardware and spot requirements of the Cubes, as well as other factors such as pool or affinity. As the workload grows, more instances are launched: this appears first as an increase in the desired instance count, and soon thereafter the healthy instance count should catch up to match it. Neither the desired nor the healthy count is allowed to exceed the ASG's maximum size.
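The scale-up rule can be summarized as "grow toward the workload, clamped at the maximum size". The following is a minimal sketch of that arithmetic, not Orion's scheduler; the function and its parameters are invented for illustration.

```python
import math

def desired_count(queued_tasks: int, tasks_per_instance: int,
                  current_healthy: int, max_size: int) -> int:
    """Grow the desired count to cover queued work, but never past max_size.
    The healthy count then catches up as new instances boot."""
    needed = math.ceil(queued_tasks / tasks_per_instance)
    # Neither desired nor healthy may exceed the ASG's maximum size.
    return min(max(needed, current_healthy), max_size)

# Example: 25 queued tasks at 2 per instance need 13 instances,
# but an ASG maximum of 10 caps the desired count.
print(desired_count(25, 2, 4, 10))  # -> 10
```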
Once work is complete and Orion starts to scale down, the desired count drops far more quickly than the healthy count. This happens for two reasons: (1) the remaining instances are likely still working on their current tasks, and (2) Orion does not terminate instances immediately after they complete work, because startup time is significant (several minutes, depending on instance type and pricing model); instead they remain as hot instances available for new work.
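The retention of hot instances can be pictured as an idle-timeout rule. This is a sketch under assumptions: the grace period, names, and logic are invented here, and Orion's actual retention policy is not specified in this section.

```python
import time

# Hypothetical warm-pool window; the real retention period is not documented here.
HOT_INSTANCE_GRACE_SECONDS = 10 * 60

def should_terminate(busy: bool, idle_since: float, now: float) -> bool:
    """Keep an instance while it is working, and keep it hot for a grace
    period after it finishes, because startup costs several minutes."""
    if busy:
        return False  # still working on its current task
    return (now - idle_since) > HOT_INSTANCE_GRACE_SECONDS

# Example: an instance that finished 3 minutes ago stays hot.
print(should_terminate(False, idle_since=time.time() - 180, now=time.time()))  # False
```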
If the desired count stays higher than the healthy count for a long period, this likely means that either the current spot price has exceeded the bid or there is little or no spot availability (typically the latter).
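A monitoring check for this condition might alert when the desired count exceeds the healthy count across a sustained window of samples. Below is a sketch with invented names and thresholds; Orion does not expose this check as described here.

```python
def spot_shortfall(samples: list[tuple[int, int]], window: int = 12) -> bool:
    """samples: recent (desired, healthy) readings, oldest first.
    Returns True if desired has exceeded healthy for the whole window,
    suggesting the spot bid is outbid or spot capacity is unavailable."""
    recent = samples[-window:]
    return len(recent) == window and all(d > h for d, h in recent)

# Example: 12 consecutive readings of desired=10, healthy=6 flag a shortfall.
print(spot_shortfall([(10, 6)] * 12))  # True
```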
Data conversion Floes and special tasks such as Iterative Design can be performed in the system pool, whereas regular jobs use the default pool.
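In code terms, pool selection amounts to routing by task category. The sketch below only illustrates the system/default split described above; the task-type strings and the function itself are hypothetical.

```python
# Task categories drawn from this section; the string labels are invented.
SYSTEM_POOL_TASKS = {"data_conversion", "iterative_design"}

def pool_for(task_type: str) -> str:
    """Route data-conversion Floes and special tasks to the system pool;
    regular jobs run in the default pool."""
    return "system" if task_type in SYSTEM_POOL_TASKS else "default"

print(pool_for("data_conversion"))  # system
print(pool_for("md_simulation"))    # default
```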