This page displays Orion system information.
The Scaling Groups subpage details the various compute scaling groups available to Orion.
|Column|Description|
|---|---|
|Group Name|Internal/AWS group name|
|Instance|EC2 instance type and specification|
|Usage|Amount of resource currently provided by the group|
|Affinity|Scheduler feature to increase the preference of a group (the default is 0)|
|Cost|Hourly instance cost (updated regularly for spot instances)|
|Healthy|Number of instances currently available to do process work|
|Desired Instances|Number of instances the scheduler would like to have as healthy|
|Min. Size|Minimum size of the group (the default is 0)|
|Max. Size|Maximum size of the group (a useful value for limiting spend in Orion)|
|Spot|Spot or on-demand instance pricing|
|Pool|Scheduler feature used to segregate tasks. In a future release, data conversion floes and synchronous tasks (such as Iterative Design) will be performed in a system pool rather than the default pool.|
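To make the placement factors concrete, here is a minimal, hypothetical Python sketch of how a scheduler might filter groups by a task's hardware and spot requirements and then rank the survivors by affinity. The `ScalingGroup` class and `eligible_groups` function are illustrative names, not part of Orion's API.

```python
from dataclasses import dataclass


@dataclass
class ScalingGroup:
    """Illustrative model of one row in the Scaling Groups table."""
    name: str        # Group Name
    cpus: int        # from the Instance specification
    memory_gb: int   # from the Instance specification
    gpu: bool
    spot: bool       # Spot vs. on-demand pricing
    affinity: int    # higher values are preferred; default is 0


def eligible_groups(groups, *, cpus, memory_gb, gpu=False, spot_ok=True):
    """Filter groups that satisfy a task's hardware and spot requirements,
    then order them by affinity, highest preference first."""
    matches = [
        g for g in groups
        if g.cpus >= cpus
        and g.memory_gb >= memory_gb
        and (not gpu or g.gpu)
        and (spot_ok or not g.spot)
    ]
    return sorted(matches, key=lambda g: g.affinity, reverse=True)
```

With two CPU groups where one has affinity 10 and the other 0, both matching a task's requirements, the affinity-10 group would be preferred.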
As tasks are submitted to Orion, the scheduler decides where to place the work based on the hardware and spot requirements of the cubes, along with other factors such as pool and affinity. As the workload grows, more instances are launched: this first appears as an increased desired instance count, and shortly afterwards the healthy instance count should rise to match it. Neither the desired nor the healthy count is allowed to exceed the group's maximum size.

Once work is complete and Orion starts to scale down, the desired count drops far more quickly than the healthy count, for two reasons: (1) the remaining instances are likely still working on their current tasks, and (2) Orion does not terminate instances immediately after they finish work, because startup time is significant (several minutes, depending on instance type and pricing model), so they remain available as hot instances for new work.
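The clamping of the desired count between the group's minimum and maximum size can be sketched as follows; `desired_count` and its parameters are illustrative, not Orion internals.

```python
def desired_count(pending_tasks, tasks_per_instance, min_size, max_size):
    """Instances the scheduler would like healthy, clamped to group bounds."""
    needed = -(-pending_tasks // tasks_per_instance)  # ceiling division
    return max(min_size, min(needed, max_size))
```

For example, 25 pending tasks at 4 tasks per instance would call for 7 instances, but with a maximum group size of 5 the desired count is capped at 5.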
If the desired count remains higher than the healthy count for a long period, this usually means either that the current spot price has been out-bid or that spot availability is limited or exhausted (typically the latter).
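A simple way to spot this condition from monitoring data is to check whether the desired count has exceeded the healthy count continuously for some threshold period. The sketch below is illustrative; the sample format and threshold are assumptions, not an Orion interface.

```python
def prolonged_deficit(samples, threshold_s=1800):
    """Given (timestamp_s, desired, healthy) samples in time order, return
    True if desired has exceeded healthy continuously for threshold_s."""
    deficit_start = None
    for ts, desired, healthy in samples:
        if desired > healthy:
            if deficit_start is None:
                deficit_start = ts
            if ts - deficit_start >= threshold_s:
                return True
        else:
            deficit_start = None
    return False
```

A sustained deficit flagged by a check like this is a hint to look at spot pricing and availability for the instance type, or to consider an on-demand group.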